From: fatalist on
On Jun 19, 12:15 am, "steveu" <steveu(a)n_o_s_p_a_m.coppice.org> wrote:
> >fatalist <simfid...(a)gmail.com> wrote:
> >(snip)
>
> >> Why even bother with FPGAs ?
>
> >> GPUs are much cheaper (funded by millions of hard-core gamers who
> >> shell out big bucks to NVidia and AMD) and CUDA is rather well
> >> standardized and adopted programming framework with future path
>
> >> The only reason to use FPGA might be reducing latency to absolute
> >> minimum. As for data throughput I suspect GPU will beat FPGA hands
> >> down
>
> >Not so long ago I was figuring out how to do 1e15 six bit adds
> >per second using FPGAs.  I figured that I could do it with $100,000
> >worth of FPGAs which was a little more (not a lot more) than the
> >project could support.  
>
> >I didn't go through the math for GPU, but I believe that 1e15/s
> >will also take a lot of GPUs.
>
> >> Of course, if your problem cannot be formulated as SIMD program to run
> >> same computational routine on many pieces of data at the same time
> >> there is no benefit in using massively-parallel GPUs at all
>
> >Especially single precision floating point.  Small fixed point
> >works very well with FPGA logic.  The barrel shifter required
> >to normalize floating point data does not fit well in most
> >FPGA families.
>
> >Funny, though, as you say it is the gamers buying the GPUs,
> >and games pretty much only need single precision.  There are
> >some considering doing double precision in GPU specifically
> >for GPU based scientific computing.  
>
> The latest devices from ATI and nVidia do double precision, and the Fermi
> devices from nVidia are seriously trying to attack high performance
> computing (though nVidia seem to be badly screwing up on their execution
> right now).
>
> Steve

Fermi can run double-precision 8 times faster than the previous
generation of NVidia chips (or so they claim).
But it has major heat issues - you can heat your room in the winter
with a couple of those.
From: Impoliticus on
>The only reason to use FPGA might be reducing latency to absolute
>minimum. As for data throughput I suspect GPU will beat FPGA hands
>down

I'm curious about the last statement here about data throughput. In terms
of data in and data out, I'd think an FPGA (at least some of the high end
Xilinx parts) would beat a GPU hands down, although I know next to nothing
about GPUs. My frame of reference is a design using a Virtex 4 part which
is handling 10.6Gbps of data in AND out simultaneously. Can today's GPUs
hit these numbers?
From: glen herrmannsfeldt on
Impoliticus <swiston(a)n_o_s_p_a_m.uiuc.edu> wrote:
>>The only reason to use FPGA might be reducing latency to absolute
>>minimum. As for data throughput I suspect GPU will beat FPGA hands
>>down

> I'm curious about the last statement here about data throughput. In terms
> of data in and data out, I'd think an FPGA (at least some of the high end
> Xilinx parts) would beat a GPU hands down, although I know next to nothing
> about GPUs. My frame of reference is a design using a Virtex 4 part which
> is handling 10.6Gbps of data in AND out simultaneously. Can today's GPUs
> hit these numbers?

With a systolic array you should get good throughput in an FPGA
and get a lot of processing done. I don't think latency is
usually the issue, though.
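To make the dataflow concrete, here is a toy Python model of a one-tap-per-cell
FIR pipeline (my own sketch, not taken from any real FPGA design; a real
systolic array would also pipeline the partial sums rather than summing
combinationally). The point is that once the registers fill, the array
delivers one result per clock no matter how many cells are chained:

```python
def systolic_fir(samples, taps):
    """Toy model of a 1-D FIR array: one multiply-accumulate cell per tap."""
    n = len(taps)
    x_regs = [0.0] * n          # per-cell sample registers
    outputs = []
    for x in samples:
        # Each clock: samples march one cell to the right.
        x_regs = [x] + x_regs[:-1]
        # Every cell forms its product in parallel; the adder chain sums them.
        # (Modelled here as a single sum; hardware would pipeline this too.)
        outputs.append(sum(r * t for r, t in zip(x_regs, taps)))
    return outputs

# Impulse in -> taps out, one output per clock:
print(systolic_fir([1, 0, 0, 0], [1, 2, 3]))   # [1.0, 2.0, 3.0, 0.0]
```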

It seems to me that it depends a lot on what the problem is that
you are trying to solve.

-- glen
From: steveu on
>>The only reason to use FPGA might be reducing latency to absolute
>>minimum. As for data throughput I suspect GPU will beat FPGA hands
>>down
>
>I'm curious about the last statement here about data throughput. In terms
>of data in and data out, I'd think an FPGA (at least some of the high end
>Xilinx parts) would beat a GPU hands down, although I know next to nothing
>about GPUs. My frame of reference is a design using a Virtex 4 part which
>is handling 10.6Gbps of data in AND out simultaneously. Can today's GPUs
>hit these numbers?

PCI-E 2.0 x16 can do 8GBps, but I don't know how much of that the fastest
GPUs can sustain. That is the total of I and O. Back in the AGP days the
data rates used to be highly asymmetric. You could pump huge amounts into a
graphics card, but only get modest amounts back. With PCI-E, things should
be symmetric.
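For reference, that 8GBps figure falls out of the link arithmetic, assuming
PCI-E 2.0's 5 GT/s per lane and 8b/10b line coding (a back-of-the-envelope
sketch, not a measured number):

```python
# PCI-E 2.0 x16 per-direction bandwidth, back of the envelope.
lanes = 16
gt_per_s = 5.0e9            # PCI-E 2.0 raw signalling rate per lane
encoding = 8.0 / 10.0       # 8b/10b line coding overhead
bits_per_s = lanes * gt_per_s * encoding
bytes_per_s = bits_per_s / 8
print(bytes_per_s / 1e9)    # 8.0 (GB/s per direction)

# Compare with the Virtex 4 figure mentioned above: 10.6 Gbps each way.
virtex_bytes_per_s = 10.6e9 / 8
print(bytes_per_s / virtex_bytes_per_s)   # the x16 link has roughly 6x headroom
```

Sustained rates through a real GPU will of course be lower than the raw link
figure.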

Steve

From: Rick Lyons on
On Thu, 17 Jun 2010 18:25:48 -0700 (PDT), HardySpicer
<gyansorova(a)gmail.com> wrote:

>I heard somewhere that PC GPUs can be used to do say FFTs. They are
>cheap and very powerful (though not that easy to program). You can get
>up to 1000 processors on a GPU so it could have all manner of
>applications. However, the I/O would slow things down I expect unless
>the CPU and GPU were on the same chip (let's say). Has anybody linked
>GPUs with FPGA I/O?
>
>
>Hardy

Hello Hardy,
You might take a look at:

http://www.dsprelated.com/blogs-1/nf/Seth_Benton.php

and

http://www.dsprelated.com/blogs-1/nf/Shehrzad_Qureshi.php

Good Luck,
[-Rick-]