From: steveu on
>On Jun 18, 7:01 am, Rune Allnor <all...(a)tele.ntnu.no> wrote:
>> On 18 Jun, 03:25, HardySpicer <gyansor...(a)gmail.com> wrote:
>>
>> > I heard somewhere that PC GPUs can be used to do, say, FFTs. They are
>> > cheap and very powerful (though not that easy to program).
>>
>> I have seen people come up with this 'brilliant' idea
>> every couple of years for a couple of decades, already.
>> The common factor is that people look exclusively at the
>> number of FLOPS / gates / processing units, and forget that
>> the GPUs are intensely tuned to highly specialized tasks.
>>
>> Which means that it easily takes at least as much work to
>> re-formulate the generic task at hand to fit the special
>> structure of the GPU pipeline (which might not be possible
>> at all), as would be required to do the job with a generic
>> FPU in the first place.
>>
>> Rune
>
>You don't have to know anything about GPU architecture to do GPU
>computing nowadays
>
>Matlab + Jacket will get you started in no time (if you don't mind
>shelling out some bucks)
>
>http://www.accelereyes.com/
>
>The only requirement is that your problem has to be formulated in a SIMD
>fashion (e.g. doing a multidimensional FFT) to see a benefit.

... and the vectors need to be big. A lot of small FFTs, for example, work
out slower on a GPU than on an i7. A lot of big FFTs can work out several
times as fast on the GPU.
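
For anyone who wants to see what "big and SIMD-friendly" means in
practice, here is a minimal CUDA/cuFFT sketch. The 1M-point size and the
use of the batch argument are illustrative assumptions, not a benchmark:

  #include <stdio.h>
  #include <stdlib.h>
  #include <cuda_runtime.h>
  #include <cufft.h>

  int main(void)
  {
      /* One large 1M-point complex transform: the case where a GPU
         tends to win.  For "lots of small FFTs", the usual trick is to
         shrink N and raise 'batch' so one plan and one launch cover all
         the transforms, instead of looping over them one at a time. */
      const int N = 1 << 20;
      const int batch = 1;
      const size_t bytes = sizeof(cufftComplex) * (size_t)N * batch;

      cufftComplex *h = (cufftComplex *)malloc(bytes);
      for (int i = 0; i < N * batch; i++) { h[i].x = (float)i; h[i].y = 0.0f; }

      cufftComplex *d;
      cudaMalloc((void **)&d, bytes);
      cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);

      cufftHandle plan;
      cufftPlan1d(&plan, N, CUFFT_C2C, batch);  /* complex, single precision */
      cufftExecC2C(plan, d, d, CUFFT_FORWARD);  /* in-place forward transform */

      cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);
      printf("bin 0: %g %+gi\n", h[0].x, h[0].y);

      cufftDestroy(plan);
      cudaFree(d);
      free(h);
      return 0;
  }

Build with something like "nvcc fft_demo.cu -lcufft". The two PCIe copies
are a large part of the wall-clock time, which is one reason small
transforms done one at a time lose to the host CPU.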

Steve

From: glen herrmannsfeldt on
fatalist <simfidude(a)gmail.com> wrote:
(snip)

> Why even bother with FPGAs?

> GPUs are much cheaper (funded by millions of hard-core gamers who
> shell out big bucks to NVidia and AMD), and CUDA is a rather well
> standardized and widely adopted programming framework with a future path.

> The only reason to use an FPGA might be reducing latency to the absolute
> minimum. As for data throughput, I suspect a GPU will beat an FPGA hands
> down.

Not so long ago I was figuring out how to do 1e15 six-bit adds
per second using FPGAs. I figured that I could do it with $100,000
worth of FPGAs, which was a little more (though not a lot more) than
the project could support.

I didn't go through the math for GPU, but I believe that 1e15/s
will also take a lot of GPUs.
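
To put a rough number on the GPU side (the per-part figures below are
assumptions, roughly a 2010-era high-end card doing one integer add per
ALU per cycle, not measurements of any particular device):

\[
  480\ \text{ALUs} \times 1.4\ \text{GHz} \times
  1\ \frac{\text{add}}{\text{ALU}\cdot\text{cycle}}
  \approx 6.7 \times 10^{11}\ \text{adds/s per GPU}
\]
\[
  \frac{10^{15}\ \text{adds/s}}{6.7 \times 10^{11}\ \text{adds/s per GPU}}
  \approx 1500\ \text{GPUs}
\]

So it is indeed a lot of GPUs, before even counting the host systems to
put them in.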

> Of course, if your problem cannot be formulated as a SIMD program to run
> the same computational routine on many pieces of data at the same time,
> there is no benefit in using massively-parallel GPUs at all.

Especially single-precision floating point. Small fixed-point
arithmetic works very well with FPGA logic. The barrel shifter required
to normalize floating-point data does not fit well in most
FPGA families.
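
For anyone wondering what that shifter is doing, here is the
normalization step as a small C sketch (just the shift; sign, rounding
and exponent limits left out). The shift distance is data dependent, and
that variable shift is what has to be built as a priority encoder
feeding a wide barrel shifter in an FPGA float datapath; a fixed-point
datapath skips it entirely:

  /* Left-justify the mantissa after a float add/subtract and adjust
     the exponent to match.  In hardware the loop becomes a
     leading-zero count plus a barrel shifter. */
  static void normalize(unsigned int *mant, int *exponent)
  {
      if (*mant == 0)
          return;                           /* true zero: leave as-is */
      while ((*mant & 0x80000000u) == 0) {  /* until the MSB is a 1 */
          *mant <<= 1;                      /* one stage of the shift */
          (*exponent)--;
      }
  }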

Funny, though, as you say it is the gamers buying the GPUs,
and games pretty much only need single precision. There are
some considering doing double precision in GPUs specifically
for GPU-based scientific computing.

-- glen
From: HardySpicer on
On Jun 19, 1:08 am, Vladimir Vassilevsky <nos...(a)nowhere.com> wrote:
> HardySpicer wrote:
> > On Jun 18, 4:02 pm, Vladimir Vassilevsky <nos...(a)nowhere.com> wrote:
>
> >>HardySpicer wrote:
>
> >>>I heard somewhere that PC GPUs can be used to do, say, FFTs. They are
> >>>cheap and very powerful (though not that easy to program). You can get
> >>>up to 1000 processors on a GPU so it could have all manner of
> >>>applications. However, the I/O would slow things down, I expect, unless
> >>>the CPU and GPU were on the same chip (let's say). Has anybody linked
> >>>GPUs with FPGA I/O?
>
> >>Hardy, can you do anything other than babbling nonsense? If you can,
> >>download a library for ATI or NVIDIA, compile it and see for yourself.
>
> > That wasn't the question. Clearly English is not your first language
> > so I understand your confusion.
> > My question was: has anybody interfaced their own FPGA board with a
> > GPU so that I/O can be speeded up?
> > Don't bother answering, Vlad, if you just want to flame.
>
> Hardy, what do you know about FFT, GPU, FPGA? Do you at least
> understand the difference between them? Have you ever made anything
> practical, or at least can you write a "hello world" program?
> Why don't you try doing anything yourself, instead of casting utter
> nonsense?
>
> VLV

I don't have to apologise or justify myself to an offensive vampire.
Now go and lap up your oil-spill or some such.


Hardy
From: Vladimir Vassilevsky on


HardySpicer wrote:
> On Jun 19, 1:08 am, Vladimir Vassilevsky <nos...(a)nowhere.com> wrote:
>(snip)
>
> I don't have to apologise or justify myself to an offensive vampire.
> Now go and lap up your oil-spill or some such.

Too bad, Hardy. You can't write a "hello world" program, you watch too
much TV and have too little imagination. What else can you not do?

VLV

From: steveu on
>glen herrmannsfeldt wrote:
>(snip)
>
>Funny, though, as you say it is the gamers buying the GPUs,
>and games pretty much only need single precision. There are
>some considering doing double precision in GPUs specifically
>for GPU-based scientific computing.

The latest devices from ATI and nVidia do double precision, and the Fermi
devices from nVidia are seriously trying to attack high performance
computing (though nVidia seem to be badly screwing up on their execution
right now).
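
If anyone wants to try the double-precision path, the change from the
single-precision cuFFT case shown earlier in the thread is just the plan
type and the exec call. A minimal fragment, illustrative only, and keep
in mind that consumer boards run doubles at a fraction of their
single-precision rate:

  #include <cufft.h>

  /* Double-precision complex-to-complex forward FFT, in place.
     Same structure as the single-precision (C2C) case; only the
     types and the exec call change. */
  void forward_fft_double(cufftDoubleComplex *d_data, int n)
  {
      cufftHandle plan;
      cufftPlan1d(&plan, n, CUFFT_Z2Z, 1);
      cufftExecZ2Z(plan, d_data, d_data, CUFFT_FORWARD);
      cufftDestroy(plan);
  }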

Steve