From: "Andy "Krazy" Glew" on
Michael S wrote:
> First, 95% of the people can't do proper SIMD+multicore on host CPU to
> save their lives

Right. If only 5% (probably less) of people can't do SIMD+multicore on
host CPU, but 10% can do it on a coherent threaded microarchitecture,
which is better?



> On Dec 11, 7:23 am, "Andy \"Krazy\" Glew" <ag-n...(a)patten-glew.net> wrote:
>> And, frankly, it is easier to tune your code to get good utilization on
>> a GPU. Yes, easier. Try it yourself. Not for really ugly code, but
>> for simple codes, yes, CUDA is easier. In my experience. And I'm a
>> fairly good x86 programmer, and a novice CUDA GPU programmer. I look
>> forward to Terje reporting his experience tuning code for CUDA (as long
>> as he isn't tuning wc).
>
> I'd guess you played with microbenchmarks.

Yep, you're right.

But even on the simplest microbenchmark, DAXPY, I needed to spend less
time tuning it on CUDA than I did tuning it on x86.
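
For concreteness, the CUDA side of that comparison is essentially the
kernel below (a sketch from memory; the launch configuration shown is an
illustrative guess, not the value I actually tuned to):

  __global__ void daxpy(int n, double a, const double *x, double *y)
  {
      int i = blockIdx.x * blockDim.x + threadIdx.x;   // one element per thread
      if (i < n)
          y[i] = a * x[i] + y[i];
  }

  // Cover all n elements; 256 threads per block is a placeholder choice.
  //   daxpy<<<(n + 255) / 256, 256>>>(n, a, d_x, d_y);

Most of the tuning there is picking a block size and keeping the arrays
resident on the device, versus vectorizing and threading the same loop
across cores on the host.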

Now, there are some big real world apps where coherent threading falls
off a cliff. Where SIMT just doesn't work.
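
To make the cliff concrete, here is a made-up toy (not any particular
app): SIMT hardware runs the threads of a warp in lockstep, so when
threads disagree on a data-dependent branch the warp executes both paths
serially, and with enough of that the machine degrades toward scalar
speed.

  __device__ float path_a(float v)
  {
      for (int k = 0; k < 256; ++k)       // long dependent chain
          v = v * 1.0001f + 1.0f;
      return v;
  }

  __device__ float path_b(float v)
  {
      return sqrtf(fabsf(v)) + 2.0f;
  }

  __global__ void irregular(const int *kind, float *data, int n)
  {
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i >= n) return;
      // Threads of a warp that disagree on kind[i] run path_a and then
      // path_b, one after the other, instead of in parallel.
      if (kind[i] == 0)
          data[i] = path_a(data[i]);
      else
          data[i] = path_b(data[i]);
  }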

But if GPGPU needs less work for simple stuff, and comparable work for
hard stuff, and if the places where it falls off a cliff are no more
common than where MIMD CPU falls off a cliff...

If this were Willamette versus CUDA, there would not even be a question.
CUDA is easier to tune than Wmt. Nhm is easier to tune than Wmt, but it
still has a fairly complex microarchitecture, with lots of features that
get in the way. Sometimes simpler is better.
From: "Andy "Krazy" Glew" on
Andy "Krazy" Glew wrote:
> Michael S wrote:
>> First, 95% of the people can't do proper SIMD+multicore on host CPU to
>> save their lives
>
> Right. If only 5% (probably less) of people can't do SIMD+multicore on
> host CPU, but 10% can do it on a coherent threaded microarchitecture,
> which is better?

Urg. Language. Poor typing skills.

If only 5% can do good SIMD+multicore tuning on a host CPU,
but 10% can do it on a coherent threaded GPU-style microarchitecture,
which is better?
From: "Andy "Krazy" Glew" on
Robert Myers wrote:
> On Dec 9, 11:12 pm, "Andy \"Krazy\" Glew" <ag-n...(a)patten-glew.net>
> wrote:
>
>> And Nvidia needs to get out of the discrete graphics board market niche
>> as soon as possible. If they can do so, I bet on Nvidia.
>
> Cringely thinks, well, the link says it all:
>
> http://www.cringely.com/2009/12/intel-will-buy-nvidia/

Let's have some fun. Not gossip, but complete speculation. Let's think
about what companies might have a business interest in or be capable of
buying Nvidia. Add to this one extra consideration: Jen-Hsun Huang is
rumored to have wanted the CEO position in such merger possibilities in
the past:

http://www.tomsguide.com/us/nvidia-amd-acquisition,news-594.html

The list:

---> Intel + Nvidia:
I almost hope not, but Cringely has described the possibility.
However, Jen-Hsun would be unlikely to get CEO. Would he be happy with
being in charge of all graphics operations at Intel?
PRO: Intel is in both CA and OR, with 2 big Oregon sites. CON: Nvidia is
in CA, which is being deprecated by all cost-sensitive companies.
CON: Retention. I suspect that not only would many Larrabee and Intel
integrated GPU guys leave in such a merger, but also many Nvidia guys
would too. For many people, Nvidia's biggest advantage is that it is not
Intel or AMD.

---> AMD + Nvidia:
I know, AMD already has ATI. But this crops up from time to time. I
think that it is unlikely now, but possible if either makes a misstep and
shrinks market-cap-wise.

http://www.tomsguide.com/us/nvidia-amd-acquisition,news-594.html, 2008.

---> IBM + Nvidia:
Also crops up. Maybe marginally more likely than AMD. Perhaps more
likely now that Cell is deprecated. But IMHO unlikely that IBM wants to
be in consumer. Most likely if IBM went for HPC/servers/Tesla.

http://www.tomsguide.com/us/nvidia-amd-acquisition,news-594.html, 2008.

---> Apple + Nvidia:

Now, this is interesting. But Apple has been burned by Nvidia before.

---> Oracle/Sun + Nvidia:

Long shot. Does Larry really want to risk that much money seeking
world domination, over and above Sun?

---> Samsung + Nvidia:

I keep coming around to this being the most likely, although cultural
differences seem to suggest not. Very different company styles.

---> ARM + Nvidia:

??? Actually, ARM could not buy Nvidia; it would have to be some other
sort of deal. But the combination would be interesting. Nvidia's
market would probably be cratered by Intel in the short term, but that
might happen anyway.

---> Some unknown Chinese or Taiwanese PC maker + Nvidia ...: ???

---> Micron + Nvidia:

Finance-challenged, but might have interesting potential.


OK, I am sipping dregs here. Did I miss anything?

Oh, yes:

---> Cisco + Nvidia:

Already allied in supercomputers. Makes a lot of sense technically, if
you believe in GPGPU for HPC and servers and databases. But would open
Cisco up to Intel counter-attacks.
The more I learn about Cisco routing, the more I believe that a
coherent threaded GPU-style machine would be really good. Particularly
if they use some of the dynamic CT techniques I described at my Berkeley
ParLab talk in August of this year.
From: nmm1 on
In article <4B22B782.3020600(a)patten-glew.net>,
Andy \"Krazy\" Glew <ag-news(a)patten-glew.net> wrote:
>Andy "Krazy" Glew wrote:
>> Michael S wrote:
>>> First, 95% of the people can't do proper SIMD+multicore on host CPU to
>>> save their lives
>>
>> Right. If only 5% (probably less) of people can't do SIMD+multicore on
>> host CPU, but 10% can do it on a coherent threaded microarchitecture,
>> which is better?
>
>Urg. Language. Poor typing skills.
>
>If only 5% can do good SIMD+multicore tuning on a host CPU,
>but 10% can do it on a coherent threaded GPU-style microarchitecture,
>which is better?

The "probably less" is a gross understatement. Make it 0.5%. And
the only reason that rather more can do it on a GPU is that they are
tackling simpler tasks. Put them onto Dirichlet tessellation, and
watch them sweat :-)


Regards,
Nick Maclaren.
From: Del Cecchi on

"Andy "Krazy" Glew" <ag-news(a)patten-glew.net> wrote in message
news:4B22BE97.20600(a)patten-glew.net...
> Robert Myers wrote:
>> On Dec 9, 11:12 pm, "Andy \"Krazy\" Glew" <ag-n...(a)patten-glew.net>
>> wrote:
>>
>>> And Nvidia needs to get out of the discrete graphics board market
>>> niche
>>> as soon as possible. If they can do so, I bet on Nvidia.
>>
>> Cringely thinks, well, the link says it all:
>>
>> http://www.cringely.com/2009/12/intel-will-buy-nvidia/
>
> Let's have some fun. Not gossip, but complete speculation. Let's
> think about what companies might have a business interest in or be
> capable of buying Nvidia. Add to this one extra consideration:
> Jen-Hsun Huang is rumored to have wanted the CEO position in such
> merger possibilities in the past:
>
> http://www.tomsguide.com/us/nvidia-amd-acquisition,news-594.html
>
> The list:
>
> ---> Intel + Nvidia:
> I almost hope not, but Cringely has described the possibility.
> However, Jen-Hsun would be unlikely to get CEO. Would he be happy
> with being in charge of all graphics operations at Intel?
> PRO: Intel is in both CA and OR, with 2 big Oregon sites. CON: Nvidia
> is in CA, which is being deprecated by all cost-sensitive companies.
> CON: Retention. I suspect that not only would many Larrabee and
> Intel integrated GPU guys leave in such a merger, but also many
> Nvidia guys would too. For many people, Nvidia's biggest advantage
> is that it is not Intel or AMD.

And where would all the GPU guys go after the merger? In this
economy?
What's in it for Intel?
>
> ---> AMD + Nvidia:
> I know, AMD already has ATI. But this crops up from time to time.
> I think that it is unlikely now, but possible if either makes a
> misstep and shrinks market-cap-wise.
>
> http://www.tomsguide.com/us/nvidia-amd-acquisition,news-594.html,
> 2008.
>
> ---> IBM + Nvidia:
> Also crops up. Maybe marginally more likely than AMD. Perhaps more
> likely now that Cell is deprecated. But IMHO unlikely that IBM
> wants to be in consumer. Most likely if IBM went for
> HPC/servers/Tesla.

IBM is slowly getting out of the hardware business, in general. And
IBM certainly doesn't need Nvidia to do multiprocessors or HPC.
Selling graphics cards for a few hundred bucks to go in PCs is close
to the last thing IBM seems to be interested in.
(snip)

I have been reading stories about what IBM is doing and why for going
on 40 years now, and very, very few have even been close to accurate.

The "raw rumors and random data" from Datamation used to be my
favorite. :-)

del

