From: "Andy "Krazy" Glew" on
Ken Hagan wrote:
> On Wed, 09 Dec 2009 08:47:40 -0000, Torben Ægidius Mogensen
> <torbenm(a)diku.dk> wrote:
>
>> Sure, some low-level optimisations
>> may not apply, but if the new platform is a lot faster than the old,
>> that may not matter. And you can always address the optimisation issue
>> later.
>
> I don't think Andy was talking about poor optimisation. Perhaps these
> libraries have assumed the fairly strong memory ordering model of an
> x86, and in its absence would be chock full of bugs.

Ken is correct to say that memory ordering is harder to port around
than instruction set or word size.

A surprisingly large number of supercomputer customers use libraries and
tools that have some specific x86 knowledge.

For example, folks who use tools like Pin, the binary instrumentation
tool. Although Intel makes Pin available on some non-x86 machines,
where do you think Pin runs best?

Or the Boehm garbage collector for C++. Although it's fairly portable -

http://www.hpl.hp.com/personal/Hans_Boehm/gc/#where says
The collector is not completely portable, but the distribution includes
ports to most standard PC and UNIX/Linux platforms. The collector should
work on Linux, *BSD, recent Windows versions, MacOS X, HP/UX, Solaris,
Tru64, Irix and a few other operating systems. Some ports are more
polished than others.

again, if your platform is "less polished"...

Plus, there are the libraries and tools like Intel's Thread Building
Blocks.

Personally, I prefer not to use libraries that are tied to one processor
architecture, but many people just want to get their job done.

The list goes on.

Like I said, I was surprised at how many supercomputer customers
expressed this x86 orientation. I expected them to care little about x86.



From: Andrew Reilly on
On Wed, 09 Dec 2009 21:25:32 -0800, Andy "Krazy" Glew wrote:

> Like I said, I was surprised at how many supercomputer customers
> expressed this x86 orientation. I expected them to care little about
> x86.

I still expect those who use Cray or NEC vector supers, or any of the
scale-up SGI boxes, or any of the Blue-foo systems to care very little
indeed. The folk who seem to be getting mileage from the CUDA systems
probably only care peripherally. I suspect that it depends on how your
focus group self-selects.

Yes there are some big-iron x86 systems now, but they haven't even been a
majority on the top500 for very long.

I suppose that it doesn't take too long for bit-rot to set in, if the
popular crowd goes in a different direction.

Cheers,

--
Andrew
From: Terje Mathisen on
Robert Myers wrote:
> Nvidia stock has drooped a bit after the *big* bounce it took on the
> Larrabee announcement, but I'm not sure why everyone is so negative on
> Nvidia (especially Andy). They don't appear to be in much more
> parlous a position than anyone else. If Fermi is a real product, even
> if only at a ruinous price, there will be buyers.

I have seen a report by a seismic processing software firm, indicating
that their first experiments with GPGPU programming had gone very well:

After 8 rounds of optimization, which basically consisted of mapping
their problem (acoustic wave propagation, according to Kirchhoff) onto
the actual capabilities of a GPU card, they went from being a little
slower than the host CPU to nearly two orders of magnitude faster.

This meant that Amdahl's law started rearing its ugly head: the setup
overhead took longer than the actual processing, so now they are working
on moving at least some of that surrounding code onto the GPU as well.

Anyway, with something like 40-100x speedups, oil companies will be
willing to spend at least $1000+ per chip.

However, I'm guessing that the global seismic-processing market amounts
to no more than 100 TOP500-class clusters, so this is 100K to 1M chips
even if everyone scrapped their current setups.

Terje
--
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"
From: Terje Mathisen on
Robert Myers wrote:
> On Dec 9, 11:12 pm, "Andy "Krazy" Glew"<ag-n...(a)patten-glew.net>
> wrote:
>
>> And Nvidia needs to get out of the discrete graphics board market niche
>> as soon as possible. If they can do so, I bet on Nvidia.
>
> Cringely thinks, well, the link says it all:
>
> http://www.cringely.com/2009/12/intel-will-buy-nvidia/

A rumor which has re-surfaced at least every year for as long as I can
remember, gaining strength since the AMD/ATI deal was announced.

Yes, it could well happen; Intel does have some spare change lying
around in the couch cushions. :-)

Terje

--
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"
From: Torben Ægidius Mogensen on
"Andy "Krazy" Glew" <ag-news(a)patten-glew.net> writes:


> I think that Nvidia absolutely has to have a CPU to have a chance of
> competing. One measly ARM chip or Power PC on an Nvidia die.

They do have Tegra, which is an ARM11 core on a chip with a graphics
processor (alas, not CUDA compatible) plus some other stuff. Adding one
or more ARM cores to a Fermi would not be that far a step. It would
require porting CUDA to ARM, though.

> Or, heck, a reasonably efficient way of decoupling one of Nvidia's
> processors and running 1 thread, non-SIMT, of scalar code.

The Nvidia processors lack interrupts and other features necessary for
running an OS, so a separate processor is probably the better option.

> isn't Intel using PowerVR in some Atom chips?

I know ARM uses PowerVR, but I hadn't heard of Intel doing so.

Torben