From: Paul Wallich on
Robert Myers wrote:
> On Dec 7, 9:51 am, "Andy \"Krazy\" Glew" <ag-n...(a)patten-glew.net>
> wrote:
>> n...(a)cam.ac.uk wrote:
>>> Now, there I beg to disagree. I have never seen anything reliable
>>> indicating that Larrabee has ever been intended for consumers,
>>> EXCEPT as a 'black-box' GPU programmed by 'Intel partners'. And
>>> some of that information came from semi-authoritative sources in
>>> Intel. Do you have a reference to a conflicting statement from
>>> someone in Intel?
>> http://software.intel.com/en-us/blogs/2008/08/11/siggraph-larrabee-an...
>>
>> Just a blog, not official, although of course anything blogged at Intel
>> is semi-blest (believe me, I know the flip side.)
>
> The blog post reminded me. I have assumed, for years, that Intel
> planned on putting many (>>4) x86 cores on a single die. I'm sure I
> can find Intel presentations from the nineties that seem to make that
> clear if I dig hard enough.
>
> From the very beginning, Larrabee seemed to be a technology of destiny
> in search of a mission, and the first, most obvious mission for any
> kind of massive parallelism is graphics. Thus, Intel explaining why
> it would introduce Larrabee at Siggraph always seemed a case of
> offering an explanation where none would have been needed, unless the
> explanation was something they weren't sure they believed themselves (or
> that anyone else would). It just seemed like the least implausible mission
> for hardware that had been designed to a concept rather than to a
> mission. A more plausible claim that they were aiming at HPC probably
> wouldn't have seemed like a very attractive business proposition for a
> company the size of Intel.
>
> Also from the beginning, I wondered if Intel seriously expected to be
> able to compete at the high end with dedicated graphics engines using
> x86 cores. Either there was something about the technology I was
> missing completely, it was just another Intel bluff, or the "x86"
> cores that ultimately appeared on a graphics chip for market would be
> to an x86 as we know it as, say, a ladybug is to a dalmatian.

From an outside perspective, this sounds a lot like the Itanic roadmap:
announce something brilliant and so far out there that your competitors
believe you must have solutions to all the showstoppers up your sleeve.
Major difference being that Larrabee's potential/probable competitors
didn't fold.

paul
From: Robert Myers on
On Dec 9, 3:47 am, torb...(a)diku.dk (Torben Ægidius Mogensen) wrote:

>
> Libraries are, of course, important to supercomputer users.  But if they
> are written in a high-level language and the new CPU uses the same
> representation of floating-point numbers as the old (e.g., IEEE), they
> should compile to the new platform.  Sure, some low-level optimisations
> may not apply, but if the new platform is a lot faster than the old,
> that may not matter.  And you can always address the optimisation issue
> later.
>
But if some clever C programmer or committee of C programmers has made
a convoluted and idiosyncratic change to a definition in a header
file, you may have to unscramble all kinds of stuff hidden under
macros just to get it to compile and link, and that effort can't be
deferred until later.
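
Something like this hypothetical header (the file, macro, and intrinsic
names are invented for illustration) is what I have in mind. The
arithmetic itself is portable IEEE; the trouble is the interface buried
under platform-conditional macros, which stops a port dead at the
#error until someone unscrambles every branch by hand:

/* vendor_math.h -- hypothetical legacy header (all names invented). */
#ifndef VENDOR_MATH_H
#define VENDOR_MATH_H

#if defined(OLDVEC_BOX)
  /* expands straight to the old machine's fused vector intrinsic */
  #define DAXPY(n, a, x, y)  oldvec_daxpy_fused((n), (a), (x), (y))
  typedef long double real8;          /* 128-bit on the old box */
#elif defined(GENERIC_IEEE)
  /* plain C fallback added during some earlier port */
  static inline void daxpy_ref(int n, double a,
                               const double *x, double *y)
  {
      for (int i = 0; i < n; i++)
          y[i] += a * x[i];
  }
  #define DAXPY(n, a, x, y)  daxpy_ref((n), (a), (x), (y))
  typedef double real8;               /* ordinary IEEE double */
#else
  #error "No DAXPY mapping for this platform -- port the macros first"
#endif

#endif /* VENDOR_MATH_H */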

Robert.
From: Robert Myers on
On Dec 9, 1:10 pm, Paul Wallich <p...(a)panix.com> wrote:

>  From an outside perspective, this sounds a lot like the Itanic roadmap:
> announce something brilliant and so far out there that your competitors
> believe you must have solutions to all the showstoppers up your sleeve.
> Major difference being that Larrabee's potential/probable competitors
> didn't fold.

In American football, "A good quarterback can freeze the opposition’s
defensive secondary with a play-action move, a pump fake or even his
eyes."

http://www.dentonrc.com/sharedcontent/dws/drc/opinion/editorials/stories/DRC_Editorial_1123.2e4a496a2.html

where the analogy is used in a political context.

If I were *any* of the players in this game, I'd be studying the
tactics of quarterbacks who need time to find an open receiver, since
*no one* appears to have the right product ready for prime time. If I
were Intel, I'd be nervous, but if I were any of the other players,
I'd be nervous, too.

Nvidia stock has drooped a bit after the *big* bounce it took on the
Larrabee announcement, but I'm not sure why everyone is so negative on
Nvidia (especially Andy). They don't appear to be in much more
parlous a position than anyone else. If Fermi is a real product, even
if only at a ruinous price, there will be buyers.

N.B. I follow the financial markets for information only. I am not an
active investor.

Robert.
From: "Andy "Krazy" Glew" on
Robert Myers wrote:
> Nvidia stock has drooped a bit after the *big* bounce it took on the
> Larrabee announcement, but I'm not sure why everyone is so negative on
> Nvidia (especially Andy). They don't appear to be in much more
> parlous a position than anyone else. If Fermi is a real product, even
> if only at a ruinous price, there will be buyers.

Let me be clear: I'm not negative on Nvidia. I think their GPUs are the
most elegant of the lot. If anything, I am overcompensating: within
Intel, I was probably the biggest advocate of Nvidia-style
microarchitecture, arguing against a lot of guys who came to Intel from
ATI. Also on this newsgroup.

However, I don't think that anyone can deny that Nvidia had some
execution problems recently. For their sake, I hope that they have
overcome them.

Also, AMD/ATI definitely overtook Nvidia. I think that Nvidia
emphasized elegance, and GPGPU futures stuff, whereas ATI went the
slightly inelegant way of combining SIMT Coherent Threading with VLIW.
It sounds more elegant when you phrase it my way, "combining SIMT
Coherent Threading with VLIW", than when you have to describe it without
my terminology. Anyway, ATI definitely had a performance per transistor
advantage. I suspect they will continue to have such an advantage over
Fermi, because, after all, VLIW works to some limited extent.

I think Fermi is more programmable and more general purpose, while ATI's
VLIW approach has efficiencies in some areas.

I think that Nvidia absolutely has to have a CPU to have a chance of
competing. One measly ARM chip or PowerPC on an Nvidia die. Or maybe
one CPU chip, one GPU chip, and a stack of memory in a package; or a GPU
plus a memory interface with a lousy CPU. Or, heck, a reasonably
efficient way of decoupling one of Nvidia's processors and running 1
thread, non-SIMT, of scalar code. SIMT is great, but there is important
non-SIMT scalar code.
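
A toy C illustration of what I mean by that (invented for this post):
the first loop is data parallel and maps nicely onto SIMT lanes, while
the second is a serial dependence chain that leaves all but one lane of
a SIMT machine idle. Code like the second is why you want a real
scalar core sitting next to the GPU.

#include <stddef.h>

/* Data parallel: every iteration is independent, so a SIMT machine can
 * spread the iterations across its lanes. */
void scale(float *y, const float *x, float a, size_t n)
{
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i];
}

/* Scalar and serial: each step depends on the previous one (pointer
 * chasing), so only one SIMT lane ever does useful work. */
struct node { struct node *next; int val; };

int sum_list(const struct node *p)
{
    int s = 0;
    while (p) {
        s += p->val;
        p = p->next;
    }
    return s;
}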

Ultimately, the CPU vendors will squeeze GPU-only vendors out of the
market. AMD & ATI are already combined. If Intel's Larrabee is
stalled, it gives Nvidia some breathing room, but not much. Even if
Larrabee is completely cancelled, which I doubt, Intel would eventually
squeeze Nvidia out with its evolving integrated graphics, which,
although widely dissed, really has a lot of potential.

Nvidia's best chance is if Intel thrashes, dithering between Larrabee
and Intel's integrated graphics and ... isn't Intel using PowerVR in
some Atom chips? I.e. Intel currently has at least 3 GPU solutions in
flight. *This* sounds like the sort of thrash Intel had -
x86/i960/i860 ... I personally think that Intel's best path to success
would be to go with a big core + the Intel integrated graphics GPU,
evolved, and then jump to Larrabee. But if they focus on Larrabee, or
an array of Atoms + a big core, their success will just be delayed.

Intel is its own biggest problem, with thrashing.

Meanwhile, AMD/ATI are in the best position. I don't necessarily like
Fusion CPU/GPU, but they have all the pieces. It's just not clear they
know how to use them.

And Nvidia needs to get out of the discrete graphics board market niche
as soon as possible. If they can do so, I bet on Nvidia.
From: Robert Myers on
On Dec 9, 11:12 pm, "Andy \"Krazy\" Glew" <ag-n...(a)patten-glew.net>
wrote:

> And Nvidia needs to get out of the discrete graphics board market niche
> as soon as possible. If they can do so, I bet on Nvidia.

Cringely thinks, well, the link says it all:

http://www.cringely.com/2009/12/intel-will-buy-nvidia/

Robert.
