From: ChrisQ on
Ken Hagan wrote:
> On Tue, 08 Dec 2009 08:45:09 -0000, Torben Ægidius Mogensen
> <torbenm(a)diku.dk> wrote:
>
>> The main (only?) advantage of the x86 ISA is for running legacy software
>> (yes, I do consider Windows to be legacy software). And I don't see
>> this applying for Larrabee -- you can't exploit the parallelism when you
>> run dusty decks.
>
> But you can exploit the parallelism where you really needed it and carry
> on using the dusty decks for all the other stuff, without which you
> don't have a rounded product.

The obvious question then is: Would one of many x86 cores be fast enough
on its own to run legacy Windows code like Office, Photoshop, etc.?

Regards,

Chris
From: "Andy "Krazy" Glew" on
Torben Ægidius Mogensen wrote:
> "Andy \"Krazy\" Glew" <ag-news(a)patten-glew.net> writes:
>
>> Although I remain an advocate of GPU style coherent threading
>> microarchitectures - I think they are likely to be more power
>> efficient than simple MIMD, whether SMT/HT or MCMT - the pull of X86
>> will be powerful.
>
> The main (only?) advantage of the x86 ISA is for running legacy software
> (yes, I do consider Windows to be legacy software). And I don't see
> this applying for Larrabee -- you can't exploit the parallelism when you
> run dusty decks.
>
> When developing new software, you want to use high-level languages and
> don't really care too much about the underlying instruction set -- the
> programming model you have to use (i.e., shared memory versus message
> passing, SIMD vs. MIMD, etc.) is much more important, and that is
> largely independent of the ISA.
>
> Torben


I wish that this were so.

I naively thought it was so, e.g. for big supercomputers. After all,
they compile all of their code from scratch, right? What do they care
if the actual parallel compute engines are non-x86? Maybe have an x86 in
the box, to run legacy stuff.

Unfortunately, they do care. It may not be the primary concern - after
all, they often compile their code from scratch. But, if not primary,
it is one of the first of the secondary concerns.

Reason: Tools. Ubiquity. Libraries. Applies just as much to Linux as to
Windows. You are running along fine on your non-x86 box, and then
realize that you want to use some open source library that has been
developed and tested mainly on x86. You compile from source, and there
are issues. All undoubtedly solvable, but NOT solved right away. So as
a result, you either can't use the latest and greatest library, or you
have to fix it.
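
To make that concrete, here is a made-up C fragment - not lifted from any
particular library - of the sort of x86 assumption that bites you when you
port. The SSE path is the one everybody builds and times on x86; the
plain-C fallback is the path nobody has exercised, so that is where the
build breaks and the bugs hide:

    #include <stddef.h>
    #ifdef __SSE2__
    #include <emmintrin.h>   /* x86-only header; no equivalent on your non-x86 box */
    #endif

    /* Hypothetical hot loop, tuned and tested on x86. */
    static void scale(float *dst, const float *src, float k, size_t n)
    {
        size_t i = 0;
    #ifdef __SSE2__
        __m128 vk = _mm_set1_ps(k);
        for (; i + 4 <= n; i += 4)
            _mm_storeu_ps(dst + i, _mm_mul_ps(_mm_loadu_ps(src + i), vk));
    #endif
        for (; i < n; i++)   /* the "portable" path, rarely built, rarely timed */
            dst[i] = src[i] * k;
    }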

Like I said, this was supercomputer customers telling me this. Not all
- but maybe 2/3rds. Also, especially, the supercomputer customers'
sysadmins.

Perhaps supercomputers are more legacy x86 sensitive than game consoles...

I almost believed this when I wrote it. And then I thought about flash:

.... Than game consoles that want to start running living room
mediacenter applications. That want to start running things like x86
binary plugins, and Flash. Looking at

http://www.adobe.com/products/flashplayer/systemreqs/

The following minimum hardware configurations are recommended for
an optimal playback experience: ... all x86, + PowerPC G5.

I'm sure that you can get a version that runs on your non-x86,
non-PowerPC platform. ... But it's a hassle.

===

Since I would *like* to work on chips in the future as I have in the
past, and since I will never work at Intel or AMD again, I *want* to
believe that non-x86s can be successful. I think they can be
successful. But we should not fool ourselves: there are significant
obstacles, even in the most surprising market segments where x86
compatibility should not be that much of an issue.

We, the non-x86 forces of the world, need to recognize those obstacles,
and overcome them. Not deny their existence.
From: Bernd Paysan on
Andy "Krazy" Glew wrote:
> I almost believed this when I wrote it. And then I thought about flash:
>
> ... Than game consoles that want to start running living room
> mediacenter applications. That want to start running things like x86
> binary plugins, and Flash. Looking at
>
> http://www.adobe.com/products/flashplayer/systemreqs/
>
> The following minimum hardware configurations are recommended for
> an optimal playback experience: ... all x86, + PowerPC G5.
>
> I'm sure that you can get a version that runs on your non-x86,
> non-PowerPC platform. ... But it's a hassle.

It's mainly a deal between the platform maker and Adobe. Consider another
market, where x86 is non-existent: Smartphones. They are now real
computers, and Flash is an issue. Solution: Adobe ports the Flash plugin
over to ARM as well. They already have Flash 9.4 ported (it runs on the
Nokia N900), and Flash 10 will soon get an ARM port too and spread to more
smartphones. Or Skype: also necessary, also proprietary, but also
available on ARM. As long as the device maker cares, it's their hassle, not
the user's (and even on a "free software only" netbook Ubuntu, installing
the Flash plugin is too much of a hassle to be considered fine for mere
mortals).

This of course would be much less of a problem if Flash wasn't something
proprietary from Adobe, but an open standard (or at least based on an open
source platform), like HTML.

Note, however, that even for a console maker, backward compatibility with
the previous platform is an issue. Sony put the complete PS2 logic (packed
into a newer, smaller chip) on the first PS3 generation to allow people to
play PS2 games on their PS3. If they completely change architecture with
the PS4, will they do that again? Or are they now fed up with this problem,
and will they decide to go to x86 and be done with that recurring problem?

--
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/
From: Del Cecchi on

"Andy "Krazy" Glew" <ag-news(a)patten-glew.net> wrote in message
news:4B1DD041.7090009(a)patten-glew.net...
> Del Cecchi wrote:
>> "Andy "Krazy" Glew" <ag-news(a)patten-glew.net> wrote in message
>> news:4B1D1685.7080307(a)patten-glew.net...
>>> nmm1(a)cam.ac.uk wrote:
>>>
>>>> I have never seen anything reliable
>>>> indicating that Larrabee has ever been intended for consumers,
>>>
>>> http://software.intel.com/en-us/blogs/2008/08/11/siggraph-larrabee-and-the-future-of-computing/
>>>
>>> Just a blog, not official, although of course anything blogged at
>>> Intel is semi-blest (believe me, I know the flip side.)
>>
>> Does this mean Larrabee won't be the engine for the PS4?
>>
>> We were assured that it was not long ago.
>
> My guess is that Intel was pushing for Larrabee to be the PS4 chip.
>
> And, possibly, Sony agreed. Not unreasonably, if Intel had made a
> consumer grade Larrabee. Since Larrabee's big pitch is
> programmability - cache coherence, MIMD, vectors, familiar stuff.
> As opposed to the Cell's idiosyncrasies and programmer hostility,
> which are probably in large part to blame for Sony's lack of success
> with the PS3.

I believe Cell was Sony's idea in the first place. I could be wrong
about that, but it was certainly the vibe at the time. And Sony's lateness
and high price were at least as much due to the included Blu-ray drive,
which did lead to them winning the DVD format war.
>
> Given the present Larrabee situation, Sony is probably scrambling.
> Options:
>
> a) go back to Cell.
>
> b) more likely, eke out a year or so with Cell and a PS4 stretch,
> and then look around again - possibly at the next Larrabee
>
> c) AMD/ATI Fusion
>
> d) Nvidia? Possibly with the CPU that Nvidia is widely rumored to
> be working on.
>
> AMD/ATI and Nvidia might seem the most reasonable, except that both
> companies have had trouble delivering. AMD/ATI look best now, but
> Nvidia has more "vision". Whatever good that will do them.
>
> Larrabee's attractions remain valid. It is more programmer
> friendly. But waiting until Larrabee is ready may be too painful.
>
> Historically, game consoles have a longer lifetime than PCs. They
> were programmed closer to the metal, and hence needed stability in
> order to warrant software investment.
>
> But DX10-DX11 and OpenGL are *almost* good enough for games. And
> allow migrating more frequently to the latest and greatest.
>
> Blue-sky possibility: the PS3-PS4 transition breaking with the
> tradition of console stability. The console might stay stable in form
> factor, UI, and devices - screen pixels, joysticks, etc. - but
> may start changing the underlying compute and graphics engine more
> quickly than in the past.
>
> Related: net games.


From: Torben Ægidius Mogensen on
"Andy \"Krazy\" Glew" <ag-news(a)patten-glew.net> writes:

> Torben Ægidius Mogensen wrote:

>> When developing new software, you want to use high-level languages and
>> don't really care too much about the underlying instruction set -- the
>> programming model you have to use (i.e., shared memory versus message
>> passing, SIMD vs. MIMD, etc.) is much more important, and that is
>> largely independent of the ISA.

> I naively thought it was so, e.g. for big supercomputers. After all,
> they compile all of their code from scratch, right? What do they care
> if the actual parallel compute engines are non-x86? Maybe have an x86
> in the box, to run legacy stuff.
>
> Unfortunately, they do care. It may not be the primary concern -
> after all, they often compile their code from scratch. But, if not
> primary, it is one of the first of the secondary concerns.
>
> Reason: Tools. Ubiquity. Libraries. Applies just as much to Linux as
> to Windows. You are running along fine on your non-x86 box, and then
> realize that you want to use some open source library that has been
> developed and tested mainly on x86. You compile from source, and
> there are issues. All undoubtedly solvable, but NOT solved right
> away. So as a result, you either can't use the latest and greatest
> library, or you have to fix it.
>
> Like I said, this was supercomputer customers telling me this. Not
> all - but maybe 2/3rds. Also, especially, the supercomputer
> customers' sysadmins.

Libraries are, of course, important to supercomputer users. But if they
are written in a high-level language and the new CPU uses the same
representation of floating-point numbers as the old (e.g., IEEE), they
should compile to the new platform. Sure, some low-level optimisations
may not apply, but if the new platform is a lot faster than the old,
that may not matter. And you can always address the optimisation issue
later.
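
For a trivial sketch of what I mean, consider a kernel like the one below.
It names no ISA at all: given IEEE floating-point and a C compiler with
OpenMP (the shared-memory programming model), the same source builds
unchanged on x86, POWER, ARM or anything else, and any platform-specific
tuning can be added behind the same interface later:

    /* Portable by construction: no intrinsics, no ISA-specific code.
       The parallelism is expressed at the programming-model level
       (shared memory, OpenMP); the instruction set is the compiler's
       problem. */
    double dot(const double *a, const double *b, long n)
    {
        double s = 0.0;
        #pragma omp parallel for reduction(+:s)
        for (long i = 0; i < n; i++)
            s += a[i] * b[i];
        return s;
    }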

Besides, until recently supercomputers were not mainly x86-based.

> Perhaps supercomputers are more legacy x86 sensitive than game consoles...
>
> I almost believed this when I wrote it. And then I thought about flash:
>
> ... Than game consoles that want to start running living room
> mediacenter applications. That want to start running things like x86
> binary plugins, and Flash. Looking at
>
> http://www.adobe.com/products/flashplayer/systemreqs/
>
> The following minimum hardware configurations are recommended for
> an optimal playback experience: ... all x86, + PowerPC G5.
>
> I'm sure that you can get a version that runs on your non-x86,
> non-PowerPC platform. ... But it's a hassle.

Flash is available on ARM too. And if another platform becomes popular,
Adobe will port Flash to that too. But that is not the issue: Flash
doesn't run on the graphics processor; it runs on the main CPU, though
it may use the graphics processor through a standard API that hides the
details of the GPU ISA.
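
As an illustrative sketch (not Flash's actual code), this is what hiding
the GPU ISA behind a standard API looks like with OpenGL ES 2.0: the
application hands the driver shader source text, and the driver compiles
it for whatever GPU is underneath, so the application binary never
contains GPU machine code:

    /* Sketch assuming an OpenGL ES 2.0 development environment. */
    #include <GLES2/gl2.h>

    static const char *frag_src =
        "void main() { gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); }";

    GLuint make_shader(void)
    {
        GLuint sh = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(sh, 1, &frag_src, NULL);  /* GLSL source text, not a GPU binary */
        glCompileShader(sh);                     /* the driver targets its own GPU ISA */
        return sh;
    }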

Torben
