From: nmm1 on
In article <7zzl5sr0sz.fsf(a)pc-003.diku.dk>,
Torben Ægidius Mogensen <torbenm(a)diku.dk> wrote:
>"Andy \"Krazy\" Glew" <ag-news(a)patten-glew.net> writes:
>
>>
>> Reason: Tools. Ubiquity. Libraries. Applies just as much to Linux as
>> to Windows. You are running along fine on your non-x86 box, and then
>> realize that you want to use some open source library that has been
>> developed and tested mainly on x86. You compile from source, and
>> there are issues. All undoubtedly solvable, but NOT solved right
>> away. So as a result, you either can't use the latest and greatest
>> library, or you have to fix it.
>>
>> Like I said, this was supercomputer customers telling me this. Not
>> all - but maybe 2/3rds. Also, especially, the supercomputer
>> customers' sysadmins.
>
>Libraries are, of course, important to supercomputer users. But if they
>are written in a high-level language and the new CPU uses the same
>representation of floating-point numbers as the old (e.g., IEEE), they
>should compile to the new platform. Sure, some low-level optimisations
>may not apply, but if the new platform is a lot faster than the old,
>that may not matter. And you can always address the optimisation issue
>later.

Grrk. All of the above is partially true, but only partially. The
problem is almost entirely with poor-quality software (which is,
regrettably, most of it). Good quality software is portable to
quite wildly different systems fairly easily. It depends on whether
you are talking about performance-critical, numerical libraries
(i.e. what supercomputer users really want to do) or administrative
and miscellaneous software.

For the former, the representation isn't enough, as subtle differences
like hard/soft underflow and exception handling matter, too. And you
CAN'T disable optimisation for supercomputers, because you can't
accept the factor of 10+ degradation. It doesn't help, anyway,
because you will be comparing with an optimised version on the
other systems.
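
A small C probe (my own sketch, not from the post) makes the underflow
point concrete: the same source can behave differently on two machines
that both use IEEE formats, depending on whether tiny results underflow
gradually or are flushed to zero:

/* Sketch only: probe whether this platform/compiler combination gives
 * gradual IEEE underflow (subnormals) or flushes tiny results to zero,
 * one of the "subtle differences" mentioned above. */
#include <float.h>
#include <stdio.h>

int main(void)
{
    volatile double tiny = DBL_MIN;     /* smallest normal double      */
    volatile double half = tiny / 2.0;  /* subnormal, or 0.0 under FTZ */

    if (half == 0.0)
        printf("underflow is flushed to zero\n");
    else
        printf("gradual underflow: %g is subnormal\n", half);
    return 0;
}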

With the latter, porting is usually trivial, provided that the
program has not been rendered non-portable by the use of autoconfigure,
and that it doesn't use the more ghastly parts of the infrastructure.
But most applications that do rely on those areas aren't relevant
to supercomputers, anyway, because they are concentrated around
the GUI area (and, yes, Flash is a good example).

I spent a decade managing the second-largest supercomputer in UK
academia, incidentally, and some of the systems I managed were
'interesting'.

>Besides, until recently supercomputers were not mainly x86-based.
>
>> Perhaps supercomputers are more legacy x86 sensitive than game consoles...

Much less so.

From: Ken Hagan on
On Wed, 09 Dec 2009 08:47:40 -0000, Torben Ægidius Mogensen
<torbenm(a)diku.dk> wrote:

> Sure, some low-level optimisations
> may not apply, but if the new platform is a lot faster than the old,
> that may not matter. And you can always address the optimisation issue
> later.

I don't think Andy was talking about poor optimisation. Perhaps these
libraries have assumed the fairly strong memory ordering model of an x86,
and in its absence would be chock full of bugs.
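
A sketch of the class of bug Ken is describing, using C11 atomics (the
example is illustrative, not taken from any real library): a flag/data
handoff that happens to work with plain stores under x86's strong
ordering, but needs an explicit release/acquire pair to be correct on
weakly ordered machines:

/* On x86 (TSO) even plain or relaxed stores are not reordered past each
 * other, so code that omits the release/acquire pair below appears to
 * work; on ARM or POWER the consumer could then see ready == 1 while
 * data is still stale. */
#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>

static int data;
static atomic_int ready;

static int producer(void *arg)
{
    (void)arg;
    data = 42;
    /* x86-only code often uses a plain store here and gets away with it */
    atomic_store_explicit(&ready, 1, memory_order_release);
    return 0;
}

static int consumer(void *arg)
{
    (void)arg;
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ;                         /* spin until the producer publishes */
    printf("data = %d\n", data);  /* guaranteed 42 only with acquire   */
    return 0;
}

int main(void)
{
    thrd_t p, c;
    thrd_create(&c, consumer, NULL);
    thrd_create(&p, producer, NULL);
    thrd_join(p, NULL);
    thrd_join(c, NULL);
    return 0;
}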

> Flash is available on ARM too. And if another platform becomes popular,
> Adobe will port Flash to this too.

When hell freezes over. It took Adobe *years* to get around to porting
Flash to x64.

They had 32-bit versions for Linux and Windows for quite a while, but no
64-bit version for either. To me, that suggests the problem was the
int-size rather than the platform, and it just took several years to clean
it up sufficiently. So I suppose it is *possible* that the next port might
not take so long. On the other hand, both of these targets have Intel's
memory model, so I'd be surprised if even this "clean" version was truly
portable.
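
For illustration, a guess at the "int-size" class of bug (nothing to do
with Adobe's actual code): ILP32-era code that stashes a pointer in a
32-bit integer compiles cleanly but silently truncates on a 64-bit
target:

/* On LP64 Linux or LLP64 Windows the 32-bit "handle" below drops the
 * pointer's upper bits; the recovered pointer is no longer valid. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int x = 7;
    int *p = &x;

    unsigned int handle = (unsigned int)(uintptr_t)p;  /* 32-bit "handle"            */
    int *q = (int *)(uintptr_t)handle;                 /* truncated if pointers are 64-bit */

    printf("original %p, recovered %p\n", (void *)p, (void *)q);
    return 0;
}
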
From: Ken Hagan on
On Tue, 08 Dec 2009 13:13:32 -0000, ChrisQ <meru(a)devnull.com> wrote:

> The obvious question then is: Would one of many x86 cores be fast enough
> on its own to run legacy windows code like office, photoshop etc ?...

Almost certainly. From my own experience, Office 2007 is perfectly usable
on a 2GHz Pentium 4 and only slightly sluggish on a 1GHz Pentium 3. These
applications are already "lightly multi-threaded", with some of the
longer-running operations spun off on background threads, so even with
2 or 3 still slower cores it would probably be OK, because the
application *would* divide the workload. For screen drawing, the OS
plays a similar trick.

I would also imagine that Photoshop had enough embarrassing parallelism
that even legacy versions might run faster on a lot of slow cores, but I'm
definitely guessing here.
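
As a rough illustration of that embarrassing parallelism (the function
and pragma are invented, not Photoshop's): a per-pixel operation has no
cross-pixel dependencies, so it splits across however many cores are
available:

/* Per-pixel brightness adjustment; each iteration is independent, so
 * the loop scales across many slow cores almost for free. */
#include <stddef.h>

void brighten(unsigned char *pixels, size_t n, int delta)
{
    #pragma omp parallel for            /* compile with -fopenmp */
    for (size_t i = 0; i < n; i++) {
        int v = pixels[i] + delta;
        pixels[i] = (unsigned char)(v < 0 ? 0 : v > 255 ? 255 : v);
    }
}
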
From: Noob on
Bernd Paysan wrote:

> This of course would be much less of a problem if Flash wasn't something
> proprietary from Adobe [...]

A relevant article:
Free Flash community reacts to Adobe Open Screen Project
http://www.openmedianow.org/?q=node/21
From: Stefan Monnier on
> They had 32-bit versions for Linux and Windows for quite a while, but no
> 64-bit version for either. To me, that suggests the problem was the

It's just a question of market share.
Unlike Free Software, where any idiot can port the code to his
platform if he so wishes, proprietary software first requires collecting
a large number of idiots to justify
compiling/testing/marketing/distributing the port.


Stefan