From: Bill Todd on
nmm1(a)cam.ac.uk wrote:
> In article <kKudnelkCfoGQEPXnZ2dnUVZ_t2dnZ2d(a)metrocastcablevision.com>,
> Bill Todd <billtodd(a)metrocast.net> wrote:
>>>> Do you really think that Merced in a then-current process "_would_
>>>> have been, by far, the fastest cpu on the planet" - especially for
>>>> general (rather than heavily FP-specific) code? ...
>>> Oh, yes, indeed, it would have been - if they had delivered in 1997
>>> what they were promising in 1995-6.
>> That's irrelevant: the question was how an actual (shipped-in-mid-2001)
>> Merced would have performed if delivered in 1997 (possibly 1998) using a
>> then-current process, not how some vaporware being talked about earlier
>> might have performed.
>
> You may have gathered that I agree with you from the rest of the
> paragraph :-)

Sometimes it's difficult to be sure. Ironically, both you and Robert
have somewhat similarly subtle ways of conveying sarcasm.

I'm actually kind of interested in whether I might be wrong here, since
I don't often find myself on the opposite side of a technical issue from
Terje (not that I viewed his assertion as necessarily any more than an
off-the-cuff remark that he may not have thought much about).

>
> But the insoluble problems were software and not hardware - they
> could, at least if they had handled the project excellently, have
> delivered what they claimed in terms of hardware.

Even that observation is not really relevant to the question here -
because Terje's assertion (at least as I read it) refers to what they
actually delivered (had it been delivered earlier), not to what they
might have hoped to deliver.

- bill
From: eternal september on
Hello all,

"Andy "Krazy" Glew" <ag-news(a)patten-glew.net> wrote in message
news:4ADEA866.5090000(a)patten-glew.net...
> I look forward to slowly, incrementally, increasing the scope of the
> dataflow in OOO machines.
> * Probably the next step is to make the window bigger, by multilevel
> techniques.


What is your favorite multilevel technique? I don't think I ever heard
your opinion on HSW (Hierarchical Scheduling Windows,
http://portal.acm.org/citation.cfm?id=774861.774865)...
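
To make sure I'm asking about the right thing, here is a toy C sketch
of how I picture a two-level window working. This is not the HSW
design from the paper, just a made-up illustration: a small window is
scheduled every cycle, and a much larger level (here simply program
order) refills it as slots free up; a real design would promote
instructions whose operands are about to become ready. The sizes and
dependence pattern below are invented for the example.

/* Toy two-level instruction window: a small "fast" window is scanned
 * every cycle, a larger backing level refills it.  Purely illustrative. */
#include <stdio.h>
#include <stdbool.h>

#define N_INSNS     24
#define FAST_SLOTS   4      /* small window, scheduled every cycle      */
#define ISSUE_WIDTH  2      /* instructions issued per cycle            */

typedef struct { int dep; bool done; } insn_t;

int main(void)
{
    insn_t prog[N_INSNS];
    int fast[FAST_SLOTS];
    int n_fast = 0;
    int next = 0;                 /* next program-order insn to buffer  */
    int completed = 0, cycle = 0;

    /* Toy dependence pattern: every third instruction depends on its
     * predecessor, the rest are independent.                           */
    for (int i = 0; i < N_INSNS; i++) {
        prog[i].dep  = (i % 3 == 2) ? i - 1 : -1;
        prog[i].done = false;
    }

    while (completed < N_INSNS) {
        cycle++;

        /* Refill: the large, slow level promotes instructions into any
         * free slots of the small, fast level.                         */
        while (n_fast < FAST_SLOTS && next < N_INSNS)
            fast[n_fast++] = next++;

        /* Issue: scan only the small window and pick up to ISSUE_WIDTH
         * instructions whose (single) source operand is ready.         */
        int issued = 0;
        for (int s = 0; s < n_fast && issued < ISSUE_WIDTH; ) {
            int i = fast[s];
            if (prog[i].dep < 0 || prog[prog[i].dep].done) {
                prog[i].done = true;
                completed++;
                issued++;
                fast[s] = fast[--n_fast];   /* free the slot */
            } else {
                s++;
            }
        }
    }

    printf("%d instructions retired in %d cycles\n", N_INSNS, cycle);
    return 0;
}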

Thanks,
Ned

From: ChrisQ on
kenney(a)cix.compulink.co.uk wrote:

>> so I naturally wonder,
>> what's happened in the meantime ?.
>
> Rant mode on
>
> Software bloat. The programs I use, except for games, have not visibly
> increased in speed since my first PC. They have developed a lot more
> bells and whistles but not got faster. DOS could run programs and a GUI
> (GEM) in 512 KB of memory. Windows 3.1 would run in 1 MB, though it
> needed 4 MB to get maximum performance; I understand that Windows 7 has
> a minimum requirement of 2 GB. Just about all the increases in hardware
> speed have been used to run more elaborate software at the same speed.
>
> Rant mode off.
>
> Ken Young

Agreed, but the graphics are orders of magnitude better, and that's
partly where all the spare CPU power has gone. But little has changed
in fundamental architectural terms for a very long time. When I take
the lid off any new machine and see a device whose function I can't
estimate within a minute or two, then I shall become curious about the
progress of computing again.

I caught the first 5 or 10 minutes of some physics character on TV a
few days ago: the sort of pop science that the Beeb do so well, with
the dramatic music, graphics and odd-angle shots of the presenter
trying to look wise and intelligent. His thesis was that we would soon
have intelligence in everything, with billions of microprocessors. My
first reaction was: who will write all the code for this? There is
already a shortage of embedded programmers who really know what they
are doing. Probably India and China, but I digress. In a roundabout
sort of way he was right, and it's already happening as more and more
applications for embedded devices appear. If you track the development
of computing, the story is one of movement from an initial ivory-tower
world inhabited primarily by mathematicians to a commodity used for
amusement by everyone; but commodity means standardisation and dumbing
down far enough to make it cheap. Now we are seeing the next stage,
with some of the functionality of the desktop becoming obsolete in
favour of distributed computing: separate appliances with lower
throughput and lower power consumption.

In the old days there was a wide variety of system designs, whereas
now there are few. Why? Partly because the old machines were built
from small-scale devices: bit slice, TTL, etc., with the wiring
between them and perhaps microcode defining the overall architecture.
The state of technology produced a level playing field that anyone
with a little capital and a good idea could exploit. Now, the state of
technology is such that only the largest corporations can afford to
develop new devices. That has completely frozen out all the small
vendors in terms of visibility. It's well known that the mainstream
never develops anything really new; it exists merely to maintain the
status quo, perhaps with a few crumbs of improvement from time to
time. It's the bits at the edges where all the interesting stuff
happens, but there are effectively no edges left now. Mainstream
computing has dug itself into a very deep hole, which is why I'm so
pessimistic about future developments...

Regards,

Chris


From: Bernd Paysan on
ChrisQ wrote:
> In the old days there was a wide variety of system designs, whereas
> now there are few. Why? Partly because the old machines were built
> from small-scale devices: bit slice, TTL, etc., with the wiring
> between them and perhaps microcode defining the overall architecture.
> The state of technology produced a level playing field that anyone
> with a little capital and a good idea could exploit. Now, the state of
> technology is such that only the largest corporations can afford to
> develop new devices.

Sorry, that's not true, especially in the "small embedded intelligent
device" area we are talking about. The scale of integration has
changed: you will produce an SoC for these applications, i.e. the
parts are built from incredibly small devices - GDS polygons, to be
precise (people use more convenient building blocks, though). Most
people who make SoCs embed some standard core like an ARM (e.g. Cortex
M0) or an 8051 (shudder - it takes the same area as a Cortex M0, but
is horrible!), but that's because they chose to, not because it's not
feasible to develop your own architecture.

Such architectures usually don't surface to the user: e.g. when I
embed a b16 in a device of ours, it's not user-programmable, and it's
not visible what kind of microprocessor is inside, or whether there is
one at all.

--
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/
From: nmm1 on
In article <DHADm.42619$XI.25627(a)newsfe24.ams2>,
ChrisQ <meru(a)devnull.com> wrote:
>
>But anyway, getting back on topic, how about von neumann is dead ?, and
>what could replace it ?.

While von Neumann is dead, von Neumann computing isn't, and current
designs are almost all multiple von Neumann threads with coherent
shared memory, except for GPUs and some specialist systems, where the
coherence is patchy.
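
To be concrete, that model looks roughly like the following to
software: several ordinary sequential threads reading and writing one
coherent shared memory. This is only an illustrative sketch (the
thread count and the shared counter are arbitrary), not anything from
a real system; it builds with something like "gcc -pthread -std=c11".

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NTHREADS 4
#define ITERS    1000000

/* One location in the single, coherent shared memory. */
static atomic_long shared_counter = 0;

/* Each thread is an ordinary sequential (von Neumann) program; cache
 * coherence plus the atomic increment make its updates visible to all. */
static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITERS; i++)
        atomic_fetch_add(&shared_counter, 1);
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];

    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);

    printf("counter = %ld (expected %d)\n",
           (long)shared_counter, NTHREADS * ITERS);
    return 0;
}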

Note that I am not saying that is a good approach, merely that it is
what we have today.


Regards,
Nick Maclaren.