From: Chris Gray on
nmm1(a)cam.ac.uk writes:

> As you know, I am radical among radicals, but what I should like to
> see is a 1,024 core chip, with an enhanced 2-D grid memory topology,
> back-to-back with its NON-shared memory, NO hardware floating-point,

Why do you say no HW float, Nick? You know far more about dealing with
applications and floating point than I do, but as an ex-Myrias employee,
my recollection is that the lack of HW floating point was one of the big
problems with the first Myrias system (the SPS-1, based on the MC68000).
The SW floating point we had was just so slow that it outweighed any other
factors relating to the system (like cost). It could have been faster if it
hadn't needed to stick to IEEE semantics, but I doubt it would have been an
order of magnitude faster.

--
Experience should guide us, not rule us.

Chris Gray cg(a)GraySage.COM
http://www.Nalug.ORG/ (Lego)
http://www.GraySage.COM/cg/ (Other)
From: Del Cecchi on

"Mayan Moudgill" <mayan(a)bestweb.net> wrote in message
news:YNmdnS6S7-70-DnXnZ2dnUVZ_hSdnZ2d(a)bestweb.net...
>
> I've been reading comp.arch off and on for more than 20 years now.
> In the past few years the SNR has deteriorated considerably, and I
> was wondering why. Maybe people who used to post at comp.arch are on
> other forums? Maybe it's that I've gotten a little harder to
> impress? Then I thought about the quality of most papers at ISCA and
> Micro, the fact that both EDF and MPF have gone away, and I think
> the rot is not confined to just comp.arch.
>
> So, what's going on? I'm sure part of it is that the latest
> generation of architects is talking at other sites.
>
> However, equally important is that there are far fewer of them. The
> number of companies designing processors has gone down and there are
> fewer startups doing processors. So, fewer architects.
>
> Within those processors there is less architecture (or micro
> architecture) being done; instead, the imperative that clock cycle
> has to be driven down leaves fewer levels of logic per cycle, which
> in turn means that the "architecture" has to be simpler. So, less to
> talk about.
>
> There is less low-hanging fruit around; most of the simpler and
> obviously beneficial ideas are known, and most other ideas are more
> complex and harder to explain/utilize.
>
> A larger number of decisions are being driven by the details of the
> process, libraries and circuit families. This stuff is less
> accessible to a non-practitioner, and probably proprietary to boot.
>
> A lot of the architecture that is being done is
> application-specific. Consequently, it's probably more apt to be
> discussed in comp.<application> than comp.arch. A lot of the
> trade-offs will make sense only in that context.
>
> Basically, I think the field has gotten more complicated and less
> accessible to the casual reader (or even the gifted, well-read
> amateur). The knowledge required of a computer architect has
> increased to the point that it's probably impossible to acquire even
> a *basic* grounding in computer architecture outside of actually
> working in the field developing a processor, or _possibly_ studying
> in one of a few PhD programs. The field has gotten to the point
> where it _may_ require architects to specialize in different
> application areas; a lot of the skills transfer, but it still
> requires retraining to move from, say, general-purpose processors to
> GPU design.
>
> I look around and see a handful of guys posting who've actually been
> doing computer architecture. But it's a shrinking pool....
>
> Ah, well - I guess I can always go hang out at
> alt.folklore.computers.

I retired. :-)

del


From: Del Cecchi on

"Mayan Moudgill" <mayan(a)bestweb.net> wrote in message
news:_didndLEUohd6TjXnZ2dnUVZ_vWdnZ2d(a)bestweb.net...
> Anne & Lynn Wheeler wrote:
>
>> Mayan Moudgill <mayan(a)bestweb.net> writes:
>>
>>>Consider this: at one time, IBM had at least 7 teams developing
>>>different processors: Rochester, Endicott, Poughkeepsie/Fishkill,
>>>Burlington, Raleigh, Austin & Yorktown Heights (R&D).
>>
>>
>> don't forget los gatos vlsi lab ... did chips for various disk
>> division products (like jib prime for 3880 disk controller). also
>> put in lots of work on blue iliad (1st 32bit 801 ... never
>> completed). then there was stuff going outside the US.
>
> Of course <smack> forgot Boeblingen. Hmm....can't think of anywhere
> else, though IBM labs at Haifa and Zurich might have done some work.
>

As of a couple years ago, Haifa was still doing stuff. As was
Boeblingen. But these days processors seem to be multi site efforts.
IBM probably still has two designs going for their own use, POWER and
whatever they call the mainframe architecture. And whatever they
might be doing for outside customers.

The obstacle is that a modern processor chip design costs a LOT of
money, tens to hundreds of millions of dollars. And that is just the
processor chip.

You can probably count the number of folks willing to put up that kind
of money on your fingers.

del


From: nmm1 on
In article <af12055e-adbd-4d70-97b0-3380e211479f(a)s31g2000yqs.googlegroups.com>,
Robert Myers <rbmyersusa(a)gmail.com> wrote:
>I'm not a writer of browsers, but I suspect there is a ton of
>embarrassing or nearly-embarrassing parallelism to exploit.

I have a lot of experience with such applications, over several
decades, and I am sure that there isn't. If you investigate the
time taken by such things, it is normal for most of it to go in
critical paths. Almost none of the protocols are either designed
or suitable for parallelism.

>> >No longer
>> >does the browser freeze because of some java script in an open tab.
>>
>> Oh, YEAH. I use a browser that has been multi-threaded for a fair
>> number of versions, and it STILL does that :-(
>>
>Yes, they sometimes do, but you can still regain control without
>killing everything--if you know which process to kill. ;-)

Eh? When I said "multi-threaded", I meant multi-threaded. If you
kill the browser process, you lose EVERYTHING you are doing, in all
of its tabs. And I hope that you aren't imagining that you can kill
one thread in a process, from outside, and expect the process to
carry on.

>General parallelism is indeed very hard. We differ in the estimation
>of how much low-hanging fruit there is.

I have been watching this area closely (and doing some work on it)
for about 40 years, and have been actively and heavily involved for
15 years. Neither I nor the major vendors nor the application
developers think that there is much low-hanging fruit left.

You may also have missed the point that most low-hanging fruit can
be picked equally easily by writing the application to use multiple
processes, and that also allows the use of distributed memory
systems. So, if there is masses of it, why have so few people
tackled it in so many decades?


Regards,
Nick Maclaren.
From: Terje Mathisen on
Mayan Moudgill wrote:
> Robert Myers wrote:
>> I don't know about computer architecture, but the general feeling in
>> physics has always been that almost no one (except the speaker and his
>> small circle of peers, of course) is smart enough to do physics, and
>> you seem to be echoing that unattractive sentiment here.
>>
>
> Unlike physics, you don't have to be smart to do computer
> architecture; it's much more of an art form. However, it's informed
> by a lot of knowledge. When one makes an architectural trade-off,
> one has to evaluate:
[big snip]
>
> Of course, it helps to have an encyclopedic knowledge of what was
> done before, both in hardware and in the software that ran on it.

Mayan, being able to do all that pretty much defines you as _very smart_
in my book! :-)

Terje

--
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"