From: Robert Myers on
On Sep 7, 4:58 pm, Mayan Moudgill <ma...(a)bestweb.net> wrote:

> Unlike physics, you don't have to be smart to do computer architecture;

Depends on what you mean by smart, I'm sure. There are different
kinds of smart.

> It's much more of an art form. However, it's informed by a lot of
> knowledge. When one makes an architectural trade-off, one has to evaluate:
> - how much will this benefit?
> - how will it be implemented?
> - how will it fit together with the rest of the design?
> - is there some better way of doing things?
> - does it really solve the problem you think it's going to solve?
> - how will it affect cycle time? area? power? yield?
>
> If you draw a box with a particular feature, you'd better be able to
> answer the time/area/power question. That depends on having a good feel
> for how it will translate into actual hardware, which in turn requires
> you to understand both what you could do if you could do full-custom
> and what you could do if you were restricted to libraries. You have to
> know the process you're designing in, and its restrictions - in
> particular, wire-delay will get more important. If you're using dynamic
> logic, there are even more issues. You have to keep in mind the
> limitations of the tools to place/route/time/perform noise-analysis etc.
> You have to understand the pipeline you're going to fit into, and the
> floorplan of the processor, so that you can budget for wire delays and
> chop up the block into appropriate stages.
>
> And this does not even address the issues of coming up with the features
> in the first place. That's generally driven by the application or
> application mix you are trying to tackle. You have to be able to
> understand where the bottlenecks are. Then you have to come up with ways
> to remove them. Quite often, this can be done without changes to the
> architecture, or changes to the architecture that appear to solve a
> completely different problem. Also, if you remove a bottleneck, you have
> to figure out whether there's going to be a bottleneck just behind it.
>
> Of course, it helps to have an encyclopedic knowledge of what was done
> before, both in hardware and in the software that ran on it.

This forum has discussed SMT/hyper-threading to a fare-thee-well, but
the discussion can't get much beyond power/die-area/performance
trade-offs, because the real choices are, as you insist, buried in
details that you never get to see unless you are actually doing the
design and know a great many things about the market trade-offs.

Prefetch is hugely important, but understanding how it actually works
must require a great deal of reverse engineering on the part of
competitors, because meaningful details never seem to be forthcoming
from manufacturers. I'm assuming that Microsoft's compiler designers,
for example, know lots of useful things that most others don't, and
that they got them from the horse's mouth under an NDA.

It must be frustrating to see so much semi-ignorant discussion, but
the little gems that occasionally fall on the carpet are well worth it
to some of us.

Why *didn't* the P4 have a barrel shifter? Because the watts couldn't
be spared, I'm sure, but why was NetBurst jammed into that box? I'm
sure there is an answer that doesn't involve hopelessly arcane
details. Whether it's worth the time of any real computer architect to
talk about it would have to be an individual decision.

Robert Myers.

From: Anne & Lynn Wheeler on

Robert Myers <rbmyersusa(a)gmail.com> writes:
> You've such a way with words, Nick. Browsers, which are probably the
> OS of the future, are already multi-threaded or soon to be. No longer
> does the browser freeze because of some java script in an open tab.
> Browsers that don't seize that advantage will fall by the wayside.
> The same will happen all over software, and at increasing levels of
> fineness of division of labor.

i frequently have a couple hundred concurrent tabs ... have since tabs
were originally introduced. lots of improvements over the past 4-5 yrs
in handling multiple hundreds of concurrent tabs ... browsers all along
did some amount of internal multi-threading ... but not necessarily
mapping concurrent threads to different processors.

in any case, just mapping tabs/threads to processors won't necessarily
fix my problems for some time yet (I'd need at least as many physical
processors as I have concurrent tabs).

in my undergraduate days ... I did a lot of work on resource management
and scheduling ... and when threads were totally independent I could
take advantage of multiple physical processors (and not let hogs hog
resources).

however, one of the premier multi-threaded transaction-processing
systems from the 60s was CICS (the univ. where I was an undergraduate
was selected to be one of the original CICS product betatest locations,
and I got tasked to support/debug the deployment, 40 yrs ago now).

In any case ... it wasn't until a couple yrs ago that CICS
multi-threaded support was upgraded to support multiple processors (up
until then large installations might have 100 or more different
concurrent CICS "images" ... some still have multiple concurrent CICS
images). cics multiprocessor exploitation
http://www.ibmsystemsmag.com/mainframe/septemberoctober05/tipstechniques/10093p1.aspx


--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Robert Myers on
On Sep 7, 5:10 pm, n...(a)cam.ac.uk wrote:
> In article <aaf198b8-b33b-4214-a142-b0958f6d9...(a)m11g2000yqf.googlegroups.com>,
> Robert Myers  <rbmyers...(a)gmail.com> wrote:
>
> >On Sep 7, 3:38 pm, n...(a)cam.ac.uk wrote:
>
> >> Only a complete loon would
> >> expect current software to do anything useful on large numbers of
> >> processors, let alone with a new architecture!
>
> >You've such a way with words, Nick.  Browsers, which are probably the
> >OS of the future, are already multi-threaded or soon to be.
>
> So?  If you think that making something "multi-threaded" means that
> it can make use of large numbers of processors, you have a lot to
> learn about developing parallel programs.  And, by "large", I don't
> mean 4-8, I mean a lot more.
>
I'm not a writer of browsers, but I suspect there is a ton of
embarrassing or nearly-embarrassing parallelism to exploit.

> >No longer
> >does the browser freeze because of some java script in an open tab.
>
> Oh, YEAH.  I use a browser that has been multi-threaded for a fair
> number of versions, and it STILL does that :-(
>
Yes, they sometimes do, but you can still regain control without
killing everything--if you know which process to kill. ;-)

> >Browsers that don't seize that advantage will fall by the wayside.
> >The same will happen all over software, and at increasing levels of
> >fineness of division of labor.
>
> Yeah.  That's what I was being told over 30 years ago.  Making use
> of parallelism is HARD - anyone who says it is easy is a loon.  Yes,
> there are embarrassingly parallel requirements, but there are fewer
> than most people think, and even they hit scalability problems if
> not carefully designed.
>
General parallelism is indeed very hard. We differ in our estimates of
how much low-hanging fruit there is.

Robert.

From: Chris Gray on
Robert Myers <rbmyersusa(a)gmail.com> writes:

> You've such a way with words, Nick. Browsers, which are probably the
> OS of the future, are already multi-threaded or soon to be. No longer
> does the browser freeze because of some java script in an open tab.
> Browsers that don't seize that advantage will fall by the wayside.
> The same will happen all over software, and at increasing levels of
> fineness of division of labor.

I'm also in the camp of not believing this will go far. It all eventually
has to be rendered to your screen. As far as I know, that currently
involves serialization in things like the X interface, or the equivalent
in Windows. Those interfaces are serialized so that you can predict what
your display will look like after any given set of operations. There need
to be some changes to those interfaces (and possibly also at the driver
level, where X/whatever is telling the video driver what to do). Games
can do this stuff, but they tend to take over complete control.

Aside: the browser as OS is not something that this old fart is going
to embrace!

--
Experience should guide us, not rule us.

Chris Gray cg(a)GraySage.COM
http://www.Nalug.ORG/ (Lego)
http://www.GraySage.COM/cg/ (Other)
From: Paul Wallich on
nmm1(a)cam.ac.uk wrote:
> In article <a_CdnXqAUf36KDnXnZ2dnUVZ8oCdnZ2d(a)lyse.net>,
> Terje Mathisen <Terje.Mathisen(a)tmsw.no> wrote:
>> Mayan Moudgill wrote:
>>> So, what's going on? I'm sure part of it is that the latest generation of
>>> architects is talking at other sites.
>>>
>>> However, equally important is that there are far fewer of them. The
>>> number of companies designing processors has gone down and there are
>>> fewer startups doing processors. So, fewer architects.
>> Maybe. It might also be that the number of _good_ architects is more or
>> less constant, and the required minimum size of a team has gone up.
>
> Not really. I don't think that the number of top architects needed
> on a team is likely to be very different. The reasons are almost
> certainly that architecture is now dominated by 'safe' designs (i.e.
> those with proven markets), and a few very large companies. The
> active architects cannot post about what they are working on, and
> there isn't a lot of peripheral activity.

About 5 years ago, when I interviewed a bunch of people (some of whom
used to be active here) about a similar question, one consensus seemed
to be that -- for the time being at least -- computer architecture as
generally understood wasn't where the action was. The cheaply-accessible
design space had in large part been visited, the cost of designing and
building a high-end-competitive CPU had long since winnowed the field,
and the CPU was no longer where the interesting problems were. Building
stuff on top of CPUs, hooking them together in interesting ways with
other intelligent chunks of systems and stuff like that were considered
more interesting, at least when "interesting" has to include
accessibility to people outside a sub-subspecialty.

Perhaps a good analogy would be automobile engines. Around the turn of
the 20th century, you had gasoline internal combustion, diesel, steam
and other external-combustion, electric, and probably a bunch of others
until things shook out. Then the field became interesting only to people
who studied internal combustion, then as those designs became
standardized, an ever-narrowing subset of even-number-of-cylinders IC.
Now with hybrids and electrics you're getting a bunch of interesting
choices again, some with the architecture of the IC and other engines
exposed, sometimes not so much. But it took a new level of other
technology and of constraints to make the field open to highly-visible
innovation again.

paul