From: Eugene Miya on
> link to the paper?

Just easier for me to read it in and cut.

Newsgroups: comp.parallel,comp.sys.super
Subject: [l/m 9/20/2005] IBM and Amdahl -- comp.parallel (20/28) FAQ
Summary:
References:
Sender:
Reply-To: eugene(a)nasS.nasaN.govIP (Eugene N. Miya) SNIP
Followup-To: poster
Distribution: world
Organization: Usenet Self-Moderation Project
Keywords:
Approved: eugene

Archive-Name: superpar-faq
Last-modified: 20 Oct 2005

20 IBM and Amdahl (the man, the article)
22 Grand challenges and HPCC
24 Suggested (required) readings
26 Dead computer architecture society
28 Dedications
2 Introduction and Table of Contents and justification
4 Comp.parallel news group history
6 parlib
8 comp.parallel group dynamics
10 Related news groups, archives and references
12
14 References
16
18 Supercomputing and Crayisms


Keywords and phrases:
Watson memo, marketing, COBOL, mythology, aspiring blue boxes, DEC,
laws: Amdahl, other,
challenge, debates, prizes: Karp, Bell, Hillis


# Is a supercomputer a mainframe?
.....
846 lines clipped


What's Amdahl's Law?
====================

What's this I hear about "Amdahl's other law/rule?"
===================================================


The following is probably one of the most important papers challenging
parallelism (several people have recommended this article be read weekly;
well, we'll make it monthly here). It's kinda like UFOs.


The following document is Copyright AFIPS 1967.
Copyright pending (checked with the Copyright Clearance Center).
Nod also given by the author.
This document's use shall be for non-profit educational use only.

Citation:

%A Dr. Gene M. Amdahl
%Z International Business Machines Corporation, Sunnyvale, California
%T Validity of the single processor approach to achieving
large scale computing capabilities
%J AFIPS Proc. of the SJCC
%V 30
%D 1967
%P 483-485



Validity of the single processor
approach to achieving large scale
computing capabilities

DR. GENE M. AMDAHL
International Business Machines Corporation
Sunnyvale, California

The indented table structure is editorial to improve readability.
The original paper did not have it.

INTRODUCTION

For a decade prophets have voiced the contention that the organization
of a single computer has reached its limits and that truly significant
advances can be made only by interconnection of a multiplicity of computers
in such a manner as to permit cooperative solution. Variously the
proper direction has been pointed out as general purpose computers
with a generalized interconnection of memories, or as specialized computers
with geometrically related memory interconnections and controlled by one or
more instruction streams.

Demonstration is made of the continued validity of the single processor
approach and of the weaknesses of the multiple processor approach in
terms of application to real problems and their attendant irregularities.

The arguments presented are based on statistical characteristics of
computation on computers over the last decade and upon the operational
requirements within problems of physical interest. An additional reference
will be one of the most thorough analyses of relative computer capabilities
currently published --
"Changes in Computer Performance," *Datamation*, September 1966,
Professor Kenneth E. Knight, Stanford School of Business Administration.

The first characteristic of interest is the fraction of the computational
load which is associated with data management housekeeping. This fraction
has been very nearly constant for about ten years, and accounts for 40% of
the executed instructions in production runs. In an entirely dedicated
special purpose environment this might be reduced by a factor of two,
but it is highly improbably [sic] that it could be reduced by a factor of
three. The nature of this overhead appears to be sequential so that it is
unlikely to be amenable to parallel processing techniques. Overhead alone
would then place an upper limit on throughput of five to seven times the
sequential processing rate, even if housekeeping were done in a separate
processor. The non-housekeeping part of the problem could exploit at most
a processor of performance three to four times the performance of the
housekeeping processor. A fairly obvious conclusion which can be drawn at
this point is that the effort expended on achieving high parallel processing
rates is wasted unless it is accompanied by achievement in sequential
processing rates of very nearly the same magnitude.
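(An editorial aside, not part of the 1967 text: the prose bound above is
what is now usually written as Amdahl's Law. A minimal sketch in Python;
the function name and the processor counts are illustrative.)

```python
# Amdahl's Law: if a fraction s of the work is inherently sequential,
# overall speedup on n processors is 1 / (s + (1 - s) / n),
# and it can never exceed 1 / s no matter how large n grows.
def amdahl_speedup(serial_fraction, n_processors):
    parallel_fraction = 1.0 - serial_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_processors)

# Amdahl's 40% housekeeping figure, cut "by a factor of two" (20%) or by
# nearly three (~14%), caps throughput at five to seven times the
# sequential rate -- the upper limit the paragraph above arrives at.
for s in (0.40, 0.20, 1.0 / 7.0):
    print(round(1.0 / s, 1))  # prints 2.5, then 5.0, then 7.0
```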

Data management housekeeping is not the only problem to plague oversimplified
approaches to high speed computation. The physical problems which are of
practical interest tend to have rather significant complications.
Examples of these complications are as follows:
boundaries are likely to be irregular;
interiors are likely to be inhomogeneous;
computations required may be dependent on the states of variables
at each point;
propagation rates of different physical effects may be quite different;
the rate of convergence, or convergence at all, may be strongly
dependent on sweeping through the array along different
axes on succeeding passes;
etc.
The effect of each of these complications is very severe on any
computer organization based on geometrically related processors in
a paralleled processing system. Even the existence of rectangular boundaries
has the interesting property that for spatial dimension N there are 3^N
different point geometries to be dealt with in a nearest neighbor computation.
If the second nearest neighbor were also involved, there would be 5^N
different point geometries to contend with. An irregular boundary
compounds this problem as does an inhomogeneous interior. Computations
which are dependent on the states of variables would require the processing
at each point to consume approximately the same computational time as the
sum of the computations of all physical effects within a large region.
Differences or changes in propagation rate may affect the mesh point
relationships.
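(Another editorial aside, not part of the paper: the 3^N count is easy to
verify by classifying each mesh point by its position along every axis --
low boundary, interior, or high boundary. The function name below is
illustrative.)

```python
from itertools import product

# Each axis of an N-dimensional mesh point is in one of 3 states for a
# nearest-neighbor stencil (low boundary, interior, high boundary), or 5
# states when second-nearest neighbors are involved, so the number of
# distinct point geometries is cases_per_axis ** n_dims.
def point_geometries(n_dims, cases_per_axis=3):
    return len(list(product(range(cases_per_axis), repeat=n_dims)))

print(point_geometries(2))     # 9: interior, 4 edge types, 4 corner types
print(point_geometries(3))     # 27 for a 3-D mesh
print(point_geometries(2, 5))  # 25 with second-nearest neighbors
```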

Ideally the computation of the action of the neighboring points upon the
point under consideration involves their values at a previous time
proportional to the mesh spacing and inversely proportional to the
propagation rate. Since the time step is normally kept constant,
a faster propagation rate
[remainder of the paper clipped]
From: Eugene Miya on
>> >Unfortunately neither you nor mr fuller have come up with superior
>> >solutions for many of these problems. Domes haven't replaced
>> >rectilinear frame construction, and alternatives haven't replaced von
>> >neumann.
>>
>> Not only that. Stewart Brand recanted his support for domes in his smart
>> buildings book. This is not to say that they don't have uses like
>> radomes. And the "largest dome in Livermore" was once useful for finding
>> a future officemate's house. Boy what a time to leave my RSA data frob at
>> home.... No access to my quote database for what Brand said.

In article <1162511337.107692.170550(a)h48g2000cwc.googlegroups.com>,
BDH <bhauth(a)gmail.com> wrote:
>Eh, I'm not trying to sell domes here.

You are attempting an intellectual technique to justify "free" thinking.
You aren't the first, nor will you be the last.
Fuller was also popular for this when I was in high school.
He had a few good ideas. But he also had good technicians.

Del is pretty sharp. He and I may not always disagree, but
we do agree here. von Neumann was not always right: he was somewhat
wrong about Fortran and things like word processing (another waste),
but he is not shrugged off lightly. He was one of the smartest
mathematicians who ever lived. He would never have bothered with
something like Usenet.

And yes I considered the use of the words agree or disagree, and I
definitely chose the latter. ;^)

--
From: Eugene Miya on
>> Hmmm, no.
>> I've worked on several benchmark teams and have to sweat various issues.
>> If you want brutal, non-trivial slow problems outside cryptanalysis,
>> I can think of many. A favorite might be Golomb rulers.
>> No floating point. Can fit in memory.

In article <1162514526.832733.62140(a)h54g2000cwb.googlegroups.com>,
BDH <bhauth(a)gmail.com> wrote:
>Golomb rulers are already found with massively parallel computation. I
>guess I'm not sure what you mean.

Not very far. Not many. The parallelism is quite bounded.
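Golomb rulers make the bounded-parallelism point concrete: the search is
integer-only and memory-light, but the search tree is irregular. A
brute-force sketch (the function names and the length cutoff are mine;
real searches use far better pruning):

```python
from itertools import combinations

# A Golomb ruler is a set of integer marks whose pairwise differences
# are all distinct.
def is_golomb(marks):
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))

# Exhaustive search for a shortest ruler with `order` marks; feasible
# only for tiny orders, which is the point about irregular search trees.
def shortest_ruler(order, max_len=30):
    for length in range(1, max_len + 1):
        for inner in combinations(range(1, length), order - 2):
            marks = (0,) + inner + (length,)
            if is_golomb(marks):
                return marks
    return None

print(shortest_ruler(4))  # (0, 1, 4, 6), a known optimal order-4 ruler
```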

>> On the contrary what von Neumann wrote was a very good simple system for
>> his time.
>
>Fine, it's unfair to blame him like that.
>
>Some people definitely have a higher opinion of him than I do though.

Well, few of us will ever write a book like his game theory book, some of
his papers, his work at the Manhattan Project and on the H-bomb, etc.

Much less have his influence on computers. Others attempted overly
complex designs. You don't hear much about them.

>> I recall the special APL character set
>> (the Culler-Fried keyboard was little better and it was QWERTY).
>> I extracted from an officemate involved with the CDC Star-100 that the
>> direction of evaluation was a poor choice.
>
>But its influence died off too. Why?

APL or the Star?

The Star probably died mostly because it was a system which lacked balance.
Its vector pipeline could not compensate for the poor scalar performance.
It took too long to learn how to use at the compiler level, and
its software came so late that it destroyed the careers and psyches of
people assigned to work with it during the Cold War. And that's an overly
simple way of saying it. I never had to use the machine. I know and
have interviewed people who did.


>> Part of the problem which kills simple languages like these is the
>> kinds of numeric applications which have volumetric and border regions
>> (edges, faces, vertices, and hyper-structures in 4 and higher dimensions)
>> which are exception cases which then go to irregular geometries, etc.
>
>That can be turned into arrays too.

Oh, they are arrays, no question, but few outside certain communities
can appreciate good gather-scatter hardware and operations.
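For readers outside those communities, gather-scatter in its simplest
software form; vector hardware does each of these as a single operation
over an index vector. The data below is illustrative.

```python
# gather: collect irregularly placed elements into a dense vector;
# scatter: write a dense vector back to irregular locations in place.
def gather(src, idx):
    return [src[i] for i in idx]

def scatter(dst, idx, values):
    for i, v in zip(idx, values):
        dst[i] = v

a = [10, 20, 30, 40, 50]
boundary = [0, 2, 4]          # e.g. the irregular boundary points
dense = gather(a, boundary)   # [10, 30, 50]
scatter(a, boundary, [v + 1 for v in dense])
print(a)                      # [11, 20, 31, 40, 51]
```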


Anyways. I'm outta here.



--
From: BDH on
> Do VLIW type techniques (plausibly) scale

I say sure. SIMD can help.

> Isn't communication and memory infrastructure still the
> main issue/concern?

Yes. For everything. But it's not so hard to devise different
addressing schemes that can fix that.

From: Richard on
[Please do not mail me a copy of your followup]

eugene(a)cse.ucsc.edu (Eugene Miya) spake the secret code
<454a981f$1(a)darkstar> thusly:

>> link to the paper?
>
>Just easier for me to read it in and cut.

Thanks! It was a nice read.
--
"The Direct3D Graphics Pipeline" -- DirectX 9 draft available for download
<http://www.xmission.com/~legalize/book/download/index.html>