From: David L. Craig on
On Jul 20, 2:49 pm, Robert Myers <rbmyers...(a)gmail.com> wrote:

> There is always IBM, of course[...]

Ah, yes, for over a hundred years so far, anyway. ;-)
But do you mention them as the designers of AIX-POWER,
OS/400-iSeries, whatever-x86, or big iron? I have noticed
the x86 boxes have always been trying to catch up with the
mainframes, but the gap really doesn't change much.

> I doubt if mass-market x86 hypervisors ever crossed the
> imagination at IBM, even as the barbarians were at the
> gates.

You'd be wrong. A lot of IBMers and customer VMers were
watching what Intel was going to do with the 80386's next
generations to support machine virtualization. While
Intel claimed it was coming, by mainframe standards they
showed they just weren't serious. Not only can x86 not
fully virtualize itself, it has known design flaws that
can be exploited to compromise the integrity of its
guests and the hypervisor. That it is used widely as a
consolidation platform boggles the minds of those in the
know. We're waiting for the eventual big stories.
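
For the curious, the textbook demonstration is a sensitive-but-
unprivileged instruction such as SIDT, which executes in ring 3
without trapping. A minimal sketch, assuming classic (pre-VT) 32-bit
x86 and GCC inline assembly:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    unsigned char idtr[6];   /* 16-bit limit + 32-bit base on IA-32 */
    uint32_t base;

    /* SIDT reveals privileged state (the IDT base) yet never traps
     * in user mode, so no trap-and-emulate hypervisor can intercept
     * or virtualize it -- the Popek-Goldberg requirement fails.    */
    __asm__ __volatile__ ("sidt %0" : "=m" (idtr));
    memcpy(&base, idtr + 2, 4);

    /* Historically, a relocated (suspiciously high) base was the
     * tell-tale that a software hypervisor was underneath you.     */
    printf("IDT base: 0x%08x\n", (unsigned)base);
    return 0;
}

Since the instruction can't be trapped, the hypervisor can neither
hide itself nor protect its own descriptor tables from nosy guests.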

> Also, to be fair to markets, the cost-no-object
> exercises the government undertook even after
> those early 90's memos delivered almost nothing of
> any real use.  Lots of money has been squandered on
> some really dumb ideas.

> Moving the discussion to some place slightly less
> visible than comp.arch might not produce more
> productive flights of fancy, but I, for one, am
> interested in what is physically possible [...].

Some ideas are looking to be not so dumb; e.g., quantum
computing. I wonder what JVN would make of them if he were
still around. I suspect it's hard to get more blue-sky,
while still physically possible, than those beasties.
From: Robert Myers on
On Jul 20, 5:31 pm, "David L. Craig" <dlc....(a)gmail.com> wrote:
> On Jul 20, 2:49 pm, Robert Myers <rbmyers...(a)gmail.com> wrote:
>
> > There is always IBM, of course[...]
>
> Ah, yes, for over a hundred years so far, anyway. ;-)
> But do you mention them as the designers of AIX-POWER,
> OS/400-iSeries, whatever-x86, or big iron?

I'm thinking of IBM as the general contractor for, say, Blue Waters.
The CPU will be POWER7, but the OS will apparently be Linux. My
assumption is that, as a matter of national policy, the US government
wants to keep IBM as the non-x86 option.

> I have noticed
> the x86 boxes have always been trying to catch up with the
> mainframes, but the gap really doesn't change much.
>
> > I doubt if mass-market x86 hypervisors ever crossed the
> > imagination at IBM, even as the barbarians were at the
> > gates.
>
> You'd be wrong.  A lot of IBMers and customer VMers were
> watching what Intel was going to do with the 80386's next
> generations to support machine virtualization.  While
> Intel claimed it was coming, by mainframe standards they
> showed they just weren't serious.  Not only can x86 not
> fully virtualize itself, it has known design flaws that
> can be exploited to compromise the integrity of its
> guests and the hypervisor.  That it is used widely as a
> consolidation platform boggles the minds of those in the
> know.  We're waiting for the eventual big stories.
>

Well, *I* never thought they were serious. I assumed that, if
virtualization other than a VMware-type hack ever came to Intel, it
would be a feature of IA-64, where virtualization had presumably
been penciled in from the beginning.

I'm waiting for the big stories, too. At this point, building secure
systems is surely a bigger national priority than having the most
flops.

> > Also, to be fair to markets, the cost-no-object
> > exercises the government undertook even after
> > those early 90's memos delivered almost nothing of
> > any real use.  Lots of money has been squandered on
> > some really dumb ideas.
> > Moving the discussion to some place slightly less
> > visible than comp.arch might not produce more
> > productive flights of fancy, but I, for one, am
> > interested in what is physically possible [...].
>
> Some ideas are looking to be not so dumb; e.g., quantum
> computing.  I wonder what JVN would make of them if he were
> still around.  I suspect it's hard to get more blue-sky,
> while still physically possible, than those beasties.

Maybe quantum entanglement is the answer to moving data around.

Robert.

From: Andrew Reilly on
On Tue, 20 Jul 2010 11:49:03 -0700, Robert Myers wrote:

> (90%+ efficiency for Linpack, 10% for anything even slightly more
> interesting).

Have you, or anyone else here, ever read any studies of the sensitivities
of the latter performance figure to differences in interconnect bandwidth/
expense? I.e., does plugging another fat IB tree into every node in
parallel, doubling the cross-section bandwidth, raise the second figure to
20%?

Is 10% (of peak FP throughput, I would guess) really representative of
the real production code used by the typical buyer of these large-scale
HPC systems? I'm not counting the build-it-and-they-will-come
installations at places like universities, but the built-to-solve-
problem-X ones at places like oil companies, weather forecasters and
(I guess) weapons labs. I don't work in any of those kinds of
environments, so I
don't know anything about the code that they run.

Would moving that efficiency number higher be better than making
10%-efficiency machines less expensive?

Cheers,

--
Andrew
From: Edward Feustel on
On Tue, 20 Jul 2010 08:31:46 -0700, Andy Glew <"newsgroup at
comp-arch.net"> wrote:


>Me, I'm just the MLP guy: give me a certain number of channels and
>bandwidth, I try to make the best use of them. MLP is one of the ways
>of making more efficient use of whatever limited bandwidth you have. I
>guess that's my mindset - making the most of what you have. Not because
>I don't want to increase the overall memory bandwidth. But because I
>don't have any great ideas on how to do so, apart from
> a) More memory channels
> b) Wider memory channels
> c) Memory channels/DRAMs that handle short bursts/high address
>bandwidth efficiently
> d) DRAMs with a high degree of internal banking
> e) aggressive DRAM scheduling
> Actually, c, d, e are really ways of making more efficient use of
>bandwidth, i.e. preventing pins from going idle because the burst length
>is giving you a lot of data you don't want.
> f) stacking DRAMs
> g) stacking DRAMs with an interface chip such as Tom Pawlowski of
>Micron proposes, and a new abstract DRAM interface, enabling all of the
>good stuff above but keeping DRAM a commodity
> h) stacking DRAMs with an interface chip and a processor chip (with
>however many processors you care to build).
>
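Since MLP may be unfamiliar, here is what exploiting it looks like
from the software side -- a purely illustrative C sketch, not
anyone's real benchmark. One dependent pointer chase exposes a
single miss at a time; interleaving several independent chains lets
the memory system overlap the misses:

#include <stddef.h>
#include <stdlib.h>

/* One node per cache line; 64-byte lines assumed for illustration. */
typedef struct node { struct node *next; char pad[56]; } node;

/* A single dependent chain: each load needs the previous result, so
 * at most one miss is outstanding -- MLP of 1.                     */
size_t chase(node *p, size_t steps)
{
    size_t n = 0;
    while (steps--) { p = p->next; n++; }
    return n + (p != NULL);          /* keep the pointer live */
}

/* K independent chains: the K loads per pass do not depend on one
 * another, so an out-of-order core can have K misses in flight.    */
#define K 8
size_t chase_many(node *p[K], size_t steps)
{
    size_t n = 0;
    while (steps--)
        for (int i = 0; i < K; i++) {
            p[i] = p[i]->next;
            n++;
        }
    return n + (p[0] != NULL);
}

int main(void)
{
    enum { N = 1 << 20 };
    node *pool = malloc(N * sizeof *pool);
    for (size_t i = 0; i < N; i++)          /* trivial ring; a real */
        pool[i].next = &pool[(i + 1) % N];  /* test would randomize */
    node *heads[K];
    for (int i = 0; i < K; i++)
        heads[i] = &pool[(size_t)i * (N / K)];
    size_t a = chase(&pool[0], N);
    size_t b = chase_many(heads, N / K);
    free(pool);
    return (int)((a + b) & 1);   /* defeat dead-code elimination */
}

The second loop is what keeps the pins from going idle even when
every access misses -- making the most of what you have, as Andy
says.
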
It is interesting what each of us thought the original poster was
interested in. I was intrigued by the notion of higher-bandwidth
inter-processor communication à la the CM-2, not just higher bandwidth
à la the STAR-100 or the various Crays. Our use of many processors would
appear to cry out for this.

What about "higher-level constructs" that permit processors to "know"
what they are trying to "obtain"/"give" and that permit the processors
to overlap/schedule operations on things that are larger than a few
bytes? I realize this takes more gates, but we appear to have gates
to spare.
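
To make that concrete, here is one shape such a construct might
take -- a split-phase bulk get, sketched in C with invented names
(emphatically not any real library's API):

#include <stdio.h>
#include <stdlib.h>

/* The requesting processor states *what* it wants -- a whole remote
 * object, not a byte at a time -- receives a handle, and overlaps
 * other work until the data is actually needed.                    */
typedef struct { void *dst; size_t len; int done; } xfer_handle;

/* Begin fetching len bytes of remote object obj_id from node into
 * dst, returning at once.  This stub merely records the request; a
 * real implementation would post a DMA/RDMA descriptor.            */
xfer_handle *bulk_get_begin(int node, int obj_id, void *dst, size_t len)
{
    xfer_handle *h = malloc(sizeof *h);
    h->dst = dst; h->len = len; h->done = 0;
    (void)node; (void)obj_id;    /* would name the remote object */
    return h;
}

/* Block only when the data is needed (the stub pretends it came). */
void bulk_get_end(xfer_handle *h)
{
    h->done = 1;
    free(h);
}

int main(void)
{
    double buf[1024];
    xfer_handle *h = bulk_get_begin(3, 42, buf, sizeof buf);
    /* ... overlap: compute on data we already hold ... */
    bulk_get_end(h);             /* now safe to read buf[] */
    printf("fetched %zu bytes\n", sizeof buf);
    return 0;
}

The handle is where the extra gates would go: between begin and end
the hardware is free to schedule, combine, or reorder the transfer
however it likes.
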
Ed
From: nmm1 on
In article <8ant0rFf0gU1(a)mid.individual.net>,
Andrew Reilly <areilly---(a)bigpond.net.au> wrote:
>On Tue, 20 Jul 2010 11:49:03 -0700, Robert Myers wrote:
>
>> (90%+ efficiency for Linpack, 10% for anything even slightly more
>> interesting).
>
>Have you, or anyone else here, ever read any studies of the sensitivities
>of the latter performance figure to differences in interconnect bandwidth/
>expense? I.e., does plugging another fat IB tree into every node in
>parallel, doubling the cross-section bandwidth, raise the second figure to
>20%?

A little, and I have done a bit of testing. It does help, sometimes
considerably, but the latency is at least as important as the bandwidth.
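
The back-of-envelope alpha-beta model shows why. If a message of n
bytes costs roughly alpha + n/B, doubling B attacks only the second
term and does nothing for alpha. A minimal sketch in C, with purely
made-up numbers:

#include <stdio.h>

int main(void)
{
    double t_comp = 1.0e-3;  /* compute time per step, seconds    */
    double alpha  = 2.0e-6;  /* per-message latency, seconds      */
    double n      = 8192.0;  /* message size, bytes               */

    for (double B = 1e9; B <= 8e9; B *= 2) {      /* bytes/second */
        double t_comm = 1000 * (alpha + n / B);   /* 1000 msgs    */
        double eff = t_comp / (t_comp + t_comm);
        printf("B = %4.0f GB/s: efficiency = %4.1f%%\n",
               B / 1e9, 100 * eff);
    }
    return 0;
}

With these invented parameters, doubling bandwidth from 1 GB/s
lifts efficiency only from about 9% to 14%, and each further
doubling helps less as the latency term comes to dominate.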

>Would moving that efficiency number higher be better than making
>10%-efficiency machines less expensive?

It's marginal. The real killer is the number of programs where even
a large improvement would allow only a small increase in scalability.


Regards,
Nick Maclaren.