From: jgd on
In article <hr1ltf$pal$1(a)smaug.linux.pwf.cam.ac.uk>, nmm1(a)cam.ac.uk ()
wrote:

> I have met almost nobody in the IT business who believes that there is
> nothing left to invent, though I meet a lot who claim that great god
> Compatibility rules, and must not be challenged.

Oh, it can be challenged, all right. It's just that the required gains
from doing so are steadily increasing as the sunk costs in the current
methods grow.

> However, I have met a LOT of "computer scientists" who have claimed
> that their idea is new because it has been relabelled.

Damn right. "Innovation" has become an almost meaningless term, because
of the way marketing departments abuse it.

Harking back up-thread a little, I'm not at all sure we've reached the
level of late nineteenth-century physics. I suspect that we're in the
equivalent of the early post-Newton period, and our practical
application of computing is at about the level of Brunel's engineering.
Vastly superior to the previous era, but still quite inefficient and far
more expensive than it needs to be. How to improve it? There's the rub.

--
John Dallman, jgd(a)cix.co.uk, HTML mail is treated as probable spam.
From: "Andy "Krazy" Glew" on
On 4/24/2010 6:34 PM, Robert Myers wrote:
> On Apr 24, 9:03 pm, "nedbrek"<nedb...(a)yahoo.com> wrote:
>
>>
>> Yea, this is turning into the "End of Microarchitecture" thread :)
>>
>
> Time is running out for the "We did it all fifty years ago" types,
> anyway.
>
> It will take something like photons or quantum mechanics to make
> computer architecture interesting again, and no one knows how long we
> will have to wait, but it will happen.

What does it take to be new?

Something like photons or quantum computers will make computer hardware really interesting (I don't want to say "again",
because I think that computer architecture is still interesting). But it will lead to a big reinforcement of the "we did
it all fifty years ago" mentality, because such a new material will probably be faster but will have integration
constraints at the beginning, so people will dust off all of the old computer architecture for really device-limited
machines.

However, whenever this happens, even though old ideas are being dusted off, there is often something new. At the very
least the tradeoffs are different. But often there is some new improvement.
I may be known as a curmudgeon who often asks "What is different this time, from the last time such an idea was
proposed?" But my intention is not to kill the new idea, which may be a revival of an old idea. It is just to understand
what is different, and, ideally, to prevent old mistakes from being blindly repeated.

However, I still think that there is interesting computer architecture to be done. And I think that interesting computer
architecture is still being done right now, and has been done in the recent past.

---+++ For example, recent work that is interesting computer architecture, and that seems to me not to be just stuff
that was done 50 years ago:

* I really do think that there is something new in GPU-style SIMT / Coherent Threading. It is *not* just SIMD revived.

* In what ways has SpMT (Speculative Multithreading) been done before?

* the security project I was working on, whose cancellation led to me leaving Intel, was new, and I think valuable.
Yes, it was a revival of some old ideas; but there were some new ideas in it, that I think might make some of those old
ideas work again.

--+++ Near-future work that I think need not be just a dusting off of old ideas:

* Although there has been much, much past work on parallel programming and multiprocessors, I think there is more to be
done. I don't think, as Nick seems to think, that all of the best ways to do parallel programming are already known,
and that programmers just have to stop being lazy and practice them. I think there are new techniques that need to be
created. And I think that there is a high likelihood that some hardware support may facilitate this.

For example, I think that even MultiCore and ManyCore have not made parallel programming cheap enough. I think that we
need to get to a place where a programmer never has to weigh whether to fork a thread vs. staying sequential. He just
forks a thread. And the system, if it turns out that thread forking has too much overhead, ignores the programmer's
expression of parallelism.

I.e. I think that the biggest improvements in parallel programming systems may come in anti-parallelization --- taking
explicitly parallel code for millions of threads, and REDUCING the parallelism so that it only needs thousands of
threads. This can be done by a compiler. The hardware version of this might be to support millions of threads, but in
a lightweight threading system where such threads run concurrently, not simultaneously. Other similar techniques
include futures, etc.
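
To make the "just fork, and let the system ignore it" idea concrete, here is a rough C++ sketch. It is only
illustrative -- maybe_fork and kMaxLiveThreads are made-up names, and a real runtime (or hardware) would be much
smarter about deciding when honoring a fork is worthwhile:

#include <atomic>
#include <future>
#include <iostream>
#include <utility>
#include <vector>

static std::atomic<int> live_threads{0};
constexpr int kMaxLiveThreads = 8;   // crude stand-in for "forking costs too much"

// The programmer always calls maybe_fork(); the "runtime" decides whether
// to honor the request with a real thread or to run the work inline.
template <typename F>
std::future<void> maybe_fork(F&& work) {
    if (live_threads.fetch_add(1) < kMaxLiveThreads) {
        // Cheap enough: really fork.
        return std::async(std::launch::async,
                          [w = std::forward<F>(work)]() mutable {
                              w();
                              live_threads.fetch_sub(1);
                          });
    }
    // Too expensive: ignore the expressed parallelism, run sequentially.
    live_threads.fetch_sub(1);
    work();
    std::promise<void> done;
    done.set_value();
    return done.get_future();
}

int main() {
    std::vector<std::future<void>> tasks;
    for (int i = 0; i < 1000; ++i) {
        // The call site never weighs "fork vs. stay sequential".
        tasks.push_back(maybe_fork([i] {
            long x = 0;
            for (long j = 0; j < 100000; ++j) x += j + i;  // stand-in workload
            (void)x;
        }));
    }
    for (auto& t : tasks) t.wait();
    std::cout << "all tasks finished\n";
}

The point of the sketch is that the call site looks the same whether the work ends up on a new thread or runs inline
on the caller's thread; the decision to honor or ignore the expressed parallelism belongs to the system, not the
programmer.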

--+++ I could go on and talk about many of the standard issues - power, etc. Also, as y'all know, I'm a big fan of
asynchronous / charge recovery / AC power to chips.

---

My big concern and frustration has not been that there are no new IDEAS to work on, but that Intel and AMD, where I
worked, are not the places to work on new ideas.

Certainly not Intel and AMD's flagship processor families.

Within Intel, it looks like innovation happens outside the flagship processors. Atom has not done anything new, but I
expect that it will. AMD has been getting innovation from ATI.

I have hope that there will be innovation given the current ferment: Apple with PA-Semi, Google with Agnilux. Oracle
with Sun. ARM.

Innovation happens outside the main marketplace.

This is a pity, but it seems to be reality.
From: Quadibloc on
On Apr 24, 7:34 pm, Robert Myers <rbmyers...(a)gmail.com> wrote:
> On Apr 24, 9:03 pm, "nedbrek" <nedb...(a)yahoo.com> wrote:

> > Yea, this is turning into the "End of Microarchitecture" thread :)
>
> Time is running out for the "We did it all fifty years ago" types,
> anyway.

It was in 1969 that IBM came out with the System/360 Model 195.

That was the one with a cache _and_ a pipeline.

Just like a Pentium.

So I think time is running out for them. By the time 2019 rolls
around, what with all the problems in keeping Moore's Law going,
_something_ original will have to be achieved by computer designers of
the present age!

John Savard
From: Quadibloc on
On Apr 25, 8:04 am, Robert Myers <rbmyers...(a)gmail.com> wrote:

> So long as people are convinced that there is nothing new under the
> sun, computer-related disciplines will be less and less like science
> and more like religion, where everything important was revealed in
> ages past and the only thing left to do is to build ever more
> grandiose cathedrals.

Building even more grandiose parallel sysplex complexes (what you
young whippersnappers call symmetric multiprocessing) is indeed
unfortunate. Giving microprocessors vector capabilities lifted from
the TX-2 and the AN/FSQ-31, as if nobody had ever heard of the Cray-1,
is also unfortunate.

Decoupled microarchitecture *is* something new under the sun which
lets you have the speed of a hardwired computer and the flexibility of
a microprogrammed one.

In software, there's just-in-time compilation that has made emulation
of other architectures 95% efficient instead of 10% or 1% efficient.

And, of course, there are all the innovations in chip making that have
led to computers with six times the power of million-dollar mainframes
of the 1960s cluttering our trash bins.

But if by innovation you mean doing something completely different...
the trouble with that is that you need an *application* for the new
hardware. Thus, for example, neural nets and fuzzy logic only have a
few limited special-purpose applications.

Latency, not throughput, is the bottleneck for many types of
application. A great deal has been done to make better use of parallel
systems, but not everything is parallelizable.

The fresh idea that is really wanted is someone letting us implement
our stale old architectures with gallium nitride or indium
phosphide... or on silicon carbide so that we won't have to work as
hard to cool our ever-tinier chips... and _that_ fresh idea,
hopefully, will come before long. The kind of fresh idea you're
seeking... well, if a paradigm shift is possible, if some utterly new
and different architecture can solve our existing computing tasks more
rapidly, it will come in its own due time.

Engineering problems get solved by cracking down on them and working
hard. Major new discoveries, like the Theory of Relativity, require
the inspired visionary to see the new way that everyone else has
missed... and then to go far enough down the path to prove that the
new idea is something real and worthwhile, not just an empty
speculation.

If there is another way, someday someone will demonstrate that it
works. Until then, radically different ideas will be largely ignored
by those in the business of getting things done. What else could be
expected?

John Savard
From: Quadibloc on
On Apr 25, 9:08 am, n...(a)cam.ac.uk wrote:
> though I meet a lot who claim that great god
> Compatibility rules, and must not be challenged.

Upwards compatibility is my shepherd...

Even though I walk through the valley of upgrades,
I shall not have to buy all my software over again,
for You are with me.

John Savard