From: jacko on
On 15 Oct, 05:44, Jean <alertj...(a)rediffmail.com> wrote:
> In the last couple of decades, the exponential increase in computer
> performance came from advancements in both computer architecture and
> fabrication technology.
> What will be the case in the future? Can I comment that the next major
> leap in computer performance will come not from breakthroughs in
> computer architecture but rather from new underlying technology?

After having designed an instruction set, I think there are some
architectural improvements still to be made, but they are few. Most of
my research is now in mathematics.

A few of the architecture and fab improvements I see happening are:
1. DISCO-FETs for faster lower power switching.
2. A simpler instruction set.
3. Multiple cores set up as an execution ring, with memory splitting and
cache splitting.
4. New number systems.

cheers jacko
http://sites.google.com/site/jackokring
From: Mayan Moudgill on
ChrisQ wrote:

> Yes, but ultimately boring and really just rearranging the deck chairs.
> Compared to the 70's and 80's the pace of development is essentially
> static.

Name one feature introduced in the 80s which you consider not
"rearranging the deck chairs". For extra credit identify the
architecture in the 50s and 60s which first used this feature (or a
variant).
From: ChrisQ on
Mayan Moudgill wrote:

>
> Name one feature introduced in the 80s which you consider not
> "rearranging the deck chairs". For extra credit identify the
> architecture in the 50s and 60s which first used this feature (or a
> variant).

No credits to gain, sorry. I'm not a computer architect, but I do have a
general interest, and I design hardware and write software around
architecture. My comment about progress has more to do with performance
gains, apparent new directions, and the willingness to take risks in
computing, all of which seemed very significant in the 70's and 80's.
The 3-year-old Xeon machine on the desktop here seems not much faster
than the 10+ year old 1997 Alpha box recently retired, so I naturally
wonder what has happened in the meantime. All the hardware parts look
familiar, even down to the DOS-ish BIOS and PCI slots, when one would
have expected to see something very different after 10 to 15 years of
'progress'. OK, we have PCI Express, more and faster memory, and dual
CPUs with heatsinks filling half the box, but what else has changed?
Of course, this is not particularly scientific, but it seems a valid
point of view for a user of casual interest, and it suggests that there
are other forces at work. I'm sure that there is, as usual, no shortage
of good ideas.

It seems to me that the barriers to progress are as much cultural as
commercial. In the 60's the US put men on the moon, and the attitudes
that allowed that to happen are being lost by an aging western
civilisation that has become far too complacent, safe, and risk-averse.
All this trickles down and becomes pervasive. Add to that the
monopolisation of the architectural gene pool, and I'm not expecting
much to happen any time soon...

Regards,

Chris

From: Daniel A. Jimenez on
In article <FZFBm.15$sl7.11(a)newsfe18.ams2>, ChrisQ <meru(a)devnull.com> wrote:
[with deletions]
>> Instruction set architecture: multi-media extensions
>> micro-architecture: 2-bit branch prediction
>
>Yes, but utimately boring and really just rearranging the deck chairs.

Sorry, I can't let that one go. There have been tremendous improvements
in branch prediction accuracy from the late eighties to today. Without
highly accurate branch prediction, the pipeline fills with too many
wrong-path instructions, so it's not worth going to deeper pipelines.
Without deeper pipelines we don't get higher clock rates, so without
highly accurate branch predictors, clock rates and performance would be
much worse than they are today. If we hadn't hit the power wall in the
early 2000s, we would still be improving performance through better
branch prediction and deeper pipelines.
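
To make the 2-bit scheme mentioned earlier in the thread concrete, here
is a minimal sketch in C. The table size and the simple PC-modulo
indexing are arbitrary illustrative choices, not any particular
machine's design; modern predictors are far more elaborate.

    #include <stdint.h>

    #define BP_ENTRIES 4096

    /* One 2-bit saturating counter per entry; states 0-1 predict
       not-taken, states 2-3 predict taken. */
    static uint8_t bp_table[BP_ENTRIES];

    int bp_predict(uint64_t pc) {
        return bp_table[pc % BP_ENTRIES] >= 2;
    }

    /* Train toward the actual outcome, saturating at 0 and 3, so one
       anomalous branch doesn't immediately flip the prediction. */
    void bp_update(uint64_t pc, int taken) {
        uint8_t *c = &bp_table[pc % BP_ENTRIES];
        if (taken && *c < 3) (*c)++;
        if (!taken && *c > 0) (*c)--;
    }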

Trace cache is another more-or-less recent microarchitectural innovation
that allowed the Pentium 4 to get away with decoding one x86 instruction
per cycle and still have a peak IPC greater than 1.
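
One way to picture the idea, as a hedged sketch in C (the structure
names and sizes here are hypothetical, not the Pentium 4's actual
organization): lines hold already-decoded micro-op sequences, keyed by
the fetch address plus the predicted outcomes of the branches embedded
in the trace, so a hit bypasses the x86 decoder entirely.

    #include <stdint.h>
    #include <stddef.h>

    #define TC_SETS   256
    #define TRACE_MAX 16    /* micro-ops per trace line */

    typedef struct { uint32_t opcode, dst, src1, src2; } uop_t;

    typedef struct {
        int      valid;
        uint64_t start_pc;     /* fetch address the trace begins at */
        uint32_t branch_path;  /* taken/not-taken bits of embedded branches */
        int      len;
        uop_t    uops[TRACE_MAX];
    } trace_line_t;

    static trace_line_t tcache[TC_SETS];

    /* On a hit, fetch proceeds from decoded micro-ops; on a miss
       (NULL), the front end falls back to the ordinary decode path. */
    trace_line_t *tc_lookup(uint64_t pc, uint32_t predicted_path) {
        trace_line_t *t = &tcache[(pc ^ predicted_path) % TC_SETS];
        if (t->valid && t->start_pc == pc && t->branch_path == predicted_path)
            return t;
        return NULL;
    }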

Cracking instructions into micro-ops, scheduling the micro-ops, then fusing
the micro-ops back together in a different way later in the pipeline allows
an effectively larger instruction window and a more efficient pipeline.
That's a relatively recent innovation, too.
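
As a toy illustration of the cracking step (the encodings and register
numbering are hypothetical, not any shipping core's micro-op format):
an x86-style read-modify-write add splits into three micro-ops that the
out-of-order scheduler can treat independently; a later fusion stage
could re-pair, say, the load and the add into one scheduler entry while
still executing them as separate operations.

    typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE } uop_kind;
    typedef struct { uop_kind kind; int dst, src1, src2; } uop;

    /* Crack "add [addr_reg], src_reg" into load / add / store
       micro-ops, using tmp_reg as the renamed temporary.
       Returns the micro-op count. */
    static int crack_add_mem(int addr_reg, int src_reg, int tmp_reg,
                             uop out[3]) {
        out[0] = (uop){ UOP_LOAD,  tmp_reg, addr_reg, -1      }; /* tmp = [addr] */
        out[1] = (uop){ UOP_ADD,   tmp_reg, tmp_reg,  src_reg }; /* tmp += src   */
        out[2] = (uop){ UOP_STORE, -1,      addr_reg, tmp_reg }; /* [addr] = tmp */
        return 3;
    }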

History-based memory schedulers are another recent innovation that
promises to improve performance significantly.

MIT built RAW and UT Austin built TRIPS. These are really weird
architectures and microarchitectures that could be very influential
for future processors.

Not to mention network processors and GPUs. See Hot Chips proceedings
for more examples of microarchitectural innovation in real chips, and
ISCA/MICRO/HPCA for more speculative stuff.
--
Daniel Jimenez djimenez(a)cs.utexas.edu
"I've so much music in my head" -- Maurice Ravel, shortly before his death.
" " -- John Cage
From: EricP on
Daniel A. Jimenez wrote:
> ...
> Trace cache is another more-or-less recent microarchitectural innovation
> that allowed the Pentium 4 to get away with decoding one x86 instruction
> per cycle and still have a peak IPC greater than 1.

Actually, trace cache goes back to the VAX HPS, circa 1985.
They called the decoded instruction cache a "node cache".
As far as I know, the VAX HPS was never built, though.

> Cracking instructions into micro-ops, scheduling the micro-ops, then fusing
> the micro-ops back together in a different way later in the pipeline allows
> an effectively larger instruction window and a more efficient pipeline.
> That's a relatively recent innovation, too.

Except for the fused micro-ops, this was also VAX HPS.

See

Critical Issues Regarding HPS, A High Performance Microarchitecture
Patt, Melvin, Hwu, Shebanow
MICRO-18, ACM, 1985

Eric