From: kenney on
In article <K_LBm.6106$D95.3011(a)newsfe22.ams2>, meru(a)devnull.com
(ChrisQ) wrote:

> so I naturally wonder,
> what's happened in the meantime?

Rant mode on

Software bloat. The programs I use, except for games, have not become
visibly faster since my first PC. They have developed a lot more bells
and whistles, but they have not got faster. DOS could run programs and a
GUI (GEM) in 512 KB of memory. Windows 3.1 would run in 1 MB, though it
needed 4 MB for maximum performance; I understand that Windows 7 has a
minimum requirement of 2 GB. Just about all of the increases in hardware
speed have been used to run more elaborate software at the same speed.

Rant mode off.

Ken Young
From: Bernd Paysan on
nmm1(a)cam.ac.uk wrote:
> People knew how to build a functional steam locomotive in Hero's
> day - they didn't have the technological base to do it.

Most of the ancient technology that was built was more a sort of "toy",
perhaps meant to impress people, but not intended to be useful. You had
slaves for the useful work. Machines were "magic"¹. Hero of Alexandria
obviously didn't really know how to build a functional steam locomotive,
but he knew how to build nice toys. My father assembled an aeolipile
out of a used ewer and an old bicycle wheel a few years ago - it's fun
to watch it slowly spinning around (the kindergarten-aged children in
the neighborhood like it), but it's so grossly inefficient that you
can't use the power for anything useful.

The breakthrough for the steam engine came when it could provide more
power per unit of cost than humans or animals; this is why many people
still measure engines in HP. The crude early steam engines in China were
even used to power a vehicle, but they weren't a success there either,
so I guess they were not cost-efficient. Water mills were cost-efficient,
and they were successes both in ancient China and in Europe.

¹) It's like opening up an ancient Chinese "paddle steamer" and
discovering that inside there were just people with pedals turning the
paddle wheels, and no steam engine ;-).

--
Bernd Paysan
"If you want it done right, you have to do it yourself"
http://www.jwdt.com/~paysan/
From: Brett Davis on
In article <hb86g3$fo6$1(a)apu.cs.utexas.edu>,
djimenez(a)cs.utexas.edu (Daniel A. Jimenez) wrote:

> In article <FZFBm.15$sl7.11(a)newsfe18.ams2>, ChrisQ <meru(a)devnull.com> wrote:
> [with deletions]
> >> Instruction set architecture: multi-media extensions
> >> micro-architecture: 2-bit branch prediction
> >
> >Yes, but ultimately boring and really just rearranging the deck chairs.
>
> Sorry, can't let that one go. There have been tremendous improvements in
> branch prediction accuracy from the late eighties to today. Without
> highly accurate branch prediction, the pipeline is filled with too many
> wrong path instructions so it's not worth going to deeper pipelines.
> Without deeper pipelines we don't get higher clock rates. So without
> highly accurate branch predictors, clock rates and performance would be
> much worse than they are today. If we hadn't hit the power wall in the
> early 2000s we would still be improving performance through better branch
> prediction and deeper pipelines.
>
> Trace cache is another more-or-less recent microarchitectural innovation
> that allowed Pentium 4 to get away with decoding one x86 instruction
> per cycle and still have peak IPC greater than 1.
>
> Cracking instructions into micro-ops, scheduling the micro-ops, then fusing
> the micro-ops back together in a different way later in the pipeline allows
> an effectively larger instruction window and more efficient pipeline.
> That's a relatively recent innovation, too.
>
> History-based memory schedulers are another recent innovation that
> promises to improve performance significantly.
>
> MIT built RAW and UT Austin built TRIPS. These are really weird
> architectures and microarchitectures that could be very influential
> for future processors.

I tried googling "MIT RAW" and "UT Austin TRIPS" and got no hits. Could
you post some links? There are a bunch of comp.arch readers who would
love to learn more.

> Not to mention network processors and GPUs. See Hot Chips proceedings
> for more examples of microarchitectural innovation in real chips, and
> ISCA/MICRO/HPCA for more speculative stuff.
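
As an aside, the "2-bit branch prediction" mentioned up-thread is nothing
exotic: it is just a table of saturating counters indexed by branch
address. Here is a minimal sketch in C; the table size, index hash, and
names are arbitrary choices for illustration, not any particular
machine's design.

#include <stdint.h>
#include <stdbool.h>

/* 2-bit saturating-counter branch predictor (illustrative sketch).
 * Each counter holds 0..3: 0-1 predict not taken, 2-3 predict taken. */
#define BPRED_ENTRIES 4096                    /* arbitrary power of two */

static uint8_t bpred_table[BPRED_ENTRIES];    /* counters start at 0 */

static unsigned bpred_index(uint32_t branch_pc)
{
    /* drop the byte offset, then mask down to the table size */
    return (branch_pc >> 2) & (BPRED_ENTRIES - 1);
}

/* Predict taken if the counter is in the upper half of its range. */
bool bpred_predict(uint32_t branch_pc)
{
    return bpred_table[bpred_index(branch_pc)] >= 2;
}

/* Update: saturate towards 3 on taken, towards 0 on not taken, so a
 * single anomalous outcome does not flip a strongly biased branch. */
void bpred_update(uint32_t branch_pc, bool taken)
{
    uint8_t *ctr = &bpred_table[bpred_index(branch_pc)];
    if (taken) {
        if (*ctr < 3) (*ctr)++;
    } else {
        if (*ctr > 0) (*ctr)--;
    }
}

The hysteresis is the whole point: a loop-closing branch that is taken
hundreds of times and then falls through once stays "strongly taken", so
it is mispredicted only at the loop exit rather than twice per execution
of the loop, which is what a single history bit would give you.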
From: Robert Myers on
On Oct 17, 6:06 pm, Brett Davis <gg...(a)yahoo.com> wrote:

>
> > MIT built RAW and UT Austin built TRIPS.  These are really weird
> > architectures and microarchitectures that could be very influential
> > for future processors.
>
> I tried googling "MIT RAW" and "UT Austin TRIPS" and got no hits. Could
> you post some links? There are a bunch of comp.arch readers who would
> love to learn more.

google

stream processor mit raw

http://groups.csail.mit.edu/cag/raw/documents/

I know less about TRIPS, but a Google search for

stream processor trips darpa

yields tons of stuff.

Robert.

From: Andrew Reilly on
On Sat, 17 Oct 2009 08:42:03 -0500, kenney wrote:

> In article <K_LBm.6106$D95.3011(a)newsfe22.ams2>, meru(a)devnull.com
> (ChrisQ) wrote:
>
>> so I naturally wonder,
>> what's happened in the meantime?
>
> Rant mode on
>
> Software bloat. The programs I use, except for games, have not become
> visibly faster since my first PC. They have developed a lot more bells
> and whistles, but they have not got faster. DOS could run programs and a
> GUI (GEM) in 512 KB of memory. Windows 3.1 would run in 1 MB, though it
> needed 4 MB for maximum performance; I understand that Windows 7 has a
> minimum requirement of 2 GB. Just about all of the increases in hardware
> speed have been used to run more elaborate software at the same speed.
>
> Rant mode off.

Yeah: and how much more powerful is my phone than my first graphical
Unix workstation, which I used to do real work? (Lots.)

I know that's a popular rant: I've even given it myself from time to
time. Isn't it the case, though, that for most of that "popular
software" speed is a non-issue? Either a given operation is "fast
enough" (and that's not hard to achieve when the limitations are those
of the hands, eyes, and brain of the user), or it's "not fixable by
software": network and server latencies, disk access latencies, and so on.

So competition and Moore's law have given us faster and faster
computers, with higher and higher resolution screens, and graphics
systems with hardware assist for various basic operations (including 3D
texture mapping). Is it any surprise that the fixed response-time budget
gets burned by rendering *beautifully* kerned, arbitrarily scalable
text, rather than blatting out the single fixed-width system font? Or
that various program and desktop icons are lovingly alpha-blended from
scalable vector graphic representations, rather than composed from a
fixed-resolution, four-bit-colour palette?
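
For concreteness, compositing one translucent pixel over another costs a
multiply-add per colour channel, where the old palettised blit was a
single lookup and store. A rough sketch in C; the 8-bit channels,
rounding, and names are chosen purely for illustration.

#include <stdint.h>

/* "Source over destination" alpha blend for one 8-bit channel.
 * alpha = 255 means the source pixel is fully opaque. */
static uint8_t blend_channel(uint8_t src, uint8_t dst, uint8_t alpha)
{
    return (uint8_t)((src * alpha + dst * (255 - alpha) + 127) / 255);
}

/* The old-school palettised path was just: framebuffer[i] = palette_index; */

Do that for three or four channels per pixel, for every pixel of every
anti-aliased glyph and icon, and it is easy to see where the cycles go.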

There are certainly aspects of this whimsical algorithmic flexibility
that jar: how can it possibly take a dual-core computer with billions of
instructions per second up its sleeve *many seconds* to pull up the
"recently used documents" menu, every time? (Unless, of course, the
whimsical algorithm that performs that function operates by starting up
all of the applications so that it can ask them, or similar dumbness.)

Cheers,

--
Andrew