From: Robert Myers on
On Oct 17, 9:17 pm, Andrew Reilly <andrew-newsp...(a)areilly.bpc-
users.org> wrote:

>
> There are certainly aspects of this whimsical algorithmic flexibility
> that jar: how can it possibly take a dual core computer with billions of
> instructions per second up its sleeve *many seconds* to pull up the
> "recently used documents" menu, every time?

It's because the people who write software are the smartest on the
planet. Just ask anyone in the business.

There's a story kicking around about Ballmer booting Vista to demo it
for someone and losing it Ballmer-style as the machine sat there
forever without giving a clue as to what it was doing.

To be fair, those gorgeously-expensive graphics are about the only
thing computers have left going for them as far as the average
customer is concerned. That's what the customer sees in the store,
and that's what the customer buys. Nothing else works, really: not
security, not reliability, not response time, not usability.

The only thing that would really make any difference is if computers
really could act intelligent: knowing, for example, that they might
get away with the "Recent Documents" foul-up once or twice but not
time after time, and inventing a shortcut around the most general case
that works well enough most of the time. Getting a factor of two
through hardware is hard. Gobbling it up several times over in
spaghetti code is easy.
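
Something as crude as the sketch below might even do: cache the slow
scan the first time it runs and serve the cached answer until it goes
stale. The names and the one-minute policy are invented for
illustration; I am not claiming this is how any real shell's menu code
works.

/* Hypothetical sketch: memoize a slow "recent documents" scan.
   scan_recent_documents() is a stand-in for whatever the real slow
   path is. */
#include <string.h>
#include <time.h>

#define MAX_RECENT 16
#define PATH_LEN   260

static char   cache[MAX_RECENT][PATH_LEN];
static size_t cache_count;
static time_t cache_time;

/* Stub for the expensive directory/registry scan. */
static size_t scan_recent_documents(char out[][PATH_LEN], size_t max)
{
    (void)out;
    (void)max;
    return 0;
}

size_t recent_documents(char out[][PATH_LEN], size_t max)
{
    const time_t now = time(NULL);

    /* Hit the slow path once, then serve the cached answer until it
       is more than a minute old. */
    if (cache_time == 0 || difftime(now, cache_time) > 60.0) {
        cache_count = scan_recent_documents(cache, MAX_RECENT);
        cache_time  = now;
    }
    if (max > cache_count)
        max = cache_count;
    memcpy(out, cache, max * sizeof cache[0]);
    return max;
}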

Robert.
From: jacko on

> The only thing that would really make any difference is if computers
> really could act intelligent: knowing, for example, that they might
> get away with the "Recent Documents" foul-up once or twice but not
> time after time, and inventing a shortcut around the most general case
> that works well enough most of the time. Getting a factor of two
> through hardware is hard. Gobbling it up several times over in
> spaghetti code is easy.
>
> Robert.

This does imply that languages which forestall these software bad
days are the best place to put research. I think method-local write
variables, with only one write point and many read points, are the way
to go. Yes, spaghetti languages are long in the tooth.
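
Roughly this sort of thing, sketched in C with nothing fancier than
const locals; the function is just an invented example.

/* Sketch: each local has exactly one write point (const makes the
   compiler enforce it) and any number of read points. */
#include <stdio.h>

static double fahrenheit(double celsius)
{
    const double scaled = celsius * 9.0 / 5.0;  /* written once */
    const double result = scaled + 32.0;        /* written once, reads scaled */
    return result;
}

int main(void)
{
    const double boiling = fahrenheit(100.0);        /* written once */
    printf("100 C = %.1f F\n", boiling);             /* read */
    printf("half that is %.1f F\n", boiling / 2.0);  /* read again */
    return 0;
}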

cheers jacko
From: Daniel A. Jimenez on
In article <ggtgp-1A606D.17063817102009(a)netnews.asp.att.net>,
Brett Davis <ggtgp(a)yahoo.com> wrote:
>In article <hb86g3$fo6$1(a)apu.cs.utexas.edu>,
> djimenez(a)cs.utexas.edu (Daniel A. Jimenez) wrote:
>> MIT built RAW and UT Austin built TRIPS. These are really weird
>> architectures and microarchitectures that could be very influential
>> for future processors.
>
>I tried googling "MIT RAW" and "UT Austin TRIPS" and got no hits, could
>you find some links, there are a bunch of comp.arch readers that would
>love to learn more.

I think your Google is broken.

The first hit for "MIT RAW" on Google for me is this:
http://groups.csail.mit.edu/cag/raw/

The first hit for "UT Austin TRIPS" on Google for me is this:
http://www.cs.utexas.edu/~trips/

These are exactly the right pages to begin learning about these
projects.
--
Daniel Jimenez djimenez(a)cs.utexas.edu
"I've so much music in my head" -- Maurice Ravel, shortly before his death.
" " -- John Cage
From: Robert Myers on
On Oct 18, 7:17 am, jacko <jackokr...(a)gmail.com> wrote:

>
> This does imply that languages which forbode the software bad days are
> the best place to put research. I think method local write variables,
> with only one write point, and many read points. Yes spagetti
> languages are long in the tooth

Much as I loathe c, I don't think it's the problem.

I *think* the problem is that modern computers and OS's have to cope
with so many different things happening asynchronously that writing
good code is next to impossible, certainly using any of the methods
that anyone learned in school.

It's been presented here as a new problem with widely-available SMP,
but I don't think that's correct. Computers have always been hooked
to nightmares of concurrency, if only in the person of the operator.
As we've come to expect more and more from that tight but impossible
relationship, things have become ever more challenging and clumsy.

All that work that was done in the first six days of the history of
computing was aimed at doing the same thing that human "computers"
were doing: calculating the trajectories of artillery shells. Leave
the computer alone, and it can still manage that sort of very
predictable calculation tolerably well.

Even though IBM and its camp-followers had to learn early how to cope
with asynchronous events ("transactions"), they generally did so by
putting much of the burden on the user: if you didn't talk to the
computer in just exactly the right way at just exactly the right time,
you were ignored.

Even the humble X window system, to pick an example other than
Windows, contemplates an interaction that would at one time have been
unimaginable in its expected flexibility and tolerance for
unpredictability, and the way X often works in practice shows it.

In summary:

1. The problem is built into what we expect from computers. It is
not a result of multi-processing.

2. No computer language that I am aware of would make a noticeable
difference.

3. Nothing will get better until people start operating on the
principle that the old ideas never were good enough and never will be.

Eugene will tell me that it's easy to take pot shots. Well, maybe it
is. It's also easy to keep repeating the same smug but inadequate
answers over and over again.

Robert.
From: Brett Davis on
In article <hbf01b$9q1$1(a)fio.cs.utexas.edu>,
djimenez(a)cs.utexas.edu (Daniel A. Jimenez) wrote:

> In article <ggtgp-1A606D.17063817102009(a)netnews.asp.att.net>,
> Brett Davis <ggtgp(a)yahoo.com> wrote:
> >In article <hb86g3$fo6$1(a)apu.cs.utexas.edu>,
> > djimenez(a)cs.utexas.edu (Daniel A. Jimenez) wrote:
> >> MIT built RAW and UT Austin built TRIPS. These are really weird
> >> architectures and microarchitectures that could be very influential
> >> for future processors.
> >
> >I tried googling "MIT RAW" and "UT Austin TRIPS" and got no hits, could
> >you find some links, there are a bunch of comp.arch readers that would
> >love to learn more.
>
> I think your Google is broken.

I obviously did a news search instead of a web search. My bad.

> The first hit for "MIT RAW" on Google for me is this:
> http://groups.csail.mit.edu/cag/raw/

Lots of broken links, and TRIPS looks better anyway.

> The first hit for "UT Austin TRIPS" on Google for me is this:
> http://www.cs.utexas.edu/~trips/

Some nice info on the architecture, and comparison chart with MIT RAW
and others:
http://www.cs.utexas.edu/users/cart/trips/publications/computer04.pdf

Results with a real uber chip:
http://www.cs.utexas.edu/users/cart/trips/publications/micro06_trips.pdf

No papers since 2006, no real benefit on most (serial) code, and the
nice speedups were on code that has since been rewritten to use x86
vector instructions, which give much the same gains.
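
By "rewritten to use x86 vector instructions" I mean the usual sort of
SSE rewrite, something along the lines of this rough sketch (the
function is invented for illustration; the real kernels are hairier):

/* Sketch: adding two float arrays with SSE, four lanes per add. */
#include <xmmintrin.h>

void vadd(float *dst, const float *a, const float *b, int n)
{
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        const __m128 va = _mm_loadu_ps(a + i);   /* load 4 floats */
        const __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(dst + i, _mm_add_ps(va, vb));
    }
    for (; i < n; i++)                           /* scalar remainder */
        dst[i] = a[i] + b[i];
}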

Does not appear to have a die size advantage, could in fact be at a
large disadvantage, and could also have a heat disadvantage.

Adding vector units to the 16 ALUs to make it competitive again is a
non-starter due to space and heat.

Worthy of more research, but still roadkill under the wheels of the
killer micros, much less something an order of magnitude faster, like an
ATI chip.

I would go with an ATI chip and its 1600 vector pipes instead as the
roadmap to the future for mass computation.

Cool info though, TRIPS is the first modern data flow architecture I
have looked at. Probably the last as well. ;(

Brett