From: nmm1 on
In article <NJmdnambB9AD8knWnZ2dnUVZ8jednZ2d(a)giganews.com>,
<jgd(a)cix.compulink.co.uk> wrote:
>
>> I have met almost nobody in the IT business who believes that there is
>> nothing left to invent, though I meet a lot who claim that great god
>> Compatibility rules, and must not be challenged.
>
>Oh, it can be challenged, all right. It's just that the required gains
>from doing so are steadily increasing as the sunk costs in the current
>methods grow.

In my experience, that is almost always overstated, and very often
used as an excuse to avoid thinking out of the box. In particular,
once software runs on two hardware architectures, porting it to a
third is usually easy.
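
By way of illustration (a minimal sketch in C, with the record layout
invented for the example): it is usually the second architecture that
flushes out hidden assumptions such as byte order, integer widths and
struct padding, and once those are gone the third port mostly just
works.

    /* The kind of assumption the second port exposes.  Writing a
       struct straight to disk bakes in one machine's byte order and
       padding; serialising field by field does not. */
    #include <stdint.h>
    #include <stdio.h>

    struct record { uint32_t id; uint16_t flags; };  /* invented */

    /* non-portable: layout depends on endianness and padding */
    void save_naive(FILE *f, const struct record *r)
    {
        fwrite(r, sizeof *r, 1, f);
    }

    /* portable: explicit big-endian byte order, no padding assumed */
    void save_portable(FILE *f, const struct record *r)
    {
        unsigned char buf[6];
        buf[0] = (unsigned char)(r->id >> 24);
        buf[1] = (unsigned char)(r->id >> 16);
        buf[2] = (unsigned char)(r->id >> 8);
        buf[3] = (unsigned char)(r->id);
        buf[4] = (unsigned char)(r->flags >> 8);
        buf[5] = (unsigned char)(r->flags);
        fwrite(buf, sizeof buf, 1, f);
    }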


Regards,
Nick Maclaren.
From: nmm1 on
In article <67017c45-2e79-4791-904d-8105b509f678(a)q15g2000yqj.googlegroups.com>,
Quadibloc <jsavard(a)ecn.ab.ca> wrote:
>On Apr 25, 9:08 am, n...(a)cam.ac.uk wrote:
>> though I meet a lot who claim that great god
>> Compatibility rules, and must not be challenged.
>
>Upwards compatibility is my shepherd...
>
>Even though I walk through the valley of upgrades,
>I shall not have to buy all my software over again,
>for You are with me.

Yeah. I remember that, but then, I am approaching retirement.
Virtually no modern software (in compiled form) will survive
two changes of operating system version number, and a great
deal won't survive one.


Regards,
Nick Maclaren.
From: Robert Myers on
Andy "Krazy" Glew wrote:
> On 4/24/2010 6:34 PM, Robert Myers wrote:
>> On Apr 24, 9:03 pm, "nedbrek"<nedb...(a)yahoo.com> wrote:
>>
>>>
>>> Yea, this is turning into the "End of Microarchitecture" thread :)
>>>
>>
>> Time is running out for the "We did it all fifty years ago" types,
>> anyway.
>>
>> It will take something like photons or quantum mechanics to make
>> computer architecture interesting again, and no one knows how long we
>> will have to wait, but it will happen.
>
> What does it take to be new?

I'll know it when I see it.

Robert.
From: jgd on
In article <hr2cs0$5a1$1(a)smaug.linux.pwf.cam.ac.uk>, nmm1(a)cam.ac.uk ()
wrote:
> >Oh, [the value of compatibility] can be challenged, all right.
> >It's just that the required gains from doing so are steadily
> >increasing as the sunk costs in the current methods grow.
> In my experience, that is almost always overstated, and very often
> used as an excuse to avoid thinking out of the box. In particular,
> once software runs on two hardware architectures, porting it to a
> third is usually easy.

Perfectly true, provided that the architectures are as alike as, say,
x86, MIPS, SPARC and PowerPC are, which is really quite a lot alike.

Porting to something like Cell (using the SPEs), or MPI clustering, or
something else based on different system-architecture principles is
another matter.
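
A minimal sketch of the difference (C with MPI; the reduction and its
sizes are invented for the example): the shared-memory version is a
loop over an array that is simply there, while the clustered version
has to decide which rank owns which slice and combine the partial
results explicitly. That restructuring, not the recompilation, is
where the cost lies.

    #include <mpi.h>
    #include <stdio.h>

    #define N 1000000

    /* shared-memory version: the whole array is simply visible */
    double sum_shared(const double *a, int n)
    {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            s += a[i];
        return s;
    }

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* distributed version: each rank owns only a slice, and the
           partial sums must be combined explicitly (remainder
           ignored for brevity) */
        int chunk = N / nprocs;
        double local = 0.0;
        for (int i = 0; i < chunk; i++)
            local += 1.0;   /* stand-in for a[rank*chunk + i] */

        double total = 0.0;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);

        if (rank == 0)
            printf("total = %f\n", total);

        MPI_Finalize();
        return 0;
    }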

--
John Dallman, jgd(a)cix.co.uk, HTML mail is treated as probable spam.
From: Brett Davis on
In article <i4GAn.30112$0_7.26359(a)newsfe25.iad>,
Robert Myers <rbmyersusa(a)gmail.com> wrote:

> nedbrek wrote:
>
> > Bundling in itself isn't too bad, you need somewhere to stash dependency
> > info.
> >
> > But, Itanium tried to record independence - turns out, determining
> > dependence is much more important (see Smith's dependency chain processing
> > research).
>
> The paper I found
>
> An Instruction Set and Microarchitecture for
> Instruction Level Distributed Processing
> Ho-Seop Kim and James E. Smith
> Department of Electrical and Computer Engineering
> University of Wisconsin-Madison

URL:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.6.790&rep=rep1&type=pdf

This looks like my first pass at going 8 wide, but predates my
efforts by half a decade. I was hoping to find a followup, and/or
for people to post similar work.

Something similar:

Dependence-Based Scheduling Revisited: A Tale of Two Baselines
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.126.7879&rep=rep1&type=pdf

Braid is interesting, even though I think it's the wrong way to go:

Achieving Out-of-Order Performance with Almost In-Order Complexity
http://users.ece.utexas.edu/~tsengf/files/braids08.pdf

Something else interesting:

Overcoming the Limitations of Conventional Vector Processors
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.6.7669&rep=rep1&type=pdf

The Landscape of Parallel Computing Research: A View from Berkeley
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.67.8705&rep=rep1&type=pdf


Brett

> advertises the ability to run at a high clock rate and also proposes
> binary translation. This paper was, of course, before the Pentium 4
> clock rate debacle, before Transmeta folded, and before power
> consumption became an obsession.
>
> That is not to say that the idea may still not have merit. On the face
> of it, keeping dependent chains together has the obvious advantage of
> increasing locality, so that computation can be efficiently parceled out
> over threads in a core, over separate cores on a chip, or even
> conceivably over multiple sockets.
>
> Robert.
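
(A toy sketch of the locality argument above, in C with POSIX threads
and chain contents invented for the example: two dependence chains
that never feed each other can be parceled out to separate workers,
with communication only at the join.)

    #include <pthread.h>
    #include <stdio.h>

    void *chain_a(void *arg)
    {
        long x = 1;
        for (int i = 0; i < 1000; i++)
            x = x * 3 + 1;        /* each step depends on the last */
        *(long *)arg = x;
        return NULL;
    }

    void *chain_b(void *arg)
    {
        long y = 2;
        for (int i = 0; i < 1000; i++)
            y = y * 5 + 7;        /* independent of chain_a */
        *(long *)arg = y;
        return NULL;
    }

    int main(void)
    {
        long a, b;
        pthread_t ta, tb;
        pthread_create(&ta, NULL, chain_a, &a);
        pthread_create(&tb, NULL, chain_b, &b);
        pthread_join(ta, NULL);
        pthread_join(tb, NULL);
        printf("%ld %ld\n", a + b, a - b);  /* only cross-chain use */
        return 0;
    }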