From: kenney on
In article <hr2cvu$5hs$1(a)smaug.linux.pwf.cam.ac.uk>, nmm1(a)cam.ac.uk ()
wrote:

> Virtually no modern software (in compiled form) will survive
> two changes of operating system version number, and a great
> deal won't survive one.

Well, the two programs I use most, Ameol (an offline reader) and
ClarisWorks 5, work fine on XP. I know Ameol will work on Windows 7.
Microsoft can be accused of a lot of things but they did keep their
program compatibility promises.

Ken Young
From: Quadibloc on
On Apr 26, 2:35 am, n...(a)cam.ac.uk wrote:

> However, it is used as an argument to avoid considering (say) the
> interrupt-free architecture that I have posted on this newsgroup.

I would indeed think that the idea of an "interrupt-free architecture"
would scare people. What, do all the I/O with wait loops?

Of course, I don't think you mean _that_. Given today's multithreaded
architectures, like Intel's chips with HyperThreading (two threads...
what a name) or those from Sun... why not eschew interrupts
completely, and just throw another thread on the barbeque when there
would have been an interrupt?

So you have a thread stack in RAM, and a finite number of the active
threads have their state in on-chip registers at any one time.

An interesting idea, but my inclination _would_ be to preserve upwards
compatibility. Each new thread ought to context-switch back to some
kernel code, as if it were just an interrupt routine, so that the
thread can be closed by privileged code. This lets this idea be
transparent to everyone except the OS writer, and allows compatible
chips to be produced which don't include this fancy (and likely gate-
intensive) facility.

John Savard
From: Bernd Paysan on
nmm1(a)cam.ac.uk wrote:
> An apocryphal story is that one compiler (back in the days when Linpack
> ruled) checked for the code being Linpack, and replaced it with some
> hand-tuned assembler. The rules were changed to forbid that!

I remember that the HP Fortran compiler compiled in a hand-optimized matrix
multiplication whenever it found something resembling a matrix
multiplication (more than 15 years ago), and I'm quite OK with that
approach. It was a challenge from one of the postdocs at my university,
and I won it by writing a matrix multiplication that was twice as fast as
HP's, and very close to the theoretically possible performance.

What's *not* OK is when the replacement code does not work in the general
case, but only for the special cases used in the benchmark. But
replacing a common idiom with better code is certainly within the scope of
an optimizing compiler.
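The idiom such a compiler pattern-matches is essentially the textbook
triple loop. A minimal sketch (in Python rather than Fortran; the function
name is mine) of the shape a compiler could recognize and swap for a
hand-tuned routine:

```python
def matmul(a, b):
    """Textbook triple-loop matrix multiply: the common idiom a
    compiler can legitimately replace with tuned code, because the
    replacement is correct for *all* inputs, not just a benchmark's."""
    n, k, m = len(a), len(b), len(b[0])
    c = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            s = 0.0
            for p in range(k):
                s += a[i][p] * b[p][j]
            c[i][j] = s
    return c
```

The key distinction from the Linpack story is that this substitution is
semantics-preserving for any operands, so it is ordinary optimization
rather than benchmark detection.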

--
Bernd Paysan
"If you want it done right, you have to do it yourself!"
http://www.jwdt.com/~paysan/
From: nmm1 on
In article <a0237522-9f61-449b-b72e-bb7c872d226d(a)u31g2000yqb.googlegroups.com>,
Quadibloc <jsavard(a)ecn.ab.ca> wrote:
>
>> However, it is used as an argument to avoid considering (say) the
>> interrupt-free architecture that I have posted on this newsgroup.
>
>I would indeed think that the idea of an "interrupt-free architecture"
>would scare people. What, do all the I/O with wait loops?
>
>Of course, I don't think you mean _that_. Given today's multithreaded
>architectures, like Intel's chips with HyperThreading (two threads...
>what a name) or those from Sun... why not eschew interrupts
>completely, and just throw another thread on the barbeque when there
>would have been an interrupt?

No, because you don't get any real benefit from simply doing that.

The proposal was to back off the misdesign of forcing all the current
requirements through a single mechanism, which causes no end of
problems (and has done for many decades). Actually, a more correct
statement is a resumption-free architecture, as it is resumption
that is the problem.

Instructions (especially in high-performance cores) could be designed
so that they would always complete. No information need be kept for
backing off. Arbitrary pre-execution is allowed.

Things like floating-point fixups would be handled in-thread, by an
extracode-like mechanism: say, an instruction that tests a condition
code and, if it is set, saves registers and calls a stored location.
All tried and tested stuff.
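A rough sketch of that in-thread fixup path (Python standing in for the
hardware; the table and handler names are mine, and the division-by-zero
case is just an illustrative stand-in for an FP exception):

```python
# "Stored locations": condition code -> fixup handler, set up by the
# runtime before the thread runs.
FIXUP_TABLE = {}

def fp_op_with_fixup(op, x, y):
    """Run an FP operation; on a trap-worthy condition, do the
    extracode-style sequence in-thread: note the condition code,
    save the operands ("registers"), and call the stored handler.
    No resumption machinery is involved -- the instruction completes
    either way."""
    try:
        return op(x, y), None
    except ZeroDivisionError:
        code = "div0"
        saved = (x, y)                # save registers
        handler = FIXUP_TABLE[code]   # fetch the stored location
        return handler(*saved), code  # complete with the fixed-up value

# Example handler: IEEE-style signed infinity for division by zero.
FIXUP_TABLE["div0"] = lambda x, y: float("inf") if x > 0 else float("-inf")
```

The point the sketch tries to capture is that the fixup runs to
completion inside the same thread; nothing needs to be suspended
mid-instruction and resumed later.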

I/O interrupts would be handled by an event loop in separate threads,
which would be woken up when there is something to do. And quite
probably having their own cores to do it. SOP on some systems.
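The I/O side can be sketched with ordinary threads: the service thread
blocks on a queue (i.e. sleeps until woken) and never interrupts the
compute threads. A minimal illustration, with all names my own:

```python
import queue
import threading

# Devices (simulated here) post completion events to a queue; the
# dedicated I/O thread sleeps in get() until there is something to do.
events = queue.Queue()
results = []

def io_service_thread():
    while True:
        ev = events.get()       # blocks -- the thread is woken, not interrupted
        if ev is None:          # shutdown sentinel
            break
        results.append("handled:" + ev)

t = threading.Thread(target=io_service_thread)
t.start()
events.put("disk-read-done")    # a "device" signals completion
events.put("net-packet")
events.put(None)
t.join()
```

On a machine with spare hardware threads or cores, that service loop
costs essentially nothing while idle, which is the point of giving it
its own core.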

True errors would stop the thread (i.e. NOT be resumable) and pass
control to a supervising thread. Again, tried and tested.

Machine-check interrupts might be handled by a separate processor
or like I/O ones. But we know that they will always be a problem.

Time-slice and attention interrupts would be handled by the ability
of a privileged process to insert a suspend instruction at the end
of the current instruction pipeline of another process. The pipeline
would then simply drain, cleanly, and suspend when it had quiesced.
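The drain-and-suspend behaviour can be mimicked in software with a flag
checked only at instruction boundaries (the flag stands in for the
inserted suspend instruction; all names here are illustrative):

```python
import threading

# A privileged party sets this event, standing in for inserting a
# suspend instruction at the end of the pipeline.
suspend = threading.Event()
completed = []

def worker(instructions):
    for insn in instructions:
        completed.append(insn())  # every started instruction completes
        if suspend.is_set():      # checked only *between* instructions
            return                # pipeline drained; quiesce cleanly

# Three "instructions"; the second one triggers the suspend request.
prog = [
    lambda: "i1",
    lambda: (suspend.set(), "i2")[1],
    lambda: "i3",
]
t = threading.Thread(target=worker, args=(prog,))
t.start()
t.join()
```

The instruction in flight when the request arrives still completes
("i2" above), and only then does the thread stop, which is exactly the
no-mid-instruction-interruption property being proposed.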

Single stepping debuggers would be handled by the ability to run a
thread in a suitable mode (say, suspending after each instruction,
before every branch or whatever). But NO interruption of any
instruction that had started.

Any others?

Note that any hardware lookahead would either be trivially cancellable
or count as being part of the instruction pipeline.


Regards,
Nick Maclaren.
From: Anne & Lynn Wheeler on

HP: last Itanium man standing
http://www.theregister.co.uk/2010/04/26/itanium_hp_last_standing/

from above:

Make no mistake: If Hewlett-Packard had not coerced chip maker Intel
into making Itanium into something it never should have been, the point
we have come to in the history of the server business would have got
here a hell of a lot sooner than it has. But the flip side is that a
whole slew of chip innovation outside of Intel might never have
happened.

.... snip ...

--
42yrs virtualization experience (since Jan68), online at home since Mar1970