From: nmm1 on
In article <fd8ee2e2-103d-48e2-80fe-5a2e93053cd1(a)l2g2000yqd.googlegroups.com>,
Robert Myers <rbmyersusa(a)gmail.com> wrote:
>On Oct 1, 9:39 pm, Andrew Reilly <andrew-newsp...(a)areilly.bpc-
>users.org> wrote:
>
>> I suspect that our difference of
>> opinion comes from the "level" that one might like to be doing the
>> experimentation/tuning.  You seem to be arguing that we'll only make
>> forward progress if we use languages/tools that expose the exact hardware
>> semantics so that we can arrange our applications to suit.  That may very
>> well be the right answer, but it's not one that I like the sound of.  I
>> would vastly prefer to be able to describe application parallelism in
>> something approaching a formalism, sufficiently abstract that it will be
>> able to both withstand generations of hardware change and be amenable to
>> system tuning.  Quite a bit of that sort of tuning is likely to be better
>> in a VM or dynamic compilation environment because there's some scope for
>> tuning strategies and even (locking) algorithms at run-time.
>
>We are in violent agreement. Nothing in the field ever seems to
>happen that way. If there were a plausible formalism that looked like
>it would stick, I think it would make a big difference, but that's the
>kind of bias I have that Nick snickers at. Short of that, I'd prefer
>that the tinkering be done before any metal is cut. Fat chance, I
>think.

No, I agree with that, though may disagree on what constitutes
"plausible" :-)


Regards,
Nick Maclaren.
From: nmm1 on
In article <7iknkoF31ud7sU1(a)mid.individual.net>,
Andrew Reilly <andrew-newspost(a)areilly.bpc-users.org> wrote:
>
>Most of the *languages* (or the committees that steer them and the
>communities that use them) that I know about are only *just* starting to
>think about how best to express and make use of genuine concurrency.

Algol 68 did, many research languages (including practical ones, like
Simula and Smalltalk) did, Ada did around 1980, Fortran did from 1985
and has had parallelism since Fortran 90 (SIMD, to be sure), and so on.
It goes back rather further than most people think.

>Up
>until now concurrency has been the domain of individual applications and
>non-core libraries (and OSes) (with a few notable but not terribly
>popular exceptions, like Occam and erlang). There are *still* blogs and
>articles being written to the effect that threads make programs too hard
>to maintain, and that programmers should therefore avoid them.

They do. POSIX-style threads (as in many languages) are truly evil.
Almost no programmers can use them correctly, and even fewer can debug
them, let alone tune them.
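
As a concrete illustration (a minimal sketch in C, not taken from any
particular program): even the textbook condition-variable idiom below is
easy to get subtly wrong, and the usual mistakes are marked in the
comments.

/* One producer, one consumer, a single shared slot.  The discipline is:
 * change the predicate only under the mutex, and re-test it in a loop. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
static int have_item = 0;              /* the shared predicate */
static int item;

static void *producer(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);         /* omit this and the flag update
                                          races with the consumer's test */
    item = 42;
    have_item = 1;                     /* the flag, not the signal, carries
                                          the information; a signal sent
                                          before anyone waits is simply lost */
    pthread_cond_signal(&ready);
    pthread_mutex_unlock(&lock);
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    while (!have_item)                 /* "if" instead of "while" breaks on
                                          spurious wakeups */
        pthread_cond_wait(&ready, &lock);
    printf("got %d\n", item);
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    pthread_create(&c, NULL, consumer, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}

Build with "cc -pthread".  Replace the while with an if, or drop the
producer's lock, and it will usually still appear to work, which is
exactly why such bugs survive into production.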


Regards,
Nick Maclaren.
From: kenney on
In article
<1f764cfa-998b-47d0-af93-bbbf66d599da(a)j39g2000yqh.googlegroups.com>,
rbmyersusa(a)gmail.com (Robert Myers) wrote:

> It does not seem
> equally likely that processors would be (or even could be) similarly
> accommodating.

That varies. I can remember a three-chip mini that had mask-programmable
microcode; versions of it were produced for Lisp and for Forth. For
something like Forth the VM only needs two actual registers, for the
stack and return stack pointers, though performance can be improved by
using more. Still, that is a matter of implementation; in standard Forth
the user has no need, and often no way, to tell how the language has
been implemented.
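
For illustration only (a toy sketch, nothing like real microcoded
hardware): a Forth-style inner interpreter in C needs little more than a
data-stack pointer and a return-stack pointer.

/* Toy Forth-style VM: the only essential "registers" are the data-stack
 * pointer (sp) and the return-stack pointer (rp). */
#include <stdio.h>

enum { OP_LIT, OP_ADD, OP_CALL, OP_RET, OP_PRINT, OP_HALT };

int main(void)
{
    /* main code at 0; a word at address 8 that adds the top two items */
    int prog[] = {
        OP_LIT, 2, OP_LIT, 3, OP_CALL, 8, OP_PRINT, OP_HALT,
        OP_ADD, OP_RET
    };
    int dstack[32], rstack[32];
    int sp = 0, rp = 0;                /* the two essential pointers */
    int ip = 0;                        /* instruction pointer */

    for (;;) {
        switch (prog[ip++]) {
        case OP_LIT:   dstack[sp++] = prog[ip++];             break;
        case OP_ADD:   sp--; dstack[sp - 1] += dstack[sp];    break;
        case OP_CALL:  rstack[rp++] = ip + 1; ip = prog[ip];  break;
        case OP_RET:   ip = rstack[--rp];                     break;
        case OP_PRINT: printf("%d\n", dstack[--sp]);          break;
        case OP_HALT:  return 0;
        }
    }
}

Running it prints 5; the point is that ip, sp and rp are the entire
machine state, and everything else is ordinary memory.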

Ken Young
From: ChrisQ on
kenney(a)cix.compulink.co.uk wrote:

>
> That varies. I can remember a three-chip mini that had mask-programmable
> microcode; versions of it were produced for Lisp and for Forth. For
> something like Forth the VM only needs two actual registers, for the
> stack and return stack pointers, though performance can be improved by
> using more. Still, that is a matter of implementation; in standard Forth
> the user has no need, and often no way, to tell how the language has
> been implemented.
>
> Ken Young

Bringing that a bit more up to date, what's wrong with the idea of
having a loadable control store on a modern micro of, say, Xeon class?
Such a machine might have a fairly extensive instruction set, or be
completely stripped down to a microcode loader. The advantage is that
each and every application could optimise an instruction set to suit the
needs of the application and the language. Even some low-end early minis
had this facility, e.g. the VAX-11/730 and others, from memory.

The problem with general purpose computing is just that - it's general
purpose, with all the compromises that it entails...

Chris
From: Andrew Reilly on
On Fri, 02 Oct 2009 12:07:34 +0100, ChrisQ wrote:
> Bringing that a bit more up to date, what's wrong with the idea of
> having a loadable control store on a modern micro of, say, Xeon class?

Mainly because that class of micro doesn't do much with "control store"
as such. All of the useful instructions are directly decoded and
implemented. Sure, some of the curlier ones are still sequenced with
something like a control store (string ops, for example), but not most.

> Such a machine might have a fairly extensive instruction set, or be
> completely stripped down to a microcode loader. The advantage is that
> each and every application could optimise an instruction set to suit
> the needs of the application and the language. Even some low-end early
> minis had this facility, e.g. the VAX-11/730 and others, from memory.

The argument against this kind of thing is that these machines are used
as time-sliced time sharing systems, and that loadable control state
constitutes a large lump of extra state that needs to be swapped on a
task switch. The alternative devised in that time frame was to expose
the vertical microcode instruction set (sort of), and replace writable
control store with on-chip instruction cache: RISC was born. (The
control store still gets swapped on task switch, unless the cache is big
enough, and there are process IDs in the cache tags, but the advantage is
that you get to do it piecemeal, on demand, and hopefully while you're
busy doing something useful...)
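
(Illustrative sketch only, not any real CPU's design: put an
address-space ID next to the address tag in each cache line and a task
switch just changes the current ASID; lines belonging to other tasks
simply fail to match and get refilled on demand.)

/* Toy direct-mapped cache whose tags carry an ASID as well as the
 * address tag, so a context switch changes current_asid instead of
 * invalidating every line. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NLINES 256                     /* 256 lines, direct-mapped */

struct line {
    int      valid;
    uint32_t asid;                     /* which address space owns the line */
    uint32_t tag;                      /* address bits above the index */
};

static struct line cache[NLINES];
static uint32_t current_asid;

static int lookup(uint32_t addr)
{
    struct line *l = &cache[(addr >> 6) % NLINES];   /* 64-byte lines */
    uint32_t tag = addr >> 14;                       /* bits above the index */

    if (l->valid && l->asid == current_asid && l->tag == tag)
        return 1;                      /* hit */
    l->valid = 1;                      /* miss: refill on behalf of this task */
    l->asid  = current_asid;
    l->tag   = tag;
    return 0;
}

int main(void)
{
    memset(cache, 0, sizeof cache);

    current_asid = 1;                  /* task A */
    printf("A, first touch:  %s\n", lookup(0x1000) ? "hit" : "miss");
    printf("A, second touch: %s\n", lookup(0x1000) ? "hit" : "miss");

    current_asid = 2;                  /* "task switch": no flush needed */
    printf("B, same address: %s\n", lookup(0x1000) ? "hit" : "miss");

    current_asid = 1;                  /* back to A */
    printf("A, again:        %s\n", lookup(0x1000) ? "hit" : "miss");
    return 0;
}

The final miss is the piecemeal swapping described above: B displaced
A's line, so A simply refills it the next time it runs.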

> The problem with general purpose computing is just that - it's general
> purpose, with all the compromises that it entails...

Sure there are compromises, but with primitive enough primitives, many
(most?) applications wind up needing the same sorts of facilities, and
when transistor budgets don't require any good idea be left out, there's
not a lot to be gained by going special-purpose. (usually, YMMV, etc...)

Cheers,

--
Andrew