From: BDH on
> Don't you need an
> appropriate language, and perhaps a plausible parallel machine
> abstraction, before you start on the compiler?

Done and done. People like me love that stuff! But do you have somebody
that's looking for it?

> How would your language be
> different from Verilog or VHDL or Occam?

Me, I'd go for a language with attributes of Lisp and APL, with layered
optimized compilation (using graph-rewriting) that makes it both high-
and low-level, and with code-data mixing plus some explicit type data.
I also have a couple of new basic abstractions that I reckon an
efficient architecture can be made to parallel, but they're a little
alien.
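
To make the graph-rewriting part concrete, here is a minimal sketch in
Python (purely illustrative; every name in it is mine, not from any
existing compiler). Terms are small trees, and one optimization pass
keeps fusing nested maps until nothing changes, so a later code
generator would only have to emit a single loop.

# Terms are nested tuples like ("map", f, xs).  The one rewrite rule
# here fuses map(f, map(g, xs)) into map(f . g, xs).

def compose(f, g):
    return lambda x: f(g(x))

def rewrite(term):
    """Apply the map-fusion rule bottom-up until a fixed point."""
    if not isinstance(term, tuple):
        return term
    term = tuple(rewrite(t) for t in term)   # rewrite children first
    if term[0] == "map" and isinstance(term[2], tuple) and term[2][0] == "map":
        _, f, inner = term
        _, g, xs = inner
        return rewrite(("map", compose(f, g), xs))
    return term

def evaluate(term):
    """Trivial evaluator, only enough to run the fused term."""
    if not isinstance(term, tuple):
        return term
    _, f, xs = term
    return [f(x) for x in evaluate(xs)]

prog = ("map", lambda x: x + 1, ("map", lambda x: x * 2, [1, 2, 3]))
print(evaluate(rewrite(prog)))   # [3, 5, 7]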

> What would be different, this
> time?

Hum. I'm more arrogant?

I know! I'm not in a position to do any harm!

> Or are you, perhaps, hinting at the "High Productivity" DARPA project, or
> one of Sun, IBM or Cray's sub-projects, each of which, I assume, has
> working answers to my previous questions?

Working, for certain values of working. But not, I don't think, the
right answers.

From: BDH on
> >Maybe I'm biased - I hate Java.
>
> That's like saying you think procedural programming sucks because you
> hate Pascal.

No, it's like saying you think procedural programming sucks, but
concede maybe you really just hate Pascal.

From: BDH on
> with code-data mixing with some explicit type data.

That is, code produces a set of mixes of data and future code. These
mixes can then be transformed. When wanted, the code in each mix is
evaluated on the data it is mixed with.
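
A rough sketch of that in Python (closures standing in for "future
code"; all the names here are made up for illustration, not a proposal
for concrete syntax):

# A mix pairs data with code that has not run yet.  Mixes can be
# transformed -- here by composing more code onto the pending code --
# and are only evaluated when something actually forces them.

class Mix:
    def __init__(self, data, future_code):
        self.data = data
        self.future_code = future_code     # callable: data -> result

    def transform(self, f):
        """New mix whose pending code runs f after the old pending code."""
        return Mix(self.data, lambda d: f(self.future_code(d)))

    def force(self):
        """Evaluate the pending code on the data it is mixed with."""
        return self.future_code(self.data)

# Code produces a set of mixes of data and future code...
mixes = [Mix(n, lambda d: d * d) for n in range(4)]
# ...the set is transformed before anything runs...
mixes = [m.transform(lambda r: r + 1) for m in mixes]
# ...and each mix is evaluated only when wanted.
print([m.force() for m in mixes])    # [1, 2, 5, 10]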

From: BDH on
> How would your language be
> different from Verilog or VHDL or Occam? What would be different, this
> time?

Or are you asking what I think of locks and message passing and so
forth? Well, I'm more concerned with VLIW-type techniques.

From: Nick Maclaren on

In article <1162421248.333748.269580(a)m7g2000cwm.googlegroups.com>,
"BDH" <bhauth(a)gmail.com> writes:
|> > Look not everything is parallel.
|> >
|> > As a start, you should read Amdahl's (law) paper.
|>
|> The core of that is pretty much obvious. But the slow things can be
|> made more parallel.

Sigh. There are at least the following classes of problem:

Programs where it is known how to do that, and the task is merely
redesigning
Programs where it is NOT known how to do that, but it IS believed to be
feasible in theory
Programs where it is believed to be infeasible to do that, but that is
not known for sure
Programs where it is KNOWN to be infeasible

There are also many programs where the design makes it infeasible, even
when it is feasible in theory, and ones where it can be made feasible
by changing the objective of the program slightly.
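
For reference, the bound Amdahl's paper formalises is usually written
as S(n) = 1 / ((1 - p) + p/n), where p is the fraction of the work that
can be parallelised and n the number of processors. Even with n
unbounded, the speedup is capped at 1/(1 - p), which is exactly why the
residual serial parts dominate in the long run.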

|> > I think for theoretic purposes, Backus' Turing lecture on getting out of
|> > the von Neumann paradigm has potential (you have to read these words
|> > carefully, FP didn't go far, and FL also stagnated).
|>
|> I don't know what FP and OO and half a dozen other acronyms are
|> supposed to be. ...

That is very clear. FP, to me, is usually floating-point, but it
clearly means functional programming here. By all means post dubious
and controversial assertions, as it is a good way to learn, but do not
tell people who know vastly more than you do that they are talking
nonsense.

|> > >it's to do with the language used to program gets turned into an RTL
|> > >(register transfer language) which specifies all variables as
|> > >registers, and then this has to be mapped onto a machine which only has
|> > >so many regs.
|>
|> Thanks a lot, Mr Von Neumann. That is not a very good system.

That is true, but the parallelism issue has little to do with registers,
and in that remark Backus was NOT talking primarily about parallelism
as it is now understood, but about the 'memory wall'.

The parallelism issue is about the language, yes, but it is about how
to express algorithms without introducing non-fundamental serialisation.
They are conceptually slightly different issues.
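
A toy illustration of that last point, in Python (names mine): the
first version serialises an associative reduction for no fundamental
reason; re-expressed over chunks, the same computation regroups freely.

xs = list(range(1_000_000))

# With an explicit running total, each iteration depends on the previous
# one: the algorithm as written is serial, even though summation is not.
total = 0
for x in xs:
    total += x

# Over chunks, the computation regroups freely because addition is
# associative: each partial sum could run on a different worker and the
# results be combined at the end.
chunk = 250_000
partials = [sum(xs[i:i + chunk]) for i in range(0, len(xs), chunk)]
assert total == sum(partials)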


Regards,
Nick Maclaren.