From: Nick Maclaren on 2 Nov 2006 05:22
In article <4qtfa0Foj3v6U1(a)individual.net>,
"Andrew Reilly" <andrew-newspost(a)areilly.bpc-users.org> writes:
|> On Wed, 01 Nov 2006 21:27:06 -0800, BDH wrote:
|> How are the developers, however smart, going to express their algorithms
|> without introducing sequential dependencies, however inadvertently?
|> (What is an algorithm, without sequential dependencies?) Don't you need an
|> appropriate language, and perhaps a plausible parallel machine
|> abstraction, before you start on the compiler? How would your language be
|> different from Verilog or VHDL or Occam? What would be different, this
Oh, THAT'S easy. The standard definition of an algorithm is too
restrictive, and it is perfectly possible to have ones based around
mathematical invariants with no ordering dependency at all. The only
way to do it is to get the teaching of mathematics turned on its head
so that it does not introduce serialisation until at least university.
Now, how to get there from here, and how to get that accepted by any
of the communities involved, are entirely different questions. To which
I have no answer :-)
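[To make the "invariants, not ordering" point concrete, here is a minimal sketch (my own example, not from the original post): an associative reduction whose result is pinned down by a mathematical invariant, so any combining schedule - left-to-right, pairwise, or fully parallel - is equally valid.]

```python
import operator

def tree_reduce(op, xs):
    """Reduce with no prescribed left-to-right order: combine adjacent
    pairs until one value remains.  For an associative op, the invariant
    'the op-fold of the remaining values' holds after every step, so the
    schedule (serial or parallel) does not affect the answer."""
    xs = list(xs)
    while len(xs) > 1:
        xs = [op(xs[i], xs[i + 1]) if i + 1 < len(xs) else xs[i]
              for i in range(0, len(xs), 2)]
    return xs[0]

# Any schedule gives the same answer for an associative op:
# tree_reduce(operator.add, range(100)) equals sum(range(100))
```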
|> Or are you, perhaps, hinting at the "High Productivity" DARPA project, or
|> one of Sun, IBM or Cray's sub-projects, each of which, I assume, has
|> working answers to my previous questions?
You are being ironic, of course?
From: BDH on 2 Nov 2006 05:28
> Oh, THAT'S easy. The standard definition of an algorithm is too
> restrictive, and it is perfectly possible to have ones based around
> mathematical invariants with no ordering dependency at all. The only
> way to do it is to get the teaching of mathematics turned on its head
> so that it does not introduce serialisation until at least university.
From: BDH on 2 Nov 2006 05:31
> > >Very good! A logarithmic time sort for a million ints would fill up a
> > >modern CPU, and would only run a million times faster.
> On further thought, flash memory would be more representative of the
> density a device of this form could achieve. Make that 4 million
But here is the thing - you CAN use a smaller sort core than what
you're sorting: with N items and a core of size M, about N*log(N/M)
memory transfers suffice. And you can do multiple simultaneous smaller
sorts instead of one larger sort on the same hardware.
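[A minimal sketch of the multi-pass idea, assuming 2-way merging between passes - the function name and transfer accounting are my own, not from the thread. Runs that fit in the core are sorted first, then merge passes halve the run count; each pass moves all N items through the core, and there are about log2(N/M) such passes.]

```python
def core_limited_sort(items, core_size):
    """Sort with a 'core' that holds at most core_size items at once,
    counting element transfers across the core boundary."""
    n = len(items)
    transfers = 0
    # Pass 0: sort runs that fit entirely in the core.
    runs = []
    for i in range(0, n, core_size):
        run = sorted(items[i:i + core_size])  # done inside the core
        transfers += 2 * len(run)             # read in, write out
        runs.append(run)
    # Merge passes: 2-way merge pairs of runs until one remains.
    while len(runs) > 1:
        merged = []
        for i in range(0, len(runs), 2):
            if i + 1 == len(runs):            # odd run out, carried as-is
                merged.append(runs[i])
                continue
            a, b = runs[i], runs[i + 1]
            out, ia, ib = [], 0, 0
            while ia < len(a) and ib < len(b):
                if a[ia] <= b[ib]:
                    out.append(a[ia]); ia += 1
                else:
                    out.append(b[ib]); ib += 1
            out.extend(a[ia:]); out.extend(b[ib:])
            transfers += 2 * len(out)         # each element read and written
            merged.append(out)
        runs = merged
    return (runs[0] if runs else []), transfers
```

For N = 1024 and M = 64 there are 16 initial runs, hence log2(16) = 4 merge passes on top of the initial sorting pass, for 2*N*5 = 10240 transfers - matching the N*log(N/M) shape (plus the one unavoidable initial pass).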
From: BDH on 2 Nov 2006 05:46
> |> The core of that is pretty much obvious. But the slow things can be
> |> made more parallel.
> Sigh. There are at least the following classes of problem:
> Programs where it is KNOWN to be infeasible
What examples of these do you like?
> |> I don't know what FP and OO and half a dozen other acronyms are
> |> supposed to be. ...
> That is very clear.
Get correcting! You clearly have lots of work!
> That is true, but the parallelism issue has little to do with registers,
> and Backus was NOT talking primarily about parallelism as it is now
> understood in that remark, but about the 'memory wall'.
More of a data transfer wall. Alright, I have my plan and all, but how
would you deal with this?
> The parallelism issue is about the language, yes, but it is about how
> to express algorithms without introducing non-fundamental serialisation.
> They are conceptually slightly different issues.
OK, what is graph theory missing that's hard to fix?
From: BDH on 2 Nov 2006 05:58
> By all means post dubious
> and controversial assertions, as it is a good way to learn, but do not
> tell people who know vastly more than you do that they are talking
What, you want more self-deprecation?
(Jumps in puddle of Prejudice-B-Gone.)