From: Robert Myers on
On Apr 30, 10:53 am, Quadibloc <jsav...(a)ecn.ab.ca> wrote:
> On Apr 30, 6:45 am, Robert Myers <rbmyers...(a)gmail.com> wrote:
>
> > Lots of ways of "solving" Navier-Stokes on huge clusters with limited
> > bandwidth, but they entail considerable self-deception.
>
> But on a high-bandwidth cluster, a relaxation method could involve
> considerably less self-deception.
>
With enough bandwidth and enough memory, one can solve the time-
dependent Navier-Stokes equations with a very high level of confidence
in the accuracy. There is no need to use relaxation methods, and time-
dependence is critical to understanding turbulence. I keep waiting
for the nation's best effort to appear.
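
To make that concrete, here is roughly what one explicit time step of
such a direct calculation looks like. The global FFTs are exactly where
the bandwidth goes: on a cluster they become all-to-all transposes.
This is only a toy sketch (2D periodic box, vorticity form, forward
Euler, no dealiasing) with made-up grid size, viscosity and time step,
not anyone's production code.

# Sketch: one explicit step of the 2D incompressible Navier-Stokes
# equations in vorticity form, d(omega)/dt + u.grad(omega) = nu*lap(omega),
# via a pseudo-spectral method. Every step needs global FFTs, which is
# where cluster bandwidth gets consumed. All parameters are illustrative.
import numpy as np

N  = 128           # grid points per side (illustrative)
nu = 1e-3          # viscosity (illustrative)
dt = 1e-3          # time step (illustrative; no stability check here)

k      = 2.0 * np.pi * np.fft.fftfreq(N, d=1.0 / N)
kx, ky = np.meshgrid(k, k, indexing="ij")
k2     = kx**2 + ky**2
k2_inv = np.where(k2 == 0.0, 0.0, 1.0 / np.where(k2 == 0.0, 1.0, k2))

def step(omega):
    """Advance the vorticity field by one forward-Euler step."""
    omega_h = np.fft.fft2(omega)
    psi_h   = omega_h * k2_inv                       # lap(psi) = -omega
    u  = np.real(np.fft.ifft2( 1j * ky * psi_h))     # u =  d(psi)/dy
    v  = np.real(np.fft.ifft2(-1j * kx * psi_h))     # v = -d(psi)/dx
    wx = np.real(np.fft.ifft2( 1j * kx * omega_h))
    wy = np.real(np.fft.ifft2( 1j * ky * omega_h))
    nonlin_h = np.fft.fft2(u * wx + v * wy)
    omega_h += dt * (-nonlin_h - nu * k2 * omega_h)
    return np.real(np.fft.ifft2(omega_h))

x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
omega = -2.0 * np.cos(X) * np.cos(Y)                 # Taylor-Green vorticity
for _ in range(10):
    omega = step(omega)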

Robert.

From: Ken Hagan on
On Fri, 30 Apr 2010 13:45:27 +0100, Robert Myers <rbmyersusa(a)gmail.com>
wrote:

> Lots of ways of "solving" Navier-Stokes on huge clusters with limited
> bandwidth, but they entail considerable self-deception. LBM may prove
> to be little more than a new kind of self-deception, but it is
> naturally local and naturally parallel.

If it is naturally local and parallel, then building wafer-sized custom
silicon for just this problem is probably "just an engineering problem".
Since fluid mechanics has, er, quite a few applications, there might even
be the money to pay for it.
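
The "naturally local" claim is easy to see in the update rule itself:
a D2Q9 lattice-Boltzmann step is a per-site BGK collision followed by
shifting each population to a nearest neighbour, so the silicon would
only ever need nearest-neighbour wiring. A rough numpy sketch of the
textbook scheme, with made-up lattice size and relaxation time:

# Sketch of one D2Q9 lattice-Boltzmann (BGK) update. The point is the
# data movement: collision is purely per-site, streaming only shifts
# populations to the 8 nearest neighbours, so the method maps naturally
# onto local, parallel hardware. Parameters are illustrative only.
import numpy as np

NX, NY = 64, 64
TAU    = 0.6                      # BGK relaxation time (illustrative)

# D2Q9 discrete velocities and weights.
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, ux, uy):
    """Standard second-order equilibrium distribution."""
    feq = np.empty((9, NX, NY))
    usq = ux**2 + uy**2
    for i in range(9):
        eu = E[i, 0]*ux + E[i, 1]*uy
        feq[i] = W[i] * rho * (1 + 3*eu + 4.5*eu**2 - 1.5*usq)
    return feq

def step(f):
    """One collision + streaming step on a periodic lattice."""
    rho = f.sum(axis=0)
    ux  = (E[:, 0, None, None] * f).sum(axis=0) / rho
    uy  = (E[:, 1, None, None] * f).sum(axis=0) / rho
    f  += -(f - equilibrium(rho, ux, uy)) / TAU          # local collision
    for i in range(9):                                   # nearest-neighbour streaming
        f[i] = np.roll(np.roll(f[i], E[i, 0], axis=0), E[i, 1], axis=1)
    return f

# Start at rest with a small density bump and run a few steps.
rho0 = np.ones((NX, NY)); rho0[NX//2, NY//2] += 0.01
f = equilibrium(rho0, np.zeros((NX, NY)), np.zeros((NX, NY)))
for _ in range(100):
    f = step(f)
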
From: Morten Reistad on
In article <hrebnj$rl2$1(a)soup.linux.pwf.cam.ac.uk>, <nmm1(a)cam.ac.uk> wrote:
>In article <4bdaa920$0$22940$e4fe514c(a)news.xs4all.nl>,

>More seriously, it's two, not one, actually - and that's not the
>real issue, anyway. Your mistake is to assume that parallelism
>is necessarily about doing several logically unrelated tasks at
>once. That is only one form of it, and not the most useful one.
>
>Many mathematicians can 'think in parallel', which includes the
>ability to think in terms of the transformation of invariants over
>a set of data. My point is that people are reluctant to move from
>the very serial logic that they were taught at school - and I am
>including the top level of academic scientists when I am using
>the word 'people' in that respect. We need a paradigm shift, in
>mathematics and science teaching as much as computing.

This ability to "think in parallel" is not unique to mathematicians.
Most physicists, biologists, and economists, even theologians, understand
this paradigm as a matter of course. It is when faced with the programming
that most people lose the trail.
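
For what it is worth, the trail usually gets lost at exactly the step
below: the same whole-set transformation is easy to state, but habit
writes it down as an ordered loop. A trivial illustration in Python,
with every name made up:

# The same transformation over a data set, stated once serially and once
# as a whole-set operation. The second form says nothing about order, so
# a runtime is free to execute it across many workers.
from concurrent.futures import ProcessPoolExecutor

def normalise(x):
    # Some per-element transformation (made up for illustration).
    return (x - 50.0) / 50.0

if __name__ == "__main__":
    data = list(range(100))

    # The serial habit: an explicit ordering the problem never asked for.
    serial = []
    for x in data:
        serial.append(normalise(x))

    # The whole-set statement: apply the same transformation to every
    # element, with no order implied.
    with ProcessPoolExecutor() as pool:
        parallel = list(pool.map(normalise, data))

    assert parallel == serial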

>
>Yes, I know that I am a long-haired and wild-eyed radical ....

It is May Day tomorrow. Let us make a Glorious post-Moore revolution!

-- mrr

From: Morten Reistad on
In article <4bdaa920$0$22940$e4fe514c(a)news.xs4all.nl>,
Casper H.S. Dik <Casper.Dik(a)Sun.COM> wrote:
>nmm1(a)cam.ac.uk writes:
>
>>That being said, MOST of the problem IS only that people are very
>>reluctant to change. We could parallelise ten or a hundred times
>>as many tasks as we do before we hit the really intractable cases.
>
>Reluctant? It's in our genes; we can only do one task at a time, and
>whenever we subdivide a task, we do so serially.
>That's why we use the languages and the algorithms we use.

I concur. After nearly 25 years of herding programmers I find
that only about 5% can conceptually handle non-imperative code, such as
drivers for communication devices, or write good code for window
displays, much less handle small pieces of these in parallel.

I see the only way out as making application-specific languages
that act as a middle layer between the business logic and the
parallelised engine, with a fairly strong coercion towards
generalised descriptions and declarations that can be executed
both in parallel and with consistent state handling.
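
As a toy of what I have in mind (every name and structure below is
invented for illustration, not any existing system): the business-logic
side only declares pure, order-independent pieces of work plus a way to
combine results, and the engine owns the parallelism and the state
handling.

# Toy version of the proposed split: the "business logic" declares pure,
# order-independent pieces of work plus a way to combine results; the
# engine below is free to run them in parallel and owns all state
# handling. Names and structure are invented purely for illustration.
from concurrent.futures import ProcessPoolExecutor
from functools import reduce

def run_declarative(transform, combine, items):
    """Generic engine: parallel map, then an associative reduction."""
    with ProcessPoolExecutor() as pool:
        partial_results = list(pool.map(transform, items))
    return reduce(combine, partial_results)

# "Business logic" side: only declarations, no threads, no shared state.
def order_total(order):
    return sum(qty * price for qty, price in order["lines"])

def add(a, b):
    return a + b

if __name__ == "__main__":
    orders = [{"lines": [(2, 9.50), (1, 3.25)]},
              {"lines": [(5, 1.10)]}]
    print(run_declarative(order_total, add, orders))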

-- mrr

From: Morten Reistad on
In article <hreg32$6s1$1(a)soup.linux.pwf.cam.ac.uk>, <nmm1(a)cam.ac.uk> wrote:
>In article <hrec1m$jse$1(a)news.eternal-september.org>,
>nedbrek <nedbrek(a)yahoo.com> wrote:

>>I'm curious what sort of problems these are?
>
>Anything where the underlying problem requires a complete solution
>to one step before proceeding to the next, and the solution of a
>step is a provably intractable problem (except by executing the
>logic). The extreme answer is sequentially analysing data as it
>comes in, in real time.

Linking is a good example, because it can be solved by adding extra
passes. The early passes determine the external results that must
be propagated to the other steps; the later passes then solve each
step fully. For linking, the difference in work between these is pretty big.

>>My day-to-day tasks are:
>>1) Compiling (parallel)
>>2) Linking (serial)
>>3) Running a Tcl interpreter (serial)
>>4) Simulating microarchitectures (serial, but I might be able to run
>>multiple simulations at once, given enough RAM).
>>
>>I'm particularly interested in parallel linking.
>
>Linking is fairly simply parallelisable, in the same way that most
>such transformations are - i.e. more in theory than practice. The
>only problem is when you have to do a large amount of the work of
>one part to work out what other tasks that part implies.

Fully parallelising linking will require a few more passes, since
there are interdependencies. With various architectures you may even
have to iterate.
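
A rough sketch of the pass structure I mean, over a hypothetical
in-memory "object file" format: pass one scans every object in parallel
just to publish its exported symbols, pass two resolves each object in
parallel against the now-complete table. Real linkers are far messier,
and with some architectures you would iterate the second pass; this
only shows why the extra pass buys parallelism.

# Sketch of two-pass "parallel linking" over a toy object-file format.
# Pass 1 (parallel): extract each object's exported symbols.
# Pass 2 (parallel): resolve every object's references against the
# merged table. Object format, addresses and names are all invented.
from concurrent.futures import ThreadPoolExecutor

objects = {
    "a.o": {"defines": {"main": 0x100},   "refers": ["helper", "table"]},
    "b.o": {"defines": {"helper": 0x200}, "refers": ["table"]},
    "c.o": {"defines": {"table": 0x300},  "refers": []},
}

def exports(name):
    """Pass 1 work item: publish (symbol -> address) pairs from one object."""
    return objects[name]["defines"]

def resolve(name, symtab):
    """Pass 2 work item: bind one object's references using the full table."""
    return {ref: symtab[ref] for ref in objects[name]["refers"]}

if __name__ == "__main__":
    with ThreadPoolExecutor() as pool:
        # Pass 1: independent per object, so it parallelises trivially.
        symtab = {}
        for defs in pool.map(exports, objects):
            symtab.update(defs)
        # Pass 2: also independent per object, now that symtab is complete.
        resolved = dict(zip(objects,
                            pool.map(lambda n: resolve(n, symtab), objects)))
    print(resolved)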

-- mrr