From: Barry Margolin on
In article <1168489207.874750.19330(a)77g2000hsv.googlegroups.com>,
wv9557(a)yahoo.com wrote:

> I don't think there is anything more anti-parallelism than Lisp. Lisp
> is recursive; a function basically has to wait for another instance of
> itself to finish before continuing. Where is the parallelism?

Functions like MAPCAR easily lend themselves to parallel variants that
operate on many elements concurrently. *Lisp, the Lisp dialect for the
massively-parallel Connection Machine, was built around operations like
this.

For coarse-grained parallelism, you can easily make use of the
multi-threading features of most modern Lisps.
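As a rough illustration, here is a minimal sketch of a parallel MAPCAR,
assuming a Lisp with the bordeaux-threads portability layer loaded (the
BT: package below). It spawns one thread per element, so it only pays
off when FUNCTION does coarse-grained work:

  (defun parallel-mapcar (function list)
    "Apply FUNCTION to each element of LIST in its own thread;
collect the results in the original order."
    (let* ((results (make-array (length list)))
           (threads (loop for x in list
                          for i from 0
                          collect (let ((x x) (i i))   ; capture per-iteration bindings
                                    (bt:make-thread
                                     (lambda ()
                                       (setf (aref results i)
                                             (funcall function x))))))))
      (mapc #'bt:join-thread threads)   ; wait for all workers
      (coerce results 'list)))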

--
Barry Margolin, barmar(a)alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
*** PLEASE don't copy me on replies, I'll read them in the group ***
From: Barry Margolin on
In article <m3lkkba0bu.fsf(a)robohate.meer.net>, Madhu <enometh(a)meer.net>
wrote:

> * "Spiros Bousbouras" <1168298748.558477.152070(a)11g2000cwr.XXXXX.com> :
> | If you want to analyse chess positions you can never
> | have too much speed and it has nothing to do with
> | rendering. I'm sure it's the same situation with go and
> | many other games.
>
> But having more than one core will not be a benefit if your algorithms
> are graph-based and have to search a tree. IIRC most graph algorithms
> (DFS, BFS) are inherently unparallelizable.

I think there was a Chess program for the Connection Machine, a
massively parallel computer with thousands of very simple processors
(or, in the case of the CM-5 model, hundreds of SPARC processors). I
don't know the specifics of the algorithm, but my guess is that it
worked by assigning analysis of different positions at a particular ply
to each processor. Walking the tree isn't very parallelizable, but once
you've reached the leaves you can get quite a bit of benefit.
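A hedged sketch of that leaf-level idea, reusing the PARALLEL-MAPCAR
above; GENERATE-POSITIONS-AT-PLY and STATIC-EVALUATE are hypothetical
helpers, not anything taken from a real engine:

  (defun parallel-leaf-search (root ply)
    ;; Walk the tree serially down to a fixed ply...
    (let ((leaves (generate-positions-at-ply root ply)))
      ;; ...then score the leaf positions concurrently and pick the best.
      (reduce #'max (parallel-mapcar #'static-evaluate leaves))))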

--
Barry Margolin, barmar(a)alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
*** PLEASE don't copy me on replies, I'll read them in the group ***
From: John Thingstad on
On Thu, 11 Jan 2007 01:37:28 +0100, Robert Uhl <eadmund42(a)NOSPAMgmail.com>
wrote:

> "Tim Bradshaw" <tfb+google(a)tfeb.org> writes:
>>
>> However they do care about things like battery life, noise, and system
>> cost which correlate quite well with power consumption. And they
>> *will* care about power consumption (even the Americans) when the
>> systems start costing significantly more than their purchase cost to
>> run for a year.
>
> How long until that's the case? I just built a new box with a Pentium D
> (said box is never turned off, ever), and the gas & electricity bill for
> my entire home is still around $40-$60/month, depending on the season of
> the year. And I'm a homebrewer, which means that I spend a significant
> amount of electricity heating 6 1/2 gallons of liquid and boiling it
> down to 5 1/4 gallons. Oh, and it's winter here in Denver, so I have to
> heat my home.
>

Well, modern computers come with power-saving features.
Max consumption on my machine is about 400 W.
For a machine with dual graphics boards consumption can be as high as
1000 W.
But average consumption is much lower, more like 40 W.
So about as much as a light bulb.
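To put that in running-cost terms, here is a back-of-envelope figure
(the $0.10/kWh rate is just an assumed value; plug in your own):

  (* 0.040      ; average draw in kW
     24 365     ; hours in a year
     0.10)      ; assumed price in $/kWh
  ;; => ~35 dollars per year at a 40 W average draw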

--
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/
From: Juan R. on

Pascal Bourguignon ha escrito:

> > Neural Networks,
> >
> > To what end?
>
> To do your job in your place. In ten years, we'll have enough
> processing power and memory in desktop computers to model a whole
> human brain. Better have parallel processors then, if you want to
> emulate one at an acceptable speed.

From where did you get that data?

So far as I know, the prediction is that sometime in the second half
of this century a fast supercomputer could offer us only about a 1000 s
MD simulation of an _E. coli_ cell (~10^10 heavy atoms). MD simulations
are very inexpensive and rough. The prediction suggests no accurate
_ab initio_ model will be available in this century.

From: John Thingstad on
On Thu, 11 Jan 2007 10:07:12 +0100, Juan R.
<juanrgonzaleza(a)canonicalscience.com> wrote:

>
> Pascal Bourguignon ha escrito:
>
>> > Neural Networks,
>> >
>> > To what end?
>>
>> To do your job in your place. In ten years, we'll have enough
>> processing power and memory in desktop computers to model a whole
>> human brain. Better have parallel processors then, if you want to
>> emulate one at an acceptable speed.
>
> From where did you get that data?
>
> So far as I know, the prediction is that sometime in the second half
> of this century a fast supercomputer could offer us only about a 1000 s
> MD simulation of an _E. coli_ cell (~10^10 heavy atoms). MD simulations
> are very inexpensive and rough. The prediction suggests no accurate
> _ab initio_ model will be available in this century.
>

My suggestion is to forget Moore's law.
The rate of increase in computing power has been declining for some time.
Growth is no longer exponential but closer to linear.
Say, a quad-core CPU might give you about 180% the speed of a single core;
see Amdahl's law (Wikipedia).
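For reference, Amdahl's law says that with a parallelizable fraction P
of the work and N cores the speedup is 1 / ((1 - P) + P/N). A quick
sanity check of that 180% figure in Lisp (the 0.6 parallel fraction is
just an assumed value):

  (defun amdahl-speedup (p n)
    ;; P = fraction of the work that parallelizes, N = number of cores.
    (/ 1 (+ (- 1 p) (/ p n))))

  ;; (amdahl-speedup 0.6 4) => ~1.82, i.e. roughly 180% of a single core.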

--
Using Opera's revolutionary e-mail client: http://www.opera.com/mail/