From: Tim Bradshaw on
Pascal Bourguignon wrote:

> Simulations,

Of what?

> 3D,

perhaps, but there's probably more than enough computing power already
to do all but the rendering, and the rendering will never be done in a
general purpose CPU I should think (as it isn't now, of course).

> virtual worlds,

To be interesting these involve information that travels over networks
which have latencies of significant fractions of a second and
constrained bandwidth. Seems unlikely that vast local computing power
will help that much.

> Neural Networks,

To what end?

> (game) AI.

I know nothing about games really, but I'd lay odds that the thing that
consumes almost all the computational resource is rendering. See above.

>
> Indeed, it would be nice to add several parallel memory buses, or just
> have big L2 or L3 caches.
> With a 64MB L2 cache PER core and 64 cores, you have 4GB of RAM on chip.

There are lots of reasons why that sort of thing is hard and expensive.

From: Spiros Bousbouras on
Tim Bradshaw wrote:
> Pascal Bourguignon wrote:
> > (game) AI.
>
> I know nothing about games really, but I'd lay odds that the thing that
> consumes almost all the computational resource is rendering. See above.

If you want to analyse chess positions you can never
have too much speed and it has nothing to do with
rendering. I'm sure it's the same situation with go and
many other games.
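The point generalises: game-tree search cost grows exponentially with
depth, so extra speed buys deeper analysis directly. A minimal sketch,
nothing chess-specific (the toy tree and leaf scores below are invented
for illustration, not from any real engine):

```python
# Toy illustration: game-tree search soaks up arbitrary CPU because
# every extra ply multiplies the node count by the branching factor.
# Leaves are integers (static evaluations, from the mover's view);
# interior nodes are lists of child positions.

def negamax(node, depth):
    """Return the best score the side to move can guarantee."""
    if isinstance(node, int):
        return node
    if depth == 0:
        return 0  # out of search budget at an interior node
    return max(-negamax(child, depth - 1) for child in node)

# A tiny hand-built tree: two moves per side, two plies deep.
tree = [[3, -2], [5, -8]]
print(negamax(tree, 2))  # -2
```

With a realistic branching factor (roughly 35 for chess, hundreds for
go) each extra ply of depth costs that factor again in nodes, which is
why analysis engines never run out of uses for CPU.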

From: Pascal Bourguignon on
"Tim Bradshaw" <tfb+google(a)tfeb.org> writes:

> Pascal Bourguignon wrote:
>
>> Simulations,
>
> Of what?

Of anything. Galaxies, planets, weather, ecosystems, animals, cells,
nanobots, chemicals, particles, etc.


>> 3D,
>
> perhaps, but there's probably more than enough computing power already
> to do all but the rendering, and the rendering will never be done in a
> general purpose CPU I should think (as it isn't now, of course).
>
> virtual worlds,
>
> To be interesting these involve information that travels over networks
> which have latencies of significant fractions of a second and
> constrained bandwidth. Seems unlikely that vast local computing power
> will help that much.

Let me see: in my far-out corner of the net, my ISP has doubled my ADSL
speed every year (without my even asking). In ten years I should have
2 Gb/s of Internet bandwidth here. I don't think 64 cores will be too
many to handle that.
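The back-of-envelope behind that figure, assuming a 2 Mb/s starting
line (the starting number is an assumption for illustration, not stated
in the post):

```python
# Doubling every year for ten years is ten doublings.
start_mbps = 2                    # assumed current ADSL speed
years = 10
final_mbps = start_mbps * 2 ** years
print(final_mbps)                 # 2048 Mb/s, i.e. about 2 Gb/s
```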



> Neural Networks,
>
> To what end?

To do your job in your place. In ten years we'll have enough
processing power and memory in desktop computers to model a whole
human brain. Better to have parallel processors then, if you want to
emulate one at an acceptable speed.



> (game) AI.
>
> I know nothing about games really, but I'd lay odds that the thing that
> consumes almost all the computational resource is rendering. See above.


That's because rendering consumes all the CPU: nothing else gets done
in games (except tricks).


>> Indeed, it would be nice to add several parallel memory buses, or just
>> have big L2 or L3 caches.
>> With a 64MB L2 cache PER core and 64 cores, you have 4GB of RAM on chip.
>
> There are lots of reasons why that sort of thing is hard and expensive.

They won't be anymore.
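For what it's worth, the arithmetic in the quoted figure does check
out:

```python
# 64 MB of L2 cache per core, times 64 cores, in GB.
mb_per_core = 64
cores = 64
total_gb = (mb_per_core * cores) // 1024
print(total_gb)  # 4, matching the 4GB claimed above
```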


--
__Pascal Bourguignon__ http://www.informatimago.com/

"Our users will know fear and cower before our software! Ship it!
Ship it and let them flee like the dogs they are!"
From: Steven L. Collins on

"Ben" <benbelly(a)gmail.com> wrote in message
news:1168265574.746548.104790(a)11g2000cwr.googlegroups.com...
> On Jan 8, 8:21 am, "sailormoo...(a)gmail.com" <sailormoo...(a)gmail.com>
> wrote:
>> From this link:
>> http://itpro.nikkeibp.co.jp/a/it/alacarte/iv1221/matsumoto_1.shtml
>> (Note: Japanese)
>> Matz, the creator of Ruby, said that in the next 10 years, 64- or
>> 128-core desktop computers will be common. It's nearly impossible to
>> simply write that many threads by hand; it should be done
>> automatically, so maybe functional languages will do a better job at
>> parallel programming than procedural languages like C or Ruby.
>
> Large embedded systems quite often have that many threads. Obviously,
> they aren't all actually executing simultaneously on the processors we
> have right now, but various numbers of them are run depending on the
> platform, so the system is (or should be anyway) coded to handle each
> thread executing at any time. Not that I disagree with your point -
> functional programming would be a great help as our systems grow in
> complexity.
>
> Begin old embedded programmer rant:
> Kids these days just have no idea how to watch for side effects and
> avoid them, or why they should. What are they learning in school?!
> And don't even ask them to create a formal state machine for the side
> effects they need. They'd rather throw fifteen booleans in there and
> hope they can cover every possibility!
> End rant.
>
> Regardless of the language used on an actual product, training people
> in functional programming teaches them the skills they need when
> writing large scale concurrent apps, or small, single threaded apps, or
> any code that they don't want to be patching for the next 30 years.
>
> BTW: Has anyone done any hard real time work using Lisp? How'd it go?
>
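The "formal state machine instead of fifteen booleans" point in the
rant above can be sketched like this (a made-up three-state controller,
not from any real embedded system):

```python
# Toy illustration of an explicit state machine: every legal transition
# is listed in one table, so there is no forgotten combination of flags.
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    RUNNING = auto()
    DONE = auto()

# (state, event) -> next state. Anything absent is an illegal
# transition and is rejected explicitly rather than silently.
TRANSITIONS = {
    (State.IDLE, "start"): State.RUNNING,
    (State.RUNNING, "finish"): State.DONE,
    (State.RUNNING, "abort"): State.IDLE,
}

def step(state, event):
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal event {event!r} in state {state}")

s = State.IDLE
for ev in ("start", "finish"):
    s = step(s, ev)
print(s)  # State.DONE
```

Illegal transitions fail loudly at the table lookup instead of leaving
some unplanned boolean combination to misbehave silently.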
See "Real-Time programming in Common Lisp" by James R. Allard and Lowell B.
Hawkinson




From: Tim Bradshaw on
Spiros Bousbouras wrote:
> Tim Bradshaw wrote:

>
> If you want to analyse chess positions you can never
> have too much speed and it has nothing to do with
> rendering. I'm sure it's the same situation with go and
> many other games.

Quite. Those kinds of games are really popular on PCs, I hear: no one
plays all those tedious `video games' any more.