From: sailormoontw on
From this link
http://itpro.nikkeibp.co.jp/a/it/alacarte/iv1221/matsumoto_1.shtml
(Note: Japanese)
Matz, the creator of Ruby, said that in the next 10 years desktop
computers with 64 or 128 cores will be common. It's nearly impossible
to simply write that many threads by hand; the parallelization should
be done automatically, so maybe a functional language will do a better
job at parallel programming than a procedural language like C or Ruby.
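
A minimal sketch of that idea in Common Lisp, assuming the lparallel
library (an assumption; the article doesn't mention it): when the
computation is a side-effect-free map, going parallel is a
one-function change.

;; Load lparallel first (e.g. (ql:quickload :lparallel) under Quicklisp).

(setf lparallel:*kernel* (lparallel:make-kernel 64)) ; one worker per core

(defun expensive (x)
  "A pure function: no side effects, so calls are independent."
  (loop repeat 1000000 sum (* x x)))

;; Sequential:
(mapcar #'expensive '(1 2 3 4 5 6 7 8))

;; Parallel -- the only change is MAPCAR -> PMAPCAR:
(lparallel:pmapcar #'expensive '(1 2 3 4 5 6 7 8))

Because EXPENSIVE has no side effects, the calls are independent, and
the library is free to spread them across however many cores the
kernel has workers for.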

From: Ben on
On Jan 8, 8:21 am, "sailormoo...(a)gmail.com" <sailormoo...(a)gmail.com>
wrote:
> From this link
> http://itpro.nikkeibp.co.jp/a/it/alacarte/iv1221/matsumoto_1.shtml
> (Note: Japanese)
> Matz, the creator of Ruby, said that in the next 10 years desktop
> computers with 64 or 128 cores will be common. It's nearly impossible
> to simply write that many threads by hand; the parallelization should
> be done automatically, so maybe a functional language will do a better
> job at parallel programming than a procedural language like C or Ruby.

Large embedded systems quite often have that many threads. Obviously,
they aren't all actually executing simultaneously on the processors we
have right now, but various numbers of them run depending on the
platform, so the system is (or should be, anyway) coded to handle any
thread executing at any time. Not that I disagree with your point -
functional programming would be a great help as our systems grow in
complexity.
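
As a sketch of why that matters, assuming the portable
bordeaux-threads library: an unprotected read-modify-write on shared
state is exactly the kind of side effect that bites once threads
really do run simultaneously.

(defvar *counter* 0)
(defvar *counter-lock* (bt:make-lock "counter"))

(defun unsafe-bump ()
  ;; INCF expands to a read, an add, and a write: another thread can
  ;; run between those steps, and an increment gets lost.
  (incf *counter*))

(defun safe-bump ()
  ;; Serialize the read-modify-write under a lock.
  (bt:with-lock-held (*counter-lock*)
    (incf *counter*)))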

Begin old embedded programmer rant:
Kids these days just have no idea how to watch for side effects and
avoid them, or why they should. What are they learning in school?!
And don't even ask them to create a formal state machine for the side
effects they need. They'd rather throw fifteen booleans in there and
hope they can cover every possibility!
End rant.
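
For what it's worth, a formal state machine needn't be heavyweight.
A minimal sketch in plain Common Lisp, with hypothetical states and
events:

(defvar *state* :idle)

(defun step-machine (event)
  "Advance *STATE* according to EVENT."
  (setf *state*
        (ecase *state*
          (:idle    (ecase event (:start :running)))
          (:running (ecase event (:pause :paused) (:stop :idle)))
          (:paused  (ecase event (:resume :running) (:stop :idle))))))

ECASE signals an error on any state/event pair that isn't handled,
which is rather the point: every possibility is covered explicitly
instead of hoped for.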

Regardless of the language used on an actual product, training people
in functional programming teaches them the skills they need when
writing large-scale concurrent apps, or small single-threaded apps, or
any code that they don't want to be patching for the next 30 years.

BTW: Has anyone done any hard real-time work using Lisp? How'd it go?

From: Alex Mizrahi on
(message (Hello 'sailormoontw(a)gmail.com)
(you :wrote :on '(8 Jan 2007 05:21:24 -0800))
(

s> From this link
s> http://itpro.nikkeibp.co.jp/a/it/alacarte/iv1221/matsumoto_1.shtml
s> (Note: Japanese)
s> Matz, the creator of Ruby, said that in the next 10 years desktop
s> computers with 64 or 128 cores will be common. It's nearly impossible
s> to simply write that many threads by hand; the parallelization should
s> be done automatically, so maybe a functional language will do a better
s> job at parallel programming than a procedural language like C or Ruby.

Nobody needs just 64 cores; they need that many cores FOR A SPECIFIC
TASK. If it's a web server, it's easily parallelizable -- you can
handle each request in a separate thread. There are well-known
parallelization techniques for scientific tasks that involve large
matrices, etc.
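
A sketch of that thread-per-request pattern, assuming the
bordeaux-threads library (ACCEPT-CONNECTION and HANDLE-REQUEST are
hypothetical stand-ins for whatever the server provides):

(defun serve (listener)
  (loop
    (let ((connection (accept-connection listener)))
      ;; One thread per request: with 64 cores, 64 requests really do
      ;; execute simultaneously, with no change to the handler code.
      (bt:make-thread (lambda () (handle-request connection))
                      :name "request-handler"))))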

So, actually, there's not much need for automatic parallel
programming. Tasks requiring high performance ALREADY run in parallel.
I bet that if you ran some typical single-core task through some magic
auto-parallelizing language, you wouldn't see significant benefits.

BTW, you don't have to wait 10 years. You can buy a GeForce 8800 for
$500; it has hundreds of computing cores.
http://developer.nvidia.com/object/cuda.html
---
What is CUDA technology?

GPU computing with CUDA technology is an innovative combination of
computing features in next-generation NVIDIA GPUs that are accessed
through a standard "C" language. Where previous-generation GPUs were
based on "streaming shader programs", CUDA programmers use "C" to
create programs called threads that are similar to multi-threading
programs on traditional CPUs. In contrast to multi-core CPUs, where
only a few threads execute at the same time, NVIDIA GPUs featuring
CUDA technology process thousands of threads simultaneously, enabling
a higher capacity of information flow.
---

It would be nice to program that CUDA thing in Lisp instead of C :)

)
(With-best-regards '(Alex Mizrahi) :aka 'killer_storm)
"People who lust for the Feel of keys on their fingertips (c) Inity")


From: Tim Bradshaw on
sailormoontw(a)gmail.com wrote:

> Matz, the creator of Ruby, said that in the next 10 years desktop
> computers with 64 or 128 cores will be common. It's nearly impossible
> to simply write that many threads by hand; the parallelization should
> be done automatically, so maybe a functional language will do a better
> job at parallel programming than a procedural language like C or Ruby.

I think the only interesting bit of this is what people will do with
this on desktops. Server applications frequently have plenty of
parallelism to exploit, and people are becoming very sensitive to
power issues (not, I think, out of any sense of responsibility, but
because it is now often hard to fill racks in data centres without
exceeding power & cooling budgets, and both are also expensive, of
course). This is finally driving people towards multiple-core systems
clocked less aggressively (the roughly quadratic dependence of power
on clock speed really makes a difference here).

Even when there is not enough parallelism to exploit, you can use
multiple-core machines to consolidate lots of less-threaded
applications efficiently, either using one of the somewhat horrible
machine-level virtualisation things or something more lightweight like
Solaris zones.

Of course, all this is predicated on there being enough memory
bandwidth that everything doesn't just starve. I dunno how good
current seriously-multicore systems are in this respect.

But on the desktop most of these applications aren't very interesting,
so finding something for a seriously multicore system to do might be
more of a challenge. There is, of course, the argument that it doesn't
matter very much: it's expensive to provide enough memory bandwidth,
and desktop applications are often much more latency-sensitive than
server ones, yet desktop processors somehow ship with much less cache
than those for servers, so one has to wonder whether anyone actually
notices. I suspect desktops already spend most of their time either
idle or stalled waiting for memory. Adding more cores will just mean
they spend more time doing both. Not that this will stop anyone, of
course.

From: Pascal Bourguignon on
"Tim Bradshaw" <tfb+google(a)tfeb.org> writes:
> I think the only interesting bit of this is what people will do with
> this on desktops.

Simulations, 3D, virtual worlds, Neural Networks, (game) AI.

Indeed, it would be nice to add several parallel memory buses, or just
have big L2 or L3 caches.
With a 64MB L2 cache PER core and 64 cores, you have 4GB of RAM on chip.

--
__Pascal Bourguignon__ http://www.informatimago.com/
Small brave carnivores
Kill pine cones and mosquitoes
Fear vacuum cleaner