From: Dmitry A. Kazakov
On Wed, 24 Mar 2010 10:55:22 +0100, Georg Bauhaus wrote:

> Ludovic Brenta schrieb:
>> Amine Moulay Ramdane wrote on comp.lang.ada:
>>> Why i am posting here ?
>>> Cause ADA looks like Object Pascal and the algorithms can easily
>>> be ported to ADA.
>>
>> Ada is not Object Pascal. Ada has built-in support for parallel
>> programming. Therefore, porting your Pascal library to Ada is not
>> "easy"; instead one would rewrite your library from scratch, using
>> Ada's built-in features like task types, protected types (for the job
>> queue), arrays of tasks (for the pool itself) and task entries to
>> implement a thread pool. In fact, this has already been done many
>> times.
>
> The lock-free part is the interesting thing.

Yes, but there are two quite different layers:

1. lock-free primitives implemented in pure Ada
2. implementations based on third party libraries / hardware instructions
(and made compatible with Ada tasking on the given platform)

In both cases porting is not trivial, as Ludovic has pointed out. For #1,
which is my favorite, a lock-free design can well be impossible, or less
efficient than a locking solution (e.g. one based on protected objects). It
is very interesting to compare lock-free and locking solutions, because the
results are always surprising.
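
To make that comparison concrete, here is a minimal sketch of the same
shared counter in both styles. The protected object is plain Ada; the
lock-free half assumes an Ada 2022 compiler, whose standard
System.Atomic_Operations packages postdate this thread (older GNAT code
would bind to compiler intrinsics instead). Names here are illustrative.

```ada
with System.Atomic_Operations.Integer_Arithmetic;

package Counters is

   --  Locking style: a protected object.  The compiler typically
   --  implements this with a lock around each protected action.
   protected type Locked_Counter is
      procedure Increment;
      function  Value return Natural;
   private
      N : Natural := 0;
   end Locked_Counter;

   --  Lock-free style: a hardware fetch-and-add via Ada 2022's
   --  System.Atomic_Operations.Integer_Arithmetic.
   type Atomic_Int is new Integer with Atomic;
   package Int_Ops is
      new System.Atomic_Operations.Integer_Arithmetic (Atomic_Int);

   Hits : aliased Atomic_Int := 0;
   --  Increment from any task with:  Int_Ops.Atomic_Add (Hits, 1);

end Counters;

package body Counters is
   protected body Locked_Counter is
      procedure Increment is
      begin
         N := N + 1;
      end Increment;

      function Value return Natural is
      begin
         return N;
      end Value;
   end Locked_Counter;
end Counters;
```

The two look almost interchangeable for a counter; the surprises usually
appear under contention, where the relative cost of the lock versus the
atomic instruction depends heavily on the platform.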

Anyway, I think that the poster, if he really is willing to contribute,
should take some time to consider how the proposed algorithms map onto the
Ada tasking model, especially taking into account that Ada tasking
primitives are higher level than those known in other languages.

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de
From: Georg Bauhaus
Dmitry A. Kazakov schrieb:
> how the proposed algorithms map onto the
> Ada tasking model, especially taking into account that Ada tasking
> primitives are higher level than those known in other languages.

As a side note: it seems anything but easy to explain
the idea of a concurrent language, not a library, and
not CAS things either, as the means to support the programmer
who wishes to express concurrency.
Concurrency is not seen as one of the modes of expression
in language X. Rather, concurrency is seen as an effect
of interweaving concurrency primitives and some algorithm.

What can one do about this?
From: Warren
Georg Bauhaus expounded in news:4baa27f2$0$6770$9b4e6d93@newsspool3.arcor-online.net:

> Dmitry A. Kazakov schrieb:
>> how the proposed algorithms map onto the
>> Ada tasking model, especially taking into account that Ada tasking
>> primitives are higher level than those known in other languages.
>
> As a side note: it seems anything but easy to explain
> the idea of a concurrent language, not a library, and
> not CAS things either, as the means to support the programmer
> who wishes to express concurrency.
> Concurrency is not seen as one of the modes of expression
> in language X. Rather, concurrency is seen as an effect
> of interweaving concurrency primitives and some algorithm.
>
> What can one do about this?

I thought the Cilk project was rather interesting in
its attempt to make C (and C++) more parallel
to take advantage of multi-core CPUs. But the language
still requires that the programmer express the parallel
aspects of the code with some simple language extensions.

As processors eventually move to 128+-way cores, this needs
to change to take full advantage of shortened elapsed
times, obviously. I think this might require a radically
new high-level language.

Another barrier I see to this is the high cost of
starting a new thread and of allocating its stack space.

I was disappointed to learn that the Cilk compiler uses
multiple stacks in the same way that any pthread
implementation would. If a single-threaded version of
the program needs S bytes of stack, a P-CPU threaded
version requires P * S bytes of stack. They do get
clever with stack frames when they perform "work stealing"
on a different CPU, but that is as close as they get
to a cactus stack.

Somehow you gotta make thread startup and shutdown
cheaper. The only other option is to keep a
pool of re-usable threads. But to my mind, the
optimizing compiler is probably in the best position
to parallelize short runs of code.

Warren
From: Maciej Sobczak
On 24 Mar, 17:40, Warren <ve3...(a)gmail.com> wrote:

> Another barrier I see to this is the high cost of
> starting a new thread and stack space allocation.

> Somehow you gotta make thread startup and shutdown
> cheaper.

Why?

The problem of startup/shutdown cost and the number of cores you have
are completely orthogonal.
I see no problem in starting N threads at initialization time, using
them throughout the application lifetime, and then shutting them down at
the end (or never).
The cost of these operations is irrelevant. Make it 10x what it is and
I will still be fine.

If your favorite programming model involves lots of short-running
threads that have to be created and torn down repeatedly, then it has
no relation to multicore. It is just a bad resource usage pattern.
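
That N-workers-at-initialization pattern is exactly what the Ada features
mentioned earlier in the thread (task types, protected types, arrays of
tasks, entries) give almost for free. A minimal sketch, with a protected
object as the job queue and an array of tasks as the pool; names, the
queue capacity, and the pool size are illustrative:

```ada
with Ada.Text_IO;

procedure Pool_Demo is

   subtype Job is Integer;          --  payload; real code would use a
                                    --  richer work item
   Stop : constant Job := -1;       --  poison pill: tells a worker to exit

   --  Protected job queue: Put blocks when full, Take blocks when empty.
   protected Queue is
      entry Put  (J : in  Job);
      entry Take (J : out Job);
   private
      Buffer     : array (1 .. 16) of Job;
      Head, Tail : Positive := 1;
      Count      : Natural  := 0;
   end Queue;

   protected body Queue is
      entry Put (J : in Job) when Count < Buffer'Length is
      begin
         Buffer (Tail) := J;
         Tail  := Tail mod Buffer'Length + 1;
         Count := Count + 1;
      end Put;

      entry Take (J : out Job) when Count > 0 is
      begin
         J := Buffer (Head);
         Head  := Head mod Buffer'Length + 1;
         Count := Count - 1;
      end Take;
   end Queue;

   --  Each worker loops: take a job, process it, exit when poisoned.
   task type Worker;
   task body Worker is
      J : Job;
   begin
      loop
         Queue.Take (J);
         exit when J = Stop;
         Ada.Text_IO.Put_Line ("processed" & Integer'Image (J));
      end loop;
   end Worker;

   Pool : array (1 .. 4) of Worker;  --  the pool itself: 4 workers

begin
   for I in 1 .. 8 loop              --  submit 8 jobs
      Queue.Put (I);
   end loop;
   for W in Pool'Range loop          --  one pill per worker
      Queue.Put (Stop);
   end loop;
end Pool_Demo;
```

Note that the entry barriers do the work of explicit condition variables:
the runtime reevaluates them at the end of each protected action, so no
thread is created or destroyed after elaboration.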

--
Maciej Sobczak * http://www.inspirel.com

YAMI4 - Messaging Solution for Distributed Systems
http://www.inspirel.com/yami4
From: Dmitry A. Kazakov
On Wed, 24 Mar 2010 15:55:45 +0100, Georg Bauhaus wrote:

> Dmitry A. Kazakov schrieb:
>> how the proposed algorithms map onto the
>> Ada tasking model, especially taking into account that Ada tasking
>> primitives are higher level than those known in other languages.
>
> As a side note: it seems anything but easy to explain
> the idea of a concurrent language, not a library, and
> not CAS things either, as the means to support the programmer
> who wishes to express concurrency.

This is a strange claim. A library cannot express concurrency; I mean
that procedural decomposition cannot. There is some magic added which says
that the procedure is called in the context of a thread or process etc.,
for neither is part of a non-concurrent language. So the idea of a
scheduled item with a context in part independent of the rest, and in part
sharing things with other scheduled items, needs a lot of words to explain.

> Concurrency is not seen as one of the modes of expression
> in language X.

That is a design fault of the corresponding language.

Then you will need to specify the semantics of shared objects in the
presence of concurrency anyway. How would you do this *outside* the language?

> Rather, concurrency is seen as an effect
> of interweaving concurrency primitives and some algorithm.

No, concurrent algorithms are quite different from sequential ones. The
same can be said about objects (in the context of OOP).

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de