From: Eugene Miya on
In article <_oOdneG2v-ejCODYnZ2dnUVZ_rOqnZ2d(a)metrocastcablevision.com>,
Bill Todd <billtodd(a)metrocast.net> wrote:
>Del Cecchi wrote:
>> And Threads? Aren't
>> they just parallel sugar on a serial mechanism?
>
>Not when each is closely associated with a separate hardware execution
>context.

Threads are just lightweight processes.
Most people don't see the baggage that gets copied when an OS like Unix
calls fork(2). And fork(2) is lightweight compared to the old-style VMS
spawn and the IBM equivalents. And Dijkstra and others ignored lots of
that stuff when they first wrote about it.
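
A minimal POSIX sketch of the contrast (my illustration; compile with
-pthread, error handling omitted): fork(2) hands the child a copy of
the parent's context, while a thread shares it.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    static int shared = 0;

    static void *worker(void *arg)
    {
        (void)arg;
        shared++;                   /* same address space as main() */
        return NULL;
    }

    int main(void)
    {
        pid_t pid = fork();         /* child gets a copy of the process */
        if (pid == 0) {
            shared++;               /* touches the child's copy only */
            _exit(0);
        }
        waitpid(pid, NULL, 0);
        printf("after fork:   shared = %d\n", shared);   /* still 0 */

        pthread_t t;                /* a thread shares the baggage instead */
        pthread_create(&t, NULL, worker, NULL);
        pthread_join(t, NULL);
        printf("after thread: shared = %d\n", shared);   /* now 1 */
        return 0;
    }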


>And when multiple threads are used on a single hardware
>execution context to avoid explicitly asynchronous processing (e.g., to
>let the processor keep doing useful work on something else while one
>logical thread of execution is waiting for something to happen - without
>disturbing that logical serial process flow), that seems more like
>serial sugar on a parallel mechanism to me.

Distributed memory or shared memory?

>Until individual processors stop being serial in the nature of the way
>they execute code, I'm not sure how feasible getting rid of ideas like
>'threads' will be (at least at some level, though I've never
>particularly liked the often somewhat inefficient use of them to avoid
>explicit asynchrony).

What's their nature?

--
From: Eugene Miya on
In article <elk5as$6pl$1(a)gemini.csx.cam.ac.uk>,
Nick Maclaren <nmm1(a)cus.cam.ac.uk> wrote:

I thought you said you were leaving this Usenet thread?

>In article <4u5ga8F16bmv8U1(a)mid.individual.net>,
>Del Cecchi <cecchinospam(a)us.ibm.com> writes:
>|> In the words of a past president.... "There you go again"
>|> If I have a vector or an array and a parallel paradigm, why on earth
>|> would there be a for loop? Isn't looping stone age sequential thinking?
>|> :-) Even APL in the 70's did away with that. And Threads? Aren't
>|> they just parallel sugar on a serial mechanism?
>
>1970s? 1960s, surely!
>
>It depends on what you mean by threads, but you are correct to some
>extent. Hoare's title is still appropriate.

Oh yeah?

>|> Of course a real programmer can write fortran in any language, as we
>|> used to say.
>
>As Jan Vorbrueggen says, Fortran introduced array operations as a
>first-class concept over 15 years ago!

1991?
Not clear.



Yeah, I am trying to figure out your line of reasoning, and it's like
pulling teeth.

%z Article
%K Hoare78
%A C. A. R. Hoare
%T Communicating Sequential Processes
%J Communications of the ACM (CACM)
%V 21
%N 8
%D August 1978
%P 666-677
%K bhibbard, RBBRS953, frecommended91, hcc, ak,
programming, programming languages, programming primitives,
program structures, parallel programming, concurrency, input, output,
guarded commands, nondeterminacy, coroutines, procedures, multiple entries,
multiple exits, classes, data representations, recursion,
conditional critical regions, monitors, iterative arrays, CSP,
CR categories: 4.20, 4.22, 4.32
maeder bib: synchronisation and concurrency in processes,
parallel programming,
guarded commands, parbegin, synchronous message-passing.
messages, distributed processing, parallel processing,
%X This paper was later expanded into an excellent book by Hoare,
published by Prentice-Hall.
This paper is reproduced in Kuhn and Padua's (1981, IEEE)
survey "Tutorial on Parallel Processing."
Reproduced in "Distributed Computing: Concepts and Implementations"
edited by
McEntire, O'Reilly and Larson, IEEE, 1984.
%X Reproduced in
Sol M. Shatz and Jia-Ping Wang, eds.
"Tutorial: Distributed-Software Engineering,"
IEEE, Los Alamitos, 1989, pages 136-147.
%X Somewhat dated.
%X Hoare's original CSP paper; not very mathematical.
%X A simple programming paradigm based on messages, whence came Occam
and the Transputer.
%X WDH: An important paper, but a conservative approach to parallel
programming.
%X Chosen because they are clear descriptions of important theoretical
issues.
I would have to check which of the latter.
%s darrell(a)cypress (Sun Mar 14 14:51:09 1993)

%Q ISO/ANSI
%T Information technology -
Programming languages - Fortran - Part 1: Base language
%R ISO/IEC 1539-1:1997(E)
%I CBEMA/ANSI
%C Washington DC
%D 1997
%K Fortran 95 standard,
%X The 1539 bit is for Fortran,
the -1 is for part 1 (the base language),
the 1997 is the version (that's when "Fortran 95" was formally published),
and the E is for English.

%A L. G. Valiant
%T A Scheme for Fast Parallel Communication
%J SIAM Journal on Computing
%V 11
%N 2
%D May 1982
%P 350-361
%K BSP, bulk synchronous parallelism,
%K graph partitioning,
%X Consider $N = 2^n$ nodes connected by wires
to make an $n$-dimensional binary cube. Suppose that initially the
nodes contain one packet each addressed to distinct nodes of the cube.
We show that there is a distributed randomized algorithm that can
route every packet to its destination without two packets passing down
the same wire at any time, and finishes within time $O(\log N)$
with overwhelming probability for all such routing requests. Each
packet carries with it $O(\log N)$ bits of bookkeeping information.
No other communication between the nodes takes place.
%X The algorithm offers the only scheme known for realizing arbitrary
permutations in a sparse $N$-node network in $O(\log N)$ time and
has evident applications in the design of general-purpose parallel computers.
%X WDH: Probabilistic limited-storage routing algorithms for an n-cube.
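
A toy sketch of the two-phase idea (my illustration, not the paper's
code; congestion, queueing, and the probabilistic analysis are not
modeled): send each packet to a random intermediate node first, then
to its true destination, fixing one address bit per hop.

    #include <stdio.h>
    #include <stdlib.h>

    #define DIM 4                       /* n = 4, so N = 2^n = 16 nodes */

    /* Greedy "bit fixing": flip differing address bits, low to high. */
    static void bit_fix(unsigned src, unsigned dst)
    {
        unsigned cur = src;
        for (int d = 0; d < DIM; d++)
            if ((cur ^ dst) & (1u << d)) {
                cur ^= 1u << d;         /* one hop along dimension d */
                printf(" -> %2u", cur);
            }
    }

    int main(void)
    {
        unsigned src = 3, dst = 12;
        unsigned mid = (unsigned)rand() % (1u << DIM);  /* random target */
        printf("%2u", src);
        bit_fix(src, mid);              /* phase 1: to intermediate node */
        bit_fix(mid, dst);              /* phase 2: to real destination */
        printf("   (via %u)\n", mid);
        return 0;
    }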

%A Marc Snir
%A Steve Otto
%A Steven Huss-Lederman
%A David Walker
%A Jack Dongarra
%T MPI -- The Complete Reference
%S Scientific and Engineering Computation Series
%V 1, The MPI Core, 2nd ed.
%I MIT Press
%C Cambridge, MA
%D 1998
%K book, text,
%K grecommended(06): ast,
%X (AST) The title says it all.
If you want to learn to program in MPI, look here.
The book covers point-to-point and collective communication,
communicators, environmental management, profiling, and more.
%X Book reviewed in Scientific Programming, v13, #1, '05, pp. 57-60.
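
For flavor, a minimal point-to-point sketch of the kind the book
teaches (my example, assuming any MPI implementation; compile with
mpicc, run with mpirun -np 2):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, value;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0) {
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 got %d from rank 0\n", value);
        }
        MPI_Finalize();
        return 0;
    }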

%A William Gropp
%A Steven Huss-Lederman
%A Andrew Lumsdaine
%A Ewing Lusk
%A Bill Nitzberg
%A William Saphir
%A Marc Snir
%T MPI -- The Complete Reference
%S Scientific and Engineering Computation Series
%V 2, The MPI Extensions, 2nd ed.
%I MIT Press
%C Cambridge, MA
%D 1998
%K book, text,
%K grecommended(06): ast,
%X (AST) The title says it all.
If you want to learn to program in MPI, look here.
The book covers point-to-point and collective communication,
communicators, environmental management, profiling, and more.
%X Book reviewed in Scientific Programming, v13, #1, '05, pp. 57-60.

%A Jaeyoung Choi
%A Jack J. Dongarra
%A Roldan Pozo
%A David W. Walker
%T ScaLAPACK: A Scalable Linear Algebra Library for
Distributed Memory Concurrent Computers
%J Proc. Frontiers '92: Fourth Symp. on Massively Parallel Computation
%I IEEE
%C McLean, VA
%D October 1992
%P 120-127
%K numerical applications and algorithms,



--
From: jacko on

Eugene Miya wrote:
> In article <_oOdneG2v-ejCODYnZ2dnUVZ_rOqnZ2d(a)metrocastcablevision.com>,
> Bill Todd <billtodd(a)metrocast.net> wrote:
> >Del Cecchi wrote:
> >> And Threads? Aren't
> >> they just parallel sugar on a serial mechanism?
> >
> >Not when each is closely associated with a separate hardware execution
> >context.
>
> Threads are just lightweight processes.
> Most people don't see the baggage that gets copied when an OS like Unix
> calls fork(2). And fork(2) is lightweight compared to the old-style VMS
> spawn and the IBM equivalents. And Dijkstra and others ignored lots of
> that stuff when they first wrote about it.

more baggage equals more bombs.

>
> >And when multiple threads are used on a single hardware
> >execution context to avoid explicitly asynchronous processing (e.g., to
> >let the processor keep doing useful work on something else while one
> >logical thread of execution is waiting for something to happen - without
> >disturbing that logical serial process flow), that seems more like
> >serial sugar on a parallel mechanism to me.
>
> Distributed memory or shared memory?

Any memory which ain't that slow and hot stuff.

> >Until individual processors stop being serial in the nature of the way
> >they execute code, I'm not sure how feasible getting rid of ideas like
> >'threads' will be (at least at some level, though I've never
> >particularly liked the often somewhat inefficient use of them to avoid
> >explicit asynchrony).
>
> What's their nature?

new anti sequential PrologAPL i assume?

From: Nick Maclaren on

In article <457d9f0f$1(a)darkstar>, eugene(a)cse.ucsc.edu (Eugene Miya) writes:
|>
|> I thought you said you were leaving this Usenet thread?

I said that I wasn't going to respond to your nonsense any longer,
which is not the same thing. I am, however, prepared to help with
your education.

|> >It depends on what you mean by threads, but you are correct to some
|> >extent. Hoare's title is still appropriate.
|>
|> Oh yeah?
|>
|> >|> Of course a real programmer can write fortran in any language, as we
|> >|> used to say.
|> >
|> >As Jan Vorbrueggen says, Fortran introduced array operations as a
|> >first-class concept over 15 years ago!
|>
|> 1991?
|> Not clear.
|>
|> Yeah, I am trying to figure out your line of reasoning, and it's like
|> pulling teeth.
|>
|> %A C. A. R. Hoare
|> %T Communicating Sequential Processes

Yup. You've got it. Communicating. Sequential. Processes. That is
precisely what most modern threads are, so I think that his title is
still appropriate. Don't you?
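
A minimal sketch of that correspondence (my illustration, POSIX, error
handling omitted): two sequential routines that share no state and
interact only over a channel -- here a pipe stands in for CSP's
synchronous channel.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static int chan[2];                  /* the "channel": a pipe */

    static void *producer(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 3; i++)
            write(chan[1], &i, sizeof i);     /* CSP: consumer ! i */
        close(chan[1]);                  /* fd table is shared, so the
                                            reader will see EOF */
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        int v;
        pipe(chan);
        pthread_create(&t, NULL, producer, NULL);
        while (read(chan[0], &v, sizeof v) == (ssize_t)sizeof v)
            printf("received %d\n", v);  /* CSP: producer ? v */
        pthread_join(t, NULL);
        return 0;
    }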

|> Programming languages - Fortran - Part 1: Base language
|> %R ISO/IEC 1539-1:1997(E)
|> the 1997 is the version (that's when "Fortran 95" was formally published),
|> and the E is for English.

I don't know what you were up to in the early 1990s, but "Fortran 95"
was a minor tweak to Fortran 90, which was the revision that added
the array features. So, try ISO/IEC 1539:1991(E). Yup. 1991.
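
For what it's worth, a one-line illustration of what those array
features buy (my example, not from the standard): the whole-array
statement Fortran 90 writes as C = A + B still takes an explicit
element loop in C.

    #include <stdio.h>

    #define N 5

    int main(void)
    {
        double a[N] = {1, 2, 3, 4, 5};
        double b[N] = {10, 20, 30, 40, 50};
        double c[N];
        for (int i = 0; i < N; i++)      /* Fortran 90: c = a + b */
            c[i] = a[i] + b[i];
        for (int i = 0; i < N; i++)
            printf("%g\n", c[i]);
        return 0;
    }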


Regards,
Nick Maclaren.
From: Eugene Miya on
Del Cecchi wrote:
>> >> And Threads? Aren't
>> >> they just parallel sugar on a serial mechanism?

In article <_oOdneG2v-ejCODYnZ2dnUVZ_rOqnZ2d(a)metrocastcablevision.com>,
Bill Todd <billtodd(a)metrocast.net> wrote:
>> >Not when each is closely associated with a separate hardware execution
>> >context.

Eugene Miya wrote:
>> Threads are just lightweight processes.
>> Most people don't see the baggage that gets copied when an OS like Unix
>> calls fork(2). And fork(2) is lightweight compared to the old-style VMS
>> spawn and the IBM equivalents. And Dijkstra and others ignored lots of
>> that stuff when they first wrote about it.

In article <1165861096.174435.238830(a)73g2000cwn.googlegroups.com>,
jacko <jackokring(a)gmail.com> wrote:
>more baggage equals more bombs.

Metaphoric? Or real?
Process creation and context switching involve a lot of copying.

>> >And when multiple threads are used on a single hardware
>> >execution context to avoid explicitly asynchronous processing (e.g., to
>> >let the processor keep doing useful work on something else while one
>> >logical thread of execution is waiting for something to happen - without
>> >disturbing that logical serial process flow), that seems more like
>> >serial sugar on a parallel mechanism to me.
>>
>> Distributed memory or shared memory?
>
>Any memory which ain't that slow and hot stuff.

How?
Which?


>> >Until individual processors stop being serial in the nature of the way
>> >they execute code, I'm not sure how feasible getting rid of ideas like
>> >'threads' will be (at least at some level, though I've never
>> >particularly liked the often somewhat inefficient use of them to avoid
>> >explicit asynchrony).
>> What's their nature?
>
>new anti sequential PrologAPL i assume?

Never heard of it. So where do people find out more?
I stopped following the New Generation Prolog work some years ago.
Are there still Inference Engines being made in Japan?
And when was APL tacked on?

--