From: Eugene Miya on
In article <4u5ga8F16bmv8U1(a)mid.individual.net>,
Del Cecchi <cecchinospam(a)us.ibm.com> wrote:
>Stefan Monnier wrote:
>>>Language people are part of the problem....
>>
>> Agreed. But I think the problem is not that we (I'm a language people)
>> haven't found the right abstractions to make parallel programming easier.
>> There are no such abstractions.

Numerous syntactic parallelism constructs have been attempted.
PARDO, PARFOR, APL, fork-join, ad nauseam.
They aren't the problem. Whether they are the solution is another issue.
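
For concreteness, a minimal sketch of the genre in its current OpenMP
parallel-do form (my example; the array names are arbitrary):

    program pardo_example
      implicit none
      integer, parameter :: n = 1000000
      real :: a(n), b(n), c(n)
      integer :: i
      b = 1.0
      c = 2.0
      !$omp parallel do
      do i = 1, n
         a(i) = b(i) + c(i)   ! iterations are independent: safe in parallel
      end do
      !$omp end parallel do
      print *, a(1), a(n)
    end program pardo_example

The syntax is the easy part; whether the iterations really are
independent is the programmer's problem.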

>> Programming languages have been mildly successful at making it possible/easy
>> to write *correct* parallel programs. While that's sufficient for
>> concurrent programming, it's not sufficient for parallel programming, where
>> performance is crucial.

Depends on whether and how you distinguish concurrent from parallel.
And some don't. And that can be a problem.


>> I think language people need to start looking at how we can add performance
>> to the language's semantics. The reason why parallel programming is hard,
>> I believe, is in part because of all the work it takes to relate the poor
>> performance of your program to its source code.
>>
>> E.g. if you have a piece of code that says something like
>>
>> PARALLEL-FOR(20%) i = 1 TO 50 WITH DO
>> dosomething with i
>> DONE

What's 20%?

As the Cray and subsequent guys have learned:
you are assuming, for instance, no interactions of i on the LHS with
i-1 on the RHS. And it isn't safe unless you are assured of having at
least half of all available memory (double buffering: not always
possible). And don't think 50 iterations; think a big number.
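
To make that LHS/RHS interaction concrete, a minimal sketch (my
example, not Eugene's): the recurrence below cannot legally run in
parallel, because iteration i reads the value iteration i-1 wrote.

    program recurrence
      implicit none
      integer, parameter :: n = 1000000
      real :: a(n), b(n)
      integer :: i
      a(1) = 0.0
      b = 1.0
      do i = 2, n
         ! loop-carried dependence: a(i) on the LHS needs a(i-1) on the
         ! RHS, so the iterations must execute in order
         a(i) = a(i-1) + b(i)
      end do
      print *, a(n)
    end program recurrence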

A couple of decades ago, Dave Kuck gave a detailed survey of all the
problems that needed solving in parallel software as the opening
session of an ICPP. Unfortunately that paper is hard to find (it's
from about 1975, plus or minus a year or two).

So you are at about 1974, non-UIUC compiler technology.

>> the compiler needs to be able to estimate the efficiency of the code and burp
>> with a warning if it turns out that those 50 threads will be busy less than
>> 20% of the time (e.g. because their running time varies too much so they'll
>> wait for the slowest iteration, or because the rest of the time is taken by
>> communication, ...).

I still don't get your use of 20%.


>In the words of a past president.... "There you go again"
>If I have a vector or an array and a parallel paradigm, why on earth
>would there be a for loop?

Loop overloading.
Worked financially for CRI and even IBM for a while.

>Isn't looping stone age sequential thinking?

Lots of sequential, iterative algorithms.
;^)

> :-) Even APL in the 70's did away with that. And Threads? Aren't
>they just parallel sugar on a serial mechanism?
>
>Of course a real programmer can write fortran in any language, as we
>used to say.

True, true.

>> Of course it is tremendously difficult for the compiler to be able to
>> estimate efficiency and it may require more programmer annotations and or
>> restrictions. And maybe some of the performance checks need to be moved to
>> run-time. And of course my above example is overly simplistic.

I'm not certain how compilers would estimate efficiency. The problem
is barely recognized in the community ("cycles for free").
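
One way to read the 20% is as a run-time check rather than a
compile-time estimate. A minimal sketch of such a check, assuming
OpenMP and a hypothetical dosomething routine standing in for the
real work:

    program efficiency_check
      use omp_lib
      implicit none
      integer :: i
      real(8) :: t0, t1, wall, busy
      real(8), parameter :: threshold = 0.20d0  ! the "20%" in the example

      busy = 0.0d0
      t0 = omp_get_wtime()
      !$omp parallel do private(t1) reduction(+:busy)
      do i = 1, 50
         t1 = omp_get_wtime()
         call dosomething(i)          ! hypothetical per-iteration work
         busy = busy + (omp_get_wtime() - t1)
      end do
      !$omp end parallel do
      wall = omp_get_wtime() - t0

      ! warn if the threads were busy less than 20% of the elapsed time
      if (busy < threshold * wall * omp_get_max_threads()) then
         print *, 'warning: loop ran below the requested efficiency'
      end if

    contains

      subroutine dosomething(i)
        integer, intent(in) :: i
        ! placeholder for the actual work on iteration i
      end subroutine dosomething

    end program efficiency_check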

>> Stefan "not knowing what he's talking about (I warned you: I'm
>> a language people)"
>
>
>--
>Del Cecchi
>"This post is my own and doesn�t necessarily represent IBM�s positions,
>strategies or opinions.�

Lots of caveats.

--
From: Eugene Miya on
In article <4u5gfcF16n2j9U1(a)mid.individual.net>,
Jan Vorbrüggen <jvorbrueggen(a)not-mediasec.de> wrote:
>>> PARALLEL-FOR
>>> dosomething with i
>
>In current Fortran, one would likely use an array expression, no loops or
>threads in sight. The compiler is completely free (within the defined
>semantics of the expression) to parallelize as it pleases.

Or not.

That's not the objective.

I have to catch up with this thread.
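
For reference, a minimal sketch of the array-expression form Jan
describes (my example): the single whole-array statement replaces the
loop, and since the language leaves the element evaluation order
unspecified, the compiler is free to parallelize it. Or not.

    program array_expr
      implicit none
      integer, parameter :: n = 1000000
      real :: a(n), b(n), c(n)
      b = 1.0
      c = 2.0
      ! one whole-array expression: no loop, no threads in sight
      a = b + c
      print *, a(1), a(n)
    end program array_expr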

--
From: Eugene Miya on
In article <el1suk$4co$1(a)gemini.csx.cam.ac.uk>,
Nick Maclaren <nmm1(a)cus.cam.ac.uk> wrote:

I thought you said that you were leaving this thread?

Just like my Italian post-doc.

>In article <m3fybv4hys.fsf(a)lhwlinux.garlic.com>,
>Anne & Lynn Wheeler <lynn(a)garlic.com> writes:
>|> old email from somebody in the menlo park knowledge based group.

Misdirected part of the thread, wherein Lynn was talking about IBM in the
80s and I was relaying some history which John McCarthy told me about
1950s IBM.


>When the Athena project was in full swing, someone who should have
>known better claimed that IBM was going to implement it under MVS.
>I spoke to some systems people and said "no, IBM isn't going to"
>and pointed out that the MINIMUM path for handling one character
>typed at the keyboard involved (if I recall) 10 context switches,
>and most of them were BEFORE it could be displayed. Well, I wuz
>rite :-)
>
>It was about the era of your communication, too.
>
>It may have been implemented since, but constraints have changed.
>X remains a system killer, even under Unix, and Microsoft's clone
>of Presentation Manager is no better (well, PM itself wasn't much
>better).

Huh. What's this got to do with parallelism?
Athena was a loser system and IBM's "help" didn't.
That was fairly evident at MIT even at the time.

--
From: Bill Todd on
Del Cecchi wrote:

....

> And Threads? Aren't
> they just parallel sugar on a serial mechanism?

Not when each is closely associated with a separate hardware execution
context. And when multiple threads are used on a single hardware
execution context to avoid explicitly asynchronous processing (e.g., to
let the processor keep doing useful work on something else while one
logical thread of execution is waiting for something to happen - without
disturbing that logical serial process flow), that seems more like
serial sugar on a parallel mechanism to me.

Until individual processors stop being serial in the nature of the way
they execute code, I'm not sure how feasible getting rid of ideas like
'threads' will be (at least at some level, though I've never
particularly liked the often somewhat inefficient use of them to avoid
explicit asynchrony).

- bill
From: Nick Maclaren on

In article <4u5ga8F16bmv8U1(a)mid.individual.net>,
Del Cecchi <cecchinospam(a)us.ibm.com> writes:
|>
|> In the words of a past president.... "There you go again"
|> If I have a vector or an array and a parallel paradigm, why on earth
|> would there be a for loop? Isn't looping stone age sequential thinking?
|> :-) Even APL in the 70's did away with that. And Threads? Aren't
|> they just parallel sugar on a serial mechanism?

1970s? 1960s, surely!

It depends on what you mean by threads, but you are correct to some
extent. Hoare's title is still appropriate.

|> Of course a real programmer can write fortran in any language, as we
|> used to say.

As Jan Vorbrueggen says, Fortran introduced array operations as a
first-class concept over 15 years ago!


Regards,
Nick Maclaren.