From: Anne & Lynn Wheeler on
Anne & Lynn Wheeler <lynn(a)garlic.com> writes:
> actually there was some amount of work involving Sowa (when he was at
> IBM) and semantic networks in the 70s.
> http://www.jfsowa.com/

re:
http://www.garlic.com/~lynn/2006v.html#47 Why so little parallelism?

i did some of the infrastructure work on the original relational/sql
system/r implementation
http://www.garlic.com/~lynn/subtopic.html#systemr

part of it was based on the virtual memory management work referenced
here
http://www.garlic.com/~lynn/2006v.html#36 Why these original FORTAN quirks?

porting from the cp67 base to the vm370 base. even though the standard (vm370)
product only released a small subset, the rest of the features were
available internally.

at the same time i was doing various work on system/r, i also got
involved in doing some of the infrastructure work on some of the
Sowa-related semantic network stuff.

while some of the results were used in internal tools, it never reached
a product stage.

in recent years, I've privately re-implemented some of the stuff from
scratch.
http://www.garlic.com/~lynn/index.html

it is what I use to maintain the rfc index information
http://www.garlic.com/~lynn/rfcietff.htm

as well as the merged taxonomy and glossary information
http://www.garlic.com/~lynn/index.html#glosnote
From: Nick Maclaren on

In article <m3fybv4hys.fsf(a)lhwlinux.garlic.com>,
Anne & Lynn Wheeler <lynn(a)garlic.com> writes:
|>
|> old email from somebody in the menlo park knowledge based group.
|>
|> To: wheeler
|> Date: 16 February 1988, 13:33:42 PST
|>
|> I am looking for pathlength guidelines for the interactive frontend
|> (scrolling, window moving etc) for the knowledge based systems
|> project. We currently think the pathlength for a scrolling operation
|> may be as much as 40-50K instructions, and are concerned that will
|> result in very sluggish operation on what we assume will be a loaded
|> system (the knowledge processing itself should be compute bound and
|> non-interactive.) Do you have any rules of thumb I can pass on to our
|> developers?

When the Athena project was in full swing, someone who should have
known better claimed that IBM was going to implement it under MVS.
I spoke to some systems people and said "no, IBM isn't going to"
and pointed out that the MINIMUM path for handling one character
typed at the keyboard involved (if I recall) 10 context switches,
and most of them were BEFORE it could be displayed. Well, I wuz
rite :-)

It was about the era of your communication, too.

It may have been implemented since, but constraints have changed.
X remains a system killer, even under Unix, and Microsoft's clone
of Presentation Manager is no better (well, PM itself wasn't much
better).


Regards,
Nick Maclaren.
From: Stefan Monnier on
> Language people are part of the problem....

Agreed. But I think the problem is not that we (I'm a language people)
haven't found the right abstractions to make parallel programming easier.
There are no such abstractions.

Programming languages have been mildly successful at making it possible/easy
to write *correct* parallel programs. While that's sufficient for
concurrent programming, it's not sufficient for parallel programming, where
performance is crucial.

I think language people need to start looking at how we can add performance
to the language's semantics. The reason why parallel programming is hard,
I believe, is in part because of all the work it takes to relate the poor
performance of your program to its source code.

E.g. if you have a piece of code that says something like

PARALLEL-FOR(20%) i = 1 TO 50 WITH DO
dosomething with i
DONE

the compiler needs to be able to estimate the efficiency of the code and burp
with a warning if it turns out that those 50 threads will be busy less than
20% of the time (e.g. because their running time varies too much so they'll
wait for the slowest iteration, or because the rest of the time is taken by
communication, ...).

Of course it is tremendously difficult for the compiler to be able to
estimate efficiency, and it may require more programmer annotations and/or
restrictions. And maybe some of the performance checks need to be moved to
run-time. And of course my example above is overly simplistic.
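
As an illustration only, here's a minimal sketch of what such a run-time
check might look like, assuming Fortran with OpenMP; the dosomething
routine, the 20% threshold and all the names are just stand-ins taken
from the toy example above, not a real proposal:

  program parallel_for_check
    use omp_lib
    implicit none
    integer, parameter :: n = 50
    real(8) :: busy(n), t0, wall_start, wall, utilization
    integer :: i

    wall_start = omp_get_wtime()
  !$omp parallel do private(t0)
    do i = 1, n
       t0 = omp_get_wtime()
       call dosomething(i)              ! placeholder for the loop body
       busy(i) = omp_get_wtime() - t0   ! time this iteration kept a thread busy
    end do
  !$omp end parallel do
    wall = omp_get_wtime() - wall_start

    ! fraction of the available thread-time actually spent doing work
    ! (assumes the loop ran on the default number of threads)
    utilization = sum(busy) / (wall * omp_get_max_threads())
    if (utilization < 0.20d0) then
       print *, 'warning: threads busy only', 100*utilization, '% of the time'
    end if

  contains
    subroutine dosomething(i)
      integer, intent(in) :: i
      ! stand-in for real work
    end subroutine dosomething
  end program parallel_for_check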


Stefan "not knowing what he's talking about (I warned you: I'm
a language people)"
From: Del Cecchi on
Stefan Monnier wrote:
>>Language people are part of the problem....
>
>
> Agreed. But I think the problem is not that we (I'm a language people)
> haven't found the right abstractions to make parallel programming easier.
> There are no such abstractions.
>
> Programming languages have been mildly successful at making it possible/easy
> to write *correct* parallel programs. While that's sufficient for
> concurrent programming, it's not sufficient for parallel programming, where
> performance is crucial.
>
> I think language people need to start looking at how we can add performance
> to the language's semantics. The reason why parallel programming is hard,
> I believe, is in part because of all the work it takes to relate the poor
> performance of your program to its source code.
>
> E.g. if you have a piece of code that says something like
>
> PARALLEL-FOR(20%) i = 1 TO 50 WITH DO
> dosomething with i
> DONE
>
> the compiler needs to be able to estimate the efficiency of the code and burp
> with a warning if it turns out that those 50 threads will be busy less than
> 20% of the time (e.g. because their running time varies too much so they'll
> wait for the slowest iteration, or because the rest of the time is taken by
> communication, ...).
>
In the words of a past president.... "There you go again"
If I have a vector or an array and a parallel paradigm, why on earth
would there be a for loop? Isn't looping stone-age sequential thinking?
:-) Even APL in the 70s did away with that. And threads? Aren't
they just parallel sugar on a serial mechanism?

Of course a real programmer can write fortran in any language, as we
used to say.

> Of course it is tremendously difficult for the compiler to be able to
> estimate efficiency, and it may require more programmer annotations and/or
> restrictions. And maybe some of the performance checks need to be moved to
> run-time. And of course my example above is overly simplistic.
>
>
> Stefan "not knowing what he's talking about (I warned you: I'm
> a language people)"


--
Del Cecchi
"This post is my own and doesn�t necessarily represent IBM�s positions,
strategies or opinions.�
From: Jan Vorbrüggen on
>> E.g. if you have a piece of code that says something like
>>
>> PARALLEL-FOR(20%) i = 1 TO 50 WITH DO
>> dosomething with i
>> DONE
> In the words of a past president.... "There you go again"
> If I have a vector or an array and a parallel paradigm, why on earth
> would there be a for loop? Isn't looping stone age sequential thinking?
[...]
> Of course a real programmer can write fortran in any language, as we
> used to say.

In current Fortran, one would likely use an array expression, no loops or
threads in sight. The compiler is completely free (within the defined
semantics of the expression) to parallelize as it pleases.
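
Something along these lines, for instance (a minimal sketch; the
saxpy-style operation is made up purely for illustration):

  program array_expr
    implicit none
    integer, parameter :: n = 50
    real :: a(n), b(n), c(n)
    integer :: i

    b = [(real(i), i = 1, n)]    ! array constructor with an implied DO
    c = 2.0

    ! whole-array expression: no loop, no threads; the compiler is free
    ! to vectorize or parallelize it within the semantics of the language
    a = 2.5*b + c

    print *, a(1), a(n)
  end program array_expr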

Jan