From: Neville Dempsey on
On Jun 4, 6:07 pm, n...(a)cam.ac.uk wrote:
> In article <4C087DD5.6050...(a)patten-glew.net>,
> Andy 'Krazy' Glew  <ag-n...(a)patten-glew.net> wrote:
>
> >On 6/3/2010 11:58 AM, Robert Myers wrote:
> >> On Jun 2, 12:15 am, Andy 'Krazy' Glew<ag-n...(a)patten-glew.net>  wrote:
> Unfortunately, since the demise of Algol 68, the languages that
> are favoured by the masses have been going in the other direction.
> Fortran 90 has not, but it's now a niche market language.

Algol68g uses standard POSIX threads to implement the 1968 Standard's
PARallel clauses on Linux, Unix, Mac OS X and Windows. Here is a code
snippet from the actual ALGOL 68 standards document; it demonstrates
the original multi-threading that is part of the ALGOL 68 language
definition.

<code>
proc void eat, speak;
sema mouth = level 1;

par begin
   do
      down mouth;
      eat;
      up mouth
   od,
   do
      down mouth;
      speak;
      up mouth
   od
end
</code>

Simply put a PAR before a BEGIN ~, ~, ~, ~ END block, and all the ~
units in the block are executed in parallel.
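
For readers more at home in C, here is a minimal sketch of one
plausible way such a PAR clause could be lowered onto the POSIX
threads that Algol68g uses. The names and structure are illustrative
only, not Algol68g's actual generated code.

<code>
/* Illustrative lowering of the PAR clause above onto POSIX threads.
   The do ... od loops in the original never terminate, so the joins
   below never return; the program runs until killed.
   Compile with: cc -pthread par.c */
#include <pthread.h>
#include <semaphore.h>

static sem_t mouth;                       /* sema mouth = level 1 */

static void eat(void)   { /* ... */ }
static void speak(void) { /* ... */ }

static void *eater(void *arg)
{
    for (;;) {                            /* do ... od */
        sem_wait(&mouth);                 /* down mouth */
        eat();
        sem_post(&mouth);                 /* up mouth */
    }
}

static void *speaker(void *arg)
{
    for (;;) {
        sem_wait(&mouth);
        speak();
        sem_post(&mouth);
    }
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&mouth, 0, 1);                    /* counting semaphore, level 1 */
    pthread_create(&t1, NULL, eater,   NULL);  /* par begin: unit 1, */
    pthread_create(&t2, NULL, speaker, NULL);  /*            unit 2  */
    pthread_join(t1, NULL);                    /* end: wait for all units */
    pthread_join(t2, NULL);
    return 0;
}
</code>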

It is revealing to think that eating and talking can actually require
parallel processing. Another real-world example would be cooking the
dishes for the sumptuous dinner in the first place. :-)

> >> On Jun 2, 12:15 am, Andy 'Krazy' Glew<ag-n...(a)patten-glew.net> wrote:
> [...] Worse,
> the moves towards languages defining a precise abstract machine
> are regarded as obsolete (though Java is an exception), so most
> languages don't have one.

Algol68 examples:
From Russia: "Virtual machine and fault-tolerant hardware platform
(SAMSON)"
We implemented an Algol 68 compiler targeting the SAMSON virtual-machine
code, plus a VM interpreter for all the widespread microprocessors.
* http://www.at-software.com/industrial-electronics/virtual-machine-and-fault-tolerant-hardware-platform.html

From Netherlands:
Tanenbaum, A.S.: Design and Implementation of an Algol 68 Virtual
Machine
http://oai.cwi.nl/oai/asset/9497/9497A.pdf

From Cambridge:
S.R. Bourne, "ZCODE: A Simple Machine", ALGOL68C technical report.

From E. F. Elsworth, University of Aston in Birmingham:
"Compilation via an intermediate language - ALGOL68C and ZCODE"
http://comjnl.oxfordjournals.org/cgi/reprint/22/3/226.pdf

> And, no, I do NOT mean the bolt-on extensions that are so common.
> They sometimes work, just, but never well.  You can't turn a
> Model T Ford into something capable of maintaining 60 MPH, reliably,
> by bolting on any amount of modern kit!

To download Linux's Algol68 Compiler, Interpreter & Runtime:
* http://sourceforge.net/projects/algol68

N joy
NevilleDNZ
From: Andy 'Krazy' Glew on
On 6/3/2010 10:02 PM, Mike Hore wrote:
> Andy 'Krazy' Glew wrote:
>
>> ...
>> Nevertheless, I still like to work with compiler teams. History:...
>
> Thanks for that fascinating stuff, Andy. I'm wondering, where was
> Microsoft while this was going on? Did they use Intel's compiler at all,
> or did they "do it their way?"

Most interactions in my day were with Intel's compiler team. Sure, we worked with Microsoft - but they had their own
priorities, and did not necessarily want to deliver technology on the schedule that Intel needed. If at all.

During P6 development, Intel and Microsoft were not necessarily allied. Microsoft was porting Windows to RISCs. This
resulted in things such as putting MMX in the x87 FP registers - so that an OS change would not be necessary. I.e. so
that Intel could ship MMX without waiting on Microsoft to ship the next OS version.
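
Because the MMX state aliases the x87 registers, the OS's existing FPU save/restore code handled it unchanged; the cost
landed on application code, which must execute EMMS before mixing MMX with x87 floating point. A minimal C sketch of
that obligation (MMX intrinsics, illustrative only):

<code>
/* MMX lives in the x87 register file, so user code must issue EMMS
   (_mm_empty) to hand the registers back before any x87 FP runs. */
#include <mmintrin.h>   /* MMX intrinsics; compile for x86 with -mmmx */

short a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, r[4];

void add4(void)
{
    __m64 va = *(__m64 *)a;              /* load into MMX registers    */
    __m64 vb = *(__m64 *)b;              /* (physically x87 registers) */
    *(__m64 *)r = _mm_add_pi16(va, vb);  /* packed 16-bit add          */
    _mm_empty();                         /* EMMS: restore x87 state    */
}
</code>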

Most of the time, Microsoft was interested in compiler optimizations that would produce real benefits for real
customers, while Intel was interested in optimizations that would get Intel the best benchmark scores. E.g. Intel
pushed aggressive loop unrolling and other optimizations that increased code size, while Microsoft was really
interested in optimizations that reduced code size: (a) to reduce the number of CDs that a program would ship on, but
also (b) because code size really did matter more than other optimizations for the mass market that did not max out memory.
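
To make that trade-off concrete, here is a hedged C sketch of what aggressive unrolling does to code size; the factor
of four below is arbitrary:

<code>
/* Rolled loop: one small body, one branch test per element. */
void saxpy(float *y, const float *x, float a, int n)
{
    for (int i = 0; i < n; i++)
        y[i] += a * x[i];
}

/* Unrolled by 4: fewer branch tests and more scheduling freedom,
   at roughly 4x the code size - the cost Microsoft cared about. */
void saxpy_unrolled(float *y, const float *x, float a, int n)
{
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        y[i]     += a * x[i];
        y[i + 1] += a * x[i + 1];
        y[i + 2] += a * x[i + 2];
        y[i + 3] += a * x[i + 3];
    }
    for (; i < n; i++)          /* epilogue for the leftover elements */
        y[i] += a * x[i];
}
</code>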

For example, Microsoft quite naturally wanted to optimize code for the machines that dominated the mass market. Since
previous generation machines dominate for 3-5 years after a new chip comes out, that makes sense. Intel, on the other
hand, wanted compilers that made the latest and greatest chips, maxed out with high speed memory, look good. You only
really get Microsoft mindshare once the developers have the machines on their desks, whereas Intel developers may
grudgingly work with simulators pre-silicon.

I.e. I don't think that Microsoft was wrong or shortsighted. I think that Intel and Microsoft have different business
interests.

On the other hand, my last project at Intel was much more aligned with Microsoft's and the mass market's interests.
And we had good cooperation from MS compilers.

Intel management always complains that Microsoft is not ready to support a great new chip feature when silicon comes
out. But this will always be the case - except when Microsoft also profits immediately and directly from the new feature.
From: Andy 'Krazy' Glew on
On 6/4/2010 1:07 AM, nmm1(a)cam.ac.uk wrote:
> In article<4C087DD5.6050502(a)patten-glew.net>,
> Andy 'Krazy' Glew<ag-news(a)patten-glew.net> wrote:
>> On 6/3/2010 11:58 AM, Robert Myers wrote:
>>> On Jun 2, 12:15 am, Andy 'Krazy' Glew<ag-n...(a)patten-glew.net> wrote:
>>>
>> Getting back to parallelism:
>>
>> I'm most hopeful about programmer expressed parallelism.
>>
>> I think that one of the most important things for compilers will be
>> to large amounts of programmer expressed parallelism in an ideal
>> machine - PRAM? CSP? - to whatever machine you have.
>
> Yes and no. Technically, I agree with you, and have been riding
> that hobby-horse for nearly 40 years now!

It looks like my mind got ahead of my fingers when I typed the above. I should have said:

I think that one of the most important things for compilers will be
to map the large amounts of programmer expressed parallelism in an ideal
machine - PRAM? CSP? - to whatever machine you have.

I.e. I am saying "code for thousands or millions of threads, and let the compiler or system map it to run on the mere
dozens or hundreds of physical processing elements you have."

Still agree, Nick?
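
Concretely (and purely as an illustration, not anyone's actual system), the mapping I mean looks like a work queue:
express a million logical tasks, and let a small fixed pool of worker threads stand in for the physical processing
elements.

<code>
/* Illustrative sketch: a million programmer-expressed tasks executed
   by a handful of workers.  C11 atomics, POSIX threads. */
#include <pthread.h>
#include <stdatomic.h>

#define NTASKS   1000000          /* logical parallelism          */
#define NWORKERS 8                /* physical processing elements */

static atomic_int next_task;      /* index of the next unclaimed task */
static double result[NTASKS];

static void do_task(int i) { result[i] = (double)i * i; }

static void *worker(void *arg)
{
    for (;;) {
        int i = atomic_fetch_add(&next_task, 1);  /* claim a task  */
        if (i >= NTASKS)
            return NULL;                          /* queue drained */
        do_task(i);
    }
}

int main(void)
{
    pthread_t t[NWORKERS];
    for (int w = 0; w < NWORKERS; w++)
        pthread_create(&t[w], NULL, worker, NULL);
    for (int w = 0; w < NWORKERS; w++)
        pthread_join(t[w], NULL);
    return 0;
}
</code>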

> Unfortunately, since the demise of Algol 68

Aha! Nick, I keep hearing you complain about the sad state of modern tools. But you never give details. Finally, here
we have one such:

Algol 68 is also (was also) one of my favorite languages. What aspects did you like, Nick?
From: Andy 'Krazy' Glew on
On 6/5/2010 3:33 AM, Neville Dempsey wrote:
> On Jun 4, 6:07 pm, n...(a)cam.ac.uk wrote:
>> In article<4C087DD5.6050...(a)patten-glew.net>,
>> Andy 'Krazy' Glew<ag-n...(a)patten-glew.net> wrote:
>>
>>> On 6/3/2010 11:58 AM, Robert Myers wrote:
>>>> On Jun 2, 12:15 am, Andy 'Krazy' Glew<ag-n...(a)patten-glew.net> wrote:
>> Unfortunately, since the demise of Algol 68, the languages that
>> are favoured by the masses have been going in the other direction.
>> Fortran 90 has not, but it's now a niche market language.
>
> Algol68g uses standard POSIX threads to implement the 1968 Standard's
> PARallel clauses on Linux, Unix, Mac OS X and Windows. Here is a code
> snippet from the actual ALGOL 68 standards document; it demonstrates
> the original multi-threading that is part of the ALGOL 68 language
> definition.
>
> <code>
> proc void eat, speak;
> sema mouth = level 1;
>
> par begin
> do
> down mouth;
> eat;
> up mouth
> od,
> do
> down mouth;
> speak;
> up mouth
> od
> end
> </code>
>
> Simply put a PAR before a BEGIN ~, ~, ~, ~ END block, and all the ~
> units in the block are executed in parallel.

I have posted about how I think in parallel or concurrently, and how I have helped beginning programmers work out bugs in
their code (on sequential machines) by teaching them how to use concurrent assignment or par clauses, and then
automatically translate them to sequential code.
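
As a minimal sketch of that translation technique (in C, since a concurrent assignment has no direct C form): evaluate
every right-hand side before performing any write, which is exactly what temporaries buy you.

<code>
/* Concurrent assignment  (a, b) := (b, a + b)  - one Fibonacci step.
   The mechanical sequential translation: all reads, then all writes. */
void fib_step(long *a, long *b)
{
    long new_a = *b;        /* evaluate every right-hand side first */
    long new_b = *a + *b;
    *a = new_a;             /* then perform every assignment        */
    *b = new_b;
}
</code>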

Algol 68 was one of the most useful languages in teaching me such techniques.
From: MitchAlsup on
On Jun 5, 8:04 pm, Andy 'Krazy' Glew <ag-n...(a)patten-glew.net> wrote:
> I.e. I am saying "code for thousands or millions of threads, and let the compiler or system map it to run on the mere
> dozens or hundreds of physical processing elements you have."
>
> Still agree, Nick?

Although I am not Nick........

This reminds me of why the pure dataflow machines never materialized.
It seems that once you have created the abstract model necessary to
express pure dataflow, your major task becomes compressing (managing)
it so that it does not swamp your necessarily limited resources.

Mitch