From: nmm1 on
In article <d26217-chr.ln1(a)vimes.paysan.nom>,
Bernd Paysan <bernd.paysan(a)gmx.de> wrote:
>
>>>Hm, the most essential usage of address-of (unary prefix &) in C is
>>>to return multiple parameters. ...
>>
>> The mind boggles. That is so far down the list of its uses that I
>> hadn't even thought of it.
>>
>> You could start with trying to work out how to use scanf without it,
>
>scanf *is* precisely what I'm talking about: Returning multiple values.
>
>int a;
>float b;
>char c[80];
>(a, b, c) = tuple_fscanf(file, "%d%f%s");

Er, no. Firstly, you have just added a MASSIVE syntactic extension
to C (and even C++), which I can assure you would be very hard to
add without introducing ambiguities (it has been proposed, and was
rejected for that reason). Secondly, you have missed the fact that
scanf assignments are conditional - that is even more important in
other parts of the library, of course. Specifying tuple assignment
to allow the latter is EXTREMELY hairy, especially as the number of
assignments cannot be determined until the scanf is executed.
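
A minimal sketch in standard C (nothing beyond the standard library
assumed) shows what I mean: every target needs address-of, and only
the return value tells you how many of them were actually assigned:

    #include <stdio.h>

    int main(void) {
        int a;
        float b;
        char c[80];
        /* n is the number of successful conversions, known only at
           run time; it may be anything from EOF up to 3. */
        int n = fscanf(stdin, "%d%f%79s", &a, &b, c);
        if (n < 3) {
            /* Only the first n variables hold defined values. */
            fprintf(stderr, "matched %d of 3 items\n", n < 0 ? 0 : n);
            return 1;
        }
        printf("%d %f %s\n", a, b, c);
        return 0;
    }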

>The problem that this sort of "format string"-based procedure is
>completely bonkers as an API isn't solved ;-).

Not at all. They work very well in suitable languages - BCPL or
K&R C, for example.

> Of course you'd need some
>way to accumulate an arbitrary run-time-defined tuple (similar to the
>problems of varargs), if you want to keep this crazy stuff. The good
>news is: such a tuple, returned on the stack, will not scribble over
>addresses that are not there; at worst it pushes more values onto the
>stack than needed, and the stack cleanup after calling tuple_fscanf
>will deal with that. Format string errors will then still lead to
>wrong values in the assigned tuple, but *not* to stuff written over
>the return address (code space).

Producing a language almost, but not entirely, unlike C in order to
remove one of C's problems and introduce something equally bad strikes
me as perverse.

>> and then try to pass a subsection of an array to a function (which
>> then treats the subsection as a complete array).
>
>Ah, that's easy:

Sorry. It was late at night. Try a substructure of a struct, passed
by reference (i.e. as a variable).
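
To be concrete, a minimal sketch in plain ISO C (the names are made
up for illustration):

    struct inner { int x, y; };
    struct outer { char tag; struct inner inner; };

    /* Treats its argument as a complete structure. */
    static void reset(struct inner *p) {
        p->x = 0;
        p->y = 0;
    }

    int main(void) {
        struct outer s = { 'a', { 1, 2 } };
        reset(&s.inner);    /* no way to pass s.inner by reference
                               without taking its address */
        return 0;
    }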

Also, there is a strong sense in which arithmetic on array names is
a syntactic variation of the address-of operator - see the standard
for why. To gain any benefit from eliminating the address-of operator
(which was the context of this thread), you have to eliminate that,
too.
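
To spell that out, a small sketch in plain C:

    #include <stdio.h>

    static int sum(const int *v, int n) {
        int s = 0;
        for (int i = 0; i < n; i++)
            s += v[i];
        return s;
    }

    int main(void) {
        int a[10] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
        /* a+3 and &a[3] are the same pointer: array arithmetic is
           address-of in another syntax. */
        printf("%d\n", sum(a + 3, 4));   /* subsection a[3..6] passed
                                            as a "complete" array */
        return 0;
    }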

I assure you that you cannot eliminate even the address-of operator
without changing C beyond all recognition. I have tried to work out
how and, while the uses that can't be abolished are small in number,
they are essential for many programs.


Regards,
Nick Maclaren.
From: Anne & Lynn Wheeler on

rpw3(a)rpw3.org (Rob Warnock) writes:
> So the various Lisp Machines never existed? ;-} ;-}
> Oh, wait:

old email reference about MIT asking for 801 for lisp machine ... and
being offered "8100" instead:
http://www.garlic.com/~lynn/2003e.html#email790711
in this post (with misc. other old "801" email pieces)
http://www.garlic.com/~lynn/2003e.html#65 801 (was Re: Reviving Multics)

as mentioned, the 8100 used a very slow & underpowered processor that
was developed for control functions; my wife had been asked to do a
technical audit of the 8100 ... and very shortly afterwards, the 8100
was killed.

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970
From: Robert Myers on
On Jan 2, 4:32 am, Mayan Moudgill <ma...(a)bestweb.net> wrote:
> Robert Myers wrote:
>
>  > The scientists I know generally want to speed things up because they
>  > are in a hurry.
>  >
>  > The question is: is it better to do a bit less physics and/or let the
>  > machine run longer, or is it better to use up expensive scientist/
>  > scientific programmer time and, at the same time, make the code opaque
>  > and not easily transportable?
>  >
>  > If we can't do "unbounded" ("scalable") parallelism, then there is an
>  > end of the road as far as some kinds of science are concerned, and we
>  > may already be close to it or even there in terms of massive
>  > parallelism (geophysical fluid dynamics would be an example).  The
>  > notion that current solutions "scale" is pure bureaucratic fraud.
>  > Manufacturers who want to keep selling more of the same (do you know
>  > any?) cooperate in this fraud, since the important thing is what the
>  > customer thinks.
>  >
>
> If your problems can be solved by simply increasing the number of
> machines, why not go with Beowulf clusters or @Home style parallelism?
> They are cheap and easy to put together.
>
I don't have a clue about @Home style parallelism. When I see that
the approach has produced real science, I might get interested. My
suspicion is that the approach is very inefficient in its use of
energy and manpower. It would really only make sense if hardware
were expensive and energy and sysadmin time were free. Even if that
was ever the case, it isn't now.

For mid-scale problems, clusters built around fairly beefy nodes (a
2-socket 12-core machine might be a sweet spot in terms of cost for
now; a 4-socket 24-core machine might be even better in terms of
dealing with interconnect) might be fairly attractive compared to
just about anything you can buy commercially. If the warehouse-filling
clusters of not so long ago were interesting, then what you can build
today should be even more interesting.

Since I've argued that super-sized computers seem to me to be of
questionable value, maybe that's all the vast bulk of science really
needs. If I really need a computer with respectable bisection
bandwidth, I can skip waiting for a gigantic machine that runs at 5%
efficiency (or worse) and learn to live with whatever I can build
myself.

> If your problem can't be solved with those approaches, then I suspect
> that going to a different language (or approach, or whatever) is not
> going to be a viable alternative.

I agree. The only thing a better approach would deliver is more ease
of use, but that's more important than the cost of the hardware.

Well, maybe I agree. The typical 5% sustained performance of the big
machines would seem to leave a lot of room for improvement that could
conceivably be addressed by slightly less clueless approaches to
massively parallel software.

Robert.
From: Robert Myers on
On Jan 2, 4:44 am, r...(a)rpw3.org (Rob Warnock) wrote:
> Robert Myers  <rbmyers...(a)gmail.com> wrote:
> +---------------
> | I doubt if operating systems will ever be written in an elegant,
> | transparent, programmer-friendly language.
> +---------------
>
> So the various Lisp Machines never existed?  ;-}  ;-}
> Oh, wait:
>
>    http://en.wikipedia.org/wiki/Lisp_Machine
>     ...
>     Several companies were building and selling Lisp Machines in the
>     1980s: Symbolics (3600, 3640, XL1200, MacIvory and other models),
>     Lisp Machines Incorporated (LMI Lambda), Texas Instruments
>     (Explorer and MicroExplorer) and Xerox (InterLisp-D workstations).
>     The operating systems were written in Lisp Machine Lisp, InterLisp
>     (Xerox) and later partly in Common Lisp.
>     ...
>     Symbolics continued to develop the 3600 family and its operating
>     system, Genera, and produced the Ivory, a VLSI implementation of
>     the Symbolics architecture. Starting in 1987, several machines
>     based on the Ivory processor were developed: ...
>     ...
>     The MIT-derived Lisp machines ran a Lisp dialect called ZetaLisp,
>     descended from MIT's Maclisp. The operating systems were written
>     from the ground up in Lisp, often using object-oriented extensions.
>     Later these Lisp machines also supported various versions of Common
>     Lisp (with Flavors, New Flavors and CLOS).
>     ...
>
> And there are still some who persist in working in this area even today
> [well, fairly recently], e.g.:
>
>    http://common-lisp.net/project/movitz/
>     Movitz: a Common Lisp x86 development platform
>
>    http://download.plt-scheme.org/mzscheme/mz-103p1-bin-i386-kernel-tgz....
>     Package:            MzScheme
>     Version:            103p1
>     Platform:           x86 Standalone Kernel
>
Even though I know some lisp true-believers, I know nothing about
using it for actual computing. Does a lisp operating system have any
purpose other than to say you did it, or to run the small number of
computers that have been built specifically for lisp? I mean, if I
wanted to badly enough, I could build some fairly interesting
structures out of popsicle sticks, but what would I have accomplished?

Robert.

From: Mayan Moudgill on
Rob Warnock wrote:

> Robert Myers <rbmyersusa(a)gmail.com> wrote:
> +---------------
> | I doubt if operating systems will ever be written in an elegant,
> | transparent, programmer-friendly language.
> +---------------
>
> So the various Lisp Machines never existed? ;-} ;-}

Exactly how is LISP elegant, transparent & programmer-friendly?

RPLACA, RPLACD, {or SETF {note the different brackets}}, (CONS x x)
ring any bells?