From: Michael J. Mahon on
Paul Schlyter wrote:
> In article <Ms2dnbdHwN56NeXZnZ2dneKdnZydnZ2d(a)>,
> Michael J. Mahon <mjmahon(a)> wrote:

>>Parallel computing for 8-bit Apple II's!
>>Home page:
> Parallel Apple II's - interesting - but don't you think one would
> get more performance from a modern computer than from a few Apple II's
> working in parallel? :-)

Certainly! In fact, with an 8MHz Zip Chip in my primary //e, I can
generally get a result as fast or faster than eight 1MHz machines. ;-)

But eight machines do provide 512KB of memory along with the eight
processors, and lots of interesting asynchrony.

Of course, the purpose was to create a platform for experimenting
with parallel processing and message-passing--and to serve as a
torture test of NadNet protocols, which it does admirably.

I'm pretty happy with creating a relatively fast (115 kbits/sec)
peer-to-peer network with just 2KB of software, game ports, and
some wire and two transistors per node. ;-)


Parallel computing for 8-bit Apple II's!
Home page:

"The wastebasket is our most important design
tool--and it is seriously underused."
From: mdj on
Paul Schlyter wrote:

> In FORTRAN you could access any address available to the program by
> declaring an array and then exceed the bounds of the array. Negative
> subscripts usually worked too. Port I/O wasn't possible in that way
> though, there you had to resort to subroutines in e.g. assembly language.

Cute. In Pascal it was as simple as declaring a variant record of both
Integer and pointer-to-Integer types, which easily got you peek and
poke. Note this only works on architectures where Integer and ^Integer
are the same size; where they aren't, you can substitute another type.
I used this technique routinely in Apple Pascal for hardware access
when speed wasn't an issue.

Ada actually provides an ADDRESS type, which is specified by the
standard to be implementation dependent. Every type in the language
provides an address attribute that you can access, assign to variables
of type ADDRESS, etc. This gives you functionality equivalent to C's
(void *), albeit with a greater degree of compile-time type safety.

Owing to the lack of commercial success of Ada, most people tend to
believe that languages with strict type systems are necessarily
less 'efficient' than C/C++. This is simply untrue.
Strictly speaking, the more information a compiler has about the
constraints of data types, the more optimisations it can perform.
Indeed, the weak typing in C/C++ makes many optimisations
impossible, as the program appears non-deterministic to the
optimiser, when in reality the actual data range used is fixed, and
small enough to qualify for a number of clever optimisations.

Of course, this also means that the search space for possible
optimisations gets larger, so compilers get slower, and take many more
years to mature. C remains the most frequently implemented compiled
language not because of its efficiency, but because the modest size of
the language makes it easy to implement, and the optimisations that are
possible on C code are well known.

> Not necessarily. The idea of portability in C is to use the underlying
> hardware as efficiently as possible: if the hardware uses 1's complement
> then that C implementation uses 1's complement too .... etc etc. If
> written with portability in mind, C code can then be made to run as efficiently
> as possible on very different architectures.

But Paul, using hardware as efficiently as possible and portability are
obviously mutually exclusive concepts :-)

Implementing Java on an architecture that uses 1's complement doesn't
necessarily cause performance issues either. In fact, the
representation of the number is irrelevant until it's either accessed
using bitwise operations, which the VM can cater for very simply by
converting the type back and forth as necessary, or until the
number is serialised. In the serialisation case, the byte order is
fixed by the specification to guarantee *portability*.

For the most part, bitwise manipulation in C is done to use memory
efficiently, be it passing flags to a function as an int when
it's really going to be treated as an array of booleans, or providing
space-efficient boolean arrays.

In Java, these techniques are redundant; you have a boolean type. The
JIT subsystem can optimise this however it sees fit.

It's actually becoming apparent that Java code can, and often does,
outperform compiled C/C++. The implementation abstraction means that
the JIT or HotSpot compiler can easily exploit the target architecture's
particular quirks. A C/C++ program can only exploit the architecture it
was written to exploit, and ones that happen to be very similar.
Efficient? Yes. Portable? No.

> ...interesting portability problem: "...hey, these 128-bit integers are
> too big! We're having problems storing the value 763 in it....." :-)

It is a problem, as has been stated before in this thread. And think
about when you're back-porting. Perhaps you'd like to use a bunch of C
code on an 8- or 16-bit microcontroller? Chances are a bunch of things
will need to be changed because of unforeseen portability issues. Or
perhaps in the name of efficiency you'll use smaller data types in some
areas. Such is the nature of C.

It's far safer, IMHO, to consider C an assembly-language replacement
than a portable high-level language.

> > No. You're confusing the issues. Java is comprised of a language, an
> > API and a virtual machine. The API is written in Java,
> The API is just glue code between the Java program and the VM.

That only applies to the tiny subset of the API that performs I/O! What
about the rest of the API? The vast majority of the API is implemented
in pure Java. It's a hell of a lot more than just glue code to the VM.

> > and is just a portable as any other Java program.
> If so, then it's just as impossible to write a Windows program that's bound to
> a specific machine architecture: just write a Windows emulator for any
> target platform you want.

History has shown this isn't true. Any effective emulator for Windows
is actually an emulation of the x86 architecture, which can then run
Windows itself.

The only third-party attempt at reimplementing Windows that has had any
success at all is the WINE project, and that too is bound to the x86
architecture. It is also woefully incomplete, despite having been
started back when Win16 was the dominant API.

So Windows emulation relies on implementing a Windows compatible
machine emulator. Not exactly an efficient solution.

Microsoft themselves know this. Why else would they have moved to an
extremely Java-like system with .NET?

> I suppose that happened when JIT was introduced in Java: code called through
> JNI was more efficient than "old Java", without JNI, but less efficient than
> Java with JIT. And the JNI interface is by itself an overhead.

It became far more pronounced after the introduction of JIT, yes. These
days, it's usually more efficient to implement in Java than to use JNI,
as there is obviously performance overhead in the JNI interface due to
its portability restrictions.

> >>> Ever tried porting some C code to a 64 bit platform?
> >>
> >> I've ported some 8-bit and 16-bit C code to 32 bit platforms, so I'm well
> >> acquainted with the problems which then may arise. Porting from 32-bit
> >> to 64-bit is probably not much different.
> >>
> >>> Many C compilers implement int as a 32 bit quantity even on 64 bit
> >>> architectures,
> >>
> >> Those C compilers aren't ANSI C compliant, I would say.
> >
> > Sure they are. The ANSI standard requires only that int is at least 16
> > bit, and signed. Addresses are whatever they are on the host
> > architecture.
> Sorry, but ANSI C says more than that about the size of an int.
> ANSI X3.159-1989, paragraph " Types", explicitly says:
> # A "plain" int object has the natural size suggested by the
> # architecture of the execution environment.
> Now, what do you think the "natural size" on a 64-bit architecture is?
> 32 bits? <g>

Actually, yes! Prior to the introduction of 64 bit machines, there was
not one single implementation of C that declared int to be 64 bit. No
existing code relies on it being that big, and it wastes a ridiculous
amount of memory to declare it as such.

Generally speaking, a C implementation will opt for a 32-bit int,
unless the machine architecture (and this does happen) restricts the
alignment of 32-bit quantities in a way that breaks the C
standard. Some RISC architectures restrict load/store operations to
addresses that are a multiple of the operand size, which places
severe restrictions on the sizes that can be chosen for C data types.

The proponents of RISC systems back in the '90s severely underestimated
the volume of non-portable C code out in the wild, if you ask me. In
fact, this issue may have been a big contributor to the lack of success
of these systems. The PPC crowd found out very quickly that they needed
to implement most of the hardware hacks the Intel guys did, as it just
wasn't possible to implement efficient compilers for their
architectures, at least not in a reasonable amount of time.

Bytecode to machine code translators on the other hand....

> C follows the idea "Trust the programmer" -- apparently, a lot of programmers
> weren't trustworthy. Therefore "strait-jacket languages" like Pascal and Java
> were needed....

It's not just a matter of trust. And besides, lousy/lazy programmers do
exist, and will continue to.

Comparing Java to Pascal is a little harsh. Pascal was primarily
intended as a teaching language, and its type system is sufficiently
constrained as to place a lot of needless limitations on the
programmer. For example, it's almost impossible to write generic code
in Pascal. Java has none of these limitations. It provides an excellent
tradeoff between functionality loss and type safety compared to the
languages that predate it.

> Another victory for strait-jacket languages... :-)

Here's a question for you: Besides directly accessing hardware
features, what program is difficult to write in Java, versus C/C++?

> I know the stuff called through JNI must be written in C (using C++ here
> would be messier). Yet, JNI is a hole in Java which also poses a
> danger.

It's actually possible to implement the VM in such a way that any
danger imposed by JNI cannot compromise the security model, or even
crash the application. Typically this isn't done for performance
reasons, but there are implementations that provide this level of
protection.

> The method differs, true, but the end result is the same: versatility,
> insecurity, and non-portability.

Not true. The end result, as history has shown, is very different. The
Microsoft approach allows a great deal of 'laziness' on the part of
those porting existing code to the new environment, and allows the
number of portability and security issues to multiply just as easily as
they did in the C/C++ world.

> >>> This compromises not only the portability of Java
> >>> programs, but also the security model provided by the Java platform,
> >>> which is dependent on programs not being able to access memory that
> >>> belong to the VM, or any other program that's running in the same
> >>> memory space.
> >>
> >> How well is that security model maintained in GNU's Java implementation
> >> which runs without any VM and generates native code?
> >
> > Just as well, of course. The language definition is the same, and it
> > still prevents you writing code that will access machine dependant
> > features.
> >
> >> Yep -- Java was about to be standardized some years ago, but then Sun
> >> changed its mind and stopped that. Even C# now has an (ECMA) standard.
> >
> > Which is to be honest a pointless marketing exercise. The language
> > definition itself is useless without its API, as 99.9% of useful C#
> > programs rely on the .NET API, which is in itself not portable - it
> > relies heavily on Win32 to provide its functionality.
> But Win32 is just as portable as the Java API - there are Win32 emulators
> running on other platforms.

This is more precisely put as Intel architecture emulators running on
other architectures. The Win32 platform remains tied to x86. Have a
look at Windows 64-bit edition on the IA-64 and see how it handles
backwards compatibility. It's so inefficient that you simply don't do
it; you use real x86 boxes instead.

This is going to remain the case with .NET applications as well,
despite their supposed 'portability', because of the way legacy code
can be called. For some time, .NET applications on Win64 had to run via
the Win32 emulator, which is already terribly slow. This applies to any
application that uses the .NET API, since it merely thunks calls
through to Win32.

The Java approach is different. Reimplement the API in Java. Make it as
portable as anything else written in the language. The result? Java is
implemented on every platform it's viable to run it on.

> Then how come there are Windows emulators running on other platforms?

Which ones? See above. Any successful implementation of Windows
emulation is actually an x86 PC emulator running actual Windows.

> When you say just "Java" I think you usually mean both. Remove the
> platform and you can't run your Java program. Remove the language,
> and you'll have to program in bytecode directly, which would be
> possible, although awkward.

It's probably more accurate to refer to Java as a "meta-platform" as
even though it provides all the functionality of a "platform" it in
itself (generally) requires a host platform to operate. It's certainly
possible to implement the VM as an operating system itself, but the
goal is not to create new concrete platforms, but to provide a unified
"meta-platform" that other existing platforms can host. Java succeeds
in this goal very well.

> UCSD Pascal perhaps? :-) Although now obsolete, that system had
> precisely the same objective. But UCSD Pascal was too early -- the slow
> hardware of that time made emulating a pseudo-machine unacceptably slow.
> And there also was no widespread Internet through which applications
> could be distributed.

All of those limitations have now gone, and history speaks for itself,
I think.

It's certainly a contentious issue, and there are supporters on either
side of the fence. Academically speaking, there's no such thing as true
portability. As long as there are multiple platforms there will be
applications that aren't portable.

It's pointless, though, to consider the argument in academic terms,
only in practical terms. Java meets its goals of portability. .NET has
yet to meet those goals, and frankly, I doubt it ever will. Microsoft
simply have no interest in making it easy to run Windows applications
on a non-Microsoft platform. You can argue that their system is
inherently portable, but it remains impractical to port Windows
applications.

I actually find it pretty amusing to watch. Microsoft continually rely
on the installed base to maintain their market, but in binding new
technologies to the installed base, they sacrifice their ability to
take advantage of new architectures. I have a feeling that as time goes
by, this strategy is going to prove very costly, as every other
'platform' out there just doesn't have the same limitations. GNU/Linux
has portability advantages because almost all the source is open. Java
has it because of its portable meta-platform. Apple have transitioned
almost seamlessly from PPC to x86.

It's like watching a time-bomb count down, if you ask me ;-)

From: Cameron Kaiser on
pausch(a) (Paul Schlyter) writes:

>Parallel Apple II's - interesting - but don't you think one would
>get more performance from a modern computer than from a few Apple II's
>working in parallel? :-)

As they say, it's not that the dog walked well, but that the dog ...

Besides, it's a clever idea. :)

Cameron Kaiser * ckaiser(a) * posting with a Commodore 128
personal page:
** Computer Workshops: games, productivity software and more for C64/128! **
** **
From: Paul Schlyter on
In article <nospam-A1B4BA.13530528052006(a)>,

> In article <e5c1e4$24tu$1(a)>,
> pausch(a) (Paul Schlyter) wrote:
>> In article
>> <nospam-7B7751.02054028052006(a)>,
>>> In article <e598gn$134q$1(a)>,
>>> pausch(a) (Paul Schlyter) wrote:
>>>> In article
>>>> <nospam-467DFC.23375126052006(a)>,
>>>>> In article <e56rj1$7lo$1(a)>,
>>>>> pausch(a) (Paul Schlyter) wrote:
>>>>>> In article <1148544621.115246.248890(a)>,
>>>>>> mdj <mdj.mdj(a)> wrote:
>>>>> [...]
>>>>>>> The key concepts that are missing here are pointers, and more
>>>>>>> specifically,
>>>>>>> the ability to perform arbitrary arithmetic on pointer types.
>>>>>> FORTRAN, Pascal and Ada lacks this too.
>>>>> [...]
>>>>> This is certainly false for Ada:
>>>>> In particular, the standard package Interfaces.C.Pointers overloads the
>>>>> "+" and "-" operators for just this purpose.
>>>> OK, Ada has pointer arithmetic - but is it arbitrary?
>>> Well, it's arbitrary up to the limit of erroneous: you can't access
>>> memory that's forbidden by the OS :-)
>> Ouch .... so much for the "security" of Ada......
> I don't see how this follows. One of the goals of Ada is to find more
> errors at compile-time, leaving fewer for run-time. It is certainly
> possible to write an erroneous program in Ada, but one must do so
> explicitly. It's much harder to do by accident.

It's easy to exceed the bounds of an array by accident and overwrite other
data structures in the program with garbage. Apparently it's no more difficult
in Ada than in FORTRAN or C ......

Paul Schlyter, Grev Turegatan 40, SE-114 38 Stockholm, SWEDEN
e-mail: pausch at stockholm dot bostream dot se
From: mdj on

John B. Matthews wrote:

> It is arguably false for many widely used flavors of Pascal including
> Object Pascal and Turbo Pascal.

It's completely false, even for "standard" versions of Pascal: The
language provides for pointer types and variant records (unions) so
it's easy to write a valid program that stores both a numeric type
(typically Integer) and a pointer in the same memory location.

Once you have that, you can break all the rules you like, and still be
a perfectly valid Pascal program :-)