From: Michael J. Mahon on
Lyrical Nanoha wrote:
> On Tue, 30 May 2006, Michael J. Mahon wrote:
>> (Come to think of it, maybe Linux can eventually "mop up" in the market
>> and then we'll have a *real* _de facto_ Unix standard. ;-)
> Except POSIX/SUS3 is the "de jure" Unix standard. Though I'd love to
> see a decent *x that fully conforms and is free (no, Solaris doesn't cut
> the mustard).

Yes, but what could be agreed on in POSIX always left out some of
the best stuff...


Parallel computing for 8-bit Apple II's!
Home page:

"The wastebasket is our most important design
tool--and it is seriously underused."
From: Paul Schlyter on
In article <1148872885.123157.309890(a)>,
mdj <mdj.mdj(a)> wrote:

> Paul Schlyter wrote:
>> In FORTRAN you could access any address available to the program by
>> declaring an array and then exceed the bounds of the array. Negative
>> subscripts usually worked too. Port I/O wasn't possible in that way
>> though, there you had to resort to subroutines in e.g. assembly language.
> Cute. In Pascal it was as simple as declaring a variant record of both
> Integer and pointer to Integer types, which easily got you peek and
> poke. Note this only works on architectures where Integer and ^Integer are the
> same size, but in cases where they aren't you can substitute another
> type. It allows you to mimic peek and poke. I used this technique
> routinely in Apple Pascal for hardware access when speed wasn't an
> issue.
> Ada actually provides an ADDRESS type, which is specified by the
> standard to be implementation dependent. Every type in the language
> provides an address attribute that you can access, assign to variables
> of type ADDRESS, etc. This gives you equivalent functionality to C's
> (void *), albeit with a greater degree of compile time type safety.
> Owing to the lack of commercial success of Ada, most people tend to
> believe that languages that provide strict type systems are necessarily
> less 'efficient' than C/C++. This is actually completely untrue.
> Strictly speaking, the more information a compiler has about the
> constraints of data types, the more optimisations you can perform.
> Indeed, the weak typing in C/C++ makes many optimisations
> impossible, as the program appears to be non-deterministic to the
> optimiser, when in reality the actual data range used is fixed, and
> small enough to qualify for a number of clever optimisations.

Once, many years ago, I attended some lectures on Ada optimization.
The lecturer there said that, ideally, any program which has no input
but only output should be optimized to a "print constant string" statement.
I.e. such programs should actually be executed at compile time!
Of course, that constant string might be very long. But it is an
interesting idea that the program:

#include <stdio.h>
#include <math.h>

int main(void)
{
    int i;
    printf( " x sqrt(x)\n\n" );
    for( i=0; i<100; i++ )
        printf( "%4d %10.4f\n", i, sqrt(i) );
    return 0;
}

in the ideal case should be optimized to:

#include <stdio.h>

int main(void)
{
printf( " x sqrt(x)\n"
" 0 0.0000\n"
" 1 1.0000\n"
" 2 1.4142\n"
" 3 1.7321\n"
" 4 2.0000\n"
" 5 2.2361\n"
" 6 2.4495\n"
" 7 2.6458\n"
" 8 2.8284\n"
" 9 3.0000\n"
" 10 3.1623\n"
" 11 3.3166\n"
" 12 3.4641\n"
" 13 3.6056\n"
" 14 3.7417\n"
" 15 3.8730\n"
" 16 4.0000\n"
" 17 4.1231\n"
" 18 4.2426\n"
" 19 4.3589\n"
" 20 4.4721\n"
" 21 4.5826\n"
" 22 4.6904\n"
" 23 4.7958\n"
" 24 4.8990\n"
" 25 5.0000\n"
" 26 5.0990\n"
" 27 5.1962\n"
" 28 5.2915\n"
" 29 5.3852\n"
" 30 5.4772\n"
" 31 5.5678\n"
" 32 5.6569\n"
" 33 5.7446\n"
" 34 5.8310\n"
" 35 5.9161\n"
" 36 6.0000\n"
" 37 6.0828\n"
" 38 6.1644\n"
" 39 6.2450\n"
" 40 6.3246\n"
" 41 6.4031\n"
" 42 6.4807\n"
" 43 6.5574\n"
" 44 6.6332\n"
" 45 6.7082\n"
" 46 6.7823\n"
" 47 6.8557\n"
" 48 6.9282\n"
" 49 7.0000\n"
" 50 7.0711\n"
" 51 7.1414\n"
" 52 7.2111\n"
" 53 7.2801\n"
" 54 7.3485\n"
" 55 7.4162\n"
" 56 7.4833\n"
" 57 7.5498\n"
" 58 7.6158\n"
" 59 7.6811\n"
" 60 7.7460\n"
" 61 7.8102\n"
" 62 7.8740\n"
" 63 7.9373\n"
" 64 8.0000\n"
" 65 8.0623\n"
" 66 8.1240\n"
" 67 8.1854\n"
" 68 8.2462\n"
" 69 8.3066\n"
" 70 8.3666\n"
" 71 8.4261\n"
" 72 8.4853\n"
" 73 8.5440\n"
" 74 8.6023\n"
" 75 8.6603\n"
" 76 8.7178\n"
" 77 8.7750\n"
" 78 8.8318\n"
" 79 8.8882\n"
" 80 8.9443\n"
" 81 9.0000\n"
" 82 9.0554\n"
" 83 9.1104\n"
" 84 9.1652\n"
" 85 9.2195\n"
" 86 9.2736\n"
" 87 9.3274\n"
" 88 9.3808\n"
" 89 9.4340\n"
" 90 9.4868\n"
" 91 9.5394\n"
" 92 9.5917\n"
" 93 9.6437\n"
" 94 9.6954\n"
" 95 9.7468\n"
" 96 9.7980\n"
" 97 9.8489\n"
" 98 9.8995\n"
" 99 9.9499\n" );
return 0;
}

> Of course, this also means that the search space for possible
> optimisations gets larger, so compilers get slower, and take many more
> years to mature. C remains the most frequently implemented compiled
> language not because of its efficiency, but because the modest size of
> the language makes it easy to implement, and the optimisations that are
> possible on C code are well known.

>> Not necessarily. The idea of portability in C is to use the underlying
>> hardware as efficiently as possible: if the hardware uses one's complement
>> then that C implementation uses one's complement too .... etc etc. If
>> written with portability in mind, C code can then be made to run as efficiently
>> as possible on very different architectures.
> But Paul, using hardware as efficiently as possible and portability are
> obviously mutually exclusive concepts :-)
> Implementing Java on an architecture that uses 1's complement doesn't
> necessarily cause performance issues either. In fact, the
> representation of the number is irrelevant until it's either accessed
> using bitwise operations, which can be catered for very simply by the
> VM by converting the type back and forward as necessary, or when the
> number is serialised. In the serialisation case, the byte order is
> specified by the specification to guarantee *portability*.
> For the most part, bitwise manipulation in C is done to efficiently
> utilise memory, be it passing flags to a function as an int, when
> it's really going to be treated as an array of booleans, or to provide
> space efficient boolean arrays.

There's even a special data type for that in C: the bit field. It's
not an array though -- it's merely a way to give specific bits of an int
a name of its own, and let the compiler take care of the needed shifting
and masking.

> In Java, these techniques are redundant; you have a boolean type. The
> JIT subsystem can optimise this however it sees fit.
> It's actually becoming apparent that Java code can, and often does,
> outperform compiled C/C++. The implementation abstraction means that
> the JIT or HotSpot compiler can easily exploit the target architectures
> particular quirks. A C/C++ program can only exploit the architecture it
> is written to exploit, and ones that happen to be very similar. Efficient?
> yes. Portable? no.
>> ...interesting portability problem: "...hey, these 128-bit integers are
>> too big! We're having problems storing the value 763 in it....." :-)
> It is a problem, as has been stated before in this thread. And think
> about when you're back porting. Perhaps you'd like to use a bunch of C
> code on an 8 or 16 bit microcontroller? Chances are a bunch of things
> will need to be changed because of unforeseen portability issues. Or
> perhaps in the name of efficiency you'll use smaller data types in some
> areas. Such is the nature of C.
> It's far safer, IMHO to consider C an assembly language replacement
> than a portable high-level language.

C is frequently referred to as "portable assembler". C is also the
target language for the output from compilers of other languages,
such as Eiffel, and also early C++ compilers.

>>> No. You're confusing the issues. Java is comprised of a language, an
>>> API and a virtual machine. The API is written in Java,
>> The API is just glue code between the Java program and the VM.
> That only applies to the tiny subset of the API that performs I/O! What
> about the rest of the API? The vast majority of the API is implemented
> in pure Java. It's a hell of a lot more than just glue code to the VM

I don't consider that an API, but a library! An API should provide access
to something else but itself. Such as external hardware, or some other
software module.

Almost all languages have a library of functions or classes. The "high
level assembler" language C is no exception. Consider for instance
the qsort() function in the Standard C library -- is that function an
API? Or is it a library function?

>>> and is just a portable as any other Java program.
>> If so, then it's just as impossible to write a Windows program that's bound to
>> a specific machine architecture: just write a Windows emulator for any
>> target platform you want.
> History has shown this isn't true. Any effective emulator for Windows
> is actually an emulation of the x86 architecture, which can then run
> Windows itself.

Emulating the x86 architecture isn't enough. The Mac has now switched to
the x86 CPU, yet you cannot boot Windows on any x86 equipped Mac. You
must emulate the surrounding hardware environment as well.

> The only third party attempt at reimplementing Windows that has had any
> success at all is the WINE project, and that too is bound to the x86
> architecture. It is also, woefully incomplete, despite being started as
> a project back when Win16 was the dominant API.
> So Windows emulation relies on implementing a Windows compatible
> machine emulator. Not exactly an efficient solution.
> Microsoft themselves know this. Why else would they have moved to an
> extremely Java like system with .NET ?
>> I suppose that happened when JIT was introduced in Java: code called through
>> JNI was more efficient than "old Java", without JNI, but less efficient than
>> Java with JIT. And the JNI interface is by itself an overhead.
> It became far more pronounced after the introduction of JIT, yes. These
> days, it's usually more efficient to implement in Java than use JNI, as
> there is obviously performance overhead in the JNI interface due to
> its portability restrictions.
>>>>> Ever tried porting some C code to a 64 bit platform?
>>>> I've ported some 8-bit and 16-bit C code to 32 bit platforms, so I'm well
>>>> acquainted with the problems which then may arise. Porting from 32-bit
>>>> to 64-bit is probably not much different.
>>>>> Many C compilers implement int as a 32 bit quantity even on 64 bit
>>>>> architectures,
>>>> Those C compilers aren't ANSI C compliant, I would say.
>>> Sure they are. The ANSI standard requires only that int is at least 16
>>> bit, and signed. Addresses are whatever they are on the host
>>> architecture.
>> Sorry, but ANSI C says more than that about the size of an int.
>> ANSI X3.159-1989, paragraph " Types", explicitly says:
>> # A "plain" int object has the natural size suggested by the
>> # architecture of the execution environment.
>> Now, what do you think the "natural size" on a 64-bit architecture is?
>> 32 bits? <g>
> Actually, yes! Prior to the introduction of 64 bit machines, there was
> not one single implementation of C that declared int to be 64 bit. No
> existing code relies on it being that big, and it wastes a ridiculous
> amount of memory to declare it as such.
> Generally speaking, a C implementation will opt for int as 32 bits,
> unless the machine architecture (and this does happen) restricts the
> alignment of 32 bit quantities in such a way that breaks the C
> standards. Some RISC architectures restrict load/store operations to
> addresses that are a multiple of its addressing size, which places
> severe restrictions on the sizing that can be chosen for C data types.
> The proponents of RISC systems back in the 90's severely underestimated
> the volume of non-portable C code out in the wild, if you ask me. In
> fact, this issue may have been a big contributor to the lack of success
> of these systems. The PPC crowd found out very quickly that they needed
> to implement most of the hardware hacks the intel guys did, as it just
> wasn't possible to implement efficient compilers for their
> architectures, at least not in a reasonable amount of time.
> Bytecode to machine code translators on the other hand....
>> C follows the idea "Trust the programmer" -- apparently, a lot of programmers
>> weren't trustworthy. Therefore "strait-jacket languages" like Pascal and Java
>> were needed....
> It's not just a matter of trust. And besides, lousy/lazy programmers do
> exist, and will continue to.
> Comparing Java to Pascal is a little harsh. Pascal was primarily
> intended as a teaching language,

....just like the original intention of Basic -- and both these languages are
still used to write actual applications, even though Basic nowadays usually
is called "Visual Basic" and Pascal usually is called "Delphi".

> and its type system is sufficiently
> constrained as to cause a lot of needless limitations on the
> programmer. For example, it's almost impossible to write generic code
> in Pascal. Java has none of these limitations. It provides an excellent
> tradeoff between functionality loss and type safety compared to
> languages that predate it.

....but why does Java lack unsigned integer data types? And regarding safety:
Java integers still overflow silently, just like C integers do.

>> Another victory for strait-jacket languages... :-)
> Here's a question for you: Besides directly accessing hardware
> features, what program is difficult to write in Java, versus C/C++?

Manipulating binary data is a bit awkward due to the lack of unsigned
byte and integer types in Java.

Complex arithmetic: Java lacks a Complex data type, and also lacks the
capability of operator overloading. Here, FORTRAN, C++ or C99 are the
preferred languages. Vector arithmetic: ditto - here C++ or Fortran-9x
are the preferred languages. Bignum arithmetic: Ditto - C++ preferred.
Any other form of arithmetic where the built-in data types are
insufficient: ditto.

>> I know the stuff called through JNI must be written in C (using C++ here
>> would be messier). Yet, JNI is a hole in Java which also poses a
>> danger.
> It's actually possible to implement the VM in such a way that any
> danger imposed by JNI cannot compromise the security model, or even
> crash the application. Typically this isn't done for performance
> reasons, but there are implementations that provide this level of
> safety.

That would require the VM to run in a memory space different from the
memory space of the called C code. Switching memory spaces when calling
C code is definitely a performance bottleneck.

>> The method differs, true, but the end result is the same: versatility,
>> insecurity, and non-portability.
> Not true. The end result, as history has shown, is very different. The
> Microsoft approach allows a great deal of 'laziness' on the part of
> those porting existing code to the new environment, and allows the
> number of portability and security issues to multiply just as easily as
> they did in the C/C++ world.
>>>>> This compromises not only the portability of Java
>>>>> programs, but also the security model provided by the Java platform,
>>>>> which is dependent on programs not being able to access memory that
>>>>> belong to the VM, or any other program that's running in the same
>>>>> memory space.
>>>> How well is that security model maintained in GNU's Java implementation
>>>> which runs without any VM and generates native code?
>>> Just as well, of course. The language definition is the same, and it
>>> still prevents you from writing code that will access machine dependent
>>> features.
>>>> Yep -- Java was about to be standardized some years ago, but then Sun
>>>> changed its mind and stopped that. Even C# now has an (ECMA) standard.
>>> Which is to be honest a pointless marketing exercise. The language
>>> definition itself is useless without its API, as 99.9% of useful C#
>>> programs rely on the .NET API, which is in itself not portable - it
>>> relies heavily on Win32 to provide its functionality.
>> But Win32 is just as portable as the Java API - there are Win32 emulators
>> running on other platforms.
> This is more precisely put as Intel architecture emulators running on
> other architectures. The Win32 platform remains tied to x86. Have a
> look at Windows 64 bit edition on the IA-64 and see how it handles
> backwards compatibility. It's so inefficient that you just do not do
> it, you use real x86 boxes instead.
> This is going to remain the case with .NET applications as well,
> despite their supposed 'portability' due to the way legacy code can be
> called. For some time, .NET applications on Win64 had to run via the
> Win32 emulator, which is already terribly slow. This applies for any
> application that uses the .NET API, since it merely thunks calls
> through to Win32.
> The Java approach is different. Reimplement the API in Java. Make it as
> portable as anything else written in the language. The result? Java is
> implemented on every platform it's viable to run it on.
>> Then how come there are Windows emulators running on other platforms?
> Which ones? See above. Any successful implementation of Windows
> emulation is actually an x86 PC emulator running actual Windows.
>> When you say just "Java" I think you usually mean both. Remove the
>> platform and you can't run your Java program. Remove the language,
>> and you'll have to program in bytecode directly, which would be
>> possible, although awkward.
> It's probably more accurate to refer to Java as a "meta-platform" as
> even though it provides all the functionality of a "platform" it in
> itself (generally) requires a host platform to operate. It's certainly
> possible to implement the VM as an operating system itself, but the
> goal is not to create new concrete platforms, but to provide a unified
> "meta-platform" that other existing platforms can host. Java succeeds
> in this goal very well.
>> UCSD Pascal perhaps? :-) Although now obsolete, that system had
>> precisely the same objective. But UCSD Pascal was too early -- the slow
>> hardware of that time made emulating a pseudo-machine unacceptably slow.
>> And there also was no widespread Internet through which applications
>> could be distributed.
> All of those limitations have now gone, and history speaks for itself,
> I think.
> It's certainly a contentious issue, and there are supporters on either
> side of the fence. Academically speaking, there's no such thing as true
> portability. As long as there are multiple platforms there will be
> applications that aren't portable.
> It's pointless though to consider the argument in academic terms, only
> in practical terms. Java meets its goals of portability. .NET has yet
> to meet those goals, and frankly, I doubt it ever will. Microsoft
> simply have no interest in making it easy to run Windows applications
> on a non-Microsoft platform. You can argue that their system is
> inherently portable, but it remains impractical to port Windows
> applications.
> I actually find it pretty amusing to watch. Microsoft continually rely
> on the installed base to maintain their market, but in binding new
> technologies to the installed base, they sacrifice their ability to
> take advantage of new architectures. I have a feeling that as time goes
> by, this strategy is going to prove very costly, as every other
> 'platform' out there just doesn't have the same limitations. GNU/Linux
> has portability advantages because almost all the source is open. Java
> has it because of its portable meta-platform. Apple have transitioned
> almost seamlessly from PPC to x86.
> It's like watching a time-bomb count down, if you ask me ;-)

It may not be a time bomb, but rather stagnation.

We can compare with another technology: television. Consider the US
NTSC ("Never The Same Color") TV system which has been in use for over
50 years now. The color problem of this system has since long been
fixed, and the fix was implemented in the European TV systems which
went color about a decade after the US. But the US TV system cannot
be upgraded because of the huge amounts of TV sets out there that one
would need to maintain backwards compatibility to. Only now, when
switching to digital TV, it will be upgraded -- but it'll be there for
at least another decade or so, for backwards compatibility.

Paul Schlyter, Grev Turegatan 40, SE-114 38 Stockholm, SWEDEN
e-mail: pausch at stockholm dot bostream dot se
From: sicklittlemonkey on
mdj wrote:
> Write a large desktop application for the proprietary API
> of any platform, be it MacOS, Windows, and you've got yourself a pretty
> thick portability barrier.
> If you write your application in a portable language/platform such as
> Java (which is really the only successful one) you eliminate this
> problem.

I would argue that the modern web browser (i.e. HTML/Javascript/AJAX
etc) is effectively the only other major 'portable' platform. Somewhat
ironically, Microsoft pioneered AJAX technology, but probably backed
off in horror once they realised that WebOutlook didn't really need to
have a MS backend.

> This has been happening for years, and is really only a few steps away
> from being complete. Sun is the only vendor left who is really pushing
> their version of UNIX in favour of Linux, everyone else has decided to
> fold their own unique advantages back into the Linux kernel. At this
> point it becomes the big bad guy Linux versus the other Open Source
> alternatives ;-)

With Sun (Java) and the open source community inching ever closer, it
could be hoped that this day is not far off.

Still, .NET can't be ruled out, as Microsoft have bought some of the
finest minds that money can buy (Jim Blinn, sob sob) and are pushing
hard to gain ground in high-growth markets like mobile/embedded, the
video game industry, home consumer/media management/formats etc.

They've twisted Java into something not-half-bad. Just not portable
outside its own ecosystem. Which is very bad.


From: Paul Schlyter on
In article <gLOdnYcddOPrceHZnZ2dnUVZ_uqdnZ2d(a)>,
Michael J. Mahon <mjmahon(a)> wrote:

> And the number of economically important "different" platforms is
> decreasing to a very small number, largely as a result of market
> forces triggered by a huge object code base (which the market thinks
> of as non-portable).

Will it shrink all the way down to one such platform?

> And, perhaps because of designers' clarity, or perhaps because of their
> laziness, we have settled almost universally on a set of data
> representations that are common across architectures: IEEE floating point,
> twos-complement integers in power-of-two byte sizes, and 8-bit ASCII.

FYI: there is no such thing as "8-bit ASCII". ASCII is a 7-bit code.

If you want to use 8-bit character codes you'll have a lot to choose
among. Nowadays some flavor of ISO-8859 is the most common: in the
US and in western Europe ISO-8859-1 (aka ISO-Latin-1 -- yes there are
other versions of ISO-Latin than ISO-Latin-1) is usually used.

> Big alphabets are still a (national) problem,

The alphabets need not be bigger than ours to be a problem. The Greek
alphabet is smaller than the Latin alphabet, but you'll have to use
ISO-8859-7 to get access to Greek letters. The Cyrillic alphabet
is only slightly larger than ours, but to get access to that alphabet
you'll need to use ISO-8859-5.

Even some countries whose languages use the Latin alphabet still can't
use ISO-8859-1 because it lacks a few characters in their alphabet.
In Turkey one must use ISO-8859-9 (or ISO-8859-3), and the Baltic
languages require ISO-8859-4.

There's a way out of this mess with different varieties of "8-bit
ASCII" though: Unicode. Yes, it's coming and it's getting more and
more noticeable. Windows-NT used UCS-2 for its internal
representation of e.g filenames, and in Windows-2000 a switch was made
to UTF-16 (both are 16-bit character codes, but while UCS-2 can only
hold the first 64K of the Unicode alphabet, UTF-16 can hold the entire
Unicode alphabet). Java uses either UCS-2 or UTF-16 (don't know
which, maybe it doesn't matter except for display purposes) internally
in strings. In situations where a 16-bit character code is
impractical, UTF-8 is the obvious choice. Linux already supports
that in at least some distributions, and by choosing an UTF-8 locale,
one can have console output in Greek or Cyrillic characters.

Java is interesting here btw - a Java compiler already accepts
source code in Unicode - a proper BOM (Byte Order Mark) at the
beginning of the file will tell the Java compiler which flavor
of Unicode it is (UTF-8, UTF-16, UCS-4).

People who were used to programming languages like FORTRAN and
Pascal, which were case insensitive, initially got a bit confused
by e.g. C code like this:

int a = 12;
int A = 45;

Doing the equivalent in Pascal would cause a "variable already
defined" error. But in C this works because C is case sensitive,
and here a and A are two different variables.

In Java you can have even more fun than this - consider:

int A = 12;
int A = 45;
int A = 82;
int A = 113;
int A = 176;
int A = 217;

(this should really have been written in UTF-8 to be correct,
but displaying it would make it look like this anyway)

The same variable A defined six times? That cannot work .... yes it
can!

The first A is the capital Latin A - character code 0x41
The second A is the capital Cyrillic A - character code 0x410
The third A is the capital Greek Alfa - character code 0x391
The fourth A is the capital Cherokee A - character code 0x13AA
The fifth A is the capital Canadian Aboriginal A - character code 0x15C5
The sixth A is the Math Symbol capital A - character code 0x1D5A0
(the last case works only if Java uses UTF-16 rather than UCS-2)

Finally, we might have:

int C = (A+A)*A - (A+A)*A;

and from a source listing it'll be impossible to figure out the value
of C (hint: it's not zero....) .... unless we take a hex dump of the
source code so we can see which identifiers really are used!

In Java, (almost) the entire Unicode alphabet can be used as names
for identifiers -- so expect some really "interesting" Java software
in, say, 10 years....

> It's pointless though to consider the argument in academic terms, only
> in practical terms. Java meets its goals of portability. .NET has yet
> to meet those goals, and frankly, I doubt it ever will. Microsoft
> simply have no interest in making it easy to run Windows applications
> on a non-Microsoft platform. You can argue that their system is
> inherently portable, but it remains impractical to port Windows
> applications.

When the number of economically important "different" platforms
has decreased to one (M$-Window$), then .NET will finally have become
"universally portable" .... :-)

Will this horror future scenario ever happen? Well, we're getting there:
Macs are nowadays Intel based, and in the future there will probably be
Windows versions which can boot on the Mac. Today's supercomputers are
usually clusters based on Intel machines. But what about Linux? Linux
will be there, of course, but since Linux is free it's not "economically
important".... :-)

Paul Schlyter, Grev Turegatan 40, SE-114 38 Stockholm, SWEDEN
e-mail: pausch at stockholm dot bostream dot se
From: Cameron Kaiser on
pausch(a) (Paul Schlyter) writes:

>Once, many years ago, I attended some lectures on Ada optimization.
>The lecturer there said that, ideally, any program which has no input
>but only output should be optimized to a "print constant string" statement.
>I.e. such programs should actually be executed at compile time!
>Of course, that constant string might be very long.

Heaven help the compiler if there is an infinite loop involved ...

Cameron Kaiser * ckaiser(a) * posting with a Commodore 128
personal page:
** Computer Workshops: games, productivity software and more for C64/128! **
** **