From: mdj on

Michael J. Mahon wrote:
> mdj wrote:
> > Paul Schlyter wrote:
> >
> >
> >>So this bounds checking can not be turned off in Ada, for efficiency ?
> >
> >
> > It's compiler dependent - but the Ada compilers I've used allow pretty
> > much all runtime checking to be disabled. This is reasonable - after a
> > degree of testing I guess... I've personally never written anything in
> > Ada that ran slowly enough to bother with it.
>
> Turning off error/assertion checking after development and test and
> before distributing software for use is like wearing life vests during
> training, but taking them off before going to sea (to paraphrase
> Dijkstra).

Absolutely... Unless you really need the extra few cycles of speed,
it's more sensible to leave it on. Some people, though, would prefer
that their customers see an application crash rather than a helpful
error message indicating that a programmer attempted to divide
something by zero ;-)

Even in this case, you could just compile it out per module as
required.
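
Java gets at the same idea with assertions: checks can be switched on
or off per package or class at launch time, with no recompile. A
minimal sketch (the class and package names below are made up for
illustration):

    // Checks written as assertions cost essentially nothing when disabled,
    // and can be turned on or off per class or package at launch time.
    public class Transfer {
        static void debit(long balanceCents, long amountCents) {
            assert amountCents >= 0 : "negative debit: " + amountCents;
            assert amountCents <= balanceCents : "overdraft by " + (amountCents - balanceCents);
            // ... perform the debit ...
        }

        public static void main(String[] args) {
            debit(1000, 250);
            System.out.println("ok");
        }
    }

    // Enable checks everywhere:           java -ea Transfer
    // Disable them for one hot package:   java -ea -da:com.example.hotpath... Transfer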

I've never understood why, after all these years and all the fancy
features implemented in CPUs, they never grew the fairly obvious
ability to throw an interrupt on overflow. It would mean such errors
could be trapped with zero overhead in the no-error case.
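
In a language without hardware support you end up writing the check by
hand, something like the purely illustrative Java helper below. It
gives the programming model you'd want (ordinary arithmetic plus an
exception on overflow) but pays for it with a software test on every
call, which is exactly the overhead a hardware trap would eliminate in
the no-error case.

    // Hand-rolled checked addition: throws instead of silently wrapping.
    // The widening test runs on every call, the cost a hardware overflow
    // interrupt would make free whenever no overflow occurs.
    public class CheckedAdd {
        static int checkedAdd(int a, int b) {
            long wide = (long) a + b;        // do the add in 64 bits
            if (wide != (int) wide) {        // result doesn't fit back into 32 bits
                throw new ArithmeticException("integer overflow");
            }
            return (int) wide;
        }

        public static void main(String[] args) {
            System.out.println(checkedAdd(1, 2));                 // prints 3
            System.out.println(checkedAdd(Integer.MAX_VALUE, 1)); // throws
        }
    }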

From: Michael J. Mahon on
mdj wrote:
> Paul Schlyter wrote:

<snip>

>>If so, then it's just as impossible to write a Windows program that's bound to
>>a specific machine architecture: just write a Windows emulator for any
>>target platform you want.
>
>
> History has shown this isn't true. Any effective emulator for Windows
> is actually an emulation of the x86 architecture, which can then run
> Windows itself.
>
> The only third party attempt at reimplementing Windows that has had any
> success at all is the WINE project, and that too is bound to the x86
> architecture. It is also woefully incomplete, despite being started as
> a project back when Win16 was the dominant API.
>
> So Windows emulation relies on implementing a Windows compatible
> machine emulator. Not exactly an efficient solution.
>
> Microsoft themselves know this. Why else would they have moved to an
> extremely Java like system with .NET ?

Actually, there are (were, in the case of Alpha?) native Windows NT
ports to both Alpha and IA-64. Neither of these machines was used
"like an x86"--they are good, high performance ports.

<snip>

>>But Win32 is just as portable as the Java API - there are Win32 emulators
>>running on other platforms.
>
> This is more precisely put as Intel architecture emulators running on
> other architectures. The Win32 platform remains tied to x86. Have a
> look at Windows 64 bit edition on the IA-64 and see how it handles
> backwards compatibility. It's so inefficient that you just do not do
> it, you use real x86 boxes instead.

If anything, this is an argument against your case.

The IA-64 architecture contains a hardware x86 emulator that is a
performance disaster (relative to object code compilation). Any
use of this emulator is bound to be a performance problem.

The fact that x86 compatibility performs poorly is an argument for
how different IA-64, and the IA-64 Windows port, is from the native
x86 version.

Native IA-64 code flies on Windows for IA-64.

> This is going to remain the case with .NET applications as well,
> despite their supposed 'portability' due to the way legacy code can be
> called. For some time, .NET applications on Win64 had to run via the
> Win32 emulator, which is already terribly slow. This applies for any
> application that uses the .NET API, since it merely thunks calls
> through to Win32.

That's just a matter of prioritizing which parts of the system get
ported sooner and which later. Understandably, Microsoft designers
decided that Java code was not performance critical. ;-)

> The Java approach is different. Reimplement the API in Java. Make it as
> portable as anything else written in the language. The result? Java is
> implemented on every platform it's viable to run it on.
>
>
>>Then how come there are Windows emulators running on other platforms?
>
>
> Which ones? See above. Any successful implementation of Windows
> emulation is actually an x86 PC emulator running actual Windows.

The ones I mentioned--see above. ;-)

I assure you that the Alpha port and the IA-64 port of Windows are
*not* written to x86 standards and limitations.

I note, however, that in any "portable" program, tradeoffs are made
that affect native performance on any *particular* machine.

I think you underestimate how effective the lower levels of NT are,
particularly the HAL, at insulating higher levels of the OS from
peculiarities of the metal.

Modern OSs--and NT is one of the most modern--have adopted techniques
for achieving relative portability with only modest effects on native
performance.

<snip>

> It's certainly a contentious issue, and there are supporters on either
> side of the fence. Academically speaking, there's no such thing as true
> portability. As long as there are multiple platforms there will be
> applications that aren't portable.

And the number of economically important "different" platforms is
decreasing to a very small number, largely as a result of market
forces triggered by a huge object code base (which the market thinks
of as non-portable).

Some would call it a vicious cycle, some a virtuous cycle, which
eliminates all but one contender for the binary interface. But there
is no disputing events.

Practically speaking, most portability problems are a result of either
resource constraints or data representation differences.

Moore's "Law" has all but vanquished resource limitations for all but
the most demanding (and therefore niche) applications.

And, perhaps because of designers' clarity, or perhaps because of their
laziness, we have settled almost universally on a set of data
representations that are common across architectures: IEEE floating point,
twos-complement integers in power-of-two byte sizes, and 8-bit ASCII.
Big alphabets are still a (national) problem, and pointers are still
often confused with other data types, but, by and large, the data
wars are over. Even the completely arbitrary endianness problem is
all but vanquished, with big endian machines switch-hitting and little
endian machines enjoying their popularity.
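
In practice the fix is mechanical: portable code pins the byte order of
its external data explicitly and lets the runtime do any swapping. A
small Java sketch of the idea, just by way of illustration:

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    // External data is given an explicit byte order, so the host CPU's
    // native order never leaks into the file or wire format.
    public class EndianDemo {
        public static void main(String[] args) {
            ByteBuffer buf = ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN);
            buf.putInt(0x11223344);
            for (byte b : buf.array()) {
                System.out.printf("%02x ", b);            // 44 33 22 11 on any host
            }
            System.out.println("(native order here: " + ByteOrder.nativeOrder() + ")");
        }
    }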

In this day and age, you really have to be creative (or malevolent)
to come up with a legitimate portability barrier. ;-)

> It's pointless though to consider the argument in academic terms, only
> in practical terms. Java meets its goals of portability. .NET has yet
> to meet those goals, and frankly, I doubt it ever will. Microsoft
> simply have no interest in making it easy to run Windows applications
> on a non-Microsoft platform. You can argue that their system is
> inherently portable, but it remains impractical to port Windows
> applications.

Your last point is perhaps the most significant--the clear motivation
to *obstruct* portability "away" from your platform. This is the issue
that perennially prevents any real Unix standard. Only the losers in
the marketshare game *really* want portability--winners never do. ;-)

(Come to think of it, maybe Linux can eventually "mop up" in the market
and then we'll have a *real* _de facto_ Unix standard. ;-)

-michael

Parallel computing for 8-bit Apple II's!
Home page: http://members.aol.com/MJMahon/

"The wastebasket is our most important design
tool--and it is seriously underused."
From: Lyrical Nanoha on
On Tue, 30 May 2006, Michael J. Mahon wrote:

> (Come to think of it, maybe Linux can eventually "mop up" in the market
> and then we'll have a *real* _de facto_ Unix standard. ;-)

Except POSIX/SUS3 is the "de jure" Unix standard. Though I'd love to see
a decent *x that fully conforms and is free (no, Solaris doesn't cut the
mustard).

-uso.
From: mdj on

Michael J. Mahon wrote:

> Actually, there are (were, in the case of Alpha?) native Windows NT
> ports to both Alpha and IA-64. Neither of these machines was used
> "like an x86"--they are good, high performance ports.

While this is essentially true, neither platform was successful in
running the existing codebase, as both the x86 hardware emulator in
IA-64 and Digital's FX!32 software emulator had severe performance
limitations.

Of course, there's no reason the Win32 API can't be redefined as it's
moved to other architectures, but history has shown that doing this
with Win32 didn't produce good results. Vendors didn't exactly line up
to port applications to run natively on Alpha, and aren't doing so now
on IA64.

About the only valid reason I've seen for using Windows on IA64 is to
allow access to SQL Server in 64 bit mode. It took years after the
availability of Windows 64 bit edition for SQL Server 64 to be
released, and even now it isn't very widely deployed, which is odd, as
the benefits are enormous. I guess people are wary of IA64.

> > This is more precisely put as Intel architecture emulators running on
> > other architectures. The Win32 platform remains tied to x86. Have a
> > look at Windows 64 bit edition on the IA-64 and see how it handles
> > backwards compatibility. It's so inefficient that you just do not do
> > it, you use real x86 boxes instead.
>
> If anything, this is an argument against your case.

How so? I agree that the operating system itself can be moved to other
architectures, but the portability limitation still applies to the body
of software the OS depends on to provide the platform. I think you'll
find that the way the API is defined means that it becomes quite
closely tied to whatever architecture it's ported to.

I think it's reasonable to include success in the measurements. As it
stands, there have been no successful attempts to move Windows to
another architecture. The only one that looks like it will succeed is
x86-64. The reasons for this should be obvious.

Even this port, a relatively straightforward one, took a long time to
emerge. It takes even longer for the device drivers to appear, and even
then, it's successful primarily because it can efficiently execute x86
code in a 64 bit environment.

> The IA-64 architecture contains a hardware x86 emulator that is a
> performance disaster (relative to object code compilation). Any
> use of this emulator is bound to be a performance problem.

It's terrible, yes. Probably not much worse than DEC's software
emulator was, though. These days, you could probably do a better job in
software, but who's doing it? Surely Intel have a vested interest in making
this a reality, but they've failed to achieve it.

> The fact that x86 compatibility performs poorly is an argument for
> how different IA-64, and the IA-64 Windows port, is from the native
> x86 version.

Indeed. I think the fact that the IA64 port is different from the
'native' x86 version is my point. It should be as easy as recompiling
applications. It's far, far from that easy.

> Native IA-64 code flies on Windows for IA-64.

Sure it does, although I doubt it runs any faster than native x86 code
on x86, and probably slower, unless you have a use case where more than
4GB of memory can be used by a single application, which at the moment
is basically limited to databases. This is beside the point though.

> > This is going to remain the case with .NET applications as well,
> > despite their supposed 'portability' due to the way legacy code can be
> > called. For some time, .NET applications on Win64 had to run via the
> > Win32 emulator, which is already terribly slow. This applies for any
> > application that uses the .NET API, since it merely thunks calls
> > through to Win32.
>
> That's just a matter of prioritizing which parts of the system get
> ported sooner and which later. Understandably, Microsoft designers
> decided that Java code was not performance critical. ;-)

You seem to need to use the term 'port' a lot. This is precisely my
point about Windows being tied to x86. The platform is large enough
to require prioritisation, which means lots of existing things won't
work, or even recompile, on the new 'port'.

> > Which ones? See above. Any successful implementation of Windows
> > emulation is actually an x86 PC emulator running actual Windows.
>
> The ones I mentioned--see above. ;-)

> I assure you that the Alpha port and the IA-64 port of Windows are
> *not* written to x86 standards and limitations.

I agree, but those ports are written to Alpha and IA64 limitations
instead. Not that it's necessarily a very bad thing, but in the current
environment, it seems to be.

> I note, however, that in any "portable" program, tradeoffs are made
> that affect native performance on any *particular* machine.
>
> I think you underestimate how effective the lower levels of NT are,
> particularly the HAL, at insulating higher levels of the OS from
> peculiarities of the metal.

In theory, yes. Practice on the other hand...

> Modern OSs--and NT is one of the most modern--have adopted techniques
> for achieving relative portability with only modest effects on native
> performance.

A *lot* of modifications have been made to NT since it was released. I
agree the original version was actually reasonably portable. Over time,
though, many compromises have been made to the original abstraction
models, most of them in the name of efficiency. Of course, the
increased efficiency comes at the cost of greater coupling to the
hardware architecture, from the perspective of applications that
utilise it. As the underlying architecture has evolved, enhancing NT to
take advantage of it has taken a market-driven approach rather than an
engineering-centric one, to the detriment of the platform from a
portability perspective.

> > It's certainly a contentious issue, and there are supporters on either
> > side of the fence. Academically speaking, there's no such thing as true
> > portability. As long as there are multiple platforms there will be
> > applications that aren't portable.
>
> And the number of economically important "different" platforms is
> decreasing to a very small number, largely as a result of market
> forces triggered by a huge object code base (which the market thinks
> of as non-portable).
>
> Some would call it a vicious cycle, some a virtuous cycle, which
> eliminates all but one contender for the binary interface. But there
> is no disputing events.
>
> Practically speaking, most portability problems are a result of either
> resource constraints or data representation differences.

I agree. Data representation differences are at the core of portability
issues, particularly for C/C++ applications, which represent the largest
body of software out there.
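
A trivial illustration: in C, sizeof(long) is 4 bytes on 64-bit Windows
but 8 on most 64-bit Unix systems, while the Java widths printed below
are fixed by the language specification on every architecture (an
illustrative sketch only):

    // Primitive widths in Java are pinned by the language specification,
    // so data layout doesn't drift between architectures the way C/C++
    // integer types can.
    public class Widths {
        public static void main(String[] args) {
            System.out.println("int:    " + Integer.SIZE + " bits");   // always 32
            System.out.println("long:   " + Long.SIZE + " bits");      // always 64
            System.out.println("char:   " + Character.SIZE + " bits"); // always 16 (UTF-16 unit)
            System.out.println("double: " + Double.SIZE + " bits");    // always 64, IEEE 754
        }
    }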

> Moore's "Law" has all but vanquished resource limitations for all but
> the most demanding (and therefore niche) applications.

Also true.

> And, perhaps because of designers' clarity, or perhaps because of their
> laziness, we have settled almost universally on a set of data
> representations that are common across architectures: IEEE floating point,
> twos-complement integers in power-of-two byte sizes, and 8-bit ASCII.
> Big alphabets are still a (national) problem, and pointers are still
> often confused with other data types, but, by and large, the data
> wars are over. Even the completely arbitrary endianness problem is
> all but vanquished, with big endian machines switch-hitting and little
> endian machines enjoying their popularity.
>
> In this day and age, you really have to be creative (or malevolent)
> to come up with a legitimate portability barrier. ;-)

Not really. Write a large desktop application for the proprietary API
of any platform, be it MacOS or Windows, and you've got yourself a pretty
thick portability barrier.

If you write your application in a portable language/platform such as
Java (which is really the only successful one) you eliminate this
problem.

Many people adopt an economic approach and declare the markets
available within the Macintosh and UNIX communities to be too small to
be worth worrying about. This is certainly a valid business decision on
the desktop, but it certainly is not in the server room, which is where
alternative platforms are enjoying their biggest resurgence. Ironically,
this is being fueled primarily by environmental issues: server rooms
aren't big enough to host the number of boxes needed when using Windows
server-based products. Most server solutions *demand*, from a support
perspective, that they have the box all to themselves, so you can
maintain the patch levels that have been tested for that application.

> > It's pointless though to consider the argument in academic terms, only
> > in practical terms. Java meets its goals of portability. .NET has yet
> > to meet those goals, and frankly, I doubt it ever will. Microsoft
> > simply have no interest in making it easy to run Windows applications
> > on a non-Microsoft platform. You can argue that their system is
> > inherently portable, but it remains impractical to port Windows
> > applications.
>
> Your last point is perhaps the most significant--the clear motivation
> to *obstruct* portability "away" from your platform. This is the issue
> that perennially prevents any real Unix standard. Only the losers in
> the marketshare game *really* want portability--winners never do. ;-)

It prevents any real standard, period, Unix or Windows. Of course, it
will never become a problem for Microsoft while the x86 architecture is
one of the price/performance leaders, which it will no doubt continue
to be. However, good old Moore's Law means we're embedding OS's all
over the shop, creating numerous new market opportunities. The lack of
portability in the Windows world becomes an Achilles heel for vendors
using that platform, and they lose that market to competitors. While I
have no issue with 'winners' owning markets in theory, the real-world
limitations this exposes are very detrimental. Don't agree? Check the
release date for Internet Explorer 6.0, check its conformance to W3C
standards, etc.

It's certainly true that Microsoft will maintain the obstruction, and
will keep obstructing .NET from becoming a portable platform.

This is in essence my argument, and really, all technical issues aside,
Microsoft don't want portable Windows applications. I find it
frustrating that people argue "oh, it's just as portable, in theory"
when in this case the difference between theory and practice is a LOT bigger
in practice than it is in theory. Anyone who attempts to bypass these
portability constraints will most likely be sued or bought out of
existence, so it just will not happen.

Why people continue to put up with this I just don't understand. I
guess they just don't care. Personally, I'd love to have many of the
innovations that would have occurred on a more balanced playing field,
but when faced with the choice of more market control versus a
technically clean solution, I opt for technically clean every time,
perhaps to my own financial detriment.

> (Come to think of it, maybe Linux can eventually "mop up" in the market
> and then we'll have a *real* _de facto_ Unix standard. ;-)

This has been happening for years, and is really only a few steps away
from being complete. Sun is the only vendor left who is really pushing
their version of UNIX instead of Linux; everyone else has decided to
fold their own unique advantages back into the Linux kernel. At this
point it becomes the big bad guy Linux versus the other Open Source
alternatives ;-)

For the moment at least, Solaris provides several important technical
advantages that Linux doesn't have, so it'll be around for a while yet.
Personally though, I wish Sun would just 'give up' so that the _de
facto_ standard can truly emerge.

Once that happens, there'll be an open platform, owned by nobody, that
runs on every hardware architecture in existence. Certainly there are
interesting times ahead.

Matt

From: Michael J. Mahon on
mdj wrote:
> Michael J. Mahon wrote:
>
>>mdj wrote:
>>
>>>Paul Schlyter wrote:
>>>
>>>
>>>
>>>>So this bounds checking can not be turned off in Ada, for efficiency ?
>>>
>>>
>>>It's compiler dependent - but the Ada compilers I've used allow pretty
>>>much all runtime checking to be disabled. This is reasonable - after a
>>>degree of testing I guess... I've personally never written anything in
>>>Ada that ran slowly enough to bother with it.
>>
>>Turning off error/assertion checking after development and test and
>>before distributing software for use is like wearing life vests during
>>training, but taking them off before going to sea (to paraphrase
>>Dijkstra).
>
>
> Absolutely... Unless you really need the extra few cycles of speed,
> it's more sensible to leave it on. Some people, though, would prefer
> that their customers see an application crash rather than a helpful
> error message indicating that a programmer attempted to divide
> something by zero ;-)
>
> Even in this case, you could just compile it out per module as
> required.
>
> I've never understood why, after all these years and all the fancy
> features implemented in CPUs, they never grew the fairly obvious
> ability to throw an interrupt on overflow. It would mean such errors
> could be trapped with zero overhead in the no-error case.

A number of machines implemented "sticky" overflow, in which the
overflow indicator, if set, would stay set until reset. This allows
a single test at the end of each procedure, for example, but also
loses the precise point of the overflow.

One problem with your proposal for an overflow interrupt is that in
many quite ordinary computations, an overflow normally occurs. This
would require either enabling and disabling the interrupt with some
frequency or defining special forms of arithmetic ops that ignore
any interrupt.
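
To make the tradeoff concrete, here is a rough software simulation of a
sticky flag (a hypothetical Java helper class, just to show the shape of
it): each operation ORs into one flag, the caller tests once at the end,
and the identity of the overflowing operation is lost.

    // Software sketch of a "sticky" overflow flag. Every add sets the flag
    // on overflow and never clears it, so the caller can check once per
    // procedure instead of trapping on each operation, at the price of no
    // longer knowing which operation overflowed.
    public class StickyOverflow {
        private boolean overflowed = false;   // the sticky bit

        int add(int a, int b) {
            long wide = (long) a + b;
            if (wide != (int) wide) {
                overflowed = true;            // set, never cleared here
            }
            return (int) wide;                // wrapped result, as hardware would give
        }

        boolean checkAndClear() {
            boolean was = overflowed;
            overflowed = false;
            return was;
        }

        public static void main(String[] args) {
            StickyOverflow alu = new StickyOverflow();
            int x = alu.add(Integer.MAX_VALUE, 1);  // overflows, sets the flag
            x = alu.add(x, 5);                      // fine, but the flag stays set
            System.out.println("result=" + x + ", overflow seen=" + alu.checkAndClear());
        }
    }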

-michael

Parallel computing for 8-bit Apple II's!
Home page: http://members.aol.com/MJMahon/

"The wastebasket is our most important design
tool--and it is seriously underused."