From: Michael J. Mahon on
Paul Schlyter wrote:
> In article <gpadnZTfo-dlPB_ZnZ2dnUVZ_qmdnZ2d(a)comcast.com>,
> Michael J. Mahon <mjmahon(a)aol.com> wrote:
>
>
>>mdj wrote:
>>
>>>Paul Schlyter wrote:
>>
>><snip>
>>
>>>>I don't think one can talk about namespaces at all regarding macros in
>>>>preprocessors. The preprocessor does not know what a namespace is. It
>>>>hardly knows anything about the underlying source language. Heck, you
>>>>can even replace reserved words with macros -- try to do THAT using
>>>>the global namespace in a normal way.....
>>>
>>>
>>>Why? Namespaces are a natural extension present in most modern
>>>languages to avoid collisions that occur frequently in larger programs.
>>>Macros being redefined by included headers from an unrelated piece of
>>>code is a real problem in C/C++ programs.
>>>
>>>Of course, most good programmers utilise some form of poor-man's
>>>namespacing by prefixing their preprocessor names with something that's
>>>likely to be unique. This illustrates the point that the feature would
>>>be useful.
>>
>>In the "olden days", assemblers frequently had HEAD "x" pseudo ops that
>>would treat all identifiers encountered from then on as if they had an
>>"x" prefixed to them, until the next HEAD operation.
>>
>>This didn't produce any visible change in the labels, but it did serve
>>to create "local" scopes if the HEAD characters were well chosen.
>>
>>-michael
>
>
> In the Apple II world, the S-C Assembler implemented local labels which
> had special names: a dot '.' followed by a decimal number from 0 to 99.
> The local labels were visible only between one normal label and the
> next -- after the next normal label, all local label names could be reused.

Yes, this is a common convention for defining the scope of local labels.

Frankly, having used a number of such systems, I prefer having a pseudo
op to explicitly discard local labels, since it allows the occasional
global label to be defined within a local label scope. (Error exits
and cleanup are often usefully shared between subroutines.)

-michael

Parallel computing for 8-bit Apple II's!
Home page: http://members.aol.com/MJMahon/

"The wastebasket is our most important design
tool--and it is seriously underused."
From: Michael J. Mahon on
Paul Schlyter wrote:
> In article <1149383070.763983.291190(a)h76g2000cwa.googlegroups.com>,
> mdj <mdj.mdj(a)gmail.com> wrote:
>
>
>>Paul Schlyter wrote:
>>
>>
>>>I fully agree with that! The API is the specification, not the
>>>implementation. There might be some parts of the library which
>>>can be called by outside code but wasn't intended to be called in
>>>that way - they're not part of the API, even though they're part of
>>>the library!
>>
>>Of course, if you use features of a library that aren't part of its
>>documented interface you'll eventually be cursed by almost everybody.
>>
>>Hey, we're almost back on topic, considering this was supposed to be
>>about 'undocumented opcodes' :-) Of course, my feelings on using
>>undocumented library calls are much the same as my feelings on
>>undocumented opcodes. They are very similar problems.
>>
>>Matt
>
>
> In the Apple II world we had other similar situations: calling Monitor
> ROM routines at some fixed address. Or calling routines in the Applesoft
> Basic ROM's.

And, interestingly, this also returns us to another earlier theme of this
thread--how widespread use of undocumented features can create a barrier
to the creation of new, improved implementations.

The Apple ROM was not significantly changed until the //e, where much
of the actual code was placed in a bank-switched area. Much of the F8
region became stubs at documented entry points vectoring to the actual
routines. Updating the F8 region of the ROM was known to be a minefield
of compatibility issues.

The notion of defining the ROM entry points more architecturally was
not widespread at the time the Apple II was designed, and the impact
of the subsequent loss of control over ROM code became a problem.

(I've always wondered how much "compatibility issues" and how much
"renegotiation issues" factored into the decision to never update
the Applesoft ROMs to fix bugs...)

Later systems used a combination of less documentation and more
complexity to make calls into the middle of ROM less likely. Still
not an ideal solution, but one well adapted to the Apple II. ;-)

-michael

Parallel computing for 8-bit Apple II's!
Home page: http://members.aol.com/MJMahon/

"The wastebasket is our most important design
tool--and it is seriously underused."
From: mdj on

Paul Schlyter wrote:
> In article <1149381913.929713.169160(a)i40g2000cwc.googlegroups.com>,
> mdj <mdj.mdj(a)gmail.com> wrote:
>
> > Paul Schlyter wrote:
> >
> > How so?
>
> By making it possible to give a subroutine an argument of one type
> while it expected that argument to be of another type.

Unlike C/C++, variable arguments in Java are typed. You can declare the
variable list to be of type Object (in reality, Object[]), but you
still need to cast the elements back to something useful. Essentially,
you handle Objects of the types that are applicable, and anything else
you can either ignore or reject with a runtime exception.

It's actually very typesafe.


> > Exactly. The point is to avoid these situations by design. There's no
> > good reason to have machine dependencies within the same class in an OO
> > language. That's poor design.
>
> If you were to write your application from scratch today, that's the
> way to go of course. But have you ever heard about legacy code? Code
> reuse? If not, welcome to the real world! If an old working piece of
> code can be made to run on a new platform with only some small changes
> here and there, that might be preferable to rewriting it all from
> scratch. If so, conditional compilation is a better way to maintain
> that code than to create two versions of what's essentially the same
> code.

:-) This is the essence of the point. It is VASTLY better to create a
small binding to legacy code via a portable mechanism (as Java and
other languages do) than to inherit the no-longer-scalable features of
legacy languages. Otherwise, you end up with new code with legacy
problems!

In the real world, real code has real legacy issues. These issues cause
problems with portability, security, and developer productivity. You
have a choice: either continue with the legacy approaches, or learn
from the mistakes, wrap the code in a portable interface and move on.
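
To make that concrete, here is a rough C sketch of the "wrap it and move
on" idea -- all names are invented for illustration. The conditional
compilation is confined to one small binding file, so the rest of the
code never sees a platform test:

/* portable_io.h -- hypothetical portable interface */
#ifndef PORTABLE_IO_H
#define PORTABLE_IO_H

void io_put_char(char c);   /* the only call application code makes */

#endif /* PORTABLE_IO_H */

/* portable_io.c -- the one small binding that knows about platforms */
#include "portable_io.h"

#if defined(LEGACY_PLATFORM)
/* old, non-portable routine assumed to exist in the legacy code base */
extern void legacy_putc(char c);
void io_put_char(char c) { legacy_putc(c); }
#else
#include <stdio.h>
void io_put_char(char c) { (void)putchar(c); }
#endif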

In the real world, keeping your development pace agile, so that you're
not eaten by newer products that don't have your legacy issues, is a
real problem. There are real solutions.

This is the real-world effect of Microsoft's design decision (pollute
the new language with old problems) versus the Java model: still
hobbled by legacy for no reason other than a couple of very poorly
conceived design ideas, and ones for which history has already shown
a simple, clean solution.

"Those who cannot learn from history are doomed to repeat it...."

> >> I don't think one can talk about namespaces at all regarding macros in
> >> preprocessors. The preprocessor does not know what a namespace is. It
> >> hardly knows anything about the underlying source language. Heck, you
> >> can even replace reserved words with macros -- try to do THAT using
> >> the global namespace in a normal way.....
> >
> > Why?
>
> Because namespaces belong to the programming language, and macros know
> nothing about the underlying language. Macros are nothing but text
> substitution.

This is the case in C/C++. But the mere fact that C/C++ doesn't have
macro namespaces is not a particularly solid argument that namespaces
for macros are a bad idea. I mean, really, every C/C++ programmer with
more than a little experience emulates the namespace concept with
prefixes. Why? Because you need it!
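
For instance (a contrived C sketch with invented names), two headers can
only coexist because each one fakes a namespace with a prefix:

/* graphics.h (hypothetical) */
#define GFX_BUFFER_SIZE 8192          /* "GFX_" is the poor-man's namespace */
#define GFX_MAX(a, b)   ((a) > (b) ? (a) : (b))

/* network.h (hypothetical) */
#define NET_BUFFER_SIZE 1500          /* same trick, different module */

/* Without the prefixes, both headers would fight over BUFFER_SIZE, and
   whichever came last would win -- typically with only a redefinition
   warning to show for it. */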

> > Namespaces are a natural extension present in most modern languages
> > to avoid collisions that occur frequently in larger programs.
> > Macros being redefined by included headers from an unrelated piece of
> > code is a real problem in C/C++ programs.
>
> Macros by themselves don't generate external symbols though. The linker
> never sees the macro names.
<snip>
> Note that the macros are "global" only over the source file, not over
> the entire application.

Of course, but includes include includes (lol), which include others,
which then perhaps define macros that collide with ones you've already
defined. The scope over which a macro symbol exists is potentially the
entire application, and it's essentially impossible to determine locally.

Heck, you even have to use a convention just to ensure a header file is
processed only once, rather than repeatedly or, worse, recursively.
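
The usual convention is the include guard -- a minimal sketch, with an
invented header name. Note that the guard macro itself has to be given
a unique (prefixed) name, or you're back to the collision problem:

/* mylib_widget.h (hypothetical) */
#ifndef MYLIB_WIDGET_H        /* skip the body if we've seen it already */
#define MYLIB_WIDGET_H

struct mylib_widget {
    int id;
};

#endif /* MYLIB_WIDGET_H */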

> > Semantically speaking Java is a lot more like Smalltalk than C++. It's
> > a nice bridge between the dynamic world and the static world.
>
> The other extreme could be represented by JavaScript, where all typing
> and binding are done at runtime, and where all objects are polymorphic.

Or, better, by a language like Ruby, which doesn't suffer from the
semi-functional quirkiness of JavaScript.

> The most important difference here is not what Java allows you to do,
> but what Java prevents you from doing. In Java OO is mandatory while in
> C++ it's an option. You can even do OO programming in plain C,
> although it means more work for the programmers -- e.g. the "this"
> pointer must always be passed explicitly to methods, constructors and
> destructors must be called explicitly, etc.

Indeed, in plain C you can do 'better' OO than in C++ by implementing a
real dynamic binding system ;-)
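
Roughly like this -- a toy C sketch with invented names, where "this" is
passed explicitly, construction and destruction are explicit calls, and
a function pointer in the struct provides genuine run-time dispatch:

#include <stdio.h>
#include <stdlib.h>

typedef struct Shape Shape;
struct Shape {
    double size;
    void (*draw)(Shape *self);           /* the "virtual" method */
};

static void draw_square(Shape *self)
{
    printf("square of size %g\n", self->size);
}

Shape *shape_new(double size, void (*draw)(Shape *))  /* constructor */
{
    Shape *self = malloc(sizeof *self);  /* "this" created explicitly */
    if (self == NULL)
        exit(EXIT_FAILURE);
    self->size = size;
    self->draw = draw;
    return self;
}

void shape_delete(Shape *self)                         /* destructor */
{
    free(self);
}

int main(void)
{
    Shape *s = shape_new(2.0, draw_square);
    s->draw(s);               /* caller passes "this" by hand */
    shape_delete(s);
    return 0;
}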

The things you're prevented from doing in Java are prevented because
they've proven to be more trouble than they're worth, and there are
other techniques which allow the same things to be implemented without
rehashing error-prone or non-portable paradigms.

These restrictions were chosen not because of the whim of some engineer who preferred
things this way, but because research had shown where the common errors
and pitfalls were.

Matt

From: mdj on

Michael J. Mahon wrote:

> And, interestingly, this also returns us to another earlier theme of this
> thread--how widespread use of undocumented features can create a barrier
> to the creation of new, improved implementations.
>
> The Apple ROM was not significantly changed until the //e, where much
> of the actual code was placed in a bank-switched area. Much of the F8
> region became stubs at documented entry points vectoring to the actual
> routines. Updating the F8 region of the ROM was known to be a minefield
> of compatibility issues.
>
> The notion of defining the ROM entry points more architecturally was
> not widespread at the time the Apple II was designed, and the impact
> of the subsequent loss of control over ROM code became a problem.

Yeah - I used to think the best approach would've been an additional
softswitch and an extra ROM chip, providing absolute compatibility with
the old AutoStart ROM. Considering the cost of manufacture of the IIe,
it would've been a simpler solution.

That said, the engineers who wrote the IIc and enhanced IIe ROMs (I'm
not much of a fan of the original IIe ROM) did a fantastic job.

The only real issue for me was a hardware one: the IIe and IIc had a
feature that would re-enable the ROM on reset, which annoyed
me a great deal. I know it was necessary to implement the warm boot,
but it would have been nice if that hardware feature only worked while
PB0 was active.

> (I've always wondered how much "compatibility issues" and how much
> "renegotiation issues" factored into the decision to never update
> the Applesoft ROMs to fix bugs...)

I never really did understand why they licensed a new BASIC from
Microsoft instead of expanding Integer to include the missing features.

> Later systems used a combination of less documentation and more
> complexity to make calls into the middle of ROM less likely. Still
> not an ideal solution, but one well adapted to the Apple II. ;-)

The API-based approach used in the IIgs was quite nice, particularly
since it allowed the OS to patch broken ROM code at boot time. That in
itself caused problems, but only if one ignored Apple's explicit
directions to NOT jump to absolute locations in ROM.

But then there were applications that had to be modified to run on
ROM03 IIgs's. Some people never learn ....

From: Lyrical Nanoha on
On Mon, 5 Jun 2006, mdj wrote:

> Yeah - I used to think the best approach would've been an additional
> softswitch and an extra ROM chip, providing absolute compatibility with
> the old AutoStart ROM. Considering the cost of manufacture of the IIe,
> it would've been a simpler solution.

If you're only running 48K apps on DOS 3.x, that's easy. Use NEWBASIC
LOADER from Beagle Basic to load FPBASIC off the 1980 DOS 3.3 System
Master. Presto. Apple ][+ ROM on the LC.

> That said, the engineers who wrote the IIc and enhanced IIe ROMs (I'm
> not much of a fan of the original IIe ROM) did a fantastic job.

I never cared for the 6502 //e ROM either. Bug city. Ew.

> I never really did understand why they licensed a new BASIC from
> Microsoft instead of expanding Integer to include the missing features.

Because their own project got bogged down in details...

-uso.