From: Paul Schlyter on
In article <S6mdnYNMG-caHhnZnZ2dnUVZ_vidnZ2d(a)comcast.com>,
Michael J. Mahon <mjmahon(a)aol.com> wrote:

> Paul Schlyter wrote:
>
>> Some programming languages even had these formatting rules built-in.
>> In e.g. FORTRAN (up to FORTRAN-77), these rules had to be obeyed:
>>
>> Column 1: C or * marked the whole line as a comment
>> Columns 2-6: Labels went here - 1-5 decimal digits
>> Column 7: A non-space character here marked this line as a continuation line
>> Columns 8-71: The program statement went here
>> Columns 72-80: Comment - anything here was ignored by the compiler
>
> Actually, it was:
>
> Column 1-5: Statement number, or "C" in column 1 for comment lines
> Column 6: Non-blank marks continuation line (often 1-9 to indicate
> sequence of the continuations if more than one)
> Columns 7-72: FORTRAN Statement
> Columns 73-80: ID/sequence field, ignored by compiler

Thanks for the correction. Yep, my FORTRAN skills are getting rusty -- it's been
some 20 years since I last coded in that language.
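
For anyone whose FORTRAN is equally rusty, here is a minimal sketch in C of
those column rules in action -- purely illustrative, not any real compiler's
front end:

#include <stdio.h>
#include <string.h>

/* Classify one line of fixed-form FORTRAN source per the column rules
   above (columns are 1-based):
     col 1      : 'C' (or '*' in many compilers) marks a comment line
     cols 1-5   : statement number (label), if any
     col 6      : non-blank, non-zero marks a continuation line
     cols 7-72  : the statement itself
     cols 73-80 : ID/sequence field, ignored by the compiler        */
static void classify(const char *line)
{
    char card[81];
    size_t len = strlen(line);

    /* Pad short lines out to a full 80-column card image. */
    memset(card, ' ', 80);
    card[80] = '\0';
    memcpy(card, line, len < 80 ? len : 80);

    if (card[0] == 'C' || card[0] == 'c' || card[0] == '*')
        printf("comment      |%.72s|\n", card);
    else if (card[5] != ' ' && card[5] != '0')
        printf("continuation |%.66s|\n", card + 6);
    else
        printf("statement    |label:%.5s|%.66s|\n", card, card + 6);
    /* card[72..79] -- the sequence field -- is never examined. */
}

int main(void)
{
    classify("C     COMPUTE THE AREA OF A CIRCLE");
    classify("  100 AREA = 3.14159 * R *");
    classify("     1       R");
    return 0;
}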


> The sequence field was often left blank on manually keypunched cards,
> but was almost always filled with some combination of a "deck" ID and
> a sequence number on machine-punched decks, as were "binary" decks.
>
> I remember many times using an IBM 82 card sorter to restore order
> to a deck containing gravity-induced entropy. ;-)
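
(An aside for anyone who never had the pleasure: the IBM 82 sorts on one
column per pass, so restoring a dropped deck amounts to an LSD radix sort --
one pass per sequence column, least significant first. A toy version in C,
with cards as 80-character strings and a fully numeric sequence field
assumed:)

#include <stdio.h>
#include <string.h>

#define NCARDS 64   /* toy deck size */
#define WIDTH  80

/* One pass of the sorter: a stable sort of the deck on one column.
   Assumes every card's sequence field is fully numeric; blanks or
   alphabetics would land on the floor, as they did in real life. */
static void sort_on_column(char deck[][WIDTH + 1], int n, int col)
{
    char pockets[NCARDS][WIDTH + 1];
    int k = 0;
    for (char d = '0'; d <= '9'; d++)       /* empty pockets in order */
        for (int i = 0; i < n; i++)
            if (deck[i][col - 1] == d)
                strcpy(pockets[k++], deck[i]);
    memcpy(deck, pockets, (size_t)n * sizeof deck[0]);
}

/* Restoring sequence order: one pass per column, 80 down to 73 --
   least significant digit first, i.e. an LSD radix sort. */
static void restore_deck(char deck[][WIDTH + 1], int n)
{
    for (int col = WIDTH; col >= 73; col--)
        sort_on_column(deck, n, col);
}

int main(void)
{
    static char deck[3][WIDTH + 1];
    const char *seq[] = { "00000030", "00000010", "00000020" };
    for (int i = 0; i < 3; i++) {
        memset(deck[i], ' ', WIDTH);
        deck[i][WIDTH] = '\0';
        memcpy(deck[i] + 72, seq[i], 8);    /* columns 73-80 */
    }
    restore_deck(deck, 3);
    for (int i = 0; i < 3; i++)
        printf("%.8s\n", deck[i] + 72);     /* prints 10, 20, 30 order */
    return 0;
}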
>
> There is an interesting historical reason for the FORTRAN compiler
> to ignore columns 73-80. FORTRAN was first implemented on the IBM 704
> computer, and that machine and its successors, the 709, 7090, 7040,
> and 7094, all used a console card reader that read binary cards in
> "row binary". Since these machines had 36-bit words, the standard
> binary card format was 12 pairs of 36-bit words, read from successive
> rows of the card, and therefore covering only the first 72 columns.
>
> As a result, the computer could not read columns 73-80 from the console
> card reader, and those columns were already conventionally used for
> deck sequence numbers. FORTRAN merely continued that convention.
> And, of course, what the computer could not read, the compiler must
> ignore. ;-)
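
(To make the arithmetic concrete: 12 rows x 72 columns = 864 bits = 24 words
of 36 bits per card. A sketch in C of the per-row packing, carrying each
36-bit word in the low bits of a uint64_t -- illustrative only:)

#include <stdint.h>
#include <stdio.h>

/* One card row in row binary: the first 72 of the 80 column bits,
   packed as a pair of 36-bit words (carried here in the low 36 bits
   of a uint64_t). Twelve rows per card => 24 words. Columns 73-80
   never reach the machine at all. */
static void pack_row(const uint8_t col_bits[80], uint64_t *w1, uint64_t *w2)
{
    *w1 = *w2 = 0;
    for (int c = 0; c < 36; c++)            /* columns 1-36  */
        *w1 = (*w1 << 1) | (col_bits[c] & 1);
    for (int c = 36; c < 72; c++)           /* columns 37-72 */
        *w2 = (*w2 << 1) | (col_bits[c] & 1);
    /* col_bits[72..79] (columns 73-80) are simply ignored,
       just as the console reader ignored them. */
}

int main(void)
{
    uint8_t row[80] = {0};
    uint64_t w1, w2;
    row[0] = row[71] = 1;   /* punches in columns 1 and 72 */
    row[79] = 1;            /* a punch in column 80... */
    pack_row(row, &w1, &w2);
    printf("%09llx %09llx\n", (unsigned long long)w1,
           (unsigned long long)w2);   /* ...never shows up */
    return 0;
}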
>
> For example, see:
>
> www.atkielski.com/PDF/data/fortran.pdf
>
> -michael
>
> Parallel computing for 8-bit Apple II's!
> Home page: http://members.aol.com/MJMahon/
>
> "The wastebasket is our most important design
> tool--and it is seriously underused."
--
----------------------------------------------------------------
Paul Schlyter, Grev Turegatan 40, SE-114 38 Stockholm, SWEDEN
e-mail: pausch at stockholm dot bostream dot se
WWW: http://stjarnhimlen.se/
From: fred mueller on
Paul Schlyter wrote:
> In article <S6mdnYNMG-caHhnZnZ2dnUVZ_vidnZ2d(a)comcast.com>,
> Michael J. Mahon <mjmahon(a)aol.com> wrote:
>
>> Paul Schlyter wrote:
>>
>>> Some programming languages even had these formatting rules built-in.
>>> In e.g. FORTRAN (up to FORTRAN-77), these rules had to be obeyed:
>>>
>>> Column 1: C or * marked the whole line as a comment
>>> Columns 2-6: Labels went here - 1-5 decimal digits
>>> Column 7: A non-space character here marked this line as a continuation line
>>> Columns 8-71: The program statement went here
>>> Columns 72-80: Comment - anything here was ignored by the compiler
>> Actually, it was:
>>
>> Column 1-5: Statement number, or "C" in column 1 for comment lines
>> Column 6: Non-blank marks continuation line (often 1-9 to indicate
>> sequence of the continuations if more than one)
>> Columns 7-72: FORTRAN Statement
>> Columns 73-80: ID/sequence field, ignored by compiler
>
> Thanks for the correction. Yep, my FORTRAN skills are getting rusty -- it's been
> some 20 years since I last coded in that language.
>

If you think you are rusty, it's been 45 years for me, and that was on the
IBM 7094 Michael mentions. And just think -- that was state of the art then.
It was fun getting the deck back together if you were clumsy and dropped it.
Speaking from experience :-).

From: Michael J. Mahon on
mdj wrote:
> Paul Schlyter wrote:

<snip>

> My concern is the manner in which small libraries or copy/paste chunks
> of code pollute the relatively portable space of newer languages. In
> .NET, you can issue a keyword and switch into old-school mode. It
> doesn't exactly provide an environment that encourages good design. Why
> you would include features that good developers won't use is beyond me,
> unless of course it's lousy developers you're catering to, which may
> well be the point.
>
>
>>I suppose you mean "the commercial world" when you say "the real
>>world". Yes, the commerical world is a continuous hectic race where
>>there's not really any time to do solid work. It's more important
>>that the product is flashy and that it appears early. Buggy? Of
>>course, but so what? Let customer support take care of the
>>complaining customers while we're developing the next product to
>>appear early and be flashier still .... the lifetime of a software
>>product is nowadays so short anyway.....
>
>
> It's actually irrelevant what sector we speak of - productivity
> enhancements are productivity enhancements. While there are sectors
> where you can 'afford' the extra time investment required to develop in
> legacy languages, there's little reason to do so.

The previous exchange neglects to make the important distinction between
time spent "up front" on design and implementation, and time spent
"after the fact" on support and maintenance.

It is a sad but inescapable fact of commercial life that there is never
time to do a job right, but always time to try to make it work.

One of my development laws is: "'Quick and dirty' is never quick but
always dirty."

The commercial pressure to get a product out the door that Paul refers
to is so real that it generally precludes the design and implementation
team from doing what they would do in the "best of all possible worlds"
and instead condemns them to shipping a product that has many structural
flaws. Those flaws will cost dearly over the next few years, but, given
the structure of corporate software teams, it is unlikely that senior
team members will have to deal with much of the flak.

Management will be rewarded for "making the schedule" and the elevated
support costs won't hit the fan until months later--when they can be
blamed on an inexperienced implementation team.

Matt is making an idealistic argument for what is achievable with great
discipline--and there's nothing wrong with that! But in the "real
world", discipline is much harder to come by, and almost impossible to
stick to without (rare) management support.

>>In such an environment it's probably best to throw away old code and
>>let everyone reinvent their wheels once more .... with humps and bumps
>>that there's no time to polish away.
>
>
> In this case, the old wheel has humps and bumps with regards to
> portability and security. Should we throw those away? Absolutely.

Never assume that doing something over will mean doing it better.
Life is full of counterexamples.

It will only be done better if a higher quality design and
implementation can be done, and that's a big "if". For one thing,
management always thinks of the bad old solution as a _solution_,
so they are unwilling to invest much time and effort in re-solving
a problem "just" to obtain some "airy-fairy benefit" on the *next*
manager's watch... ;-(

>>>This is the real-world effect of Microsoft's design decision (pollute
>>>the new language with old problems) versus the Java model. Still
>>>hobbling with legacy for no reason whatsoever other than a couple of very
>>>poorly conceived design ideas. And ones for which history has already
>>>shown a simple, clean solution.

Many of the short-sighted additions to otherwise clean languages are
there *exactly* to solve short-term, schedule-driven problems.

We have a long history as a species of mortgaging the future for the
present. Whoever said "Pay me now or pay me later" neglected to
mention the effect of high interest rates. ;-)

As I often say, we tend to use our nose for a wall detector. It works
very well in the sense that it detects all the walls, but by the time
it works, many of the wall's consequences are already felt. ;-)

>>>"Those who cannot learn from history are doomed to repeat it...."

And, statistically, that would be all of us... ;-)

>>Actually, there are lots of people who do this, for their enjoyment.
>>I'm referring to vintage computing of course, and this very newsgroup is
>>part of that movement.
>>
>>Yep, it belongs to "the real world" too....
>
>
> Of course, but when your platform isn't evolving you don't need
> evolving development methodologies. The older techniques are adequate,
> and in the case of the Apple II, a great deal of fun in such a
> constrained environment. However, old techniques only scale so far.
> In the case of C/C++, those limits have been reached, or nearly so.
> There's still a large problem domain where these tools are the most
> appropriate choice. But many new problem domains demand tools with
> fewer restrictions, particularly in terms of development time.
> Sometimes this involves using a slightly more constrained language.
> Less is more!

Software folks are more likely to be afflicted with grandiosity than
hardware folks. Perhaps it's the stronger engineering discipline of
the hardware world, perhaps it's the real smoke when something "blows
up", or perhaps it's because hardware people live *constantly* with
constricting limits that discipline their dreams.

Software folks seldom experience real limits anymore. They find it
all too easy to imagine that they can accomplish *anything* through
programming (even if they don't understand how to do it ;-).

Most of the problems of today's software are a result of attempting to
deal with more complexity than we can actually manage--and on a short
schedule. ;-) The irony is that much of the complexity comes from
aspects of "solutions" that are entirely optional--like GUI eye candy.

<fogeymode>

Most of what is done with computers today is much like what was done
decades ago--text editing and formatting, modest number crunching.
Only now, instead of spending 20 minutes choosing your words when
writing a letter, you spend 10 minutes writing and 10 minutes choosing
fonts. ;-)

High quality image processing is relatively new, because of its demand
for large memories and fast processors, but the basic image processing
operations are purely mathematical and easily disciplined. The same is
true for most "media" processing.

Pasting databases together over the web is an example of an emergent
capability, but one that is largely done using scripting languages.

Sometimes it's hard to see just what real value is being created in
this endless hierarchy of levels of interpretation. If anything
turns out to be really useful, then it could be rewritten "flatter"
to save about a factor of 1000 in computing resources!

Maybe in computers, too, power corrupts. ;-)

</fogeymode>

-michael

Parallel computing for 8-bit Apple II's!
Home page: http://members.aol.com/MJMahon/

"The wastebasket is our most important design
tool--and it is seriously underused."
From: Michael J. Mahon on
Paul Schlyter wrote:
> In article <GfqdnShMo7JOFRnZnZ2dnUVZ_tKdnZ2d(a)comcast.com>,
> Michael J. Mahon <mjmahon(a)aol.com> wrote:
>
>
>>Paul Schlyter wrote:
>>
>>>In article <1149383070.763983.291190(a)h76g2000cwa.googlegroups.com>,
>>>mdj <mdj.mdj(a)gmail.com> wrote:
>>>
>>>
>>>
>>>>Paul Schlyter wrote:
>>>>
>>>>
>>>>
>>>>>I fully agree with that! The API is the specification, not the
>>>>>implementation. There might be some parts of the library which
>>>>>can be called by outside code but wasn't intended to be called in
>>>>>that way - they're not part of the API, even though they're part of
>>>>>the library!
>>>>
>>>>Of course, if you use features of a library that aren't part of its
>>>>documented interface you'll eventually be cursed by almost everybody.
>>>>
>>>>Hey, we're almost back on topic, considering this was supposed to be
>>>>about 'undocumented opcodes' :-) Of course, my feelings on using
>>>>undocumented library calls are much the same as my feelings on
>>>>undocumented opcodes. They are very similar problems.
>>>
>>>In the Apple II world we had other similar situations: calling Monitor
>>>ROM routines at some fixed address. Or calling routines in the Applesoft
>>>BASIC ROMs.
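
(A present-day restatement of the same trap, sketched in C with invented
names: the helper below is part of the library but not of the API, and
anyone who declares it themselves earns the cursing mdj describes:)

/* mathlib.h -- the documented API (entirely hypothetical) */
double ml_mean(const double *xs, int n);

/* mathlib.c -- the implementation */

/* Internal helper: declared in no header, but it has external
   linkage, so outside code that writes its own declaration can
   still call it. It is part of the library, not of the API, and
   may change signature or vanish in any release. */
double ml_sum(const double *xs, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += xs[i];
    return s;
}

double ml_mean(const double *xs, int n)
{
    return ml_sum(xs, n) / n;
}

/* renegade.c -- the caller who will eventually be cursed: */
extern double ml_sum(const double *xs, int n);  /* self-written decl */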
>>
>>And, interestingly, this also returns us to another earlier theme of this
>>thread--how widespread use of undocumented features can create a barrier
>>to the creation of new, improved implementations.
>>
>>The Apple ROM was not significantly changed until the //e, where much
>>of the actual code was placed in a bank-switched area. Much of the F8
>>region became stubs at documented entry points vectoring to the actual
>>routines. Updating the F8 region of the ROM was known to be a minefield
>>of compatibility issues.
>>
>>The notion of defining the ROM entry points more architecturally was
>>not widespread at the time the Apple II was designed, and the impact
>>of the subsequent loss of control over ROM code became a problem.
>>
>>(I've always wondered how much "compatibility issues" and how much
>>"renegotiation issues" factored into the decision to never update
>>the Applesoft ROMs to fix bugs...)
>>
>>Later systems used a combination of less documentation and more
>>complexity to make calls into the middle of ROM less likely. Still
>>not an ideal solution, but one well adapted to the Apple II. ;-)
>>
>>-michael
>
>
> IBM learnt from these mistakes by providing entry points to their ROM
> BIOS in another way: instead of using fixed addresses, they reserved
> 16 of the 256 possible interrupts as documented entry points for the
> ROM BIOS. Early PC's came with as much technical documentation as
> the Apple II did, including full schematics and assembly source code
> listing of the ROM BIOS.
>
> Using soft interrupts as entry points on the Apple II would have been
> infeasible -- but Apple could have used a JMP table instead, positioned
> near the beginning or the end of the ROM address space. CP/M used that
> method for entry points to its BIOS: a series of JMP operations at the
> very beginning of the memory block used by the CP/M BIOS.

Yes, but that would have used 3 bytes for each entry point--a
non-negligible amount of ROM for a 2KB monitor!

In retrospect, it would have been a nice idea, but doing it in the
original Apple II would have cost capability. Even adding autostart
required step and trace to be removed.
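
Expressed in C terms, the interrupt-vector and JMP-table schemes amount to
the same discipline: callers index a small published table instead of
learning addresses inside the ROM, so the internals can move freely between
revisions. A sketch with invented slot names (and note the cost Michael
mentions: a 16-entry JMP table is 48 of the monitor's 2048 bytes):

#include <stdio.h>

/* The CP/M idea expressed in C: the table's layout is the frozen,
   documented interface; the routines behind it can be rewritten or
   moved in every ROM revision without breaking callers. Slot
   numbers and names are invented for illustration. */
static void cold_start(void)   { puts("cold start"); }
static void warm_start(void)   { puts("warm start"); }
static void read_key(void)     { puts("read keyboard"); }
static void write_screen(void) { puts("write character"); }

void (*const rom_entry[])(void) = {
    cold_start,     /* slot 0 */
    warm_start,     /* slot 1 */
    read_key,       /* slot 2 */
    write_screen,   /* slot 3 */
};

int main(void)
{
    rom_entry[2]();   /* call "read keyboard" by slot number, never
                         by the routine's actual address in ROM */
    return 0;
}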

-michael

Parallel computing for 8-bit Apple II's!
Home page: http://members.aol.com/MJMahon/

"The wastebasket is our most important design
tool--and it is seriously underused."
From: Peter van Merkerk on
Paul Schlyter wrote:

> IBM learnt from these mistakes by providing entry points to their ROM
> BIOS in another way: instead of using fixed addresses, they reserved
> 16 of the 256 possible interrupts as documented entry points for the
> ROM BIOS. Early PC's came with as much technical documentation as
> the Apple II did, including full schematics and assembly source code
> listing of the ROM BIOS.
>
> Using soft interrupts as entry points on the Apple II would have been
> infeasible -- but Apple could have used a JMP table instead, positioned
> near the beginning or the end of the ROM address space.

...just like Commodore did, starting with the first PET model.