From: Stephen Fuld on
jmfbahciv(a)aol.com wrote:
> In article <QVUJh.147039$5j1.80655(a)bgtnsc04-news.ops.worldnet.att.net>,
> Stephen Fuld <S.Fuld(a)PleaseRemove.att.net> wrote:
>> jmfbahciv(a)aol.com wrote:
>>> In article <MPG.206078dd61655fc398a0f7(a)news.individual.net>,
>>> krw <krw(a)att.bizzzz> wrote:
>>>> In article <et647p$8qk_016(a)s887.apx1.sbo.ma.dialup.rcn.com>,
>>>> jmfbahciv(a)aol.com says...
>> snip
>>
>>>>> Why does everybody keep assuming that PDP-10s have to be limited
>>>>> to 18-bit addressing? Isn't it simply a small matter of wiring
>>>>> to fetch more than 18bits for effective address calculations?
>>>> You have to encode those bits into the ISA somehow, hopefully in a
>>>> way that doesn't muck up every program ever written.
>>> Which bits? The indirect bit?
>> No, the bits needed to address memory. I don't know the PDP 10
>> architecture, but let's look at a generic example.
>
> I don't know byte-addressable architectures; so we're even ;-)

Don't assume that people who don't know the PDP 10 don't know word
addressability. I spent a lot of my career dealing with another 36-bit,
word-addressable architecture!
>
>> You have some sort
>> of load instruction to get the value of a data item into a register. So
>> it looks something like
>>
>> Load RegisterId, Address_of_Data_Item
>>
>> Each of these fields (and any other required, such as the indirect bit
>> you mentioned) must be encoded into the bits of the instruction. The
>> number of bits used for each field determines the maximum value for that
>> field.
>
> Only if you right-justify each field. I don't know hardware.
> COBOL and FORTRAN used to be able to read the first n-bytes of
> a record and ignored everything after n. Why can't you do a similar
> thing when designing a machine instruction format?

The hardware has to know whether to do it or not. For example, the CPU
reads the first 36 bits. It has to know whether those bits represent an
"old style" (original) PDP-10 instruction or the start of a "new
style" extended instruction, in which case it needs to read the next,
say, 36 bits to get the rest of the address. So the question becomes:
*How does the CPU know which thing to do?*



If all of the bits in the original instruction are already defined to
mean something else, then you can't use one of them or you would break
the programs that used them in the old way. You could add some sort of
mode switch to tell the CPU that it is now in "double word instruction
mode", and then you need some way of setting that mode (a new op code,
or a previously unused bit in an instruction-settable mode register, etc.).
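To make the mode-switch idea concrete, here's a toy sketch in Python (bit
layout partly invented; this is not real PDP-10 microcode) of a decoder whose
behavior depends on an external mode flag. Note that nothing in the bits
themselves tells the two streams apart; that's exactly the problem:

```python
# Toy sketch: a decoder that needs an explicit mode flag to decide
# whether an instruction is one 36-bit word (18-bit address) or two
# words (a hypothetical second word extends the address).

def decode(words, pos, extended_mode):
    """Decode one instruction starting at words[pos].
    Returns (opcode, address, next_pos)."""
    word = words[pos]
    opcode = (word >> 27) & 0o777        # top 9 bits, as on the PDP-10
    low18 = word & 0o777777              # low 18 bits: the classic address field
    if extended_mode:
        high = words[pos + 1]            # invented extension word
        return opcode, (high << 18) | low18, pos + 2
    return opcode, low18, pos + 1

# Old-style stream: one word per instruction.
old = [(0o200 << 27) | 0o001234]
assert decode(old, 0, extended_mode=False) == (0o200, 0o1234, 1)

# New-style stream: the very same first word, plus an extension word.
# Without the mode flag the decoder could not tell these streams apart.
new = [(0o200 << 27) | 0o001234, 0o000777]
op, addr, nxt = decode(new, 0, extended_mode=True)
assert addr == (0o777 << 18) | 0o1234 and nxt == 2
```

The decoder itself is trivial; the whole difficulty is in where the
`extended_mode` flag comes from and how old programs cope with it.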

The reason the COBOL and Fortran examples you gave worked is that
the knowledge of whether to read the rest of the record is logically
coded in the program. Some programs read all of the record, others read
less, but each program knew what to do. The CPU doesn't "know", since it
must be prepared to handle both types.


CPUs already
> have to pipeline bits and bytes so they can deliver faster
> CPU time to a calculation. I don't see why there is such difficulty
> picking up an old 18-bit address and placing it in the "new"
> 72-bit address field for the CPU calculation. This kind of stuff
> was already getting done by microcode in my auld days.

Again, doing that isn't hard. Knowing whether to do it or not is the
hard part. :-(

>> So if the PDP 10 used 18 bits for the address field, then it can
>> directly address 2**18 (262,144) things (words or bytes, depending on
>> the architecture) in memory. If you want to address more memory, you
>> need to somehow get more bits into the address field.
>
> Right. Those were the old programs. They knew they had a limit
> and the coders compensated. This thread is about creating a new
> CPU. The specs given was similar to a PDP-10 with, IIRC, two
> exceptions. One of them was the addressing range.
>
> Well, then you don't make a half-word 18 bits. My proposal
> here is to see if it is possible to design a machine instruction
> format that is extensible without falling over the same problems
> we, at DEC, had with the -10.

Sure it is. In fact, it is conceptually pretty easy. You can just use
some bits in the instruction to tell the CPU how long the instruction
is. This is essentially what variable length instruction sets do.
There is no problem doing that if you start out with a clean sheet
design and have that as a goal.

The problem is taking an existing instruction set, in this case the
PDP-10, and extending it in a way that was never anticipated. What has been
claimed in this thread is that the difficulty of doing that was a major
factor in limiting the PDP 10's growth. While I have no personal
experience to know whether that claim is true, it certainly seems
reasonable.

>> Presumably, you can't easily add a few bits to the length of the basic
>> instruction, as that would break existing programs.
>
> I'm asking why not?

Because then you have to have some way of telling the CPU whether it is
executing the "old" instructions or the "new" ones, without changing how
it treats the old ones.

> I don't know if it's online anymore but my USAGE file specification
> gave instructions about how any customer site could extend
> the USAGE records. As long as all new fields were added to the
> end of each record, the "old" code would read "new" formatted records
> with no error.
>
> So why can't you do the same thing with a new machine instruction
> format definition?

Presumably (and again, I don't know PDP 10 stuff), there was some
mechanism for the software to know where the next record started. So if
the old software read the first N bytes or words of the record, when it
went to read the second record, it or some software library it used
"knew" where to start that second read (a record length coded somewhere,
an end-of-record character, etc.). That is the information that is
missing in the PDP 10 ISA, so the CPU doesn't know where the next
instruction starts.
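The record analogy can be sketched like this (Python, with an invented
length-prefixed record format; the point is the mechanism, not any real
COBOL/Fortran file layout): each record carries its own length, so an old
reader that only understands the first fields can still step cleanly to the
next record. A fixed instruction word with no length field has no such marker.

```python
import struct

def write_records(recs):
    """Each record is stored with a 2-byte big-endian length prefix."""
    out = b''
    for rec in recs:
        out += struct.pack('>H', len(rec)) + rec
    return out

def read_first_n(buf, n):
    """Old-style reader: keep only the first n bytes of each record,
    but use the length prefix to find where the next record starts."""
    out, pos = [], 0
    while pos < len(buf):
        (length,) = struct.unpack_from('>H', buf, pos)
        body = buf[pos + 2 : pos + 2 + length]
        out.append(body[:n])          # ignore the "new" trailing fields
        pos += 2 + length
    return out

# Records later grew extra fields; the old reader still works.
buf = write_records([b'NAME......EXTRA', b'CODE..MORE'])
assert read_first_n(buf, 4) == [b'NAME', b'CODE']
```

The length prefix is precisely the piece of information a classic fixed-width
ISA doesn't carry, which is why the file trick doesn't transfer directly.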


--
- Stephen Fuld
(e-mail address disguised to prevent spam)
From: l_cole on
On Mar 18, 5:38 am, jmfbah...(a)aol.com wrote:
> In article <1174179234.007296.81...(a)y66g2000hsf.googlegroups.com>,
> l_c...(a)juno.com wrote:
>
>
>
> >On Mar 17, 3:44 am, jmfbah...(a)aol.com wrote:
> >> In article <45faca01$0$1342$4c368...(a)roadrunner.com>,
> >> Peter Flass <Peter_Fl...(a)Yahoo.com> wrote:
>
> >> >jmfbah...(a)aol.com wrote:
>
> >> >> There were many sane ways to move customers from the one product
> >> >> line to the other, IF that was a goal. The choice was the most
> >> >> insane method. This was part of the IBM thinking that was
> >> >> injected (sorry, Lynn) into middle management. IBM customers
> >> >> were used to being ordered around "for their own good".
>
> >> >Maybe in some respects, but many would say the reason for IBM's success
> >> >was that it always tried to maintain backwards-compatibility. A program
> >> >from the earliest 360 days (executable, not just source) will run the
> >> >same today on the most recent version of the hardware and OS. That's 42
> >> >years of compatibility!
>
> >> That is NOT maintaining backwards compatibility. You create a
> >> design that simply won't _break_ old code. Then you don't have
> >> to spend a single maintenance dollar on your customers' old
> >> code.
>
> >Okay, I'll bite ... if someone out
> >there thinks they understand what BAH
> >is getting at here, would you please
> >explain it to me in English?
> >I recall BAH mentioning something to
> >the effect that she had some difficulty
> >getting across what she means at times
> >and this was certainly the case for me
> >with this reply.
>
> OK. I figured this would be a problem. My apologies for not
> doing mess prevention earlier. ...

Apologies accepted. ;-)

> ... Of course, I'll make it worse :-)
> but I'll try not to.
>
> >ISTM that creating a design that "simply
> >won't _break_ old code" is pretty much
> >the definition of "backwards
> >compatibility" and doing so for decades
> >is "maintaining backwards compatibility".
>
> Maintenance requires funding. It means that _you_ the
> developer has to test all old code after you've written your
> new stuff so that you can ensure it didn't break. This can
> take pots of money, especially if the "old stuff" requires
> hardware you don't have.
>

Okay, I think I vaguely understand what
you're trying to say, although I'm not
sure that I can put it into words
myself even now.
Therefore, let me just say that I agree
with you that "[m]aintenance requires
funds."

Having worked as a software proctologist
in both the Unisys 2200 OS Development
and Continuation (maintenance) groups,
I'm well aware of the fact that some
folks view Continuation (maintenance)
as a money sink they think the Company
would be better off without ... if it
weren't for those pesky customers who
actually wanted their code (both new and
legacy) to work.

I certainly found it bothersome to have
to test out a fix in 3 different software
levels across 5 different types of 2200.
(Or was it 5 different software levels
on 3 different types of machine? ...
My memory isn't as good as it once was.)
Be that as it may, that's what I got paid
to do.

From my experience outside of bootstrap,
I found that the overwhelming majority
of bugs were caused by new code added
to the OS to support new software
features, not by user code somehow being
bothered by the fact that the new hardware
looked to them like some previous piece of
hardware.
In fact, I will go so far as to say that I
can't remember a single case of a customer
complaining because their new hardware
appeared to work the way some earlier
version did.

From my experience in bootstrap, where
I spent most of my time, I found that
the overwhelming bulk of my time was
spent fixing bugs introduced by new
code put in to support new hardware
which was often times touted as being
exactly like some previous piece of
hardware ... only "different".
Bootstrap (INT), along with I/O (IOC)
and hardware fault recovery (HFC), were
continually being hammered by new
hardware.
And there always seemed to be some bozo
in the wings who would argue that
changing software was no big deal since
*All You Have To Do Is Just* change
software once instead of changing each
piece of hardware that's manufactured.

But an interesting thing happened when
the hardware that OS software saw (as
well as what user software saw) stopped
changing (i.e. with the introduction of
the "Hardware-Software Independence"
spec, HSI-002).
INT, HFC, and IOC stopped being hammered
by new hardware (which was now supposed
to be backward compatible), and so life
became much simpler other than for the
occasional peripheral adapt.
In other words, the internal support
costs for these areas presumably should
have dropped the same way customer costs
should have although I have no idea
whether this actually showed up on any
balance sheet anywhere.

> In addition, IF the old code did break, stating that you maintain
> backwards compatibility implies that you will fix the customer's
> problem; ...

True, and that's what Unisys tries to do.
I assume that this is also the case with
IBM or any other company which tries to
maintain backward compatibility.

> ... this gives the old support (which you want to get rid of)
> the same classification of the code you do want to support. ...

If by "old support" you mean the old
software releases of the company, this
is also true, at least in the case of
Unisys, until the end-of-support date
is reached.
Before that time, however, old
releases can go through periods of
lessening support as is the case with
Unisys to try to reduce maintenance
costs to the company.

> ... It's
> a legal onus for you to fix all old code on your customers sites.

I'm not a lawyer and so I won't speak to
legal onuses, but AFAIK, what you have to
fix is *YOUR* code, not the customer's.
This seems perfectly reasonable to me if
your code is broken which is presumably
the case if your hardware is "backward
compatible" (i.e. the customer's software
won't notice, or at least won't care,
about the fact that it's running on a
piece of hardware that's different from
what it was originally designed to run
on, but might notice and care about
different host software).

This is no different than if you release
an entirely new machine with entirely
new software.
You, the company, should fix that which
doesn't work the way you say it should
and so you will either have to have a
support group or a very large legal
department (or both) even if you don't
strive for backward compatibility.

> Not only is that expensive, but now you have to "maintain" each
> and every software release you've ever delivered.
>

False.
As I indicated before, you have to
maintain support for those software
releases that you say you still support.
Backward compatibility, in particular
hardware backward compatibility,
does *NOT* prevent you from saying that
beyond date X, there will be no further
support of software release Y.
Unisys has done this for decades.

> I don't know if you know enough about business and can extrapolate
> all the implications of the above.
>

I don't know if I "know enough about
business and can extrapolate all the
implications of the above" either, but
what I do know is that I worked for a
company which still tries to maintain
the sort of backward compatibility
that you're talking about, and while
that company isn't doing well, I am
convinced that this situation is *NOT*
simply because it tries to maintain
backward compatibility.

Yes, support costs bucks, but that's
literally part of the cost of doing
business.
It also gets you bucks in the form of
repeat business.
AFAIK, Unisys is still selling new
2200's albeit not the way it did in
the "Good Old Days" to existing
customers.
Is DEC still selling new PDP-10's?

> >> I am assuming that you are using the word 'maintenance' very
> >> loosely but it cannot be used this way if you are designing
> >> tomorrow's CPU.
>
> >> This is the most important point of the design. Everything else
> >> is bits and bytes. If you have to maintain your customers' old
> >> code (which includes all of your old code), you'll go bankrupt.
> >> Period.
>
> >Again, if someone out there thinks
> >they understand what BAH is getting in
> >the last paragraph, would you please
> >explain it to me in English?
>
> >So long as a company can continue to
> >make more money than it loses, it isn't
> >going to go bankrupt. Period.
>
> If you have to support every software release you ever shipped,
> you will have to have a support infrastructure unique for each customer.
> I know of nobody who is in production software development who can
> do this with more than a few customers.
>
> As time goes on, you will have to have a development group
> for each customer site, too.
>

Again, backward compatibility does
*NOT* imply that you have to support
every software release that was ever
made.
You have to fix what is supported.
Even if the same bug exists in
previous releases, you are under no
obligation to go back and fix them
as well.
Therefore, the worst case scenario
you are envisioning simply need not
occur.

> >So simply "maintaining" a customer's
> >old code in no way shape or form
> >automatically implies bankruptcy.
>
> Sure it does. You can't make any money with creating
> new development products.
>

No, it doesn't.
You have to make more money than you
lose.
Supporting old code may not make you
money, but it also does *NOT*
automatically imply that you can't do
something else that will.
Therefore, bankruptcy doesn't
automatically follow from supporting
old code.

> >The fact that IBM seems to be pulling
> >off maintaining their customers' old
> >code (as well as their own) pretty
> >clearly demonstrate this is true.
>
> I'll bet IBM charges for keeping the "old" stuff working.

Don't know, but won't be surprised if
they did.
I see nothing wrong with asking a
customer to pay bucks for support for
company software that is no longer
officially supported.
That's no different from asking that
a customer pay for some piece of new
code that a customer wants the company
to write for them, which is also
something Unisys has done.

> So let me ask you this: would you, a PC owner, pay Microsoft
> $100K/year to maintain your 386 DOS 5.0 software?
>
> Note that I think I've underestimated the cost.
>

Probably not, but so what?
Your argument is clearly specious.
If I happened to have some old software which
relied on 386 DOS 5.0 software, there's nothing
preventing me from continuing to run it without
paying any bucks on my old copy of 386 DOS 5.0
software.
Microsoft doesn't charge me for continuing to
use its old unsupported software.
And if my old software ran just fine before on
386 DOS 5.0 software, there's no reason to
believe that it will suddenly stop doing so
tomorrow or anytime in the indefinite future
(barring user error or hardware glitches, of
course) so I have no reason to want to pay.

Besides, can you say, "FreeDOS"?

> /BAH


From: jmfbahciv on
In article <1174253082.775074.160140(a)y80g2000hsf.googlegroups.com>,
l_cole(a)juno.com wrote:
>On Mar 18, 5:38 am, jmfbah...(a)aol.com wrote:
>> In article <1174179234.007296.81...(a)y66g2000hsf.googlegroups.com>,
>> l_c...(a)juno.com wrote:
>>
>>
>>
>> >On Mar 17, 3:44 am, jmfbah...(a)aol.com wrote:
>> >> In article <45faca01$0$1342$4c368...(a)roadrunner.com>,
>> >> Peter Flass <Peter_Fl...(a)Yahoo.com> wrote:
>>
>> >> >jmfbah...(a)aol.com wrote:
>>
>> >> >> There were many sane ways to move customers from the one product
>> >> >> line to the other, IF that was a goal. The choice was the most
>> >> >> insane method. This was part of the IBM thinking that was
>> >> >> injected (sorry, Lynn) into middle management. IBM customers
>> >> >> were used to being ordered around "for their own good".
>>
>> >> >Maybe in some respects, but many would say the reason for IBM's success
>> >> >was that it always tried to maintain backwards-compatibility. A program
>> >> >from the earliest 360 days (executable, not just source) will run the
>> >> >same today on the most recent version of the hardware and OS. That's 42
>> >> >years of compatibility!
>>
>> >> That is NOT maintaining backwards compatibility. You create a
>> >> design that simply won't _break_ old code. Then you don't have
>> >> to spend a single maintenance dollar on your customers' old
>> >> code.
>>
>> >Okay, I'll bite ... if someone out
>> >there thinks they understand what BAH
>> >is getting at here, would you please
>> >explain it to me in English?
>> >I recall BAH mentioning something to
>> >the effect that she had some difficulty
>> >getting across what she means at times
>> >and this was certainly the case for me
>> >with this reply.
>>
>> OK. I figured this would be a problem. My apologies for not
>> doing mess prevention earlier. ...
>
>Apologies accepted. ;-)
>
>> ... Of course, I'll make it worse :-)
>> but I'll try not to.
>>
>> >ISTM that creating a design that "simply
>> >won't _break_ old code" is pretty much
>> >the definition of "backwards
>> >compatibility" and doing so for decades
>> >is "maintaining backwards compatibility".
>>
>> Maintenance requires funding. It means that _you_ the
>> developer has to test all old code after you've written your
>> new stuff so that you can ensure it didn't break. This can
>> take pots of money, especially if the "old stuff" requires
>> hardware you don't have.
>>
>
>Okay, I think vaguely understand what
>you're trying to say, although I'm not
>sure that I can put it into words
>myself even now.

Yea, it is difficult to do. Especially when each manufacturer
and/or developer had their own situations.

>Therefore, let me just say that I agree
>with you that "[m]aintenance requires
>funds."
>
>Having worked as a software proctologist
>in both the Unisys 2200 OS Development
>and Continuation (maintenance) groups,
>I'm well aware of the fact that some
>folks view Continuation (maintenance)
>as a money sink they think the Company
>would be better off without ... if it
>weren't for those pesky customers who
>actually wanted their code (both new and
>legacy) to work.
>
>I certainly found it bothersome to have
>to test out a fix in 3 different software
>levels across 5 different types of 2200.

Now you got it. You understand just fine.

>(Or was it 5 different software levels
>on 3 different types of machine? ...
>My memory isn't as good as it once was.)

It was probably all the above by the time you got through
a complete project cycle.

>Be that as it may, that's what I got paid
>to do.

As time goes on, more and more "situations" would have to be
vetted. You can spend your corporate dollars on old stuff,
which makes no money and no new customers, or you can pay
your brightest and bestest to design new stuff that will never
cause the old software to stop running and/or producing consistent
results.

It's much easier to pay your brightest extra man-weeks to do that
than to keep a cadre of hundreds (retraining them because of turnover)
on your payroll.

>
>>From my experience outside of bootstrap,
>I found that the overwhelming majority
>of bugs were caused by new code added
>to the OS to support new software
>features, not by user code somehow being
>bothered by the fact that the new hardware
>looked to them like some previous piece of
>hardware.
>In fact, I will go so far as to say that I
>can't remember a single case of a customer
>complaining because their new hardware
>appearing to work the way some earlier
>versions did.

Let me give you a design example. TOPS-10 had monitor calls
that allowed user mode code to request services from
the monitor such as I/O, date, time, etc. If we implemented
a new sub-call to the monitor call that looked up a file,
we added it to the end of the argument list. That way old code
would continue to work, and code that needed the new information
would simply add another word to the end of its arglist.
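That convention can be sketched like this (Python, with invented names; this
is not real TOPS-10 code): the service reads only the words the caller
actually supplied and defaults the rest, so old callers never notice the new
fields.

```python
# Hedged sketch of "new arguments go on the end of the block":
# the toy 'monitor call' below defaults any word the caller omitted.

def lookup_file(args):
    """args is the caller's argument list, oldest fields first."""
    name = args[0]                                    # original field
    mode = args[1] if len(args) > 1 else 'read'       # original field
    new_flag = args[2] if len(args) > 2 else False    # added later, at the end
    return (name, mode, new_flag)

# An old caller, built before new_flag existed, still works unchanged:
assert lookup_file(['FOO.TXT', 'write']) == ('FOO.TXT', 'write', False)
# A new caller simply appends one more word to its arglist:
assert lookup_file(['FOO.TXT', 'write', True]) == ('FOO.TXT', 'write', True)
```

The design choice is that absence of a word is itself meaningful: it always
decodes as "the old behavior", so no existing caller ever has to be rebuilt.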

>
>>From my experience in bootstrap, where
>I spent most of my time, I found that
>the overwhelming bulk of my time was
>spent fixing bugs introduced by new
>code put in to support new hardware
>which was often times touted as being
>exactly like some previous piece of
>hardware ... only "different".

<GRIN>

>Bootstrap (INT), along with I/O (IOC)
>and hardware fault recovery (HFC), were
>continually being hammered by new
>hardware.
>And there always seemed to be some bozo
>in the wings who would argue that
>changing software was no big deal since
>*All You Have To Do Is Just* change
>software once instead of changing each
>piece of hardware that's manufactured.

Have you ever read TW's RH20 project report? You'd get a kick out
of it.

>
>But an interesting thing happened when
>the hardware that OS software saw (as
>well what as user software saw) stopped
>changing (i.e. with the introduction of
>the "Hardware-Software Independence"
>spec, HSI-002).
>INT, HFC, and IOC stopped being hammered
>by new hardware (which was now supposed
>to be backward compatible), and so life
>became much simpler other than for the
>occasional peripheral adapt.
>In other words, the internal support
>costs for these areas presumably should
>have dropped the same way customer costs
>should have although I have no idea
>whether this actually showed up on any
>balance sheet any where.

Mark Crispin has made the claim that his TOPS-20 set of sources
has no bugs in it anymore. He was able to achieve this because
no new hardware support had to be added and shipped "yesterday".
>
>> In addition, IF the old code did break, stating that you maintain
>> backwards compatibility implies that you will fix the customer's
>> problem; ...
>
>True, and that's what Unisys tries to do.
>I assume that this is also the case with
>IBM or any other company which tries to
>maintain backward compatibility.

Sure. We tried very hard never to "maintain" backward compatibility.
Our solution was to make designs that could not break old code.

In the few cases where it was impossible to avoid, we gave
the customers three major monitor releases to adjust. Each
monitor release cycle was about two years, calendar time.
>
>> ... this gives the old support (which you want to get rid of)
>> the same classification of the code you do want to support. ...
>
>If by "old support" you mean the old
>software releases of the company, this
>is also true, at least in the case of
>Unisys, until the end-of-support date
>is reached.
>Before that time, however, old
>releases can go through periods of
>lessening support as is the case with
>Unisys to try to reduce maintenance
>costs to the company.

Our end-of-support dates were always tied with hardware. I can't
remember any that were software. We were a hardware company
that happened to ship production line software packages that
would allow the customer to get a system up and running.
After that, it was all up to the customer to do tweaking.
>
>> ... It's
>> a legal onus for you to fix all old code on your customers sites.
>
>I'm not a lawyer and so I won't speak to
>legal onuses, but AFAIK, what you have to
>fix is *YOUR* code, not the customers.

You first have to prove to your angry customer that it isn't
your code. In order to do that, you need a component-by-component
copy of the customer's system. That's when the suits from both
sides come in and muddy up the situation.

>This seems perfectly reasonable to me if
>your code is broken which is presumably
>the case if your hardware is "backward
>compatible" (i.e. the customer's software
>won't notice, or at least won't care,
>about the fact that it's running on a
>piece of hardware that's different from
>what it was originally designed to run
>on, but might notice and care about
>different host software).
>
>This is no different than if you release
>an entirely new machine with entirely
>new software.
>You, the company, should fix that which
>doesn't work the way you say it should
>and so you will either have to have a
>support group or a very large legal
>department (or both) even if you don't
>strive for backward compatibility.

Yup. That's why we never said "maintain backwards compatibility".
That was too broad a legal stroke.

>
>> Not only is that expensive, but now you have to "maintain" each
>> and every software release you've ever delivered.
>>
>
>False.
>As I indicated before, you have to
>maintain support for those software
>releases that you say you still support.

We didn't support n versions of an OS (or tried not to).

>Backward compatibility, in particular
>hardware backward compatibility,
>does *NOT* prevent you from saying that
>beyond date X, there will be no further
>support of software release Y.

Sure. If you are in the OS biz, announcing the end of
the support of an OS will lose you business.

>Unisys has done this for decades.

DEC did it once and never recovered.
>
>> I don't know if you know enough about business and can extrapolate
>> all the implications of the above.
>>
>
>I don't know if I "know enough about
>business and can extrapolate all the
>implications of the above" either, but
>what I do know is that I worked for a
>company which still tries to maintain
>the sort of backward compatibility
>that you're talking about, and while
>that company isn't doing well, I am
>convinced that this situation is *NOT*
>simply because it tries to maintain
>backward compatibility.
>
>Yes, support costs bucks, but that's
>literally part of the cost of doing
>business.
>It also gets you bucks in the form of
>repeat business.
>AFAIK, Unisys is still selling new
>2200's albeit not the way it did in
>the "Good Old Days" to existing
>customers.
>Is DEC still selling new PDP-10's?

The PDP biz is alive and well, AFAIK. The manufacturer does not
have to be named DEC. I am not a purist. I don't even care if the
company is called Micshit if good stuff is still being produced.

<snip differing business styles>

/BAH
From: jmfbahciv on
In article <wKfLh.96630$as2.70153(a)bgtnsc05-news.ops.worldnet.att.net>,
Stephen Fuld <S.Fuld(a)PleaseRemove.att.net> wrote:
>jmfbahciv(a)aol.com wrote:
>> In article <QVUJh.147039$5j1.80655(a)bgtnsc04-news.ops.worldnet.att.net>,
>> Stephen Fuld <S.Fuld(a)PleaseRemove.att.net> wrote:
>>> jmfbahciv(a)aol.com wrote:
>>>> In article <MPG.206078dd61655fc398a0f7(a)news.individual.net>,
>>>> krw <krw(a)att.bizzzz> wrote:
>>>>> In article <et647p$8qk_016(a)s887.apx1.sbo.ma.dialup.rcn.com>,
>>>>> jmfbahciv(a)aol.com says...
>>> snip
>>>
>>>>>> Why does everybody keep assuming that PDP-10s have to be limited
>>>>>> to 18-bit addressing? Isn't it simply a small matter of wiring
>>>>>> to fetch more than 18bits for effective address calculations?
>>>>> You have to encode those bits into the ISA somehow, hopefully in a
>>>>> way that doesn't muck up every program ever written.
>>>> Which bits? The indirect bit?
>>> No, the bits needed to address memory. I don't know the PDP 10
>>> architecture, but let's look at a generic example.
>>
>> I don't know byte-addressable architectures; so we're even ;-)
>
>Don't assume that people who don't know the PDP 10 don't know word
>addressability.

I wouldn't presume to assume that one.

> I spent a lot of my career dealing with another 36 bit,
>word addressable architecture!
>>
>>> You have some sort
>>> of load instruction to get the value of a data item into a register. So
>>> it looks something like
>>>
>>> Load RegisterId, Address_of_Data_Item
>>>
>>> Each of these fields (and any other required, such as the indirect bit
>>> you mentioned) must be encoded into the bits of the instruction. The
>>> number of bits used for each field determines the maximum value for that
>>> field.
>>
>> Only if you right-justify each field. I don't know hardware.
>> COBOL and FORTRAN used to be able to read the first n-bytes of
>> a record and ignored everything after n. Why can't you do a similar
>> thing when designing a machine instruction format?
>
>The hardware has to know whether to do it or not.

Of course. Each _new_ CPU design will learn the new way.
The goal, in this extensible design discussion, is to
design an instruction format where a shorter field length
will still work when the length has to be increased by n bits
to accommodate larger numbers.

> For example, the CPU
>reads the first 36 bits. It has to know whether those bits represent an
>"old style", or "original" PDP10 instruction or the start of a "new
>style", extended instruction, in which case it needs to read the next
>say 36 bits to get the rest of the address. So the question becomes
>*How does the CPU know which thing to do?*

The OS knows. You don't boot up a monitor built for a KA-10
on a KL-10. The OS booted on the CPU has been built using
a MACRO (or whatever machine instruction generator) that knows
about the new instructions.

>
>
>
>If all of the bits in the original instruction are already defined to
>mean something else, then you can't use one of them or you would break
>the programs that used them in the old way.

Right. One of our rules, never to be broken, was to never
redefine fields or values to mean something else. If we
needed a new field, we added it. We did not reuse an "old" field
or value. If you ever see our documentation, you will see
fields and values that were labelled "Reserved" or "Customer
Reserved". "Reserved" conveyed to the customer that we might
use the "blank" fields or values in the future; therefore a
smart customer would stay away from using them. We promised
never to use the Customer Reserved ones.
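The "Reserved" rule can be sketched in a few lines (Python, with a field
layout invented for the example): unused bits are documented as
reserved-and-written-as-zero, so a later revision can assign them without
disturbing any field an old program already depends on.

```python
# Sketch of the "Reserved" convention on a toy word layout.

FIELD_A  = 0o777      # bits 0-8: defined in version 1
RESERVED = 0o777000   # bits 9-17: reserved in v1, becomes a new field in v2

def pack_v1(a):
    """Version-1 writer: reserved bits always written as zero."""
    assert 0 <= a <= FIELD_A
    return a

def unpack_v2(word):
    """Version-2 reader: old field in the same place, new field carved
    out of the formerly reserved space."""
    a = word & FIELD_A
    b = (word & RESERVED) >> 9
    return a, b

# A v1 word read by v2 code: the new field harmlessly decodes as zero.
assert unpack_v2(pack_v1(0o42)) == (0o42, 0)
```

The scheme only works because everyone obeyed the rule: had a customer
squatted on the reserved bits, the v2 reader above would decode garbage.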



> You could add some sort of
>mode switch, to tell the CPU that it is now in "double word instruction
>mode", and then you need some way of setting that mode ( a new op code,
>or a previously unused bit in an instruction settable mode register, etc.)

But you are running hardware that doesn't have to know the old.
If there is a case where it does, invoking the old creates a fault
which the OS can then trap and dispatch to a software substitute.
That's how we handled KA floating-point instructions on KLs.
You don't care if it "takes longer". AAMOF, it is mandatory that
you don't care.

>
>The reason that the COBOL and Fortran examples you gave worked, are that
>the knowledge of whether to read the rest of the record is logically
>coded in the program.

No. The reason was that there was a law everybody obeyed: an
end-of-record character meant end of record. So it didn't matter
what the user program asked for, as long as the compiler knew
to advance to the first character after the end-of-record
character.

> Some programs read all the record, other read
>less, but each program knew what to do. The CPU doesn't "know" since it
>must be prepared to handle both types.

Not at all. Some opcodes will not need special handling. Those
opcodes that do need special handling can be trapped and let
software do the setup as a substitute for the hardware.

That's why it's important to design extensibility in the machine
instruction format first.
>
>
> CPUs already
>> have to pipeline bits and bytes so they can deliver faster
>> CPU time to a calculation. I don't see why there is such difficulty
>> picking up an old 18-bit address and placing it in the "new"
>> 72-bit address field for the CPU calculation. This kind of stuff
>> was already getting done by microcode in my auld days.
>
>Again, doing that isn't hard. Knowing whether to do it or not is the
>hard part. :-(

But I'm seeing that as the easy part :-).

In some cases, you will have to throw away certain opcodes. Then
you make encounters with those throw an interrupt or trap.

Don't most CPUs have an illegal opcode interrupt or trap?
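
They do, and the trap-and-substitute scheme described above can be
sketched in a few lines. This is an illustration only, with invented
opcode numbers and semantics, not a model of any real PDP-10 behavior:

```python
# Opcodes the hardware implements live in HARDWIRED; any other opcode
# raises the illegal-instruction trap, and the "OS" half of run_one()
# dispatches to a software substitute that produces the same result.

class IllegalOpcode(Exception):
    pass

def op_load(cpu, arg):              # hardwired: load memory word into reg 0
    cpu.regs[0] = cpu.mem[arg]

def op_double(cpu, arg):            # no hardware support; done in software
    cpu.regs[0] *= 2

HARDWIRED = {0o200: op_load}
SUBSTITUTES = {0o777: op_double}    # installed by the OS trap handler

class CPU:
    def __init__(self):
        self.regs = [0] * 16
        self.mem = {}

    def step(self, opcode, arg):
        """What the hardware does: execute or trap."""
        if opcode not in HARDWIRED:
            raise IllegalOpcode(opcode)
        HARDWIRED[opcode](self, arg)

    def run_one(self, opcode, arg):
        """The hardware step plus the OS's trap dispatch."""
        try:
            self.step(opcode, arg)
        except IllegalOpcode:
            SUBSTITUTES[opcode](self, arg)   # slower, but correct
```

The program never notices which path executed its instruction, which is
exactly the "you don't care if it takes longer" point.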
>
>>> So if the PDP 10 used 18 bits for the address field, then it can
>>> directly address 2**18 (262,144) things (words or bytes, depending on
>>> the architecture) in memory. If you want to address more memory, you
>>> need to somehow get more bits into the address field.
>>
>> Right. Those were the old programs. They knew they had a limit
>> and the coders compensated. This thread is about creating a new
>> CPU. The specs given was similar to a PDP-10 with, IIRC, two
>> exceptions. One of them was the addressing range.
>>
>> Well, then you don't make a half-word 18 bits. My proposal
>> here is to see if it is possible to design a machine instruction
>> format that is extensible without falling over the same problems
>> we, at DEC, had with the -10.
>
>Sure it is. In fact, it is conceptually pretty easy. You can just use
>some bits in the instruction to tell the CPU how long the instruction
>is. This is essentially what variable length instruction sets do.
>There is no problem doing that if you start out with a clean sheet
>design and have that as a goal.

But that [starting out with a clean sheet] is what this thread
is all about!
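
The "bits that tell the CPU how long the instruction is" idea from the
quoted paragraph can be sketched directly. The field widths here are
invented for illustration (a 2-bit length field at the top of a 36-bit
word), not an actual proposal:

```python
WORD_BITS = 36
LEN_SHIFT = WORD_BITS - 2           # length field in the top 2 bits

def fetch(memory, pc):
    """Return (words_of_instruction, next_pc).  Because the first word
    carries the instruction's own length, the fetch unit always knows
    where the next instruction starts, old format or new."""
    first = memory[pc]
    nwords = (first >> LEN_SHIFT) + 1   # 0b00 -> 1 word ... 0b11 -> 4 words
    return memory[pc:pc + nwords], pc + nwords
```

A clean-sheet design can reserve those bits from day one; retrofitting
them onto an ISA whose words are already fully assigned is the hard
case the thread keeps circling.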
>
>The problem is taking an existing instruction set, in this case the
>PDP10, and extending in a way that was never anticipated.

No, no, no. Savard gave an overview of his new thingie. Morten
said that, with the exception of a couple of things, the PDP-10
was spec'ed. That's why I can and am using the PDP-10 as a basis for
my posts. Otherwise, I wouldn't have peeped in this thread.

>claimed in this thread is that the difficulty of doing that was a major
>factor in limiting the PDP 10's growth. While I have no personal
>experience to know whether that claim is true, it certainly seems
>reasonable.
>
>>> Presumably, you can't easily add a few bits to the length of the basic
>>> instruction, as that would break existing programs.
>>
>> I'm asking why not?
>
>Because then you have to have some way of telling the CPU whether it is
>executing the "old" instructions or the "new" ones, without changing how
>it treats the old ones.

I would define the "new" instructions with a new opcode.
For instance, opcode 101 would be the class of instructions that
did full-word moves. Opcode 1011 is a move of a full 36-bit word from
memory into a register. Opcode 1012 is the other way. Two decades
later the shiny new CPU designer wants a "move two full 36-bit words
into zbunkregister" (new hardware component breakthrough). So he
defines that opcode to be 10150. (50 MOVE-flavored instructions
had been defined over the two decades.)
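
That never-reuse rule can be captured as a one-way opcode table: new
numbers may be defined, old numbers may never change meaning. The
numbers below follow the post's hypothetical 101x scheme and are not
real PDP-10 opcodes:

```python
# Opcode table for the hypothetical MOVE class.  New opcodes are added;
# existing ones can never be redefined (the "Reserved" discipline).

OPCODES = {
    1011: "move 36-bit word, memory -> register",   # original
    1012: "move 36-bit word, register -> memory",   # original
}

def define_opcode(number, meaning):
    """Add a new opcode; refuse to redefine an existing one."""
    if number in OPCODES:
        raise ValueError(f"opcode {number} already defined")
    OPCODES[number] = meaning

# Two decades later, the new designer extends the class:
define_opcode(10150, "move two 36-bit words into zbunkregister")
```

Old binaries keep their meaning because nothing they used was ever
given a second meaning; the new CPU simply understands more numbers.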
>
>> I don't know if it's online anymore but my USAGE file specification
>> gave instructions about how any customer site could extend
>> the USAGE records. As long as all new fields were added to the
>> end of each record, the "old" code would read "new" formatted records
>> with no error.
>>
>> So why can't you do the same thing with a new machine instruction
>> format definition?
>
>Presumably (and again, I don't know PDP 10 stuff), there was some
>mechanism for the software to know where the next record started.

It was an agreed standard that a particular character pattern
would always be the end of record pattern.

> So if
>the old software read the first N bytes or words of the record, when it
>went to read the second record,

Yes.

> it or some software library it used
>"knew" where to start that second read (a record length coded somewhere,
>an end-of record_ character, etc.) That is the information that is
>missing in the PDP 10 ISA, so the CPU doesn't know where the next
>instruction starts.

Right. And that's what I think needs to be designed. That's the
extensibility piece.

If a machine instruction format can be designed so that any of its
fields and values can be expanded, the biz wouldn't have to
go through a "need more bits and have to rewrite the world to
get them" paradigm.

I've never heard of hardware including extensibility in its design;
they left it as an exercise for the software to sort out.

/BAH

From: jmfbahciv on
In article <etm1e0$8qk_001(a)s869.apx1.sbo.ma.dialup.rcn.com>,
jmfbahciv(a)aol.com wrote:
>In article <wKfLh.96630$as2.70153(a)bgtnsc05-news.ops.worldnet.att.net>,
> Stephen Fuld <S.Fuld(a)PleaseRemove.att.net> wrote:
>>jmfbahciv(a)aol.com wrote:
>>> In article <QVUJh.147039$5j1.80655(a)bgtnsc04-news.ops.worldnet.att.net>,
>>> Stephen Fuld <S.Fuld(a)PleaseRemove.att.net> wrote:
<snip>

>> Some programs read all the record, other read
>>less, but each program knew what to do. The CPU doesn't "know" since it
>>must be prepared to handle both types.
>
>Not at all. Some opcodes will not need special handling. Those
>opcodes that do need special handling can be trapped and let
>software do the setup as a substitute for the hardware.
>
>That's why it's important to design extensibility in the machine
>instruction format first.

I've been thinking more about this trapping on an "undefined" or
old instruction opcode. This can also work for opcodes that haven't
been defined yet, so software (such as the OS) could use the new
opcodes coming in the new CPU design. This code could be run
on the old CPU, and the trap could dispatch to software that
emulates the new opcodes. That would allow software and hardware to
be developed concurrently, with less deadline contention.

This can't be a new idea. But it does allow the old hardware
to run the "new" hardware code.


<snip>

/BAH