From: jmfbahciv on
In article <86odmq22xt.fsf(a)brain.hack.org>,
Michael Widerkrantz <mc(a)hack.org> wrote:
>jmfbahciv(a)aol.com writes:
>
>> Then, as with TOPS-10, DEC essentially canned VMS.
>
>Are you aware that HP still delivers OpenVMS systems?

Yes. I'm aware of it.

> HP also still
>sells[1] and supports Alpha systems although they have moved on to
>IA64 (sometimes known as the Itanic in comp.arch). The largest server
>is the Integrity Superdome with 64 processors (128 cores, 32 cores
>supported under OpenVMS) and 2 Terabytes RAM in a single server.
>
>OpenVMS pages at HP:
>
> http://www.hp.com/go/openvms/
>
>[1] Until April 27, 2007. So buy now!

Q.E.D.

/BAH

From: jmfbahciv on
In article <g8SKh.89779$as2.55140(a)bgtnsc05-news.ops.worldnet.att.net>,
Stephen Fuld <S.Fuld(a)PleaseRemove.att.net> wrote:
>jmfbahciv(a)aol.com wrote:
>> In article <64zKh.153484$5j1.81907(a)bgtnsc04-news.ops.worldnet.att.net>,
>> Stephen Fuld <S.Fuld(a)PleaseRemove.att.net> wrote:
>>> jmfbahciv(a)aol.com wrote:
>>>
>>> snip
>>>
>>>> But the question here isn't about style but why (or how)
>>>> these managers assumed that customers could be dictated to
>>>> as if they were subordinates.
>>> Come on Barb. You know the answer to that.
>>
>> :-)
>>
>>> They assumed that because
>>> it was a good assumption because it was quite often true for over the
>>> previous three decades! When these managers were coming up, IBM was so
>>> dominant that they *could* dictate to customers and most of them would
>>> "obey". Note that I am not commenting on the value or goodness of the
>>> situation, nor of its applicability to the different environment of the
>>> DEC marketplace (where it clearly wasn't nearly as effective), just
>>> answering your question. :-)
>>
>> It made us a lot of money. IBM didn't mind because we were their
>> anti-monopoly cushion.
>
>Well, "lot" is a relative thing. A lot to DEC was pretty small to IBM.

Of course :-).
>
>>
>>> I am making the assumption
>>>> that most managers knew that products were being made and
>>>> sold to other people.
>>> Yes, but in small enough numbers that they could be largely ignored.
>>
>> I sometimes wonder if you moved them all to a mill environment
>> if that kind of self-maintained ignorance would be erased.
>
>I think you stated earlier that when the IBM managers moved to DEC, they
> didn't change their attitude and that they changed DEC (and hurt it).
> I don't know what the "truth" was, but you seem to already know the
>answer to your "wonder".

Nah. We had pretty corral^Woffices and each subgroup was in
its own building. There wasn't any cross product sanity check
at the real-work level. Given the Mill environment where nobody
could show their status with the clothes they wore, the status
was set by the work they accomplished; most of this work did
something useful.

We didn't have to start wearing suits until personnel got snobby.

/BAH



From: jmfbahciv on
In article <1174179234.007296.81830(a)y66g2000hsf.googlegroups.com>,
l_cole(a)juno.com wrote:
>On Mar 17, 3:44 am, jmfbah...(a)aol.com wrote:
>> In article <45faca01$0$1342$4c368...(a)roadrunner.com>,
>> Peter Flass <Peter_Fl...(a)Yahoo.com> wrote:
>>
>> >jmfbah...(a)aol.com wrote:
>>
>> >> There were many sane ways to move customers from the one product
>> >> line to the other, IF that was a goal. The choice was the most
>> >> insane method. This was part of the IBM thinking that was
>> >> injected (sorry, Lynn) into middle management. IBM customers
>> >> were used to being ordered around "for their own good".
>>
>> >Maybe in some respects, but many would say the reason for IBM's success
>> >was that it always tried to maintain backwards-compatibility. A program
>> >from the earliest 360 days (executable, not just source) will run the
>> >same today on the most recent version of the hardware and OS. That's 42
>> >years of compatibility!
>>
>> That is NOT maintaining backwards compatibility. You create a
>> design that simply won't _break_ old code. Then you don't have
>> to spend a single maintenance dollar on your customers' old
>> code.
>>
>
>Okay, I'll bite ... if someone out
>there thinks they understand what BAH
>is getting at here, would you please
>explain it to me in English?
>I recall BAH mentioning something to
>the effect that she had some difficulty
>getting across what she means at times
>and this was certainly the case for me
>with this reply.

OK. I figured this would be a problem. My apologies for not
doing mess prevention earlier. Of course, I'll make it worse :-)
but I'll try not to.

>
>ISTM that creating a design that "simply
>won't _break_ old code" is pretty much
>the definition of "backwards
>compatibility" and doing so for decades
>is "maintaining backwards compatibility".

Maintenance requires funding. It means that _you_, the developer,
have to test all old code after you've written your new stuff so
that you can ensure it didn't break. This can take pots of money,
especially if the "old stuff" requires hardware you don't have.

In addition, IF the old code did break, stating that you maintain
backwards compatibility implies that you will fix the customer's
problem; this gives the old code (which you want to get rid of)
the same classification as the code you do want to support. It's
a legal onus for you to fix all old code on your customers' sites.
Not only is that expensive, but now you have to "maintain" each
and every software release you've ever delivered.

I don't know if you know enough about business to extrapolate
all the implications of the above.
>
>
>> I am assuming that you are using the word 'maintenance' very
>> loosely but it cannot be used this way if you are designing
>> tomorrow's CPU.
>>
>> This is the most important point of the design. Everything else
>> is bits and bytes. If you have to maintain your customers' old
>> code (which includes all of your old code), you'll go bankrupt.
>> Period.
>>
>
>Again, if someone out there thinks
>they understand what BAH is getting at
>in the last paragraph, would you please
>explain it to me in English?
>
>So long as a company can continue to
>make more money than it loses, it isn't
>going to go bankrupt. Period.

If you have to support every software release you ever shipped,
you will have to have a support infrastructure unique to each customer.
I know of nobody who is in production software development who can
do this with more than a few customers.

As time goes on, you will have to have a development group
for each customer site, too.

>So simply "maintaining" a customer's
>old code in no way shape or form
>automatically implies bankruptcy.

Sure it does. You can't make any money creating
new development products.

>The fact that IBM seems to be pulling
>off maintaining their customers' old
>code (as well as their own) pretty
>clearly demonstrates this is true.

I'll bet IBM charges for keeping the "old" stuff working.
So let me ask you this: would you, a PC owner, pay Microsoft
$100K/year to maintain your 386 DOS 5.0 software?

Note that I think I've underestimated the cost.

/BAH


From: jmfbahciv on
In article <QVUJh.147039$5j1.80655(a)bgtnsc04-news.ops.worldnet.att.net>,
Stephen Fuld <S.Fuld(a)PleaseRemove.att.net> wrote:
>jmfbahciv(a)aol.com wrote:
>> In article <MPG.206078dd61655fc398a0f7(a)news.individual.net>,
>> krw <krw(a)att.bizzzz> wrote:
>>> In article <et647p$8qk_016(a)s887.apx1.sbo.ma.dialup.rcn.com>,
>>> jmfbahciv(a)aol.com says...
>
>snip
>
>>>> Why does everybody keep assuming that PDP-10s have to be limited
>>>> to 18-bit addressing? Isn't it simply a small matter of wiring
>>>> to fetch more than 18bits for effective address calculations?
>>> You have to encode those bits into the ISA somehow, hopefully in a
>>> way that doesn't muck up every program ever written.
>>
>> Which bits? The indirect bit?
>
>No, the bits needed to address memory. I don't know the PDP 10
>architecture, but let's look at a generic example.

I don't know byte-addressable architectures; so we're even ;-)

> You have some sort
>of load instruction to get the value of a data item into a register. So
>it looks something like
>
>Load RegisterId, Address_of_Data_Item
>
>Each of these fields (and any other required, such as the indirect bit
>you mentioned) must be encoded into the bits of the instruction. The
>number of bits used for each field determines the maximum value for that
>field.

Only if you right-justify each field. I don't know hardware.
COBOL and FORTRAN used to be able to read the first n bytes of
a record and ignore everything after n. Why can't you do a similar
thing when designing a machine instruction format? CPUs already
have to pipeline bits and bytes so they can deliver faster
CPU time to a calculation. I don't see why there is such difficulty
picking up an old 18-bit address and placing it in the "new"
72-bit address field for the CPU calculation. This kind of stuff
was already getting done by microcode in my auld days.
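
To make that concrete, here's a rough sketch in C of the picking and
packing I mean. The 9/4/1/4/18 split is the classic -10 instruction
word; the 64-bit container, the struct, and the wider internal address
field are just illustration, not anybody's real microcode.

    #include <stdint.h>
    #include <stdio.h>

    /* Classic PDP-10 instruction word: 9-bit opcode, 4-bit AC,
       1 indirect bit, 4-bit index field, 18-bit address.  The
       36-bit word is held right-justified in a 64-bit integer. */
    typedef struct {
        unsigned opcode;     /*  9 bits */
        unsigned ac;         /*  4 bits */
        unsigned indirect;   /*  1 bit  */
        unsigned index_reg;  /*  4 bits */
        uint64_t address;    /* 18 bits in the word, widened here */
    } decoded_insn;

    static decoded_insn decode(uint64_t word)
    {
        decoded_insn d;
        d.opcode    = (word >> 27) & 0777;     /* bits  0-8  */
        d.ac        = (word >> 23) & 017;      /* bits  9-12 */
        d.indirect  = (word >> 22) & 01;       /* bit  13    */
        d.index_reg = (word >> 18) & 017;      /* bits 14-17 */
        /* Zero-extend the old 18-bit address into a wider internal
           field; the old program never sees the difference. */
        d.address   = word & 0777777;          /* bits 18-35 */
        return d;
    }

    int main(void)
    {
        uint64_t word = 0201040000123;   /* MOVEI 1,123 as a worked example */
        decoded_insn d = decode(word);
        printf("opcode %03o ac %o addr %06llo\n",
               d.opcode, d.ac, (unsigned long long)d.address);
        return 0;
    }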



> So if the PDP 10 used 18 bits for the address field, then it can
>directly address 2**18 (262,144) things (words or bytes, depending on
>the architecture) in memory. If you want to address more memory, you
>need to somehow get more bits into the address field.

Right. Those were the old programs. They knew they had a limit
and the coders compensated. This thread is about creating a new
CPU. The specs given were similar to a PDP-10 with, IIRC, two
exceptions. One of them was the addressing range.

Well, then you don't make a half-word 18 bits. My proposal
here is to see if it is possible to design a machine instruction
format that is extensible without falling over the same problems
we, at DEC, had with the -10.

>
>Presumably, you can't easily add a few bits to the length of the basic
>instruction, as that would break existing programs.

I'm asking why not? I am not making the same presumption. The CPU
is already breaking up the word that contains the next instruction
into fields and stuff; I don't know if this is the hardware doing it
or the microcode that does the picking and packing. I don't think
it matters which is doing this work. I think all you have to do
is define the fields and their purpose (indirect bit, index, address,
opcode, etc.) and then left-justify each field in the hardware.
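
Here's a toy version of that in C, just to show the shape of the idea.
All the field widths, the 64-bit container, and the 16-bit extension
field are invented for illustration (the real -10's extended addressing
was done differently); the point is only that an old decoder masks off
the fields it knows about, a new one also reads bits appended below
them, and an old-format word with those bits zero names the same
location either way.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical "left-justified" layout in a 64-bit container.
       The fields the old machine knew about sit at the top of the
       word; the bits below them start out reserved (zero).  A later
       machine defines some of those reserved bits as extra high-order
       address bits, so an old-format word, which leaves them zero,
       still means the same address. */

    #define F_OPCODE(w)   (((w) >> 55) & 0x1FFu)    /*  9 bits             */
    #define F_AC(w)       (((w) >> 51) & 0xFu)      /*  4 bits             */
    #define F_INDIRECT(w) (((w) >> 50) & 0x1u)      /*  1 bit              */
    #define F_INDEX(w)    (((w) >> 46) & 0xFu)      /*  4 bits             */
    #define F_ADDR_LO(w)  (((w) >> 28) & 0x3FFFFu)  /* the original 18     */
    #define F_ADDR_HI(w)  (((w) >> 12) & 0xFFFFu)   /* 16 bits added later */

    /* What an old implementation computes. */
    static uint64_t old_effective_addr(uint64_t w)
    {
        return F_ADDR_LO(w);
    }

    /* What a new implementation computes: the appended field supplies
       high-order bits, so old words still mean what they meant. */
    static uint64_t new_effective_addr(uint64_t w)
    {
        return (F_ADDR_HI(w) << 18) | F_ADDR_LO(w);
    }

    int main(void)
    {
        uint64_t old_word = (uint64_t)0123 << 28;  /* address 123, rest zero */
        printf("old decoder: %llo   new decoder: %llo\n",
               (unsigned long long)old_effective_addr(old_word),
               (unsigned long long)new_effective_addr(old_word));
        return 0;
    }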



> There are several
>solutions involving using multiple instructions to build up a longer
>address but they have problems too. In general it is a non-trivial
>problem to increase addressing (especially in fixed length instruction
>ISAs) without really messing things up.

I think it is non-trivial to increase addressing capacities with
the same product level of architecture. This is a very unclear
sentence. Let me think about it to see if I can't put it into
English ASCII.

>
>As has been discussed here before, not allowing enough bits for larger
>addressability is one of the major mistakes that computer architects make.

Right. The key to this problem is that nobody designed extensibility
into their architecture.

I don't know if it's online anymore but my USAGE file specification
gave instructions about how any customer site could extend
the USAGE records. As long as all new fields were added to the
end of each record, the "old" code would read "new" formatted records
with no error.

So why can't you do the same thing with a new machine instruction
format definition?
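
The record trick, sketched in C with invented field names (this is not
the real USAGE layout, just the shape of it): version 2 only ever adds
fields at the end, so a version-1 reader picks up the leading bytes it
knows about and never notices the rest.

    #include <stdio.h>
    #include <string.h>

    /* Version 1 of the record is an exact prefix of version 2. */
    struct usage_v1 {
        char user[12];
        long cpu_seconds;
        long connect_seconds;
    };

    struct usage_v2 {
        char user[12];
        long cpu_seconds;
        long connect_seconds;
        long disk_blocks;     /* appended in v2; old readers ignore it */
        long pages_printed;   /* appended in v2 */
    };

    int main(void)
    {
        struct usage_v2 rec = { "BAH", 42, 3600, 100, 7 };
        struct usage_v1 old_view;

        /* An old reader reads only the bytes it knows about. */
        memcpy(&old_view, &rec, sizeof old_view);
        printf("%s used %ld cpu seconds\n",
               old_view.user, old_view.cpu_seconds);
        return 0;
    }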

/BAH

From: Quadibloc on
kenney(a)cix.compulink.co.uk wrote:
> In article <Cc6dnfNo65ghlmfYnZ2dnUVZ_tKjnZ2d(a)comcast.com>,
> gah(a)ugcs.caltech.edu (glen herrmannsfeldt) wrote:
..
> > the other is with a mode bit somewhere.
>
> See "Soul of a New Machine" for some designers opinion of mode bits.
..
Well, Edson de Castro (who remained nameless in that book, along with
the specific new machine in question, the identity of which was still
obvious *from* the mode bit issue specifically) is certainly entitled
to his opinion.

But since the Data General NOVA architecture used virtually all the
possible 16-bit opcodes, positioning the Eclipse/32 as an alternative
to the VAX on the basis that *we* don't have a nasty incompatible mode
bit (all our extended operations are added to the existing instruction
set) made it hugely inefficient in terms of instruction density.

It was clearly done for _marketing_ reasons, not _engineering_
reasons.

That is not always a bad thing: marketing reasons are good when they
relate to what people are going to use the machine for, and even the
seemingly "irrational" preferences of customers are things that a
company producing computers, or any other kind of product, has to take
into account.

When one simply makes up a feature so that one has something to hype,
however, I consider that to be the worst of the possible types of
marketing reason.

Thus, even if mode bits *are* bad, and it would be a good idea to
leave slack in one's assignment of opcode space so as to be able to
avoid them in future (the System/360 architecture and its descendants
may use a few mode bits, but even so, they have *not* done so for the
purpose of rearranging opcode space, for example), there are cases
where avoiding them simply costs too much.

John Savard