From: Jan Vorbrüggen on
> DEC had no plans to produce bigger VAX systems in the early 80s, and
> peripherals were the same as the 11 range, so commercial non-compute
> price/performance and expandability were non-competitive.

Yes, that's the one issue where I see the -10/-20 crowd as having a valid
point: the 8650 took much too long to arrive.

> DEC was producing useful VAX machines by then, but scared customers
> again with its attitudes towards Alpha: they appeared flakey to
> management who remembered their history. Had they kept pushing VAX as
> their "370" line, positioning Alpha as their RISC/workstation line, they
> would have had a good business, but they had a history of creating FUD
> about their own plans and sowed customer distrust.

I think most people trusted DEC during the VAX->Alpha transition: there
was substantial overlap, the quality of the systems and of the ported
software was very good, there was a substantial performance improvement -
both intrinsic and from support for more than 4 GB of memory - and there
was a plan forward that was actually being executed.

> HP may still be supporting OpenVMS on Itanics.

They do, and VMS is still being actively developed. AFAIK, even the Alpha is
not only being supported but being actively developed. Even VAX/VMS was being
updated, not just supported, just a few years ago. A significant number of
people are running VMS virtual machines on their PCs because they never ported
their applications to something else. It seems likely that these VMs are the
fastest VAXen ever built 8-).

Jan
From: jmfbahciv on
In article <460a8553$0$8961$4c368faf(a)roadrunner.com>,
Peter Flass <Peter_Flass(a)Yahoo.com> wrote:
>jmfbahciv(a)aol.com wrote:
>> In article <460a471e$0$28137$4c368faf(a)roadrunner.com>,
>> Peter Flass <Peter_Flass(a)Yahoo.com> wrote:
>>
>>>Morten Reistad wrote:
>>>
>>>>
>>>>Just watch the pain unfold when Vista cannot run your application.
>>>>With binary-only Microsoft products you will have an experience similar
>>>>to the one we had when DEC folded on us. There is no Plan B in this scenario.
>>>>
>>>>
>>>>
>>>>>>The lesson from DEC is that it can happen.
>>>>>>
>>>>>>Always have a Plan B.
>>>>
>>>As I already said, my plan B is Linux.
>>
>>
>> However, there are a lot of system owners who cannot use that
>> as their Plan-anythings because they are not in the software
>> biz.
>
>I got excited when I heard Dell was going to (again) ship retail PCs
>with Linux. Then I contacted support and found out they're going to
>pre-load some dumb version of DOS, and sell Linux separately. While
>everyone here would have no trouble installing a new OS, non-M$ systems
>will continue to lag until they're available pre-loaded and
>pre-configured. M$ knows this, and I'm sure their fingerprints are all
>over everything.

When JMF was dying (1994) and needed a laptop in order to speak, I
tried to order one from Dell. I was told that it would take
six months to install Unix. I'm an auld OS babe and smelt the
rat.


I have not tested this part of the OS biz since then.

/BAH
From: jmfbahciv on
In article <460a85f4$0$8961$4c368faf(a)roadrunner.com>,
Peter Flass <Peter_Flass(a)Yahoo.com> wrote:
>jmfbahciv(a)aol.com wrote:
>
>> In article <4609993a$0$18859$4c368faf(a)roadrunner.com>,
>> Peter Flass <Peter_Flass(a)Yahoo.com> wrote:
>>
>>>Nick Maclaren wrote:
>>>
>>>>It is a great pity that the new RISC systems (as distinct from previous
>>>>inventions of the approach) concentrated entirely on making the hardware
>>>>simple, often at the cost of making the software hell to get right.
>>>>Which is one of the reasons that many aspects of modern software are
>>>>so much worse than they were 25 years ago.
>>>>
>>>
>>>I, as a programmer, shouldn't have to worry about ordering the
>>>instructions so as not to lose cycles (pipeline slots, whatever).
>>>That's what hardware/microcode is for.
>>
>>
>> Sure. But you are also a system owner and a system manager.
>> Do the exercise and put each hat on and think from that point
>> of view. You are also the hardware procurer who makes the sole
>> decision of what you are going to purchase and plug in.
>>
>> /BAH
>
>You misunderstand. My argument is that this is, or should be, a
>function of hardware, microcode, etc. It's too low-level to force
>everyone to pay attention to it. It adds a lot of complexity to those
>compilers that do a good job of instruction ordering, and slows down
>stuff compiled by the rest.
>
I did not misunderstand. For a PC _system_ owner, it doesn't
matter which layer provides which functionality. The owner still
has to make the decisions. If those decisions require knowing
where each piece is implemented, then the generic PC owner decides
based on trial and error and (not or) gossip.

The only speed this owner will notice is a change of speed.
He won't care if it's 50% slower as long as the service is
perceived to be delivered in the same amount of wallclock time.
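
For reference, the kind of "instruction ordering" being argued about
is sketched below: a minimal, hypothetical C example (illustrative
function names, not anyone's real code). The scheduled form is the
transformation a good compiler performs so the programmer doesn't
have to.

#include <stddef.h>

/* Naive summation: every add depends on the previous one, so an
 * in-order pipeline waits out the full FP-add latency each iteration. */
double sum_naive(const double *x, size_t n)
{
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += x[i];
    return s;
}

/* Scheduled summation: four independent accumulators let the adds
 * overlap in the pipeline instead of serializing on one register.
 * (Reassociating floating-point adds can change rounding slightly.) */
double sum_scheduled(const double *x, size_t n)
{
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += x[i];
        s1 += x[i + 1];
        s2 += x[i + 2];
        s3 += x[i + 3];
    }
    for (; i < n; i++)              /* leftover elements */
        s0 += x[i];
    return (s0 + s1) + (s2 + s3);
}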

/BAH
From: Nick Maclaren on

In article <571pp6F2atkpuU1(a)mid.individual.net>,
Jan Vorbrüggen <jvorbrueggen(a)not-mediasec.de> writes:
|> > Real-world, but your second question is very relevant. The VAX was
|> > relatively constant in performance between workloads, but the Alpha
|> > varied by an incredible factor (ten or more, on practical workloads).
|>
|> Yes, and that is a very valid comment. I think there was some COBOL program
|> where the VAX was faster than the Alpha, but I don't remember whether it was
|> translated or recompiled (the latter case would be even more surprising).

Not really. There were some things that the Alpha was dire at, and the
program could have been dominated by them. If it was, a little recoding
would probably have sped it up significantly, quite likely by more than
a factor of 2.

|> I think the factor of 2 came from the benchmark workload that DEC used to
|> define "VAX MIPS", so it was actually geared towards the VAX. On anything to
|> do with floating point, the Alpha was much faster.

Not in my experience. It was MUCH faster on 'straight through' codes
that disabled all error detection, but got rather unhappy at the sort
of codes that had a lot of unpredictable branches and indirection
through just-calculated registers. And it got REALLY unhappy if you
enabled reliable numeric error detection, especially in combination
with the previous problems.
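
To make the contrast concrete, a minimal sketch (illustrative C with
hypothetical function names, not taken from any of the actual codes):

#include <stddef.h>

/* "Straight-through" numeric code: independent floating-point work, no
 * data-dependent branches, no indirection.  A deeply pipelined machine
 * keeps its functional units busy on this. */
double axpy_sum(const double *x, const double *y, double a, size_t n)
{
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += a * x[i] + y[i];
    return s;
}

/* Branchy, indirect code: the branch depends on just-loaded data and
 * the next address is computed on the fly, so the pipeline stalls on
 * mispredictions and address dependencies nearly every iteration. */
double chase_sum(const double *v, const size_t *next, size_t start,
                 size_t n)
{
    double s = 0.0;
    size_t i = start;
    while (n--) {
        if (v[i] < 0.0)     /* data-dependent, hard-to-predict branch */
            s -= v[i];
        else
            s += v[i];
        i = next[i];        /* indirection through a just-computed value */
    }
    return s;
}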

If I recall, it was nominally a factor of 4-8 faster, but was only about
a factor of 2-4 faster in practice even on the best codes and comparable
on the worst ones. All of the codes I am referring to were numeric ones,
nominally working in floating-point.


Regards,
Nick Maclaren.
From: jmfbahciv on
In article <56v56fF2aqk2nU3(a)mid.individual.net>,
Jan Vorbrüggen <jvorbrueggen(a)not-mediasec.de> wrote:
>> No. What happened with that was they sent out signals that VMS
>> was going the way of TOPS-10. The customers were savvy enough
>> to do their own migration plans off the platform without telling
>> anybody.
>
>At the VAX to Alpha transition? Nonsense.

No. JMF worked on that one. I'm talking about after that
was done. I started reading the DEC newsgroup in 1995. There
was a guy who believed that DEC could do no wrong and kept
posting the latest news and stuff going on. Every post he made
repeated things I'd heard before, in the late 70s and early 80s,
about TOPS-10. The signs were there. I do not make the
claim that DEC management knew what was going on; they certainly
didn't know at the time we got cancelled the first time ...
and the second time.

/BAH