From: Charles Shannon Hendrix on
On 2007-03-29, jmfbahciv(a)aol.com <jmfbahciv(a)aol.com> wrote:

> Where were the bottlenecks on the Alphas you saw? My guess would
> be disk. That would be normal in the evolution of system gear
> development.

The Alpha was implemented in stages. The original Alpha paper described
a CPU that actually took several generations for DEC to build.

The initial CPUs were really bad in key areas, and there were nasty
surprises lurking in various corners.

Branch prediction was pretty poor, so any code that was sensitive to
mispredicted branches was going to give you grief, just for starters.
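
As a rough illustration (my own sketch in C, nothing Alpha-specific),
here is the kind of data-dependent branch that hurts on a machine with
weak prediction, and a branch-free rewrite that sidesteps it:

    #include <stdio.h>
    #include <stdlib.h>

    /* A hot loop with a data-dependent branch: on random data the
     * branch is unpredictable, so a CPU with weak branch prediction
     * pays a misprediction penalty on roughly every other iteration.
     */
    long sum_branchy(const int *a, long n)
    {
        long sum = 0;
        for (long i = 0; i < n; i++)
            if (a[i] >= 0)          /* unpredictable on random data */
                sum += a[i];
        return sum;
    }

    /* The same computation written branch-free: compilers typically
     * turn the ternary into a conditional move or a mask, so
     * prediction quality no longer matters.
     */
    long sum_branchless(const int *a, long n)
    {
        long sum = 0;
        for (long i = 0; i < n; i++)
            sum += (a[i] >= 0) ? a[i] : 0;
        return sum;
    }

    int main(void)
    {
        enum { N = 1000000 };
        int *a = malloc(N * sizeof *a);
        for (long i = 0; i < N; i++)
            a[i] = rand() - RAND_MAX / 2;   /* random signs */
        printf("%ld %ld\n", sum_branchy(a, N), sum_branchless(a, N));
        free(a);
        return 0;
    }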

The barrier/trap mechanism meant some code caused frequent backtracking
and long "pauses", and it had some really strange performance behavior
with memory access.

When it was running right, it was amazing.

The 21264 was the first one to really start working nicely, but only
the later versions lived up to the original promise.

I never got to see any of the last generation produced, but maybe one
day I'll snag one from eBay to play with, paying more for shipping than
it is worth.

--
shannon "AT" widomaker.com -- ["There are nowadays professors of
philosophy, but not philosophers." ]
From: Charles Shannon Hendrix on
["Followup-To:" header set to alt.folklore.computers.]

>>Were there any projects at all that at least got started thinking about SMP
>>or clustered PDP-11 systems?
>
> Sure. JMF worked on a Unix that ran quite well. However, as far
> as we can tell, those sources no longer exist.

So he worked on a UNIX that did SMP and/or clustering on PDP-11
hardware (modified, I assume)?

What was the hardware like? Did they increase addressing or anything,
or just tie 2-N PDP-11 CPUs together?

>>> They were. The product line lasted that long because they
>>> were the stopgap. The OS development of the -11s was the
>>> way DEC^WDigital "trained" their future VMS developers
>>> and (probably more important) learned corporate folklore.
>>
>>I guess they were pretty upset that the PDP-11 was still being used well into
>>the 1990s.
>
> I am talking about late 80s.

That's fine, but I'm talking about the present, where there are still
PDP-11s in operation.

A local helicopter simulator didn't move off the PDP-11 until around 2002.



--
shannon "AT" widomaker.com -- ["There are nowadays professors of
philosophy, but not philosophers." ]
From: Charles Shannon Hendrix on
["Followup-To:" header set to alt.folklore.computers.]

On 2007-03-28, jmfbahciv(a)aol.com <jmfbahciv(a)aol.com> wrote:

> I've spent quite a bit of my thinking time trying to figure out
> how to do the single task of software support with 200 million
> systems. I still don't have it. Micshit is trying by using the
> internet and edictive practices. That's not working either.

It occurs to me that some problems are almost too big.

Think of support as requiring a certain amount of bandwidth, where
bandwidth is the volume of messages sent between customers and support
staff.

With 200 million systems, that's a lot of messages.

Even if every customer message were 100% perfect, with a great
description of the problem, and the solution were easy, that would
still be a huge amount of bandwidth.

My contention is that, at least with everything we know how to do today,
the amount of data far exceeds the bandwidth of any support structure we
can create and also pay for.
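
To put rough numbers on it (every figure below is an assumption I
picked just to show the scale, not real data):

    #include <stdio.h>

    /* Back-of-envelope sketch of the support-bandwidth argument.
     * All of the inputs are assumed values for illustration only.
     */
    int main(void)
    {
        double systems       = 200e6; /* installed base              */
        double msgs_per_year = 2.0;   /* per system -- optimistic    */
        double mins_per_msg  = 15.0;  /* staff time to handle one    */
        double staff_min_yr  = 60.0 * 40.0 * 48.0; /* one tech-year  */

        double total_mins = systems * msgs_per_year * mins_per_msg;

        printf("messages/year: %.0f\n", systems * msgs_per_year);
        printf("staff needed : %.0f\n", total_mins / staff_min_yr);
        return 0;
    }

Even with those optimistic inputs it comes out to roughly 50,000
full-time support people, before a single hard problem shows up.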

We could help things out a lot, of course, by greatly reducing the need
for support.

By this I don't just mean reliability, but also reducing useless
complexity and redundancy where it brings no benefit or can't be made
optional.

> Number one rule is to not ship security holes and have a backout
> plan when you do.

Microsoft claims that's exactly what they do today.

I suppose you mean a backout plan that doesn't suck... :)

> I haven't thought of any way to do this. Micshit's answer is an
> "as is" which was anathema to the manufacturers of the past.

Not only that, but they've worded their EULA so carefully they could
almost deliberately attack you and you'd find you agreed to it.



--
shannon "AT" widomaker.com -- ["There are nowadays professors of
philosophy, but not philosophers." ]
From: CBFalconer on
jmfbahciv(a)aol.com wrote:
>
.... snip ...
>
> Neither would matter. Look, if you increase your "CPU speed" by
> twice, your system will then be constantly waiting on I/O because
> the CPU got its job done faster. Your system software and usage
> had been tweaked over the years to accommodate the behaviour of a
> VAX with its peripherals (this includes memory). Now you replace
> the CENTRAL processing unit with something that goes twice as fast.

And the system simply switches to another process while waiting for
i/o. No problem.
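
For a single job the arithmetic is Amdahl-style: if a fraction p of
its time is CPU-bound and the CPU gets s times faster, the job speeds
up by 1/((1-p) + p/s). A sketch (the 50% CPU-bound fraction is my
assumption, not a measured number):

    #include <stdio.h>

    int main(void)
    {
        double p = 0.5; /* assumed: half the job's time is CPU-bound */
        double s = 2.0; /* the CPU is now twice as fast              */
        double speedup = 1.0 / ((1.0 - p) + p / s);
        printf("single-job speedup: %.2fx\n", speedup); /* 1.33x */
        return 0;
    }

The multiprogramming point is that the other (1-p) isn't wasted: while
one process waits on i/o, the scheduler runs another, so total system
throughput can still scale with the faster CPU even when no single job
does.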

--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>



--
Posted via a free Usenet account from http://www.teranews.com

From: CBFalconer on
Nick Maclaren wrote:
>
.... snip ...
>
> No, but nor could the Z80 compete on industry-quality functionality
> and reliability. I know quite a few people who used Z80s for that,
> and they never really cut the mustard for mission-critical tasks
> (despite being a factor of 10 or more cheaper).

Nonsense. I had 8080-based communications systems that ran
continuously (no restart) for 2 to 3 years, until brought down by a
mains power failure.

--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>



--
Posted via a free Usenet account from http://www.teranews.com