From: jmfbahciv on
In article <20070329152047.7053da0c.steveo(a)eircom.net>,
Steve O'Hara-Smith <steveo(a)eircom.net> wrote:
>On Thu, 29 Mar 07 12:30:28 GMT
>jmfbahciv(a)aol.com wrote:
>
>> And some of that work was done by JMF, the other half of my
>> username. It took those with TOPS-10 experience to cause VMS
>> to evolve to be an OS that was useful.
>
> And it took Microsoft to perform the opposite of incremental
>development on it to produce the useless POS it has evolved into.

Consider the people who are doing this.

/BAH
From: Andrew Swallow on
jmfbahciv(a)aol.com wrote:
> In article <mL2dndkRKfoabpbbRVnyvQA(a)bt.com>,
> Andrew Swallow <am.swallow(a)btopenworld.com> wrote:
>> jmfbahciv(a)aol.com wrote:
>>> In article <6meeue.322.ln(a)via.reistad.name>,
>>> Morten Reistad <first(a)last.name> wrote:
>> [snip]
>>>> Latest PC press blurbs. Vista only runs around 80 of 150
>>>> identified critical XP applications.
>>> So can we make a reasonable assumption that the load tests
>>> involved all games and not critical apps?
>> Or the games were written in the last 2 years and developed on
>> the beta version of Vista.
>
> Sigh! I don't know what I'm going to do with you.
> I wasn't talking about new games. The gamers have been
> furiously typing and installing their old games and haven't
> complained. Now, either the gamers have become jaded and
> don't play as they used to (I have almost eliminated this
> case) or Vista can play all the old games consistently.

Remember how long Vista was in beta release. It comes down
to this: are many games that are at least four years old still
played by gamers who buy the latest OS?

Andrew Swallow
From: jmfbahciv on
In article <573rgkF2b2dcvU1(a)mid.individual.net>,
Jan Vorbrüggen <jvorbrueggen(a)not-mediasec.de> wrote:
>>>Yes, that's the issue where I see the -10/-20 crowd have a valid point: the
>>>8650 took much too long to arrive.
>> First they had to figure out how to erase the 4 extra bits.
>
>Say what? I've always heard, if you are referring to Jupiter, that that
>project was running way behind schedule and way over budget and significantly
>contributed to the PDP-10 cancellation. Or is that FUD and urban legend?
>
>Anyway, after the 780 which was at the lower end of reasonable,
>performance-wise,

[spluttering emoticon wiping oatmeal off TTY screen]

That is such an understatement, it's already 3/4 of the way to
China.

/BAH
From: Andrew Reilly on
On Fri, 30 Mar 2007 08:46:53 +0000, Nick Maclaren wrote:

>
> In article <pan.2007.03.30.00.09.47.351963(a)areilly.bpc-users.org>,
> Andrew Reilly <andrew-newspost(a)areilly.bpc-users.org> writes:
> |>
> |> > Dunno. I wasn't talking at that level anyway. If DEC had taken
> |> > the decision to produce a new micro-PDP-11, there would have been
> |> > a LOT of such issues to resolve.
> |>
> |> I played with a nice LSI-11 box at Uni. It wasn't new then, but there
> |> were plenty of 68000 and a few Z8000 systems around by that time too (both
> |> of which could reasonably be called -11 clones).
>
> None of those could hold a candle to the PDP-11 for peripheral driving
> of the sort I am referring to. My colleagues tried all of them, and
> had major difficulties getting round their restrictions.

That's an interesting assertion. How so? All three were
close-to-unpipelined 16-bit processors with about eight general purpose
registers (double-ish on the 68k), running at a few MHz, and similar
sorts of OS support (not counting some instruction restart failure that
turned out to be in the 68k), and a very simple, traditional vectored
interrupt scheme. What makes the -11 better? DMA bus-mastering in the
peripherals? Not in the LSI-11 box that I got to use. I can imagine
heroic peripheral designs if you really wanted that sort of thing, but I
reckon that one of the other micros would have done as well with the same
setup.

On the peripheral front, the Zilog Z8530 SCC was (with the exception of a
short (3-byte) receive buffer) a really great device that lived on in Macs
and Sun systems for years. (Some of an HDLC protocol stack in the thing,
from memory, but I don't know how much use that got beyond Appletalk
networking.)

Cheers,

--
Andrew

From: jmfbahciv on
In article <460C8655.91D81170(a)yahoo.com>,
CBFalconer <cbfalconer(a)yahoo.com> wrote:
>jmfbahciv(a)aol.com wrote:
>>
>.... snip ...
>>
>> Neither would matter. Look, if you increase your "CPU speed" by
>> twice, your system will then be constantly waiting on I/O because
>> the CPU got its job done faster. Your system software and usage
>> had been tweaked over the years to accommodate the behaviour of a
>> VAX with its peripherals (this includes memory). Now you replace
>> the CENTRAL processing unit with something that goes twice as fast.
>
>And the system simply switches to another process while waiting for
>i/o. No problem.
>

It is a problem because the monitor has run every job that was
runnable and _all_ are now waiting on I/O to complete. Look.
We saw this. It was part of our business cycle. Systems were
I/O bound so we built a faster I/O. The same jobs were now
CPU bound so we built a faster CPU. The same jobs were now
I/O bound so we built a faster I/O.....
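That oscillation can be sketched with a toy model (all the speeds and
work amounts below are hypothetical, chosen only to show the effect):
elapsed time is set by whichever resource is currently slower, so
doubling the bottleneck resource just hands the bottleneck to the other
side.

```python
# Toy model of the CPU/I-O bottleneck cycle described above.
# All numbers are made up; the point is that elapsed time is
# dominated by whichever resource is currently the slower one.

def elapsed(cpu_work, io_work, cpu_speed, io_speed):
    """Elapsed time for a job whose CPU and I/O phases overlap
    fully: the slower resource sets the pace."""
    return max(cpu_work / cpu_speed, io_work / io_speed)

cpu_speed, io_speed = 1.0, 1.0
for step in range(4):
    t = elapsed(100, 100, cpu_speed, io_speed)
    bottleneck = "I/O" if 100 / io_speed > 100 / cpu_speed else "CPU"
    print(f"cpu x{cpu_speed:g}, io x{io_speed:g}: "
          f"{t:g} units, {bottleneck}-bound")
    # Speed up whichever side is the bottleneck, as in the
    # business cycle: faster I/O, then faster CPU, then ...
    if bottleneck == "CPU":
        cpu_speed *= 2
    else:
        io_speed *= 2
```

Each doubling halves the time spent on the bottleneck resource, but the
same jobs immediately become bound on the other one.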

You know about bottlenecks and roads.

/BAH