From: Nick Maclaren on

In article <pan.2007.03.30.00.09.47.351963(a)areilly.bpc-users.org>,
Andrew Reilly <andrew-newspost(a)areilly.bpc-users.org> writes:
|>
|> > Dunno. I wasn't talking at that level anyway. If DEC had taken
|> > the decision to produce a new micro-PDP-11, there would have been
|> > a LOT of such issues to resolve.
|>
|> I played with a nice LSI-11 box at Uni. It wasn't new then, but there
|> were plenty of 68000 and a few Z8000 systems around by that time too (both
|> of which could reasonably be called -11 clones).

None of those could hold a candle to the PDP-11 for peripheral driving
of the sort I am referring to. My colleagues tried all of them, and
had major difficulties getting round their restrictions.


Regards,
Nick Maclaren.
From: ChrisQuayle on
Nick Maclaren wrote:

> No, but nor could the Z80 compete on industry-quality functionality and
> reliability. I know quite a few people who used Z80s for that, and they
> never really cut the mustard for mission-critical tasks (despite being a
> factor of 10 or more cheaper).
>
>
> Regards,
> Nick Maclaren.

The Z80 was quite neat for its time, but the argument is bogus. What
made the PDP-11 and other DEC products so successful was the end-to-end
systems engineering approach that DEC applied to everything they built.
Compaq PCs were successful for just the same reason, and the same
probably applies to IBM products even now.

The problem with all the early micros is that most of them only provided
half a solution. There was nothing wrong with the parts; it's just that
half of them were missing. You had to go out and find the rest yourself...

Chris
From: jmfbahciv on
In article <y7ROh.210385$5j1.52612(a)bgtnsc04-news.ops.worldnet.att.net>,
Stephen Fuld <S.Fuld(a)PleaseRemove.att.net> wrote:
>jmfbahciv(a)aol.com wrote:
>> In article <571ro8F2bdosvU1(a)mid.individual.net>,
>> Jan Vorbrüggen <jvorbrueggen(a)not-mediasec.de> wrote:
>>>> Only for small problems. What do you do in the cases where a
>>>> reassembly is the way to make the problem go away?
>>> Do a complete SYSGEN?
>>
>> Yes.
>
>
>Was there no alternative between patching the object code and doing a
>complete sysgen?

There were times when a rebuild was necessary. There were other times
when other techniques could be used. The point is that the person doing
the debugging should be able to do all of the above, plus anything else
s/he can think of. The only way to keep all of those options open is to
have machine-readable sources.

> On the system with which I am most familiar (Non-DEC),
>we mostly did partial sysgens where only a small number of modules were
>re-assembled and the system linked. Out of say 400 modules in the OS, a
>typical gen might assemble half a dozen and a large one perhaps a
>hundred. We did full gens (all elements), very rarely.

On a PDP-10 system a complete monitor rebuild would take about 30
minutes; the constraint was that four (I think) modules had to be
built first, and then everything else could be assembled in parallel.
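
(For anyone who wants the shape of that constraint spelled out: below is a
minimal modern sketch in Python of that build ordering -- nothing DEC ever
shipped, and the module names and the build_module routine are invented
purely for illustration.)

from concurrent.futures import ThreadPoolExecutor

# Illustrative module names only; the real monitor sources were MACRO-10 files.
PREREQUISITES = ["COMMON", "COMDEV", "S", "NETPRM"]             # built serially, in order
INDEPENDENT = ["SCHED", "CLOCK1", "UUOCON", "FILIO", "TTYSER"]  # no mutual dependencies

def build_module(name: str) -> None:
    # Stand-in for "assemble this module"; a real build would invoke MACRO and LINK.
    print(f"assembling {name}")

# Serial phase: the handful of modules everything else depends on.
for module in PREREQUISITES:
    build_module(module)

# Parallel phase: the rest can be assembled concurrently.
with ThreadPoolExecutor() as pool:
    list(pool.map(build_module, INDEPENDENT))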

An example where a rebuild would be necessary is if the original
build goofed the MONGEN (that was the questionnaire used to determine
which hardware devices, table lengths, etc. would be on the system
the monitor was supposed to run on).

Another reason would have been if a table or list was too short
and extending it via patching would create more problems. It
would be a more controlled experiment if the list were extended
via a rebuild rather than patching.

PS. Extending fields would be a better reason for a rebuild.

Another reason to do a build would be to incorporate the thousands
of "patches" you had done, to make sure that everything would work as
coded in the sources. That way you can resume your debugging
and testing with a monitor that you know has all fixes set in
ASCII bits. This was one of the most important steps in our
source maintenance procedures.
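
(A rough modern analogue of that last verification step, purely as an
illustration; the patch records and in-memory "image" below are hypothetical,
not Tops-10 tooling. After rebuilding from the corrected sources, you check
that every fix you had applied as a binary patch is already present in the
fresh image, so the patch list can be retired.)

from dataclasses import dataclass

@dataclass
class Patch:
    offset: int    # byte offset that was patched in the running image
    fixed: bytes   # the bytes the patch wrote there

def unfolded_patches(image: bytes, patches: list[Patch]) -> list[Patch]:
    """Return the patches whose fix is NOT yet present in the rebuilt image."""
    return [p for p in patches
            if image[p.offset:p.offset + len(p.fixed)] != p.fixed]

# Invented example: a tiny stand-in image and two recorded patches.
rebuilt = bytes(16)                              # pretend this is the fresh build
patch_log = [Patch(3, b"\x00"), Patch(7, b"\x2a")]
print(unfolded_patches(rebuilt, patch_log))      # the fix at offset 7 is still missing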

/BAH

From: jmfbahciv on
In article <PkVOh.211492$5j1.104576(a)bgtnsc04-news.ops.worldnet.att.net>,
Stephen Fuld <S.Fuld(a)PleaseRemove.att.net> wrote:
>Rich Alderson wrote:
>> Stephen Fuld <S.Fuld(a)PleaseRemove.att.net> writes:
>>
>>> jmfbahciv(a)aol.com wrote:
>>>> In article <571ro8F2bdosvU1(a)mid.individual.net>,
>>>> Jan Vorbrüggen <jvorbrueggen(a)not-mediasec.de> wrote:
>>>>>> Only for small problems. What do you do in the cases where a
>>>>>> reassembly is the way to make the problem go away?
>>>>> Do a complete SYSGEN?
>>>> Yes.
>>>
>>> Was there no alternative between patching the object code and doing a
>>> complete sysgen? On the system with which I am most familiar (Non-DEC),
>>> we mostly did partial sysgens where only a small number of modules were
>>> re-assembled and the system linked. Out of say 400 modules in the OS, a
>>> typical gen might assemble half a dozen and a large one perhaps a
>>> hundred. We did full gens (all elements), very rarely.
>>
>> A quick look at http://pdp-10.trailing-edge.com says that the Tops-20 V7.0
>> sources involve 165 files, not all of which are going to be Macro-20 source
>> (Link-20 control files, batch jobs, etc.); Tops-10 v7.04 similarly involves
>> 181 files.
>>
>> In porting Tops-20 to the XKL Toad-1, as well as implementing changes at
>> Stanford, we frequently re-compiled individual files, or perhaps a half
>> dozen or so for a major change, and re-linked against the previous builds'
>> .REL files. I can't speak to Tops-10 development--my involvement with that
>> OS only began 4 years ago with a new job.
>
>OK, then my question goes back to Jan and Barb. Why was a full sysgen
>required or recommended for the situations you guys were talking about?

I just posted giving (IIRC) four examples.

> Wouldn't partial gens do the job without the need for the large
>resource commitment Jan talked about?

I suppose this depends on what the coder's role is. I can write
better about specific examples than I can give a textbook analysis.
Sorry; I don't know how to express it better.

/BAH

From: david20 on
In article <460c40e5$0$18932$4c368faf(a)roadrunner.com>, Peter Flass <Peter_Flass(a)Yahoo.com> writes:
>Jan Vorbrüggen wrote:
>> AFAIK, even the
>> Alpha is not only being supported but being actively developed.
>
>This must be a change, then. A while ago I think I remember seeing a
>"roadmap" that called for one or two bumps, and then nothing.
>
Plans for EV8 were killed off ages ago, at the time of the "Alphacide".
EV7 was released and had a speed bump years ago (scaled back from the
original plans for the EV79). I think the last speed bump was around mid-2004.

The Alpha end of sales date was supposed to be October 2006.
For various reasons that has now been extended until the end of April 2007.

VMS and Tru64 users have long since learned that published roadmaps are not
worth the paper you would waste printing them on.


David Webb
Security team leader
CCSS
Middlesex University