From: Bill Davidsen on
Peter Flass wrote:
> Tim McCaffrey wrote:
>> In article <hplktn$jac$1(a)news.eternal-september.org>, davidsen(a)tmr.com
>> says...
>>> George Neuner wrote:
>>>> On Sun, 28 Mar 2010 08:16:02 -0500, jmfbahciv <jmfbahciv(a)aol> wrote:
>>>>
>>>>> George Neuner wrote:
>>>>>
>>>>>> Leaving aside why anyone would _want_ to bring back Multics ...
>>>>> To study how to make an OS secure from the beginning project plan
>>>>> to the last edit for shipping?
>>>> Ok, but there are other secure systems to study such as Amoeba and
>>>> EROS. They are more modern than Multics, and in particular, were
>>>> designed to be distributed.
>>>>
>>>> I don't mind anyone studying Multic - there is plenty to learn from it
>>>> (including what not to do) ... but I would be against trying to revive
>>>> it as a working operating system.
>>>>
>>> One of the things which separated MULTICS from most other operating
>>> systems was
>>> the way in which rings were used. Many operating systems use (or
>>> mostly use) only two rings, similarly to the "master mode" and "slave
>>> mode" of 1960's computers. By putting things like libraries and
>>> privileged linked programs (terminology escapes me, it's been ~40
>>> years) in rings, access could be more nuanced.
>>
>> Well, the CDC Cyber 180 series (and NOS/VE) did something similar.
>>
>
> OS/2 uses three: one for the kernel, one for drivers, etc., and the
> third for user programs.

Yes, that's much closer to the MULTICS model, allowing drivers and system support
to run at "more than user" privilege to access devices, raw memory, etc.
From: Anne & Lynn Wheeler on

Bill Davidsen <davidsen(a)tmr.com> writes:
> One of the things which separated MULTICS from most other operating
> systems was the way in which rings were used. Many operating systems
> use (or mostly use) only two rings, similarly to the "master mode" and
> "slave mode" of 1960's computers. By putting things like libraries and
> privileged linked programs (terminology escapes me, it's been ~40
> years) in rings, access could be more nuanced.

370xa started out with access registers and then added program call (&
return).

issue was that the favorite son batch operating system, with its heritage
in the real-memory environment, relied extensively on pointer-passing APIs.
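
as a minimal sketch of what "pointer-passing API" means here (illustrative
C; the io_request/do_io names are invented for the example, not an actual
OS/360 control block or service): the caller builds a parameter block in
its own storage and passes just its address, so the service only works if
it can dereference the caller's storage directly.

#include <stdio.h>

/* hypothetical parameter block -- invented for the illustration,
   not a real OS/360 control block layout */
struct io_request {
    const char *dataset;   /* pointer into the caller's storage       */
    char       *buffer;    /* where the service should put the result */
    unsigned    buflen;
};

/* stand-in for a system service reached by a pointer-passing call:
   it receives only the address of the request block and dereferences
   the caller's pointers directly -- which only works while caller and
   service can address the same storage */
static int do_io(struct io_request *req)
{
    snprintf(req->buffer, req->buflen, "contents of %s", req->dataset);
    return 0;
}

int main(void)
{
    char buf[64];
    struct io_request req = { "MY.DATA.SET", buf, sizeof buf };
    do_io(&req);            /* pass a pointer, not the data */
    printf("%s\n", buf);
    return 0;
}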

In the initial transition to virtual memory (OS/VS2 SVS) it was just a
single large virtual memory (pointer-passing APIs still working).

In the migration to MVS ... each application was given its own address
space ... but an image of the MVS kernel appeared in (8mbyte) half of
each (16mbyte) virtual address space (pointer passing API easily worked
since the MVS kernel code was in the same address space as each application).

The problem was that there were a lot of "subsystems" (semi-privileged
operations) that had resided outside the kernel (called by applications
with the pointer-passing API) but were now in their own virtual address
spaces (i.e. an application would generate a system call and the kernel
would invoke the subsystem in its own virtual address space).

To continue with the pointer-passing API, a "common segment" was created
that also resided in each virtual address space. This started out as
1mbyte of each virtual address space ... but grew ... somewhat
proportional to the number of independent subsystems and the number of
concurrently executing applications. eventually, in the timeframe of
large 3033s, common segments were threatening to exceed five mbytes
(growing to six ... leaving only 2mbytes for applications in each
application virtual address space).
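
spelling out where the "only 2mbytes" figure comes from, using the numbers
above:

    16mbyte address space - 8mbyte kernel image - 6mbyte common area
        = 2mbytes left for the application in each address space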

to start to address the problem ... a subset of the 370xa access registers
was introduced on the 3033, called dual-address space mode. Application would
make a kernel call to invoke a subsystem, and the kernel would swap the home &
alternate virtual address space pointers before invoking the
subsystem. each instance of subsystem execution would have an alternate
address space pointer to its invoking application ... and could directly
address the invoking application's virtual address space.

Now 370xa introduced access registers with a more generalized
implementation of dual-address space, as well as 31bit virtual addressing
(31bit virtual addressing would have alleviated the common segment's
threat to completely take over the remaining application area in each
virtual address space).

program call (& return) ... an application instruction that references a
kernel hardware table ... that defines the available "subsystems" and the
rules for changing around the address space pointers. now applications
can directly invoke a subsystem in a different address space, w/o the
overhead of kernel call processing (much like calling a library routine
that resides in the same address space).
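
very rough software model of the idea (purely illustrative C; the table
layout and names are invented, not the actual hardware interface ... see
the PofO links below for the real definition): the kernel publishes a
table of permitted subsystem entries, and a "program call" by index swaps
the address-space pointers and branches to the subsystem, with the
caller's space left addressable as a secondary space so the
pointer-passing API keeps working.

#include <stdio.h>

/* toy model of an address space and of the kernel-managed entry table
   consulted by PROGRAM CALL -- invented for the illustration */

struct address_space { const char *name; };

typedef void (*entry_point)(struct address_space *primary,
                            struct address_space *secondary,
                            void *parm);          /* pointer-passing parm */

struct entry_table_slot {
    struct address_space *target_space;  /* subsystem's own address space */
    entry_point           target;        /* where control is given        */
};

/* "hardware" state: which spaces are currently addressable */
static struct address_space *primary_space;
static struct address_space *secondary_space;

static void program_call(struct entry_table_slot *table, int pc_number,
                         struct address_space *caller, void *parm)
{
    struct entry_table_slot *slot = &table[pc_number];
    secondary_space = caller;              /* caller stays addressable     */
    primary_space   = slot->target_space;  /* switch to subsystem's space  */
    slot->target(primary_space, secondary_space, parm);
    primary_space   = caller;              /* "program return"             */
    secondary_space = NULL;
}

/* a pretend subsystem that reaches back into the caller's space through
   the secondary-space pointer, i.e. pointer-passing keeps working */
static void subsystem_service(struct address_space *pri,
                              struct address_space *sec, void *parm)
{
    printf("running in %s, reading parms from %s: %s\n",
           pri->name, sec->name, (const char *)parm);
}

int main(void)
{
    struct address_space app = { "application AS" };
    struct address_space sub = { "subsystem AS" };
    struct entry_table_slot table[] = { { &sub, subsystem_service } };

    primary_space = &app;
    program_call(table, 0, &app, "request built in the caller's storage");
    return 0;
}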

description of program call
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/dz9zr003/10.34?DT=20040504121320

program return
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/dz9zr003/10.35?DT=20040504121320

discussion of access registers ("current instruction space and a maximum
of 15 other spaces")
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/dz9zr003/2.3.6?DT=20040504121320

discussion of address spaces
http://publibz.boulder.ibm.com/cgi-bin/bookmgr_OS390/BOOKS/dz9zr003/3.8?DT=20040504121320


recent posts mentioning dual-address space &/or access registers:
http://www.garlic.com/~lynn/2010c.html#41 Happy DEC-10 Day
http://www.garlic.com/~lynn/2010d.html#81 LPARs: More or Less?
http://www.garlic.com/~lynn/2010e.html#75 LPARs: More or Less?
http://www.garlic.com/~lynn/2010e.html#76 LPARs: More or Less?

--
42yrs virtualization experience (since Jan68), online at home since Mar1970
From: nmm1 on
In article <hpnbvo$kvb$1(a)news.eternal-september.org>,
Bill Davidsen <davidsen(a)tmr.com> wrote:
>Peter Flass wrote:
>>
>> OS/2 uses three: one for the kernel, one for drivers, etc., and the
>> third for user programs.
>
>Yes, that's much closer to the MULTICS model, allowing drivers and system support
>to run at "more than user" privilege to access devices, raw memory, etc.

Typical operating system centricity :-(

A massive gain in RAS could be obtained by allowing multiple 'rings'
of protection at the application level. It would enable reliable
run-time systems, debuggers and run-time diagnostics, and increase
the number of bugs that were actually located rather than simply
being hacked around until they no longer showed up.

This would make an INCREDIBLE difference to the RAS of parallel
support and the hairy asynchronous 'libraries', such as windowing
systems. At present, it's infeasible even to report bugs in most
of those, because they aren't repeatable and can't be proved not
to be user errors. But, if the component were in a protected
environment, a dump showing that privilege level would be clear
evidence of a bug.

Yes, that would mean the vendors of such things improving their
standards, but at present they have no incentive to ....
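
To make that concrete with what is available today: about the closest a
user program can get is to hand-roll the protection itself, e.g. keeping a
component's state write-protected except inside the component's own entry
points, so a stray store from the application faults at the guilty
instruction. A crude sketch, assuming POSIX mmap/mprotect on a Linux-ish
system (and emphatically not the hardware ring support being argued for):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static char  *state;        /* the "protected component's" private data */
static size_t state_len;

static void component_init(void)
{
    state_len = (size_t)sysconf(_SC_PAGESIZE);
    state = mmap(NULL, state_len, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (state == MAP_FAILED)
        _exit(1);
    strcpy(state, "consistent");
    mprotect(state, state_len, PROT_READ);      /* drop back to read-only */
}

static void component_entry(const char *update)
{
    mprotect(state, state_len, PROT_READ | PROT_WRITE);  /* "raise ring" */
    strcpy(state, update);
    mprotect(state, state_len, PROT_READ);               /* "lower ring" */
}

int main(void)
{
    component_init();
    component_entry("still consistent");
    printf("%s\n", state);      /* reading is always fine */
    /* state[0] = '!';   <- a stray write here would SIGSEGV immediately,
                            pinpointing the application as the culprit   */
    return 0;
}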


Regards,
Nick Maclaren.
From: Michael Wojcik on
nmm1(a)cam.ac.uk wrote:
>
> A massive gain in RAS could be obtained by allowing multiple 'rings'
> of protection at the application level. It would enable reliable
> run-time systems, debuggers and run-time diagnostics, and increase
> the number of bugs that were actually located rather than simply
> being hacked around until they no longer showed up.

Yeah. That's one of the nice things about a capability architecture,
like the AS/400 / iSeries / System i family. Much better protection
granularity means much better (more aggressive and precise) error
detection.

On the '400, an off-by-one logic mistake overruns a buffer and you
find out right away, with a check message that includes a stack trace
sent to your message queue; you could terminate the job or break into
the debugger to investigate further.
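
For contrast, on commodity systems you only get that kind of immediate
diagnosis by opting into a checking tool. A tiny example (plain C, nothing
AS/400-specific): in a normal run the off-by-one below usually just
corrupts the heap and fails some time later, but under Valgrind's memcheck
the bad store is reported immediately, with a stack trace -- roughly the
out-of-the-box experience described above, except opt-in and much slower.

/* run as: valgrind ./a.out */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int *counts = malloc(8 * sizeof *counts);
    if (counts == NULL)
        return 1;

    for (int i = 0; i <= 8; i++)    /* <= should be < : one write too far */
        counts[i] = i;

    printf("%d\n", counts[7]);
    free(counts);
    return 0;
}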

--
Michael Wojcik
Micro Focus
Rhetoric & Writing, Michigan State University
From: robertwessel2 on
On Apr 9, 9:40 am, Anne & Lynn Wheeler <l...(a)garlic.com> wrote:
> Bill Davidsen <david...(a)tmr.com> writes:
> > One of the things which separated MULTICS from most other operating
> > systems was the way in which rings were used. Many operating systems
> > use (or mostly use) only two rings, similarly to the "master mode" and
> > "slave mode" of 1960's computers. By putting things like libraries and
> > privileged linked programs (terminology escapes me, it's been ~40
> > years) in rings, access could be more nuanced.
>
> 370xa started out with access registers and then added program call (&
> return).
>
> [...]
>
> to start to address the problem ... a subset of 370xa access registers
> were introduced in 3033 called dual-address space mode. Application would
> make kernel call to invoke subsystem, kernel would swap the home &
> alternate virtual address space pointers before invoking the
> subsystem. each instance of subsystem execution would have alternate
> address space pointer to its invoking application ... and could directly
> address the invoking applications virtual address space.
>
> Now 370xa introduced access registers with more generalized
> implementation of dual-address space as well as 31bit virtual addressing
> (31bit virtual addressing would have alleviated the common segment
> attempting to completely take-over remaining application area in each
> virtual address space).
>
> program call (& return) ... application instruction that references a
> kernel hardware table ... that defines available "subsystems" and the
> rules for changing around the address space pointers. now applications
> can directly invoke a subsystem in different address space, w/o the
> overhead of kernel call processing (much like a library routine call
> that resides in the same address space).


Hmmm... "Modern" access registers didn't show up until ESA in about
1990. DAS was introduced in S/370, no later than 1980 (actually I'm
not sure exactly when DAS capable hardware shipped, but MVS/SP1, which
was the first release to support DAS, shipped in 1980). XA happened
in 1983. Of course I don't know for sure, but it seems unlikely that
access registers were planned that far before their introduction.
More likely they were an upwards compatible way to extend the DAS
support.

And Program Call / Program Transfer were introduced with DAS. XA
didn't really do much to DAS and PC/PT, other than fairly obvious
extensions to 31 bit (and a bit of cleanup - IIRC, some stuff like
which address space instructions were fetched from in secondary space
mode got nailed down in XA). XA was mostly about 31 bit mode and a
total revamp of the I/O subsystem. ESA did substantially enhance DAS
(with AR support), and added a number of extensions to PC/PT (and
those have continued as the architecture has evolved).