From: Anne & Lynn Wheeler on
Peter Flass <Peter_Flass(a)Yahoo.com> writes:
> Sorry, I missed this in my last post, why not just page-map the
> executable and tag the pages. You could do the relocation within a
> page the first time it was referenced?

smop (a simple matter of programming)

some amount of stuff cms inherited from os/360, which was a real
storage paradigm and did its relocation up-front ... before turning
execution loose.
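
the lazy scheme in the quoted question ... relocating a page's address
constants the first time it is touched ... might be sketched something
like the following. the page/fixup structures and names are invented
for illustration, they aren't cms or cp data structures.

/* sketch of per-page relocation applied at first reference: keep the
 * relocation (fixup) entries for each page around, and apply them the
 * first time the page is faulted in rather than at load time.
 * structures and the delta handling are illustrative only. */
#include <stdbool.h>
#include <stddef.h>

struct fixup { size_t offset; };        /* where an adcon lives in the page */

struct exec_page {
    unsigned char *data;                /* the 4k page just brought in */
    const struct fixup *fix;            /* relocation entries for this page */
    size_t nfix;
    bool relocated;                     /* the "tag" on the page */
};

void fixup_on_first_fault(struct exec_page *pg, long load_delta)
{
    if (pg->relocated)
        return;                         /* already fixed up on a prior fault */
    for (size_t i = 0; i < pg->nfix; i++) {
        unsigned int *adcon = (unsigned int *)(pg->data + pg->fix[i].offset);
        *adcon += (unsigned int)load_delta;   /* slide the address constant */
    }
    pg->relocated = true;
}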

in addition to doing all the cp67 kernel changes and work on os/360
batch systems
http://www.garlic.com/~lynn/94.html#2 Schedulers
http://www.garlic.com/~lynn/94.html#18 CP/67 & OS MFT14

and doing the paged-mapped filesystem for cms (along with the
corresponding cp kernel changes) and restructuring pieces of cms
executable code to run in protected shared storage ... i would
have had to rewrite the whole link/loader process (i.e. it wasn't
theoretical system design ... i had to do all the design and changes
myself).

i had started doing some games with how to fetch pages from the paged
mapped filesystem. one of the performance issues with the tss page-mapped
filesystem was that it could do the page mapping ... and then individually
page fault each 4k executable page. for a "large" executable ... say
half a megabyte (128 pages), that could mean 128 4k page faults ... done
serially ... with the serialized page fault latency for each operation
... say an avg. of 30 milliseconds service time ... plus any queuing delay
because of other activity ... say a four second startup delay, plus
maybe two to three times that if there was any contention.

native os/360 and cms executable load would "read" up to 64k bytes at
a time in a single operation (assuming contiguous allocation) ...
doing each 64k operation in possibly 40 milliseconds compared to the
30 milliseconds for a single 4k operation (i.e. eight 64k reads, say a
320 millisecond startup delay instead of four seconds).
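
a back-of-the-envelope sketch of that arithmetic, using the rough
figures above (30 milliseconds per serialized 4k fault, 40 milliseconds
per 64k read, half-megabyte executable):

/* compare startup delay: serial 4k page faults vs. 64k block reads.
 * the service times are the rough figures quoted above, not measured. */
#include <stdio.h>

int main(void)
{
    const double exec_bytes = 512.0 * 1024;      /* "large" half-megabyte executable */
    const double fault_ms   = 30.0;              /* avg. service per 4k page fault */
    const double block_ms   = 40.0;              /* avg. service per 64k read */

    double faults = exec_bytes / (4.0 * 1024);   /* 128 serial page faults */
    double blocks = exec_bytes / (64.0 * 1024);  /* 8 serial 64k reads */

    printf("serial 4k faults: %.0f x %.0fms = %.2f seconds\n",
           faults, fault_ms, faults * fault_ms / 1000.0);
    printf("64k block reads : %.0f x %.0fms = %.2f seconds\n",
           blocks, block_ms, blocks * block_ms / 1000.0);
    return 0;
}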

so one of my challenges doing the paged-mapped support for the cms
filesystem was to retain the benefits of the larger block transfers
(and not fall into one of the tss pits of purely waiting on a single
page fault at a time).

one of the problems then was not locking out other users ... i.e. being
able to dynamically adjust the size of block transfers to the contention
on the system. later on i was able to demonstrate full-cylinder
transfers (150 pages) on 3380 drives in a single block operation. the
downside was that it locked up the resource for the duration of the
transfer ... potentially impacting other applications.
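
a minimal sketch of that kind of policy ... picking the paging transfer
size from the current contention on the device. the queue-depth metric
and the thresholds are invented for illustration; they aren't the
measures the actual cp code used.

/* choose how many 4k pages to move in one block transfer based on how
 * busy the device currently is.  numbers are illustrative only; 150
 * pages is a full 3380 cylinder, 10 pages is a single track. */
static unsigned pages_per_transfer(unsigned queue_depth)
{
    if (queue_depth == 0)
        return 150;     /* idle device: full-cylinder transfer */
    if (queue_depth < 4)
        return 30;      /* light contention: a few tracks */
    return 10;          /* heavy contention: single-track transfer */
}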

misc. past posts about paged mapped filesystem changes
http://www.garlic.com/~lynn/subtopic.html#mmap

and misc. past posts about supporting shared protected executables
that the same executable could appear at different virtual addresses
in different virtual address spaces
http://www.garlic.com/~lynn/subtopic.html#adcon

other posts in this thread:
http://www.garlic.com/~lynn/2006s.html#22 Why these original FORTRAN quirks?
http://www.garlic.com/~lynn/2006s.html#26 Why these original FORTRAN quirks?
http://www.garlic.com/~lynn/2006s.html#27 Why these original FORTRAN quirks?
http://www.garlic.com/~lynn/2006s.html#29 Why these original FORTRAN quirks?
http://www.garlic.com/~lynn/2006s.html#39 Why these original FORTRAN quirks?
http://www.garlic.com/~lynn/2006s.html#46 Why these original FORTRAN quirks?
http://www.garlic.com/~lynn/2006s.html#49 Why these original FORTRAN quirks?
http://www.garlic.com/~lynn/2006s.html#54 Why these original FORTRAN quirks?
http://www.garlic.com/~lynn/2006t.html#10 Why these original FORTRAN quirks?
http://www.garlic.com/~lynn/2006t.html#28 Why these original FORTRAN quirks?
http://www.garlic.com/~lynn/2006t.html#30 Why these original FORTRAN quirks?
http://www.garlic.com/~lynn/2006t.html#39 Why these original FORTRAN quirks?
http://www.garlic.com/~lynn/2006u.html#1 Why these original FORTRAN quirks?
http://www.garlic.com/~lynn/2006u.html#2 Why these original FORTRAN quirks?
http://www.garlic.com/~lynn/2006u.html#19 Why so little parallelism?
http://www.garlic.com/~lynn/2006u.html#30 Why so little parallelism?
http://www.garlic.com/~lynn/2006u.html#54 Why these original FORTRAN quirks?
http://www.garlic.com/~lynn/2006u.html#60 Why these original FORTRAN quirks?
http://www.garlic.com/~lynn/2006u.html#61 Why these original FORTRAN quirks?
From: CBFalconer on
Brian Inglis wrote:
> blmblm(a)myrealbox.com <blmblm(a)myrealbox.com> wrote:
>> CBFalconer <cbfalconer(a)maineline.net> wrote:
>>
.... snip ...
>>>
>>> IIRC it is sufficient to describe all non-shareable resources by
>>> an integer, and insist on accessing those in strictly increasing
>>> numerical descriptions, while releasing them in the inverse order.
>>
>> IIRC you are right that if you give every resource a unique number
>> and access them in strictly increasing order deadlock is impossible.
>> But I don't think there are any restrictions on the order in which
>> things are released. (Now someone can correct me if I'm wrong. :-)? )
>
> Ideally, you want to reserve and release all resources as a set.

Not so. For example, you want to access a device and load
something, then process it, then dump it out to another device.
You don't need to tie up the output device while loading or
processing, nor the input device while processing or dumping.
Multiple processes can do the same operation, using the same
devices, provided you avoid deadly embrace. The sequential
reservation system does this. With the reserve/release all
paradigm the processes have to run strictly sequentially.
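
A minimal sketch of the numbered-resource discipline being discussed,
with POSIX mutexes standing in for the device reservations (the
resource names are invented): every non-shareable resource gets a
unique integer, a process always acquires in strictly increasing order,
and the release order doesn't matter for deadlock avoidance.

/* ordered acquisition following the load/process/dump example above.
 * the input device is released as soon as loading is done, and the
 * output device isn't touched until it is actually needed. */
#include <pthread.h>

enum { TAPE_IN = 1, DISK_WORK = 2, TAPE_OUT = 3, NRES = 4 };

static pthread_mutex_t res[NRES] = {
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER
};

static void copy_job(void)
{
    pthread_mutex_lock(&res[TAPE_IN]);      /* lowest number first */
    pthread_mutex_lock(&res[DISK_WORK]);    /* ... then strictly increasing */
    /* load the input onto the work disk, then let the input device go */
    pthread_mutex_unlock(&res[TAPE_IN]);

    /* process on the work disk ... */

    pthread_mutex_lock(&res[TAPE_OUT]);     /* still increasing: 3 > 2 */
    /* dump the results */
    pthread_mutex_unlock(&res[TAPE_OUT]);
    pthread_mutex_unlock(&res[DISK_WORK]);
}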

--
Chuck F (cbfalconer at maineline dot net)
Available for consulting/temporary embedded and systems.
<http://cbfalconer.home.att.net>


From: Anne & Lynn Wheeler on

re:
http://www.garlic.com/~lynn/2006v.html#0 Why these original FORTRAN quirks?

for total other link/loader drift ... playing with the (bps) loader
table from the initialization of the cp/67 kernel.

the additional changes mentioned in the email below had been done by the
time i had implemented "dumprx" (a vmfdump replacement)
http://www.garlic.com/~lynn/subtopic.html#dumprx

From: wheeler
Date: 12/16/77 09:30:25

I made a number of simple changes to the savecp module in cp/67 and
then redefined the interface to obtain pageable cp core (i.e. a
diagnose that would setup pointers similar to the way dmkvmi is done
at ipl time). When the loader is done, it branches to the last csect
entry loaded or the entry point specified by the 'ldt' card. At that
time it passes a pointer to the start of the loader tables and the
count of table entries. I modified savecp to move the table to the end
of pageable cp core and save it along with the rest of cp
core. Modifications not yet done: 1) when the cp auto disk dump is
allocated, move the loader table immediately to the dump area so that
it will be available when cp abends. 2) modify vmfdump to recognize
the loader table pages in the dump.
----

When the loader table was moved to CP pageable core, it was sorted by
core address, because the loader table is otherwise in the same sequence
as it appears on the load map (nearly random within modules). Also, in
the loader table the high-order byte of the address field contains the
'esd' id, i.e. the id number that appears on the external symbol
dictionary page in a listing.

.... snip ...
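
a rough sketch (in c, as a modern stand-in for the original assembler)
of the loader-table handling described above ... each entry carries the
external symbol name and an address word with the 'esd' id in the
high-order byte, and the table gets sorted into core-address order
before being saved with the pageable kernel image. the field names are
invented for illustration.

#include <stdlib.h>

/* one loader-table entry: 8-character external symbol name plus a word
 * holding the esd id in the high-order byte and a 24-bit core address */
struct ldrtbl_entry {
    char         name[8];      /* external symbol, blank padded */
    unsigned int word;         /* esd id in the high byte, address below */
};

#define LDR_ADDR(e)   ((e)->word & 0x00FFFFFFu)   /* 24-bit core address */
#define LDR_ESDID(e)  ((e)->word >> 24)           /* external symbol dictionary id */

static int by_core_addr(const void *a, const void *b)
{
    unsigned int aa = LDR_ADDR((const struct ldrtbl_entry *)a);
    unsigned int bb = LDR_ADDR((const struct ldrtbl_entry *)b);
    return (aa > bb) - (aa < bb);
}

/* the loader leaves the table in load-map order (nearly random within
 * modules), so sort it by core address before saving it with the
 * pageable kernel image */
static void sort_loader_table(struct ldrtbl_entry *tbl, size_t n)
{
    qsort(tbl, n, sizeof *tbl, by_core_addr);
}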

and previous posts mentioning changes that I did as an undergraduate to
cp67 to make portions of the kernel "pageable" (which weren't released in
the product until vm370) ... which then enabled adding the loader table
entries to the end of the pageable kernel image
http://www.garlic.com/~lynn/94.html#11 REXX
http://www.garlic.com/~lynn/2000b.html#32 20th March 2000
http://www.garlic.com/~lynn/2001b.html#23 Linux IA-64 interrupts [was Re: Itanium benchmarks ...]
http://www.garlic.com/~lynn/2001l.html#32 mainframe question
http://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
http://www.garlic.com/~lynn/2002i.html#9 More about SUN and CICS
http://www.garlic.com/~lynn/2002n.html#71 bps loader, was PLX
http://www.garlic.com/~lynn/2002p.html#56 cost of crossing kernel/user boundary
http://www.garlic.com/~lynn/2002p.html#64 cost of crossing kernel/user boundary
http://www.garlic.com/~lynn/2003f.html#3 Alpha performance, why?
http://www.garlic.com/~lynn/2003f.html#12 Alpha performance, why?
http://www.garlic.com/~lynn/2003f.html#14 Alpha performance, why?
http://www.garlic.com/~lynn/2003f.html#20 Alpha performance, why?
http://www.garlic.com/~lynn/2003f.html#23 Alpha performance, why?
http://www.garlic.com/~lynn/2003f.html#26 Alpha performance, why?
http://www.garlic.com/~lynn/2003f.html#30 Alpha performance, why?
http://www.garlic.com/~lynn/2003n.html#45 hung/zombie users ... long boring, wandering story
http://www.garlic.com/~lynn/2004b.html#26 determining memory size
http://www.garlic.com/~lynn/2004f.html#46 Finites State Machine (OT?)
http://www.garlic.com/~lynn/2004g.html#13 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2004g.html#45 command line switches [Re: [REALLY OT!] Overuse of symbolic constants]
http://www.garlic.com/~lynn/2004o.html#9 Integer types for 128-bit addressing
http://www.garlic.com/~lynn/2005f.html#10 Where should the type information be: in tags and descriptors
http://www.garlic.com/~lynn/2005f.html#16 Where should the type information be: in tags and descriptors
http://www.garlic.com/~lynn/2005f.html#47 Moving assembler programs above the line
http://www.garlic.com/~lynn/2005h.html#10 Exceptions at basic block boundaries
http://www.garlic.com/~lynn/2005n.html#23 Code density and performance?
http://www.garlic.com/~lynn/2006.html#35 Charging Time
http://www.garlic.com/~lynn/2006.html#40 All Good Things
http://www.garlic.com/~lynn/2006i.html#36 virtual memory
http://www.garlic.com/~lynn/2006j.html#5 virtual memory
http://www.garlic.com/~lynn/2006n.html#11 Not Your Dad's Mainframe: Little Iron
http://www.garlic.com/~lynn/2006n.html#49 Not Your Dad's Mainframe: Little Iron
From: Brian Inglis on
On Thu, 23 Nov 2006 10:58:50 -0500 in alt.folklore.computers,
CBFalconer <cbfalconer(a)yahoo.com> wrote:

>Brian Inglis wrote:
>> blmblm(a)myrealbox.com <blmblm(a)myrealbox.com> wrote:
>>> CBFalconer <cbfalconer(a)maineline.net> wrote:
>>>
>... snip ...
>>>>
>>>> IIRC it is sufficient to describe all non-shareable resources by
>>>> an integer, and insist on accessing those in strictly increasing
>>>> numerical descriptions, while releasing them in the inverse order.
>>>
>>> IIRC you are right that if you give every resource a unique number
>>> and access them in strictly increasing order deadlock is impossible.
>>> But I don't think there are any restrictions on the order in which
>>> things are released. (Now someone can correct me if I'm wrong. :-)? )
>>
>> Ideally, you want to reserve and release all resources as a set.
>
>Not so. For example, you want to access a device and load
>something, then process it, then dump it out to another device.
>You don't need to tie up the output device while loading or
>processing, nor the input device while processing or dumping.
>Multiple processes can do the same operation, using the same
>devices, provided you avoid deadly embrace. The sequential
>reservation system does this. With the reserve/release all
>paradigm the processes have to run strictly sequentially.

I did state "ideally".
Without the reserve-all/release-all strategy there is no guarantee the
process will start or complete its work if the input or output device
does not become available before the system times out the process.
The mainframe batch OS strategy is not to allow the job to start until
all required resources are available; each resource may then be
released as soon as it is no longer required (at end of input or
output in your example), so no execution time is wasted.
Just another way of looking at the world: better or worse, depending on
the circumstances.
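
A minimal sketch of that batch strategy, again with POSIX mutexes
standing in for device allocation. The trylock-and-back-off loop is an
illustrative stand-in for a scheduler that simply keeps the job queued
until everything it needs is available.

/* the job is not started until every required resource can be had; once
 * it is running, each resource is unlocked as soon as it is no longer
 * needed, so other jobs can pick it up without any deadlock risk. */
#include <pthread.h>
#include <stdbool.h>

static bool acquire_all(pthread_mutex_t **res, int n)
{
    for (int i = 0; i < n; i++) {
        if (pthread_mutex_trylock(res[i]) != 0) {
            while (--i >= 0)                  /* couldn't get them all: */
                pthread_mutex_unlock(res[i]); /* back off, job stays queued */
            return false;
        }
    }
    return true;    /* job may start; release each resource when done with it */
}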

--
Thanks. Take care, Brian Inglis Calgary, Alberta, Canada

Brian.Inglis(a)CSi.com (Brian[dot]Inglis{at}SystematicSW[dot]ab[dot]ca)
fake address use address above to reply
From: jmfbahciv on
In article <ek1oqs$atl$1(a)reader2.panix.com>,
pa(a)see.signature.invalid (Pierre Asselin) wrote:
>In comp.lang.fortran jmfbahciv(a)aol.com wrote:
>> In article <ejskqj$864$2(a)reader2.panix.com>,
>> pa(a)see.signature.invalid (Pierre Asselin) wrote:
>> >
>> >[ ... ] If you want to relocate shared code you have to do it
>> >copy-on-write and then it's no longer shared, is it.
>
>> No,no,no,no. I think you and I have different meanings of relocate.
>
>> If the monitor relocates a segment of code, the physical address
>> changes are invisible to the app's eye.
>
>But I'm talking about *virtual* addresses. If shared code is not
>position independent it has to be mapped at the same virtual address
>by everyone,

No. Only the monitor has to do this mapping. Apps can be ignorant
of where the code is physically set as long as the app doesn't
commit the cardinal sin of trying to write to it.

> or else it has to be relocated --at the cost of private
>copies of any page with a relocation, i.e. no longer shared.

That may be true with IBM-flavored memory management, I don't
know. It is not true of the timesharing OSes in my corner
of the biz. You might try reading some of JMF's code to see
how he handled multiple sharable high segments, extended
addressing, and virtual memory.
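
A sketch of the two senses of "relocate" that seem to be in play here,
under invented names. Moving a page to a different physical frame only
touches the page table, so the shared page itself is unchanged and the
app never notices. Remapping non-position-independent code at a
different virtual address means patching the embedded address
constants, which dirties the page and forces a private (copy-on-write)
copy.

/* illustrative only: the page-table layout (4k pages, flags in the low
 * 12 bits) and the single embedded adcon are made up for the example. */

struct shared_page {
    unsigned int adcon;    /* absolute virtual address baked into the code */
    /* ... instructions that load and branch through adcon ... */
};

/* physical relocation: update the page-table entry to point at a new
 * frame; the page contents (and every sharer's view) are unchanged */
void move_frame(unsigned long *pte, unsigned long new_frame)
{
    *pte = (*pte & 0xFFFUL) | (new_frame << 12);   /* same virtual address */
}

/* virtual relocation of non-position-independent code: the page has to
 * be patched, so an address space mapping it at a different virtual
 * address ends up with its own private copy */
void rebase_page(struct shared_page *private_copy, long delta)
{
    private_copy->adcon += (unsigned int)delta;
}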

I can't think of a single good place to start that will lead
you to all aspects of how an OS manages memory on behalf
of the user and itself. For some strange reason, the monitor
module that keeps shooting into my head is VMSER.MAC, but
I don't think this is the best place to start reading code.

Look, if an app ties itself to a physical location on a
single physical machine, it is stuck there forever until CALL
EXIT. This is not a Good Thing for any application that
needs to run without error. It certainly would never work
for any application that needs to rely on comm. If anything
should happen to any hardware that this app uses as bit paths,
the OS should be able to "move" that app and its current job
status to any other available gear. If you have hardwired
your app to a physical location in memory, it can't be
moved to another system or another set of gear that isn't broken.

Look up loosely-coupled systems, striping, SMP, etc.

All of these could handle a hardware error without forcing
the app to restart.

/BAH