From: Andy "Krazy" Glew on
Mayan Moudgill wrote:
> Brett Davis wrote:
>
>> As for any idea of using no MMU and a completely shared memory space,
>> like a graphics chip: that is insane. Having a thousand other
>> processes running broken code and scribbling all over my data and
>> code leads to a design that will never work in the real world. It's a
>> house of cards in a room full of angry two-year-olds.
>>
>
> Umm... MMU != memory protection. Various forms of base+bound protection
> could be implemented that would give you protection without needing an MMU.

Years ago, I overheard one of the CUDA architects say "Nvidia will never
have virtual memory".

For some time, I took that to mean that Nvidia did not do page based
address translation.

Eventually I learned that Nvidia does do page based address translation.
Classic TLBs. However, no page faults.

While I agree that page faults to disk may not need to be supported, I
suspect that page fault tricks like COW (Copy On Write) are so
ubiquitous that they must be supported to have a reasonable chance of
running modern software well.
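(A toy sketch, not from the thread and certainly not Nvidia's mechanism: the COW trick amounts to sharing pages read-only after a fork and copying a page only when someone writes to it, driven by the resulting protection fault. All names here are illustrative.)

```python
# Toy model of copy-on-write (COW). A write to a shared, read-only
# page triggers a "fault" that copies the page privately.

class Page:
    def __init__(self, data):
        self.data = bytearray(data)
        self.refcount = 1

class AddressSpace:
    def __init__(self):
        self.table = {}       # virtual page number -> Page
        self.writable = {}    # virtual page number -> bool

    def map(self, vpn, page, writable=True):
        self.table[vpn] = page
        self.writable[vpn] = writable

    def fork(self):
        """Share every page read-only with the child, fork()-style."""
        child = AddressSpace()
        for vpn, page in self.table.items():
            page.refcount += 1
            self.writable[vpn] = False    # parent loses write access too
            child.map(vpn, page, writable=False)
        return child

    def write(self, vpn, offset, value):
        page = self.table[vpn]
        if not self.writable[vpn]:
            # the "page fault": copy the page, drop the shared reference
            page.refcount -= 1
            page = Page(page.data)
            self.map(vpn, page, writable=True)
        page.data[offset] = value

parent = AddressSpace()
parent.map(0, Page(b"hello"))
child = parent.fork()
child.write(0, 0, ord("H"))
print(bytes(parent.table[0].data))   # b'hello'  (parent unaffected)
print(bytes(child.table[0].data))    # b'Hello'
```

The point of the exchange above is that a GPU with TLBs but no fault handling cannot defer the copy this way; it would have to copy eagerly at fork time.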
From: Mayan Moudgill on
Andy "Krazy" Glew wrote:

> While I agree that page faults to disk may not need to be supported, I
> suspect that page fault tricks like COW (Copy On Write) are so
> ubiquitous that they must be supported to have a reasonable chance of
> running modern software well.

There are several OSes (including, IIRC, HP-UX) which do not permit
multiple virtual addresses to point to the same real address. I'm
guessing that they've managed to work around the CoW trick somehow.
From: ChrisQ on
Brett Davis wrote:
> In article <il1hr6-hck.ln1(a)laptop.reistad.name>,
> Morten Reistad <first(a)last.name> wrote:
>
>> In article <ggtgp-4FCC6C.16102725102009(a)netnews.asp.att.net>,
>> Brett Davis <ggtgp(a)yahoo.com> wrote:
>>> The future of CPU based computing, mini clusters.
>
>>>>> Do you need a main CPU if your GPU has 400 processors?
>>> So you design your hardware around 16 CPU clusters, and your OS, and
>>> your apps around the same paradigm. If you do it right, over time if the
>>> sweet spot moves to 8 CPUs or 32 CPUs, the same code will still run. You
>>> gave the primary process a cluster, it does not need to know how many
>>> CPUs, or how much cache, or what the clock speed was.
>>>
>>> The huge benefit is that you only need one MMU/L1/L2 per cluster. The
>>> MMU is a huge piece of die real estate (and heat), as are the L1 and L2.
>> But you still get process isolation, right?
>
> I am fairly indifferent about process isolation inside a cluster.
> I figure that generally you are running the same code on 1000 items.
> So a programmer gets a cluster sand box that is all his property.
> The OS would wait for all threads to finish before resetting the sandbox
> and giving the cluster to another process group.

This was done in the '70s. Motorola built a CMOS 1-bit microprocessor in
a 16-pin DIP package for embedded industrial control work. A friend at
IBM told me that someone had built a computer using 1000 of these
devices, but I have no other details. The part number was MC14500, and
I still have the manual somewhere.

Maybe it's all been done before in hardware terms, but the software
issues still remain to be addressed and are the main stumbling block to
progress...

Regards,

Chris


From: Chris Gray on
Mayan Moudgill <mayan(a)bestweb.net> writes:

> There are several OSes (including, IIRC, HP-UX) which do not permit
> multiple virtual addresses to point to the same real address. I'm
> guessing that they've managed to work around the CoW trick somehow.

The restriction may be in the MMU. My memories of this are pretty vague,
but wasn't it (and the one in Power?) a "reverse lookup", which actually
mapped physical pages to virtual pages, instead of the other way around?
There could be only one such entry for a physical page. So, as Mayan
carefully says, you can't have multiple virtual addresses associated
with one physical address. However, that doesn't stop multiple address
spaces from having that physical page in them - it just must be at the
same virtual address in all of them (and perhaps the same modes).
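(A toy sketch of the structure Chris describes, not any particular vendor's MMU: an inverted page table keeps one entry per physical frame, so a frame can appear in many address spaces but only at one virtual address. Names are illustrative.)

```python
# Toy inverted page table: one mapping per physical frame. Multiple
# address spaces may share a frame, but only at the same virtual
# address -- which is exactly what breaks arbitrary virtual aliasing.

class InvertedPageTable:
    def __init__(self, nframes):
        self.frame_to_vpn = [None] * nframes    # frame -> virtual page number
        self.spaces = {}                        # (asid, vpn) -> frame

    def map(self, asid, vpn, frame):
        current = self.frame_to_vpn[frame]
        if current is not None and current != vpn:
            raise ValueError("frame already mapped at a different "
                             "virtual address; aliasing not allowed")
        self.frame_to_vpn[frame] = vpn
        self.spaces[(asid, vpn)] = frame

    def translate(self, asid, vpn):
        return self.spaces[(asid, vpn)]

ipt = InvertedPageTable(nframes=4)
ipt.map(asid=1, vpn=0x10, frame=2)    # process 1 maps frame 2 at VA 0x10
ipt.map(asid=2, vpn=0x10, frame=2)    # process 2 shares it at the same VA
try:
    ipt.map(asid=3, vpn=0x20, frame=2)    # different VA for the same frame
except ValueError as e:
    print("rejected:", e)
```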

It's been too long since I worked on the Myrias PAMS stuff, but I think
there was something extra that we had to do under HP-UX that we didn't
have to do under AIX. It may have been related to the ability to nuke
entire virtual segments under AIX, however.

--
Experience should guide us, not rule us.

Chris Gray cg(a)GraySage.COM
http://www.Nalug.ORG/ (Lego)
http://www.GraySage.COM/cg/ (Other)
From: Anne & Lynn Wheeler on

Mayan Moudgill <mayan(a)bestweb.net> writes:
> There are several OSes (including, IIRC, HP-UX) which do not permit
> multiple virtual addresses to point to the same real address. I'm
> guessing that they've managed to work around the CoW trick somehow.

the problem can arise when there is some sort of virtual cache (i.e. cache
lines are virtual-address associative) ... here is old email describing
the "logical directory" (a mixture of virtual and real addresses) for 3090
cache:
http://www.garlic.com/~lynn/2003j.html#email831118
in this old post
http://www.garlic.com/~lynn/2003j.html#42

where the virtual addresses are "STO" associative ... effectively an
address space identifier. there was work in the original 370 architecture
allowing for "PTO" associative operation, i.e. the STO (segment table
origin) points to a unique segment table for each address space; the
segment table contains segment table entries which are PTOs (page table
origins) pointing to the page table for each segment. If different
virtual address spaces did sharing by pointing to the same segment
(i.e. page table) and the cache was PTO associative ... then there
wouldn't be a problem ... even if the same shared segment appeared at
different virtual addresses in different virtual address spaces.
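(A minimal simulation of that point, with the STO/PTO machinery heavily simplified and all names illustrative: if cache lines are tagged by PTO plus offset rather than by virtual address, a shared segment hits in the cache even when mapped at different virtual addresses in different address spaces.)

```python
# Sketch: PTO-associative virtual cache. Two address spaces map the
# same segment (same PTO) at different virtual addresses; the second
# access still hits because the tag is (PTO, offset), not the VA.

SEG_SIZE = 1 << 20    # 370-style 1 MB segments (illustrative)

class AddressSpace:
    """STO analogue: maps segment index -> PTO (shared page table id)."""
    def __init__(self, segments):
        self.segments = segments

class PtoCache:
    def __init__(self):
        self.lines = {}    # (pto, offset) -> data
        self.hits = 0
        self.misses = 0

    def access(self, space, vaddr, memory):
        seg, offset = divmod(vaddr, SEG_SIZE)
        tag = (space.segments[seg], offset)    # PTO-associative tag
        if tag in self.lines:
            self.hits += 1
        else:
            self.misses += 1
            self.lines[tag] = memory.get(tag)
        return self.lines[tag]

shared_pto = 7
a = AddressSpace({0: shared_pto})            # shared segment at VA 0
b = AddressSpace({0: 99, 1: shared_pto})     # same segment at VA 1 MB
cache = PtoCache()
mem = {(shared_pto, 0x100): "data"}
cache.access(a, 0x100, mem)               # miss, fills the line
cache.access(b, SEG_SIZE + 0x100, mem)    # hit: same PTO tag
print(cache.hits, cache.misses)           # 1 1
```

A VA-tagged (STO-associative) cache would have missed on the second access, which is why aliased sharing is the hard case Lynn describes.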

I had done a lot of stuff originally on cp67 for page mapped filesystem
and virtual sharing ... even sharing the same thing at different virtual
addresses (or even having the same thing appearing multiple times in the
same virtual address space at different virtual addresses). old email
discussing migrating the changes from cp67 to vm370:
http://www.garlic.com/~lynn/2006v.html#email731212
http://www.garlic.com/~lynn/2006w.html#email750102
http://www.garlic.com/~lynn/2006w.html#email750430

--
40+yrs virtualization experience (since Jan68), online at home since Mar1970