From: Ilya Zakharevich on
On 2010-08-03, Peter J. Holzer <hjp-usenet2(a)hjp.at> wrote:
>> They do on OS/2: the DLL-related memory is loaded into a shared
>> address region. (This way one does not need any "extra"
>> per-process-context patching or redirection of DLL address accesses.)

> Sounds a bit like the pre-ELF shared library system in Linux.

No, there is a fundamental difference: on Linux (and most other flavors
of Unix), you never know whether your program will be "assembled"
(from shared modules) correctly: it is Russian roulette which C symbol
is resolved to which shared module (remember the Perl_ and PL_
prefixes? They are the only workaround I know of). On OS/2, symbol
resolution is done at link time; each symbol KNOWS which DLL (and
which entry point inside the DLL) it binds to.
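
To make the "roulette" concrete, here is a minimal sketch of how ELF
symbol interposition bites on Linux (the file names and the
greet/greet_impl symbols are invented for illustration): a definition
in the main executable silently preempts the one inside the shared
library, even for the library's own internal calls.

    /* libgreet.c -- build with: gcc -fPIC -shared -o libgreet.so libgreet.c */
    #include <stdio.h>

    void greet_impl(void) { puts("library version"); }

    /* This internal call is resolved through the global symbol lookup
     * at load time; it is NOT bound to the definition above. */
    void greet(void) { greet_impl(); }

    /* main.c -- build with: gcc main.c ./libgreet.so -o demo */
    #include <stdio.h>

    void greet(void);                     /* from libgreet.so */

    /* Same name as the library's helper: the dynamic linker makes the
     * library's internal call land here instead. */
    void greet_impl(void) { puts("main's version"); }

    int main(void) { greet(); return 0; } /* prints "main's version" */

Prefixing every exported symbol (Perl_foo, PL_bar) merely makes such
accidental collisions unlikely; link-time binding makes them
impossible.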

So, on OS/2:

- a DLL is compiled to the same assembler code as an EXE (no
  indirection);

- if a DLL is used from two different programs, its text pages are
  identical - provided all modules it links to are loaded at the same
  addresses (no process-specific fixups);

- and that is what happens. A DLL runs as quickly as an EXE, and
  there is no overhead when it is reused.
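
For contrast, one can see what the Unix scheme pays for its
flexibility by compiling one trivial file twice and comparing the
generated code (shared_counter is a made-up symbol; on distributions
where PIE is the default, -fno-pie is needed to get the non-PIC
variant):

    /* pic_demo.c -- compare the two outputs:
     *   gcc -O2 -fno-pie -S pic_demo.c -o nopic.s  (direct access)
     *   gcc -O2 -fPIC    -S pic_demo.c -o pic.s    (access via the GOT)
     */
    extern int shared_counter;   /* defined in some other module */

    int bump(void) { return ++shared_counter; }

In the -fPIC version every access to shared_counter goes through an
extra load from the Global Offset Table, and cross-module calls go
through the PLT; the OS/2 scheme avoids both, at the price of
requiring every process to map each DLL at the same address.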

[And, of course, a program loader living in user space is another
gift from people having no clue about security... As if an
executable stack were not enough ;-)]

> Of course that was designed when 16 MB was a lot of RAM and
> abandoned when 128 MB became normal for a server (but then I guess
> the same is true for OS/2).

No, it was designed when 2M was a lot of RAM. ;-) On the other hand,
the architecture was designed by mainframe people, so they may have had
different experiences.

> I'd still be surprised if anybody ran an application mix on OS/2 where
> the combined code size of all DLLs exceeds 1 GB.

"Contemporary flavors" of OS/2 still run confortably in 64MB systems.
(Of course, there is no FireWire/Bluetooth support, but I do not
believe that they would add much - IIRC, the USB stack is coded with
pretty minimal overhead.)

> the kernel. But of course making large changes for a factor of at most 2
> doesn't make much sense in a world governed by Moore's law, and anybody
> who needed the space moved to 64 bit systems anyway.

I do not believe in Moore's law (at least not in this context). Even
at today's memory prices, the DockStar has only 128MB of memory. Two
weeks ago it cost $24.99 on Amazon (not now, though!). I think in a
year or two we can start getting Linux stations in the $10 range.
In that price range, memory size matters.

So Moore's law works both ways; low-memory situations do not
magically go out of scope.

Yours,
Ilya
From: Peter J. Holzer on
On 2010-08-03 21:51, Ilya Zakharevich <nospam-abuse(a)ilyaz.org> wrote:
> On 2010-08-03, Peter J. Holzer <hjp-usenet2(a)hjp.at> wrote:
>>> They do on OS/2: the DLL-related memory is loaded into a shared
>>> address region. (This way one does not need any "extra"
>>> per-process-context patching or redirection of DLL address accesses.)
>
>> Sounds a bit like the pre-ELF shared library system in Linux.
>
> No, there is a fundamental difference: on Linux (and most other flavors
> of Unix), you never know whether your program will be "assembled"

There are a lot more fundamental differences, some of which are even
slightly relevant to the current topic, which is address space usage.


>> Of course that was designed when 16 MB was a lot of RAM and
>> abandoned when 128 MB became normal for a server (but then I guess
>> the same is true for OS/2).
>
> No, it was designed when 2M was a lot of RAM. ;-) On the other hand,
> the architecture was designed by mainframe people, so they may have had
> different experiences.
>
>> I'd still be surprised if anybody ran an application mix on OS/2 where
>> the combined code size of all DLLs exceeds 1 GB.
>
> "Contemporary flavors" of OS/2 still run confortably in 64MB systems.

And users of 64 MB systems load 1 GB of DLLs into their virtual memory?

Sorry, but that's just bullshit. The combined code size of all DLLs on a
64MB system is almost certainly a lot less than 64 MB, or the system
wouldn't be usable[1]. So by moving code from a general "code+data"
address space to a special code address space, you free up at most 64 MB
in the data address space - a whopping 3% of the 2GB you already have.


> (Of course, there is no FireWire/Bluetooth support, but I do not
> believe that they would add much - IIRC, the USB stack is coded with
> pretty minimal overhead.)
>
>> the kernel. But of course making large changes for a factor of at most 2
>> doesn't make much sense in a world governed by Moore's law, and anybody
>> who needed the space moved to 64 bit systems anyway.
>
> I do not believe in Moore's law (at least not in this context). Even
> at today's memory prices, the DockStar has only 128MB of memory.

The users of systems with 128 MB of memory don't benefit from a change
which increases the virtual address space from 2GB to 4GB. The only
people who benefit from such a change are those for whom 2GB is too
small and 4 GB is just enough. These are very likely to be the people
for whom next year 4 GB will be too small and the year after 8 GB will be
too small. An architectural change which increases the usable
address space by a mere factor of two isn't worth the effort, as the
people who need it will run into the new limit within a year or two. If
you make such a change you have to make it large enough to last for at
least a few years - a factor of 16 (8080 -> 8086, 8086 -> 80286, 80386
-> x86/PAE) seems to be just large enough to be viable.

> So Moore's law works both ways; low-memory situations do not
> magically go out of scope.

Nobody claimed that.

hp

[1] Yes, it's possible that you load a 1MB DLL just to use a single
function. But it's unlikely.
From: Ilya Zakharevich on
On 2010-08-07, Peter J. Holzer <hjp-usenet2(a)hjp.at> wrote:
> Sorry, but that's just bullshit. The combined code size of all DLLs on a
> 64MB system is almost certainly a lot less than 64 MB, or the system
> wouldn't be usable[1].

BS. Obviously, you never used OS/2 (in the IBM variant - v2.0 or
later; versions up to 1.2 were done by Microsoft, and were of a very
different "quality")... I think the first 24/7 system I used had 4MB
of memory and, by my estimates today, loaded more than 20MB of DLLs
(counting VIRTUAL MEMORY usage). It was quite usable - with "light
usage". When I got fluent enough to move to "heavy usage", it became
usable again after an upgrade to 8MB of physical memory.

What you forget is that

a) there are well-designed systems - and the effects of paging are
   quite sensitive to design;

b) virtual memory usage may, theoretically, be more than an order of
   magnitude larger than the resulting physical memory usage - if the
   unit of virtual memory (the analogue of what sbrk() increments) is
   64KB, and the unit of physical memory (the page size) is 4KB.

   Imagine that when you sbrk() for 2KB, you are returned one fully
   accessible page of memory, but the next sbrk() starts at the next
   64KB boundary. (Yes, this is how my first implementation of
   Perl's malloc() behaved on OS/2. ;-)

   Well-designed malloc()s do not work this way. But a DLL
   typically loads 2-3 segments; they take a minimum of 128-192KB of
   the shared memory region - see the sketch after this list.
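
A minimal sketch of that granularity arithmetic (the 64KB allocation
unit and 4KB page size are the figures from above; the segment sizes
are invented examples):

    #include <stdio.h>

    #define ALLOC_UNIT (64 * 1024)  /* granularity of address-space allocation */
    #define PAGE_SIZE  ( 4 * 1024)  /* granularity of physical memory */

    static unsigned long round_up(unsigned long n, unsigned long unit)
    {
        return (n + unit - 1) / unit * unit;
    }

    int main(void)
    {
        /* hypothetical sizes of small loaded segments, in bytes */
        unsigned long segs[] = { 2048, 3000, 4096, 5000 };
        unsigned long virt = 0, phys = 0;
        unsigned long i;

        for (i = 0; i < sizeof segs / sizeof segs[0]; i++) {
            virt += round_up(segs[i], ALLOC_UNIT); /* address space consumed */
            phys += round_up(segs[i], PAGE_SIZE);  /* pages actually touched */
        }

        /* prints: virtual: 256 KB, physical: 20 KB, ratio 12.8 */
        printf("virtual: %lu KB, physical: %lu KB, ratio %.1f\n",
               virt / 1024, phys / 1024, (double)virt / phys);
        return 0;
    }

With many small allocations the ratio approaches the worst case of
64KB/4KB = 16 - which is where the "order of magnitude" above comes
from.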

>> I do not believe in Moore's law (at least not in this context). Even
>> at today's memory prices, the DockStar has only 128MB of memory.

> The users of systems with 128 MB of memory don't benefit from a change
> which increases the virtual address space from 2GB to 4GB. The only
> people who benefit from such a change are those for whom 2GB is too
> small and 4 GB is just enough.

And, in the chunk of the future as I foresee it, there will ALWAYS be
a computer form factor for which it is going to matter. You can see
that today Linux is used mostly on systems with 128MB - 12GB of
memory. I do not foresee that in the "close" future the lower end of
that range will float above 4GB.

Yours,
Ilya
From: Peter J. Holzer on
On 2010-08-07 22:59, Ilya Zakharevich <nospam-abuse(a)ilyaz.org> wrote:
> On 2010-08-07, Peter J. Holzer <hjp-usenet2(a)hjp.at> wrote:
>> Sorry, but that's just bullshit. The combined code size of all DLLs on a
>> 64MB system is almost certainly a lot less than 64 MB, or the system
>> wouldn't be usable[1].
>
> BS. Obviously, you never used OS/2 (in the IBM variant - v2.0 or
> later; versions up to 1.2 were done by Microsoft, and were of a very
> different "quality")...

True. I only used OS/2 1.x (around 1988/89).

> I think the first 24/7 system I used had 4MB of memory and, by my
> estimates today, loaded more than 20MB of DLLs (counting VIRTUAL
> MEMORY usage).

I am sceptical - when 4 MB of memory was normal, 20 MB of code was a
lot, even allowing for the huge overhead caused by 64 kB segment
granularity. But even if you are right, and there were the same level
of overcommitment (5 x the physical memory) on a 64 MB system (which
I also doubt), that would still be only 320 MB - far from the 2GB
limit.


> What you forget is that
[...]
> Well-designed malloc()s do not work this way. But a DLL
> typically loads 2-3 segments; they take a minimum of 128-192KB of
> the shared memory region.

2-3 segments of code, or 1 segment of code, 1 segment of private data,
and 1 segment of shared data? Only the code segment(s) are relevant here.


>>> I do not believe in Moore's law (at least not in this context). Even
>>> at today's memory prices, the DockStar has only 128MB of memory.
>
>> The users of systems with 128 MB of memory don't benefit from a change
>> which increases the virtual address space from 2GB to 4GB. The only
>> people who benefit from such a change are those for whom 2GB is too
>> small and 4 GB is just enough.
>
> And, in the chunk of the future as I foresee it, there will ALWAYS be
> a computer form factor for which it is going to matter. You can see
> that today Linux is used mostly on systems with 128MB - 12GB of
> memory. I do not foresee that in the "close" future the lower end of
> that range will float above 4GB.

You misunderstood me. I didn't say that the range below 4GB would
vanish. But people who need virtual memory sizes (not physical memory!)
up to 3 GB are perfectly fine with the current 32-bit scheme, and those
who need more than 4 GB need to move to 64-bit anyway. So the only
people who would benefit from a change of the 32-bit scheme are those
who need between 3 GB and 4 GB. This is IMNSHO a tiny minority, and it
will stay a tiny minority.

hp