From: Hector Santos on
Peter Olcott wrote:

>> Of course, good engineering and good hardware molds the
>> chance.
>
> So its not really chance.


No, it's still chance, just a lower chance.

It can only be 100% deterministic - excluding chance - when you
disable and remove chance from the picture so it can never occur,
even if the probability is low. Short of that, it's a "cross my
fingers" design philosophy.

And like Microsoft said again, it is ALWAYS using Virtual Memory -
ALWAYS, even when the memory requirements are below the physical RAM size.

All you basically showed is that one single process runs nicely on
your 8GB, fast-caching machine, where after loading the
application it can reside in memory with a lower chance of paging
out - simply BECAUSE you have nothing else putting pressure on the
machine.

Run a 2nd instance and you begin to see faults. You saw that. You
proved that. You told us that. It is why this thread got started.

I really hope you are beyond all these subtle Windows design points
by now. But to get more input, in this loading you do:

Data.reserve(size);

change it to this to display the memory load before and after you
reserve:

DWORD GetMemoryLoad()
{
    MEMORYSTATUSEX ms;
    ms.dwLength = sizeof(ms);
    GlobalMemoryStatusEx(&ms);
    // Return by value: returning a const DWORD& here would be a
    // dangling reference to the local ms.
    return ms.dwMemoryLoad;   // percent of physical memory in use
}

printf("* B4 Memory Load : %lu%%\n", GetMemoryLoad());
Data.reserve(size);
printf("* AD Memory Load : %lu%%\n", GetMemoryLoad());

(Note %lu rather than %d: DWORD is an unsigned long on Windows.)

and you will see the memory load % and the page faults under Task
Manager climb.
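You can also read the fault count programmatically instead of
eyeballing Task Manager. A sketch (untested here) using
GetProcessMemoryInfo() from psapi - link with psapi.lib:

```cpp
// Sketch: read this process's cumulative page-fault count
// (soft + hard faults since process start) via psapi.
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

DWORD GetPageFaultCount()
{
    PROCESS_MEMORY_COUNTERS pmc;
    pmc.cb = sizeof(pmc);
    GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc));
    return pmc.PageFaultCount;
}

// Usage around the reserve (Data and size as in your code):
//   DWORD before = GetPageFaultCount();
//   Data.reserve(size);
//   printf("* Faults during reserve: %lu\n",
//          GetPageFaultCount() - before);
```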

It remains steady at that point, but that only means you don't have
any competition for memory throughout the system.

--
HLS
From: Peter Olcott on

"Hector Santos" <sant9442(a)nospam.gmail.com> wrote in message
news:eEdA4isyKHA.5288(a)TK2MSFTNGP05.phx.gbl...
>
> Peter Olcott wrote:
>
>>> Of course, good engineering and good hardware molds the
>>> chance.
>>
>> So its not really chance.
>
>
> No, it's still chance, just a lower chance.
>
> It can only be 100% deterministic - excluding chance -
> when you disable and remove chance from the picture so
> it can never occur, even if the probability is low.
> Short of that, it's a "cross my fingers" design
> philosophy.
>
> And like Microsoft said again, it is ALWAYS using Virtual
> Memory - ALWAYS, even when the memory requirements are
> below the physical RAM size.
>
> All you basically showed is that one single process runs
> nicely on your 8GB, fast-caching machine, where after
> loading the application it can reside in memory with a
> lower chance of paging out - simply BECAUSE you have
> nothing else putting pressure on the machine.
>
> Run a 2nd instance and you begin to see faults. You saw
> that. You proved that. You told us that. It is why this
> thread got started.

Four instances using 1.5 GB of RAM each, and zero page faults after
the data is loaded.

You never know: a man with a billion dollars in the bank just
might panic and sell all of his furniture, just in case he loses
the billion dollars and won't be able to afford to pay his
electric bill.

If nothing else, I will shut the VM off.

>
> I really hope you are beyond all these subtle windows
> design points by now. But to get more input, in this
> loading you do:
>
> Data.reserve(size);
>
> change it to this to display the memory load before and
> after you reserve
>
> DWORD GetMemoryLoad()
> {
>     MEMORYSTATUSEX ms;
>     ms.dwLength = sizeof(ms);
>     GlobalMemoryStatusEx(&ms);
>     return ms.dwMemoryLoad;
> }
>
> printf("* B4 Memory Load : %lu%%\n", GetMemoryLoad());
> Data.reserve(size);
> printf("* AD Memory Load : %lu%%\n", GetMemoryLoad());
>
> and you will see the memory load % and the page faults
> under Task Manager climb.
>
> It remains steady at that point, but that only means you
> don't have any competition for memory throughout the
> system.
>
> --
> HLS


From: Hector Santos on
Peter Olcott wrote:

> "Hector Santos" <sant9442(a)nospam.gmail.com> wrote in message
> news:eEdA4isyKHA.5288(a)TK2MSFTNGP05.phx.gbl...
>> Peter Olcott wrote:
>>
>>>> Of course, good engineering and good hardware molds the
>>>> chance.
>>> So its not really chance.
>>
>> No, it's still chance, just a lower chance.
>>
>> It can only be 100% deterministic - excluding chance -
>> when you disable and remove chance from the picture so
>> it can never occur, even if the probability is low.
>> Short of that, it's a "cross my fingers" design
>> philosophy.
>>
>> And like Microsoft said again, it is ALWAYS using Virtual
>> Memory - ALWAYS, even when the memory requirements are
>> below the physical RAM size.
>>
>> All you basically showed is that one single process runs
>> nicely on your 8GB, fast-caching machine, where after
>> loading the application it can reside in memory with a
>> lower chance of paging out - simply BECAUSE you have
>> nothing else putting pressure on the machine.
>>
>> Run a 2nd instance and you begin to see faults. You saw
>> that. You proved that. You told us that. It is why this
>> thread got started.
>
> Four instances of 1.5 GB RAM and zero page faults after the
> data is loaded.


Add a 5th one and you will see page faults again.

The point is that you did have initial page faults, and that goes
to show you were already in the public swimming pool (VM) and got
wet like everyone else in the pool. The difference? You are by
yourself or with few people in the pool, or you have a clear lane
to yourself and you can swim the length of the pool with your eyes
closed and not worry about crashing into a little kid. :)

> If nothing else I will shut the VM off.


Now you are making it more deterministic! :)

--
HLS
From: Peter Olcott on

"Hector Santos" <sant9442(a)nospam.gmail.com> wrote in message
news:O2vmnSuyKHA.5332(a)TK2MSFTNGP02.phx.gbl...
> Peter Olcott wrote:
>
>> "Hector Santos" <sant9442(a)nospam.gmail.com> wrote in
>> message news:eEdA4isyKHA.5288(a)TK2MSFTNGP05.phx.gbl...
>>> Peter Olcott wrote:
>>>
>>>>> Of course, good engineering and good hardware molds
>>>>> the chance.
>>>> So its not really chance.
>>>
>>> No, it's still chance, just a lower chance.
>>>
>>> It can only be 100% deterministic - excluding chance -
>>> when you disable and remove chance from the picture so
>>> it can never occur, even if the probability is low.
>>> Short of that, it's a "cross my fingers" design
>>> philosophy.
>>>
>>> And like Microsoft said again, it is ALWAYS using
>>> Virtual Memory - ALWAYS, even when the memory
>>> requirements are below the physical RAM size.
>>>
>>> All you basically showed is that one single process
>>> runs nicely on your 8GB, fast-caching machine, where
>>> after loading the application it can reside in memory
>>> with a lower chance of paging out - simply BECAUSE you
>>> have nothing else putting pressure on the machine.
>>>
>>> Run a 2nd instance and you begin to see faults. You saw
>>> that. You proved that. You told us that. It is why this
>>> thread got started.
>>
>> Four instances of 1.5 GB RAM and zero page faults after
>> the data is loaded.
>
>
> Add a 5th one and you will see page faults again.

Of course, but I will make sure that never occurs.

>
> The point is that you did have initial page faults, and
> that goes to show you were already in the public swimming
> pool (VM) and got wet like everyone else in the pool. The
> difference? You are by yourself or with few

Exactly! The only purpose of any process on this dedicated
web server is to serve my OCR technology. Anything else at all
is not authorized to execute.

> people in the pool, or you have a clear lane to yourself
> and you can swim the length of the pool with your eyes
> closed and not worry about crashing into a little kid. :)
>
>> If nothing else I will shut the VM off.
>
>
> Now you are making it more deterministic! :)

Yes, but this might slow other aspects down a bit, or make
things a little less reliable. Most of the OS design assumes
that virtual memory is available, so I will leave it on unless I
have to shut it down.
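A middle ground I might try instead - a sketch only, untested, and
the buffer name is hypothetical - is to leave paging enabled but pin
the critical data in physical RAM with VirtualLock(), after first
raising the working-set quota with SetProcessWorkingSetSize() so the
lock can succeed:

```cpp
// Sketch: keep the pagefile on, but make residency of one buffer
// deterministic by locking its pages into the working set.
#include <windows.h>
#include <stdio.h>

bool PinBuffer(void* p, SIZE_T bytes)
{
    // VirtualLock fails unless the working-set maximum covers the
    // locked region; the extra slack leaves room for the rest of
    // the process (sizes here are arbitrary assumptions).
    if (!SetProcessWorkingSetSize(GetCurrentProcess(),
                                  bytes + (SIZE_T)(16 << 20),
                                  bytes + (SIZE_T)(64 << 20)))
        return false;
    return VirtualLock(p, bytes) != 0;  // pages now stay resident
}

// Hypothetical usage with the std::vector from earlier posts:
//   if (!PinBuffer(&Data[0], Data.size() * sizeof(Data[0])))
//       printf("lock failed: %lu\n", GetLastError());
```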

>
> --
> HLS


From: Oliver Regenfelder on
Hello,

Peter Olcott wrote:
> For all practical purposes virtual memory is not being used
> (meaning that its use is not impacting performance) whenever
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> zero or very few page faults are occurring.

I think the underlined statement marks part of the problem.
In my opinion, both of you hold much the same point of view
regarding virtual memory "usage", but you phrase your insights
differently.

I think that your (Peter) wording is a bit misleading. You represent
the fact that virtual memory has no impact with the words "it is not
used", which is formally wrong, and that is one thing that offends
Hector.

Could we agree to say "virtual memory is used but does not
show any impact", or better, "virtual memory is used but paging
does not occur"? Those two statements better describe what is
going on.

Best regards,

Oliver