From: Grzegorz on
Lucian Wischik wrote:
> Grzegorz Wróbel </dev/null(a)localhost.localdomain> wrote:
>
>>rivenburgh(a)gmail.com wrote:
>>
>>>It looks like a single 250 MB allocation actually tries
>>>to allocate double that behind the scenes, for efficiency's sake.
>>
>>You kidding. What kind of efficiency would be that?
>
>
> I think that's how most heap-managers are written -- if the user has
> requested more memory than is available, then it's likely they'll
> request even more in future, so build up exponentially (request double
> the current max) so as to minimize how many requests you have to make
> to the underlying OS.

Yes, a memory manager will usually commit more memory than a process requests, for the reasons you mentioned. But it's rather unlikely a memory manager would reserve an additional 12.5% of total system RAM for a single process just for optimization purposes. Anyway, regardless of whether it's likely or not, it does not explain the OP's case.

Even if you assumed that a "smart" cheap manager charged 500MB the first time the process requested 250MB, and charged another 500MB when the process requested another 250MB, then it should be obvious that when the process requests 250MB for the third time, no memory will be charged, since that request can be satisfied from what the process already has.
The OP described that memory is charged on the 3rd request as well, and that the 4th request fails with an "out of memory" error. This suggests it is not some eccentricity of the memory manager but that the process really consumes such amounts of memory.

I would bet on some simple bug/typo in the source, but I guess we'll never know, since the code is top-secret. :)

--
677265676F727940346E6575726F6E732E636F6D
From: Grzegorz on
Grzegorz Wróbel wrote:
> Even if you assumed that "smart" cheap manager charged 500MB for the
I probably meant here that such heap manager would be cheap. :D

--
677265676F727940346E6575726F6E732E636F6D
From: rivenburgh on
I'm sure I could make your approach work, but another big reason not to
is performance. I want this data to be in memory because it's
constantly being accessed in unpredictable ways. I imagine mapping a
file would be pretty slow, though reliable.

So I'm still back to trying to figure out why new/HeapAlloc/etc. aren't
working quite right, and I'm still looking for a simple fix if
possible.

Thanks,
Reid

From: Lucian Wischik on
rivenburgh(a)gmail.com wrote:
>I'm sure I could make your approach work, but another big reason not to
>is performance. I want this data to be in memory because it's
>constantly being accessed in unpredictable ways. I imagine mapping a
>file would be pretty slow, though reliable.

Reid, you have a fundamental misunderstanding of how memory and
virtual memory work...

(in this particular case, MapViewOfFile results in EXACTLY the same
performance as HeapAlloc, and works with exactly the same mechanism.)

My advice is to (1) find some basic explanations of operating systems,
virtual memory, TLBs and read them; (2) until then, follow the advice
in this thread a bit more blindly!

--
Lucian
From: rivenburgh on
Heh. That's what I thought, too, but it didn't mean I could rule such
silly behavior out!

Boris posted a little test program that I'm going to try. I had also
written a small program that was demonstrating my problem. I'm going
to take another look at everything and report back.

Thanks,
Reid
