From: Grzegorz on
Reid Rivenburgh wrote:

> allocate more than 4-5. It does look to me like the allocation of 250
> MB temporarily requires 500 MB. Stephen suggested in private email
> that it was a problem with the implementation of "new" trying to be a
> little too smart and predictive. He thought using HeapAlloc would
> avoid that, but it seems like it happens there, too.
I wouldn't call it "little". I wanted you to post the relevant code, because it sounds like a very simple bug or even a typo.
Like reading short ints from a file into ints in memory, or something like that. You might try using malloc, where you are forced to provide the number of bytes explicitly.
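For example (a made-up sketch, not your code, just to show the kind of mismatch I mean):

/* Hypothetical sketch: the file holds 16-bit values, but the in-memory
   copy is built from 32-bit ints, so the allocation is twice the size of
   the data on disk - a 250MB file becomes a 500MB buffer. */
#include <stddef.h>
#include <stdlib.h>

void load(size_t sampleCount)
{
    short* onDisk   = new short[sampleCount]; /* 2 bytes per element, matches the file */
    int*   inMemory = new int[sampleCount];   /* 4 bytes per element - double the size */

    /* With malloc you have to spell out the byte count yourself,
       which makes this kind of mismatch easier to spot: */
    void* raw = malloc(sampleCount * sizeof(short));

    free(raw);
    delete[] onDisk;
    delete[] inMemory;
}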
>
> Regarding fragmentation, maybe a better example illustrating Stephen's
> point would be two 900 MB chunks on a 2 GB machine. The first will
> likely work (assuming it doesn't try to get twice that!), but the
> second will likely fail. At least, that's what I've seen. My memory
> usually has some junk scattered through it that prevents the OS from
> finding that much contiguous space.
It is most likely that the overall system memory (2GB) is being exceeded (other applications may already be consuming a couple of hundred MB) when you try to allocate the second 900MB, rather than an inability to find a contiguous 900MB region within the application's own address space (that 200MB was not allocated inside your address space), if that's what you mean.
If you are referring to physical memory fragmentation, however, then that problem has effectively not existed since Windows 3.1. There is no need for 900MB of contiguous physical memory in order to allocate 900MB with a single new[]. It is the system's job to translate logical addresses from your app's own address space into physical ones, and a contiguous logical address range may be mapped onto many non-contiguous blocks in RAM. In fact, some of the data might not be in RAM at all if it has been paged out to the swap file.
So most likely you simply don't have another 900MB in total.
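You can check which limit you are actually hitting by comparing the free address space in your own process with the free physical memory plus pagefile for the whole system, e.g. (quick sketch, using GlobalMemoryStatusEx, which is available on Windows 2000 and later):

/* Quick sketch: free virtual address space in this process vs. free
   physical memory + pagefile for the whole system. */
#define _WIN32_WINNT 0x0500   /* needed for GlobalMemoryStatusEx on older SDK headers */
#include <windows.h>
#include <stdio.h>

int main()
{
    MEMORYSTATUSEX ms;
    ms.dwLength = sizeof(ms);
    GlobalMemoryStatusEx(&ms);

    printf("Free address space in this process: %I64u MB\n",
           ms.ullAvailVirtual / (1024 * 1024));
    printf("Free physical memory + pagefile:    %I64u MB\n",
           ms.ullAvailPageFile / (1024 * 1024));
    return 0;
}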

>
> Thanks,
> Reid

--
677265676F727940346E6575726F6E732E636F6D
From: scott moore on
rivenburgh(a)gmail.com wrote:

> I'm also afraid that even HeapCreate/Alloc are temporarily using more
> than the exact amount of memory I need and failing prematurely (double
> again, in fact). Does anyone have any suggestions for maximum
> efficiency when allocating huge chunks of memory? The exact amount
> I'll need isn't something I know ahead of time, unfortunately, so I
> can't just do something like grab 1.5 GB when the program starts.
> (Well, I guess I could, but that might be unnecessarily greedy in some
> cases.) I understand fragmentation is an unpredictable issue. Also, I
> can't use anything like windbg in this environment; just what comes
> with Visual Studio and the Memory Validator tool.

Yes, the answer is to stop worrying about it. This is a virtual memory
system. Thinking that every byte taken from the heap is a byte of memory
lost is thinking carried over from the days of fixed-memory operating
systems. All you are REALLY doing here is allocating logical address space.
You should be reserving at least double the total allocation you expect to need.
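If you want to make that explicit, you can reserve the address space up front and commit it as you go, roughly like this (sketch only):

/* Sketch: reserve a large range of address space up front, then commit
   pages only as they are actually needed. Reserving costs no physical
   memory or pagefile, only addresses inside this process. */
#include <windows.h>

char* ReserveBigBuffer(SIZE_T reserveBytes)
{
    return (char*)VirtualAlloc(NULL, reserveBytes, MEM_RESERVE, PAGE_NOACCESS);
}

char* CommitChunk(char* base, SIZE_T offset, SIZE_T commitBytes)
{
    /* Commit backs this sub-range with pagefile space and makes it usable. */
    return (char*)VirtualAlloc(base + offset, commitBytes, MEM_COMMIT, PAGE_READWRITE);
}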
From: scott moore on
Scott McPhillips [MVP] wrote:

>
> Qualification to my earlier message: When you grab the huge chunk of
> memory the OS may feel the need to zero it out, which _would_
> temporarily impact other processes committed storage.
>

No, it is marked as "not loaded, zeroed" (demand-zero), meaning that it will
be zeroed only when it is actually accessed. The only reason the memory is
touched at all is for the arena headers (used to manage the heap). If those
headers span multiple pages, the page each header falls in will be
initialized, and the rest is left for the VM to page in on demand. In any
case, that's only one page, and there are ways for the OS to skip even
doing that.
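You can see this for yourself with something like the following (sketch): commit a big block, touch one page, and watch the working set in Task Manager - it grows by roughly a page, not by the size of the block.

/* Sketch: commit 256MB but touch only the first page. The committed pages
   are demand-zero, so only the touched page gets zeroed and mapped in. */
#include <windows.h>

int main()
{
    const SIZE_T size = 256 * 1024 * 1024;
    char* p = (char*)VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (p != NULL)
    {
        p[0] = 1;        /* first access: this page is zeroed and mapped now */
        Sleep(30000);    /* time to look at the process in Task Manager */
        VirtualFree(p, 0, MEM_RELEASE);
    }
    return 0;
}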
From: Boris on
<rivenburgh(a)gmail.com> wrote in message
news:1140810484.105812.312260(a)i40g2000cwc.googlegroups.com...
> Hi. I have an application in which I need to dynamically allocate
> multiple instances of large amounts of memory (used to store large
> datasets). They can vary in size, but 250 MB is typical. On a machine
> with 2 GB of memory, I can generally read in three of them before I get
> an out of memory error. The app is written in C++ and uses standard
> "new" to allocate the memory. I have Memory Validator, a nice tool
> along the lines of Purify. It lets me see the memory space as my
> program runs. It looks like a single 250 MB allocation actually tries
> to allocate double that behind the scenes, for efficiency's sake. At
> some seemingly premature point, I guess because of memory
> fragmentation, it fails. (Not much else is running on the machine at
> the time.)
>
> I've been told that instead of "new" I should use Windows routines like
> HeapCreate and HeapAlloc to prevent it from trying to allocate so much
> memory. I've been playing around with them in a dummy program, and I'm
> seeing something strange. If I create a growable, 200 MB heap space
> with HeapCreate, and then I try to allocate 20 MB of space from that
> heap using HeapAlloc, Memory Validator shows that the 20 MB is being
> taken from the free space OUTSIDE my 200 MB heap. My question, then,
> is whether HeapAlloc is guaranteed to use the space from the specified
> heap. Is it possible that the tool I'm using to watch what's going on
> in memory is just getting it wrong?
>
> I'm also afraid that even HeapCreate/Alloc are temporarily using more
> than the exact amount of memory I need and failing prematurely (double
> again, in fact). Does anyone have any suggestions for maximum
> efficiency when allocating huge chunks of memory? The exact amount
> I'll need isn't something I know ahead of time, unfortunately, so I
> can't just do something like grab 1.5 GB when the program starts.
> (Well, I guess I could, but that might be unnecessarily greedy in some
> cases.) I understand fragmentation is an unpredictable issue. Also, I
> can't use anything like windbg in this environment; just what comes
> with Visual Studio and the Memory Validator tool.
>
> I'm running Windows 2000 SP4. I may move to XP someday....
>
> Thanks for any info! I'm obviously something of a novice when it comes
> to low-level memory management....
>
> Thanks,
> Reid
>
> P.S. I have no connection with the Memory Validator folks; they've
> been very helpful and it seems like a nice tool.
> http://www.softwareverify.com/memoryValidator/
>

I just wrote a test C++ program (using VC6) - in 10 minutes - that allocates
memory with the *regular* operator 'new'.
My machine (32-bit XP Pro SP2) has 512 MB RAM and 1.75GB of pagefile space
(2 different pagefiles on 2 disks).
I was able to allocate 7 chunks of 250MB, or 19 chunks of 100MB, with that
test program.

So it seems the only real limit is the maximum amount of virtual memory per process (2GB).

Here's the source code:

#include <stdlib.h>
#include <stdio.h>
#include <memory.h>
#include <new>      /* for std::nothrow */

void Usage()
{
    printf("Usage... memalloc <chunk-size-in-MB> <number-of-chunks>\n");
    exit(2);
}

int main(int argc, char** argv)
{
    int nChunkSize = 0;
    int nNumChunks = 0;
    if ( argc == 3 )
    {
        nChunkSize = atoi(argv[1]);
        nNumChunks = atoi(argv[2]);
    }
    else
    {
        Usage();
    }

    if ( nChunkSize > 0 && nChunkSize < 1000 && nNumChunks > 0 )
    {
        for ( int idx = 0; idx < nNumChunks; idx++ )
        {
            /* nothrow new returns NULL on failure instead of throwing,
               so the check below also works on a standard-conforming compiler */
            char *p = new(std::nothrow) char[1024 * 1024 * nChunkSize];
            if ( p )
            {
                /* touch every page so the memory is actually committed */
                memset(p, 0, 1024 * 1024 * nChunkSize);
            }
            else
            {
                printf("Failed to allocate %d-th chunk of %d MB\n", idx, nChunkSize);
                break;
            }
            /* chunks are deliberately never freed: the point of the test is
               to see how many can be held at the same time */
        }
    }
    else
    {
        Usage();
    }
    return 0;
}
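
If you want to try the same thing with HeapCreate/HeapAlloc instead of 'new', the loop would look roughly like this (untested sketch):

/* Untested sketch: the same allocation loop from a private growable heap.
   dwMaximumSize == 0 makes the heap growable, and HEAP_ZERO_MEMORY stands
   in for the explicit memset above. */
#include <windows.h>
#include <stdio.h>

int AllocChunks(int nChunkSize, int nNumChunks)
{
    HANDLE hHeap = HeapCreate(0, 0, 0); /* growable heap, minimal initial commit */
    if ( hHeap == NULL )
        return 1;

    for ( int idx = 0; idx < nNumChunks; idx++ )
    {
        void* p = HeapAlloc(hHeap, HEAP_ZERO_MEMORY, 1024 * 1024 * (SIZE_T)nChunkSize);
        if ( p == NULL )
        {
            printf("Failed to allocate %d-th chunk of %d MB\n", idx, nChunkSize);
            break;
        }
    }

    HeapDestroy(hHeap); /* frees everything allocated from this heap */
    return 0;
}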

-Boris



From: Lucian Wischik on
Grzegorz Wróbel </dev/null(a)localhost.localdomain> wrote:
>rivenburgh(a)gmail.com wrote:
>> It looks like a single 250 MB allocation actually tries
>> to allocate double that behind the scenes, for efficiency's sake.
>You kidding. What kind of efficiency would be that?

I think that's how most heap managers are written -- if the user has
requested more memory than the heap currently has available, it's likely
they'll request even more in future, so the heap grows exponentially
(requesting double its current size) so as to minimize how many requests it
has to make to the underlying OS.
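
In code, that growth policy looks something like this (just a sketch):

/* Sketch of the usual growth policy: when the buffer is full, allocate
   double the current capacity and copy. The number of round trips to the
   allocator grows only logarithmically with the final size, but note that
   during the copy both the old and the new block exist at once. */
#include <string.h>
#include <stddef.h>

struct GrowableBuffer
{
    char*  data;
    size_t size;
    size_t capacity;
    GrowableBuffer() : data(0), size(0), capacity(0) {}
};

void Append(GrowableBuffer& buf, const char* bytes, size_t n)
{
    if (buf.size + n > buf.capacity)
    {
        size_t newCap = buf.capacity ? buf.capacity * 2 : 64;
        while (newCap < buf.size + n)
            newCap *= 2;

        char* bigger = new char[newCap];
        if (buf.size)
            memcpy(bigger, buf.data, buf.size);
        delete[] buf.data;
        buf.data = bigger;
        buf.capacity = newCap;
    }
    memcpy(buf.data + buf.size, bytes, n);
    buf.size += n;
}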

(This isn't my area, but I went to a talk by a guy who specialised in
concurrent heap managers and vaguely remember...) A good heap manager that
doesn't fragment much divides its available space into binary blocks -- say
the entire space is 1024; it might be divided into two blocks, so that one
block is available for allocations up to size 512, while the other block is
subdivided again and available for two allocations up to 256. Apparently
it's even better to divide into exponential (or Fibonacci?) sizes rather
than binary.

Anyway, these kinds of binary data structures might make it NECESSARY to
reserve 512MB just to satisfy a 250MB request (if 8MB had already been
used).
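
For what it's worth, in a binary scheme the block actually set aside is just the request rounded up to the next power of two (sketch):

/* Sketch: in a binary ("buddy") scheme every allocation comes out of a
   block whose size is the request rounded up to the next power of two,
   so a 250MB request occupies a 256MB block - and if that half of the
   space is already partly used, the allocator may have to take the block
   from a fresh 512MB half. */
#include <stddef.h>

size_t BuddyBlockSize(size_t request)
{
    size_t block = 1;
    while (block < request)
        block <<= 1;
    return block;
}

/* BuddyBlockSize(250 * 1024 * 1024) == 256MB */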

--
Lucian