From: Reid Rivenburgh on
"Alf P. Steinbach" <alfps(a)start.no> writes:

> * rivenburgh(a)gmail.com:
>> No, the processing involved doesn't allocate and deallocate small
>> chunks. It pretty much allocates the one big chunk and reads the data
>> from the file, processing the input and filling the memory. The
>> library just wants a file name and handles all of the memory management
>> itself, so it doesn't really lend itself to those kinds of
>> modifications....
>
> I don't understand what the problem is. If you can replace 'new' with
> HeapAlloc, and there is essentially one big chunk of memory that's the
> problem, why can't you replace 'new' with MapViewOfFile? They both
> yield memory pointers, and the use of those pointers are the same, no
> difference.

Unless I'm confused about something, I think the difference is that
what I'm storing in memory ISN'T what is stored in the file on disk.
The data on disk is read in, processed, converted, etc. and then
stored in memory as something else, it just happens that they're about
the same size.

Just changing the library to use HeapAlloc instead of "new" to
allocate its memory is a pretty minor change, so I don't mind trying
that if it helps.
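
Concretely, something like this minimal sketch is what I have in
mind; lib_alloc/lib_free are just stand-ins for whatever the
library's real allocation hooks are called:

  #include <windows.h>

  // Route the library's big allocations through a dedicated,
  // growable heap instead of the CRT's "new".
  static HANDLE g_heap = NULL;

  void* lib_alloc(size_t bytes)
  {
      if (!g_heap)
          g_heap = HeapCreate(0, 0, 0);  // max size 0 => growable
      return g_heap ? HeapAlloc(g_heap, 0, bytes) : NULL;
  }

  void lib_free(void* p)
  {
      if (g_heap && p)
          HeapFree(g_heap, 0, p);
  }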

Does that make sense...?

Thanks,
Reid
From: Grzegorz on
Stephen Kellett wrote:

> In message <dtnpjf$p4v$1(a)atlantis.news.tpi.pl>, Grzegorz Wróbel
> </dev/null(a)localhost.localdomain> writes
>
>> You kidding. What kind of efficiency would be that?
>> You should have no problem with allocating 2gb memory,
>
>
> You are mistaken. Try this code.
>
> char *p = new char [1024u * 1024 * 1024 * 2];
>
> Guaranteed to fail and return NULL on Windows NT/W2K/XP Workstation.
This example fails because you try to reserve more than your maximal address space (though that limit can be raised, as others mentioned).
What I meant was that you should be able to allocate about 2GB of memory using your 250MB chunks without much trouble. From what you described it seems you fail to allocate 4*250MB on a 2GB RAM machine. Allocating 250MB shouldn't consume 500MB under any circumstances!
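
You can verify that yourself with a little probe like this (just a
sketch; it deliberately leaks, since it only measures how far the
address space stretches):

  #include <windows.h>
  #include <stdio.h>

  int main()
  {
      const SIZE_T chunk = 250 * 1024 * 1024;  // 250MB
      int count = 0;
      // Grab 250MB blocks until the address space runs out.
      while (VirtualAlloc(NULL, chunk, MEM_RESERVE | MEM_COMMIT,
                          PAGE_READWRITE))
          ++count;
      printf("got %d blocks of 250MB\n", count);
      return 0;
  }

On a default 2GB address space that should manage something like 6-7
blocks, not 4.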

>
>> Memory fragmentation shouldn't be a problem, you deal with virtual
>> address space and os cares about translating it into physical one.
>
>
> Correct, but it needs to find a 2GB contiguous memory space. You won't
If you're worried about fragmentation of the application's address space, you shouldn't be, unless the application itself really does a lot of allocating and deallocating of large memory blocks.

> find a space that large on Windows NT/2K/XP (you may on a /3GB machine).
> On a non /3GB machine 2GB is the max space for the DLLs that form your
> application, the C heap, program stack and two 4KB guard pages at
> 0x00000000 and 0x7ffff000 and the workspace from which you wish to
> allocate memory. By definition there is not 2GB contiguous space
> available in that 2GB block as some of it is already used.

You are right that by default there is only 2GB of address space available, the upper 2GB being reserved by the system, but as Kelly mentioned it can easily be increased to 3GB, leaving 1GB for the system.
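
As a quick sanity check, a sketch like this reports how much virtual
address space the process actually sees (note that the executable
must also be linked with /LARGEADDRESSAWARE for the /3GB boot switch
to make a difference):

  #include <windows.h>
  #include <stdio.h>

  int main()
  {
      MEMORYSTATUSEX ms = { sizeof(ms) };
      GlobalMemoryStatusEx(&ms);
      // ullTotalVirtual: ~2GB by default, ~3GB with /3GB plus
      // /LARGEADDRESSAWARE.
      printf("total virtual: %I64u MB\n",
             ms.ullTotalVirtual / (1024 * 1024));
      printf("avail virtual: %I64u MB\n",
             ms.ullAvailVirtual / (1024 * 1024));
      return 0;
  }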


>
> You can check that with VM Validator (free) or Memory Validator's
> virtual tab which graphically shows you the memory space. You can find
> these tools at http://www.softwareverify.com
>
> Stephen

--
677265676F727940346E6575726F6E732E636F6D
From: Reid Rivenburgh on
Grzegorz Wróbel </dev/null(a)localhost.localdomain> writes:

> Stephen Kellett wrote:
>
>> In message <dtnpjf$p4v$1(a)atlantis.news.tpi.pl>, Grzegorz Wróbel
>> </dev/null(a)localhost.localdomain> writes
>>
>>> You kidding. What kind of efficiency would be that? You should
>>> have no problem with allocating 2gb memory,
>> You are mistaken. Try this code.
>>
>>   char *p = new char [1024u * 1024 * 1024 * 2];
>>
>> Guaranteed to fail and return NULL on Windows NT/W2K/XP
>> Workstation.

> This example fails because you try to reserve more than your
> maximal address space (though that limit can be raised, as others
> mentioned). What I meant was that you should be able to allocate
> about 2GB of memory using your 250MB chunks without much trouble.
> From what you described it seems you fail to allocate 4*250MB on
> a 2GB RAM machine. Allocating 250MB shouldn't consume 500MB under
> any circumstances!

Just for the record, in case it's not clear, I'm the original poster
who couldn't allocate 4 * 250 MB on a 2 GB machine (roughly speaking).
And yes, that is indeed what seems to be happening: I'm unable to
allocate more than 4-5 such chunks. It does look to me like the allocation of 250
MB temporarily requires 500 MB. Stephen suggested in private email
that it was a problem with the implementation of "new" trying to be a
little too smart and predictive. He thought using HeapAlloc would
avoid that, but it seems like it happens there, too.
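
To see whether each 250 MB allocation really charges double, I can
watch the process's pagefile usage around the allocation, roughly
like this (a sketch; link with psapi.lib):

  #include <windows.h>
  #include <psapi.h>
  #include <string.h>
  #include <stdio.h>

  // Prints how many MB of pagefile-backed storage the process
  // currently has committed.
  void print_commit(const char* tag)
  {
      PROCESS_MEMORY_COUNTERS pmc = { sizeof(pmc) };
      GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc));
      printf("%s: %lu MB committed\n", tag,
             (unsigned long)(pmc.PagefileUsage / (1024 * 1024)));
  }

  int main()
  {
      print_commit("before");
      char* p = new char[250 * 1024 * 1024];
      memset(p, 0, 250 * 1024 * 1024);  // touch so it's really committed
      print_commit("after");
      delete[] p;
      return 0;
  }

If "after" minus "before" comes out near 500 MB rather than 250 MB,
that would confirm what Memory Validator is showing me.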

Regarding fragmentation, maybe a better example illustrating Stephen's
point would be two 900 MB chunks on a 2 GB machine. The first will
likely work (assuming it doesn't try to get twice that!), but the
second will likely fail. At least, that's what I've seen. My address
space usually has some junk scattered through it that prevents the OS
from finding that much contiguous space.
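
Memory Validator shows that graphically, but the address space can
also be walked by hand with VirtualQuery to find the largest free
region, something like this sketch:

  #include <windows.h>
  #include <stdio.h>

  int main()
  {
      SIZE_T largest = 0;
      MEMORY_BASIC_INFORMATION mbi;
      char* addr = 0;
      // Walk the user address space region by region.
      while (VirtualQuery(addr, &mbi, sizeof(mbi)) == sizeof(mbi))
      {
          if (mbi.State == MEM_FREE && mbi.RegionSize > largest)
              largest = mbi.RegionSize;
          addr = (char*)mbi.BaseAddress + mbi.RegionSize;
      }
      printf("largest free region: %lu MB\n",
             (unsigned long)(largest / (1024 * 1024)));
      return 0;
  }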

Thanks,
Reid
From: Scott McPhillips [MVP] on
Reid Rivenburgh wrote:
> Hm, interesting, I wasn't looking at it like that. It looks to me in
> Memory Validator, however, as if HeapCreate tries to grab the chunk of
> memory requested and call it "commit", even before trying to do
> anything with it. I figured the heap space would be unusable by other
> apps. Maybe I'm misunderstanding you; are you talking about using C++
> "new" to grab 1.5 GB and managing that space in the app, NOT using
> HeapCreate? That I can believe, though trying to modify the library
> I'm using to work that way would be a big hassle.

Other apps are in other memory address spaces, so your address space
allocation does not reduce their address space allocation. For my
suggestion, I don't think it matters whether you use "new" or HeapCreate
or VirtualAlloc.

Qualification to my earlier message: when you grab the huge chunk of
memory the OS may feel the need to zero it out, which _would_
temporarily impact other processes' committed storage.
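
To illustrate the distinction (a sketch, not tied to Reid's library):
reserving address space is nearly free, and only committing charges
the pagefile:

  #include <windows.h>

  int main()
  {
      // Reserve 1.5GB of address space: no pagefile charge yet.
      char* base = (char*)VirtualAlloc(NULL, 1536u * 1024 * 1024,
                                       MEM_RESERVE, PAGE_NOACCESS);
      if (!base) return 1;

      // Commit (and zero) only the pages actually needed, as needed.
      VirtualAlloc(base, 250 * 1024 * 1024, MEM_COMMIT, PAGE_READWRITE);

      VirtualFree(base, 0, MEM_RELEASE);
      return 0;
  }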

--
Scott McPhillips [VC++ MVP]

From: Alf P. Steinbach on
* Reid Rivenburgh:
> "Alf P. Steinbach" <alfps(a)start.no> writes:
>
>> * rivenburgh(a)gmail.com:
>>> No, the processing involved doesn't allocate and deallocate small
>>> chunks. It pretty much allocates the one big chunk and reads the data
>>> from the file, processing the input and filling the memory. The
>>> library just wants a file name and handles all of the memory management
>>> itself, so it doesn't really lend itself to those kinds of
>>> modifications....
>> I don't understand what the problem is. If you can replace 'new' with
>> HeapAlloc, and there is essentially one big chunk of memory that's the
>> problem, why can't you replace 'new' with MapViewOfFile? They both
>> yield memory pointers, and the use of those pointers are the same, no
>> difference.
>
> Unless I'm confused about something, I think the difference is that
> what I'm storing in memory ISN'T what is stored in the file on disk.
> The data on disk is read in, processed, converted, etc. and then
> stored in memory as something else, it just happens that they're about
> the same size.

Well, create a like-sized file and use that as backing for MapViewOfFile.

Technically this shouldn't be much different from using ordinary virtual
memory and the system's page file, with the same problems, because
essentially you're just supplying your own page file.

However, I suspect it will work.
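
Roughly like this (a sketch with error handling omitted;
"backing.bin" is just a placeholder name):

  #include <windows.h>

  int main()
  {
      const DWORD size = 250 * 1024 * 1024;

      // Our own backing file, in place of the system page file.
      HANDLE file = CreateFileA("backing.bin",
                                GENERIC_READ | GENERIC_WRITE, 0, NULL,
                                CREATE_ALWAYS, FILE_ATTRIBUTE_TEMPORARY,
                                NULL);

      // The mapping object extends the file to 'size' bytes.
      HANDLE mapping = CreateFileMapping(file, NULL, PAGE_READWRITE,
                                         0, size, NULL);

      // The view pointer is used exactly like memory from 'new'.
      char* p = (char*)MapViewOfFile(mapping, FILE_MAP_WRITE, 0, 0, size);

      // ... read the file on disk, process, and store the result in p ...

      UnmapViewOfFile(p);
      CloseHandle(mapping);
      CloseHandle(file);
      return 0;
  }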


> Just changing the library to use HeapAlloc instead of "new" to
> allocate its memory is a pretty minor change, so I don't mind trying
> that if it helps.
>
> Does that make sense...?

If it works, yes (but it didn't, did it?).


--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?