From: dpb on
helvio wrote:
....

> I also have the following related question: can the ALLOCATION /
> DEALLOCATION statements slow down the program if they are called
> multiple times, as compared with a single static declaration of "U"?
> e.g. by introducing a loop in my example above:
>
> do i = 1, M
>    call using_UV ! U is allocated here
>    call kill_U   ! U is deallocated here
> end do
....

The answer is, as others have said, "of course"--you can't do something
that you weren't doing otherwise and expect it not to cost at least
something. I'd reiterate Richard's comment: test it on your code and
see, if you're curious enough to care for academic reasons. If you're
actually optimizing, profile first...

My only reason for posting--since none of that is any different from
what has already been posted--is to ask "why are you doing this?" in
regards to the code snippet above.

Unless you have something else to do w/ the memory you're releasing,
and you're immediately going to reclaim it as the above code does, there's
absolutely nothing to be gained by deallocating anything until the loop
completes. In that case, whatever time the de- and re-allocation takes,
however small, is simply wasted overhead.
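For concreteness, hoisting the allocation out of the loop looks like this
minimal sketch (the names u, n, and m, and the assignment standing in for
using_UV, are placeholders, not the OP's actual code):

```fortran
program reuse_loop
  implicit none
  integer, parameter :: n = 1000, m = 5
  real, allocatable :: u(:)
  integer :: i

  allocate(u(n))        ! one allocation, before the loop
  do i = 1, m
     u = real(i)        ! stand-in for "call using_UV"; storage is reused
  end do
  if (size(u) /= n) error stop 'size changed unexpectedly'
  deallocate(u)         ! one deallocation, after the loop
end program reuse_loop
```

The allocate/deallocate pair runs once instead of m times, and the same
storage is reused on every iteration.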

If there's some other memory hog besides this in the loop (not shown),
then the answer might be "maybe", but as somebody else noted (I think in
this thread; I've only been browsing, not reading) a modern OS will swap
out the section that's not used anyway, behind your back, if it determines
it needs something else and that page gets musty.

The one thing I see that might be a real detriment here is that it is
at least possible you could end up fragmenting memory badly by doing
this, and actually cause a previously working, robust program to become
not so. I don't know that it is particularly likely w/ the same memory
sizes being released/reclaimed and nothing else going on in the process
in between, but add in the other code previously mentioned, plus the fact
there may be other applications in the background, etc., etc., and the
odds shorten...

All in all, unless I had a very clear and specific reason I'd certainly
not code that way in anything that was remotely like the actual snippet.
Granted, that may not be what the actual code really resembles; see
above... :)

--
From: glen herrmannsfeldt on
dpb <none(a)non.net> wrote:
> helvio wrote:

>> I also have the following related question: can the ALLOCATION /
>> DEALLOCATION statements slow down the program if they are called
>> multiple times, as compared with a single static declaration of "U"?
>> e.g. by introducing a loop in my example above:
(snip)

> The answer is, as others have said, "of course"--you can't do something
> that you weren't doing otherwise and expect it not to cost at least
> something. I'd reiterate Richard's comment: test it on your code and
> see, if you're curious enough to care for academic reasons. If you're
> actually optimizing, profile first...

> My only reason for posting--since none of that is any different from
> what has already been posted--is to ask "why are you doing this?" in
> regards to the code snippet above.

> Unless you have something else to do w/ the memory you're releasing,
> and you're immediately going to reclaim it as the above code does, there's
> absolutely nothing to be gained by deallocating anything until the loop
> completes. In that case, whatever time the de- and re-allocation takes,
> however small, is simply wasted overhead.

Well, sometimes you have a loop that may be processing different-sized
arrays. Do you then test the previous size, and deallocate and
reallocate only when the new size differs?
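In case it's useful, one common shape for that test is sketched below;
the module, subroutine, and variable names are invented for illustration:

```fortran
module resize_mod
  implicit none
contains
  ! Reallocate u to length n only when it isn't already that length.
  subroutine ensure_size(u, n)
    real, allocatable, intent(inout) :: u(:)
    integer, intent(in) :: n
    if (allocated(u)) then
       if (size(u) /= n) then
          deallocate(u)
          allocate(u(n))
       end if
       ! same size: keep the existing storage, no allocator traffic
    else
       allocate(u(n))
    end if
  end subroutine ensure_size
end module resize_mod
```

When consecutive iterations happen to use the same size, this skips the
deallocate/allocate pair entirely.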

> If there's some other memory hog besides this in the loop (not shown),
> then the answer might be "maybe", but as somebody else noted (I think in
> this thread; I've only been browsing, not reading) a modern OS will swap
> out the section that's not used anyway, behind your back, if it determines
> it needs something else and that page gets musty.

Actually, I posted that in terms of the static memory case, but
yes it applies in the allocatable case, too.

> The one thing I see that might be a real detriment here is that it is
> at least possible you could end up fragmenting memory badly by doing
> this, and actually cause a previously working, robust program to become
> not so. I don't know that it is particularly likely w/ the same memory
> sizes being released/reclaimed and nothing else going on in the process
> in between, but add in the other code previously mentioned, plus the fact
> there may be other applications in the background, etc., etc., and the
> odds shorten...

If there were other allocations in the same loop, it could easily
fragment all the rest of memory. One of the best ways to fragment
is to allocate/copy/deallocate ever-increasing sizes of more than
one array inside a loop: each new allocation will be too big for the
hole left by the previous one.
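One partial mitigation for the growing-array pattern, not mentioned
above, is the Fortran 2003 MOVE_ALLOC intrinsic: it hands the new
storage over to the original array without a second allocate-and-copy,
though the old and new arrays are still briefly allocated at the same
time, so it doesn't cure fragmentation by itself. A sketch, with
invented names:

```fortran
module grow_mod
  implicit none
contains
  ! Grow u to length newn, preserving its existing contents.
  subroutine grow(u, newn)
    real, allocatable, intent(inout) :: u(:)
    integer, intent(in) :: newn
    real, allocatable :: tmp(:)
    allocate(tmp(newn))
    tmp = 0.0
    if (allocated(u)) tmp(1:size(u)) = u   ! one copy of the old data
    call move_alloc(tmp, u)   ! u takes over tmp's storage; old u is freed
  end subroutine grow
end module grow_mod
```

Compared with deallocate-then-reallocate plus an explicit temporary,
this saves one allocation and one copy per growth step.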

> All in all, unless I had a very clear and specific reason I'd certainly
> not code that way in anything that was remotely like the actual snippet.
> Granted, that may not be what the actual code really resembles; see
> above... :)

It seems that it was supposed to be a test, not the actual code.


-- glen
From: helvio on
On Jun 3, 5:02 am, glen herrmannsfeldt <g...(a)ugcs.caltech.edu> wrote:
>
> It seems that it was supposed to be a test, not the actual code.
>
Yes. I want to know the pros and cons of multiple dynamic allocations
when large volumes of memory are used. My code doesn't actually do
that. My code is big and uses reasonably large arrays that must be
kept during the whole run, plus a number of other large arrays that are
used only temporarily and/or conditionally, and whose sizes vary. I
am still far from exceeding physical memory, but my code will grow
with time and there is a good chance that one day it will require
a significant portion of the available memory, and perhaps even
exceed it (this has happened to other people in my field of
research). If/when that happens, I wouldn't like to have to recode
it.

I've been using dynamic allocation for the large temporary arrays of
my code. They are not used simultaneously, so I save a lot of memory
with respect to static allocation. What I wanted to know is the pros
and cons of *not* using static allocation when I can.

From the comments of all of you, I think I can conclude the following:
as long as I am using a small portion of the available memory, it is
wise to use static allocation, because repeated allocation/
deallocation might fragment the memory badly (especially if I
iteratively allocate ever-increasing arrays); and when the total memory
required by static allocation becomes significant with respect to the
total available memory, then allocation/deallocation might be a good
option, or the only one (but then again, fragmentation might also be
an issue).

I think that every time I create a module for my code, and after I
debug it, I will create two copies of it. Something like
'mod_modname_static.f90' and 'mod_modname_alloc.f90'. I will use the
static-allocation version by default, and the dynamic-allocation
version only if I see that my temporary arrays are too big.

Cheers,
--helvio
From: glen herrmannsfeldt on
helvio <helvio.vairinhos(a)googlemail.com> wrote:
(snip)

> I think that everytime I create a module for my code, and after I
> defrag it, I will create two copies of it. Something like
> 'mod_modname_static.f90' and 'mod_modname_alloc.f90'. I will use the
> static allocation version by default, and the dynamic allocation
> version only if I see that my temporary arrays are too big.

Well, you could use the C preprocessor, which is supported by
many Fortran compilers, to select between the appropriate declarations
based on compiler command-line options.

#ifdef ALLOC
real, allocatable:: x(:,:)
#else
real x(100,100)
#endif

Then, at least for compilers based on gcc, the -DALLOC command-line
option will select the allocatable version (with gfortran, either name
the file with a .F90 suffix or add -cpp so that the preprocessor
actually runs).

-- glen
From: Ian Harvey on
On 3/06/2010 7:16 AM, glen herrmannsfeldt wrote:
> helvio<helvio.vairinhos(a)googlemail.com> wrote:
>> On Jun 2, 4:35 pm, nos...(a)see.signature (Richard Maine) wrote:
>>> helvio<helvio.vairin...(a)googlemail.com> wrote:
>>>> In sum, I think my doubts reduce to the question of whether the
>>>> efficiency of accessing the physical memory depends on the size of the
>>>> allocated memory, or if it is independent of it.
>
>>> It should be independent of it, or anyway close enough that
>>> you won't be able to measure the difference.
>
> In some theoretical calculations log(n) is used, and as an
> approximation that probably isn't so bad.

Perhaps I misunderstand, but I don't think the time for access to
physical memory is order log(n), where n is the total allocated memory.
Perhaps it is if n is the size of the working set (so the time takes
into account things like cache and swapping?), but arrays that are
allocated and not accessed aren't what I'd consider part of the working
set. Or are you referring to the time needed to allocate the memory in
the first place?