From: Ian Harvey on
With 4.6 I got warnings about integer overflow: the unsuffixed constant
3000000000 doesn't fit in a default integer, and SIZE returns a default
integer unless given a KIND argument. See the changes marked !#.

On 06/04/2010 09:21 PM, glen herrmannsfeldt wrote:
> implicit none
> integer(8) i,j
> integer(1), allocatable:: x(:)
> i=3000000000_8 !#
> j=i
> allocate(x(i))
> print *,size(x,KIND=8) !#
> x(j)=13_1 !#
> print *,j,x(j)
> print *,huge(i)
> end
>
From: Louis Krupp on
On 6/3/2010 11:02 PM, glen herrmannsfeldt wrote:
> Ian Harvey<ian_harvey(a)bigpond.com> wrote:
> (snip, I wrote)
>
>>> In some theoretical calculations log(n) is used, and as an
>>> approximation that probably isn't so bad.
>
>> Perhaps I misunderstand, but I don't think the time for access to
>> physical memory is order log(n), where n is the total allocated memory.
>> Perhaps it is if n is the size of the working set (so the time takes
>> into account things like cache and swapping?), but arrays that are
>> allocated and not accessed aren't what I'd consider part of the working
>> set. Or are you referring to the time needed to allocate the memory in
>> the first place?
>
<snip>
>
> OK, now for memory access. The address decoders on semiconductor
> RAMs require log(N) levels of logic to address N bits. As memory
> arrays get bigger, the wires connecting them together get longer,
> requiring longer delays.
<snip>

Sounds like we're talking about log(log(n)), where n is the maximum
memory address: an address covering n locations has about log(n) bits,
and decoding log(n) bits with 2-input gates takes on the order of
log(log(n)) gate levels. FWIW.
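
To put numbers on that, here is a tiny sketch (the memory size is made
up, and it assumes a plain tree of 2-input gates for the decode):

program decode_depth
  implicit none
  integer(8) :: n
  integer :: bits, depth
  n = 2_8**34                      ! say, 2**34 byte-addressable locations
  bits = ceil_log2(n)              ! address bits: ceiling(log2(n))
  depth = ceil_log2(int(bits, 8))  ! depth of a 2-input AND tree over them
  print *, 'locations:', n, ' address bits:', bits, ' gate levels:', depth
contains
  integer function ceil_log2(m)
    integer(8), intent(in) :: m
    integer(8) :: p
    ceil_log2 = 0
    p = 1
    do while (p < m)
       p = p*2
       ceil_log2 = ceil_log2 + 1
    end do
  end function ceil_log2
end program decode_depth

For 2**34 locations that's 34 address bits and only 6 gate levels, which
is why the log(log(n)) term is negligible in practice.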

Louis
From: Louis Krupp on
On 6/3/2010 6:11 AM, helvio wrote:
<snip>
> I've been using dynamic allocation for the large temporary arrays of
> my code. They are not used simultaneously, so I save a lot of memory
> with respect to static allocation. What I wanted to know is the pros
> and cons of *not* using static allocation when I can.
>
> From the comments of all of you, I think I can conclude the following:
> as long as I am using a small portion of the available memory, it is
> wise to use static allocation, because multiple allocation/
> deallocation might fragment the memory badly (unless I iteratively
> allocate ever increasing arrays); and when the total required memory
> by static allocation gets significant with respect to the total
> available memory, then allocation/deallocation might be a good/the
> only option (but then again, fragmentation might also be an issue).

I think you have part of that backwards. Iterative allocation and
deallocation of increasingly large arrays is *more* likely to result in
fragmentation. Allocation from fragmented memory is likely to take longer.
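
To illustrate, here's a minimal sketch of the two patterns (the sizes
are made up); the first is the one that tends to fragment the heap:

program grow_pattern
  implicit none
  integer(8) :: n
  real, allocatable :: work(:)

  ! Fragmentation-prone: each pass frees the old block and asks for a
  ! bigger one, which may not fit in the hole that was just freed.
  do n = 1000000_8, 5000000_8, 1000000_8
     if (allocated(work)) deallocate(work)
     allocate(work(n))
     work = 0.0          ! stand-in for actually using work(1:n)
  end do

  ! Friendlier alternative: allocate the largest size once and reuse it.
  deallocate(work)
  allocate(work(5000000_8))
  work = 0.0
end program grow_pattern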

Keep in mind, also, that there are two total memory sizes in play:
physical, the RAM on your box, and virtual, which is determined by
system architecture and possibly the OS. Virtual memory is much larger,
and whatever doesn't fit in physical memory will be paged in and out as
required. My guess is that if you really start using enough virtual
memory to get close to its limit, your system is going to spend most of
its time paging, and at that point it won't matter whether the memory
was allocated dynamically or statically.

If you were using very large automatic arrays and hitting a stack size
limit, dynamic allocation would be an obvious way to go. In your case,
it's not as clear that you're solving a real problem.
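
To make the automatic-versus-allocatable distinction concrete (the
routine names are made up):

subroutine stack_heavy(n)
  implicit none
  integer, intent(in) :: n
  real :: tmp(n)          ! automatic array: usually lives on the stack,
                          ! so a large n can hit the stack size limit
  tmp = 0.0
end subroutine stack_heavy

subroutine heap_based(n)
  implicit none
  integer, intent(in) :: n
  real, allocatable :: tmp(:)
  allocate(tmp(n))        ! allocatable: comes from the heap, limited by
                          ! address space rather than stack size
  tmp = 0.0
end subroutine heap_based  ! tmp is deallocated automatically on return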

Louis