From: Ludovic Brenta on
Jerry wrote on comp.lang.ada:
> I would have thought that Ada (or GNAT
> specifically) would be smart enough to allocate memory for large
> objects such as my long array in a transparent way so that I don't
> have to worry about it, thus (in the Ada spirit) making it harder to
> screw up. (Like not having to worry about whether arguments to
> subprograms are passed by value or by reference--it just happens.)

So, you would like the Ada run-time to bypass the operating system-
enforced, administrator-approved stack limit? If userspace programs
could do that, what would be the point of having a stack limit in the
first place?

--
Ludovic Brenta.


From: Jeffrey Creem on
Jerry wrote:
> Thanks for the helpful comments.
>

> So here's me being naive: I would have thought that Ada (or GNAT
> specifically) would be smart enough to allocate memory for large
> objects such as my long array in a transparent way so that I don't
> have to worry about it, thus (in the Ada spirit) making it harder to
> screw up. (Like not having to worry about whether arguments to
> subprograms are passed by value or by reference--it just happens.)
>
> But it seems that I will have to allocate memory for large objects
> using pointers (and thus take the memory from the heap). Is that
> right?
>
> In this context, is there any advantage to declaring the large object
> inside a declare block? Would that force the memory to be allocated
> from the heap?
>
> Jerry

If you want the memory to come from the heap, you need to declare the
variables inside of packages instead of inside procedures. You can then
avoid using access types.

declare blocks will not help.

As for wishing that the compiler would automatically switch between heap
and stack, that would probably be a terrible idea and render the
language quite unsuitable for embedded systems.


-- warning, not even compiled early morning code example below

package do_stuff is
   procedure No_Bomb;
end do_stuff;

package body do_stuff is
   type Float_Array_Type is array (Integer range <>) of Long_Float;
   -- 1_048_343 causes segmentation fault, 1_048_342 does not.
   x : Float_Array_Type (1 .. 1_048_343);

   procedure No_Bomb is
   begin
      x (1) := 1.0;
   end No_Bomb;
end do_stuff;


with do_stuff;
procedure stuff is
begin
   do_stuff.No_Bomb;
end stuff;


From: Ludovic Brenta on
Jeffrey Creem wrote on comp.lang.ada:
> Jerry wrote:
> > Thanks for the helpful comments.
>
> > So here's me being naive: I would have thought that Ada (or GNAT
> > specifically) would be smart enough to allocate memory for large
> > objects such as my long array in a transparent way so that I don't
> > have to worry about it, thus (in the Ada spirit) making it harder to
> > screw up. (Like not having to worry about whether arguments to
> > subprograms are passed by value or by reference--it just happens.)
>
> > But it seems that I will have to allocate memory for large objects
> > using pointers (and thus take the memory from the heap). Is that
> > right?
>
> > In this context, is there any advantage to declaring the large object
> > inside a declare block? Would that force the memory to be allocated
> > from the heap?
>
> > Jerry
>
> If you want the memory to come from the heap, you need to declare the
> variables inside of packages instead of inside procedures. You can then
> avoid using access types.
>
> declare blocks will not help.
>
> As for wishing that the compiler would automatically switch between heap
> and stack, that would probably be a terrible idea and render the
> language quite unsuitable for embedded systems.
>
> -- warning, not even compiled early morning code example below
>
> package do_stuff is
>     procedure no_bomb;
> end do_stuff;
>
> package body do_stuff is
>       type Float_Array_Type is array (Integer range <>) of Long_Float;
>       -- 1_048_343 causes segmentation fault, 1_048_342 does not.
>       x : Float_Array_Type(1 .. 1_048_343);
>
>      procedure No_bomb is
>
>      begin
>        x(1) := 1.0;
>      end No_bomb;
> end do_stuff;
>
> with do_stuff;
> procedure stuff is
>
> begin
>     do_stuff.No_Bomb;
> end stuff;

No, the array is not in the heap in this case; it is in the executable
program's data segment. This may increase the size of the binary file.

To ensure that the array is on the heap, it is necessary to use an
access type and an allocator, e.g.:

type Float_Array_Access_Type is access Float_Array_Type;
x : Float_Array_Access_Type := new Float_Array_Type (1 .. 1_048_343);
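One point worth adding: storage obtained this way is not reclaimed until the program exits unless you free it explicitly. A minimal sketch (not from the thread; the type and bound are borrowed from the example above, and the procedure name is made up for illustration):

```ada
with Ada.Unchecked_Deallocation;

procedure Heap_Array_Demo is
   type Float_Array_Type is array (Integer range <>) of Long_Float;
   type Float_Array_Access_Type is access Float_Array_Type;

   --  Instantiate the standard deallocator for this access type.
   procedure Free is new Ada.Unchecked_Deallocation
     (Float_Array_Type, Float_Array_Access_Type);

   X : Float_Array_Access_Type := new Float_Array_Type (1 .. 1_048_343);
begin
   X (1) := 1.0;   --  use the array through the access value
   Free (X);       --  return the storage to the heap; X is set to null
end Heap_Array_Demo;
```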

--
Ludovic Brenta.
From: John B. Matthews on
In article
<ac4bed10-f655-4fa5-8891-2967ba4388a0(a)k6g2000prg.googlegroups.com>,
Jerry <lanceboyle(a)qwest.net> wrote:

> Thanks for the helpful comments.
>
> First,
> ulimit -s unlimited
> does not work on OS X:
> -bash: ulimit: stack size: cannot modify limit: Operation not permitted
> but I understand that it works on Linux. And possibly the
> reason is the difference in the way that Linux and OS X treat stack
> and heap memory. (Don't be confused and think I know what I'm talking
> about but I read that somewhere.)
>
> ulimit allows querying the hard limit of stack space
> ulimit -Hs
> which on OS X reports 65532 = 2^16 -4 kilobytes, about 67 MB. The user
> via ulimit can set the stack up to that size but not higher:
> ulimit -s 65532
> The default soft limit on OS X is 8192 kB, found by
> ulimit -s
>
> So here's me being naive: I would have thought that Ada (or GNAT
> specifically) would be smart enough to allocate memory for large
> objects such as my long array in a transparent way so that I don't
> have to worry about it, thus (in the Ada spirit) making it harder to
> screw up. (Like not having to worry about whether arguments to
> subprograms are passed by value or by reference--it just happens.)
>
> But it seems that I will have to allocate memory for large objects
> using pointers (and thus take the memory from the heap). Is that
> right?

I think so. When I ran into this some years ago, I was pleasantly
surprised at how easy it was to change over to heap allocation for my
largest data structure. Under Mac OS 9, such allocations fragmented the
heap, but Mac OS X behaves more reasonably.

The menace listed below allocates megabyte-sized blocks right up to the
limit of wired memory, as shown by top:

-----
with Ada.Text_IO;

procedure Heap_Test is

   Megabyte : constant Positive := 1024 * 1024;
   type Block is array (0 .. Megabyte - 1) of Character;
   type Block_Ptr is access all Block;

   BPtr : Block_Ptr;
   N    : Natural := 1;

begin
   Ada.Text_IO.Put_Line ("*** Heap test...");
   loop
      BPtr := new Block;
      Ada.Text_IO.Put (N'Img);
      N := N + 1;
   end loop;
end Heap_Test;
-----

This horror raises STORAGE_ERROR at the `ulimit -s` you showed, but only
when compiled with -fstack-check:

-----
with Ada.Text_IO;

procedure Stack_Test is

   Megabyte : constant Positive := 1024 * 1024;
   type Block is array (0 .. Megabyte - 1) of Character;

   procedure Allocate_Stack (N : Positive) is
      Local : Block := (others => Character'Val (0));
   begin
      Ada.Text_IO.Put (N'Img);
      Allocate_Stack (N + 1);
   end Allocate_Stack;

begin
   Ada.Text_IO.Put_Line ("*** Stack test...");
   Allocate_Stack (1);
   Ada.Text_IO.New_Line;
end Stack_Test;
-----
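For completeness, whether the overflow surfaces as STORAGE_ERROR or as a plain segmentation fault depends on how the test is built. Assuming GNAT's gnatmake driver and the file name stack_test.adb (both are assumptions, not stated in the thread), something like:

```shell
# Build with GNAT stack checking so that running past the ulimit
# raises STORAGE_ERROR instead of dying with a segmentation fault.
gnatmake -fstack-check stack_test.adb
./stack_test
```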

For reference, ulimit is a bash built-in, so `man bash` for details:

<http://linux.die.net/man/1/bash>

--
John B. Matthews
trashgod at gmail dot com
<http://sites.google.com/site/drjohnbmatthews>
From: Gautier write-only on
Jerry:

> But it seems that I will have to allocate memory for large objects
> using pointers (and thus take the memory from the heap). Is that
> right?

Seems so.
Funnily enough, I had a similar surprise about 12 years ago.
It was with a reputable Ada 83 compiler (DEC Ada), on the university's
main server, running a very reputable system (OpenVMS).
I had declared A : Matrix (m, n); somewhere, and the m's and n's were
large enough for 500 MB.
And I told myself the system would be smart enough...
Build, run... oh, frozen - and everybody came out of their offices:
"what happened with the mail? what happened with...?"

G.