From: Poster Matt on
Nicolas George wrote:
> Eric Sosman wrote in message <hlrmk2$k4g$1(a)news.eternal-september.org>:
>> Count yourself lucky. A double-free or freeing memory not
>> obtained from malloc() may not always be detected so neatly, and
>> can lead to *much* more mysterious failures than the one you spent
>> a mere twenty minutes on.
>
> To solve those kinds of mysteries, knowing the magic word helps a lot:
> valgrind.

Thanks for the pointer (ho, ho) to Valgrind. It looks good.
From: Poster Matt on
Thanks for the suggestions of things to look at and use. The Boehm
garbage collector looks very interesting, and I'll try it out with some
memory-intensive code and see what happens.

Cheers.
From: Rainer Weikusat on
Poster Matt <postermatt(a)no_spam_for_me.org> writes:
> Eric Sosman wrote:

[free before close]

>> Sometimes a program that does one thing and quits later finds
>> itself embedded in a larger program that runs its code repeatedly,
>> without terminating the process. You've written, say, a program
>> that loads a big image, runs a face-recognizer, and then exits.
>> And then you decide to adapt it to recognize the individual faces
>> in the group photo of your college tiddlywink team: You'll load the
>> image once (saving lots of I/O time), designate specific areas, and
>> run the recognizer on each area. It would be a Good Thing, don't
>> you think, if the recognizer released the gobs and gobs of memory
>> it had claimed for its work on one face before it moved on to
>> the next?

[...]

> Sounds like good advice. I'll follow it, freeing all memory manually
> before termination.

The basic meaning of this statement is "do useless work now in order
to make the computer do useless work forever because it might be useful
in future". My take on this would usually be: don't do the useless
work now, but rather do it 'in future', if and when it is actually
needed. The point of 'free' is to inform the malloc allocator that a
particular chunk of memory isn't needed anymore and can thus be reused
to satisfy some future allocation request (a simplification). If there
won't be any future allocation requests, calling 'free' is a pointless
exercise.
From: Eric Sosman on
On 2/21/2010 4:00 PM, Rainer Weikusat wrote:
> Poster Matt<postermatt(a)no_spam_for_me.org> writes:
>> Eric Sosman wrote:
>
> [free before close]
>
>>> Sometimes a program that does one thing and quits later finds
>>> itself embedded in a larger program that runs its code repeatedly,
>>> without terminating the process. You've written, say, a program
>>> that loads a big image, runs a face-recognizer, and then exits.
>>> And then you decide to adapt it to recognize the individual faces
>>> in the group photo of your college tiddlywink team: You'll load the
>>> image once (saving lots of I/O time), designate specific areas, and
>>> run the recognizer on each area. It would be a Good Thing, don't
>>> you think, if the recognizer released the gobs and gobs of memory
>>> it had claimed for its work on one face before it moved on to
>>> the next?
>
> [...]
>
>> Sounds like good advice. I'll follow it, freeing all memory manually
>> before termination.
>
> The basic meaning of this statement is "do useless work now in order
> to make the computer do useless work forever because it might be useful
> in future". My take on this would usually be: don't do the useless
> work now, but rather do it 'in future', if and when it is actually
> needed. The point of 'free' is to inform the malloc allocator that a
> particular chunk of memory isn't needed anymore and can thus be reused
> to satisfy some future allocation request (a simplification). If there
> won't be any future allocation requests, calling 'free' is a pointless
> exercise.

I think what you've missed, or maybe discounted, is the fact
that code changes. The circumstances under which it operates
tomorrow may be different from those it encounters today. That's
why we try to keep software "soft," keep it malleable so we can
reshape it for changing demands.

One of the possible changes -- and I hope you'll note that I'm
only saying "possible" and not "inevitable" -- is that a piece of
code written as a one-shot operation can wind up "library-ized" to
be embedded in a program that will use it repeatedly. If that seems
a likely development (using the crystal ball I mentioned in the part
you snipped), it makes sense to anticipate the eventuality, and to
write the code in a way that makes re-use (or even parallel re-use,
which is harder) fairly easy to do. If, on the other hand, the
crystal ball says "No way!" then just dropping the memory on the
floor is perfectly all right (as I said in *another* bit you snipped).

--
Eric Sosman
esosman(a)ieee-dot-org.invalid
From: Rainer Weikusat on
Eric Sosman <esosman(a)ieee-dot-org.invalid> writes:
> On 2/21/2010 4:00 PM, Rainer Weikusat wrote:
>>> Eric Sosman wrote:
>>
>> [free before close]
>>
>>>> Sometimes a program that does one thing and quits later finds
>>>> itself embedded in a larger program that runs its code repeatedly,
>>>> without terminating the process. You've written, say, a program
>>>> that loads a big image, runs a face-recognizer, and then exits.
>>>> And then you decide to adapt it to recognize the individual faces
>>>> in the group photo of your college tiddlywink team: You'll load the
>>>> image once (saving lots of I/O time), designate specific areas, and
>>>> run the recognizer on each area. It would be a Good Thing, don't
>>>> you think, if the recognizer released the gobs and gobs of memory
>>>> it had claimed for its work on one face before it moved on to
>>>> the next?

[...]

>> The basic meaning of this statement is "do useless work now in order
>> to make the computer do useless work forever because it might be useful
>> in future". My take on this would usually be: don't do the useless
>> work now, but rather do it 'in future', if and when it is actually
>> needed.

[...]

> I think what you've missed, or maybe discounted, is the fact
> that code changes. The circumstances under which it operates
> tomorrow may be different from those it encounters today. That's
> why we try to keep software "soft," keep it malleable so we can
> reshape it for changing demands.
>
> One of the possible changes -- and I hope you'll note that I'm
> only saying "possible" and not "inevitable" -- is that a piece of
> code written as a one-shot operation can wind up "library-ized" to
> be embedded in a program that will use it repeatedly. If that seems
> a likely development

[...]

> it makes sense to anticipate the eventuality, and to write the code
> in a way that makes re-use

[...]

> fairly easy to do.

Code which is written to solve a particular problem today can be
reused for something different tomorrow without implementing the
support for 'something different' already today, "just in case".
It doesn't buy anyone anything to write code 'today' which is
'today' useless, and actually even detrimental, because it _might_
be of use at some unspecified point in the future. The code necessary
to do Y can be written whenever the need to do Y actually
materializes. For as long as it hasn't, the chance that Y never needs
to be done is greater than zero, and in that case, all that has
happened is that a human has wasted some of his time to implement
something which requires every 'user' of the code (not 'developer') to
also waste some of his time by wasting some of the computing resources
available to him. Work more to achieve less, so to say.