From: Rainer Weikusat on
Rainer Weikusat <rweikusat(a)mssgmbh.com> writes:
> Andrew Poelstra <apoelstra(a)localhost.localdomain> writes:
>> On 2010-03-11, Rainer Weikusat <rweikusat(a)mssgmbh.com> wrote:
>>> I wrote 'written with ad hoc usage of the malloc-interface in mind'
>>> for a reason. And this 'approach' doesn't change when swapping
>>> malloc implementations. An interesting, related question would be: How
>>> to determine which 'other malloc' to use because of what reasons, eg,
>>> who is going to read through all the code and understand its behaviour
>>> fully enough to make an educated judgement in this respect? The guy
>>> who desired to avoid writing the less-than-200-LOC in the first place?
>>> Fat chance ...
>>
>> Writing less than 200 LOC is not only costly up front in terms of
>> extra coding effort, but also requires testing and maintenance,
>> possibly refactoring and documentation, and is likely to be a
>> blight on your otherwise-unrelated-to-memory-management source.
>
> 'Writing less than 200 LOC' is something like half a day's worth of
> work (in extreme cases; most I had to deal with so far were much
> simpler than that).

Something I completely forgot: The only way for code to be 'unrelated
to memory management' is if it doesn't use any memory. Otherwise,
that's a pretty fundamental issue. If it is an issue which doesn't
really matter (and I mean doesn't really matter technically, not
doesn't matter because of a devil-may-care attitude), a lot more work
can be saved by using a language more suitable to the task at hand
than C. It doesn't really make sense to shoulder the burden of
being comparatively close to the machine without actually desiring to
use the features this provides.
From: Chris M. Thomasson on
"Urs Thuermann" <urs(a)isnogud.escape.de> wrote in message
news:ygfzl2fxh5k.fsf(a)janus.isnogud.escape.de...
>I have a typical producer/consumer problem (with varying-sized
> elements) which I currently have solved using a ring buffer in the
> classical way. Because of the varying size, the lack of a safe way to
> know the needed buffer size in advance, and the wish to reduce copying,
> I want to change that to a linked list of items passed from producer
> to consumer.
>
> The producer would allocate memory for an item and append it to the
> list, the consumer would dequeue from the beginning of the list and
> free the memory for that item afterwards.
>
> The average rate will be roughly 40 to 50 allocations/deallocations in
> a strict FIFO order, and there will be 30000 to 60000 items on the list.
>
> My questions are:
>
> 1. Will typical implementations of malloc()/free() in libc handle this
> load well? Or should I implement my own memory management? I
> currently use glibc on Debian but would also like to know about
> other libc implementations. BTW, the hardware is an Intel x86 CPU
> at about 2GHz.
>
> 2. Are malloc() and free() thread-safe (according to POSIX and/or in
> typical libc implementations) or do I have to use a mutex?

Do you have multiple consumers and/or producers?
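
As an aside on question 2: POSIX requires malloc() and free() themselves
to be thread-safe, but the code that links the items into the list still
needs its own synchronization. A minimal sketch of such a queue, with all
names made up for illustration (the while loop around the wait makes it
work whether there is one consumer or several):

#include <pthread.h>
#include <stdlib.h>
#include <string.h>

struct item {
    struct item *next;
    size_t len;
    unsigned char data[];       /* C99 flexible array member for the payload */
};

struct queue {
    struct item *head, *tail;
    pthread_mutex_t lock;       /* protects head and tail */
    pthread_cond_t nonempty;
};

static void queue_init(struct queue *q)
{
    q->head = q->tail = NULL;
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->nonempty, NULL);
}

/* Producer: allocate an item, copy the payload, append at the tail. */
static int enqueue(struct queue *q, const void *buf, size_t len)
{
    struct item *it = malloc(sizeof *it + len);

    if (!it)
        return -1;
    it->next = NULL;
    it->len = len;
    memcpy(it->data, buf, len);

    pthread_mutex_lock(&q->lock);
    if (q->tail)
        q->tail->next = it;
    else
        q->head = it;
    q->tail = it;
    pthread_cond_signal(&q->nonempty);
    pthread_mutex_unlock(&q->lock);
    return 0;
}

/* Consumer: block until an item is available, unlink it from the head.
 * The caller free()s the returned item when done with it. */
static struct item *dequeue(struct queue *q)
{
    struct item *it;

    pthread_mutex_lock(&q->lock);
    while (!q->head)
        pthread_cond_wait(&q->nonempty, &q->lock);
    it = q->head;
    q->head = it->next;
    if (!q->head)
        q->tail = NULL;
    pthread_mutex_unlock(&q->lock);
    return it;
}

Since allocation happens on the producer thread and free() on the consumer
thread, this is exactly the case where the allocator's own thread-safety
matters.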

From: Casper H.S. Dik on
Rainer Weikusat <rweikusat(a)mssgmbh.com> writes:

>'Writing less than 200 LOC' is something like half a day's worth of
>work (in extreme cases; most I had to deal with so far were much
>simpler than that). Since the code is going to be used by the other
>parts of the program, separate testing isn't really necessary. It
>presumably wouldn't hurt to spend something like half an hour on that,
>too. 'Maintenance' means 'making code changes'. I do not quite
>understand which type of code changes might be required here. So far,
>I only ever had to 'maintain' infrastructure code if I got the initial
>design wrong completely. Which usually doesn't happen. If it does,
>that's a one time effort (IIRC, I did do this exactly once in the last
>6.x years).

Reimplementing part of the standard library is typically a bad idea;
e.g., your "200LOC" might not scale when confronted with 1000s of threads.

A long time ago, we didn't have a very good memory allocator in
the kernel. Some other subsystems worked around that by "writing their
own" (typically allocating a lot of memory and keeping their own
freelists).

Then the memory allocator was upgraded and became fast and scalable;
unfortunately, all the subsystems with their own memory allocators
wouldn't scale and needed to be fixed.
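
For readers who haven't seen that pattern, 'allocating a lot of memory and
keeping their own freelists' usually means something like the sketch below
(purely illustrative, not the actual kernel code): grab a big chunk from
the general allocator, carve it into fixed-size objects, and recycle those
objects on a private list. The chunk is never given back, which is part of
why such code stops being a win once the general allocator itself becomes
fast and scalable.

#include <stdlib.h>

#define CHUNK_OBJECTS 1024

struct freelist {
    void *head;        /* next free object, or NULL */
    size_t objsize;    /* >= sizeof(void *) and suitably aligned */
};

/* Carve a freshly malloc()ed chunk into objects and thread them onto the list. */
static int fl_grow(struct freelist *fl)
{
    char *chunk = malloc(fl->objsize * CHUNK_OBJECTS);
    size_t i;

    if (!chunk)
        return -1;
    for (i = 0; i < CHUNK_OBJECTS; i++) {
        *(void **)(chunk + i * fl->objsize) = fl->head;
        fl->head = chunk + i * fl->objsize;
    }
    return 0;
}

static void *fl_alloc(struct freelist *fl)
{
    void *obj;

    if (!fl->head && fl_grow(fl) < 0)
        return NULL;
    obj = fl->head;
    fl->head = *(void **)obj;    /* pop the first free object */
    return obj;
}

static void fl_free(struct freelist *fl, void *obj)
{
    *(void **)obj = fl->head;    /* push it back; the chunk is never returned */
    fl->head = obj;
}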

Casper
--
Expressed in this posting are my opinions. They are in no way related
to opinions held by my employer, Sun Microsystems.
Statements on Sun products included here are not gospel and may
be fiction rather than truth.
From: Rainer Weikusat on
Casper H.S. Dik <Casper.Dik(a)Sun.COM> writes:
> Rainer Weikusat <rweikusat(a)mssgmbh.com> writes:
>>'Writing less than 200 LOC' is something like half a day's worth of
>>work (in extreme cases; most I had to deal with so far were much
>>simpler than that). Since the code is going to be used by the other
>>parts of the program, separate testing isn't really necessary. It
>>presumably wouldn't hurt to spend something like half an hour on that,
>>too. 'Maintenance' means 'making code changes'. I do not quite
>>understand which type of code changes might be required here. So far,
>>I only ever had to 'maintain' infrastructure code if I got the initial
>>design wrong completely. Which usually doesn't happen. If it does,
>>that's a one time effort (IIRC, I did do this exactly once in the last
>>6.x years).
>
> Reimplementing part of the standard library is typically a bad idea;

The 'standard library' is mostly a collection of bad ideas and these
remain bad, no matter how often they are 'reimplemented'.

> e.g., your "200LOC" might not scale when confronted with 1000s of
> threads.

Since they are not 'in the standard library', that's not only not a
problem but actually a feature.

> A long time ago, we didn't have a very good memory allocator in
> the kernel. Some other subsystems worked around that by "writing their
> own" (typically allocating a lot of memory and keeping their own
> freelists).
>
> Then the memory allocator was upgraded and became fast and scalable;
> unfortunately, all the subsystems with their own memory allocators
> wouldn't scale and needed to be fixed.

You didn't really write this with a straight face after I basically
wrote that the malloc-interface was a bad idea and that I would prefer
to use a slab-allocator instead, did you? Since the problems I
presently need to deal with don't require '1000s of threads' I could
so far restrict myself to using the simple 'lock per cache' approach
outlined in Bonwick's original paper. Also, I think you are
inappropriately badmouthing these 'subsystems with their own
allocators'. These were probably fine for the problems they were
intended to solve. When these changed, the strategies to deal with
'the problems at hand' needed to be changed, too. Realistically, they
also provided the boilerplate for the generalized object-caching
allocator which was implemented later by someone who took the time to
actually think about the problem instead of muttering "it's still not
complicated enough ..." and going back to add a new set of workarounds
to his malloc.
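
For concreteness, the 'lock per cache' approach boils down to each object
cache carrying its own mutex, so threads allocating objects of different
types never contend on one global allocator lock. A stripped-down sketch
(illustrative only; a real slab allocator as described in Bonwick's paper
also manages whole slabs, constructors/destructors and cache colouring):

#include <pthread.h>
#include <stdlib.h>

struct obj_cache {
    size_t objsize;         /* size of one object, >= sizeof(void *) */
    void *free_head;        /* intrusive list of free objects */
    pthread_mutex_t lock;   /* protects free_head of this cache only */
};

static void cache_init(struct obj_cache *c, size_t objsize)
{
    c->objsize = objsize;
    c->free_head = NULL;
    pthread_mutex_init(&c->lock, NULL);
}

static void *cache_alloc(struct obj_cache *c)
{
    void *obj;

    pthread_mutex_lock(&c->lock);
    obj = c->free_head;
    if (obj)
        c->free_head = *(void **)obj;
    pthread_mutex_unlock(&c->lock);

    return obj ? obj : malloc(c->objsize);   /* cache empty: fall back to malloc */
}

static void cache_free(struct obj_cache *c, void *obj)
{
    pthread_mutex_lock(&c->lock);
    *(void **)obj = c->free_head;
    c->free_head = obj;
    pthread_mutex_unlock(&c->lock);
}

Usage would be one cache per object type, e.g. cache_init(&msg_cache,
sizeof(struct msg)) once, then cache_alloc(&msg_cache) and
cache_free(&msg_cache, p) instead of malloc() and free() (names
hypothetical).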

The statement about the two approaches to designing a software program
is a great pearl of wisdom ...



From: Ersek, Laszlo on
In article <ygfzl2fxh5k.fsf(a)janus.isnogud.escape.de>, Urs Thuermann <urs(a)isnogud.escape.de> writes:

> 1. Will typical implementations of malloc()/free() in libc handle this
> load well? Or should I implement my own memory management?

If you intend to use valgrind, the following link may be pertinent:

http://valgrind.org/docs/manual/manual-core.html#manual-core.limits

----v----
If your program does its own memory management, rather than using
malloc/new/free/delete, it should still work, but Memcheck's error
checking won't be so effective. If you describe your program's memory
management scheme using "client requests" (see The Client Request
mechanism), Memcheck can do better. Nevertheless, using malloc/new and
free/delete is still the best approach.
----^----

(This is not advice, just a bit of trivia so you can make a more
informed decision either way.)
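
For what those client requests look like in practice: <valgrind/valgrind.h>
provides macros such as VALGRIND_MALLOCLIKE_BLOCK and VALGRIND_FREELIKE_BLOCK
with which a custom allocator can describe its blocks to Memcheck. A rough
sketch, where my_alloc()/my_release() stand in for whatever the program's
own memory manager provides:

#include <stddef.h>
#include <valgrind/valgrind.h>

extern void *my_alloc(size_t len);   /* hypothetical: the program's own allocator */
extern void my_release(void *p);

void *my_alloc_checked(size_t len)
{
    void *p = my_alloc(len);

    if (p)
        /* redzone size 0, block not zero-initialized */
        VALGRIND_MALLOCLIKE_BLOCK(p, len, 0, 0);
    return p;
}

void my_free_checked(void *p)
{
    VALGRIND_FREELIKE_BLOCK(p, 0);
    my_release(p);
}

When the program is not running under valgrind, the client-request macros
are effectively no-ops, so the instrumentation costs next to nothing in
normal runs.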

Cheers,
lacos