From: EW on
On Aug 11, 2:52 pm, Paul Rubin <no.em...(a)nospam.invalid> wrote:
> EW <ericwoodwo...(a)gmail.com> writes:
> > Well I cared because I thought garbage collection would only happen
> > when the script ended - the entire script.  Since I plan on running
> > this as a service it'll run for months at a time without ending.  So I
> > thought I was going to have heaps of Queues hanging out in memory,
> > unreferenced and unloved.  It seemed like bad practice so I wanted to
> > get out ahead of it.
>
> Even if GC worked that way it wouldn't matter, if you use just one queue
> per type of task.  That number should be a small constant so the memory
> consumption is small.

Well I can't really explain it but 1 Queue per task for what I'm
designing just doesn't feel right to me. It feels like it will lack
future flexibility. I like having 1 Queue per producer thread object
and the person instantiating that object can do whatever he wants with
that Queue. I can't prove I'll need that level of flexibility but I
don't see why it's bad to have. It's still a small number of Queues,
it's just a small, variable number of Queues.
From: Paul Rubin on
EW <ericwoodworth(a)gmail.com> writes:
> Well I can't really explain it but 1 Queue per task for what I'm
> designing just doesn't feel right to me. It feels like it will lack
> future flexibility.

That makes no sense at all. Multiple readers and writers per queue are
the way Python queues are designed to work. The normal way to spray a
bunch of concurrent tasks out to worker threads is just to have a bunch
of workers all listening on one queue. It's the same at the producer end.
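
For illustration, here is a minimal sketch of that pattern using the
standard library queue and threading modules (the worker count, the
dummy jobs, and the sentinel-based shutdown are arbitrary choices, not
anything from EW's actual design):

    import queue
    import threading

    task_queue = queue.Queue()

    def worker():
        # Every worker blocks on the same shared queue.
        while True:
            item = task_queue.get()
            if item is None:            # sentinel: this worker should exit
                task_queue.task_done()
                break
            # ... process the item here ...
            task_queue.task_done()

    # A small, fixed pool of consumers, all reading one queue.
    workers = [threading.Thread(target=worker) for _ in range(4)]
    for t in workers:
        t.start()

    # Any number of producers can put work onto the same queue.
    for job in range(20):
        task_queue.put(job)

    task_queue.join()                   # wait until every item is processed

    # Shut the pool down with one sentinel per worker.
    for _ in workers:
        task_queue.put(None)
    for t in workers:
        t.join()

The point is that the number of queues tracks the number of task types,
not the number of threads, so it stays a small constant no matter how
many producers or consumers you attach to it.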