From: Joseph Kinsella on 30 Jun 2010 09:57
First, let me give a little background. I have two applications, each of which
connects to two different databases (on the same MySQL server). These
applications run on 40+ computers. The default pool size (5) is too large,
since it can add up to 800+ connections against my single MySQL server
(remember, there are two (2) databases). However, when I decrease the pool
size to keep the connection count down, I run into the infamous
"ConnectionTimeoutError", which essentially tells me the pool is now too
small. After some debugging, I've discovered that some threads that should be
done with the database are not releasing their connections back into the
pool. I've tried a few different monkey patches that I found on the internet
to combat this problem, but none have worked so far. So I guess my question
is: how can I ensure that a connection gets released back into the pool after
my model is done being used?
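To make the leak concrete, here is a minimal sketch (plain Ruby, not ActiveRecord itself; the `TinyPool` class and its names are made up for illustration) of why a checkout without a guaranteed check-in exhausts a fixed-size pool, and the `ensure`-based pattern that prevents it:

```ruby
require "thread"

# Toy stand-in for a connection pool: a fixed set of "connections"
# held in a thread-safe queue.
class TinyPool
  def initialize(size)
    @queue = Queue.new
    size.times { |i| @queue.push("conn-#{i}") }
  end

  # Blocks when the pool is empty -- the analogue of waiting until
  # ConnectionTimeoutError fires in ActiveRecord.
  def checkout
    @queue.pop
  end

  def checkin(conn)
    @queue.push(conn)
  end

  # The safe pattern: the connection goes back even if the block raises.
  def with_connection
    conn = checkout
    yield conn
  ensure
    checkin(conn) if conn
  end

  def available
    @queue.size
  end
end

pool = TinyPool.new(2)

pool.with_connection { |conn| "pretend query on #{conn}" }
pool.available  # => 2, the connection came back

begin
  pool.with_connection { raise "query blew up" }
rescue RuntimeError
end
pool.available  # => 2, returned even though the block raised
```

A thread that calls `checkout` directly and then dies (or forgets `checkin`) shrinks the pool permanently, which is exactly the symptom described above.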
I am aware of the with_connection() method, but that appears to be used
outside of models. I did find this:
but that looks like I'd have to hand-edit a lot of my models, and since I
have ~50 models with lots of methods inside them, I'm not sure this is the
easiest/best approach.
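For what it's worth, with_connection can be used from inside a model as well; a hedged sketch, assuming a Rails 2.x-era ActiveRecord (the `Report` model, `generate_summary` method, and `archived` column are made-up examples):

```ruby
# Hypothetical model illustrating with_connection inside a class method.
class Report < ActiveRecord::Base
  def self.generate_summary
    # Checks a connection out of the pool for the duration of the block
    # and checks it back in when the block exits, even on error.
    connection_pool.with_connection do
      find(:all, :conditions => { :archived => false })
    end
  end
end
```

The downside is exactly the one raised above: wrapping every method in ~50 models by hand does not scale.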
I also have another monkey patch in place that attempts to clean up the
connection on every call. I'm having trouble finding the URL for it right
now, but it doesn't work either.
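Rather than patching every call, the usual fix (a sketch, assuming Rails 2.x; the filter name `release_db_connections` is made up) is to hand each thread's connections back when its unit of work finishes, using `ActiveRecord::Base.clear_active_connections!`:

```ruby
# In a controller, release after every request:
class ApplicationController < ActionController::Base
  after_filter :release_db_connections

  private

  def release_db_connections
    # Returns any connections this thread has checked out to the pool.
    ActiveRecord::Base.clear_active_connections!
  end
end

# In hand-spawned background threads, do it yourself in an ensure block:
Thread.new do
  begin
    # ... work with models ...
  ensure
    ActiveRecord::Base.clear_active_connections!
  end
end
```

If threads are spawned outside the request cycle (which the symptoms here suggest), the controller filter never runs for them, so the `ensure` variant is the one that matters.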
If you need code posted, I can post some, but since this is an in-house
application, I am limited in what I can show.
Posted via http://www.ruby-forum.com/.