From: David Schwartz on
On May 7, 4:26 am, "Ersek, Laszlo" <la...(a)caesar.elte.hu> wrote:

> How could the Solaris implementation be made to refuse new connections
> if the "rate limiter" is in effect? Simply by setting up a low backlog
> value with the initial listen()? Or by manipulating the backlog
> dynamically with repeated listen() calls during peak loads?

Basically, you open two or three junk descriptors (copies of /dev/
null). If you get an EMFILE, you close the junk descriptors, call
'accept' followed by 'close' until you get EWOULDBLOCK, then open the
junk descriptors again.

Once you get an EMFILE, you never need to get it again. Once you know how
many sockets you're allowed, if you 'accept' a connection such that
the next connection would cause an EMFILE, you can just drop
connections in a tight loop.
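
As a rough illustration, here's a minimal C sketch of that
reserve-descriptor trick, assuming a non-blocking listening socket;
the helper names and the reserve size of three are just illustrative
choices:

  /* Keep a small reserve of junk descriptors (copies of /dev/null)
     so that EMFILE can be recovered from. */
  #include <errno.h>
  #include <fcntl.h>
  #include <sys/socket.h>
  #include <unistd.h>

  #define NRESERVE 3
  static int reserve_fd[NRESERVE];

  static void open_reserve(void)
  {
      for (int i = 0; i < NRESERVE; i++)
          reserve_fd[i] = open("/dev/null", O_RDONLY);
  }

  /* Call this when accept() fails with EMFILE or ENFILE. */
  static void handle_emfile(int listen_fd)
  {
      /* Free the reserve so accept() can succeed again. */
      for (int i = 0; i < NRESERVE; i++)
          close(reserve_fd[i]);

      /* Accept and immediately close pending connections until the
         backlog is drained (EWOULDBLOCK/EAGAIN on a non-blocking
         listening socket). */
      for (;;) {
          int fd = accept(listen_fd, NULL, NULL);
          if (fd >= 0) {
              close(fd);
              continue;
          }
          if (errno == EINTR)
              continue;
          break;      /* EWOULDBLOCK/EAGAIN or a real error: stop */
      }

      /* Re-arm the reserve for the next EMFILE. */
      open_reserve();
  }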

Generally, you can fairly easily determine the maximum number of
descriptors you can have. Say it's 16,384. You can then set a limit
just a bit lower than that, say 16,380. (That helps to save a few
descriptors for urgent stuff, such as if an existing connection forces
you to open a file.) If you accept a new connection whose descriptor
number is higher than that, you can send it an error and then close
it, just close it, take steps to load shed, or whatever.
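
A short sketch of that budgeting idea, using getrlimit(RLIMIT_NOFILE)
to find the descriptor ceiling; the margin of four, the fallback value,
and the helper names are illustrative, not part of the original post:

  #include <sys/resource.h>
  #include <unistd.h>

  /* Descriptor budget: the process limit minus a small margin kept
     for urgent work (e.g. a connection that forces us to open a file). */
  static long fd_budget(void)
  {
      struct rlimit rl;
      if (getrlimit(RLIMIT_NOFILE, &rl) != 0 ||
          rl.rlim_cur == RLIM_INFINITY)
          return 1024 - 4;          /* conservative fallback */
      return (long)rl.rlim_cur - 4;
  }

  /* In the accept loop, after a successful accept(): */
  static void check_budget(int fd, long budget)
  {
      if (fd > budget) {
          /* Optionally write a "server busy" error first, then: */
          close(fd);
          return;
      }
      /* ... hand the connection to the normal processing path ... */
  }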

You can set soft and hard limits, if that suits your application. At
the soft limit, you may reduce timeouts, disconnect "less important"
connections, take yourself out of the server pool, or whatever else is
appropriate.

DS
From: Ersek, Laszlo on
On Fri, 7 May 2010, David Schwartz wrote:

> Basically, you open two or three junk descriptors (copies of /dev/
> null). If you get an EMFILE, you close the junk descriptors, call
> 'accept' followed by 'close' until you get EWOULDBLOCK, then open the
> junk descriptors again.

Great, thanks!


> load shed

English never stops amazing me.

Cheers,
lacos
From: Noob on
Chris Friesen wrote:

> For really high-performance servers that need to deal with large
> numbers (tens of thousands) of descriptors, POSIX doesn't really have
> a solution. Various OS-specific options exist (kqueue on BSD, epoll
> on Linux, /dev/poll on Solaris, etc.)

Dan Kegel's article is an interesting read.

http://www.kegel.com/c10k.html
From: phil-news-nospam on
On Sat, 08 May 2010 10:21:41 +0200 Noob <root(a)127.0.0.1> wrote:
| Chris Friesen wrote:
|
|> For really high-performance servers that need to deal with large
|> numbers (tens of thousands) of descriptors, POSIX doesn't really have
|> a solution. Various OS-specific options exist (kqueue on BSD, epoll
|> on Linux, /dev/poll on Solaris, etc.)
|
| Dan Kegel's article is an interesting read.
|
| http://www.kegel.com/c10k.html

excessive emphasis on threads compared to processes

What is really needed is a whole NEW threading concept where individual
threads can have private-to-that-thread resources, like file descriptors
(but done without giving up the ability to choose to share them). Then
you can spread the descriptors and other resources out in ways that allow
them to be managed better.

--
-----------------------------------------------------------------------------
| Phil Howard KA9WGN | http://linuxhomepage.com/ http://ham.org/ |
| (first name) at ipal.net | http://phil.ipal.org/ http://ka9wgn.ham.org/ |
-----------------------------------------------------------------------------
From: David Schwartz on
On May 8, 6:22 am, phil-news-nos...(a)ipal.net wrote:

> excessive emphasis on threads compared to processes

Process-pool designs are not really realistic yet. Nobody's done the
work needed to make them useful.

I keep hoping somebody will, since I think that's a phenomenal design
approach. You would need to reserve a large amount of address space
before you fork off the child processes (64-bit OSes make this easy),
and have a special "shared allocator" to hand out shared memory from
it. You'd need a library that made it easy to register file
descriptors as shared and hand them from process to process. You'd
also need a "work pool" implementation that only accepted references
to shared resources to identify a work item.
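
For the "hand them from process to process" part, the kernel primitive
such a library would presumably wrap already exists: SCM_RIGHTS over a
Unix-domain socket. This isn't the library described above, just a
hedged sketch of the sending side (the receiver mirrors it with
recvmsg()):

  /* Pass an open descriptor to a peer process over a Unix-domain
     socket; 'chan' is one end of socketpair(AF_UNIX, SOCK_STREAM, 0, sv). */
  #include <string.h>
  #include <sys/socket.h>
  #include <sys/uio.h>

  static int send_fd(int chan, int fd_to_pass)
  {
      char byte = 0;
      struct iovec iov = { &byte, 1 };   /* must send at least one byte */
      union {
          struct cmsghdr hdr;
          char buf[CMSG_SPACE(sizeof(int))];
      } ctl;
      struct msghdr msg;

      memset(&ctl, 0, sizeof ctl);
      memset(&msg, 0, sizeof msg);
      msg.msg_iov = &iov;
      msg.msg_iovlen = 1;
      msg.msg_control = ctl.buf;
      msg.msg_controllen = sizeof ctl.buf;

      /* Attach the descriptor as ancillary data. */
      struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
      cm->cmsg_level = SOL_SOCKET;
      cm->cmsg_type = SCM_RIGHTS;
      cm->cmsg_len = CMSG_LEN(sizeof(int));
      memcpy(CMSG_DATA(cm), &fd_to_pass, sizeof(int));

      return sendmsg(chan, &msg, 0) == 1 ? 0 : -1;
  }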

Ideally, a process could register what it was messing with, so that if
it crashed or failed, the system would know what was potentially
corrupt.

> What is really needed is a whole NEW threading concept where individual
> threads can have private-to-that-thread resources, like file descriptors
> (but done without giving up the ability to choose to share them).  Then
> you can spread the descriptors and other resources out in ways that allow
> them to be managed better.

I'm not sure how that would be any better. Currently, if you want a
file descriptor to be accessed by only one thread, simply access it
only from that one thread.

DS