From: Joe Pfeiffer on
karthikbalaguru <karthikbalaguru79(a)gmail.com> writes:

> On Feb 20, 8:08 pm, markhob...(a)hotpop.donottypethisbit.com (Mark
> Hobley) wrote:
>> karthikbalaguru <karthikbalagur...(a)gmail.com> wrote:
>> > While reading about the various designs, interestingly i
>> > came across an info that the design of TCP servers is
>> > mostly such that whenever it accepts a connection,
>> > a new process is invoked to handle it .
>>
>> TCP is a "reliable" connection, whereas UDP is "unreliable". If you understand
>> the difference between these two types of connections, it should be clear why
>> this is so, and you would know which connection type best suits your
>> application.
>>
>
> Agreed, but the query is about the design of the
> TCP server and the UDP server. In TCP server
> whenever a new connection arrives, it accepts the
> connection and invokes a new process to handle
> the new connection request. The main point here
> is that 'a new process is created to handle every
> new connection that arrives at the server' .
> In the case of UDP server, it seems that most
> of the server design is such that there is only
> one process to handle various clients.
> Will the TCP server get overloaded if it creates
> a new process for every new connection ? How is
> it being managed ?

Tim Watts did an excellent job two posts up-thread describing three
different architectures for TCP servers. To summarize the part that
relates directly to your question: if you've got a really heavy load,
the server can indeed get overloaded. In that case, you need to work
harder and do something like a threaded or multiplexing server.

>>
>> > How is TCP server able to handle large number of very rapid
>> > near-simultaneous connections ?
>>
>> The datagrams carry identification numbers that relate them to the
>> controlling processes, so they can be easily managed.
>>
>
> The point here is, consider a scenario in which multiple
> connection requests arrive while the TCP server is still
> busy creating a new process for the earlier connection
> request. How does TCP handle those multiple connection
> requests in that scenario ?

That's what the backlog parameter on the listen() call is for. If the
number of pending requests is less than or equal to that number, they
get queued. When the number of pending requests exceeds it, requests
start getting refused.
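
A minimal sketch of the relevant calls, with error handling
trimmed (the port number and the backlog value of 16 are only
examples):

#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(7000);        /* example port */
    bind(srv, (struct sockaddr *)&addr, sizeof addr);

    /* The second argument is the backlog: connections that have
       completed the handshake wait in this queue until accept()
       picks them up; once it is full, further requests are
       refused or dropped. */
    listen(srv, 16);

    for (;;) {
        int client = accept(srv, NULL, NULL);  /* dequeue one pending connection */
        /* ... hand 'client' to a process/thread, or serve it here ... */
        close(client);
    }
}
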
--
As we enjoy great advantages from the inventions of others, we should
be glad of an opportunity to serve others by any invention of ours;
and this we should do freely and generously. (Benjamin Franklin)
From: Paul Keinanen on
On Sat, 20 Feb 2010 08:15:13 -0800 (PST), karthikbalaguru
<karthikbalaguru79(a)gmail.com> wrote:

>
>Agreed, but the query is about the design of the
>TCP server and the UDP server. In TCP server
>whenever a new connection arrives, it accepts the
>connection and invokes a new process to handle
>the new connection request. The main point here
>is that 'a new process is created to handle every
>new connection that arrives at the server' .
>In the case of UDP server, it seems that most
>of the server design is such that there is only
>one process to handle various clients.
>Will the TCP server get overloaded if it creates
>a new process for every new connection ? How is
>it being managed ?

As long as you have a simple transaction system, one incoming request,
one outgoing response, why on earth would any sensible person create a
TCP/IP connection for this simple transaction ?
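
For a transaction like that, the whole UDP server can be a single
loop on one socket; a rough sketch (the port and buffer size are
arbitrary):

#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(7000);            /* example port */
    bind(s, (struct sockaddr *)&addr, sizeof addr);

    for (;;) {
        char buf[512];
        struct sockaddr_in peer;
        socklen_t plen = sizeof peer;

        /* one request in ... */
        ssize_t n = recvfrom(s, buf, sizeof buf, 0,
                             (struct sockaddr *)&peer, &plen);
        if (n < 0)
            continue;

        /* ... one response out, straight back to the sender;
           no connection state, no per-client process */
        sendto(s, buf, (size_t)n, 0, (struct sockaddr *)&peer, plen);
    }
}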

From: karthikbalaguru on
On Feb 21, 12:02 am, Paul Keinanen <keina...(a)sci.fi> wrote:
> On Sat, 20 Feb 2010 08:15:13 -0800 (PST), karthikbalaguru
>
> <karthikbalagur...(a)gmail.com> wrote:
>
> >Agreed, but the query is about the design of the
> >TCP server and the UDP server. In TCP server
> >whenever a new connection arrives, it accepts the
> >connection and invokes a new process to handle
> >the new connection request. The main point here
> >is that 'a new process is created to handle every
> >new connection that arrives at the server' .
> >In the case of UDP server, it seems that most
> >of the server design is such that there is only
> >one process to handle various clients.
> >Will the TCP server get overloaded if it creates
> >a new process for every new connection ? How is
> >it being managed ?
>
> As long as you have a simple transaction system, one incoming request,
> one outgoing response, why on earth would any sensible person create a
> TCP/IP connection for this simple transaction ?

Consider a scenario in which multiple high-speed
TCP connection requests arrive within a very
short time frame. In that scenario, the TCP server
would get overloaded if a separate thread were
created for every new connection that arrives at
the server.

Karthik Balaguru
From: David W. Hodgins on
On Sat, 20 Feb 2010 14:02:20 -0500, Paul Keinanen <keinanen(a)sci.fi> wrote:

> As long as you have a simple transaction system, one incoming request,
> one outgoing response, why on earth would any sensible person create a
> TCP/IP connection for this simple transaction ?

In many cases, the outgoing response will not fit in one packet.
TCP takes care of reassembling out-of-order packets received
by the client.

Regards, Dave Hodgins

--
Change nomail.afraid.org to ody.ca to reply by email.
(nomail.afraid.org has been set up specifically for
use in usenet. Feel free to use it yourself.)
From: karthikbalaguru on
On Feb 20, 8:03 pm, Tim Watts <t...(a)dionic.net> wrote:
> karthikbalaguru <karthikbalagur...(a)gmail.com>
>   wibbled on Saturday 20 February 2010 13:10
>
> > While reading about the various designs, interestingly i
> > came across an info that the design of TCP servers is
> > mostly such that whenever it accepts a connection,
> > a new process is invoked to handle it .
>
> Not generally true these days - used to be the method of choice, see
> below...
>
> > But, it seems that in the case of UDP servers design,
> > there is only a single process that handles all client
> > requests. Why such a difference in design of TCP and
> > UDP servers ? How is TCP server able to handle
> > large number of very rapid near-simultaneous connections ?
> > Any ideas ?
>
> First I recommend signing up to O'Reilly's Safari books online service - or
> buy some actual books. There are some excellent O'Reilly books specifically
> on TCP/IP.
>
> In the meantime, speaking generally (without embedded systems specifically
> in mind):
>
> TCP = reliable stream connection oriented protocol. No worrying about
> sequences, out of order packet delivery, missed packets - except in as much
> as your application needs to handle the TCP stack declaring it's given up
> (exception handling). Some overhead in setting up (3 way handshake) and
> closedown.
>
> UDP = datagram protocol and your application needs to worry about all the
> rest above, if it cares. But very light - no setup/closedown.
>
> Regarding TCP service architecture, there are 3 main classes:
>
> 1) Forking server;
> 2) Threaded server;
> 3) Multiplexing server;
>
> 1 - simplest to program, heaviest on system resources. But you can
> potentially (on a real *nix system) simply write a program that talks to
> STDIN/STDOUT and shove it behind (x)inetd and have a network server without
> a single line of network code in your program. Perfectly good method for
> light load servers where latency is not an issue.
>

Interesting to know that a light-load TCP server can be built
from the existing utilities in Linux/Unix in the form of a
forking server!
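
A rough sketch of the kind of program that can sit behind
(x)inetd, assuming inetd is set up to run one copy per
connection (the line-echo behaviour is only an example):

/* Trivial line-echo service meant to be launched by (x)inetd:
   inetd accepts the TCP connection and attaches it to our
   stdin/stdout, so no socket code appears here at all. */
#include <stdio.h>

int main(void)
{
    char line[256];

    while (fgets(line, sizeof line, stdin) != NULL) {
        fputs(line, stdout);   /* one request in, one response out */
        fflush(stdout);        /* push it onto the connection now */
    }
    return 0;
}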

> 2 - Popular - little harder to program, much more efficient, assuming your
> OS can handle thread creation more lightly than process creation.
>

A threaded server seems good, but it might overload the TCP
server very quickly in case of many connection requests arriving
within a very short time frame. Just as you said, if thread
creation has low overhead in the particular OS on which the TCP
server is running, then it would work well.
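
For reference, a thread-per-connection server looks roughly like
this on a POSIX system (the port, the echo behaviour and the
detached-thread choice are all just illustrative):

#include <netinet/in.h>
#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

static void *serve(void *arg)
{
    int client = *(int *)arg;
    free(arg);

    char buf[256];
    ssize_t n;
    while ((n = read(client, buf, sizeof buf)) > 0)
        write(client, buf, (size_t)n);            /* trivial echo service */

    close(client);
    return NULL;
}

int main(void)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(7000);           /* example port */
    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 16);

    for (;;) {
        int *client = malloc(sizeof *client);
        *client = accept(srv, NULL, NULL);

        pthread_t tid;
        pthread_create(&tid, NULL, serve, client); /* one thread per connection */
        pthread_detach(tid);                       /* let it clean up on its own */
    }
}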

I came across preforking tricks too, where a server launches
a number of child processes when it starts. Those children in
turn serve the new connection requests, using some kind of
locking mechanism around the call to accept so that at any
point in time only one child can use it and the others are
blocked until the lock is released. There seem to be some ways
around that locking problem. But I think the idea of creating
one child for every new connection/client seems better than the
preforking trick, although that in turn can overload the TCP
server in case of fast successive/near-simultaneous connection
requests within a short time frame.
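
Roughly, a preforked server that serialises accept with a file
lock could look like this (the child count, lock file path and
port are arbitrary, and a real server would also reap children
and handle errors):

/* Preforking sketch: N children share one listening socket and
   take turns in accept() by serialising on a file lock. */
#include <fcntl.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/file.h>
#include <sys/socket.h>
#include <unistd.h>

#define NCHILDREN 4

static void child_loop(int srv, int lockfd)
{
    for (;;) {
        flock(lockfd, LOCK_EX);            /* only one child may sit in accept() */
        int client = accept(srv, NULL, NULL);
        flock(lockfd, LOCK_UN);

        if (client < 0)
            continue;
        /* ... serve the client ... */
        close(client);
    }
}

int main(void)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(7000);    /* example port */
    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 16);

    int lockfd = open("/tmp/accept.lock", O_CREAT | O_RDWR, 0600);

    for (int i = 0; i < NCHILDREN; i++)
        if (fork() == 0)
            child_loop(srv, lockfd);       /* children never return */

    for (;;)
        pause();                           /* parent just waits */
}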

> 3 - Very efficient. One process maintains a state for all connections, often
> using event methodology to call service subroutines when something
> interesting happens (eg new connection, data arrived, output capable of
> accepting data, connection closed). Sounds horrible, but with an OO
> approach, very easy to get your head around. Now bearing in mind that
> anything in OO can be bastardised to a handle and an array of struct which
> holds the equivalent data that an OO object would, this could be a very
> suitable method for embedded systems where C may be the language of choice
> and there may be no OS or only a very simple one that doesn't map well.
>

Having one process maintain the state of all connections,
with an event methodology that calls service subroutines
whenever something specific happens, sounds interesting.
It appears to be the ideal method for embedded systems where
an OS is absent and C is the main language.
Anyhow, I need to analyze the drawbacks, if any.
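
As an illustration only, a single-process multiplexing loop built
on select() might look something like this (the fixed-size client
table, echo behaviour and port are all assumptions):

#include <netinet/in.h>
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

#define MAXCLIENTS 32

int main(void)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(7000);     /* example port */
    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    listen(srv, 16);

    int clients[MAXCLIENTS];                /* one slot per connection's state */
    for (int i = 0; i < MAXCLIENTS; i++)
        clients[i] = -1;

    for (;;) {
        fd_set rd;
        FD_ZERO(&rd);
        FD_SET(srv, &rd);
        int maxfd = srv;

        for (int i = 0; i < MAXCLIENTS; i++)
            if (clients[i] >= 0) {
                FD_SET(clients[i], &rd);
                if (clients[i] > maxfd)
                    maxfd = clients[i];
            }

        if (select(maxfd + 1, &rd, NULL, NULL, NULL) < 0)
            continue;

        /* "new connection" event */
        if (FD_ISSET(srv, &rd)) {
            int c = accept(srv, NULL, NULL);
            for (int i = 0; i < MAXCLIENTS; i++)
                if (clients[i] < 0) { clients[i] = c; c = -1; break; }
            if (c >= 0)
                close(c);                   /* table full */
        }

        /* "data arrived" / "connection closed" events */
        for (int i = 0; i < MAXCLIENTS; i++) {
            if (clients[i] < 0 || !FD_ISSET(clients[i], &rd))
                continue;
            char buf[256];
            ssize_t n = read(clients[i], buf, sizeof buf);
            if (n <= 0) {                   /* closed or error */
                close(clients[i]);
                clients[i] = -1;
            } else {
                write(clients[i], buf, (size_t)n);  /* echo back */
            }
        }
    }
}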

> Now, doing 3 wouldn't be so far different to doing it all in UDP *except*
> you now have to care about packet delivery unreliability - as you can get a
> variety of stacks for many embedded systems, why not let someone else's hard
> work help you out?
>
> --

Karthik Balaguru