From: K-mart Cashier on
On Jan 12, 1:17 am, David Schwartz <dav...(a)webmaster.com> wrote:
> On Jan 11, 8:27 pm, Arkadiy <vertl...(a)gmail.com> wrote:
>
> > Yes. My protocol is -- send request, get response, if it times out,
> > forget the whole thing, send the next request, get the next response,
> > and so on...
>
> If the protocol permits the other side to not respond, it should also
> require the other side to specifically identify what request each
> response is to. If it doesn't do that, the protocol is broken.
>
> I agree with Rainer Weikusat. It sounds like TCP was a bad choice, as
> it provides you no ability to control the transmit timing.
>
> DS


Okay, I'm having a brain fart. How does UDP provide a way to control
the transmit timing (as opposed to TCP)?
From: Rainer Weikusat on
K-mart Cashier <cdalten(a)gmail.com> writes:
> On Jan 12, 1:17 am, David Schwartz <dav...(a)webmaster.com> wrote:

[...]

>> It sounds like TCP was a bad choice, as it provides you no ability
>> to control the transmit timing.
>
> Okay, I'm having a brain fart. How does UDP provide a way to control
> the transmit timing (as opposed to TCP)?

Whenever you send a UDP datagram, it is, not counting latencies
introduced by local hardware and software processing, transmitted
immediately. Data to be transported by TCP is enqueued for
transmission somewhere inside the stack, transmitted at some later
time, after the remote endpoint has communicated that it can accept
it, and retransmitted until its reception has been acknowledged. If
some router drops a UDP datagram because of a temporary resource
shortage, it is gone. A TCP segment will eventually arrive at its
destination, once sufficient resources have been available all along
the path it travelled. In bad cases, this can take more than a minute
on a network path with an RTT of about 0.1s[*].

[*] This refers to the network path from the German office of
my employer to the hosting facility in New Jersey, where most
of the appliance management computers are located. Until
recently, a "no, I don't understand what's going on here,
either" type of bug in the BSP network driver used to cause
lots of gratuitous packet losses under constant load,
introducing latencies in ssh sessions that made it nearly
impossible to get any work done remotely whenever someone
was 'downloading' something.
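The per-datagram behaviour described above is what makes a simple
send/wait/forget loop natural over UDP: the datagram leaves immediately,
and if no reply arrives within the deadline, nothing is retransmitted
behind your back. A minimal sketch in Python on the loopback interface
(the toy echo server and the 0.5s deadline are assumptions for the demo):

```python
import socket
import threading

# Toy UDP echo server on the loopback interface, so the client below
# has something to talk to.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
addr = server.getsockname()

def echo_once():
    data, peer = server.recvfrom(1024)
    server.sendto(data, peer)

threading.Thread(target=echo_once, daemon=True).start()

# Client: one datagram per request; on timeout, forget the whole thing
# and move on -- nothing is queued or retransmitted by the stack.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(0.5)                 # per-request deadline
client.sendto(b"request-1", addr)      # handed to the stack immediately
try:
    reply, _ = client.recvfrom(1024)
except socket.timeout:
    reply = None                       # lost somewhere: it is simply gone
print(reply)
```

On a timeout the client simply moves on to the next request; no state
about the old one survives anywhere in the stack.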

From: Arkadiy on
On Jan 12, 3:44 am, Rainer Weikusat <rweiku...(a)mssgmbh.com> wrote:

> Since your application protocol is already unreliable in nature and
> you want realtime behaviour, ie no (unlimited) retransmissions of old
> data which didn't make it "across the net" the first time, you should
> IMO not be using TCP for this, but rather UDP.

The server is a generic facility that implements both TCP and UDP-
based protocols. I am just writing a client API for it, so I need to
use both. I absolutely need to be able to specify a timeout. When
an operation times out, I assume that, for a request-response
protocol, a logical thing would be to drop the response. The question
is how to do it correctly with TCP and with UDP.

Regards,
Arkadiy
From: Rainer Weikusat on
Arkadiy <vertleyb(a)gmail.com> writes:
> On Jan 12, 3:44 am, Rainer Weikusat <rweiku...(a)mssgmbh.com> wrote:
>> Since your application protocol is already unreliable in nature and
>> you want realtime behaviour, ie no (unlimited) retransmissions of old
>> data which didn't make it "across the net" the first time, you should
>> IMO not be using TCP for this, but rather UDP.
>
> The server is a generic facility that implements both TCP and UDP-
> based protocols. I am just writing a client API for it, so I need to
> use both. I absolutely need to be able to specify a timeout.

This 'absolutely' means that TCP isn't the right choice for a
transport protocol. Eg TCP has something called head-of-line
blocking, meaning that if a segment is lost in transmission, a
receiving application will not see subsequent segments until the lost
one has been retransmitted successfully. This is, of course,
desirable for actual bytestream traffic, eg file transfers or shell
sessions, but not for realtime communication of independent messages.

> When an operation times out, I assume that, for a request-response
> protocol, a logical thing would be to drop the response. The question
> is how to do it correctly with TCP and with UDP.

There aren't many options for this: Include a ('sufficiently') unique
identifier with each request and drop all responses with a different
id.
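A minimal sketch of that id-matching scheme over UDP, in Python; the
4-byte big-endian id prefix and the toy loopback server (which
deliberately sends a response with a stale id first) are assumptions
for the demo:

```python
import itertools
import socket
import struct
import threading

_next_id = itertools.count(1)

def request(sock, server_addr, payload, timeout=1.0):
    """Send one request and wait for the matching response.

    Each request carries a 4-byte big-endian id prefix; responses whose
    id differs (e.g. a late reply to an earlier, timed-out request) are
    silently dropped.  Returns the response payload, or None on timeout.
    """
    req_id = next(_next_id)
    sock.settimeout(timeout)
    sock.sendto(struct.pack("!I", req_id) + payload, server_addr)
    while True:
        try:
            data, _ = sock.recvfrom(65535)
        except socket.timeout:
            return None                    # forget the whole thing
        resp_id, = struct.unpack("!I", data[:4])
        if resp_id == req_id:
            return data[4:]                # the response we asked for
        # else: stale response with a different id -- drop it, keep waiting

# Demo on loopback: the server deliberately answers with a wrong
# (stale) id first, then with the right one.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def serve_once():
    data, peer = server.recvfrom(65535)
    req_id, = struct.unpack("!I", data[:4])
    server.sendto(struct.pack("!I", req_id + 1000) + b"stale", peer)  # wrong id
    server.sendto(struct.pack("!I", req_id) + b"pong", peer)          # right id

threading.Thread(target=serve_once, daemon=True).start()

result = request(client, server.getsockname(), b"ping")
print(result)
```

The stale datagram is consumed and discarded inside the loop; only the
response whose id matches the outstanding request is returned.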
From: Arkadiy on
On Jan 13, 11:44 am, Rainer Weikusat <rweiku...(a)mssgmbh.com> wrote:

> > When an operation times out, I assume that, for a request-response
> > protocol, a logical thing would be to drop the response. The question
> > is how to do it correctly with TCP and with UDP.
>
> There aren't many options for this: Include a ('sufficiently') unique
> identifier with each request and drop all responses with a different
> id.

I can do this with UDP, but with TCP the server I am using doesn't
implement request ids.

Do you mean that timeouts don't make sense with TCP? Can't I just
drop the connection?

Regards,
Arkadiy
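For a TCP request-response exchange without request ids, dropping the
connection after a timeout is indeed the cleanest recovery: closing the
socket discards any late response still in flight, so the next exchange
starts from an unambiguous stream position. A minimal sketch in Python;
the one-connection-per-request policy and the toy loopback server
(which withholds one reply to simulate a lost response) are assumptions:

```python
import socket
import threading

# Toy TCP server on loopback: one connection per request; it withholds
# the reply to "slow" requests to simulate a response that never comes.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
addr = listener.getsockname()

def serve_forever():
    while True:
        conn, _ = listener.accept()
        data = conn.recv(1024)
        if data == b"slow":
            continue            # never answer; abandon the connection
        conn.sendall(data)      # echo the request back
        conn.close()

threading.Thread(target=serve_forever, daemon=True).start()

def request(payload, timeout=0.5):
    """One request per connection; on timeout, drop the connection.

    Closing the socket discards any late response that may still be in
    flight, so the next request starts from a clean stream with no
    stale bytes to skip over.
    """
    sock = socket.create_connection(addr, timeout=timeout)
    try:
        sock.sendall(payload)
        return sock.recv(1024) or None
    except socket.timeout:
        return None             # forget the whole thing
    finally:
        sock.close()            # fresh connection for the next request

r1 = request(b"slow")   # server never answers: times out, returns None
r2 = request(b"fast")   # answered normally
print(r1, r2)
```

The price of this approach is a new TCP handshake per timed-out
exchange, which is why request ids in the protocol itself are the
preferable fix when the server supports them.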