From: Rainer Weikusat
Nicolas George <nicolas$george(a)salle-s.org> writes:
> Moi wrote in message
> <77da1$4c2b367a$5350c024$24831(a)cache100.multikabel.net>:
>> The simplest possible protocol would want the "messages" to be terminated
>> by a \n
>
> That is the simplest to imagine, and to quick-test using telnet-like tools,
> but this is terribly annoying to implement. I would strongly advise against
> it in newly designed protocols. Announcing the size of the fields before the
> payload is much more practical.

This is then twice as annoying, as you have to have a recv-loop to read
the length and a separate recv-loop to read the data, including the
possibility that your last 'length read' will read an arbitrary part of
the 'data', possibly including some number of following records and one
hacked off at an arbitrary boundary at the end of the buffer.
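
To make the two loops concrete, here is a minimal sketch, assuming a
4-byte big-endian length prefix (the prefix format and the function
names are illustrative only, nothing in this thread specifies them):

#include <stdint.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>

/* Loop until exactly len bytes have arrived.  recv() may deliver any
   number of bytes up to len per call.  Returns 0 on success, -1 on
   error or when the peer closes the connection. */
static int recv_exact(int fd, void *buf, size_t len)
{
    char *p = buf;

    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0)
            return -1;
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Read one length-prefixed record: one loop for the 4-byte prefix, a
   second loop for the payload.  The caller frees *out. */
static int recv_record(int fd, char **out, uint32_t *outlen)
{
    uint32_t len;

    if (recv_exact(fd, &len, sizeof len) == -1)
        return -1;
    len = ntohl(len);                   /* prefix assumed big-endian */
    if ((*out = malloc(len)) == NULL)
        return -1;
    if (recv_exact(fd, *out, len) == -1) {
        free(*out);
        return -1;
    }
    *outlen = len;
    return 0;
}

Note that because recv_exact() never asks for more bytes than the
current field needs, the 'length read' in this version cannot swallow
part of the following data; the over-read described above only appears
once the reader starts pulling in large chunks for efficiency's sake.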

>> (or \r\n)
>
> _That_ is just plain masochism.

It's the usual internet protocol convention. The original idea behind
that was (supposedly) that both \r and \n can appear in the data
without being escaped.

From: Nicolas George
Rainer Weikusat wrote in message <87mxucslpx.fsf(a)fever.mssgmbh.com>:
> This is then twice as annoying, as you have to have a recv-loop to read
> the length and a separate recv-loop to read the data, including the
> possibility that your last 'length read' will read an arbitrary part of
> the 'data', possibly including some number of following records and one
> hacked off at an arbitrary boundary at the end of the buffer.

You obviously need to learn to write a buffering recv loop. Any other design
is just stupid.
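
A minimal sketch of such a buffering loop, again assuming a 4-byte
big-endian length prefix purely for illustration:

#include <stdint.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>

#define BUFSZ 65536

struct conn {
    char   buf[BUFSZ];   /* bytes received but not yet consumed */
    size_t fill;         /* how much of buf is currently valid */
};

/* Try to take one complete length-prefixed record out of c->buf.
   Copies it to rec (which must have room for BUFSZ bytes) and returns
   its length, or returns -1 if the record is still incomplete. */
static ssize_t extract_record(struct conn *c, char *rec)
{
    uint32_t len;

    if (c->fill < sizeof len)
        return -1;
    memcpy(&len, c->buf, sizeof len);
    len = ntohl(len);
    if (len > BUFSZ - sizeof len)
        return -1;          /* can never fit; real code would drop the peer */
    if (c->fill < sizeof len + len)
        return -1;          /* record not complete yet */
    memcpy(rec, c->buf + sizeof len, len);
    c->fill -= sizeof len + len;
    memmove(c->buf, c->buf + sizeof len + len, c->fill);  /* keep the rest */
    return (ssize_t)len;
}

/* Add whatever the kernel has to the buffer; recv() may hand over half
   a record or several records at once, extract_record() copes either
   way. */
static int refill(int fd, struct conn *c)
{
    ssize_t n = recv(fd, c->buf + c->fill, sizeof c->buf - c->fill, 0);
    if (n <= 0)
        return -1;           /* error or peer closed the connection */
    c->fill += (size_t)n;
    return 0;
}

The caller alternates: extract_record() until it reports an incomplete
record, then refill(). A single recv() may thus deliver zero, one, or
several records, and the buffer absorbs all three cases.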

> It's the usual internet protocol convention.

Yes, I know.

> The original idea behind
> that was (supposedly) that both \r and \n can appear in the data
> without being escaped.

The original idea is that terminals randomly added \r to what they
emitted, and sometimes required it in their input to display things
properly.

Fortunately, the days where network protocols were directly connected to a
terminal ended a good decade ago.

Now, please, stop your useless trolls.
From: Scott Lurndal
arnuld <sunrise(a)invalid.address> writes:
>> On Wed, 30 Jun 2010 14:08:50 +0200, Ersek, Laszlo wrote:
>
>
>> ..SNIP....
>
>> 1. Loop until at least N bytes come in (fixed size header).
>>
>> 2. Supposing M >= N bytes arrived, parse the header. The header tells
>> you (because the sender computed it) how many octets the body will
>> contain. Let's call that integer B.
>
>
>Yes, the server already sends data in that format. It sends
>Content-Length: N in the message text.
>
>The problem is: how can I be sure that this particular length of data
>will arrive in one recv()? It can come split across any number of
>partial recv()s.

If you know that it is being sent, keep calling recv() until you've
received the entire line, then parse it.
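
A rough sketch of that loop, assuming the header line is terminated by
\n (the function name and parameters are illustrative):

#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Keep calling recv() until buf holds a complete '\n'-terminated line.
   Returns the line length including the '\n', or -1 on error, EOF or
   an overlong line.  Bytes past the newline (the start of the body)
   are left in buf for the caller. */
static ssize_t recv_line(int fd, char *buf, size_t bufsz, size_t *fill)
{
    for (;;) {
        char *nl = memchr(buf, '\n', *fill);
        ssize_t n;

        if (nl != NULL)
            return nl - buf + 1;
        if (*fill == bufsz)
            return -1;                  /* line longer than the buffer */
        n = recv(fd, buf + *fill, bufsz - *fill, 0);
        if (n <= 0)
            return -1;
        *fill += (size_t)n;
    }
}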

scott
From: Rainer Weikusat
Nicolas George <nicolas$george(a)salle-s.org> writes:
> Rainer Weikusat wrote in message <87mxucslpx.fsf(a)fever.mssgmbh.com>:
>> This is then twice as annoying, as you have to have a recv-loop to read
>> the length and a separate recv-loop to read the data, including the
>> possibility that your last 'length read' will read an arbitrary part of
>> the 'data', possibly including some number of following records and one
>> hacked off at an arbitrary boundary at the end of the buffer.
>
> You obviously need to learn to write a buffering recv loop. Any other design
> is just stupid.

When data is read into a buffer of some maximum size and then parsed
anyway, your assertion that 'using \n as line terminator would be
annoying' doesn't make any sense anymore, at least to me. Care to
elaborate on what those 'annoyances' are supposed to be?

>> The original idea behind
>> that was (supposedly) that both \r and \n can appear in the data
>> without being escaped.
>
> The original idea is that the terminals randomly added \r in what they
> emitted and sometimes required them to display things properly.
>
> Fortunately, the days where network protocols were directly connected to a
> terminal ended a good decade ago.

IIRC, the last time I saw an actual character-based terminal was about
a decade ago, and it was already a rare curiosity at the time. Also,
'connecting network protocols to terminals' is not something which
could possibly be done in the way your statement suggests at all. OTOH,
the original Internet message format RFC (822) specifically allowed
both \r and \n as part of the user data (this has meanwhile been
retracted), and consequently at least the SMTP line terminator must be
something different from either of them, independently of what you
were referring to above.
From: Nicolas George
Rainer Weikusat wrote in message <87eifosg5m.fsf(a)fever.mssgmbh.com>:
> When data is read into a buffer of some maximum size and then parsed,
> anyway, your assertion that 'using \n as line terminator would be
> annoying' doesn't make any sense anymore, at least to me. Care to
> elaborate what those 'annoyances' are supposed to be?

Knowing the size of the data in advance avoids all the dynamic
reallocation: the wrapping buffering function reads into a fixed buffer
whose size depends roughly on the typical network throughput, and you
copy the data directly into a memory area of the correct size.
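
As an illustrative sketch of that scheme (the struct and function names
are assumptions, echoing the buffering loop upthread): once the
announced length has been parsed out of the staging buffer, the
destination is allocated exactly once and filled directly, with no
reallocation anywhere:

#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>

struct conn {                /* fixed staging buffer, as upthread */
    char   buf[16384];       /* sized for throughput, not for messages */
    size_t fill;             /* bytes buffered but not yet consumed */
};

/* The announced length has already been parsed out of c->buf and
   removed from it.  Allocate the destination once, take whatever body
   bytes are already staged, then recv() straight into the destination:
   no realloc ever happens. */
static char *read_sized_body(int fd, struct conn *c, size_t len)
{
    char *body = malloc(len);
    size_t have;

    if (body == NULL)
        return NULL;

    have = c->fill < len ? c->fill : len;       /* already-buffered bytes */
    memcpy(body, c->buf, have);
    c->fill -= have;
    memmove(c->buf, c->buf + have, c->fill);    /* keep any extra records */

    while (have < len) {
        ssize_t n = recv(fd, body + have, len - have, 0);
        if (n <= 0) {
            free(body);
            return NULL;     /* error or peer closed the connection */
        }
        have += (size_t)n;
    }
    return body;
}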

> IIRC, the last time I saw an actual character-based terminal was about
> a decade ago and it was already a rare curiousity at these times. Also

So what?

> the original SMTP RFC (822) specifically allowed both \r and \n as
> part of the user data (this has meanwhile been retracted) and
> consequently, the at least the SMTP line terminator must be something
> different from either of both, indepdently of what you were referring
> to above.

And what is it supposed to prove? SMTP is a braindead protocol. Encouraging
people who design new protocols to imitate its flaws is just plain criminal.