From: TomChapman on
I have written a client/server application set using TCP as the link.
Normally only a few packets are exchanged every minute. One thing the
client does is ask the server for a specific database record. The server
retrieves the record and sends it back to the client. In some odd cases
the client may all of a sudden realize that it needs thousands of these
records.

The obvious method is to send one request, wait for the return data, and
then send the next request. That would work. However...

I'm thinking I'd get faster results for the client if I tried to stay
ahead by sending requests early so there were a few queued at the server
or in transit.

I don't know the best way to do this. How do I know when to send a new
request? An individual client does not know what load the server is
under, and it may vary.

I'm thinking I need some kind of counter. What is the best approach
here? How should I tackle this throttling problem?


----------
Question 2:

In many of my programs I seem to spend more time handling errors and
what-ifs than I do on the nominal-case code. I'm used to thinking
about what-ifs.

So consider the case where I am sending thousands of request-for-data
messages. In a perfect world, the server would respond correctly and I
would receive 100% of the responses. But I always worry about what-ifs.
Specifically, in this case, what if something goes wrong somewhere and I
never get a response? If I'm queuing multiple requests and using some
kind of counter, the counter might get out of whack if a packet
here or there was never responded to. How do I handle this situation?
From: ScottMcP [MVP] on
On Dec 4, 11:49 pm, TomChapman <TomChapma...(a)gmail.com> wrote:
> I have written a client/server application set using TCP as the link.
> Normally only a few packets are exchanged every minute. One thing the
> client does is ask the server for a specific database record. The server
> retrieves the record and sends it back to the client. In some odd cases
> the client may all of a sudden realize that it needs thousands of these
> records.
>
> The obvious method is to send one request, wait for the return data, and
> then send the next request. That would work. However...
>
> I'm thinking I'd get faster results for the client if I tried to stay
> ahead by sending requests early so there were a few queued at the server
> or in transit.
>
> I don't know the best way to do this. How do I know when to send a new
> request? An individual client does not know what load the server is
> under, and it may vary.
>
> I'm thinking I need some kind of counter. What is the best approach
> here? How should I tackle this throttling problem?

It doesn't sound like there is a throttling problem, just a queueing
problem. And the queueing problem is largely solved for you by TCP.
If you send multiple requests they will simply wait in a Winsock queue
until the server gets around to reading them. At some point, if the
server's input buffer fills, the client end will be paused by TCP,
typically with WSAEWOULDBLOCK. So in that sense you already have a
throttle (assuming you coded proper behavior for WSAEWOULDBLOCK).
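
To make that concrete, here is a minimal sketch of a client send path
that honors WSAEWOULDBLOCK; the class name CRecordSock and the pending
buffer are my own illustration, not code from the original poster:

#include <afxsock.h>
#include <vector>

// Queues whatever Send() could not accept and retries it on OnSend().
class CRecordSock : public CAsyncSocket
{
public:
    void SendRequest(const void* pData, int nLen)
    {
        const char* p = static_cast<const char*>(pData);
        m_pending.insert(m_pending.end(), p, p + nLen);
        Drain();
    }

protected:
    virtual void OnSend(int nErrorCode)
    {
        CAsyncSocket::OnSend(nErrorCode);
        Drain();  // TCP has room again; push out queued bytes
    }

private:
    void Drain()
    {
        while (!m_pending.empty())
        {
            int nSent = Send(&m_pending[0], (int)m_pending.size());
            if (nSent == SOCKET_ERROR)
            {
                if (GetLastError() != WSAEWOULDBLOCK)
                    Close();  // real error; WSAEWOULDBLOCK just waits
                return;       // OnSend() will call Drain() again
            }
            m_pending.erase(m_pending.begin(), m_pending.begin() + nSent);
        }
    }

    std::vector<char> m_pending;  // accepted from caller, not yet sent
};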


>
> ----------
> Question 2:
>
> In many of my programs I seem to spend more time handling errors and
> what-ifs than I do on the nominal-case code. I'm used to thinking
> about what-ifs.
>
> So consider the case where I am sending thousands of request-for-data
> messages. In a perfect world, the server would respond correctly and I
> would receive 100% of the responses. But I always worry about what-ifs.
> Specifically, in this case, what if something goes wrong somewhere and I
> never get a response? If I'm queuing multiple requests and using some
> kind of counter, the counter might get out of whack if a packet
> here or there was never responded to. How do I handle this situation?

You can send a "request number" with each query, and have the server's
reply include the same number. That will let you match each reply to its
request and detect any that never arrive.
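
A minimal sketch of that bookkeeping on the client side; the class name,
the members, and the idea of sweeping on a timer are assumptions for
illustration:

#include <windows.h>
#include <map>
#include <vector>

// Records each request number when it is sent, erases it when the
// matching reply arrives, and sweeps for requests that waited too long.
class CRequestTracker
{
public:
    CRequestTracker() : m_next(1) {}

    DWORD Issue()            // call just before sending a request
    {
        DWORD id = m_next++;
        m_outstanding[id] = GetTickCount();  // remember when it went out
        return id;
    }

    void Complete(DWORD id)  // call when the matching reply arrives
    {
        m_outstanding.erase(id);
    }

    // Call from a timer. Returns ids pending longer than timeoutMs so
    // the caller can re-send them or flag an error.
    std::vector<DWORD> SweepStale(DWORD timeoutMs)
    {
        std::vector<DWORD> stale;
        DWORD now = GetTickCount();
        std::map<DWORD, DWORD>::iterator it = m_outstanding.begin();
        while (it != m_outstanding.end())
        {
            if (now - it->second >= timeoutMs)
            {
                stale.push_back(it->first);
                m_outstanding.erase(it++);  // erase without invalidating
            }
            else
                ++it;
        }
        return stale;
    }

private:
    DWORD m_next;
    std::map<DWORD, DWORD> m_outstanding;  // request id -> send tick
};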



From: Joseph M. Newcomer on
See below...
On Fri, 04 Dec 2009 22:49:52 -0600, TomChapman <TomChapman12(a)gmail.com> wrote:

>I have written a client/server application set using TCP as the link.
>Normally only a few packets are exchanged every minute. One thing the
>client does is ask the server for a specific database record. The server
>retrieves the record and sends it back to the client. In some odd cases
>the client may all of a sudden realize that it needs thousands of these
>records.
>
>The obvious method is to send one request, wait for the return data, and
>then send the next request. That would work. However...
>
>I'm thinking I'd get faster results for the client if I tried to stay
>ahead by sending requests early so there were a few queued at the server
>or in transit.
****
Yes, this sounds like a good idea.
****
>
>I don't know the best way to do this. How do I know when to send a new
>request? An individual client does not know what load the server is
>under, and it may vary.
****
Then the requests will simply wait at the server until it gets around to them. I don't
see a problem here.
****
>
>I'm thinking I need some kind of counter. What is the best approach
>here? How should I tackle this throttling problem?
****
You could limit the number of outstanding requests from any given client, but as you point
out, that's a simple counter. But I'd be inclined to a more laissez-faire approach (free
market) and just dump out all the requests; if the server is running slow, they take
longer to process, but so what? You're going to need them anyway, so just queue them up
and let the server deal with the issue.
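
If you do decide to cap the number in flight, a minimal sketch of the
counter approach follows; the window size of 16 and all the names are
illustrative, not from any posted code:

#include <windows.h>
#include <deque>

// At most kMaxOutstanding requests are in flight; each reply releases
// a slot and the next queued request goes out.
class CRequestWindow
{
public:
    CRequestWindow() : m_inFlight(0) {}

    void Queue(DWORD recordId)  // client decided it wants this record
    {
        m_waiting.push_back(recordId);
        Pump();
    }

    void OnReply()              // a response came back from the server
    {
        --m_inFlight;
        Pump();
    }

private:
    void Pump()
    {
        while (m_inFlight < kMaxOutstanding && !m_waiting.empty())
        {
            SendRequest(m_waiting.front());
            m_waiting.pop_front();
            ++m_inFlight;
        }
    }

    void SendRequest(DWORD recordId)
    {
        // app-specific: format the request (including the record id)
        // and hand it to the socket layer
    }

    static const int kMaxOutstanding = 16;  // tune for your server
    int m_inFlight;
    std::deque<DWORD> m_waiting;
};
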
joe
****
>
>
>----------
>Question 2:
>
>In many of my programs I seem to spend more time handling errors and
>what-ifs than I do on the nominal-case code. I'm used to thinking
>about what-ifs.
>
>So consider the case where I am sending thousands of request-for-data
>messages. In a perfect world, the server would respond correctly and I
>would receive 100% of the responses. But I always worry about what-ifs.
>Specifically, in this case, what if something goes wrong somewhere and I
>never get a response? If I'm queuing multiple requests and using some
>kind of counter, the counter might get out of whack if a packet
>here or there was never responded to. How do I handle this situation?
Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Alexander Grigoriev on
For what it's worth, even Microsoft managed (of course) to get it wrong
in their server.sys.

See, for example, KB 297019. When Outlook PST files are located on a
remote server over a slow WAN link and many users are accessing the files
at once, the server may run out of paged pool, or hang.

This suggests that, most likely, server.sys does NOT limit or throttle
the total amount of data pending for transmission; when it gets a lot of
requests at once, it will try to queue an excessive amount of data,
possibly exceeding available memory.
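
The usual guard on the sending side is to track how many bytes are
queued and stop accepting new work past a high-water mark; a minimal
sketch (the 1 MB cap and the names are illustrative):

#include <cstddef>

// Refuses (or defers) new sends once too many bytes are already pending.
class CSendBudget
{
public:
    CSendBudget() : m_pendingBytes(0) {}

    bool CanAccept(std::size_t nBytes) const
    {
        return m_pendingBytes + nBytes <= kHighWater;
    }
    void OnQueued(std::size_t nBytes) { m_pendingBytes += nBytes; }  // handed to the queue
    void OnSent(std::size_t nBytes)   { m_pendingBytes -= nBytes; }  // actually transmitted

private:
    static const std::size_t kHighWater = 1024 * 1024;  // 1 MB, tune as needed
    std::size_t m_pendingBytes;
};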

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in message
news:dhrkh5p6tqr1fae6sqrm0rcetdra025r17(a)4ax.com...
> See below...
> On Fri, 04 Dec 2009 22:49:52 -0600, TomChapman <TomChapman12(a)gmail.com>
> wrote:
>
>>I have written a client/server application set using TCP as the link.
>>Normally only a few packets are exchanged every minute. One thing the
>>client does is ask the server for a specific database record. The server
>>retrieves the record and sends it back to the client. In some odd cases
>>the client may all-of-a-sudden realize that it needs thousands of these
>>records.
>>
>>The obvious method is to send one request wait for the return data and
>>then send the next request. That would work. However...
>>
>>I'm thinking I'd get faster results for the client if I tried to stay
>>ahead by sending requests early so there were a few queued at the server
>>or in transit.
> ****
> Yes, this sounds like a good idea
> ****
>>
>>I don't know the bast way to do this. How do I know when to send a new
>>request. An individual client does not know what load the server is
>>under. It may vary.
> ****
> Then the requests will simply wait at the server until it gets around to
> them. I don't
> see a problem here.
> ****
>>
>>I'm thinking I need some kind of counter. What is the best approach
>>here? How should I tackle this throttling problem.
> ****
> You could limit the number of outstanding requests from any given client,
> but as you point
> out, that's a simple counter. But I'd be inclined to a more laissez-faire
> approach (free
> market) and just dump out all the requests; if the server is running slow,
> they take
> longer to process, but so what? You're going to need them anyway, so just
> queue them up
> and let the server deal with the issue.
> joe
> ****
>>
>>
>>----------
>>Question 2:
>>
>>In many of my programs I seem to spend more time handling errors and
>>what-ifs then I do with nominal situation code. I'm use to thinking
>>about what-ifs.
>>
>>So in the case where I am sending thousands of request-for-data
>>messages. In the perfect world, the server would correctly respond and I
>>would receive 100% of the responses. But I always worry about what-ifs.
>>Specifically in this case, what if something goes wrong somewhere and I
>>never get a response. If I'm queuing multiple requests and using some
>>kind of counter, the counter might get out-of-wack if a packet
>>here-or-there was never responded to. How do I handle this situation?
> Joseph M. Newcomer [MVP]
> email: newcomer(a)flounder.com
> Web: http://www.flounder.com
> MVP Tips: http://www.flounder.com/mvp_tips.htm


From: TomChapman on
Joseph M. Newcomer wrote:
> [...]

Just for my knowledge...

What happens when two clients send data to the same server? I know each
connected client gets a separate instance of my CAsyncSocket-derived
class, each with its own OnReceive. I know that TCP guarantees in-order
arrival at the server, so I'm sure each client's data will be in order.

My question is:

Say one client sends a thousand packets, which will take the server
some time to process. While this is happening, a second client sends a
bunch of packets.

Will my program process all of the packets from the first client first,
since they arrived first, or will the first client's OnReceive calls be
interspersed with OnReceive calls on the second instance for the second
client?

I could see good arguments either way. One side of the coin would be to
process in the order received, which sounds logical. But on the other
side of the coin, it might be good to intersperse so that one client
doesn't monopolize the server.
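
For reference, the per-connection receive pattern I am asking about
looks roughly like this (a sketch: the class name, the 4 KB buffer, and
the parsing helper are assumptions, with one instance per connected
client):

#include <afxsock.h>

class CClientSock : public CAsyncSocket
{
protected:
    virtual void OnReceive(int nErrorCode)
    {
        CAsyncSocket::OnReceive(nErrorCode);
        char buf[4096];
        int n = Receive(buf, sizeof(buf));
        if (n > 0)
            ProcessBytes(buf, n);
        // n == 0 means graceful close; SOCKET_ERROR/WSAEWOULDBLOCK can
        // simply be ignored here and retried on the next notification.
        // Returning lets the message pump deliver the next FD_READ,
        // which may belong to a different client's socket.
    }

    void ProcessBytes(const char* p, int n)
    {
        // app-specific: reassemble complete messages and dispatch them
    }
};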