From: r norman on
On Tue, 23 Feb 2010 15:17:06 -0500, Hector Santos
<sant9442(a)nospam.gmail.com> wrote:

>Peter Olcott wrote:
>
>> I have made major enhancements to my technology and am
>> considering trade secret rather than patent protection, thus
>> I am trying to test the feasibility of selling my technology
>> as a web service that performs with the response time in the
>> ball park of locally installed software.
>
>
>Come on. I'm sure you haven't invented anything novel that hasn't been
>in place for 30+ years. Do you honestly think you are the first with
>fast internet transaction needs? Come on, Peter.
>
>If your "idea" is a "guarantee" of 500 ms maximum, well, there is no
>way you can guarantee any response time SHORT of failing when it
>timeouts and using this failure as an exclusion from success and
>frivolously claim this is the guarantee.
>
>Frankly, 500 ms is HIGH for initial contacts and, depending on the
>data size, for the RTT. But you can't reliably guarantee it.

Doesn't all this depend heavily on network utilization? If you have
to go through a number of switches and routers, and portions of your
network have very high utilization because of other people hogging
the bandwidth, then you could easily experience VERY long delays of
many seconds.

From: Hector Santos on
r norman wrote:

> On Tue, 23 Feb 2010 15:17:06 -0500, Hector Santos
> <sant9442(a)nospam.gmail.com> wrote:
>
>> Peter Olcott wrote:
>>
>> Frankly, 500 ms is HIGH for initial contacts and, depending on the
>> data size, for the RTT. But you can't reliably guarantee it.
>
> Doesn't all this depend heavily on network utilization? If you have
> to go through a number of switches and routers, and portions of your
> network have very high utilization because of other people hogging
> the bandwidth, then you could easily experience VERY long delays of
> many seconds.

Yes.

Let me rephrase it. Under ideal conditions, 500 ms is pretty high for
initial contact. That does not mean you would use 500 ms for a
timeout; timeouts usually run into the seconds and, generally, those
are special needs. For a web service HTTP request, unless it is done
programmatically using XHR (AJAX), you automatically get a 25-35
second timeout, and that's based on the socket layer.

If the client is not a browser but speaks the HTTP protocol itself,
it can also set its own timeout.
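
For example (just a sketch, not production code), a Winsock-based
client can shorten the defaults at the socket layer; the 5-second
values here are only illustrative:

    #include <winsock2.h>
    #pragma comment(lib, "ws2_32.lib")

    /* s is an already-connected TCP socket to the HTTP server.
       On Windows, SO_RCVTIMEO / SO_SNDTIMEO take milliseconds as a DWORD. */
    void SetHttpClientTimeouts(SOCKET s)
    {
        DWORD recvTimeout = 5000;  /* give up on the response after 5 s */
        DWORD sendTimeout = 5000;  /* give up on sending the request after 5 s */

        setsockopt(s, SOL_SOCKET, SO_RCVTIMEO,
                   (const char *)&recvTimeout, sizeof(recvTimeout));
        setsockopt(s, SOL_SOCKET, SO_SNDTIMEO,
                   (const char *)&sendTimeout, sizeof(sendTimeout));

        /* A blocking recv() that exceeds the timeout now fails with
           WSAETIMEDOUT instead of waiting for the stack default. */
    }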

I guess, a "Smart" client can tune it based on the target host.

If the target host is local (client/server same machine), then a low
timeout is reasonable.

If the target host is on the LAN, a low timeout is reasonable.

If the target host is on the WAN, then that is where a high timeout
should be expected.
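
Something along these lines (a rough sketch only; the host
classification and the timeout numbers are made up for illustration,
not measured values):

    #include <windows.h>
    #include <winhttp.h>
    #pragma comment(lib, "winhttp.lib")

    enum HostClass { HOST_LOCAL, HOST_LAN, HOST_WAN };

    /* Pick timeouts from the kind of target host and apply them to a
       WinHTTP session. WinHttpSetTimeouts takes milliseconds in the
       order: resolve, connect, send, receive. */
    void TuneTimeouts(HINTERNET hSession, enum HostClass where)
    {
        int resolveMs, connectMs, sendMs, receiveMs;

        switch (where)
        {
        case HOST_LOCAL:  /* client/server on the same machine */
            resolveMs = 500;  connectMs = 500;   sendMs = 1000;  receiveMs = 2000;  break;
        case HOST_LAN:    /* same LAN, still a low timeout */
            resolveMs = 1000; connectMs = 1000;  sendMs = 2000;  receiveMs = 5000;  break;
        default:          /* WAN, where high delays should be expected */
            resolveMs = 5000; connectMs = 10000; sendMs = 15000; receiveMs = 30000; break;
        }

        WinHttpSetTimeouts(hSession, resolveMs, connectMs, sendMs, receiveMs);
    }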

On a related note, for our RPC client/server system:

    If the client and server are on the same machine, then we
    use a local RPC bind (ncalrpc).

If the client and server are remote from each other, then
we use a TCP/IP RPC bind (ncacn_ip_tcp).

The two binds have different RPC timeout constraints.
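
Roughly, the two binds are set up like this. This is only an
illustration of the two protocol sequences, not our actual code; the
endpoint name, server name, and port are made up, and error handling
is omitted:

    #include <rpc.h>
    #pragma comment(lib, "rpcrt4.lib")

    /* Local RPC bind (ncalrpc): no network stack involved. */
    RPC_STATUS BindLocal(RPC_BINDING_HANDLE *hBinding)
    {
        RPC_WSTR binding = NULL;
        RpcStringBindingComposeW(NULL, (RPC_WSTR)L"ncalrpc", NULL,
                                 (RPC_WSTR)L"MyLocalEndpoint", NULL, &binding);
        RPC_STATUS status = RpcBindingFromStringBindingW(binding, hBinding);
        RpcStringFreeW(&binding);
        return status;
    }

    /* TCP/IP RPC bind (ncacn_ip_tcp): goes over the wire, so the
       connection timeout picture is different. */
    RPC_STATUS BindRemote(RPC_BINDING_HANDLE *hBinding)
    {
        RPC_WSTR binding = NULL;
        RpcStringBindingComposeW(NULL, (RPC_WSTR)L"ncacn_ip_tcp",
                                 (RPC_WSTR)L"server.example.com",
                                 (RPC_WSTR)L"5555", NULL, &binding);
        RPC_STATUS status = RpcBindingFromStringBindingW(binding, hBinding);
        RpcStringFreeW(&binding);
        /* The per-binding connect timeout can be tuned here. */
        RpcMgmtSetComTimeout(*hBinding, RPC_C_BINDING_DEFAULT_TIMEOUT);
        return status;
    }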

--
HLS
From: Joseph M. Newcomer on
The magic number is 30ms. If you feed your own speech back into headphones you are
wearing, using a 30ms delay, you will soon be unable to talk. One of the little cognitive
psychology numbers.

I've played a pipe organ that had a 1.5 beat delay. Because I play by ear and not by
rote playing of notes, I quickly discovered that it was unplayable. It would take a *lot*
of practice to get that 1.5 beat delay (well, for the tempi I was playing in) into my head
so I could cope with it, and I only had a fifteen-minute time slot on it.
joe

On Tue, 23 Feb 2010 12:02:56 -0800, Geoff <geoff(a)invalid.invalid> wrote:

>On Tue, 23 Feb 2010 14:39:45 -0500, Joseph M. Newcomer
><newcomer(a)flounder.com> wrote:
>
>>Speed of light through air/vacuum is substantially higher than speed of light in fiber
>>optics or copper wire; most of the satellite delays are on the uplink and downlink side
>>due to packet traffic scheduling. A direct link from, say, the US to Australia, using
>>satellite links, has fewer routers and repeaters than a cable. But point-to-point
>>distances don't matter; router distance and hop count can dominate.
>
>True enough, and router congestion is unpredictable, but a simple
>up/down link with no other delays imposes a 72,000 km / 299,792.5 km/s,
>or roughly 240 ms, delay at the outset. A typical terrestrial link is
>1/3 this value on average.
>
>If you really want to have fun try holding a simple telephone
>conversation over a satellite link in the presence of echo from the
>other end.
Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Peter Olcott on

"Hector Santos" <sant9442(a)nospam.gmail.com> wrote in message
news:ebU9CPMtKHA.3408(a)TK2MSFTNGP06.phx.gbl...
> Peter Olcott wrote:
>
>> Is it possible for a very fast web service to
>> consistently provide an average 500 millisecond response
>> time?
>>
>> Is the internet itself too slow making this goal
>> completely infeasible using current technology?
>
>
> What response time you mean, total or initial contact?

Total response time must average under 500 ms for the specific
application that I have in mind. It looks like, with only two packets
of input and one packet of output, this might be feasible for US
customers when the servers are also in the US.
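
One way to sanity-check that average would be to time a batch of
requests end to end from a client machine. A rough WinHTTP sketch
(the host name is only a placeholder, and error handling is mostly
omitted):

    #include <windows.h>
    #include <winhttp.h>
    #include <stdio.h>
    #pragma comment(lib, "winhttp.lib")

    int main(void)
    {
        const wchar_t *host = L"ocr.example.com";  /* placeholder host */
        const int runs = 100;
        DWORD totalMs = 0;
        int ok = 0;

        HINTERNET hSession = WinHttpOpen(L"LatencyProbe/1.0",
            WINHTTP_ACCESS_TYPE_DEFAULT_PROXY,
            WINHTTP_NO_PROXY_NAME, WINHTTP_NO_PROXY_BYPASS, 0);
        HINTERNET hConnect = WinHttpConnect(hSession, host,
            INTERNET_DEFAULT_HTTP_PORT, 0);

        for (int i = 0; i < runs; i++)
        {
            DWORD start = GetTickCount();
            HINTERNET hRequest = WinHttpOpenRequest(hConnect, L"GET", L"/",
                NULL, WINHTTP_NO_REFERER, WINHTTP_DEFAULT_ACCEPT_TYPES, 0);

            if (WinHttpSendRequest(hRequest, WINHTTP_NO_ADDITIONAL_HEADERS, 0,
                                   WINHTTP_NO_REQUEST_DATA, 0, 0, 0) &&
                WinHttpReceiveResponse(hRequest, NULL))
            {
                totalMs += GetTickCount() - start;  /* ms until response headers arrive */
                ok++;
            }
            WinHttpCloseHandle(hRequest);
        }

        if (ok > 0)
            printf("average response: %lu ms over %d successful requests\n",
                   (unsigned long)(totalMs / ok), ok);

        WinHttpCloseHandle(hConnect);
        WinHttpCloseHandle(hSession);
        return 0;
    }

This only times a trivial GET against the server, not the real
request with its input data, so it measures the network floor rather
than the full processing time.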

>
> When you talk of an application like a web service
> (presumably TCP based), I don't think you can guarantee
> any consistency in response time. However, it is
> reasonable to use a service-defined initial contact
> response time before considering it a timeout.
>
> This might be defined by whether your client is sync or
> async. In general, 25-35 seconds is the default timeout
> for a socket. When async, you have better control over
> the initial contact.
>
> You also didn't mention whether data size is involved in
> the timing.
>
> In principle, it isn't that the internet is slow, but
> there are many factors that can make it unreliable. There
> is also throttling that can be done by the network
> provider.
>
> Reading your other input, at best, all you can do is set a
> limit, perhaps on the initial contact time, if that
> concerns you. There is no way you would be able to get the
> persistent and consistent response time you are looking
> for. 500 ms should be reasonable for the data size you are
> talking about, but it is only useful for defining a
> timeout. You can't control whether the RTT (Round Trip
> Time) will be 500 ms; there are too many factors between
> the end points.
>
>
> --
> HLS


From: Peter Olcott on

"Hector Santos" <sant9442(a)nospam.gmail.com> wrote in message
news:OUGJ$UMtKHA.5976(a)TK2MSFTNGP05.phx.gbl...
> Peter Olcott wrote:
>
>> I have made major enhancements to my technology and am
>> considering trade secret rather than patent protection,
>> thus I am trying to test the feasibility of selling my
>> technology as a web service that performs with the
>> response time in the ball park of locally installed
>> software.
>
>
> Come on. I'm sure you haven't invented anything novel that
> hasn't been in place for 30+ years. Do you honestly think
> you are the first with fast internet transaction needs?
> Come on, Peter.
>

My technology is the only technology in the world that can
consistently recognize character glyphs at 96 DPI screen
resolutions with 100% accuracy. I already have a patent on
this.

> If your "idea" is a "guarantee" of 500 ms maximum, well,
> there is no way you can guarantee any response time SHORT
> of failing when it timeouts and using this failure as an
> exclusion from success and frivolously claim this is the
> guarantee.
>
> Frankly, 500 ms is HIGH for initial contacts and, depending
> on the data size, for the RTT. But you can't reliably
> guarantee it.
>
> --
> HLS