From: Peter Olcott on

"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
message news:9i98o55tloemgur426od4u8gkg12pd2qfa(a)4ax.com...
> See below...
> On Tue, 23 Feb 2010 10:36:01 -0600, "Peter Olcott"
> <NoSpam(a)OCR4Screen.com> wrote:
>
>>
>>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>>message news:fis7o5pcfncd6ne5bk1rrnrp0licmnhogq(a)4ax.com...
>>> See below...
>>> On Tue, 23 Feb 2010 03:36:10 -0600, "Peter Olcott"
>>> <NoSpam(a)OCR4Screen.com> wrote:
>>>
>>>>
>>>>"Joseph M. Newcomer" <newcomer(a)flounder.com> wrote in
>>>>message
>>>>news:ss17o5ljcgg1cueekve99dvd30uh4v9t47(a)4ax.com...
>>>>> The components involved in the response are:
>>>>> Sender stack delay
>>>>> Sender bandwidth to downstream server
>>>>> Total multi-server latency to receiver downstream
>>>>> server
>>>>> Receiver bandwidth to downstream server
>>>>> Receiver stack delay
>>>>> Receiver rendering time
>>>>>
>>>>> The stack delays exist and cannot be changed. They
>>>>> are
>>>>> probably in hundreds of
>>>>> microseconds but I have never measured them.
>>>>>
>>>>> "The last mile" certainly dominates. If you have
>>>>> dialup
>>>>> internet vs. T3 hardwire you will
>>>>> get different performance.
>>>>>
>>>>> But the interserver performance matters also. A ping,
>>>>> which is a trivial packet, can turn
>>>>> around in a few hundred milliseconds, e.g.,
>>>>> =================================================
>>>>> C:\Documents and Settings\email>ping google.com
>>>>>
>>>>> Pinging google.com [72.14.204.147] with 32 bytes of
>>>>> data:
>>>>>
>>>>> Reply from 72.14.204.147: bytes=32 time=145ms TTL=53
>>>>> Reply from 72.14.204.147: bytes=32 time=144ms TTL=53
>>>>> Reply from 72.14.204.147: bytes=32 time=142ms TTL=53
>>>>> Reply from 72.14.204.147: bytes=32 time=143ms TTL=53
>>>>>
>>>>> Ping statistics for 72.14.204.147:
>>>>> Packets: Sent = 4, Received = 4, Lost = 0 (0%
>>>>> loss),
>>>>> Approximate round trip times in milli-seconds:
>>>>> Minimum = 142ms, Maximum = 145ms, Average = 143ms
>>>>>
>>>>> C:\Documents and Settings\email>ping verizon.net
>>>>>
>>>>> Pinging verizon.net [206.46.232.39] with 32 bytes of
>>>>> data:
>>>>>
>>>>> Reply from 206.46.232.39: bytes=32 time=203ms TTL=245
>>>>> Reply from 206.46.232.39: bytes=32 time=203ms TTL=245
>>>>> Reply from 206.46.232.39: bytes=32 time=200ms TTL=245
>>>>> Reply from 206.46.232.39: bytes=32 time=208ms TTL=245
>>>>>
>>>>> Ping statistics for 206.46.232.39:
>>>>> Packets: Sent = 4, Received = 4, Lost = 0 (0%
>>>>> loss),
>>>>> Approximate round trip times in milli-seconds:
>>>>> Minimum = 200ms, Maximum = 208ms, Average = 203ms
>>>>>
>>>>> C:\Documents and Settings\email>ping
>>>>> www.cityofsydney.nsw.gov.au
>>>>>
>>>>> Pinging www.cityofsydney.nsw.gov.au [203.147.135.212]
>>>>> with
>>>>> 32 bytes of data:
>>>>>
>>>>> Reply from 203.147.135.212: bytes=32 time=389ms
>>>>> TTL=112
>>>>> Reply from 203.147.135.212: bytes=32 time=382ms
>>>>> TTL=112
>>>>> Reply from 203.147.135.212: bytes=32 time=381ms
>>>>> TTL=112
>>>>> Reply from 203.147.135.212: bytes=32 time=379ms
>>>>> TTL=112
>>>>>
>>>>> Ping statistics for 203.147.135.212:
>>>>> Packets: Sent = 4, Received = 4, Lost = 0 (0%
>>>>> loss),
>>>>> Approximate round trip times in milli-seconds:
>>>>> Minimum = 379ms, Maximum = 389ms, Average = 382ms
>>>>>
>>>>> C:\Documents and Settings\email>
>>>>>
>>>>> ====================================
>>>>>
>>>>> So note that a ping to halfway around the world takes
>>>>> about 400ms. This suggests that
>>>>> under similar conditions, at 3am local time, you
>>>>> *might*
>>>>> be able to turn a packet around
>>>>> in 500ms, but I wouldn't bet on it.
>>>>
>>>>Ah so ping can directly measure what I need to know. I
>>>>just
>>>>pinged seescreen.com and got 53 ms with 32 bytes and 56
>>>>ms
>>>>with 1024 bytes, larger numbers of bytes timed out. What
>>>>is
>>>>the normal packet size for a web service? Is the total
>>>>time
>>>>that a file takes precisely proportional to the number
>>>>of
>>>>packets, times the ping time per packet?
>>> ****
>>> Not quite. ping measures ONE instance of ONE protocol,
>>> a
>>> protocol that is handled fairly
>>> low in the stack and doesn't involve application
>>> response
>>> (which includes time to bring in
>>> pages of the application, etc.). Therefore, it is a
>>> sort-of-indicator of network traffic
>>> delays and nothing else. Note that doing a ping at
>>> different times of day could produce
>>> different results because of backbone traffic. For
>>> example, doing it while hundreds of thousands of users
>>> are streaming live Winter Olympics feeds would probably
>>> give different results, especially if pinging to the
>>> Pacific Northwest.
>>>
>>> As such, it is an INDICATOR. It is not a precise
>>> measure,
>>> and therefore is only
>>> SUGGESTIVE of performance. Your Mileage May Vary. Does
>>> Not Include Dealer Prep. Offer
>>> Void Where Prohibited By Law.
>>>
>>> Backbone traffic, retransmit times if packets are
>>> received
>>> in error (e.g., potential
>>> satellite feeds where upper atmosphere effects could
>>> scramble a packet, or a flock of
>>> birds flying in front of a microwave tower), and similar
>>> vagaries will introduce
>>> additional delays. ping uses ICMP, a connectionless
>>> protocol like UDP rather than TCP/IP, so there are also
>>> ack/nak time issues involved when you use TCP/IP. Sliding
>>> window
>>> protocols and piggybacking of
>>> ack/nak improve TCP/IP performance. Packet reassembly
>>> and
>>> multipath delays decrease it.
>>> If your server is heavily loaded, it will delay senders
>>> by
>>> throttling transmission until
>>> it can handle the load.
>>>
>>> Also, ping is connectionless; in TCP/IP you have to
>>> establish a connection, which is a
>>> whole set of packets flying back and forth just to
>>> activate the connection. These packets
>>> will have delays comparable to ping, so you might have
>>> several roundtrip delays.
>>>
>>> Packets vary; while in theory packets can be fairly
>>> large, in practice they get split, and most hosts cannot
>>> send out packets larger than about 1500 bytes including
>>> headers (the standard Ethernet MTU; TCP/IP headers are
>>> larger than ping headers, for example, so the usable TCP
>>> payload is roughly 1460 bytes). But they can be further
>>> split depending on the MTU (Maximum Transmission Unit) of
>>> any segment of the transmission path.
>>>
>>> So ping would represent "optimistic" times and not give
>>> you ANY GUARANTEES about TCP/IP
>>> performance. So, for example, with a 400ms delay to
>>> Sydney, Australia for ping, you may
>>> or may not be able to make a 500ms TCP/IP window.
>>>
>>> The best you can hope for is that TCP/IP will be close
>>> to
>>> ping time once connection is
>>> established, the app handles the activation, threads
>>> have
>>> been created (if necessary),
>>> etc. But ping does not account for connection/startup
>>> transient (and I don't mean that
>>> your app is launched by the connection, I'm referring to
>>> the "startup" required in *your*
>>> app once a connection is detected). Since TCP/IP is
>>> optimized for long transfers, short
>>> transfers have disproportionate overheads.
>>>
>>> I'm not sure why you think 500ms is a meaningful number
>>> at
>>> all; as I said, nearly
>>> everything involved is outside your control and
>>> therefore
>>> predicating end-to-end
>>> performance on environmental effects you cannot possibly
>>> take into account seems
>>> meaningless. Your goal is to turn that computation
>>> around
>>> as quickly as you can. Like
>>> google, you can even try to give an approximate measure.
>>> The user has to expect that
>>> other delays are in the network. Tough. Networks have
>>> delays, not your problem. Don't
>>> try to make it your problem, or you will always lose.
>>> Solve only the problem you can
>>> control.
>>>
>>> (You can work out the delays by doing things like
>>> figuring
>>> speed-of-light (about 80% of
>>> the speed of light in a vacuum), packet overheads, and
>>> payload times; I used to illustrate
>>> it using a plastic snake called "Packet"; times were
>>> measured from the time his head
>>> passed a point until his tail passed, so we had
>>> transmission time, reception time, and
>>> end-to-end delay, but that assumed ZERO routers in
>>> between, which is unrealistic. The
>>> point was to teach students that bandwidth matters. So
>>> I'd draw two lines on the board,
>>> and move "Packet" through those. Ultiimately,
>>> round-trip
>>> time was measured by
>>> head-to-tail-to-head-to-tail-to-head-to-tail-to-head-to-tail.
>>> "Packet" had articulated
>>> segments so I could make him shorter or longer as needed
>>> to illustrate packet overheads and the Nagle algorithm;
>>> my only regret was that years ago I hadn't bought several
>>> plastic snakes so I could do the sliding window protocol.
>>> Ultimately,
>>> the students had to
>>> demonstrate, mathematically, the consequences of packet
>>> size and demonstrate an
>>> understanding of the impact of processing delays on each
>>> side of the wire. So a typical
>>> exam question might be "You have two nodes, one in Los
>>> Angeles and one in Boston,
>>> separation 2500 miles for round numbers. The dedicated
>>> optical fiber has a speed of 0.8C.
>>> The response time for ack/nak is 200usec. Assume there
>>> are no delays caused by buffer
>>> waiting at either endpoint. What is the ideal sliding
>>> window size to maximize utilization
>>> of the fiber? Give the utilization you predict as a
>>> percentage of total fiber bandwidth")
>>> Of course, buffer delays are real, there are routers in
>>> between, there is other network
>>> traffic, etc. So no matter what your "perfect formula"
>>> predicts, in the real Internet it
>>> is AT BEST a HINT at what the performance MIGHT be under
>>> OPTIMISTIC conditions.
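>>>
>>> As a rough sketch of that back-of-the-envelope arithmetic
>>> (the exam question leaves the link bandwidth unstated, so
>>> the 1 Gbps figure here is assumed purely for illustration):
>>>
>>> // Bandwidth-delay product for the Boston/LA exam question;
>>> // the 1 Gbps bandwidth is an assumption, not given above.
>>> #include <cstdio>
>>> int main() {
>>>     const double c_km_s      = 299792.458;       // km/s in vacuum
>>>     const double distance_km = 2500 * 1.609344;  // ~4023 km
>>>     const double v_km_s      = 0.8 * c_km_s;     // speed in the fiber
>>>     const double ack_s       = 200e-6;           // ack/nak turnaround
>>>     const double rtt_s = 2.0 * distance_km / v_km_s + ack_s; // ~33.8 ms
>>>     const double bandwidth_bps = 1e9;             // ASSUMED 1 Gbps
>>>     const double bdp_bytes = bandwidth_bps / 8.0 * rtt_s;    // ~4.2 MB
>>>     std::printf("RTT ~ %.1f ms, window >= %.1f MB (~%.0f segments)\n",
>>>                 rtt_s * 1e3, bdp_bytes / 1e6, bdp_bytes / 1460.0);
>>>     // Any window at least this large keeps the fiber full
>>>     // (utilization ~100%); a smaller window W gives ~W/BDP of it.
>>>     return 0;
>>> }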
>>>
>>> I just tried several google searches on a variety of
>>> topics. While google took under 0.2 seconds for each, the
>>> times from click-to-refresh were approximately 1 second
>>> as I perceived them. (Allowing for my delays on the
>>> stopwatch, I was getting times from 0.76 to 1.05 seconds
>>> across about a dozen experiments.) Clicking into a page
>>> took from 1.5 to 7.6 seconds from click to display,
>>> except one that took ~22 seconds.
>>> joe
>>> ****
>>
>>I just carefully evaluated my requirements. I will be
>>receiving between 1K and 2K of input and producing 10 to
>>100 bytes of output. In other words, 1 to 2 packets of
>>input and 1 packet of output. It looks like achieving my
>>target of 400 ms response time might just work. Does this
>>seem plausible under your analysis?
>>
>>I use 500 ms total response time because this is the
>>maximum time that is perceived by a human as nearly
>>instantaneous; half a second of wasted time is not very
>>much wasted time.
> ****
> Normal perceptual-to-motor delay is about 250ms. 500ms
> can actually be noticeable as a
> delay.
>
> As I say, you might be able to do it, but if you don't
> meet the window, it really isn't
> your problem. And people who send things out over the
> Internet are already conditioned to
> expect longer delays.
> joe
> ****

I have made major enhancements to my technology and am
considering trade secret rather than patent protection; thus
I am trying to test the feasibility of selling my technology
as a web service that performs with a response time in the
ballpark of locally installed software.

>>
>>>>>
>>>>> But it is unrealistic to assume such goals when they
>>>>> are
>>>>> based on parameters you have no
>>>>> control over.
>>>>>
>>>>> Your job is to turn the packet around in the minimum
>>>>> compute time you can manage.
>>>>> Everything else is beyond what you can control.
>>>>> joe
>>>>> *****
>>>>>
>>>>> On Mon, 22 Feb 2010 23:54:04 -0600, "Peter Olcott"
>>>>> <NoSpam(a)OCR4Screen.com> wrote:
>>>>>
>>>>>>
>>>>>>"Ananth Ramasamy Meenachi" <msarm(a)live.com> wrote in
>>>>>>message
>>>>>>news:%23YtfKZEtKHA.3360(a)TK2MSFTNGP06.phx.gbl...
>>>>>>>
>>>>>>>> Is it possible for a very fast web service to
>>>>>>>> consistently provide an average 500 millisecond
>>>>>>>> response
>>>>>>>> time?
>>>>>>> It depends. The Internet bandwidth is one of the
>>>>>>> major factors that controls the effective response
>>>>>>> performance (particularly at the web server). When
>>>>>>> you have enough bandwidth at both ends, next comes
>>>>>>> the hardware (storage & processing infrastructure).
>>>>>>> The best solution is going for CLOUD computing; check
>>>>>>> out the AZURE platform.
>>>>>>>
>>>>>>>
>>>>>>>> Is the internet itself too slow making this goal
>>>>>>>> completely infeasible using current technology?
>>>>>>> No way; any technology will fail to work with a very
>>>>>>> slow internet connection.
>>>>>>>
>>>>>>> Let me know your objective so that I can explain some
>>>>>>> more on this.
>>>>>>
>>>>>>I want to provide a web service that takes a tiny
>>>>>>image
>>>>>>file
>>>>>>(15 x (100 to 400) pixels) and returns 20 to 100 bytes
>>>>>>of
>>>>>>text that it derived using my proprietary OCR
>>>>>>software.
>>>>>>Ideally I want to do this with a real time limit of
>>>>>>500
>>>>>>milliseconds. The next best thing would be an average
>>>>>>response rate less than 500 milliseconds. You can
>>>>>>assume
>>>>>>100
>>>>>>milliseconds for my process including translation to
>>>>>>and
>>>>>>from HTTP.
>>>>>>
>>>>>>>
>>>>>>> "Peter Olcott" <NoSpam(a)OCR4Screen.com> wrote in
>>>>>>> message
>>>>>>> news:W4mdnSJ8462Jxx7WnZ2dnUVZ_s-dnZ2d(a)giganews.com...
>>>>>>>> Is it possible for a very fast web service to
>>>>>>>> consistently provide an average 500 millisecond
>>>>>>>> response
>>>>>>>> time?
>>>>>>>>
>>>>>>>> Is the internet itself too slow making this goal
>>>>>>>> completely infeasible using current technology?
>>>>>>>>
>>>>>>>>
>>>>>>
>>>>> Joseph M. Newcomer [MVP]
>>>>> email: newcomer(a)flounder.com
>>>>> Web: http://www.flounder.com
>>>>> MVP Tips: http://www.flounder.com/mvp_tips.htm


From: Joseph M. Newcomer on
See below...
On Tue, 23 Feb 2010 11:20:05 -0800, Geoff <geoff(a)invalid.invalid> wrote:

>On Tue, 23 Feb 2010 03:36:10 -0600, "Peter Olcott"
><NoSpam(a)OCR4Screen.com> wrote:
>
>>Ah so ping can directly measure what I need to know. I just
>>pinged seescreen.com and got 53 ms with 32 bytes and 56 ms
>>with 1024 bytes, larger numbers of bytes timed out. What is
>>the normal packet size for a web service? Is the total time
>>that a file takes precisely proportional to the number of
>>packets, times the ping time per packet?
>
>Ping is ICMP, a connectionless protocol. TCP/IP is connected and
>guarantees delivery of application data but not the order or delay of
>the packets involved. TCP reassembles the packets in their proper
>order and passes the data to the application once it is reassembled.
>TCP also doesn't guarantee that every connected packet between two
>hosts will travel the same pathway. This is part of the redundancy of
>the system.
>
>Not all hosts respond to ICMP ping packets. Some block it at the host
>firewall, some ISPs block it from unknown sources at their borders.
>
>That being said, 500ms is probably pretty reasonable for just about
>anywhere on the globe. The longest delays will usually be incurred
>when satellite links are involved; there are probably fewer hops, but
>the speed-of-light delays are higher than for purely terrestrial links
>owing to the 72,000 km round trip.
****
OTOH:

Speed of light through air/vacuum is substantially higher than speed of light in fiber
optics or copper wire; most of the satellite delays are on the uplink and downlink side
due to packet traffic scheduling. A direct link from, say, the US to Australia, using
satellite links, has fewer routers and repeaters than a cable. But point-to-point
distances don't matter; router distance and hop count can dominate. In a communication
between Boston and LA, in the real Internet one packet might go through Cleveland,
Chicago, Denver, Boise, Seattle, Portland, San Francisco and finally LA, while the next
packet might go to Washington DC, Atlanta, Mobile, Dallas, Phoenix, and San Diego before
hitting LA. And, as you point out, the second packet might get there first. It's all
guesswork as to what actual delays might be. All you can tell from any set of
measurements is how long that set of packets took. Taking averages and standard deviation
can give a more honest picture, but outliers are interesting in that they skew both the
average and the standard deviation. Most outliers are on the "high end" so the
distribution often looks like the upper 2/3 of a normal distribution curve (nothing gets
there in less than x ms, average higher, broad standard deviation).
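
For instance, here is a minimal sketch (with invented RTT samples, not real
measurements) of how a single outlier drags both the average and the
standard deviation upward:

// Invented RTT samples: one retransmit-sized straggler skews both statistics.
#include <cmath>
#include <cstdio>
#include <vector>

static void report(const char* label, const std::vector<double>& rtt_ms)
{
    double sum = 0, sumsq = 0;
    for (double r : rtt_ms) { sum += r; sumsq += r * r; }
    const double n    = static_cast<double>(rtt_ms.size());
    const double mean = sum / n;
    const double sd   = std::sqrt(sumsq / n - mean * mean);
    std::printf("%s: mean %.1f ms, std dev %.1f ms\n", label, mean, sd);
}

int main()
{
    std::vector<double> samples = { 142, 144, 143, 145, 141, 143, 144, 142 };
    report("no outlier ", samples);
    samples.push_back(2200);              // one timeout-sized straggler
    report("one outlier", samples);
    return 0;
}

With these numbers the mean jumps from roughly 143 ms to roughly 372 ms on
the strength of that single straggler.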

I had a friend who gave a video lecture from Pittsburgh to Sydney some years ago; the
video went by satellite and the audio by ground link, and the audio ran sufficiently far
behind the video that it was extremely disconcerting to the viewers.
joe
****
>
>Some cmd tools:
>
>netstat -s in a cmd prompt. General stats about your TCP/IP stack.
>ping hostname as you have seen.
>tracert hostname
>pathping hostname
>
>Windows Perfmon has realtime measures of TCP, UDP, IP, ICMP gathered
>by the system.
>
>Some existing statistics continuously monitored globally.
>http://www.internettrafficreport.com/main.htm
>http://www.internethealthreport.com/
>http://www.noc.ucla.edu/weather.html
>http://www.dslreports.com/speedtest?more=1
>
>One excellent tool for users is http://www.dslreports.com/tools,
>especially the Tweak Test. This one has a "secret" link at the bottom
>of the summary page that graphs the packet protocol performance over
>time and includes the RTT per packet as well as a tabular summary of
>the performance. Of course this is for large data downloads but it
>might serve the purpose for your site.
Joseph M. Newcomer [MVP]
email: newcomer(a)flounder.com
Web: http://www.flounder.com
MVP Tips: http://www.flounder.com/mvp_tips.htm
From: Geoff on
On Tue, 23 Feb 2010 14:39:45 -0500, Joseph M. Newcomer
<newcomer(a)flounder.com> wrote:

>Speed of light through air/vacuum is substantially higher than speed of light in fiber
>optics or copper wire; most of the satellite delays are on the uplink and downlink side
>due to packet traffic scheduling. A direct link from, say, the US to Australia, using
>satellite links, has fewer routers and repeaters than a cable. But point-to-point
>distances don't matter; router distance and hop count can dominate.

True enough, and router congestion is unpredictable, but a simple
up/down link with no other delays imposes a delay of 72,000 km /
299,792.5 km/s, roughly 240 ms, at the outset. A typical terrestrial
link is about 1/3 of this value on average.
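
As a quick sanity check of those figures (the 16,000 km terrestrial path
below is an assumed "long" ground route, chosen only to illustrate the
roughly-one-third claim):

// Geostationary up+down path vs. an assumed long terrestrial fiber route.
#include <cstdio>
int main()
{
    const double c = 299792.458;                        // km/s, free space
    std::printf("satellite up+down: %.0f ms\n", 72000.0 / c * 1e3);
    std::printf("terrestrial fiber: %.0f ms\n", 16000.0 / (0.67 * c) * 1e3);
    return 0;
}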

If you really want to have fun, try holding a simple telephone
conversation over a satellite link in the presence of echo from the
other end.
From: Hector Santos on
Peter Olcott wrote:

> Is it possible for a very fast web service to consistently
> provide an average 500 millisecond response time?
>
> Is the internet itself too slow making this goal completely
> infeasible using current technology?


Which response time do you mean, total or initial contact?

When you talk of an application like a web service (presumably TCP
based), I don't think you can guarantee any consistency for response
time. However, it is reasonable to use a service-defined initial
contact response time before considering it a timeout.

This might be defined by whether your client is sync or async. In
general, 25-35 seconds is the default timeout for a socket. When
async, you have better control of the initial contact.
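
For a plain blocking (sync) client on Windows, that stack default can be
shortened per socket; a minimal sketch (error handling omitted, and the
500 ms value is simply this thread's target figure used as a timeout):

// Winsock: cap how long a blocking socket waits to send or receive.
// On Windows these options take a DWORD in milliseconds.
#include <winsock2.h>
#pragma comment(lib, "ws2_32.lib")

bool SetReplyTimeout(SOCKET s, DWORD ms)
{
    return setsockopt(s, SOL_SOCKET, SO_RCVTIMEO,
                      reinterpret_cast<const char*>(&ms), sizeof(ms)) == 0
        && setsockopt(s, SOL_SOCKET, SO_SNDTIMEO,
                      reinterpret_cast<const char*>(&ms), sizeof(ms)) == 0;
}

// Usage: SetReplyTimeout(sock, 500); after that, recv() fails with
// WSAETIMEDOUT once 500 ms pass with no data, and the caller decides
// whether to treat that as a failed transaction.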

You also didn't mention whether data size plays a role in the timing.

In principle, it isn't that the internet is slow, but there are many
factors that can make it unreliable. There is also throttling that
can be done by the network provider.

Reading your other input: at best, all you can do is set a limit,
perhaps on the initial contact time, if that concerns you. There is
no way you would be able to get the persistent and consistent
response time you are looking for. 500 ms should be reasonable for
the data size you are talking about, but it should only be used to
define a timeout. You can't guarantee that an RTT (Round Trip Time)
will be 500 ms; there are too many factors between end points.
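
If it is specifically the initial contact you want to bound, one common
sketch (again Windows/Winsock; the names here are just placeholders) is a
non-blocking connect() followed by select() with your budget:

// Bound the TCP connect itself to a fixed budget in milliseconds.
#include <winsock2.h>
#pragma comment(lib, "ws2_32.lib")

bool ConnectWithin(SOCKET s, const sockaddr* addr, int addrlen, DWORD budgetMs)
{
    u_long nonBlocking = 1;
    ioctlsocket(s, FIONBIO, &nonBlocking);     // switch to non-blocking mode

    if (connect(s, addr, addrlen) == SOCKET_ERROR &&
        WSAGetLastError() != WSAEWOULDBLOCK)
        return false;                          // immediate, hard failure

    fd_set writable;
    FD_ZERO(&writable);
    FD_SET(s, &writable);
    timeval tv = { static_cast<long>(budgetMs / 1000),
                   static_cast<long>((budgetMs % 1000) * 1000) };

    // The socket turns up writable once the three-way handshake completes;
    // a return of 0 means the budget expired first.
    return select(0, nullptr, &writable, nullptr, &tv) == 1;
}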


--
HLS
From: Hector Santos on
Peter Olcott wrote:

> I have made major enhancements to my technology and am
> considering trade secret rather than patent protection; thus
> I am trying to test the feasibility of selling my technology
> as a web service that performs with a response time in the
> ballpark of locally installed software.


Come on. I'm sure you haven't invented anything novel that hasn't
been in place for 30+ years. Do you honestly think you are the first
with fast internet transaction needs? Come on, Peter.

If your "idea" is a "guarantee" of 500 ms maximum, well, there is no
way you can guarantee any response time SHORT of failing when it
timeouts and using this failure as an exclusion from success and
frivolously claim this is the guarantee.

Frankly, 500 ms is HIGH for initial contacts and, depending on the
data size, for the RTT. But you can't reliably guarantee it.

--
HLS