From: Stephen Hemminger on
On Thu, 15 Jul 2010 12:51:22 -0700
Rick Jones <rick.jones2(a)hp.com> wrote:

> I have to wonder if the only heuristic one could employ for divining the initial
> congestion window is to be either pessimistic/conservative or
> optimistic/liberal. Or, for that matter, whether that is the only one we really need here?
>
> That's what it comes down to, doesn't it? At any one point in time, we don't
> *really* know the state of the network and whether it can handle the load we
> might wish to put upon it. We are always reacting to it. Up until now, it has
> been felt necessary to be pessimistic/conservative at time of connection
> establishment and not rely as much on the robustness of the "control" part of
> avoidance and control.
>
> Now, the folks at Google have lots of data to suggest we don't need to be so
> pessimistic/conservative and so we have to decide if we are willing to be more
> optimistic/liberal. Broadly handwaving, the "netdev we" seems to be willing to
> be more optimistic/liberal in at least a few cases, and the question comes down
> to whether or not the "IETF we" will be similarly willing.

I am not convinced that a host with an aggressive initial cwnd (Linux) would
avoid unfairly monopolizing available bandwidth compared to older, more
conservative implementations (Windows). Whether fairness is important is another debate.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo(a)vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
From: Bill Davidsen on
David Miller wrote:
> From: Bill Davidsen <davidsen(a)tmr.com>
> Date: Wed, 14 Jul 2010 11:21:15 -0400
>
>> You may have to go into /proc/sys/net/core and crank up the
>> rmem_* settings, depending on your distribution.
>
> You should never, ever, have to touch the various networking sysctl
> values to get good performance in any normal setup. If you do, it's a
> bug, report it so we can fix it.
>
> I cringe every time someone says to do this, so please do me a favor
> and don't spread this further. :-)
>
I think a transit time measured in tenths of a second would disqualify this as a
"normal setup."

High bandwidth and high latency don't work well together, because the sender
transmits until the window is full and then stalls waiting for ACKs, giving poor
throughput. I saw this with a sat feed to Wyoming from GE's Research Center in
upstate NY in the late '80s or early '90s (I think this was NYSERNet at the
time). I did feeds from the NYC area to California and Hawaii with SBC in the
early-to-mid 2000s. In every case, SunOS, Solaris, AIX, and Linux all failed to
hit anything like reasonable transfer speeds without manual tweaking, and the
advice to increase the window size came from network engineers at ISPs and
backbone providers.

The O.P. may have other issues, and may benefit from other changes as well, but
raising the window size is a reasonable thing to do on links with RTTs in the
hundreds of milliseconds, and it's easy to try without changing config files.
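To see why such links stall at default settings, the arithmetic is simple: TCP throughput is bounded by one window per round trip, and the window needed to keep the pipe full is the bandwidth-delay product. A rough illustration (the link numbers here are hypothetical, just in the range being discussed):

```python
def max_throughput_bytes_per_s(window_bytes, rtt_s):
    """TCP can deliver at most one window of data per round trip."""
    return window_bytes / rtt_s

def window_needed(bandwidth_bytes_per_s, rtt_s):
    """Bandwidth-delay product: the window needed to keep the pipe full."""
    return bandwidth_bytes_per_s * rtt_s

# A 64 KB window (no window scaling) over a 600 ms satellite RTT
# caps out near 109 KB/s no matter how fast the link is:
print(max_throughput_bytes_per_s(65535, 0.6))

# Window needed to fill a 10 Mbit/s link at that RTT: 750 KB,
# far beyond an untuned default.
print(window_needed(10e6 / 8, 0.6))
```

This is the same calculation the network engineers mentioned above would have been doing when recommending larger windows.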

--
Bill Davidsen <davidsen(a)tmr.com>
"We have more to fear from the bungling of the incompetent than from
the machinations of the wicked." - from Slashdot
From: H.K. Jerry Chu on
I don't even consider a modest IW increase to 10 aggressive. The scaling of IW
is, IMO, only adequate given the huge bandwidth growth of the past decade.
Remember, at any point in time there can be plenty of flows sending large cwnd
bursts at twice the bottleneck link rate anyway, so the "fairness" question may
already be ill-defined. In any case, we're trying to run some experiments on a
private testbed, hoping to get some insights from real data.
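As a rough sketch of what the IW bump buys for short transfers, one can count round trips under idealized slow start (cwnd doubles each RTT, no losses, and a typical ~15 KB web response of eleven 1460-byte segments assumed):

```python
def rtts_to_send(total_segments, iw):
    """Round trips needed to deliver total_segments under idealized
    slow start: cwnd starts at iw and doubles each RTT, no losses."""
    sent = 0
    cwnd = iw
    rtts = 0
    while sent < total_segments:
        sent += cwnd
        cwnd *= 2
        rtts += 1
    return rtts

print(rtts_to_send(11, 3))   # -> 3 RTTs with IW=3
print(rtts_to_send(11, 10))  # -> 2 RTTs with IW=10
```

On a high-RTT path, each round trip saved is directly visible to the user, which is the heart of the IW10 argument.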

Jerry

On Thu, Jul 15, 2010 at 1:48 PM, Stephen Hemminger
<shemminger(a)vyatta.com> wrote:
> [...]
> I am not convinced that a host being aggressive with initial cwnd (Linux) would
> not end up unfairly monopolizing available bandwidth compared to older more conservative
> implementations (Windows). Whether fairness is important or not is another debate.
>
>
From: Henrique de Moraes Holschuh on
On Wed, 14 Jul 2010, Ed W wrote:
> Hi, my network connection looks like 500Kbits with a round trip
> latency of perhaps 1s+ (it's a satellite link).

Last time I dealt with such stuff (hundreds of VSATs across the whole country,
arriving at a satellite base station), you absolutely had to use
performance-enhancing proxies in the SBS AND in the VSAT clients to get good
performance for typical end-user Internet usage. This was a few years ago,
but it probably hasn't changed much. I don't recall what proprietary stuff
was used for the proxy, but...

http://en.wikipedia.org/wiki/Performance_Enhancing_Proxy
http://sourceforge.net/projects/pepsal/

A Google search for pepsal will return a link to a PDF explaining the
design. Maybe that could be of some help to you?

--
"One disk to rule them all, One disk to find them. One disk to bring
them all and in the darkness grind them. In the Land of Redmond
where the shadows lie." -- The Silicon Valley Tarot
Henrique Holschuh
From: Patrick McManus on
On Wed, 2010-07-14 at 21:51 -0700, H.K. Jerry Chu wrote:
> except there are indeed bugs in the code today in that the
> code in various places assumes initcwnd as per RFC3390. So when
> initcwnd is raised, that actual value may be limited unnecessarily by
> the initial wmem/sk_sndbuf.

Thanks for the discussion!

Can you tell us more about the implementation concerns of initcwnd stored on
the route?

And while I'm asking for info, can you expand on the conclusion regarding poor
cache hit rates for reusing learned cwnds? (OK, I admit I only read the
slides... maybe the paper has more info?)

The article and slides are much appreciated and very interesting. I've long
been of the opinion that the downsides of being too aggressive once in a while
aren't all that serious anymore. As someone else said, in a non-reservation
world you are always trying to predict the future anyhow, and therefore
overflowing a queue is always possible no matter how conservative you are.



