[personal profile] fanf
On Wednesday I asked about alternatives to window-based protocols. An answer is "rate-based". For example, see this paper from 1995 which discusses congestion control in ATM.

Of course TCP has an implicit measure of its sending rate, because it maintains both a congestion window and an RTT estimate, so its average sending rate is cwnd/rtt - but this is not explicit in the way it is for ATM. And in fact TCP's rate can be very bursty and is often unstable in adverse conditions. However the cwnd does give you a direct measurement of the amount of buffering you need in case it is necessary to retransmit lost packets.
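As a back-of-the-envelope illustration of that implicit rate (the numbers here are invented, not from the post):

```python
# TCP's implicit average sending rate is cwnd/rtt.
cwnd_bytes = 64 * 1024    # hypothetical congestion window: 64 KiB
rtt_seconds = 0.1         # hypothetical round-trip time: 100 ms

rate_bytes_per_sec = cwnd_bytes / rtt_seconds
print(rate_bytes_per_sec)  # 655360.0 bytes/s, i.e. roughly 5.2 Mbit/s
```

Note this is only an average: within an RTT, an unpaced sender can emit the whole window back-to-back, which is exactly the burstiness mentioned above.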

Edit: There's a really nice summary of the burstiness of TCP in this paper which brilliantly turns what is commonly viewed as a disadvantage into a benefit for dynamic traffic splitting.

Someone on #cl pointed me to XCP, which is a really clever congestion control protocol. It has been implemented to work under TCP, but could in principle work with any unicast transport protocol. One thing that I quite like about it is that it agrees with my intuition that the Internet would work better if routers and hosts co-operated more (one two).

In XCP, senders annotate their packets with their current RTT and cwnd parameters (measured by TCP in the usual way) plus their desired cwnd increase. Routers along the way can adjust the increase according to how busy their links are, and even make it negative if there is too much traffic. The receiver then returns the feedback to the sender in its ACKs. The really brilliant thing is that routers do not need to keep any per-flow state - compare the ATM paper above, which does require per-flow state and is therefore hopelessly unscalable. Imagine the number of concurrent flows and the flow churn rate of a router on the LINX!

Furthermore, XCP separates aggregate efficiency from fair allocation of bandwidth between flows. The fairness controller can implement different policies, which could allow for more precise QoS guarantees, bandwidth allocation based on price paid, etc.
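A toy sketch of the stateless per-packet feedback idea (function names and the clamping rule are my invention; the real XCP efficiency and fairness controllers are considerably more subtle):

```python
# Toy sketch of XCP-style stateless feedback: everything the router
# needs (the sender's RTT and requested cwnd change) travels in the
# packet header, so the router keeps no per-flow state.

def router_adjust(requested_increase: float, spare_bandwidth: float,
                  rtt: float) -> float:
    """Clamp a sender's requested cwnd change to this link's spare capacity.

    When the link is overloaded, spare_bandwidth is negative, so the
    grant is negative too and the sender will shrink its window.
    """
    allowance = spare_bandwidth * rtt  # bytes this flow may add per RTT
    return min(requested_increase, allowance)

def sender_apply(cwnd: float, feedback: float) -> float:
    """The receiver echoes the final feedback in its ACK; the sender applies it."""
    return max(1.0, cwnd + feedback)

print(router_adjust(10000, 5000, 0.1))   # uncongested link: grant is capped at 500
print(router_adjust(10000, -2000, 0.1))  # congested link: grant is negative
```

Each router on the path applies its own adjustment in turn, so the sender ends up with the most conservative grant along the whole path.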

Really really cool, but really really hard to deploy widely. It's the kind of thing that I suppose would fit in with David D. Clark's Future Internet Network Design research programme, which is supposed to come up with new ideas for the long term. "To do that, you have to free yourself from what the world looks like now."

Date: 2006-04-09 11:27 (UTC)
From: [identity profile] dwmalone.livejournal.com
Would backpressure be slower than (say) TCP waiting about an RTT to find out that there had been a loss?

There are certainly interesting interactions between TCP and shared media (because you may have multiple buffers that the TCP streams don't share) and between TCP and media where transmission chances are shared with Ethernet-like mechanisms. We've looked at using some of the wifi prioritisation stuff to work around these effects (see here) and other people have suggested things like modifying TCP or active queueing to get around it. As with all this type of research YMMV ;-)

On wifi there are also other clever things you can do. There was a paper at SIGCOMM last year suggesting a nice way to adjust the random backoffs automatically to match the amount of traffic on the network and reduce the collisions to a reasonable level (I think it was called something like "Idle Sense").
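A caricature of the idea (the constants below are invented; the real algorithm derives its target and AIMD parameters in the paper): each station measures the mean number of idle slots between transmission attempts on the channel - a quantity every station observes identically - and steers its contention window towards a fixed target, with no need to count how many stations there are.

```python
# Rough sketch of an Idle Sense style contention-window update
# (parameter values invented for illustration).

TARGET_IDLE = 5.7  # target mean idle slots between attempts (PHY-dependent)
ALPHA = 1 / 1.2    # multiplicative decrease factor (invented)
BETA = 6.0         # additive increase step (invented)

def update_cw(cw: float, observed_idle: float) -> float:
    if observed_idle < TARGET_IDLE:
        # Channel busier than the target: too many collisions, so
        # grow the contention window additively.
        return cw + BETA
    # Channel idler than the target: capacity is being wasted, so
    # shrink the window multiplicatively and transmit more aggressively.
    return max(1.0, cw * ALPHA)
```

Because everyone reacts to the same shared observation, the windows converge without any explicit coordination.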

(For faster wifi networks, it looks like the per-packet overheads are really the killer, rather than the collisions. Reducing the actual frame headers seems to be hard, according to the PHY level guys, as you need to train your receiver. So people are looking at other ways of getting around them.)
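A quick bit of arithmetic (numbers invented) shows why fixed per-frame overheads come to dominate: as the PHY rate rises, the payload airtime shrinks but the preamble/header time does not.

```python
# Illustration of per-frame overhead dominating at high PHY rates.
OVERHEAD_US = 50.0       # fixed per-frame overhead (preamble etc.), microseconds
PAYLOAD_BITS = 1500 * 8  # one 1500-byte frame

def efficiency(phy_mbps: float) -> float:
    """Fraction of airtime spent on payload at a given PHY rate."""
    payload_us = PAYLOAD_BITS / phy_mbps  # Mbit/s == bits per microsecond
    return payload_us / (payload_us + OVERHEAD_US)

for rate in (11, 54, 600):
    print(rate, round(efficiency(rate), 2))
```

At 11 Mbit/s the overhead barely matters, but at hundreds of Mbit/s well over half the airtime goes on overhead - hence ideas like frame aggregation, which amortise one overhead across several packets.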
