
Bufferbloat

From Wikipedia, the free encyclopedia


Bufferbloat is a phenomenon in a packet-switched computer network whereby excess buffering of packets inside the network causes high latency and jitter, as well as reduced overall network throughput. The term was coined by Jim Gettys in late 2010.[1]

This problem is caused mainly by router and switch manufacturers making incorrect assumptions about whether to buffer packets or drop them. As a general rule, packets should not be buffered for more than a few milliseconds; holding them longer interferes with TCP's congestion control and produces high latency. When bufferbloat is present and the network is under load, an ordinary web page can take many seconds to load, and interactive applications such as VoIP, networked gaming, text and video chat, and remote login become next to impossible to use.

Although latency has been identified as more important than bandwidth for many years,[2] the problem of bufferbloat has become increasingly obvious as falling RAM prices have made large buffers extremely cheap to implement.

The problem can be eliminated simply by reducing the buffer size on the network hardware; however, buffer size is not configurable on most routers and switches.

Details

The problem is that the TCP congestion avoidance algorithm relies on packet drops to determine the available bandwidth. It speeds up the data transfer until packets start to drop, then slows down, ideally oscillating around an equilibrium at the speed of the link. For this to work, the packet drops must occur in a timely manner, so that the algorithm can settle on a suitable transfer speed. With a large buffer, packets are not dropped but merely delayed, so TCP receives no signal to slow down even though it should. It does not slow down until it has sent so much beyond the capacity of the link that the buffer finally fills and overflows, by which point it has far overestimated the speed of the link.
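
This behaviour can be sketched with a simple fluid-model simulation, written here in Python: a sender ramps its rate additively against a drop-tail queue until the queue overflows. The bottleneck rate, ramp rate, time step, and buffer sizes are illustrative assumptions, not figures from this article.

    LINK_RATE = 125_000   # bottleneck drain rate in bytes/s (roughly 1 Mbit/s)
    RAMP = 12_500         # sender's additive rate increase, in bytes/s per second
    STEP = 0.001          # simulation time step in seconds

    def time_to_congestion_signal(buffer_bytes):
        """Return (time until first drop, queueing delay then, rate overshoot then)."""
        rate = LINK_RATE / 2          # the sender starts below link capacity
        queue = 0.0                   # bytes currently sitting in the buffer
        t = 0.0
        while queue <= buffer_bytes:  # drop-tail: no loss until the buffer overflows
            queue = max(queue + (rate - LINK_RATE) * STEP, 0.0)
            rate += RAMP * STEP
            t += STEP
        return t, queue / LINK_RATE, rate / LINK_RATE

    for size in (16_000, 256_000, 4_000_000):   # 16 kB, 256 kB and 4 MB buffers
        t, delay, overshoot = time_to_congestion_signal(size)
        print(f"{size / 1e6:.3f} MB buffer: first drop after {t:5.1f} s, "
              f"queueing delay {delay:5.2f} s, sender at {overshoot:.1f}x link speed")

With a 16 kB buffer the first drop arrives with little queueing delay and only a small overshoot, whereas a 4 MB buffer delays the congestion signal by tens of seconds and lets the sender reach several times the link speed before it learns anything went wrong.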

In a network buffer, packets are queued before being transmitted, and are dropped only when the buffer is full. On older routers, buffers were fairly small, so they filled quickly and packets began to drop shortly after the link became saturated, allowing the TCP protocol to adjust. On newer routers, buffers have become large enough to hold several megabytes of data, which at the 1 Mbit/s line rate typical of residential Internet access translates into queueing delays of 10 seconds or more. This causes the TCP algorithm to work erratically and possibly even time out completely.
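
The "10 seconds or more" figure follows from dividing the buffer size by the line rate; a short Python check, assuming a hypothetical two-megabyte buffer, is:

    buffer_bytes = 2_000_000               # a couple of megabytes of buffering
    link_bps = 1_000_000                   # 1 Mbit/s residential line rate
    delay_s = buffer_bytes * 8 / link_bps  # time needed to drain a full buffer
    print(f"worst-case queueing delay: {delay_s:.0f} seconds")  # -> 16 seconds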

The problem also affects other protocols. Since the buffer can easily accumulate several seconds' worth of data before packets start to drop, and every packet must wait in the buffer until it is transmitted, interactivity suffers and latency problems appear for gamers and VoIP users. This remains the case when DiffServ is used to prioritise traffic, which gives each class of traffic its own buffer (queue): HTTP and VoIP may be buffered independently, but each buffer is still independently susceptible to bufferbloat.
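
As a rough illustration of why per-class queues do not cure the problem, the following Python sketch (with made-up arrival rates, link shares, and buffer sizes) computes the worst-case delay that each class's own drop-tail queue can add:

    LINK = 125_000                        # 1 Mbit/s link, expressed in bytes/s

    classes = {
        # name: (arrival rate in bytes/s, share of the link, buffer in bytes)
        "bulk HTTP": (150_000, 0.9, 2_000_000),
        "VoIP":      (6_000,   0.1, 64_000),
    }

    for name, (arrival, share, buf) in classes.items():
        drain = LINK * share              # this class's slice of the link
        if arrival <= drain:
            print(f"{name}: queue stays empty, added delay ~0 s")
        else:
            # under sustained overload the queue fills to its limit, so a newly
            # queued packet waits buffer/drain seconds before transmission
            print(f"{name}: queue fills, added delay up to {buf / drain:.1f} s")

Here the small VoIP queue stays empty, but the HTTP class still bloats its own buffer and sees roughly 18 seconds of added delay, so prioritisation protects other classes without removing bufferbloat from the class that overruns its share.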

With TCP, the extra delays that bufferbloat introduces during network congestion limit the speed of Internet connections. Other network protocols, including UDP-based ones, also appear to be affected, harming interactive Web 2.0 applications, gaming, and VoIP.

See also

References