Bufferbloat

Bufferbloat is high latency in packet-switched networks caused by excess buffering of packets. Bufferbloat can also cause packet delay variation (also known as jitter), as well as reduce the overall network throughput. When a router or switch is configured to use excessively large buffers, even very high-speed networks can become practically unusable for many interactive applications like Voice over IP (VoIP), online gaming, and even ordinary web surfing.

Some communications equipment manufacturers placed overly large buffers in some of their network products. In such equipment, bufferbloat occurs when a network link becomes congested, causing packets to become queued in buffers for too long. In a first-in first-out queuing system, overly large buffers result in longer queues and higher latency, and do not improve network throughput.

The bufferbloat phenomenon was first described as early as 1985.[1] It gained more widespread attention starting in 2009.[2]

Buffering

An established rule of thumb among network equipment manufacturers was to provide buffers large enough to accommodate at least 250 ms of buffering for a stream of traffic passing through a device. For example, a router's Gigabit Ethernet interface would require a relatively large 32 MB buffer.[3] Such sizing of the buffers can lead to failure of the TCP congestion control algorithm, causing problems such as high and variable latency, and can choke network bottlenecks for all other flows as the buffer fills with the packets of one TCP stream while other packets are dropped.[4] The buffers then take some time to drain before the TCP connection ramps back up to speed and fills them again.[5]
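This sizing follows from a simple bandwidth-delay product. The short Python sketch below reproduces the calculation; the 1 Gbit/s rate and 250 ms target are the values from the text, not universal constants.

    # Rule-of-thumb buffer size = link rate (bits/s) x buffering target (s) / 8
    link_rate_bps = 1_000_000_000   # Gigabit Ethernet interface
    target_s = 0.250                # 250 ms of buffering
    buffer_bytes = link_rate_bps * target_s / 8
    print(f"{buffer_bytes / 1e6:.1f} MB")   # about 31 MB, roughly the 32 MB cited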

A bloated buffer has an effect only when it is actually used. In other words, oversized buffers have a damaging effect only when the link they buffer becomes a bottleneck. When the bottleneck on the route to or from another host is not otherwise in contention, it is easy to check whether it is bloated using the ping utility provided by most operating systems. First, the other host should be pinged continuously; then, a several-seconds-long download from it should be started and stopped a few times. By design, the TCP congestion avoidance algorithm will rapidly fill up the bottleneck on the route. If downloading (or uploading, respectively) coincides with an immediate and significant increase in the round-trip time reported by ping, this demonstrates that the buffer of the current bottleneck in the download (or upload) direction is bloated. Since the increase in round-trip time is caused by the buffer at the bottleneck, the maximum increase gives a rough estimate of its size in milliseconds.[6]
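A rough automation of this check is sketched below in Python. The host name, the download URL, and the assumption of a Unix-like ping whose summary line reports a min/avg/max round-trip time are placeholders rather than part of the procedure described above.

    # Measure average ping RTT while idle, then while a bulk download
    # saturates the downstream bottleneck, per the procedure above.
    import re
    import subprocess
    import threading
    import time
    import urllib.request

    HOST = "example.net"                        # hypothetical test host
    BULK_URL = "http://example.net/large-file"  # hypothetical large download

    def avg_rtt_ms(count=10):
        out = subprocess.run(["ping", "-c", str(count), HOST],
                             capture_output=True, text=True).stdout
        # Summary line looks like: rtt min/avg/max/mdev = 9.1/10.2/12.3/0.8 ms
        match = re.search(r"= [\d.]+/([\d.]+)/", out)
        return float(match.group(1)) if match else None

    idle = avg_rtt_ms()

    # Start the bulk transfer in the background and let it ramp up.
    # The /dev/null sink also assumes a Unix-like system.
    threading.Thread(target=lambda: urllib.request.urlretrieve(BULK_URL, "/dev/null"),
                     daemon=True).start()
    time.sleep(2)

    loaded = avg_rtt_ms()
    print(f"idle RTT ~{idle} ms, RTT under load ~{loaded} ms")
    # A large, sustained increase under load suggests a bloated bottleneck buffer.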

In the previous example, using an advanced traceroute tool such as MTR instead of simple pinging will not only demonstrate the existence of a bloated buffer at the bottleneck, but will also pinpoint its location in the network. Traceroute achieves this by displaying the route (path) and measuring the transit delays of packets across the network, recording the round-trip times of the packets received from each successive host (remote node) along the route.[7]

Mechanism

The TCP congestion control algorithm relies on measuring the occurrence of packet drops to determine the available bandwidth. The algorithm speeds up the data transfer until packets start to drop, then slows the transmission rate down. Ideally, it keeps adjusting the transmission rate until it reaches an equilibrium at the speed of the link. However, for this to work, the feedback about packet drops must arrive in a timely manner, so that the algorithm can select a suitable transfer speed. With a large buffer that has been filled, packets still arrive at their destination, but with higher latency. Because no packets are dropped, TCP does not slow down once the uplink has been saturated, and the buffer fills further. Newly arriving packets are dropped only once the buffer is completely full. TCP may even decide that the path of the connection has changed, and again go into the more aggressive search for a new operating point.[8]
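This interaction can be illustrated with a deliberately simplified model: an additive-increase/multiplicative-decrease sender feeding a single drop-tail queue, with queueing delay equal to queue occupancy divided by the link rate. The link rate, packet size, base RTT, and buffer sizes below are illustrative assumptions, and the model omits most of real TCP's behavior.

    # Toy model: AIMD sender + drop-tail bottleneck queue. Larger buffers
    # delay the loss signal, so queueing delay grows before TCP backs off.
    LINK_RATE = 1_000_000   # bottleneck rate in bytes/s (assumed)
    PACKET = 1_500          # packet size in bytes
    BASE_RTT = 0.05         # propagation round-trip time in seconds (assumed)

    def worst_queue_delay(buffer_packets, rounds=200):
        cwnd = 10                                   # congestion window, packets
        queue = 0.0                                 # packets held at the bottleneck
        worst = 0.0
        per_rtt_capacity = LINK_RATE * BASE_RTT / PACKET
        for _ in range(rounds):
            # Packets beyond the link's per-RTT capacity accumulate in the queue.
            queue = max(0.0, queue + cwnd - per_rtt_capacity)
            if queue > buffer_packets:              # tail drop: buffer overflowed
                queue = float(buffer_packets)
                cwnd = max(10, cwnd // 2)           # multiplicative decrease on loss
            else:
                cwnd += 1                           # additive increase, no loss seen
            worst = max(worst, queue * PACKET / LINK_RATE)
        return worst

    for size in (32, 256, 2048):                    # buffer size in packets
        print(f"{size:5d}-packet buffer -> worst queueing delay "
              f"{worst_queue_delay(size) * 1000:.0f} ms")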

Packets are queued within a network buffer before being transmitted; in problematic situations, packets are dropped only if the buffer is full. On older routers, buffers were fairly small, so they filled quickly and packets began to drop shortly after the link became saturated; TCP could therefore adjust and the issue did not become apparent. On newer routers, buffers have become large enough to hold several megabytes of data, which can take seconds to empty. This causes the TCP algorithm that shares bandwidth on a link to react very slowly, since its behavior depends on packets actually being dropped when the transmission channel becomes saturated.
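For a sense of scale, the time needed to drain a full buffer is simply the amount of buffered data divided by the link rate; the figures below are illustrative assumptions rather than measurements.

    # Drain time of a filled buffer = buffered bits / uplink rate
    buffer_bytes = 3 * 10**6    # 3 MB of queued data (assumed)
    uplink_bps = 8 * 10**6      # 8 Mbit/s uplink (assumed)
    print(f"{buffer_bytes * 8 / uplink_bps:.1f} s to drain")   # 3.0 s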

The problem also affects other protocols. All packets passing through a simple buffer implemented as a single queue experience the same delay, so the latency of any connection that passes through a filled buffer is affected. Available channel bandwidth can also end up being unused, as some fast destinations may not be reached while buffers are clogged with data awaiting delivery to slow destinations, a result of contention between simultaneous transmissions competing for space in an already full buffer. This also reduces the interactivity of applications using other network protocols, including UDP and other datagram protocols used in latency-sensitive applications like VoIP and games.[9] In extreme cases, bufferbloat may cause failures in essential protocols such as DNS.

Impact on applications

Any type of service that requires consistently low latency or jitter-free transmission, whether at low or high bandwidth, can be severely affected, or even rendered unusable, by the effects of bufferbloat. Examples are voice calls, online gaming, video chat, and other interactive applications such as instant messaging and remote login. Latency has been identified as more important than raw bandwidth for many years.[citation needed]

When the bufferbloat phenomenon is present and the network is under load, even normal web page loads can take many seconds to complete, or simple DNS queries can fail due to timeouts.[10]

Diagnostic tools

The ICSI Netalyzr[11] is an online tool that can be used to check a network for the presence of bufferbloat, along with many other common configuration problems.[citation needed] The CeroWrt project also provides an easy procedure for determining whether a connection has excess buffering that will slow it down.[12]

Mitigations

The problem may be mitigated by reducing the buffer size on the OS[10] and in network hardware; however, this is not configurable on most home routers, broadband equipment, and switches, nor even feasible in today's broadband and wireless systems.[10] Some other mitigation approaches are described in the following subsections.
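Where such tuning is possible, the buffer-size reduction described above might look like the following minimal sketch on a Linux host. The interface name eth0, the queue length of 100 packets, and the availability of the iproute2 ip tool are assumptions, and this shrinks only the driver transmit queue, not buffers inside modems or other equipment.

    # Shrink the interface's transmit queue (Linux, iproute2; requires root).
    import subprocess

    subprocess.run(["ip", "link", "set", "dev", "eth0", "txqueuelen", "100"],
                   check=True)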

Network scheduler

A network scheduler is the component that manages the sequence in which network packets are transmitted. It has been successfully used to significantly mitigate the bufferbloat phenomenon when employing the CoDel or fair queuing CoDel (fq_codel) queuing disciplines, because these algorithms drop packets at the head of the queue.
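On Linux, such a queuing discipline can be installed with the tc tool; the sketch below assumes a kernel that includes fq_codel and an interface named eth0.

    # Replace the interface's root queuing discipline with fq_codel
    # (Linux tc; requires root privileges).
    import subprocess

    subprocess.run(["tc", "qdisc", "replace", "dev", "eth0", "root", "fq_codel"],
                   check=True)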

Several other queuing disciplines are available for active queue management and are generally used for traffic shaping, but none of them fundamentally changes the situation: although, for example, HTTP and VoIP traffic may be buffered independently, each buffer is still independently susceptible to bufferbloat. In practice, though, such schemes may help mitigate the problem,[10] for example by splitting one large buffer into multiple smaller buffers, or by isolating bloated queues and combining this with prioritisation.

  • CeroWrt is an open source project based on OpenWrt that includes AQM.[10]
  • CoDel is a scheduling algorithm with which a significant improvement can be achieved.

References

  1. ^ "On Packet Switches With Infinite Storage". 1985-12-31.
  2. ^ van Beijnum, Iljitsch (2011-01-07). "Understanding Bufferbloat and the Network Buffer Arms Race". Ars Technica. Retrieved 2011-11-12.
  3. ^ Guido Appenzeller; Isaac Keslassy; Nick McKeown (2004). "Sizing Router Buffers" (PDF). ACM SIGCOMM. ACM. Retrieved 2013-10-15.
  4. ^ Gettys, Jim (May–June 2011). "Bufferbloat: Dark Buffers in the Internet". IEEE Internet Computing. IEEE. pp. 95–96. doi:10.1109/MIC.2011.56. Retrieved 2012-02-20.
  5. ^ Nichols, Kathleen; Jacobson, Van (2012-05-06). "Controlling Queue Delay". ACM Queue. ACM Publishing. Retrieved 2013-09-27.
  6. ^ Clunis, Andrew (2013-01-22). "Bufferbloat demystified". Retrieved 2013-09-27.
  7. ^ "traceroute(8) – Linux man page". die.net. Retrieved 2013-09-27.
  8. ^ Jacobson, Van; Karels, MJ (1988). "Congestion avoidance and control" (PDF). ACM SIGCOMM Computer Communication Review. 18 (4). Archived from the original (PDF) on 2004-06-22.
  9. ^ "Technical Introduction to Bufferbloat". Bufferbloat.net. Retrieved 2013-09-27.
  10. ^ Gettys, Jim; Nichols, Kathleen (January 2012). "Bufferbloat: Dark Buffers in the Internet". Communications of the ACM. 55 (1). ACM: 57–65. doi:10.1145/2063176.2063196. Retrieved 2012-02-28.
  11. ^ "ICSI Netalyzr". berkeley.edu. Retrieved 30 January 2015.
  12. ^ "Cerowrt: Quick Test for Bufferbloat". bufferbloat.net. Retrieved 30 January 2015.
  13. ^ "DOCSIS "Upstream Buffer Control" feature". CableLabs. pp. 554–556. Retrieved 2012-08-09.