
Talk:Bufferbloat

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by JimGettys (talk | contribs) at 18:34, 19 December 2011. The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

WikiProject iconThis article is within the scope of WikiProject Computing, a collaborative effort to improve the coverage of computers, computing, and information technology on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has been rated as Start-class on Wikipedia's content assessment scale.
This article has been rated as Low-importance on the project's importance scale.
This article is supported by Networking task force (assessed as Mid-importance).

This is not a new problem; RFC 970, "On Packet Switches With Infinite Storage" (http://tools.ietf.org/html/rfc970), describes the issue well.

Note that bufferbloat != jitter. 167.104.7.2 (talk) 20:36, 14 September 2011 (UTC)

I think this subject can get its own page, but maybe I am missing a good synonym where this is already discussed. 14:58, 7 January 2011 (UTC)[reply]

This needs to be incorporated in the article:

http://arstechnica.com/tech-policy/news/2011/01/understanding-bufferbloat-and-the-network-buffer-arms-race.ars :

He mentions that TCP congestion control (not flow control, which is something else) requires dropped packets to function, but that's not entirely true. TCP's transmission speed can be limited by the send and/or receive buffers and the round-trip time, or it can slow down because packets get lost. Both excessive buffering and excessive packet loss are unpleasant, so it's good to find some middle ground. 14:01, 10 January 2011 (UTC)
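The window-limit half of that observation is easy to make concrete (a minimal sketch; the function name and example figures are illustrative, not from the discussion): without any packet loss, a TCP sender can put at most one window of data on the wire per round trip, so the window and the RTT alone cap throughput.

```python
def window_limited_throughput(window_bytes, rtt_seconds):
    """Upper bound on TCP throughput when the send/receive window,
    not packet loss, is the limiting factor: one window per RTT."""
    return window_bytes / rtt_seconds

# Classic pre-window-scaling 64 KiB window over a 100 ms path:
rate = window_limited_throughput(65535, 0.1)  # → 655350.0 bytes/s (~5.2 Mbit/s)
```

This is the ceiling that window scaling (mentioned in the reply below) removes, letting a single flow fill much larger buffers.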

This is pretty moot: modern TCPs (anything later than Windows XP) do window scaling, and can and will fill arbitrarily sized buffers with even a single TCP flow. JimGettys (talk) 18:34, 19 December 2011 (UTC)[reply]


Just a note: maybe it is worth mentioning that control theory says that controlling systems with large delays from input to measurable response is difficult and easily results in unstable systems. I find this to be a reasonable explanation for the difficulties TCP etc. are having with large buffer sizes. —Preceding unsigned comment added by 79.136.60.104 (talk) 02:19, 27 February 2011 (UTC)[reply]
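The instability the IP editor describes shows up even in a toy model (a hypothetical sketch, not any real TCP algorithm): an integral controller reacting to a measurement that is even one step stale oscillates forever, while the same controller with fresh feedback settles immediately. Large buffers add exactly this kind of feedback delay.

```python
def simulate(gain, delay, setpoint=1.0, steps=30):
    """Integral controller driving x toward setpoint, but acting
    on a measurement that is `delay` steps old."""
    history = [0.0] * (delay + 1)   # newest value last
    outputs = []
    for _ in range(steps):
        stale = history[-1 - delay]             # delayed feedback
        history.append(history[-1] + gain * (setpoint - stale))
        outputs.append(history[-1])
    return outputs

fresh = simulate(gain=1.0, delay=0)   # settles at the setpoint at once
stale = simulate(gain=1.0, delay=1)   # oscillates 1, 2, 2, 1, 0, 0, ... forever
```

With delay=0 the system converges in one step; with delay=1 the same gain produces a sustained oscillation, which is the control-theoretic intuition behind the comment.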

Reducing the buffer size does _not_ eliminate the problem

"The problem can be eliminated by simply reducing the buffer size on the network hardware"

That is unfortunately not true; it is not that easy. It is true that when you have bufferbloat, reducing the buffer size reduces the negative impact, but in practice there is no such thing as the optimal buffer size. The right buffer size always depends on the transmission rate; however, you usually have multiple destinations with differing transmission rates, making it rather difficult to find one optimal buffer size. — Preceding unsigned comment added by Tddt (talkcontribs) 15:38, 24 July 2011 (UTC)[reply]
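The "right" size this comment alludes to is usually taken to be the bandwidth-delay product of the path, and computing it makes the difficulty concrete (a minimal sketch; the function name and example numbers are illustrative): each combination of rate and round-trip time yields a different "optimal" buffer, so no single static setting suits all destinations at once.

```python
def bdp_bytes(rate_bits_per_s, rtt_seconds):
    """Bandwidth-delay product: roughly the buffering one
    full-speed flow needs to keep the link busy."""
    return rate_bits_per_s * rtt_seconds / 8

# The same 100 Mbit/s port, two different destinations:
lan = bdp_bytes(100e6, 0.002)  # → 25000.0 bytes   (2 ms RTT)
wan = bdp_bytes(100e6, 0.150)  # → 1875000.0 bytes (150 ms RTT)
```

A buffer sized for the WAN path is 75 times too large for the LAN path, which is the sizing dilemma the comment describes.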

This comment is correct. JimGettys (talk) 18:34, 19 December 2011 (UTC)[reply]


This article still needs some serious rework

The ACM Queue article (to appear in CACM in January) is really the best publication to date on the bufferbloat topic, and should be read and consulted by anyone who wants to take this topic on properly.

I don't have time to do this today, and it is probably more appropriate if others undertake the surgery. I may get to it, but not anytime in the next month or two. JimGettys (talk) 18:34, 19 December 2011 (UTC)[reply]