Talk:Network congestion

WikiProject Computing / Networking (Rated C-class, Mid-importance)
This article is within the scope of WikiProject Computing, a collaborative effort to improve the coverage of computers, computing, and information technology on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks. This article has been rated as C-Class on the project's quality scale and as Mid-importance on the project's importance scale. It is supported by the Networking task force (marked as High-importance).

Utility functions

Shouldn't the utility functions be concave and strictly increasing? Diminishing returns as rates increase?--160.39.62.168 (talk) 02:07, 1 March 2011 (UTC)

You are right. I have also checked Kelly's paper http://www.statslab.cam.ac.uk/~frank/rate.pdf which says:

Assume that the utility U_r(x_r) is an increasing, strictly concave and continuously differentiable function of x_r

I changed convex to concave on the page. --Per Olofsson (talk) 13:37, 6 November 2013 (UTC)
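
For reference, a standard example satisfying these conditions in Kelly's framework is the weighted logarithmic utility used for weighted proportionally fair rate allocation; this is offered only as an illustration of "increasing, strictly concave and continuously differentiable", not as text for the article:

```latex
% Weighted log utility for source r with weight w_r > 0, defined for x_r > 0
U_r(x_r) = w_r \log x_r, \qquad
U_r'(x_r) = \frac{w_r}{x_r} > 0, \qquad
U_r''(x_r) = -\frac{w_r}{x_r^{2}} < 0 .
```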

[Network Congestion] Avoidance

I'd like to take umbrage at the statement that "The prevention of network congestion and collapse requires... End-to-end flow control mechanisms designed into the end points which respond to congestion and behave appropriately".

The example I offer in support of this taking of umbrage is ATM's Usage Parameter Control (and Network Parameter Control), where the only requirement on the end points relates to the source, which must not exceed bandwidth and jitter (delay-variation/burstiness) limits in transmission, and even then only so that the UPC/NPC functions do not delay or discard some of its transmissions to enforce conformance to the traffic contract. To ensure congestion avoidance, it is also necessary to ensure that shared resources, e.g. switch output buffers, are not oversubscribed by the set of connections routed through them and thus, e.g., liable to overflow; but neither the source nor the destination end point responds to congestion; rather, the actions of the source are entirely proactive. Thus congestion avoidance can be done by predicting the loads on these resources from, e.g., the bandwidths and jitter. However, this prediction of the effect of a connection on congestion is either an off-line function, e.g. at system design time in reliable real-time systems, or a function of connection admission control, and thus not "designed into the end points".
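
For concreteness, the per-connection policing that UPC/NPC performs can be sketched as a virtual-scheduling rate check in the style of the Generic Cell Rate Algorithm; the class name, parameter names and numbers below are mine, a hedged illustration rather than the I.371 text:

```python
# Sketch of a GCRA-style policer: T is the nominal inter-cell interval
# (1 / contracted rate) and tau the allowed jitter (cell delay variation
# tolerance). Names and values are illustrative.

class GcraPolicer:
    def __init__(self, T: float, tau: float):
        self.T = T
        self.tau = tau
        self.tat = 0.0  # theoretical arrival time of the next conforming cell

    def conforms(self, arrival_time: float) -> bool:
        if arrival_time < self.tat - self.tau:
            # Cell arrives too early for the traffic contract:
            # the UPC/NPC function may delay or discard it.
            return False
        # Conforming: advance the theoretical arrival time.
        self.tat = max(arrival_time, self.tat) + self.T
        return True

# Example: one cell per 10 time units allowed, with 5 units of jitter tolerance.
upc = GcraPolicer(T=10.0, tau=5.0)
for t in [0, 7, 20, 21, 40]:
    print(t, upc.conforms(t))   # the cell at t=21 is non-conforming
```

The point being that all of this sits at the network ingress and needs nothing reactive in the end points beyond the source keeping to its contract.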

A similar approach is taken in AFDX, where the source limits transmission by a Bandwidth Allocation Gap (BAG), and switches police traffic to the BAG and an allowed jitter tolerance on a per-Vlink basis, to ensure that the bandwidths of the switch outputs are not oversubscribed and that the switch buffers should not, or cannot (depending on the rigor of the prediction method, which is not given in the ARINC 664P7 standard), overflow.
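
To show what the "not oversubscribed" check amounts to, here is a hedged sketch that sums each Vlink's worst case of one maximum-size frame per BAG against the capacity of a switch output port; the field names and figures are illustrative assumptions, not taken from ARINC 664P7:

```python
# Static (design-time or admission-time) oversubscription check for an
# AFDX-like switch output port. Each Virtual Link is characterised by its
# BAG and a maximum frame size; worst sustained demand is one maximum-size
# frame per BAG.
from dataclasses import dataclass

@dataclass
class VirtualLink:
    name: str
    bag_s: float           # Bandwidth Allocation Gap, in seconds
    max_frame_bytes: int   # largest frame the source may send per BAG

def worst_case_bps(vl: VirtualLink) -> float:
    return vl.max_frame_bytes * 8 / vl.bag_s

def port_is_oversubscribed(vls, link_rate_bps: float) -> bool:
    # Sum the worst-case demand of every Vlink routed through this port.
    return sum(worst_case_bps(vl) for vl in vls) > link_rate_bps

# Example: three Vlinks sharing a 100 Mbit/s switch output.
vls = [VirtualLink("VL1", 0.002, 1518),
       VirtualLink("VL2", 0.008, 1518),
       VirtualLink("VL3", 0.016, 512)]
print(port_is_oversubscribed(vls, 100e6))   # False: about 7.8 Mbit/s in total
```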

In both ATM and AFDX, it can be argued that such methods are inefficient. However, so is the overprovisioning of networks for QoS purposes, as recommended by the Internet2 project (see the QoS page for refs). Indeed, these methods are, in effect, a way of quantifying the overprovisioning that is required to ensure or even guarantee (depending again on the rigor of the predictions) that congestion is avoided. They also underscore that overprovisioning the switch buffers is just as important as overprovisioning the bandwidths of the physical links in the network, which may not be appreciated where overprovisioning is done as an ad hoc process or using a "wet finger" approach.

So, whilst E2E flow control may be one way of solving the problem, it is not the only one, and thus not a necessary prerequisite, as is implied in this article. I suspect that the problem may be to do with a significant bias towards Ethernet networks and, more specifically, towards avoiding congestive collapse. And, to be fair, by paragraph 5 or 7 (depending on how you count) the section starts to allude to special measures, but it never really gets past the necessity of designed-in reactive functionality. Also, what is meant by "quality-of-service routing[sic]" is not clear – there seems to be no reference to such routeing on the QoS page.

However, even in Ethernet, there are switches available that do per-VLAN traffic shaping/policing, which would allow congestion to be avoided without the need for reactive mechanisms designed into the end points; specifically, they would operate on UDP flows, if these are separately identified, e.g., by VLAN Id and priority. Again, to be fair, these methods may only really apply to private networks, such as on-platform avionic networks and those in automation control etc. But to state, in effect, that "End-to-end flow control mechanisms designed into the end points which respond to congestion and behave appropriately" are the only way to do it is far too narrow.
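
As an illustration of such per-VLAN policing, here is a hedged sketch that classifies frames by their 802.1Q priority and VLAN Id and runs each flow through its own token bucket; the rates and bucket depths are invented for the example, not taken from any particular switch:

```python
# Per-(priority, VLAN) token-bucket policing, as a switch doing per-VLAN
# shaping/policing might apply it. The 802.1Q tag layout (PCP in bits 15..13
# of the TCI, VLAN Id in bits 11..0) is standard; everything else here is an
# illustrative assumption.

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0        # token refill rate in bytes/s
        self.burst = burst_bytes
        self.tokens = burst_bytes
        self.last = 0.0

    def allow(self, frame_bytes: int, now: float) -> bool:
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if frame_bytes <= self.tokens:
            self.tokens -= frame_bytes
            return True
        return False      # non-conforming: drop or queue, per policy

def flow_key(tci: int) -> tuple:
    """Classify by 802.1Q Priority Code Point and VLAN Id."""
    return (tci >> 13) & 0x7, tci & 0x0FFF

buckets = {}

def police(tci: int, frame_bytes: int, now: float) -> bool:
    key = flow_key(tci)
    if key not in buckets:
        buckets[key] = TokenBucket(rate_bps=10e6, burst_bytes=3036.0)
    return buckets[key].allow(frame_bytes, now)

# Example: a priority-5 frame on VLAN 100 arriving at t=0 conforms (bucket full).
print(police((5 << 13) | 100, 1518, 0.0))
```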

Graham.Fountain | Talk 12:58, 2 April 2012 (UTC)

That's all well and good, but what we really need to sort this out is some good references. I have tagged the statement in the article. --Kvng (talk) 18:25, 4 April 2012 (UTC)

Having thought about this a bit, it may be that the problem is one of semantics and the difference between "congestion avoidance", i.e. taking actions in response to incipient congestion, and what's done in some private networks, e.g. avionic and industrial networks, that might be referred to as "congestion prevention". This is taking continuous actions in the end systems that source the data and, generally, in the switches in the network, such that the network cannot become congested. In that case there is no need to take actions in specific cases – and such networks can be proved to have no emergent properties, rather than relying on what are essentially mathematical arguments based on assumptions about the self-similarity of the flows, which may or may not be reliable.

The two schemes that employ congestion prevention that come first to mind here are Time-Triggered Ethernet (TTE) and ATM and, perhaps more relevantly, ATM's avatar in the Ethernet context, the Avionics Full-Duplex Switched Ethernet (AFDX) protocol (the "coming down to Earth" aspect of an avatar is ironic, though). There are also a few other Ethernet-based protocols, such as Profinet and maybe Ethernet Powerlink, worth thinking about where more detail is relevant.

Currently, these methods are, as far as I understand the situation, limited to private networks such as avionic/space-borne systems and industrial control. However, there is some work relevant to the Internet itself. I have no idea how the time-domain constraints of TTE might be applied there. But regarding the frequency-domain control of ATM and AFDX, for example, the paper "Network Border Patrol: Preventing Congestion Collapse and Promoting Fairness in the Internet" by Célio Albuquerque et al. (IEEE/ACM Transactions on Networking, vol. 12, no. 1, February 2004) addresses this very issue, proposing network border patrol and enhanced core-stateless fair queueing for the prevention of Internet congestion collapse. It addresses only parts of the problem for real-time data transport, but it suggests interest in, and possibly much wider notability for, this approach than for TTE and AFDX themselves (I wouldn't mention ATM in this context).
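
For anyone not familiar with it, the core-router drop rule of core-stateless fair queueing (on which that paper builds) can be sketched roughly as below: border/edge routers label each packet with an estimate of its flow's arrival rate, and core routers drop probabilistically against an estimated fair share. The fair share is simply given here rather than estimated by the averaging the real algorithm uses, so treat this as a hedged illustration only:

```python
# Simplified CSFQ core-router decision: a packet labelled with flow rate r is
# kept with probability min(1, fair_share / r), so flows above the fair share
# are cut back to roughly the fair share without per-flow state in the core.
import random

def csfq_accept(label_rate_bps: float, fair_share_bps: float) -> bool:
    drop_prob = max(0.0, 1.0 - fair_share_bps / label_rate_bps)
    return random.random() >= drop_prob

# Example: with a 1 Mbit/s fair share, a 4 Mbit/s flow keeps ~25% of its
# packets, while a 0.5 Mbit/s flow is left untouched.
kept = sum(csfq_accept(4e6, 1e6) for _ in range(10_000))
print(kept / 10_000)            # roughly 0.25
print(csfq_accept(0.5e6, 1e6))  # always True
```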

So, (all) that being said, the question is, is there any value in a new section on these methods of congestion prevention, and if so, what should its title be? Graham.Fountain | Talk 15:15, 5 March 2013 (UTC)

I have created a draft for such a section, presently titled Congestion Prevention, at User:Graham.Fountain/Congestion prevention; however, as yet, it contains no references or citations. I will get around to adding these in time, but if anyone is interested in commenting, amending, or adding refs, please feel free. Graham.Fountain | Talk 10:44, 7 March 2013 (UTC)