Fair queuing

Fair queuing is a family of scheduling algorithms used by process and network schedulers, e.g., to allow multiple packet flows to fairly share the capacity of a communications link. The advantage over conventional first-in, first-out (FIFO) queuing is that a high-data-rate flow, consisting of large or many data packets, cannot take more than its fair share of the link capacity.

Fair queuing is used in routers, switches, and statistical multiplexers that forward packets from a buffer. The buffer works as a queuing system, where the data packets are stored temporarily until they are transmitted. The buffer space is divided into many queues, each of which is used to hold the packets of one flow, defined for instance by source and destination IP addresses.


With a link data rate of R, the N active data flows (those with non-empty queues) are each serviced at an average data rate of R / N at any given time. Over short time intervals the data rate fluctuates around this value, since the packets are delivered sequentially.
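As a quick worked example of this sharing rule, with made-up figures:

```python
# Hypothetical figures: a 12 Mbit/s link shared by 3 active flows.
R = 12_000_000  # link data rate, bit/s
N = 3           # flows with non-empty queues
print(R / N)    # 4000000.0 -> each active flow averages 4 Mbit/s
```

If a fourth flow becomes active, each flow's average share drops to R / 4 = 3 Mbit/s; when a flow's queue empties, the remaining flows share its capacity.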

Fair queuing achieves max-min fairness, i.e., its first priority is to maximize the minimum data rate that any active data flow experiences; its second priority is to maximize the second-lowest data rate; and so on. This results in lower throughput (lower system spectrum efficiency in wireless networks) than maximum throughput scheduling, but avoids scheduling starvation of expensive flows.
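The max-min objective can be computed with the standard progressive-filling idea: satisfy the smallest demands first, then split what remains equally among the still-unsatisfied flows. The sketch below is illustrative only; the function name and the example numbers are invented, not taken from any cited source.

```python
def max_min_fair(capacity, demands):
    """Progressive filling: serve the smallest demands first, then split
    the remaining capacity equally among the still-unsatisfied flows."""
    alloc = {}
    remaining = float(capacity)
    for i, d in sorted(enumerate(demands), key=lambda pair: pair[1]):
        share = remaining / (len(demands) - len(alloc))  # equal split of what is left
        alloc[i] = min(float(d), share)  # a flow never gets more than it demands
        remaining -= alloc[i]
    return [alloc[i] for i in range(len(demands))]

# Hypothetical example: capacity 10, per-flow demands 8, 2 and 6.
print(max_min_fair(10, [8, 2, 6]))  # [4.0, 2.0, 4.0]
```

Note that the flow demanding only 2 units is fully satisfied, and no other allocation could raise the minimum rate of the remaining flows above 4.0.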

Various sources disagree on what is "fair". Some implementations do round-robin scheduling of packets; others adjust for packet sizes so that each flow is given the opportunity to transmit an equal amount of data. Weighted fair queuing additionally associates a weight with each queue.

A fair queuing algorithm

This algorithm attempts to emulate the fairness of bitwise round-robin sharing of link resources among competing flows. Packet-based flows, however, must be transmitted packet by packet and in sequence. Fair queuing determines the transmission order by modeling the finish time of each packet as if the packets could be transmitted bit by bit, round robin. The packet with the earliest finish time according to this modeling is the next selected for transmission.

Modeling of actual finish time, while feasible, is computationally intensive. The model needs to be substantially recomputed every time a packet is selected for transmission and every time a new packet arrives into any queue.

To reduce the computational load, the concept of virtual time is introduced. Finish time for each packet is computed on this alternative, monotonically increasing virtual timescale. While virtual time does not accurately model the time at which packets complete their transmissions, it does accurately model the order in which the transmissions must occur to meet the objectives of the full-featured model. Using virtual time, it is unnecessary to recompute the finish time for previously queued packets. Although the finish time, in absolute terms, of existing packets is potentially affected by new arrivals, finish time on the virtual timescale is unchanged: the virtual timescale warps with respect to real time to accommodate any new transmission.

The virtual finish time for a newly queued packet is the virtual finish time of the packet ahead of it in its flow's queue, plus the new packet's size. If no packets are queued for the flow, the virtual finish time is the current virtual time plus the packet's size, where the current virtual time is the virtual finish time assigned to the packet that most recently completed transmission, plus progress on the current transmission (if any).

With the virtual finish times of all candidate packets (i.e., the packets at the heads of all non-empty flow queues) computed, fair queuing selects and transmits the packet with the minimum virtual finish time.
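Putting the steps above together, the virtual-time scheme can be sketched as follows. This is an illustrative simplification, not the canonical Demers–Keshav–Shenker implementation: the class and method names are invented, and virtual time simply jumps to the finish time of each served packet rather than tracking progress on an in-flight transmission.

```python
class FairQueueScheduler:
    """Sketch of virtual-time fair queuing; not a production implementation."""

    def __init__(self):
        self.queues = {}         # flow id -> list of (virtual finish time, size)
        self.last_finish = {}    # flow id -> virtual finish time of its newest packet
        self.virtual_time = 0.0  # advances as packets are served

    def enqueue(self, flow, size):
        # New packet's virtual finish time: the finish time of the packet ahead
        # of it in the same flow (or the current virtual time if the flow is
        # idle), plus the packet's own size. max() covers both cases.
        finish = max(self.virtual_time, self.last_finish.get(flow, 0.0)) + size
        self.last_finish[flow] = finish
        self.queues.setdefault(flow, []).append((finish, size))

    def dequeue(self):
        # Among the head-of-queue packets of all non-empty flows, transmit the
        # one with the smallest virtual finish time.
        candidates = [(q[0][0], flow) for flow, q in self.queues.items() if q]
        if not candidates:
            return None
        finish, flow = min(candidates)
        _, size = self.queues[flow].pop(0)
        self.virtual_time = finish  # simplification: ignore in-progress transmission
        return flow, size

# Hypothetical traffic: flow "A" sends two 100-byte packets, flow "B" one 300-byte packet.
fq = FairQueueScheduler()
fq.enqueue("A", 100)
fq.enqueue("A", 100)
fq.enqueue("B", 300)
print([fq.dequeue()[0] for _ in range(3)])  # ['A', 'A', 'B']
```

The order is the same even if B's large packet arrives first: A's packets finish earlier on the virtual timescale (at 100 and 200, versus 300 for B), so the small-packet flow is not delayed behind the large packet as it would be under FIFO.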


History

The term "fair queuing" was coined by John Nagle in 1985 while proposing round-robin scheduling in the gateway between a local area network and the internet to reduce network disruption from badly behaved hosts.[1][2][3] A byte-weighted version was proposed by A. Demers, S. Keshav and S. Shenker in 1989.[4][5]

References

  1. ^ John Nagle, "On Packet Switches with Infinite Storage", RFC 970, IETF, December 1985.
  2. ^ Nagle, J. B. (1987). "On Packet Switches with Infinite Storage". IEEE Transactions on Communications 35 (4): 435. doi:10.1109/TCOM.1987.1096782.
  3. ^ Phillip Gross (January 1986), Proceedings of the 16–17 January 1986 DARPA Gateway Algorithms and Data Structures Task Force, IETF, pp. 5, 98, retrieved 2015-03-04. "Nagle presented his 'fair queuing' scheme, in which gateways maintain separate queues for each sending host. In this way, hosts with pathological implementations can not usurp more than their fair share of the gateway's resources. This invoked spirited and interested discussion."
  4. ^ Demers, Alan; Keshav, Srinivasan; Shenker, Scott (1989). "Analysis and Simulation of a Fair Queueing Algorithm". ACM SIGCOMM Computer Communication Review 19 (4): 1–12. doi:10.1145/75247.75248.
  5. ^ Demers, Alan; Keshav, Srinivasan; Shenker, Scott (1990). "Analysis and Simulation of a Fair Queueing Algorithm". Internetworking: Research and Experience 1: 3–26.