Weighted fair queueing

From Wikipedia, the free encyclopedia

Weighted fair queueing (WFQ) is a data packet scheduling technique that allows different scheduling priorities to be assigned to statistically multiplexed data flows. Weighted fair queueing is popular because it approximates generalized processor sharing "to within one packet transmission time, regardless of the arrival patterns."[1]
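In practice, this packet-by-packet approximation of GPS is typically implemented by stamping each arriving packet with a virtual finish time and always transmitting the queued packet with the smallest stamp. The sketch below is a simplification for illustration: it advances the virtual clock to the finish time of each departing packet rather than tracking the exact GPS virtual-time function, and all names are made up for the example.

```python
import heapq

class WFQScheduler:
    """Simplified WFQ sketch: packets are served in order of virtual
    finish time, computed per flow from packet size and flow weight."""

    def __init__(self):
        self.heap = []           # (finish_time, seq, flow_id, size)
        self.last_finish = {}    # last virtual finish time per flow
        self.virtual_time = 0.0  # simplified virtual clock
        self.seq = 0             # tie-breaker for equal finish times

    def enqueue(self, flow_id, size, weight):
        # A packet starts (virtually) when the previous packet of its
        # flow finishes, or now, whichever is later.
        start = max(self.virtual_time, self.last_finish.get(flow_id, 0.0))
        finish = start + size / weight  # heavier weight => earlier finish
        self.last_finish[flow_id] = finish
        heapq.heappush(self.heap, (finish, self.seq, flow_id, size))
        self.seq += 1

    def dequeue(self):
        finish, _, flow_id, size = heapq.heappop(self.heap)
        self.virtual_time = finish  # simplified virtual-clock update
        return flow_id, size

# Hypothetical example: two flows send equal-sized packets, but flow
# "a" has twice the weight of flow "b", so it is served twice as often.
sched = WFQScheduler()
sched.enqueue("a", 100, weight=2)
sched.enqueue("b", 100, weight=1)
sched.enqueue("a", 100, weight=2)
order = [sched.dequeue()[0] for _ in range(3)]
# Service order alternates in proportion to the weights: a, b, a.
```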

WFQ is a generalization of fair queuing (FQ). In both WFQ and FQ, each data flow has a separate FIFO queue. In FQ, with a link data rate of R, at any given time the N active data flows (the ones with non-empty queues) are serviced simultaneously, each at an average data rate of R/N. Since each data flow has its own queue, an ill-behaved flow (one that has sent larger packets, or more packets per second, than the others since it became active) will only punish itself and not the other sessions.

As opposed to FQ, WFQ allows different sessions to have different service shares. If N data flows are currently active, with weights w_1, w_2, ..., w_N, then data flow number i will achieve an average data rate of

R w_i / (w_1 + w_2 + ... + w_N)
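The rate share above can be computed directly from the link rate and the weights of the active flows; a minimal sketch, with a hypothetical link rate and weights chosen for illustration:

```python
def wfq_rates(link_rate, weights):
    """Average data rate each active flow receives under WFQ:
    rate_i = link_rate * w_i / (w_1 + w_2 + ... + w_N)."""
    total = sum(weights)
    return [link_rate * w / total for w in weights]

# Hypothetical example: a 10 Mbit/s link shared by three active flows
# with weights 1, 2 and 5.
rates = wfq_rates(10e6, [1, 2, 5])
# Each flow's share is proportional to its weight:
# 1.25, 2.5 and 6.25 Mbit/s respectively.
```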

It can be proven that in a network of WFQ switches, an end-to-end delay bound can be guaranteed for a data flow that is leaky-bucket constrained.[2] By regulating the WFQ weights dynamically, WFQ can be used to control quality of service, for example to achieve a guaranteed data rate.

Proportional fairness can be achieved by setting the weights to w_i = 1/c_i, where c_i is the cost per data bit of data flow i. For example, in CDMA spread-spectrum cellular networks, the cost may be the required energy (the interference level), and in dynamic channel allocation systems, the cost may be the number of nearby base station sites that cannot use the same frequency channel, so as to avoid co-channel interference.
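The weight assignment w_i = 1/c_i can be illustrated with a small calculation; the link rate and per-bit costs below are hypothetical:

```python
link_rate = 10e6          # hypothetical 10 Mbit/s link
costs = [0.5, 1.0, 4.0]   # hypothetical cost per data bit for each flow

# Proportionally fair weights: w_i = 1 / c_i, so cheaper flows
# receive proportionally larger shares of the link.
weights = [1.0 / c for c in costs]

# Resulting WFQ rate shares: rate_i = link_rate * w_i / sum(w).
total = sum(weights)
rates = [link_rate * w / total for w in weights]
# The flow with the lowest per-bit cost gets the largest share.
```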

The algorithm was first proposed in 1989.[3][1]

References

  1. Parekh, A. K.; Gallager, R. G. (1993). "A generalized processor sharing approach to flow control in integrated services networks: The single-node case". IEEE/ACM Transactions on Networking 1 (3): 344. doi:10.1109/90.234856.
  2. Stiliadis, D.; Varma, A. (1998). "Latency-rate servers: A general model for analysis of traffic scheduling algorithms". IEEE/ACM Transactions on Networking 6 (5): 611. doi:10.1109/90.731196.
  3. Demers, A.; Keshav, S.; Shenker, S. (1989). "Analysis and simulation of a fair queueing algorithm". ACM SIGCOMM Computer Communication Review 19 (4): 1. doi:10.1145/75247.75248.