Generalized processor sharing

From Wikipedia, the free encyclopedia

Generalized processor sharing (GPS) is an ideal scheduling algorithm for network schedulers.

It is related to the fair queuing principle, which groups packets into classes and shares the service capacity among them. GPS shares this capacity according to some fixed weights.[1]

In processor scheduling, generalized processor sharing is "an idealized scheduling algorithm that achieves perfect fairness. All practical schedulers approximate GPS and use it as a reference to measure fairness."[2] Generalized processor sharing assumes that traffic is fluid (infinitesimal packet sizes) and can be arbitrarily split. There are several service disciplines that track the performance of GPS quite closely, such as weighted fair queuing (WFQ),[3] also known as packet-by-packet generalized processor sharing (PGPS).

Justification

In a network such as the Internet, different application types require different levels of performance. For example, email is a genuinely store-and-forward application, but videoconferencing is not, since it requires low latency. When packets are queued up on one end of a congested link, the node usually has some freedom in deciding the order in which to send the queued packets. One example ordering is simply first-come, first-served, which works fine if the queues are small, but can cause problems if latency-sensitive packets are blocked by packets from bursty, higher-bandwidth applications.

Details

In GPS, a scheduler handling N flows (also called "classes" or "sessions") is configured with one weight w_i for each flow. GPS then ensures that, for any flow i and any time interval (s,t] on which flow i is continuously backlogged (i.e., its queue is never empty), the following relation holds for every other flow j

w_j O_i(s,t) \geq  w_i O_j(s,t)

where O_k(s,t) denotes the number of bits of flow k output during the interval (s,t].

Then, it can be proved that each flow i receives at least the rate

R_i = \frac{w_i}{\sum_{j=1}^N w_j} R

where R is the rate of the server.
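The guaranteed rate follows directly from the weights; a minimal Python sketch (the function name and values are illustrative, not part of any standard API):

```python
def gps_guaranteed_rate(weights, i, R):
    """Minimum rate GPS guarantees to flow i: R_i = w_i / sum(w_j) * R."""
    return weights[i] / sum(weights) * R

# Server of rate 1.0 with weights 2, 1, 1:
weights = [2, 1, 1]
print(gps_guaranteed_rate(weights, 0, 1.0))  # 0.5
print(gps_guaranteed_rate(weights, 1, 1.0))  # 0.25
```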

This is a minimum guaranteed rate. If some flow does not use its bandwidth during some period, the remaining capacity is shared among the active flows in proportion to their respective weights. For example, consider a GPS server with w_1=2, w_2=w_3=1. The first flow is guaranteed at least half of the capacity, whereas the other two are guaranteed only a quarter each. Nevertheless, if on some time interval (s,t] only the second and third flows are active, each of them receives half of the capacity.
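The instantaneous GPS rate of each flow depends only on which flows are currently backlogged; this redistribution can be sketched as follows (function name and values are illustrative):

```python
def gps_instantaneous_rates(weights, active, R):
    """Share the full capacity R among the currently backlogged flows,
    in proportion to their weights; inactive flows receive rate 0."""
    total = sum(weights[i] for i in active)
    return [weights[i] / total * R if i in active else 0.0
            for i in range(len(weights))]

weights = [2, 1, 1]
# All three flows backlogged: rates 1/2, 1/4, 1/4.
print(gps_instantaneous_rates(weights, {0, 1, 2}, 1.0))
# Only flows 2 and 3 (indices 1 and 2) backlogged: each gets half.
print(gps_instantaneous_rates(weights, {1, 2}, 1.0))
```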

Implementations, parametrization and fairness

In GPS, and in all protocols inspired by it, the choice of the weights is left to the network administrator.

Generalized processor sharing assumes that the traffic is fluid, i.e., infinitely divisible, so that whenever an application type has packets in the queue, it receives exactly the fraction of the server given by the formula above. However, real traffic is not fluid: it consists of packets, possibly of variable sizes. Therefore, GPS is mostly a theoretical idea, and several scheduling algorithms have been developed to approximate this GPS ideal: PGPS, a.k.a. weighted fair queuing, is the best-known implementation of GPS, but it has some drawbacks, and several other implementations have been proposed, such as deficit round robin or WF²Q.[4]
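The core idea behind PGPS/WFQ can be illustrated with finish tags: each packet of flow f receives a tag that advances by size/weight, and packets are served in increasing tag order. The sketch below is a deliberate simplification (real WFQ computes tags against the GPS virtual time; here all packets are assumed to arrive together, and names are illustrative):

```python
import heapq

def wfq_order(packets, weights):
    """Much-simplified PGPS/WFQ: each packet of flow f gets finish tag
    F = previous tag of f + size / weight(f), and packets are served
    in increasing tag order (arrival sequence breaks ties).
    packets: list of (flow_id, size), all assumed backlogged from time 0."""
    last_tag = {f: 0.0 for f in weights}
    heap = []
    for seq, (f, size) in enumerate(packets):
        last_tag[f] += size / weights[f]
        heapq.heappush(heap, (last_tag[f], seq, f, size))
    order = []
    while heap:
        _, _, f, size = heapq.heappop(heap)
        order.append((f, size))
    return order

# Flow 0 has weight 2, flow 1 has weight 1; equal-size packets:
weights = {0: 2.0, 1: 1.0}
pkts = [(0, 100), (0, 100), (1, 100), (1, 100)]
print(wfq_order(pkts, weights))  # [(0, 100), (0, 100), (1, 100), (1, 100)]
```

With these tags the heavier flow's packets accumulate smaller finish times (50, 100 versus 100, 200), so they are served earlier, which is the weighted-fairness effect PGPS approximates.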

GPS is considered a fair ideal, and all of its approximations "use it as a reference to measure fairness."[2] Nevertheless, several fairness measures exist.

GPS is insensitive to packet sizes, since it assumes a fluid model.

References

  1. ^ Parekh, A. K.; Gallager, R. G. (1993). "A generalized processor sharing approach to flow control in integrated services networks: The single-node case" (PDF). IEEE/ACM Transactions on Networking 1 (3): 344. doi:10.1109/90.234856.
  2. ^ a b Li, T.; Baumberger, D.; Hahn, S. (2009). "Efficient and scalable multiprocessor fair scheduling using distributed weighted round-robin" (PDF). ACM SIGPLAN Notices 44 (4): 65. doi:10.1145/1594835.1504188.
  3. ^ Demers, A.; Keshav, S.; Shenker, S. (1989). "Analysis and simulation of a fair queueing algorithm". ACM SIGCOMM Computer Communication Review 19 (4): 1. doi:10.1145/75247.75248.
  4. ^ Bennett, J. C. R.; Zhang, Hui (1996). "WF²Q: Worst-case fair weighted fair queueing". Proceedings of IEEE INFOCOM '96. Conference on Computer Communications 1. p. 120. doi:10.1109/INFCOM.1996.497885. ISBN 0-8186-7293-5.