Generalized processor sharing

Generalized processor sharing (GPS) is an ideal scheduling algorithm for network schedulers.

It is related to the fair queuing principle: packets are grouped into classes, and the service capacity is shared between them. GPS shares this capacity according to fixed weights.[1]

In processor scheduling, generalized processor sharing is "an idealized scheduling algorithm that achieves perfect fairness. All practical schedulers approximate GPS and use it as a reference to measure fairness."[2] Generalized processor sharing assumes that traffic is fluid (infinitesimal packet sizes) and can be arbitrarily split. Several service disciplines track the performance of GPS quite closely, such as weighted fair queuing (WFQ),[3] also known as packet-by-packet generalized processor sharing (PGPS).

Justification

In a network such as the Internet, different application types require different levels of performance. For example, email is a genuinely store-and-forward application, but videoconferencing is not, since it requires low latency. When packets are queued up on one end of a congested link, the node usually has some freedom in deciding the order in which it should send the queued packets. One example ordering is simply first-come, first-served, which works fine if the sizes of the queues are small, but can cause problems if latency-sensitive packets are blocked by packets from bursty, higher-bandwidth applications.

Details

In GPS, a scheduler handling $N$ flows (also called "classes", or "sessions") is configured with one weight $w_i$ per flow $i$. Then, GPS ensures that, for any flow $i$ that is continuously backlogged on a time interval $(s,t]$ (i.e., its queue is never empty on this interval) and any other flow $j$, the following relation holds:

$$ \frac{R_i(s,t)}{R_j(s,t)} \geq \frac{w_i}{w_j} $$

where $R_i(s,t)$ denotes the amount of bits of flow $i$ output on the interval $(s,t]$.

It can then be proved that each flow $i$ will receive at least the rate

$$ r_i = \frac{w_i}{w_1 + w_2 + \cdots + w_N} R $$

where $R$ is the rate of the server.
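
To make the formula concrete, here is a minimal sketch in Python (an illustration, not part of the original sources; the function name and the numbers are made up for the example):

def gps_min_rates(weights, server_rate):
    # Guaranteed minimum rate of each flow: r_i = w_i / (w_1 + ... + w_N) * R.
    total = sum(weights)
    return [w / total * server_rate for w in weights]

# Three flows with weights 1/2, 1/4, 1/4 on a server of (normalized) rate 1:
print(gps_min_rates([0.5, 0.25, 0.25], 1.0))  # [0.5, 0.25, 0.25]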

This is a minimal rate: if some flow does not use its bandwidth during some period, the remaining capacity is shared among the active flows in proportion to their respective weights. For example, consider a GPS server with $N = 3$ flows of weights $w_1 = \tfrac{1}{2}$ and $w_2 = w_3 = \tfrac{1}{4}$. The first flow will receive at least half of the capacity, whereas the other two get only a quarter each. Nevertheless, if on some time interval only the second and third flows are active, each of them will receive one half of the capacity.
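
The redistribution rule can be sketched the same way (again an illustrative Python snippet with hypothetical names; it assumes at least one flow is backlogged):

def gps_instant_rates(weights, backlogged, server_rate):
    # The server rate is split among the currently backlogged flows in
    # proportion to their weights; idle flows receive nothing.
    active = sum(w for w, b in zip(weights, backlogged) if b)
    return [w / active * server_rate if b else 0.0
            for w, b in zip(weights, backlogged)]

# Weights 1/2, 1/4, 1/4, but only the second and third flows are
# backlogged: each of them now gets one half of the capacity.
print(gps_instant_rates([0.5, 0.25, 0.25], [False, True, True], 1.0))
# [0.0, 0.5, 0.5]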

Implementations, parametrization and fairness

In GPS, and all protocols inspired by GPS, the choice of the weights is left to the network administrator.

Generalized processor sharing assumes that the traffic is fluid, i.e., infinitely divisible, so that whenever an application type has packets in the queue, it receives exactly the fraction of the server given by the formula above. However, real traffic is not fluid and consists of packets, possibly of variable sizes. Therefore, GPS is mostly a theoretical idea, and several scheduling algorithms have been developed to approximate this GPS ideal: PGPS, also known as weighted fair queuing, is the best-known implementation of GPS, but it has some drawbacks, and several other implementations have been proposed, such as deficit round robin or WF²Q.[4]
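
To give a flavour of how PGPS approximates the fluid ideal, here is a heavily simplified Python sketch (not the full WFQ algorithm, whose virtual clock must track the GPS fluid system; it assumes all packets are already queued at time 0, so the virtual-time bookkeeping can be skipped):

def pgps_order(packets, weights):
    # Each packet of flow i gets a virtual finish tag
    #   F = previous finish of flow i + size / w_i,
    # and packets are transmitted in increasing tag order
    # (ties broken by arrival order).
    last_finish = [0.0] * len(weights)
    tags = []
    for seq, (flow, size) in enumerate(packets):
        last_finish[flow] += size / weights[flow]
        tags.append((last_finish[flow], seq, flow))
    return [flow for _, _, flow in sorted(tags)]

# Flow 0 (weight 1/2) and flow 1 (weight 1/4) send equal-size packets:
# flow 0's tags grow half as fast, so it is served about twice as often.
print(pgps_order([(0, 1500), (0, 1500), (0, 1500), (1, 1500), (1, 1500)],
                 [0.5, 0.25]))  # [0, 0, 1, 0, 1]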

GPS is considered a fair ideal, and all its approximations "use it as a reference to measure fairness."[2] Nevertheless, several fairness measures exist.

Since it assumes a fluid model, GPS is insensitive to packet sizes.

References

  1. Parekh, A. K.; Gallager, R. G. (1993). "A generalized processor sharing approach to flow control in integrated services networks: The single-node case". IEEE/ACM Transactions on Networking. 1 (3): 344. doi:10.1109/90.234856.
  2. Li, T.; Baumberger, D.; Hahn, S. (2009). "Efficient and scalable multiprocessor fair scheduling using distributed weighted round-robin". ACM SIGPLAN Notices. 44 (4): 65. doi:10.1145/1594835.1504188.
  3. Demers, A.; Keshav, S.; Shenker, S. (1989). "Analysis and simulation of a fair queueing algorithm". ACM SIGCOMM Computer Communication Review. 19 (4): 1. doi:10.1145/75247.75248.
  4. Bennett, J. C. R.; Zhang, Hui (1996). "WF²Q: Worst-case fair weighted fair queueing". Proceedings of IEEE INFOCOM '96, Conference on Computer Communications. Vol. 1. p. 120. doi:10.1109/INFCOM.1996.497885. ISBN 0-8186-7293-5.