Differentiated services (DiffServ) is a computer networking architecture that specifies a simple, scalable and coarse-grained mechanism for classifying and managing network traffic and providing quality of service (QoS) on modern IP networks. DiffServ can, for example, be used to provide low-latency treatment to critical network traffic such as voice or streaming media while providing simple best-effort service to non-critical services such as web traffic or file transfers.
Because modern data networks carry many different types of services, including voice, video, streaming music, web pages and email, many of the QoS mechanisms proposed to let these services co-exist were complex and failed to scale to meet the demands of the public Internet. In December 1998, the IETF published RFC 2474, Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers, which replaced the IPv4 TOS field with the DS field. In the DS field, a range of eight values (Class Selectors) is reserved for backward compatibility with the IP precedence values of the former TOS field. Today, DiffServ has largely supplanted TOS and other layer-3 QoS mechanisms, such as Integrated services (IntServ), as the primary mechanism routers use to provide different levels of service.
Traffic management mechanisms
DiffServ is a coarse-grained, class-based mechanism for traffic management. In contrast, IntServ is a fine-grained, flow-based mechanism.
DiffServ operates on the principle of traffic classification, where each data packet is placed into a limited number of traffic classes, rather than differentiating network traffic based on the requirements of an individual flow. Each router on the network is configured to differentiate traffic based on its class. Each traffic class can be managed differently, ensuring preferential treatment for higher-priority traffic on the network.
While DiffServ does recommend a standardized set of traffic classes, the DiffServ architecture does not incorporate predetermined judgements of what types of traffic should be given priority treatment. DiffServ simply provides a framework to allow classification and differentiated treatment. The standard traffic classes (discussed below) serve to simplify interoperability between different networks and different vendors' equipment.
DiffServ relies on a mechanism to classify and mark packets as belonging to a specific class. DiffServ-aware routers implement per-hop behaviors (PHBs), which define the packet-forwarding properties associated with a class of traffic. Different PHBs may be defined to offer, for example, low-loss or low-latency.
A group of routers that implement common, administratively defined DiffServ policies are referred to as a DiffServ domain.
Classification and marking
Network traffic entering a DiffServ domain is subjected to classification and conditioning. Traffic may be classified by many different parameters, such as source address, destination address or traffic type, and assigned to a specific traffic class. Traffic classifiers may honor any DiffServ markings in received packets or may elect to ignore or override those markings. Because network operators want tight control over the volume and type of traffic in a given class, it is very rare that the network honors markings at the ingress to the DiffServ domain. Traffic in each class may be further conditioned by subjecting the traffic to rate limiters, traffic policers or shapers.
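Rate limiters and policers of the kind mentioned above are commonly built on a token bucket. The sketch below is a minimal illustration (the class and parameter names are hypothetical, not taken from any particular router implementation): packets that conform to the configured rate pass, and out-of-profile packets are dropped, though a real conditioner might re-mark them to a lower class instead.

```python
class TokenBucketPolicer:
    """Minimal single-rate policer: in-profile packets pass,
    out-of-profile packets are dropped (or could be re-marked)."""

    def __init__(self, rate_bps: float, burst_bytes: int, now: float = 0.0):
        self.rate = rate_bps / 8.0        # token refill rate, bytes/second
        self.capacity = burst_bytes       # maximum bucket depth (burst size)
        self.tokens = float(burst_bytes)  # start with a full bucket
        self.last = now                   # timestamp of last refill

    def allow(self, packet_bytes: int, now: float) -> bool:
        # Refill tokens for the elapsed interval, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True                   # in-profile: forward
        return False                      # out-of-profile: drop
```

For example, a policer configured for 8 kbit/s with a 1000-byte burst will pass a full 1000-byte burst immediately, then admit further traffic only as tokens refill at 1000 bytes per second.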
The per-hop behavior is determined by the DS field of the IP header. The DS field contains a 6-bit Differentiated Services Code Point (DSCP) value and occupies the six most-significant bits of the former IPv4 Type of Service (TOS) field and IPv6 Traffic Class (TC) field; Explicit Congestion Notification (ECN) occupies the two least-significant bits.
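The resulting layout of the former TOS/Traffic Class octet can be illustrated with two bit operations (a minimal sketch; the function name is illustrative):

```python
def split_tos_octet(octet: int) -> tuple[int, int]:
    """Split the 8-bit TOS/Traffic Class octet into (DSCP, ECN):
    the DSCP is the six most-significant bits, ECN the two least."""
    dscp = (octet >> 2) & 0x3F   # upper six bits
    ecn = octet & 0x03           # lower two bits
    return dscp, ecn

# For example, an octet of 0xB8 carries DSCP 46 (Expedited
# Forwarding) with ECN bits of 0.
```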
In theory, a network could have up to 64 (i.e. 2⁶) different traffic classes using different DSCPs. The DiffServ RFCs recommend, but do not require, certain encodings. This gives a network operator great flexibility in defining traffic classes. In practice, however, most networks use the following commonly defined per-hop behaviors:
- Default PHB—which is typically best-effort traffic
- Expedited Forwarding (EF) PHB—dedicated to low-loss, low-latency traffic
- Assured Forwarding (AF) PHB—gives assurance of delivery under prescribed conditions
- Class Selector PHBs—which maintain backward compatibility with the IP Precedence field.
A Default PHB (a.k.a. Default Forwarding (DF) PHB) is the only required behavior. Essentially, any traffic that does not meet the requirements of any of the other defined classes is placed in the default PHB. Typically, the default PHB has best-effort forwarding characteristics. The recommended DSCP for the default PHB is 000000B (0).
Expedited Forwarding (EF) PHB
The IETF defines Expedited Forwarding behavior in RFC 3246. The EF PHB has the characteristics of low delay, low loss and low jitter. These characteristics are suitable for voice, video and other realtime services. EF traffic is often given strict priority queuing above all other traffic classes. Because an overload of EF traffic will cause queuing delays and affect the jitter and delay tolerances within the class, EF traffic is often strictly controlled through admission control, policing and other mechanisms. Typical networks will limit EF traffic to no more than 30%—and often much less—of the capacity of a link. The recommended DSCP for expedited forwarding is 101110B (46 or 2EH).
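On many platforms an application can request EF marking for its own traffic by writing the full TOS octet (the DSCP shifted left by two bits) with the `IP_TOS` socket option. A minimal sketch, assuming a Unix-like system; the operating system and the network may still ignore or re-mark the value:

```python
import socket

EF_DSCP = 46             # recommended code point for Expedited Forwarding
TOS_BYTE = EF_DSCP << 2  # DSCP occupies the upper six bits: 0xB8 (184)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Ask the OS to mark outgoing packets from this socket with DSCP 46.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_BYTE)
```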
Voice Admit (VA) PHB
The IETF defines Voice Admit behavior in RFC 5865. The Voice Admit PHB has characteristics identical to those of the Expedited Forwarding PHB. However, Voice Admit traffic is also admitted by the network using a Call Admission Control (CAC) procedure. The recommended DSCP for Voice Admit is 101100B (44 or 2CH).
Assured Forwarding (AF) PHB group
The IETF defines the Assured Forwarding behavior in RFC 2597 and RFC 3260. Assured forwarding allows the operator to provide assurance of delivery as long as the traffic does not exceed some subscribed rate. Traffic that exceeds the subscription rate faces a higher probability of being dropped if congestion occurs.
The AF behavior group defines four separate AF classes, with Class 4 having the highest priority. Within each class, packets are given a drop precedence (high, medium or low). The combination of classes and drop precedences yields twelve separate DSCP encodings, from AF11 through AF43 (see table).

| | Class 1 (lowest) | Class 2 | Class 3 | Class 4 (highest) |
|---|---|---|---|---|
| Low drop | AF11 (DSCP 10) | AF21 (DSCP 18) | AF31 (DSCP 26) | AF41 (DSCP 34) |
| Medium drop | AF12 (DSCP 12) | AF22 (DSCP 20) | AF32 (DSCP 28) | AF42 (DSCP 36) |
| High drop | AF13 (DSCP 14) | AF23 (DSCP 22) | AF33 (DSCP 30) | AF43 (DSCP 38) |
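The twelve code points in the table follow a simple pattern: the AF class occupies the upper three bits of the DSCP and the drop precedence the next two, so the value for AFxy is 8x + 2y. A small sketch (the function name is illustrative):

```python
def af_dscp(af_class: int, drop_precedence: int) -> int:
    """DSCP for AF<class><drop>: class (1-4) in bits 5-3,
    drop precedence (1-3) in bits 2-1, i.e. 8*class + 2*drop."""
    if not (1 <= af_class <= 4 and 1 <= drop_precedence <= 3):
        raise ValueError("AF class is 1-4, drop precedence is 1-3")
    return (af_class << 3) | (drop_precedence << 1)

# af_dscp(1, 1) -> 10 (AF11); af_dscp(4, 3) -> 38 (AF43)
```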
Some measure of priority and proportional fairness is defined between traffic in different classes. Should congestion occur between classes, the traffic in the higher class is given priority. Rather than using strict priority queuing, more balanced queue servicing algorithms such as fair queuing or weighted fair queuing (WFQ) are likely to be used. If congestion occurs within a class, the packets with the higher drop precedence are discarded first. To prevent issues associated with tail drop, more sophisticated drop selection algorithms such as random early detection (RED) are often used.
Class Selector (CS) PHB
Prior to DiffServ, IPv4 networks could use the Precedence field in the TOS byte of the IPv4 header to mark priority traffic. The TOS octet and IP precedence were not widely used. The IETF agreed to reuse the TOS octet as the DS field for DiffServ networks. In order to maintain backward compatibility with network devices that still use the Precedence field, DiffServ defines the Class Selector PHB.
The Class Selector code points are of the form 'xxx000'. The first three bits are the IP precedence bits. Each IP precedence value can be mapped into a DiffServ class. If a packet is received from a non-DiffServ aware router that used IP precedence markings, the DiffServ router can still understand the encoding as a Class Selector code point.
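The mapping from a legacy IP precedence value to its Class Selector code point is therefore just a three-bit shift (a minimal sketch; the function name is illustrative):

```python
def cs_dscp(ip_precedence: int) -> int:
    """Class Selector code point for a 3-bit IP precedence value:
    the precedence bits followed by three zero bits ('xxx000')."""
    if not 0 <= ip_precedence <= 7:
        raise ValueError("IP precedence is a 3-bit value (0-7)")
    return ip_precedence << 3

# Precedence 5 maps to CS5, i.e. DSCP 40 (101000B)
```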
Advantages of DiffServ
Under DiffServ, all the policing and classifying is done at the boundaries between DiffServ domains. This means that in the core of the Internet, routers are unhindered by the complexities of collecting payment or enforcing agreements. That is, in contrast to IntServ, DiffServ requires no advance setup, no reservation, and no time-consuming end-to-end negotiation for each flow.
Disadvantages of DiffServ
End-to-end and peering problems
The details of how individual routers deal with the DS field are configuration-specific, so it is difficult to predict end-to-end behaviour. This is complicated further if a packet crosses two or more DiffServ domains before reaching its destination.
From a commercial viewpoint, this is a major flaw, as it means that it is impossible to sell different classes of end-to-end connectivity to end users: one provider's Gold packet may be another's Bronze. Internet operators could fix this by enforcing standardised policies across networks, but they are not keen on adding new levels of complexity to their already complex peering agreements. One of the reasons for this is set out below.
DiffServ or any other IP based QoS marking does not ensure quality of the service or a specified service-level agreement (SLA). By marking the packets, the sender indicates that it wants the packets to be treated as a specific service, but it can only hope that this happens. It is up to all the service providers and their routers in the path to ensure that their policies will take care of the packets in an appropriate fashion.
DiffServ vs. More capacity
The problem addressed by DiffServ does not exist in a system that has enough capacity to carry all traffic. Teitelbaum and Shalunov argue that, instead, the capacity of Internet links should be chosen large enough to prevent packet loss altogether.
DiffServ is simply a mechanism for deciding which packets to deliver at the expense of others when there is not enough network capacity. When DiffServ is working (by dropping packets selectively), traffic on the link in question must already be very close to, or exceeding, saturation, and any further increase in traffic will result in lower-priority services being dropped altogether. This will happen on a regular basis if the average traffic on a link is near its limit, which is precisely the situation in which DiffServ becomes needed.
After the dot-com bubble burst in 2001, there was a glut of fiber capacity in most parts of the telecoms market, making it far easier and cheaper to add more capacity than to employ elaborate DiffServ policies as a way of increasing customer satisfaction. This is the approach generally taken in the core of the Internet, which is kept fast and dumb with "fat pipes".
Several factors make Internet bandwidth capacity planning a complex problem:
- The problem of low-priority traffic being starved can be avoided if the network is provisioned to provide a guaranteed minimum bandwidth to low-priority services. This minimum can be assured by limiting the maximum amount of higher-priority traffic admitted. However, the careful planning of traffic admission that prevents the loss of low-priority traffic also eliminates the need for automatic priority mechanisms like DiffServ.
- Simple over-provisioning is a highly inefficient solution, both in absolute terms (since unused capacity generates no revenue) and because of the highly bursty nature of Internet traffic. A network designed to carry all traffic at the highest peak times must be many orders of magnitude larger than one designed to carry 95% of traffic under 95% of load conditions, with traffic management such as DiffServ used to prevent collapse by selectively dropping low-priority traffic during peaks.
- Because of the design of TCP, the primary Internet transport protocol, it is very difficult to define an upper bound on peak traffic. TCP keeps increasing its transmission rate as long as the loss rate remains low, so connections tend to use as much bandwidth as they can obtain.
- As with any complex system capacity problem, increasing the capacity of one link eventually causes loss to occur on a different link. Traffic that is no longer slowed down by the core link will flood intermediary links, and as each bottleneck is eliminated, other areas of the network become the new bottleneck.
- With wireless links such as EV-DO, where the air-interface bandwidth is several orders of magnitude less than the next upstream link, QoS is being used to efficiently deliver VoIP packets where it would not otherwise be achievable.
The need for traffic shaping and QoS is very real and is seen on networks every day. The ability to mark packets and expedite the forwarding of time-sensitive data gives the system the ability to ride through transient spikes in bandwidth utilization, which are extremely difficult to characterize without monitoring bandwidth over an extended period.
Effects of dropped packets
Dropping packets wastes the resources that have already been expended carrying those packets. For TCP, dropping segments amounts to betting either that congestion will have resolved by the time the segments are re-sent, or that TCP will throttle back transmission rates at the sources to reduce congestion in the network. For UDP, dropping datagrams amounts to betting either that the lost information will have no material effect on application performance, or that other mitigating mechanisms exist at layers 4-7.
TCP congestion avoidance algorithms are subject to a phenomenon called TCP global synchronization, unless special approaches such as random early detection are used when dropping TCP segments. In global synchronization, all TCP streams tend to increase transmission rates concurrently, peak at points of constrained network bandwidth concurrently, and fall back to lower transmission rates concurrently as segments are dropped, repeating the cycle until aggregate demand decreases below the bandwidth available at all points in the network.
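The core of random early detection is a drop probability that ramps up linearly as the average queue depth grows, so different flows back off at different times. The sketch below is a simplified illustration (real implementations also maintain an exponentially weighted moving average of the queue depth and space out drops; the parameter names are conventional but the code is not from any specific implementation):

```python
import random

def red_drop_probability(avg_queue: float, min_th: float,
                         max_th: float, max_p: float = 0.1) -> float:
    """Simplified RED drop probability: zero below min_th, a linear
    ramp up to max_p between the thresholds, certain drop above max_th."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)

def should_drop(avg_queue: float, min_th: float, max_th: float) -> bool:
    """Randomize drops so TCP flows back off at different times,
    avoiding global synchronization."""
    return random.random() < red_drop_probability(avg_queue, min_th, max_th)
```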
RFC 2638 defines the Bandwidth Broker entity within the DiffServ framework. A Bandwidth Broker is an agent that has some knowledge of an organization's priorities and policies and allocates bandwidth with respect to those policies. To achieve an end-to-end allocation of resources across separate domains, the Bandwidth Broker managing a domain has to communicate with its adjacent peers, which allows end-to-end services to be constructed out of purely bilateral agreements.
DiffServ RFCs
- RFC 2474—Definition of the differentiated services field (DS field) in the IPv4 and IPv6 headers
- RFC 2475—An architecture for differentiated services
- RFC 2597—Assured forwarding PHB group
- RFC 2983—Differentiated services and tunnels
- RFC 3086—Definition of differentiated services per domain behaviors and rules for their specification
- RFC 3140—Per hop behavior identification codes (Obsoletes RFC 2836)
- RFC 3246—An expedited forwarding PHB (Obsoletes RFC 2598)
- RFC 3247—Supplemental information for the new definition of the EF PHB (expedited forwarding per-hop behavior)
- RFC 3260—New Terminology and Clarifications for Diffserv (Updates RFC 2474, RFC 2475 and RFC 2597)
- RFC 4594—Configuration Guidelines for DiffServ Service Classes
- RFC 5865—A differentiated services code point (DSCP) for capacity-admitted traffic (updates RFC 4542 and RFC 4594)
DiffServ Management RFCs
- RFC 3289—Management information base for the differentiated services architecture
- RFC 3290—An informal management model for differentiated services routers
- RFC 3317—Differentiated services quality of service policy information base
See also
- Bandwidth Broker
- Class of service
- Integrated services
- Teletraffic engineering
- Traffic shaping
- Type of service
- "Deploying IP and MPLS QoS for Multiservice Networks: Theory and Practice" by John Evans, Clarence Filsfils (Morgan Kaufmann, 2007, ISBN 0-12-370549-5)
- "Differentiated Services for the Internet" by Kalevi Kilkki (Macmillan Technical Publishing, Indianapolis, IN, USA, June 1999)
- RFC 3260
- RFC 4594
- RFC 2597 Section 3
- RFC 2474
- RFC 6088
- "Implementing Quality of Service Policies with DSCP". Cisco. Retrieved 2010-10-16.
- Filtering DSCP
- RFC 4594
- Teitelbaum, Ben & Stanislav Shalunov. "Why Premium IP Service Has Not Deployed (and Probably Never Will)". Internet2 QoS Working Group. Retrieved 17 October 2011.
- IETF DiffServ Working Group page
- Cisco Whitepaper—DiffServ-The Scalable End-to-End Quality of Service Model
- ACM SIGCOMM'09 paper-Modeling and Understanding End-to-End Class of Service Policies in Operational Networks: proposes a practical model for extracting DiffServ policies
- Cisco: Implementing Quality of Service Policies with DSCP
- Cisco: DiffServ QoS recommendations, based on the guideline from RFC 4594