Computer network technologies and services (2015)

Chapter 7
Quality of service

Quality of service is the set of technologies used to try[1] to guarantee specific requirements on packet delay and jitter[2] for multimedia networking applications that generate inelastic traffic.

Main approaches

Three approaches have been proposed for quality of service:

• integrated services (IntServ): it requires fundamental changes to the network infrastructure so that the application can reserve end-to-end bandwidth ⇒ new complex software in hosts and routers (see section 7.3);

• differentiated services (DiffServ): it requires fewer changes to the network infrastructure (see section 7.4);

• laissez-faire: no special support for delays and quality of service, trusting that the network will never be congested ⇒ all the complexity stays at the application layer.

7.1 Principles

1. Packet marking is needed for routers to distinguish between different classes, and new router policies are needed to treat packets accordingly.

2. Provide protection (isolation) of one class from the others.
3. While providing isolation, it is desirable to use resources as efficiently as possible.

4. The flow declares its needs through call admission; the network may then block the call (e.g. with a busy signal) if it cannot meet those needs.

7.2 Mechanisms

7.2.1 Packet scheduling mechanisms

The goal of scheduling mechanisms is to manage the priorities for incoming packets.

[1] Quality of service just tries to do that, because it is impossible to guarantee a circuit-switching service over a packet-switching network.
[2] Jitter is the variability of packet delays within the same packet stream.

FIFO scheduling It is easy to implement, and it is efficient only if paired with a sophisticated discard policy:

• tail drop: always drop the arriving packet;
• random: drop a random packet in the queue;
• priority: drop the lowest-priority class packet.

Priority scheduling One buffer is available per class, and the packet of the highest-priority non-empty class is always served.
It does not grant isolation and may introduce starvation: packets in low-priority buffers are never served because high-priority packets keep arriving. Moreover, if the high-priority queue is temporarily empty and the transmission of a packet from the low-priority queue has started, a high-priority packet arriving just after the start of that transmission has to wait for it to end (a long wait if the packet is long) ⇒ additional delays are introduced.
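
The behaviour described above can be summarised with a minimal Python sketch (class and method names are purely illustrative, not part of any standard API):

    from collections import deque

    class PriorityScheduler:
        def __init__(self, num_classes):
            # One buffer (queue) per class; index 0 is assumed to be the
            # highest-priority class in this example.
            self.queues = [deque() for _ in range(num_classes)]

        def enqueue(self, packet, cls):
            self.queues[cls].append(packet)

        def dequeue(self):
            # Always serve the highest-priority non-empty queue: lower-priority
            # queues are reached only when all higher ones are empty, which is
            # what makes starvation possible under sustained high-priority load.
            for queue in self.queues:
                if queue:
                    return queue.popleft()
            return None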

Round robin scheduling It cyclically scans the class queues, serving one packet from each class (if available).
It grants isolation and it is fair, but it does not grant priority.
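
For comparison with the priority scheduler above, a minimal round-robin sketch in the same illustrative style:

    from collections import deque

    class RoundRobinScheduler:
        def __init__(self, num_classes):
            self.queues = [deque() for _ in range(num_classes)]
            self.next_class = 0  # class to visit on the next dequeue

        def enqueue(self, packet, cls):
            self.queues[cls].append(packet)

        def dequeue(self):
            # Visit the classes cyclically, serving one packet from the first
            # non-empty queue found; no class has precedence over the others.
            for _ in range(len(self.queues)):
                queue = self.queues[self.next_class]
                self.next_class = (self.next_class + 1) % len(self.queues)
                if queue:
                    return queue.popleft()
            return None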

Figure 7.1: Example of weighted fair queuing.

Weighted fair queuing It generalizes round robin by combining it with priority scheduling. Each class gets a weighted amount of service in each cycle, and the bandwidth Ri for class i with weight wi is given by the following formula (empty queues have null weight):

Ri = (wi / Σj wj) × Rtot

However, this solution is not very scalable, because the formula, which involves floating-point operations, needs to be computed for every single packet.
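
A small Python sketch applying the formula (function name and numbers are illustrative):

    def wfq_shares(weights, r_tot):
        # Bandwidth share of each backlogged class i under weighted fair queuing:
        # R_i = (w_i / sum of the weights of the non-empty queues) * R_tot.
        total = sum(weights)
        return [w / total * r_tot for w in weights]

    # Illustrative numbers: a 10 Mb/s link shared by three backlogged classes
    # with weights 3, 2 and 1 is split into 5, ~3.33 and ~1.67 Mb/s.
    print(wfq_shares([3, 2, 1], 10e6))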

7.2.2 Policing mechanisms

The goal of policing mechanisms is to limit the traffic so that it does not exceed declared parameters, such as:

• (long-term) average rate: how many packets can be sent per unit time;
• peak rate: the maximum rate at which packets can be sent over a short interval;

• (maximum) burst size: maximum number of packets sent consecutively (with no intervening idle).

Token bucket is the technique used to limit the input traffic to a specified burst size and average rate (a minimal sketch follows the list):
• a bucket can hold at most b tokens;
• tokens are generated at rate r tokens/s, unless the bucket is full;
• over an interval of length t, the number of packets admitted is less than or equal to r·t + b.
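
A minimal Python sketch of a token-bucket policer (names are illustrative): over any interval of length t it admits at most r·t + b packets, matching the bound above.

    import time

    class TokenBucket:
        def __init__(self, r, b):
            self.r = r                     # token generation rate (tokens/s)
            self.b = b                     # bucket capacity (tokens)
            self.tokens = b                # the bucket starts full
            self.last = time.monotonic()

        def admit(self):
            now = time.monotonic()
            # Add the tokens accrued since the last check, capped at the bucket size.
            self.tokens = min(self.b, self.tokens + (now - self.last) * self.r)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True                # the packet conforms to the profile
            return False                   # non-conforming: drop or mark the packet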

Figure 7.2: Token bucket.

7.3 IntServ

Resource reservation Basically a host asks for a service that requires some resources (path message): if the network can provide this service it will serve the user (reservation message), otherwise it will not.
Resource reservation is a feature which is not native in IP.

Call admission The arriving session uses the Resource Reservation Protocol (RSVP) signaling protocol to declare:

R-spec: it defines the quality of service being requested;
T-spec: it defines the traffic characteristics.

The receiver, not the sender, specifies the resource reservation.

This is definitely not a scalable solution: it still has major problems, and there is currently no compelling reason to implement IntServ more widely.

7.4 DiffServ

Differentiated Services (DiffServ) is an architecture proposed by IETF for quality of service: it moves the complexity (buckets and buffers) from the network core to the edge routers (or hosts) ⇒ more scalability.

7.4.1 Architecture

The DiffServ architecture is made up of two major components:

• edge routers: they perform per-flow traffic management, marking the packets as in-profile (high-priority voice traffic) or out-of-profile (low-priority data traffic);

• core routers: they perform per-class buffering and scheduling based on the marking done at the edges, giving preference to in-profile packets.

7.4.2 Marking

Marking is performed by edge routers in the Differentiated Services Code Point (DSCP) field, which occupies the 6 most significant bits of the ‘Type of Service’ field in the IPv4 header and of the ‘Traffic Class’ field in the IPv6 one.

It would be better to let the source, at the application layer, perform the marking, because only the source knows exactly the traffic type (voice traffic or data traffic); however, most users would dishonestly declare all their packets as high-priority ⇒ marking needs to be performed by gateways which are under the control of the provider. Still, some studies have found that routers can properly recognize at most 20-30% of the traffic, for example because of encryption ⇒ the distinction can be simplified for routers by connecting the PC to one port and the telephone to another port, so the router can mark the traffic based on the input port.
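
As an illustration of marking performed by the source itself, the following Python sketch sets the DSCP on a socket (Linux-specific socket option; whether the provider honours the marking is a separate matter):

    import socket

    # Illustrative example of source marking: on Linux an application can set the
    # DSCP of its outgoing packets through the IP_TOS socket option. The DSCP
    # occupies the 6 most significant bits of the ToS byte, hence the shift by 2;
    # 46 is the standard Expedited Forwarding (EF) code point.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    dscp_ef = 46
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp_ef << 2)
    # Such self-marking is only meaningful if the provider's edge routers trust it
    # and do not re-mark the traffic.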

7.4.3 PHB

Some Per-Hop Behaviours (PHBs) are being developed:

expedited forwarding: the packet departure rate of a class equals or exceeds a specified rate;

assured forwarding: four classes of traffic, each one guaranteed a minimum amount of bandwidth and divided into three drop-preference partitions.

PHBs specify the services to be offered, not how to implement them.