
The Need for QoS
This section describes the leading causes of poor quality of service and how they can be addressed with QoS mechanisms.
- QoS is a network infrastructure technology that relies on a set of tools and mechanisms to assign different levels of priority to different IP traffic flows and provides special treatment to higher-priority IP traffic flows.
- For higher-priority IP traffic flows, it reduces packet loss during times of network congestion and also helps control delay (latency) and delay variation (jitter); for low-priority IP traffic flows, it provides a best-effort delivery service.
- Mechanisms used to achieve QoS goals include classification and marking, policing and shaping, congestion management and avoidance.
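As a small illustration of classification and marking from the application side, the sketch below (a minimal Python example with a hypothetical destination address and port) asks the IP stack to mark outbound UDP packets with DSCP EF, the value typically used for voice; the network devices along the path must still be configured to trust and act on that marking.

```python
import socket

# DSCP EF (Expedited Forwarding) is decimal 46; the DSCP field occupies the
# upper six bits of the legacy TOS byte, so shift left by 2 -> 0xB8 (184).
DSCP_EF = 46
TOS_VALUE = DSCP_EF << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Ask the IP stack to mark outgoing packets with DSCP EF (Linux/most Unixes).
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

# Hypothetical voice media stream destination (illustrative address and port).
sock.sendto(b"rtp-payload", ("198.51.100.10", 16384))
```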
Causes and Results of Quality Issues
When packets are delivered using a best-effort delivery model, they may not arrive in order or in a timely manner, and they may be dropped.
- For video, this can result in pixelization of the image, pausing, choppy video, audio and video being out of sync, or no video at all.
- For audio, it could cause echo, talker overlap (a walkie-talkie effect where only one person can speak at a time), unintelligible and distorted speech, voice breakups, long silence gaps, and call drops.
The following are the leading causes of quality issues:
- Lack of bandwidth
- Latency and jitter
- Packet loss
Lack of Bandwidth
The available bandwidth on the data path from a source to a destination equals the capacity of the lowest-bandwidth link. When the maximum capacity of the lowest-bandwidth link is surpassed, link congestion takes place, resulting in traffic drops. Possible solutions include the following:
- Increase the link bandwidth capacity, but this is not always possible, due to budgetary or technological constraints.
- Implement QoS mechanisms such as policing and queueing to prioritize traffic according to level of importance.
- Voice, video, and business-critical traffic should get prioritized forwarding and sufficient bandwidth to support their application requirements.
- The least important traffic should be allocated the remaining bandwidth.
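As a simplified illustration of how a queueing mechanism prioritizes important traffic on a congested link, the sketch below implements a strict-priority scheduler in Python; the traffic-class names are illustrative, and real QoS implementations typically combine priority queueing with bandwidth guarantees for the non-priority classes.

```python
from collections import deque

# Queues ordered from highest to lowest priority; the class names are illustrative.
PRIORITY_ORDER = ["voice", "video", "business-critical", "best-effort"]
queues = {name: deque() for name in PRIORITY_ORDER}

def enqueue(traffic_class, packet):
    queues[traffic_class].append(packet)

def dequeue_next():
    """Strict priority: always serve the highest-priority queue that has traffic."""
    for traffic_class in PRIORITY_ORDER:
        if queues[traffic_class]:
            return traffic_class, queues[traffic_class].popleft()
    return None  # nothing waiting to be sent

# During congestion, voice is transmitted before queued best-effort data,
# regardless of arrival order.
enqueue("best-effort", "web packet")
enqueue("voice", "rtp packet")
print(dequeue_next())   # ('voice', 'rtp packet')
print(dequeue_next())   # ('best-effort', 'web packet')
```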
Latency and Jitter
One-way end-to-end delay, also known as network latency, is the time it takes for packets to travel across a network from a source to a destination. ITU-T Recommendation G.114 recommends the following:
- Regardless of the application type, network latency should not exceed 400 ms.
- For real-time traffic, network latency should be less than 150 ms; however, the ITU and Cisco have demonstrated that real-time traffic quality does not degrade significantly until network latency exceeds 200 ms.
Network latency can be broken down into fixed and variable latency:
- Propagation delay (fixed)
- Serialization delay (fixed)
- Processing delay (fixed)
- Delay variation (variable)
Propagation Delay
Propagation delay is the time it takes for a packet to travel from a source to a destination over a medium such as a fiber-optic cable or copper wire, at the speed of light in that medium.
- The speed of light is 299,792,458 meters per second in a vacuum.
- The lack of vacuum conditions in a fiber-optic cable or a copper wire slows down the speed of light by a ratio known as the refractive index; the larger the refractive index value, the slower light travels.
- The average refractive index value of an optical fiber is about 1.5. The speed of light through a medium v is equal to the speed of light in a vacuum c divided by the refractive index n, or v = c / n. This means the speed of light through a fiber-optic cable with a refractive index of 1.5 is approximately 200,000,000 meters per second (that is, 300,000,000 / 1.5).
- If a single fiber-optic cable with a refractive index of 1.5 were laid out around the equatorial circumference of Earth, which is about 40,075 km, the propagation delay would be equal to the equatorial circumference of Earth divided by 200,000,000 meters per second. This is approximately 200 ms.
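The same back-of-the-envelope calculation can be reproduced in a few lines of Python, using the figures from the text (refractive index of 1.5 and an equatorial circumference of 40,075 km):

```python
C_VACUUM = 299_792_458              # speed of light in a vacuum, m/s
REFRACTIVE_INDEX = 1.5              # typical value for optical fiber
EARTH_CIRCUMFERENCE_M = 40_075_000  # equatorial circumference, meters

# Speed of light inside the fiber: v = c / n (about 200,000,000 m/s)
v = C_VACUUM / REFRACTIVE_INDEX

# Propagation delay = distance / propagation speed
delay_s = EARTH_CIRCUMFERENCE_M / v
print(f"propagation speed: {v:,.0f} m/s")
print(f"propagation delay: {delay_s * 1000:.1f} ms")   # roughly 200 ms
```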
Serialization Delay/Processing Delay
Serialization delay is the time it takes to place all the bits of a packet onto a link.
- It is a fixed value that depends on the link speed; the higher the link speed, the lower the delay.
- The serialization delay s is equal to the packet size in bits divided by the line speed in bits per second.
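As a worked example of this formula, the following sketch computes the serialization delay of a 1500-byte packet at a few common link speeds (the packet size and link speeds are illustrative choices, not values from the text):

```python
def serialization_delay_s(packet_size_bytes, link_speed_bps):
    """Serialization delay = packet size in bits / line speed in bits per second."""
    return (packet_size_bytes * 8) / link_speed_bps

PACKET_BYTES = 1500  # a typical full-size Ethernet payload (illustrative)
for name, bps in [("1.544 Mbps (T1)", 1_544_000),
                  ("100 Mbps", 100_000_000),
                  ("1 Gbps", 1_000_000_000)]:
    delay = serialization_delay_s(PACKET_BYTES, bps)
    print(f"{name}: {delay * 1_000_000:.1f} microseconds")
```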
Processing delay is the fixed amount of time it takes for a networking device to take the packet from an input interface and place the packet onto the output queue of the output interface. The processing delay depends on factors such as:
- CPU speed (for software-based platforms)
- CPU utilization (load)
- IP packet switching mode (process switching, software CEF, or hardware CEF)
- Router architecture (centralized or distributed)
- Configured features on both input and output interfaces
Delay Variation/Packet Loss
Delay variation, also referred to as jitter, is the difference in latency between packets in a single flow. For example, if one packet takes 50 ms to traverse the network from the source to the destination and the following packet takes 70 ms, the jitter is 20 ms. The major factors affecting variable delay are queuing delay, dejitter buffers, and variable packet sizes; jitter arises from the queuing delay experienced during periods of network congestion.
Packet loss is usually a result of congestion on an interface and can be prevented by implementing one of the following approaches:
- Increase link speed.
- Implement QoS congestion-avoidance and congestion-management mechanisms.
- Implement traffic policing to drop low-priority packets and allow high-priority traffic through.
- Implement traffic shaping to delay packets instead of dropping them, since traffic may burst and exceed the capacity of an interface buffer. Traffic shaping is not recommended for real-time traffic because it relies on queuing that can cause jitter.
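To make the policing-versus-shaping distinction in the last two bullets concrete, the following is a simplified single-rate token-bucket sketch (an illustrative model, not any vendor's implementation): the policer drops a packet when not enough tokens are available, whereas the shaper never drops and instead reports how long the packet must wait for tokens to accumulate.

```python
import time

class TokenBucket:
    """Single-rate token bucket: tokens are bytes, refilled at 'rate' bytes per second."""

    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.burst = burst_bytes
        self.tokens = float(burst_bytes)
        self.last = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def police(self, packet_bytes):
        """Policing: transmit if tokens are available, otherwise drop immediately."""
        self._refill()
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return "send"
        return "drop"

    def shape(self, packet_bytes):
        """Shaping: never drop; return the delay in seconds before the packet may be sent."""
        self._refill()
        self.tokens -= packet_bytes          # may go negative: a token "debt"
        if self.tokens >= 0:
            return 0.0                       # enough tokens, send now
        return -self.tokens / self.rate      # wait until the debt is repaid

# Example: 1500-byte packets against a 1 Mbps (125,000 bytes/s) contract with a 3000-byte burst.
policer = TokenBucket(rate_bytes_per_s=125_000, burst_bytes=3_000)
shaper = TokenBucket(rate_bytes_per_s=125_000, burst_bytes=3_000)
for _ in range(4):
    print(policer.police(1500), round(shaper.shape(1500), 3))
```

Running the loop shows the policer transmitting until the burst allowance is exhausted and then dropping, while the shaper keeps returning increasing delays instead of dropping, which is why shaping smooths bursts at the cost of added latency and jitter.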
Other useful information: