Quality of Service
Important Parameters:
1. Bandwidth: The maximum amount of data that can be transmitted per unit time.
2. Latency: Delay between data transmission and reception.
3. Jitter: Variation in packet delay.
4. Packet Loss: Percentage of packets lost in transit.
5. Throughput: The rate at which data is actually delivered (a short estimation sketch follows this list).
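To make these parameters concrete, the sketch below estimates delay, jitter, and throughput from a handful of packet send/receive timestamps. The timestamps and packet sizes are hypothetical sample values, and jitter is computed simply as the mean difference between consecutive delays.

# Hypothetical per-packet send/receive times (seconds) and sizes (bytes).
send_times = [0.000, 0.020, 0.040, 0.060, 0.080]
recv_times = [0.030, 0.052, 0.068, 0.095, 0.110]
sizes = [1500, 1500, 1500, 1500, 1500]

# Latency: delay between transmission and reception of each packet.
delays = [r - s for s, r in zip(send_times, recv_times)]

# Jitter: variation in packet delay (mean change between consecutive delays).
jitter = sum(abs(delays[i] - delays[i - 1]) for i in range(1, len(delays))) / (len(delays) - 1)

# Throughput: actual data delivered per unit time.
throughput = sum(sizes) / (recv_times[-1] - send_times[0])

print(f"average delay = {sum(delays) / len(delays):.3f} s")
print(f"mean jitter   = {jitter:.3f} s")
print(f"throughput    = {throughput / 1000:.1f} kB/s")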
QoS Categories:
1. Best Effort: No guarantees, best possible effort.
2. Differentiated Services (DiffServ): Prioritizes traffic based on classification.
3. Integrated Services (IntServ): Guarantees resources for specific applications.
QoS Mechanisms:
1. Traffic Shaping: Regulates traffic rate.
2. Traffic Policing: Drops excess traffic.
3. Queue Management: Schedules packet transmission.
4. Resource Reservation: Reserves bandwidth and resources.
5. Congestion Avoidance: Prevents network overload.
QoS Protocols:
1. RSVP (Resource Reservation Protocol): Reserves resources.
2. MPLS (Multi-Protocol Label Switching): Provides QoS guarantees.
3. DiffServ: Classifies and prioritizes traffic (a marking sketch follows this list).
4. IEEE 802.1p: Prioritizes Ethernet traffic.
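As an illustration of DiffServ-style classification, the sketch below marks outgoing UDP datagrams with a DSCP value by setting the IP TOS byte on a socket; routers configured for DiffServ can then prioritize the marked packets. DSCP 46 (Expedited Forwarding) is a standard code point for low-latency traffic; the destination address and port are hypothetical.

import socket

EF_DSCP = 46                 # Expedited Forwarding code point
tos = EF_DSCP << 2           # DSCP occupies the upper 6 bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)   # mark all packets sent from this socket
sock.sendto(b"voice frame", ("192.0.2.10", 5004))        # hypothetical destination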
Applications:
1. VoIP (Voice over Internet Protocol): Requires low latency and low jitter.
2. Video Streaming: Requires high bandwidth and low latency.
3. Online Gaming: Requires low latency and low packet loss.
4. Cloud Computing: Requires high bandwidth and reliability.
Challenges:
1. Network Congestion: Insufficient bandwidth.
2. Packet Loss: Network errors or congestion.
3. Security: QoS mechanisms can be vulnerable.
4. Scalability: QoS mechanisms must adapt to growing networks.
QoS (Quality of Service) Techniques
1) Scheduling
2) Resource Reservation
3) Admission Control
4) Traffic Shaping
1) Scheduling
A good scheduling technique treats the different flows in a fair and appropriate
manner.
• In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue) until the node (router or switch) is ready to process them. If the average arrival rate is higher than the average processing rate, the queue will fill up and new packets will be discarded.
Fig: FIFO queue
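A minimal sketch of FIFO (drop-tail) queuing as described above: packets are buffered in arrival order and new arrivals are discarded once the buffer is full. The buffer capacity is an assumed parameter.

from collections import deque

class FIFOQueue:
    def __init__(self, capacity=100):        # capacity is an assumed parameter
        self.buffer = deque()
        self.capacity = capacity
        self.dropped = 0

    def arrive(self, packet):
        # If the queue is already full, the new packet is discarded.
        if len(self.buffer) >= self.capacity:
            self.dropped += 1
        else:
            self.buffer.append(packet)

    def process(self):
        # Called whenever the node (router or switch) is ready for the next packet.
        return self.buffer.popleft() if self.buffer else None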
In priority queuing, packets are first assigned to a priority class. Each priority
class has its own queue. The packets in the highest-priority queue are processed
first. Packets in the lowest-priority queue are processed last.
• If there is a continuous flow in a high-priority queue, the packets in the lower-priority queues will never have a chance to be processed. This condition is called starvation.
Fig: Priority queuing
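A minimal sketch of strict priority queuing: each priority class has its own FIFO queue, and the scheduler always serves the highest-priority non-empty queue first, which is exactly why a continuous high-priority flow can starve the lower classes.

from collections import deque

class PriorityQueuing:
    def __init__(self, num_classes=3):
        # Index 0 is the highest-priority class.
        self.queues = [deque() for _ in range(num_classes)]

    def arrive(self, packet, priority):
        self.queues[priority].append(packet)

    def process(self):
        # Always scan from the highest priority down; lower queues are served
        # only when every higher-priority queue is empty (starvation risk).
        for q in self.queues:
            if q:
                return q.popleft()
        return None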
In weighted fair queuing, packets are assigned to different classes and admitted to weighted queues. The system processes packets in each queue in a round-robin fashion, with the number of packets selected from each queue based on the corresponding weight.
For example, if the weights are 3, 2, and 1, three packets are processed from the
first queue, two from the second queue, and one from the third queue. If the
system does not impose priority on the classes, all weights can be equal. In this way, we have fair queuing with priority.
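A minimal sketch of this weighted round-robin service, using the weights 3, 2, and 1 from the example above; in each round the scheduler takes up to "weight" packets from each class's queue.

from collections import deque

class WeightedRoundRobin:
    def __init__(self, weights=(3, 2, 1)):
        self.weights = weights
        self.queues = [deque() for _ in weights]

    def arrive(self, packet, cls):
        self.queues[cls].append(packet)

    def serve_round(self):
        # One full round: up to 'weight' packets from each queue, in order.
        sent = []
        for q, w in zip(self.queues, self.weights):
            for _ in range(w):
                if not q:
                    break
                sent.append(q.popleft())
        return sent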
2) Resource Reservation
A flow of data needs resources such as a buffer, bandwidth, CPU time, and so on. The quality of service is improved if these resources are reserved beforehand.
Table: Application requirements in terms of reliability, delay, jitter, and bandwidth.
3) Admission Control
Admission control refers to the mechanism by which a router or switch accepts or rejects a flow based on predefined parameters called flow specifications.
4) Traffic Shaping
The Leaky Bucket Algorithm is a simple, efficient, and widely used method for
rate limiting and traffic shaping in computer networks. It's commonly applied to
control the amount of data that can be transmitted or processed within a specific
time frame.
Working procedure:
Traffic shaping is a mechanism to control the amount and the rate of the traffic
sent to the network.
In the leaky bucket algorithm, bursty chunks are stored in the bucket and sent out at an average rate.
Fig: Leaky Bucket
➢ If the traffic consists of fixed-size packets, the process removes a fixed number of packets from the queue at each tick of the clock.
If a host is not sending for a while, its bucket becomes empty. Now if the host
has bursty data, the leaky bucket allows only an average rate. The time when
the host was idle is not taken into account.
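A minimal sketch of the leaky bucket for fixed-size packets, following the description above: bursty arrivals are stored in the bucket, and at each clock tick a fixed number of packets is sent out. The bucket capacity and output rate are assumed parameters.

from collections import deque

class LeakyBucket:
    def __init__(self, capacity=10, rate=1):   # assumed parameters
        self.bucket = deque()
        self.capacity = capacity
        self.rate = rate                        # packets sent per clock tick

    def arrive(self, packet):
        # Bursty chunks are stored in the bucket; overflow is discarded.
        if len(self.bucket) < self.capacity:
            self.bucket.append(packet)

    def tick(self):
        # Each clock tick removes a fixed number of packets from the queue.
        out = []
        for _ in range(self.rate):
            if not self.bucket:
                break                           # an idle period earns no credit
            out.append(self.bucket.popleft())
        return out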
Applications:
Pros:
1. Simple to implement.
2. Efficient in preventing network congestion.
3. Absorbs bursty input traffic and sends it out at a smooth average rate.
Cons:
1. An idle host builds up no credit, so later bursty data is still limited to the average rate.
2. Packets are discarded when the bucket (buffer) overflows.
The Token Bucket Algorithm is a traffic shaping and policing mechanism used
to manage network traffic rate and burstiness. It ensures compliance with
predetermined QoS policies.
Important Assumptions
Working procedure:
Tokens are added to the bucket at a fixed rate. A packet can be transmitted only if enough tokens are available, and transmission consumes those tokens. If the bucket is full, newly generated tokens are discarded, so an idle host accumulates credit up to the bucket size and can later send a burst.
TBA Parameters:
1. Token rate (r): the rate at which tokens are added to the bucket.
2. Bucket size (b): the maximum number of tokens the bucket can hold, which bounds the burst size.
Cons:
1. Permits bursts up to the bucket size, which may still congest downstream nodes.
QoS Applications
1. Traffic shaping for WAN links.
2. Policing for SLA (Service Level Agreement) compliance.
3. Rate limiting for real-time applications.
Implementation: Token Bucket
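A minimal token bucket sketch, assuming tokens are counted in bytes and added at a fixed rate per second; a packet conforms only if enough tokens are available, so an idle host accumulates credit (up to the bucket size) and may later send a burst. The rate and capacity values are assumed parameters.

import time

class TokenBucket:
    def __init__(self, rate=1000.0, capacity=5000.0):   # assumed: tokens are bytes
        self.rate = rate                # tokens added per second
        self.capacity = capacity        # maximum tokens (bounds the burst size)
        self.tokens = capacity
        self.last = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def allow(self, packet_bytes):
        # Returns True if the packet conforms and may be sent now;
        # a non-conforming packet is delayed (shaping) or dropped (policing).
        self._refill()
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False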
Leaky Bucket Algorithm (LBA) vs Token Bucket Algorithm (TBA):
Similarities: