
Quality of Service

QoS refers to the ability of a network to provide guaranteed performance and reliability for various applications and services.

Important Parameters:
1. Bandwidth: The amount of data that can be transmitted per unit time.
2. Latency: Delay between data transmission and reception.
3. Jitter: Variation in packet delay.
4. Packet Loss: Percentage of packets lost in transit.
5. Throughput: The data transfer rate actually achieved.
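As a rough illustration of how some of these can be measured, the following sketch computes latency, jitter, and throughput from per-packet send/receive timestamps. The sample values and the simple mean-absolute-difference jitter measure are assumptions for illustration, not from the text.

    # Hypothetical per-packet measurements: (send_time_s, recv_time_s, size_bytes)
    samples = [(0.000, 0.020, 1500), (0.010, 0.033, 1500), (0.020, 0.041, 1500)]

    delays = [recv - send for send, recv, _ in samples]                  # per-packet latency
    mean_latency = sum(delays) / len(delays)
    jitter = sum(abs(a - b) for a, b in zip(delays, delays[1:])) / (len(delays) - 1)
    duration = samples[-1][1] - samples[0][0]
    throughput_bps = 8 * sum(size for _, _, size in samples) / duration  # bits per second

    print(mean_latency, jitter, throughput_bps)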

QoS Categories:
1. Best Effort: No guarantees, best possible effort.
2. Differentiated Services (DiffServ): Prioritizes traffic based on classification.
3. Integrated Services (IntServ): Guarantees resources for specific
applications.

QoS Mechanisms:
1. Traffic Shaping: Regulates traffic rate.
2. Traffic Policing: Drops excess traffic.
3. Queue Management: Schedules packet transmission.
4. Resource Reservation: Reserves bandwidth and resources.
5. Congestion Avoidance: Prevents network overload.

QoS Protocols:
1. RSVP (Resource Reservation Protocol): Reserves resources.
2. MPLS (Multi-Protocol Label Switching): Provides QoS guarantees.
3. DiffServ: Classifies and prioritizes traffic.
4. IEEE 802.1p: Prioritizes Ethernet traffic.
Applications:
1. VoIP (Voice over Internet Protocol): Requires low latency and jitter.
2. Video Streaming: Requires high bandwidth and low latency.
3. Online Gaming: Requires low latency and low packet loss.
4. Cloud Computing: Requires high bandwidth and reliability.

Challenges:
1. Network Congestion: Insufficient bandwidth.
2. Packet Loss: Network errors or congestion.
3. Security: QoS mechanisms can be vulnerable.
4. Scalability: QoS mechanisms must adapt to growing networks.
QoS (Quality of Service) Techniques

We briefly discuss four common methods:

1) Scheduling
2) Resource Reservation
3) Admission Control
4) Traffic Shaping

1) Scheduling

A good scheduling technique treats the different flows in a fair and appropriate manner.
• In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue) until the node (router or switch) is ready to process them. If the average arrival rate is higher than the average processing rate, the queue will fill up and new packets will be discarded.
Fig: FIFO queue
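As a rough illustration (not from the text; the buffer size and names are assumed), a drop-tail FIFO queue can be sketched as:

    from collections import deque

    class FIFOQueue:
        """Minimal FIFO (drop-tail) queue: packets are served in arrival order
        and new arrivals are discarded when the buffer is full."""
        def __init__(self, capacity=8):
            self.capacity = capacity          # maximum number of queued packets
            self.buffer = deque()
        def enqueue(self, packet):
            if len(self.buffer) >= self.capacity:
                return False                  # queue full: the new packet is dropped
            self.buffer.append(packet)
            return True
        def dequeue(self):
            return self.buffer.popleft() if self.buffer else None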

In priority queuing, packets are first assigned to a priority class. Each priority class has its own queue. The packets in the highest-priority queue are processed first. Packets in the lowest-priority queue are processed last.
• If there is a continuous flow in a high-priority queue, the packets in the lower-priority queues will never have a chance to be processed. This condition is called starvation.

Fig: Priority queuing
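A minimal sketch of strict priority queuing (illustrative only; the number of classes is assumed). Serving the highest-priority non-empty queue first is exactly what can starve the lower-priority queues:

    from collections import deque

    class PriorityQueuing:
        """Strict priority queuing: always serve the highest-priority
        non-empty queue first."""
        def __init__(self, num_classes=3):
            self.queues = [deque() for _ in range(num_classes)]  # index 0 = highest priority
        def enqueue(self, packet, priority_class):
            self.queues[priority_class].append(packet)
        def dequeue(self):
            for q in self.queues:            # scan from highest to lowest priority
                if q:
                    return q.popleft()
            return None                      # all queues empty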

In weighted fair queuing, packets are again assigned to classes and placed in different queues, but each queue is given a weight. The system processes packets in each queue in a round-robin fashion, with the number of packets selected from each queue based on the corresponding weight.

For example, if the weights are 3, 2, and 1, three packets are processed from the first queue, two from the second queue, and one from the third queue. If the system does not impose priority on the classes, all weights can be equal. In this way, we have fair queuing with priority.

Fig: Weighted fair queuing
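A rough sketch of the weighted round-robin service described above (a simplification of weighted fair queuing; the weights 3, 2, 1 follow the example in the text, everything else is assumed):

    from collections import deque

    class WeightedRoundRobin:
        """Serve up to `weight` packets from each queue per round, so a queue
        with weight 3 gets three times the service of a queue with weight 1."""
        def __init__(self, weights=(3, 2, 1)):
            self.weights = weights
            self.queues = [deque() for _ in weights]
        def enqueue(self, packet, class_index):
            self.queues[class_index].append(packet)
        def service_round(self):
            sent = []
            for q, w in zip(self.queues, self.weights):
                for _ in range(w):
                    if not q:
                        break
                    sent.append(q.popleft())
            return sent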


2) Resource Reservation

A flow of data needs resources such as a buffer, bandwidth, CPU time, and so
on. The quality of service is improved if these resources are reserved
beforehand.
Application          Reliability   Delay    Jitter   Bandwidth
E-mail               High          Low      Low      Low
File transfer        High          Low      Low      Medium
Web access           High          Medium   Low      Medium
Remote login         High          Medium   Medium   Low
Audio on demand      Low           Low      High     Medium
Video on demand      Low           Low      High     High
Telephony            Low           High     High     Low
Videoconferencing    Low           High     High     High

3) Admission Control

Admission control refers to the mechanism by which a router or switch accepts or rejects a flow based on predefined parameters called flow specifications. Before accepting a flow, the node checks whether its available resources (bandwidth, buffer space, CPU time) and its commitments to existing flows allow it to handle the new flow.
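A toy sketch of such a check, limited to bandwidth only (names and structure are assumed; real admission control evaluates a fuller flow specification):

    class AdmissionControl:
        """Toy admission check: accept a new flow only if its requested
        bandwidth still fits within the capacity not yet committed."""
        def __init__(self, link_capacity_bps):
            self.capacity = link_capacity_bps
            self.committed = 0
        def admit(self, requested_bps):
            if self.committed + requested_bps <= self.capacity:
                self.committed += requested_bps
                return True      # flow accepted, resources committed
            return False         # flow rejected to protect existing flows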

4) Traffic Shaping

The Leaky Bucket Algorithm is a simple, efficient, and widely used method for
rate limiting and traffic shaping in computer networks. It's commonly applied to
control the amount of data that can be transmitted or processed within a specific
time frame.

Working procedure

1. Imagine a bucket with a small leak.
2. The bucket has a fixed capacity (C).
3. Incoming data packets or requests fill the bucket.
4. The leak drains the bucket at a constant rate (R), representing the allowed transmission rate.
5. If the bucket is full, excess packets or requests are discarded.
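A minimal sketch of this bucket model used as a rate limiter (parameter names and the drop-on-overflow behaviour follow the steps above; the values and names are illustrative, not from the text):

    import time

    class LeakyBucket:
        """Leaky bucket meter: arrivals fill the bucket, which drains at a
        constant rate R; arrivals that would overflow capacity C are dropped."""
        def __init__(self, capacity_bytes, leak_rate_bps):
            self.capacity = capacity_bytes       # C: bucket size
            self.leak_rate = leak_rate_bps       # R: constant drain rate
            self.level = 0.0                     # current bucket content
            self.last_time = time.monotonic()
        def _drain(self):
            now = time.monotonic()
            self.level = max(0.0, self.level - self.leak_rate * (now - self.last_time))
            self.last_time = now
        def allow(self, packet_bytes):
            self._drain()
            if self.level + packet_bytes > self.capacity:
                return False                     # bucket full: packet discarded
            self.level += packet_bytes
            return True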

Important parameters:

1. Bucket size (C): Maximum capacity of the bucket.
2. Leak rate (R): Rate at which the bucket is drained.
3. Incoming traffic: Data packets or requests arriving at the bucket.

Traffic shaping is a mechanism to control the amount and the rate of the traffic
sent to the network.
In the leaky bucket algorithm, bursty chunks are stored in the bucket and sent out at an average rate.
Fig: Leaky Bucket
➢ If the traffic consists of fixed-size packets, the process removes a fixed number of packets from the queue at each tick of the clock.

➢ If the traffic consists of variable-length packets, the fixed output rate must be based on the number of bytes or bits.

The following is an algorithm for variable-length packets:

1. Initialize a counter to n at the tick of the clock.
2. If n is greater than the size of the packet at the front of the queue, send the packet and decrement the counter by the packet size. Repeat this step until n is smaller than the packet size.
3. Reset the counter and go to step 1.
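A rough sketch of this counter-based procedure for one clock tick (the queue of packet sizes and the value of n are assumed for illustration):

    from collections import deque

    def leaky_bucket_tick(queue: deque, n: int) -> list:
        """One clock tick: initialize the counter to n and send packets while
        the counter still covers the size of the packet at the front."""
        counter = n
        sent = []
        while queue and counter >= queue[0]:   # queue holds packet sizes in bytes
            size = queue.popleft()
            counter -= size
            sent.append(size)
        return sent                            # counter is reset at the next tick

    # Example: with n = 1000 bytes per tick, a burst is smoothed over several ticks.
    packets = deque([400, 300, 500, 200])
    print(leaky_bucket_tick(packets, 1000))    # [400, 300]; 500 exceeds the remaining 300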
Fig: Leaky bucket implementation
A leaky bucket algorithm shapes bursty traffic into fixed-rate traffic by
averaging the data rate. It may drop the packets if the bucket is full.

If a host is not sending for a while, its bucket becomes empty. Now if the host
has bursty data, the leaky bucket allows only an average rate. The time when
the host was idle is not taken into account.
Applications:

1. Network traffic shaping: Limiting the bandwidth used by a specific application.
2. API rate limiting: Controlling the number of requests to an API within a time frame.
3. Resource allocation: Managing resource usage, such as CPU or memory.

Pros:

1. Simple to implement.
2. Efficient in preventing network congestion.
3. Allows for bursty traffic.

Cons:

1. Can be overly restrictive.
2. May not account for variable traffic patterns.

Token Bucket Algorithm (TBA)

The Token Bucket Algorithm is a traffic shaping and policing mechanism used
to manage network traffic rate and burstiness. It ensures compliance with
predetermined QoS policies.

Key Components

1. Token Bucket: A virtual container storing tokens, representing bytes or packets.
2. Token Rate: Tokens are added at a constant rate, determining the allowed traffic rate.
3. Bucket Size: Maximum tokens stored, controlling burstiness.
4. Arriving Traffic: Incoming traffic is checked against available tokens.

Working procedure

1. Tokens are added to the bucket at the token rate.
2. Incoming traffic requires tokens; if available, traffic is transmitted.
3. If no tokens, traffic is buffered or discarded.
4. Bucket size limits bursty traffic.
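A minimal sketch of this procedure (this variant drops non-conforming packets; buffering them would equally satisfy step 3; parameter names and the initially full bucket are assumptions):

    import time

    class TokenBucket:
        """Token bucket: tokens accumulate at rate r up to bucket size B; a
        packet may be sent only if enough tokens are available to cover it."""
        def __init__(self, rate_tokens_per_sec, bucket_size):
            self.rate = rate_tokens_per_sec      # r: token arrival rate
            self.bucket_size = bucket_size       # B: maximum stored tokens
            self.tokens = bucket_size            # start with a full bucket
            self.last_time = time.monotonic()
        def _refill(self):
            now = time.monotonic()
            self.tokens = min(self.bucket_size,
                              self.tokens + self.rate * (now - self.last_time))
            self.last_time = now
        def allow(self, packet_size):
            self._refill()
            if self.tokens >= packet_size:
                self.tokens -= packet_size       # consume tokens and transmit
                return True
            return False                         # not enough tokens: drop (or buffer)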

TBA Parameters

1. Token Rate (r): Tokens per second.
2. Bucket Size (B): Maximum tokens.
3. Peak Rate (p): Maximum burst rate.
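These parameters also bound how long a burst at the peak rate can last: with B tokens saved up and tokens still arriving at rate r during the burst, a burst at rate p satisfies B + rT = pT, so T = B / (p - r). A tiny illustrative calculation (the relation is standard, but it is not stated in the text and the values are assumed):

    def max_burst_duration(bucket_size_B, token_rate_r, peak_rate_p):
        """Longest burst at the peak rate: solve B + r*T = p*T for T."""
        return bucket_size_B / (peak_rate_p - token_rate_r)

    # Example: B = 250 KB, r = 2 MB/s, p = 25 MB/s  ->  about 0.011 s
    print(max_burst_duration(250e3, 2e6, 25e6))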
Pros:

1. Efficient traffic shaping and policing.
2. Prevents network congestion.
3. Supports variable traffic rates.
4. Simple implementation.

Cons:

1. Can be overly restrictive.
2. Difficult to set optimal parameters.

QoS Applications

1. Traffic shaping for WAN links.
2. Policing for SLA (Service Level Agreement) compliance.
3. Rate limiting for real-time applications.

Implementation

1. Cisco IOS: "rate-limit" command.
2. Linux: "tc" command (Traffic Control).
3. Juniper JUNOS: "rate-limit" command.
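For instance, on Linux a token-bucket filter can be attached to an interface with tc (a hedged example; the interface name and rate values are illustrative):

    tc qdisc add dev eth0 root tbf rate 1mbit burst 32kbit latency 400ms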
The token bucket allows bursty traffic at a regulated maximum rate.
The token counter is initialized to zero. Each time a token is added, the counter is incremented by 1. Each time a unit of data is sent, the counter is decremented by 1. When the counter is zero, the host cannot send data.

Fig: Token bucket
Leaky Bucket Algorithm (LBA) vs Token Bucket Algorithm (TBA):

Similarities:

1. Both algorithms control traffic rate and burstiness.
2. Used for traffic shaping and policing.
3. Prevent network congestion.

Differences:

Leaky Bucket Algorithm (LBA):

1. Water flows into the bucket at a variable rate.
2. Water drains at a constant rate (the leak rate).
3. Bucket size limits burstiness.
4. Simple implementation.

Token Bucket Algorithm (TBA):

1. Tokens are added to the bucket at a constant rate (the token rate).
2. Tokens are consumed by incoming traffic.
3. Bucket size limits burstiness.
4. More flexible than LBA.
