
NETWORK ENGINEERING

ASSIGNMENT - 2

Traffic Characteristics and Descriptors


And
Quality Of Service

Submitted By,
T.Divyasree
2014611007
M.Tech (IT) SS-FT

TRAFFIC CHARACTERISTICS AND DESCRIPTORS


Data Traffic:
The main focus of congestion control and quality of service is data traffic. In
congestion control we try to avoid traffic congestion. In quality of service, we try to create an
appropriate environment for the traffic. So, before talking about congestion control and
quality of service, we discuss the data traffic itself.

Traffic Descriptor:

Traffic descriptors are quantitative values that represent a data flow.


Traffic descriptors are used in three ways:
in a traffic contract (for billing and legal purposes)
as input to a traffic regulator (shaper)
as input to a traffic policer.

Average Data Rate:


The average data rate is the number of bits sent during a period of time, divided by the
number of seconds in that period. The average data rate is a very useful characteristic of
traffic because it indicates the average bandwidth needed by the traffic.
Average data rate = amount of data / time
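For example (an illustrative figure, not from the text above), a host that sends 10 Mbits of
data over a period of 5 s has an average data rate of 10,000,000 bits / 5 s = 2 Mbps.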

Peak Data Rate:


The peak data rate defines the maximum data rate of the traffic. On a plot of the data
rate against time, it is the maximum value on the y axis. The peak data rate is a very important
measurement because it indicates the peak bandwidth that the network needs for the traffic to
pass through without its flow being changed.

Maximum Burst Size:

Although the peak data rate is a critical value for the network, it can usually be
ignored if the duration of the peak value is very short. For example, if data are flowing
steadily at the rate of 1 Mbps with a sudden peak data rate of 2 Mbps for just 1 ms, the
network probably can handle the situation. However, if the peak data rate lasts 60 ms, there
may be a problem for the network. The maximum burst size normally refers to the
maximum length of time the traffic is generated at the peak rate.

Effective Bandwidth:
The effective bandwidth is the bandwidth that the network needs to allocate for the
flow of traffic. The effective bandwidth is a function of three values: average data rate, peak
data rate, and maximum burst size. The calculation of this value is very complex.
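The three descriptors can be measured directly from a trace of the traffic. The following
Python sketch is only illustrative: the trace values, the one-second measurement interval, and
the peak-rate threshold are assumptions, not values from this text. It computes the average
data rate, the peak data rate, and the maximum burst size as the longest run of intervals at
the peak rate.

def traffic_descriptors(bits_per_interval, peak_threshold):
    """Return (average rate, peak rate, maximum burst size in intervals)."""
    average_rate = sum(bits_per_interval) / len(bits_per_interval)
    peak_rate = max(bits_per_interval)

    # Maximum burst size: longest run of consecutive intervals at or above
    # the peak-rate threshold.
    longest = current = 0
    for rate in bits_per_interval:
        current = current + 1 if rate >= peak_threshold else 0
        longest = max(longest, current)
    return average_rate, peak_rate, longest

# Illustrative trace: 1 Mbps background with a 2 Mbps burst lasting 3 intervals.
trace = [1_000_000] * 7 + [2_000_000] * 3
print(traffic_descriptors(trace, peak_threshold=2_000_000))
# (1300000.0, 2000000, 3)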

TRAFFIC PROFILES:
A data flow can have one of the following traffic profiles:
constant bit rate,
variable bit rate,
bursty.

Constant Bit Rate:


A constant-bit-rate (CBR), or a fixed-rate, traffic model has a data rate that does not
change. In this type of flow, the average data rate and the peak data rate are the same. The
maximum burst size is not applicable. This type of traffic is very easy for a network to handle
since it is predictable. The network knows in advance how much bandwidth to allocate for
this type of flow.

Variable Bit Rate:


In the variable-bit-rate (VBR) category, the rate of the data flow changes in time,
with the changes smooth instead of sudden and sharp. In this type of flow, the average data
rate and the peak data rate are different. The maximum burst size is usually a small value.
This type of traffic is more difficult to handle than constant-bit-rate traffic, but it normally
does not need to be reshaped.

Bursty:
In the bursty data category, the data rate changes suddenly in a very short time. It
may jump from zero, for example, to 1 Mbps in a few microseconds and vice versa. It may
also remain at this value for a while. The average bit rate and the peak bit rate are very
different values in this type of flow. The maximum burst size is significant. This is the most
difficult type of traffic for a network to handle because the profile is very unpredictable. To
handle this type of traffic, the network normally needs to reshape it, using reshaping
techniques, as we will see shortly. Bursty traffic is one of the main causes of congestion in a
network.

QUALITY OF SERVICE
Quality of service (QoS) is an internetworking issue that has been discussed more
than defined. We can informally define quality of service as something a flow seeks to attain.

Flow Characteristics:
Traditionally, four types of characteristics are attributed to a flow:
reliability
delay
jitter
bandwidth.

Reliability:
Reliability is a characteristic that a flow needs. Lack of reliability means losing a
packet or acknowledgment, which entails retransmission. However, the sensitivity of
application programs to reliability is not the same. For example, it is more important that
electronic mail, file transfer, and Internet access have reliable transmissions than telephony or
audio conferencing.

Delay:
Source-to-destination delay is another flow characteristic. Again applications can
tolerate delay in different degrees. In this case, telephony, audio conferencing, video
conferencing, and remote log-in need minimum delay, while delay in file transfer or e-mail is
less important.

Jitter:
Jitter is the variation in delay for packets belonging to the same flow. For example, if
four packets depart at times 0, 1, 2, 3 and arrive at 20, 21, 22, 23, all have the same delay, 20
units of time. On the other hand, if the above four packets arrive at 21, 23, 21, and 28, they
will have different delays: 21, 22, 19, and 25. For applications such as audio and video, the
first case is completely acceptable; the second case is not. For these applications, it does not
matter whether the packets arrive with a short or a long delay as long as the delay is the same
for all packets.

Jitter is defined as the variation in the packet delay. High jitter means the difference
between delays is large; low jitter means the variation is small.
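As a rough sketch of the example above, using a simplified view of jitter as the spread
between the largest and smallest per-packet delay (transport standards such as RTP define a
more elaborate running estimate, not shown here), the calculation could look like this:

def packet_delays(departures, arrivals):
    return [arrive - depart for depart, arrive in zip(departures, arrivals)]

def jitter(departures, arrivals):
    delays = packet_delays(departures, arrivals)
    return max(delays) - min(delays)

departures = [0, 1, 2, 3]
print(packet_delays(departures, [20, 21, 22, 23]))  # [20, 20, 20, 20] -> no jitter
print(jitter(departures, [21, 23, 21, 28]))         # 6, from delays [21, 22, 19, 25]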

Bandwidth:
Different applications need different bandwidths. In video conferencing we need to
send millions of bits per second to refresh a color screen, while the total number of bits in an
e-mail may not even reach a million.

Flow Classes:
Based on the flow characteristics, we can classify flows into groups, with each group
having similar levels of characteristics.

TECHNIQUES TO IMPROVE QoS


Four common techniques can be used to improve the quality of service:
Scheduling
Traffic shaping
Admission control
Resource reservation.

Scheduling:
Packets from different flows arrive at a switch or router for processing. A good
scheduling technique treats the different flows in a fair and appropriate manner. Several
scheduling techniques are designed to improve the quality of service. Three of them are
discussed here:

FIFO queuing
Priority queuing
Weighted fair queuing.

FIFO Queuing:
In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue) until the node
(router or switch) is ready to process them. If the average arrival rate is higher than the
average processing rate, the queue will fill up and new packets will be discarded. A FIFO
queue is familiar to those who have had to wait for a bus at a bus stop.
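A minimal sketch of FIFO queuing with a bounded buffer (the capacity value is an
assumption for illustration): packets that arrive when the buffer is full are simply discarded.

from collections import deque

class FIFOQueue:
    def __init__(self, capacity):
        self.buffer = deque()
        self.capacity = capacity

    def enqueue(self, packet):
        if len(self.buffer) >= self.capacity:
            return False            # buffer full: the packet is discarded
        self.buffer.append(packet)
        return True

    def dequeue(self):
        # The node processes packets strictly in arrival order.
        return self.buffer.popleft() if self.buffer else None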

Priority Queuing:
In priority queuing, packets are first assigned to a priority class. Each priority class
has its own queue. The packets in the highest-priority queue are processed first. Packets in
the lowest-priority queue are processed last. Note that the system does not stop serving a
queue until it is empty.

A priority queue can provide better QoS than a FIFO queue because higher-priority traffic,
such as multimedia, can reach the destination with less delay. However, there is a potential
drawback. If there is a continuous flow in a high-priority queue, the packets in the
lower-priority queues will never have a chance to be processed. This condition is called
starvation.
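A brief sketch of the idea, assuming two priority classes (class 0 highest). Note how a
continuously busy high-priority queue keeps the scheduler from ever reaching the lower
queue, which is exactly the starvation problem described above.

from collections import deque

class PriorityScheduler:
    def __init__(self, num_classes=2):
        # Index 0 is the highest-priority class; each class has its own queue.
        self.queues = [deque() for _ in range(num_classes)]

    def enqueue(self, packet, priority_class):
        self.queues[priority_class].append(packet)

    def dequeue(self):
        # Serve the highest-priority non-empty queue first.
        for queue in self.queues:
            if queue:
                return queue.popleft()
        return None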

Weighted Fair Queuing:


A better scheduling method is weighted fair queuing. In this technique, the packets
are still assigned to different classes and admitted to different queues. The queues, however,
are weighted based on the priority of the queues; higher priority means a higher weight. The
system processes packets in each queue in a round-robin fashion with the number of packets
selected from each queue based on the corresponding weight. For example, if the weights are
3, 2, and 1, three packets are processed from the first queue, two from the second queue, and
one from the third queue. If the system does not impose priority on the classes, all weights
can be equal. In this way, we have fair queuing with priority.
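A simplified weighted round-robin sketch of the example above (weights 3, 2, 1). This is only
the packet-count approximation the text describes; true weighted fair queuing computes
per-packet virtual finish times, which is not shown here.

from collections import deque

def weighted_round_robin(queues, weights):
    """Yield packets from the queues according to their weights."""
    while any(queues):
        for queue, weight in zip(queues, weights):
            # Take up to `weight` packets from this queue per round.
            for _ in range(weight):
                if queue:
                    yield queue.popleft()

q1 = deque(['a1', 'a2', 'a3', 'a4'])
q2 = deque(['b1', 'b2'])
q3 = deque(['c1'])
print(list(weighted_round_robin([q1, q2, q3], [3, 2, 1])))
# ['a1', 'a2', 'a3', 'b1', 'b2', 'c1', 'a4']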

Traffic Shaping
Traffic shaping is a mechanism to control the amount and the rate of the traffic sent to
the network. Two techniques can shape traffic: leaky bucket and token bucket.

Leaky Bucket

If a bucket has a small hole at the bottom, the water leaks from the bucket at a
constant rate as long as there is water in the bucket. The rate at which the water leaks does
not depend on the rate at which the water is input to the bucket unless the bucket is empty.
The input rate can vary, but the output rate remains constant. Similarly, in networking, a
technique called leaky bucket can smooth out bursty traffic. Bursty chunks are stored in the
bucket and sent out at an average rate.
We assume that the network has committed a bandwidth of 3 Mbps for a host. The use of
the leaky bucket shapes the input traffic to make it conform to this commitment. The host sends a
burst of data at a rate of 12 Mbps for 2 s, for a total of 24 Mbits of data. The host is silent for 5 s
and then sends data at a rate of 2 Mbps for 3 s, for a total of 6 Mbits of data. In all, the host has
sent 30 Mbits of data in 10 s. The leaky bucket smooths the traffic by sending out data at a rate of
3 Mbps during the same 10 s. Without the leaky bucket, the beginning burst may have hurt the
network by consuming more bandwidth than is set aside for this host. We can also see that the
leaky bucket may prevent congestion. As an analogy, consider the freeway during rush hour
(bursty traffic). If, instead, commuters could stagger their working hours, congestion on our
freeways could be avoided.
A FIFO queue holds the packets. If the traffic consists of fixed-size packets (e.g., cells in
ATM networks), the process removes a fixed number of packets from the queue at each tick of
the clock. If the traffic consists of variable-length packets, the fixed output rate must be based on
the number of bytes or bits.

The following is an algorithm for variable-length packets:


1. Initialize a counter to n at the tick of the clock.
2. If n is greater than the size of the packet, send the packet and decrement the
counter by the packet size. Repeat this step until n is smaller than the packet size.
3. Reset the counter and go to step 1.

A leaky bucket algorithm shapes bursty traffic into fixed-rate traffic by averaging the
data rate. It may drop the packets if the bucket is full.
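A sketch of the variable-length-packet algorithm above in Python (the packet sizes and the
value of n below are illustrative assumptions): at every clock tick the counter is reset to n,
and queued packets are sent as long as the remaining counter covers the next packet's size.

from collections import deque

def leaky_bucket_tick(queue, n):
    """Run one clock tick; return the list of packet sizes sent during it."""
    counter = n                      # step 1: initialize the counter to n
    sent = []
    while queue and queue[0] <= counter:
        size = queue.popleft()       # step 2: send and decrement by the packet size
        counter -= size
        sent.append(size)
    return sent                      # step 3: the counter is reset at the next tick

packets = deque([200, 700, 300, 500])        # packet sizes in bytes (illustrative)
print(leaky_bucket_tick(packets, n=1000))    # [200, 700]
print(leaky_bucket_tick(packets, n=1000))    # [300, 500]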

Token Bucket:
The leaky bucket is very restrictive. It does not credit an idle host. For example, if a host
is not sending for a while, its bucket becomes empty. Now if the host has bursty data, the leaky
bucket allows only an average rate. The time when the host was idle is not taken into account. On
the other hand, the token bucket algorithm allows idle hosts to accumulate credit for the future in
the form of tokens. For each tick of the clock, the system sends n tokens to the bucket. The
system removes one token for every cell (or byte) of data sent. For example, if n is 100 and the
host is idle for 100 ticks, the bucket collects 10,000 tokens. Now the host can consume all these
tokens in one tick with 10,000 cells, or the host takes 1000 ticks with 10 cells per tick. In other
words, the host can send bursty data as long as the bucket is not empty.
The token bucket can easily be implemented with a counter. The counter is initialized
to zero. Each time a token is added, the counter is incremented by 1. Each time a
unit of data is sent, the counter is decremented by 1. When the counter is zero, the host
cannot send data.
The token bucket allows bursty traffic at a regulated maximum rate.
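A counter-based sketch following the description above, using the numbers from the example
(n = 100 tokens per tick, 100 idle ticks). The capacity cap is an added assumption to keep the
bucket from growing without bound.

class TokenBucket:
    def __init__(self, tokens_per_tick, capacity):
        self.tokens_per_tick = tokens_per_tick
        self.capacity = capacity
        self.counter = 0                 # the counter starts at zero

    def tick(self):
        # n tokens are added to the bucket at each tick of the clock.
        self.counter = min(self.counter + self.tokens_per_tick, self.capacity)

    def send(self, cells):
        """Send as many of the requested cells as the tokens allow."""
        sent = min(cells, self.counter)  # one token is spent per cell
        self.counter -= sent
        return sent

bucket = TokenBucket(tokens_per_tick=100, capacity=10_000)
for _ in range(100):                     # the host is idle for 100 ticks
    bucket.tick()
print(bucket.send(10_000))               # 10000 -> the whole burst goes out in one tick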

Combining Token Bucket and Leaky Bucket:


The two techniques can be combined to credit an idle host and at the same time
regulate the traffic. The leaky bucket is applied after the token bucket; the rate of the leaky
bucket needs to be higher than the rate of tokens dropped in the bucket.
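A minimal, self-contained sketch of the combination (the burst size, saved token count, and
leak rate below are illustrative assumptions): the burst is first limited by the tokens
accumulated while the host was idle, and the admitted cells are then drained at a constant
rate per tick.

from collections import deque

def combined_shaper(burst_cells, accumulated_tokens, leak_rate):
    """Token bucket stage followed by a leaky bucket stage."""
    admitted = min(burst_cells, accumulated_tokens)   # token bucket: spend saved credit
    queue = deque(range(admitted))                    # admitted cells wait in the leaky bucket
    ticks = 0
    while queue:
        for _ in range(min(leak_rate, len(queue))):   # constant-rate output per tick
            queue.popleft()
        ticks += 1
    return admitted, ticks

# 10,000-cell burst, 8,000 saved tokens, leaky bucket drains 500 cells per tick.
print(combined_shaper(10_000, 8_000, 500))            # (8000, 16)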

Resource Reservation:
A flow of data needs resources such as a buffer, bandwidth, CPU time, and so on. The
quality of service is improved if these resources are reserved beforehand. We discuss in
this section one QoS model called Integrated Services, which depends heavily on resource
reservation to improve the quality of service.

Admission Control:
Admission control refers to the mechanism used by a router, or a switch, to accept or
reject a flow based on predefined parameters called flow specifications. Before a router
accepts a flow for processing, it checks the flow specifications to see if its capacity (in
terms of bandwidth, buffer size, CPU speed, etc.) and its previous commitments to other
flows can handle the new flow.
