Chapter-2 - Burst Traffic Analysis
ASSIGNMENT - 2
Submitted By,
T.Divyasree
2014611007
M.Tech (IT) SS-FT
Traffic Descriptor:
Although the peak data rate is a critical value for the network, it can usually be
ignored if the duration of the peak value is very short. For example, if data are flowing
steadily at the rate of 1 Mbps with a sudden peak data rate of 2 Mbps for just 1 ms, the
network probably can handle the situation. However, if the peak data rate lasts 60 ms, there
may be a problem for the network. The maximum burst size normally refers to the
maximum length of time the traffic is generated at the peak rate.
Effective Bandwidth:
The effective bandwidth is the bandwidth that the network needs to allocate for the
flow of traffic. The effective bandwidth is a function of three values: average data rate, peak
data rate, and maximum burst size. The calculation of this value is very complex.
TRAFFIC PROFILES:
A data flow can have one of the following traffic profiles:
constant bit rate,
variable bit rate,
bursty.
Bursty:
In the bursty data category, the data rate changes suddenly in a very short time. It
may jump from zero, for example, to 1 Mbps in a few microseconds and vice versa. It may
also remain at this value for a while. The average bit rate and the peak bit rate are very
different values in this type of flow. The maximum burst size is significant. This is the most
difficult type of traffic for a network to handle because the profile is very unpredictable. To
handle this type of traffic, the network normally needs to reshape it, using reshaping
techniques, as we will see shortly. Bursty traffic is one of the main causes of congestion in a
network.
QUALITY OF SERVICE
Quality of service (QoS) is an internetworking issue that has been discussed more
than defined. We can informally define quality of service as something a flow seeks to attain.
Flow Characteristics:
Traditionally, four types of characteristics are attributed to a flow:
reliability
delay
jitter
bandwidth.
Reliability:
Reliability is a characteristic that a flow needs. Lack of reliability means losing a
packet or acknowledgment, which entails retransmission. However, the sensitivity of
application programs to reliability is not the same. For example, it is more important that
electronic mail, file transfer, and Internet access have reliable transmissions than telephony or
audio conferencing.
Delay:
Source-to-destination delay is another flow characteristic. Again, applications can
tolerate delay in different degrees. In this case, telephony, audio conferencing, video
conferencing, and remote log-in need minimum delay, while delay in file transfer or e-mail is
less important.
Jitter:
Jitter is the variation in delay for packets belonging to the same flow. For example, if
four packets depart at times 0, 1, 2, 3 and arrive at 20, 21, 22, 23, all have the same delay, 20
units of time. On the other hand, if the above four packets arrive at 21, 23, 21, and 28, they
will have different delays: 21, 22, 19, and 25. For applications such as audio and video, the
first case is completely acceptable; the second case is not. For these applications, it does not
matter whether the packets arrive with a short or a long delay, as long as the delay is the
same for all packets.
High jitter means the difference between delays is large; low jitter means the
variation is small.
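The two cases above can be checked with a few lines of arithmetic. The sketch below recomputes the delays and uses the spread between the largest and smallest delay as a simple jitter measure (the helper names are ours, not standard):

```python
departures = [0, 1, 2, 3]
arrivals_smooth = [20, 21, 22, 23]   # first case in the text
arrivals_jittery = [21, 23, 21, 28]  # second case

def delays(dep, arr):
    # Per-packet delay: arrival time minus departure time.
    return [a - d for d, a in zip(dep, arr)]

def spread(ds):
    # Simple jitter measure: largest delay minus smallest delay.
    return max(ds) - min(ds)

print(delays(departures, arrivals_smooth))   # [20, 20, 20, 20] -> spread 0
print(delays(departures, arrivals_jittery))  # [21, 22, 19, 25] -> spread 6
```

A spread of zero means every packet saw the same delay (no jitter); the larger the spread, the worse the playback problem for audio and video.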
Bandwidth:
Different applications need different bandwidths. In video conferencing we need to
send millions of bits per second to refresh a color screen while the total number of bits in an
e-mail may not reach even a million.
Flow Classes:
Based on the flow characteristics, we can classify flows into groups, with each group
having similar levels of characteristics.
Scheduling:
Packets from different flows arrive at a switch or router for processing. A good
scheduling technique treats the different flows in a fair and appropriate manner. Several
scheduling techniques are designed to improve the quality of service. Three common ones are:
FIFO queuing
Priority queuing
Weighted fair queuing.
FIFO Queuing:
In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue) until the node
(router or switch) is ready to process them. If the average arrival rate is higher than the
average processing rate, the queue will fill up and new packets will be discarded. A FIFO
queue is familiar to those who have had to wait for a bus at a bus stop.
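The fill-up-and-discard behavior described above can be sketched as a small simulation. The rates, buffer size, and function name here are illustrative, not taken from the text:

```python
from collections import deque

def fifo(arrivals_per_tick, process_per_tick, buffer_size, ticks):
    # FIFO queue with a finite buffer: arrivals beyond the buffer are dropped.
    queue = deque()
    dropped = 0
    for _ in range(ticks):
        for _ in range(arrivals_per_tick):
            if len(queue) < buffer_size:
                queue.append(object())   # buffer has room: packet queued
            else:
                dropped += 1             # buffer full: new packet discarded
        for _ in range(process_per_tick):
            if queue:
                queue.popleft()          # first in, first out
    return dropped

# Arrival rate (3/tick) exceeds processing rate (2/tick): once the
# 4-packet buffer fills, one packet is dropped every tick.
print(fifo(arrivals_per_tick=3, process_per_tick=2, buffer_size=4, ticks=10))
```

When the arrival rate does not exceed the processing rate (e.g., 2 in, 2 out), the same function reports zero drops, matching the text's condition for the queue filling up.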
Priority Queuing:
In priority queuing, packets are first assigned to a priority class. Each priority class
has its own queue. The packets in the highest-priority queue are processed first. Packets in
the lowest-priority queue are processed last. Note that the system does not stop serving a
queue until it is empty.
A priority queue can provide better QoS than a FIFO queue because higher-priority traffic,
such as multimedia, can reach the destination with less delay. However, there is a potential
drawback: if there is a continuous flow in a high-priority queue, the packets in the
lower-priority queues will never have a chance to be processed. This condition is called
starvation.
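A minimal sketch of this discipline, assuming just two priority classes (the packet names are ours): the higher-priority queue is served until it is empty before the lower queue is touched, which is exactly what makes starvation possible under a continuous high-priority flow.

```python
from collections import deque

# Each priority class has its own queue; packet labels are illustrative.
high = deque(["H1", "H2", "H3"])
low = deque(["L1", "L2"])

served = []
while high or low:
    if high:
        served.append(high.popleft())   # highest-priority queue first
    else:
        served.append(low.popleft())    # lower queue only when higher is empty

print(served)  # all H packets leave before any L packet
```

If new packets kept arriving in `high` faster than they were served, the loop would never reach `low` — the starvation scenario in the paragraph above.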
Traffic Shaping
Traffic shaping is a mechanism to control the amount and the rate of the traffic sent to
the network. Two techniques can shape traffic: leaky bucket and token bucket.
Leaky Bucket
If a bucket has a small hole at the bottom, the water leaks from the bucket at a
constant rate as long as there is water in the bucket. The rate at which the water leaks does
not depend on the rate at which the water is input to the bucket unless the bucket is empty.
The input rate can vary, but the output rate remains constant. Similarly, in networking, a
technique called leaky bucket can smooth out bursty traffic. Bursty chunks are stored in the
bucket and sent out at an average rate.
We assume that the network has committed a bandwidth of 3 Mbps for a host. The use of
the leaky bucket shapes the input traffic to make it conform to this commitment. The host sends a
burst of data at a rate of 12 Mbps for 2 s, for a total of 24 Mbits of data. The host is silent for 5 s
and then sends data at a rate of 2 Mbps for 3 s, for a total of 6 Mbits of data. In all, the host has
sent 30 Mbits of data in 10 s. The leaky bucket smooths the traffic by sending out data at a rate of
3 Mbps during the same 10 s. Without the leaky bucket, the beginning burst may have hurt the
network by consuming more bandwidth than is set aside for this host. We can also see that the
leaky bucket may prevent congestion. As an analogy, consider the freeway during rush hour
(bursty traffic). If, instead, commuters could stagger their working hours, congestion on our
freeways could be avoided.
A FIFO queue holds the packets. If the traffic consists of fixed-size packets (e.g., cells in
ATM networks), the process removes a fixed number of packets from the queue at each tick of
the clock. If the traffic consists of variable-length packets, the fixed output rate must be based on
the number of bytes or bits.
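The numeric example above can be replayed with a small simulation. This is an illustrative sketch, not a production shaper: one tick represents 1 s, units are Mbits, and the function name is ours.

```python
def leaky_bucket(arrivals, ticks, rate=3):
    """Drain a backlog at a fixed rate. arrivals maps tick -> Mbits arriving."""
    backlog = 0          # Mbits currently stored in the bucket
    sent = []
    for t in range(ticks):
        backlog += arrivals.get(t, 0)   # bursty input fills the bucket
        out = min(backlog, rate)        # constant-rate leak at 3 Mbit/tick
        backlog -= out
        sent.append(out)
    return sent

# Host: 12 Mbps for 2 s (24 Mbits), silent for 5 s, then 2 Mbps for 3 s (6 Mbits).
arrivals = {0: 12, 1: 12, 7: 2, 8: 2, 9: 2}
print(leaky_bucket(arrivals, 10))  # [3, 3, 3, 3, 3, 3, 3, 3, 3, 3]
```

The 30 Mbits of bursty input leave the bucket as a steady 3 Mbps over the same 10 s, matching the committed bandwidth in the text.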
Token Bucket:
The leaky bucket is very restrictive. It does not credit an idle host. For example, if a host
is not sending for a while, its bucket becomes empty. Now if the host has bursty data, the leaky
bucket allows only an average rate. The time when the host was idle is not taken into account. On
the other hand, the token bucket algorithm allows idle hosts to accumulate credit for the future in
the form of tokens. For each tick of the clock, the system sends n tokens to the bucket. The
system removes one token for every cell (or byte) of data sent. For example, if n is 100 and the
host is idle for 100 ticks, the bucket collects 10,000 tokens. Now the host can consume all these
tokens in one tick with 10,000 cells, or the host takes 1000 ticks with 10 cells per tick. In other
words, the host can send bursty data as long as the bucket is not empty. Figure 24.21
shows the idea.
The token bucket can easily be implemented with a counter. The counter is initialized
to zero. Each time a token is added, the counter is incremented by 1. Each time a
unit of data is sent, the counter is decremented by 1. When the counter is zero, the host
cannot send data.
The token bucket allows bursty traffic at a regulated maximum rate.
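The counter description translates directly into code. The sketch below uses the text's numbers (n = 100 tokens per tick, 100 idle ticks); the function name and the simplification of an unbounded counter are ours — practical implementations usually cap the counter at the bucket size to bound the burst.

```python
def token_bucket(n, idle_ticks, burst):
    """n tokens added per tick; returns (cells sent, tokens left) for one burst."""
    counter = 0
    for _ in range(idle_ticks):
        counter += n               # idle host accumulates credit
    sent = min(burst, counter)     # one token consumed per cell sent
    counter -= sent
    return sent, counter

# n = 100, idle for 100 ticks -> 10,000 tokens of credit, so a burst of
# 10,000 cells can go out in a single tick, as in the text's example.
print(token_bucket(100, 100, 10_000))  # (10000, 0)
```

Once the counter reaches zero the host must wait for new tokens, so the average rate over a long interval is still n cells per tick.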
Resource Reservation:
A flow of data needs resources such as a buffer, bandwidth, CPU time, and so on. The
quality of service is improved if these resources are reserved beforehand. We discuss in
this section one QoS model called Integrated Services, which depends heavily on resource
reservation to improve the quality of service.
Admission Control:
Admission control refers to the mechanism used by a router, or a switch, to accept or
reject a flow based on predefined parameters called flow specifications. Before a router
accepts a flow for processing, it checks the flow specifications to see if its capacity (in
terms of bandwidth, buffer size, CPU speed, etc.) and its previous commitments to other
flows can handle the new flow.