Module 3: Congestion

Congestion control in networks involves mechanisms to manage packet load and prevent congestion, categorized into open-loop (prevention) and closed-loop (removal) methods. Open-loop techniques include retransmission, window, acknowledgment, discarding, and admission policies, while closed-loop methods involve backpressure, choke packets, and signaling. Quality of service (QoS) is essential for managing flow characteristics like reliability, delay, jitter, and bandwidth, with techniques such as scheduling, traffic shaping, admission control, and resource reservation to enhance QoS.


Congestion Control

• Congestion in a network may occur if the load on the network (the
number of packets sent to the network) is greater than the capacity of
the network (the number of packets a network can handle).

• Congestion control refers to the mechanisms and techniques used to
control congestion and keep the load below the capacity.
• Two broad categories of congestion control mechanisms are:

• open-loop congestion control (prevention)

• closed-loop congestion control (removal)


Open-Loop Congestion Control
• In open-loop congestion control, policies are applied to prevent
congestion before it happens.
• In these mechanisms, congestion control is handled by either the
source or the destination.
1. Retransmission Policy
• Retransmission is sometimes unavoidable.

• If the sender feels that a sent packet is lost or corrupted, the packet needs to be
retransmitted.

• Retransmission may increase congestion in the network. A good
retransmission policy can prevent congestion.
• The retransmission policy and the retransmission timers must be designed to
optimize efficiency and at the same time prevent congestion.
2. Window Policy
The type of window at the sender may also affect congestion. The
Selective Repeat window is better than the Go-Back-N window for
congestion control. In the Go-Back-N window, when the timer for a
packet times out, several packets may be resent, although some may
have arrived safe and sound at the receiver. This duplication may make
the congestion worse. The Selective Repeat window tries to send the
specific packets that have been lost or corrupted.
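The difference can be sketched with a toy retransmission count after a single loss (a Python sketch; the window size and loss position are illustrative, not from the source):

```python
# Sketch: how many packets re-enter the network after one loss,
# for the two window types discussed above.

def gbn_retransmissions(window_size, lost_index):
    """Go-Back-N: the lost packet and everything after it in the
    window are resent, even packets that arrived safely."""
    return window_size - lost_index

def sr_retransmissions(window_size, lost_index):
    """Selective Repeat: only the lost packet itself is resent."""
    return 1

# Window of 7 packets in flight; the packet at index 2 is lost.
print(gbn_retransmissions(7, 2))  # 5 packets resent
print(sr_retransmissions(7, 2))   # 1 packet resent
```

Every extra packet Go-Back-N resends is duplicate load on an already stressed network, which is why Selective Repeat is preferred for congestion control.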
3. Acknowledgment Policy
• The acknowledgment policy imposed by the receiver may also affect
congestion. If the receiver does not acknowledge every packet it
receives, it may slow down the sender and help prevent congestion.
A receiver may send an acknowledgment only if it has a packet to be
sent or a special timer expires. A receiver may decide to acknowledge
only N packets at a time. Sending fewer acknowledgments means
imposing less load on the network.
4. Discarding Policy
• A good discarding policy by the routers may prevent congestion and
at the same time may not harm the integrity of the transmission. For
example, in audio transmission, if the policy is to discard less sensitive
packets when congestion is likely to happen, the quality of sound is
still preserved and congestion is prevented.
5. Admission Policy
• An admission policy, which is a quality-of-service mechanism, can also
prevent congestion in virtual-circuit networks. Switches in a flow first
check the resource requirement of a flow before admitting it to the
network. A router can deny establishing a virtual circuit connection if
there is congestion in the network or if there is a possibility of future
congestion.
Closed-Loop Congestion Control
• Closed-loop congestion control mechanisms try to alleviate congestion
after it happens. Several mechanisms have been used by different
protocols.
Backpressure
• The technique of backpressure refers to a congestion control
mechanism in which a congested node stops receiving data from the
immediate upstream node or nodes. This may cause the upstream
node or nodes to become congested, and they, in turn, reject data
from their upstream node or nodes. Backpressure is a node-to-node
congestion control that starts with a node and propagates, in the
opposite direction of data flow, to the source. The backpressure
technique can be applied only to virtual circuit networks, in which
each node knows the upstream node from which a flow of data is
coming.
Choke Packet
• A choke packet is a packet sent by a node to the source to inform it of
congestion. In the choke packet method, the warning is from the
router, which has encountered congestion, to the source station
directly. The intermediate nodes through which the packet has
travelled are not warned.
Implicit Signaling
• In implicit signaling, there is no communication between the
congested node or nodes and the source. The source guesses that
there is congestion somewhere in the network from other symptoms.
For example, when a source sends several packets and there is no
acknowledgment for a while, one assumption is that the network is
congested. The delay in receiving an acknowledgment is interpreted
as congestion in the network; the source should slow down.
Explicit Signaling
• The node that experiences congestion can explicitly send a signal to
the source or destination. The explicit signaling method is different
from the choke packet method. In the choke packet method, a
separate packet is used for this purpose; in the explicit signaling
method, the signal is included in the packets that carry data.
Quality of Service

• Quality of service (QoS) is an internetworking issue.

• We can define quality of service as something a flow seeks to attain.


Flow Characteristics
• Traditionally, four types of characteristics are attributed to a flow:
reliability, delay, jitter, and bandwidth
Reliability
• Reliability is a characteristic that a flow needs.
• Lack of reliability means losing a packet or acknowledgment, which
entails retransmission.
• The sensitivity of application programs to reliability is not the same.
• For example, it is more important that electronic mail, file transfer,
and Internet access have reliable transmissions than telephony or audio
conferencing.
Delay

• Source-to-destination delay is another flow characteristic.

• Applications can tolerate delay in different degrees.

• In this case, telephony, audio conferencing, video conferencing, and
remote log-in need minimum delay, while delay in file transfer or
e-mail is less important.
Jitter
• Jitter is defined as the variation in the packet delay.
• High jitter means the difference between delays is large; low jitter
means the variation is small.
• Jitter is the variation in delay for packets belonging to the same flow.
• For example, if four packets depart at times 0, 1, 2, and 3 and arrive
at 20, 21, 22, and 23, all have the same delay, 20 units of time.
• On the other hand, if the above four packets arrive at 21, 23, 21, and
28, they will have different delays: 21, 22, 19, and 25.

• For applications such as audio and video, the first case is completely
acceptable; the second case is not.

• For these applications, it does not matter if the packets arrive with a
short or long delay, as long as the delay is the same for all packets.
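The numbers above can be verified with a short Python sketch (jitter is taken here as the spread between the largest and smallest delay, an illustrative measure):

```python
# Sketch: per-packet delay and jitter for the two arrival patterns above.

departures = [0, 1, 2, 3]

def delays(arrivals):
    """Delay of each packet: arrival time minus departure time."""
    return [a - d for a, d in zip(arrivals, departures)]

def jitter(arrivals):
    """Spread between the largest and smallest delay in the flow."""
    d = delays(arrivals)
    return max(d) - min(d)

print(delays([20, 21, 22, 23]))  # [20, 20, 20, 20] -- no jitter
print(delays([21, 23, 21, 28]))  # [21, 22, 19, 25] -- jitter of 6
```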
Bandwidth

• Different applications need different bandwidths.

• In video conferencing we need to send millions of bits per second to
refresh a colour screen, while the total number of bits in an e-mail may
not reach even a million.
TECHNIQUES TO IMPROVE QoS
Four common methods:
1. Scheduling
2. Traffic shaping
3. Admission control
4. Resource reservation
Scheduling

• Packets from different flows arrive at a switch or router for processing.

• A good scheduling technique treats the different flows in a fair and
appropriate manner.

• Several scheduling techniques are designed to improve the quality of
service. We discuss three of them here: FIFO queuing, priority
queuing, and weighted fair queuing.
FIFO Queuing
• In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue)
until the node (router or switch) is ready to process them.
• If the average arrival rate exceeds the average processing rate, the
queue will fill up and new packets will be discarded.
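A minimal sketch of FIFO queuing with a finite buffer (the class name and capacity are illustrative; a full packet is discarded when the buffer is full, i.e. tail drop):

```python
from collections import deque

# Sketch: FIFO queuing with a finite buffer; when the queue is full,
# newly arriving packets are discarded (tail drop).

class FIFOQueue:
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()
        self.dropped = 0

    def arrive(self, packet):
        if len(self.queue) < self.capacity:
            self.queue.append(packet)
        else:
            self.dropped += 1  # buffer full: the packet is discarded

    def process(self):
        """Serve the packet that has waited longest."""
        return self.queue.popleft() if self.queue else None

q = FIFOQueue(capacity=3)
for p in ["p1", "p2", "p3", "p4", "p5"]:
    q.arrive(p)
print(q.process(), q.dropped)  # p1 2
```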
Priority Queuing

• In priority queuing, packets are first assigned to a priority class.

• Each priority class has its queue.

• The packets in the highest-priority queue are processed first.

• Packets in the lowest-priority queue are processed last. Note that
the system does not stop serving a queue until it is empty.
• A priority queue can provide better QoS than the FIFO queue because
higher-priority traffic, such as multimedia, can reach the destination with
less delay.

• However, there is a potential drawback.

• If there is a continuous flow in a high-priority queue, the packets in
the lower-priority queues will never have a chance to be processed.
This is a condition called starvation.
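Strict priority service, and the starvation it can cause, can be sketched as follows (queue contents and names are illustrative):

```python
from collections import deque

# Sketch: strict priority service over two queues. The high-priority
# queue is always served first; with a continuous high-priority flow,
# the low-priority queue starves.

def serve(high, low, slots):
    """Process up to `slots` packets, always preferring the high queue."""
    served = []
    for _ in range(slots):
        if high:
            served.append(high.popleft())
        elif low:
            served.append(low.popleft())
    return served

high = deque(["audio1", "audio2", "audio3"])
low = deque(["mail1", "mail2"])
print(serve(high, low, 4))  # ['audio1', 'audio2', 'audio3', 'mail1']
```

The low-priority queue is touched only once the high-priority queue is empty; if `high` never empties, `mail1` is never served.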
Weighted Fair Queuing
• The packets are still assigned to different classes and admitted to different queues.

• The queues, however, are weighted based on the priority of the queues; higher priority
means a higher weight.

• The system processes packets in each queue in a round-robin fashion with the number of
packets selected from each queue based on the corresponding weight.

• For example, if the weights are 3, 2, and 1, three packets are processed from the first
queue, two from the second queue, and one from the third queue. If the system does not
impose priority on the classes, all weights can be equal. In this way, we have fair queuing
with priority.
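One round of the 3-2-1 example above can be sketched as follows (the function name and queue contents are illustrative):

```python
from collections import deque

# Sketch: one round of weighted fair queuing with the 3-2-1 weights
# from the example above.

def wfq_round(queues, weights):
    """Take up to `weight` packets from each queue, round-robin."""
    served = []
    for q, w in zip(queues, weights):
        for _ in range(w):
            if q:
                served.append(q.popleft())
    return served

q1 = deque(["a1", "a2", "a3", "a4"])
q2 = deque(["b1", "b2", "b3"])
q3 = deque(["c1", "c2"])
print(wfq_round([q1, q2, q3], [3, 2, 1]))
# ['a1', 'a2', 'a3', 'b1', 'b2', 'c1']
```

Unlike strict priority queuing, every queue is guaranteed service on each round, so no class starves.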
Traffic Shaping

• Traffic shaping is a mechanism to control the amount and the rate of
the traffic sent to the network.

• Two techniques can shape traffic: leaky bucket and token bucket.
Leaky Bucket
• If a bucket has a small hole at the bottom, the water leaks from the bucket at
a constant rate as long as there is water in the bucket.
• The rate at which the water leaks does not depend on the rate at which the
water is input to the bucket unless the bucket is empty.
• The input rate can vary, but the output rate remains constant.
• Similarly, in networking, a technique called leaky bucket can smooth out
bursty traffic.
• Bursty chunks are stored in the bucket and sent out at an average rate.
• For example, suppose the host sends a burst of data at a rate of 12 Mbps
for 2 s, is silent for 5 s, and then sends data at a rate of 2 Mbps for
3 s, for a total of 30 Mbits over 10 s.

• The leaky bucket smooths the traffic by sending out data at a rate of 3
Mbps during the same 10 s.

• Without the leaky bucket, the beginning burst may have hurt the
network by consuming more bandwidth than is set aside for this host.
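The smoothing can be sketched in Python, assuming one-second ticks and the burst pattern of the standard example (12 Mbps for 2 s, silence for 5 s, 2 Mbps for 3 s):

```python
# Sketch: leaky-bucket smoothing. Units are Mbits per one-second tick.

def leaky_bucket(arrivals, rate):
    """Queue bursty arrivals and drain at a fixed rate per tick."""
    bucket, output = 0, []
    for a in arrivals:
        bucket += a                  # burst enters the bucket
        sent = min(bucket, rate)     # at most `rate` leaks out per tick
        bucket -= sent
        output.append(sent)
    while bucket > 0:                # keep draining after input stops
        sent = min(bucket, rate)
        bucket -= sent
        output.append(sent)
    return output

arrivals = [12, 12, 0, 0, 0, 0, 0, 2, 2, 2]  # 30 Mbits in 10 s
print(leaky_bucket(arrivals, rate=3))
# [3, 3, 3, 3, 3, 3, 3, 3, 3, 3] -- a constant 3 Mbps for 10 s
```

The input rate varies from 12 Mbps to 0, but the output never exceeds the configured 3 Mbps.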
Token Bucket

• The leaky bucket is very restrictive.

• It does not credit an idle host.

• For example, if a host is not sending for a while, its bucket becomes empty.

• Now if the host has bursty data, the leaky bucket allows only an average
rate.

• The time when the host was idle is not taken into account.
• Token bucket allows idle hosts to accumulate credit for the future in the
form of tokens.
• The system removes one token for every cell of data sent.
• For each clock tick, the system adds n tokens to the bucket.
• If n is 100 and the host is idle for 100 ticks, the bucket collects
10,000 tokens.
• The host can now spend this credit in a burst, for example by sending
10,000 cells in one tick.
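The credit mechanism can be sketched as follows (per-tick granularity and an unbounded bucket are simplifying assumptions):

```python
# Sketch: token-bucket credit. n tokens are added each tick;
# one token is removed per cell sent.

def token_bucket(n, wanted):
    """`wanted` lists the cells the host tries to send each tick.
    Returns the cells actually sent per tick."""
    tokens, sent = 0, []
    for want in wanted:
        tokens += n                  # credit accumulates every tick
        out = min(want, tokens)      # spend at most the saved tokens
        tokens -= out
        sent.append(out)
    return sent

# Idle for 100 ticks with n = 100 (saving 10,000 tokens),
# then a burst of 10,000 cells in one tick.
sent = token_bucket(100, [0] * 100 + [10_000])
print(sent[-1])  # 10000 -- the whole burst goes through at once
```

A leaky bucket would have capped the same burst at its fixed output rate; the token bucket rewards the idle period with burst credit.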
Resource Reservation

• A flow of data needs resources such as a buffer, bandwidth, CPU time,
and so on.
• The quality of service is improved if these resources are reserved
beforehand.
Admission Control

• Admission control refers to the mechanism used by a router or switch
to accept or reject a flow based on predefined parameters called flow
specifications.
• Before a router accepts a flow for processing, it checks the flow
specifications to see if its capacity (in terms of bandwidth, buffer size,
CPU speed, etc.) and its previous commitments to other flows can
handle the new flow.
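A hypothetical flow-specification check might look like the following (the field names and units are assumptions for illustration, not part of any real protocol):

```python
# Sketch: admission control against remaining capacity. Field names
# ("bandwidth", "buffer") and units are illustrative assumptions.

def admit(flow_spec, remaining_bandwidth, remaining_buffer):
    """Accept the flow only if both requirements fit the router's
    remaining capacity after previous commitments."""
    return (flow_spec["bandwidth"] <= remaining_bandwidth
            and flow_spec["buffer"] <= remaining_buffer)

# Router has 4 Mbps and 60 KB of buffer left for new flows.
print(admit({"bandwidth": 2, "buffer": 40}, 4, 60))  # True: admitted
print(admit({"bandwidth": 6, "buffer": 40}, 4, 60))  # False: rejected
```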
