Congestion Control
24-1 DATA TRAFFIC
Figure 24.1 Traffic descriptors
Figure 24.2 Three traffic profiles
24-2 CONGESTION
Congestion Control Introduction:
◼ When too many packets are present in (a part of) the subnet,
performance degrades. This situation is called congestion.
◼ As traffic increases too far, the routers are no longer able to cope
and they begin losing packets.
◼ At very high traffic, performance collapses completely and almost no
packets are delivered.
◼ Reasons for Congestion:
◼ Slow processors.
◼ A high stream of packets sent from one of the senders.
◼ Insufficient memory.
◼ Too much router memory can also add to congestion: queues grow so long that packets time out before they are forwarded and get retransmitted.
◼ Low-bandwidth lines.
◼ So what is congestion control? Congestion control has to do with making sure the subnet is able to carry the offered traffic.
◼ Congestion control and flow control are often confused, but both help reduce congestion.
General Principles of Congestion Control
◼ Three-step approach to applying congestion control:
1. Monitor the system to detect when and where congestion occurs.
2. Pass this information to the places where action can be taken.
3. Adjust system operation to correct the problem.
Figure: Packet delay and throughput as functions of load
24-3 CONGESTION CONTROL
Figure 24.5 Congestion control categories
Warning Bit or Backpressure:
◼ The DECNET architecture (Digital Equipment Corporation's network for connecting minicomputers) signaled the warning state by setting a special bit in the packet's header; the destination echoed the bit in its acknowledgements.
◼ The source then cut back on traffic.
◼ The source monitored the fraction of acknowledgements with the bit set and adjusted its transmission rate accordingly (see the sketch below).
◼ As long as the warning bits continued to flow in, the source continued to decrease its transmission rate. When they slowed to a trickle, it increased its transmission rate.
◼ Disadvantage: since every router along the path could set the warning bit, traffic increased only when no router was in trouble.
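A minimal sketch of the source-side reaction, assuming an additive-increase / multiplicative-decrease policy. The class name, thresholds, and constants are illustrative assumptions, not DECNET's actual mechanism.

# Sketch: source-side rate adjustment driven by the warning bit in ACKs.
# All names and the AIMD constants below are illustrative assumptions.

class WarningBitSender:
    def __init__(self, rate_pps=100.0, min_rate=1.0, max_rate=1000.0):
        self.rate = rate_pps          # current sending rate (packets/second)
        self.min_rate = min_rate
        self.max_rate = max_rate

    def on_ack_window(self, acks):
        """Adjust the rate once per window of acknowledgements.

        `acks` is a list of booleans: True if the ACK carried the warning bit.
        """
        if not acks:
            return
        warned_fraction = sum(acks) / len(acks)
        if warned_fraction > 0.5:
            # Warning bits keep flowing in: cut the rate multiplicatively.
            self.rate = max(self.min_rate, self.rate * 0.5)
        elif warned_fraction < 0.1:
            # Warnings have slowed to a trickle: probe upward additively.
            self.rate = min(self.max_rate, self.rate + 10.0)
        # Between the two thresholds, hold the current rate.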
Figure 24.6 Backpressure method for alleviating congestion
Choke Packets:
◼ The router sends a choke packet back to the source host, giving it the destination found in the packet.
◼ The original packet is tagged (a header bit is turned on) so that it will not generate any more choke packets farther along the path, and is then forwarded in the usual way.
◼ When the source host gets the choke packet, it is required to reduce the traffic sent to the specified destination by X percent.
◼ See the next figure: the flow starts being reduced at step 5.
◼ The reduction grows with each further choke packet, e.g., from 25% to 50% to 75%, and so on (see the sketch after the figure).
◼ The router can maintain several thresholds; depending on which threshold is crossed, the choke packet carries a
◼ Mild warning
◼ Stern warning
◼ Ultimatum.
◼ Variation: use queue length or buffer occupancy instead of line utilization as the trigger signal. This helps reduce traffic, but choke packets themselves also add to the traffic.
Figure 24.7 Choke packet
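A rough source-side sketch, assuming (hypothetically) an increasing reduction schedule and an ignore interval, so that choke packets triggered by packets already in flight do not cause repeated cuts. Names and constants are illustrative, not a standard protocol.

import time

class ChokeAwareSender:
    # Reduce traffic by this fraction on each successive choke packet (assumption).
    REDUCTIONS = [0.25, 0.50, 0.75]

    def __init__(self, rate_pps=100.0, ignore_interval=2.0):
        self.rate = rate_pps
        self.ignore_interval = ignore_interval   # seconds to ignore further chokes
        self.ignore_until = 0.0
        self.level = 0                            # index into REDUCTIONS

    def on_choke_packet(self, destination):
        now = time.monotonic()
        if now < self.ignore_until:
            return  # chokes caused by packets already in flight are ignored
        cut = self.REDUCTIONS[min(self.level, len(self.REDUCTIONS) - 1)]
        self.rate *= (1.0 - cut)
        self.level += 1
        self.ignore_until = now + self.ignore_interval

    def on_quiet_period(self):
        # No choke packets for a while: cautiously raise the rate again.
        self.level = 0
        self.rate *= 1.1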
Congestion Prevention Policies
Figure 24.15 Flow characteristics
24-6 TECHNIQUES TO IMPROVE QoS
Scheduling
◼ FIFO queue
◼ Priority queue
◼ Weighted fair queue
◼ RED
Figure 24.16 FIFO queue
Figure 24.17 Priority queuing
Figure 24.18 Weighted fair queuing
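As a concrete illustration of the last discipline, here is a minimal weighted fair queuing sketch. It uses a simplified serve-weight-packets-per-round scheme rather than the virtual finish times of full WFQ, and all names and weights are assumptions.

from collections import deque

class WeightedFairQueue:
    def __init__(self, weights):
        # weights: dict mapping flow id -> integer weight
        self.weights = weights
        self.queues = {flow: deque() for flow in weights}

    def enqueue(self, flow, packet):
        self.queues[flow].append(packet)

    def next_round(self):
        """Yield packets for one scheduling round: up to `weight` per flow."""
        for flow, weight in self.weights.items():
            q = self.queues[flow]
            for _ in range(weight):
                if not q:
                    break
                yield q.popleft()

# Usage sketch: higher-weight flows get more packets served per round.
wfq = WeightedFairQueue({"voice": 3, "web": 2, "bulk": 1})
wfq.enqueue("voice", "v1"); wfq.enqueue("web", "w1"); wfq.enqueue("bulk", "b1")
print(list(wfq.next_round()))   # ['v1', 'w1', 'b1']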
Random Early Detection:
◼ The idea is to discard packets before all the buffer space is really exhausted.
◼ A popular algorithm for doing this is RED (Random Early Detection) (Floyd and Jacobson, 1993).
◼ The response to lost packets is for the source to slow down.
◼ Lost packets are mostly due to buffer overruns rather than transmission errors.
◼ The idea is that there is time for action to be taken before it is too late.
◼ When should the router start discarding? Routers maintain a running average of their queue lengths (see the sketch after this list).
◼ When the average queue length on some line exceeds a threshold, the line is said to be congested and action is taken.
◼ How should the router tell the source about the problem?
◼ One way is to send it a choke packet.
◼ The other option is to just discard the selected packet and not report it at all.
◼ The source will eventually notice the lack of acknowledgements and take action.
◼ Thus it slows down instead of trying harder.
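A compact sketch of the RED idea, using an exponentially weighted moving average of the queue length and a drop probability that rises between a minimum and a maximum threshold. The constants (thresholds, EWMA weight, max_p) are illustrative assumptions.

import random

class REDQueue:
    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
        self.avg = 0.0          # running average queue length
        self.queue = []
        self.min_th = min_th
        self.max_th = max_th
        self.max_p = max_p
        self.weight = weight

    def enqueue(self, packet):
        # Update the running average of the queue length (EWMA).
        self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)

        if self.avg < self.min_th:
            self.queue.append(packet)           # no congestion: accept
        elif self.avg >= self.max_th:
            return "dropped"                     # severe congestion: always drop
        else:
            # Congestion building: drop with a probability that rises linearly
            # from 0 to max_p as avg moves from min_th to max_th.
            p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            if random.random() < p:
                return "dropped"                 # silent drop; the source infers it from missing ACKs
            self.queue.append(packet)
        return "queued"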
Traffic Shaping
◼ Leaky bucket
◼ Token bucket
Figure 24.19 Leaky bucket
Figure 24.20 Leaky bucket implementation
Note: A leaky bucket algorithm shapes bursty traffic into fixed-rate traffic by averaging the data rate. It may drop packets if the bucket is full.
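A small leaky bucket sketch, assuming a fixed-size queue drained at a constant rate per clock tick; the bucket size and output rate are illustrative.

from collections import deque

class LeakyBucket:
    def __init__(self, capacity=10, out_rate=2):
        self.capacity = capacity     # max packets the bucket can hold
        self.out_rate = out_rate     # packets released per clock tick
        self.bucket = deque()

    def arrive(self, packet):
        if len(self.bucket) < self.capacity:
            self.bucket.append(packet)
            return True
        return False                 # bucket full: the packet is discarded

    def tick(self):
        """Called once per clock tick; releases at most out_rate packets,
        so bursty arrivals leave at a fixed rate."""
        released = []
        for _ in range(self.out_rate):
            if not self.bucket:
                break
            released.append(self.bucket.popleft())
        return released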
Note: The token bucket allows bursty traffic at a regulated maximum rate.
Figure 24.21 Token bucket
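A matching token bucket sketch: tokens accumulate at a fixed rate and each departing packet consumes one token, so idle periods are saved up as permission for a later burst. The capacity and token rate are illustrative assumptions.

class TokenBucket:
    def __init__(self, capacity=10, token_rate=2):
        self.capacity = capacity       # max tokens that can be saved up
        self.token_rate = token_rate   # tokens added per clock tick
        self.tokens = 0

    def tick(self):
        """Add tokens once per clock tick, up to the bucket capacity."""
        self.tokens = min(self.capacity, self.tokens + self.token_rate)

    def try_send(self, n_packets):
        """Send up to n_packets, limited by the tokens available."""
        sent = min(n_packets, self.tokens)
        self.tokens -= sent
        return sent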
Resource Reservation and
Admission Control
◼ Buffer space, CPU time, and bandwidth are resources that can be reserved for particular flows for a particular time to maintain QoS.
◼ The mechanism used by routers to accept or reject a flow based on its flow specification is called admission control (a small sketch follows).
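A minimal admission-control sketch: the router accepts a new flow only if the resources named in its flow specification are still available. The FlowSpec fields and resource names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class FlowSpec:
    bandwidth_mbps: float
    buffer_kb: int

class Router:
    def __init__(self, bandwidth_mbps=100.0, buffer_kb=1024):
        self.free_bandwidth = bandwidth_mbps
        self.free_buffer = buffer_kb

    def admit(self, spec: FlowSpec) -> bool:
        """Accept the flow and reserve its resources, or reject it."""
        if spec.bandwidth_mbps <= self.free_bandwidth and spec.buffer_kb <= self.free_buffer:
            self.free_bandwidth -= spec.bandwidth_mbps
            self.free_buffer -= spec.buffer_kb
            return True
        return False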