Congestion Control Algorithms
➢ Too many packets present in (a part of) the network cause packet delay and loss.
➢ The network and transport layers share the responsibility for handling congestion.
➢ Since congestion occurs within the network, it is the network layer that directly
experiences it and must ultimately determine what to do with the excess packets.
➢ However, the most effective way to control congestion is to reduce the load that the transport layer is placing on the network.
➢ When the number of packets hosts send into the network is well within its carrying capacity, the number delivered is proportional to the number sent.
➢ However, as the offered load approaches the carrying capacity, bursts of traffic
occasionally fill up the buffers inside routers and some packets are lost.
➢ These lost packets consume some of the capacity, so the number of delivered packets falls below the ideal curve. The network may even suffer congestion collapse, in which performance plummets as the offered load increases beyond the capacity.
➢ This can happen because packets can be sufficiently delayed inside the network that they are no longer useful when they leave the network.
➢ For example, in the early Internet, the time a packet spent waiting for a backlog of packets ahead of it to be sent over a slow 56-kbps link could reach the maximum time it was allowed to remain in the network, so it had to be discarded.
➢ A different failure mode occurs when senders retransmit packets that are greatly delayed.
➢ In this case, copies of the same packet will be delivered by the network, again wasting capacity.
➢ To capture these factors, the y-axis of Fig. 5-21 is given as goodput, which is the rate at which useful packets are delivered by the network.
➢ We would like to design networks that avoid congestion where possible and do not suffer from congestion collapse if they do become congested.
➢ Unfortunately, congestion cannot be avoided altogether: if streams of packets suddenly begin arriving on several input lines and all need the same output line, a queue will build up.
➢ Adding more memory may help up to a point, but Nagle realized that if routers have an infinite amount of memory, congestion gets worse, not better.
➢ This is because by the time packets get to the front of the queue, they have already timed out (repeatedly) and duplicates have been sent.
➢ Low-bandwidth links, or routers that process packets more slowly than the line rate, can also become congested.
➢ In this case, the situation can be improved by directing some of the traffic away from the bottleneck to other parts of the network.
➢ Eventually, however, all regions of the network may be congested; in this situation, there is no alternative but to shed load or build a faster network.
➢ It is worth pointing out the difference between congestion control and flow control, as the relationship between them is subtle.
➢ Congestion control has to do with making sure the network is able to carry the
offered traffic.
➢ It is a global issue, involving the behavior of all the hosts and routers.
➢ Flow control, in contrast, relates to the traffic between a particular sender and a
particular receiver.
➢ Its job is to make sure that a fast sender cannot continually transmit data faster than the receiver is able to absorb it.
➢ The presence of congestion means that the load is (temporarily) greater than the resources (in a part of the network) can handle.
➢ Two solutions come to mind: increase the resources or decrease the load.
➢ As shown in Fig. 5-22, these solutions are usually applied on different time scales, either to prevent congestion or to react to it once it has occurred.
➢ The most basic way to avoid congestion is to build a network that is well matched to the traffic that it carries.
➢ If there is a low-bandwidth link on the path along which most traffic is directed,
congestion is likely.
➢ Resources can sometimes be added dynamically when there is serious congestion, for example, turning on spare routers or enabling lines that are normally used only as backups (to make the system fault tolerant), or purchasing bandwidth on the open market.
➢ More often, links and routers that are regularly heavily utilized are upgraded at the
earliest opportunity.
➢ To make the most of the existing network capacity, routes can be tailored to traffic
patterns that change during the day as network users wake and sleep in different
time zones.
➢ For example, routes may be changed to shift traffic away from heavily used paths by changing the shortest path weights.
➢ Some local radio stations have helicopters flying around their cities to report on road
congestion to make it possible for their mobile listeners to route their packets (cars)
around hotspots.
➢ However, sometimes capacity cannot be increased; the only way then to beat back the congestion is to decrease the load.
➢ In a virtual-circuit network, new connections can be refused if they would cause the network to become congested; this is called admission control.
➢ At a finer granularity, when congestion is imminent the network can deliver feedback
to the sources whose traffic flows are responsible for the problem.
➢ The network can request these sources to throttle their traffic, or it can slow down the traffic itself.
➢ Two difficulties with this approach are how to identify the onset of congestion, and how to inform the sources that need to slow down.
➢ To tackle the first issue, routers can monitor measures such as link utilization, queueing delay, or packet loss.
➢ To tackle the second issue, routers must participate in a feedback loop with the
sources.
➢ For a scheme to work correctly, the time scale must be adjusted carefully.
➢ If every time two packets arrive in a row, a router yells STOP and every time a router
is idle for 20 μsec, it yells GO, the system will oscillate wildly and never converge.
➢ On the other hand, if it waits 30 minutes to make sure before saying anything, the congestion-control mechanism will react too sluggishly to be of any real use.
➢ An added concern is having routers send more messages when the network is
already congested.
➢ Finally, when all else fails, the network is forced to discard packets that it cannot
deliver.
➢ A good policy for choosing which packets to discard can help to prevent congestion
collapse.
Traffic-Aware Routing
➢ The goal in taking load into account when computing routes is to shift traffic away
from hotspots that will be the first places in the network to experience congestion.
➢ The most direct way to do this is to set the link weight to be a function of the (fixed) link bandwidth and propagation delay plus the (variable) measured load or average queuing delay.
➢ Least-weight paths will then favor paths that are more lightly loaded, all else being equal.
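A load-aware link weight of this kind can be sketched as follows. This is our own illustrative function, not the text's; the packet size, link parameters, and function name are assumptions chosen for concreteness.

```python
# Toy sketch of a traffic-aware link weight: fixed cost (propagation delay
# plus transmission time) plus the measured average queueing delay.

def link_weight(bandwidth_bps, prop_delay_s, avg_queue_delay_s, packet_bits=12_000):
    """Weight = fixed part of the link cost + variable (measured) load term."""
    transmission_delay = packet_bits / bandwidth_bps  # time to clock one packet out
    return prop_delay_s + transmission_delay + avg_queue_delay_s

# A lightly loaded slow link vs. a heavily loaded fast link:
slow_idle = link_weight(10e6, 0.005, 0.000)   # 10 Mbps, no queueing
fast_busy = link_weight(100e6, 0.005, 0.050)  # 100 Mbps, 50 ms of queueing
# The idle 10-Mbps link gets the lower weight despite its lower bandwidth,
# so least-weight routing steers traffic away from the loaded link.
```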
➢ Consider the network of Fig. 5-23, which is divided into two parts, East and West, connected by two links, CF and EI.
➢ Suppose that most of the traffic between East and West is using link CF, and, as a result, this link is heavily loaded with long delays.
➢ Including queueing delay in the weight used for the shortest path calculation will make EI more attractive.
➢ After the new routing tables have been installed, most of the East-West traffic will now go over EI, loading this link, so in the next update CF will appear to be the shortest path; as a result, the routing tables may oscillate wildly.
➢ If load is ignored and only bandwidth and propagation delay are considered, this problem does not occur.
➢ Attempts to include load but change weights within a narrow range only slow down
routing oscillations.
➢ Two techniques can contribute to a successful solution. The first is multipath routing, in which there can be multiple paths from a source to a destination.
➢ In our example this means that the traffic can be spread across both of the East to
West links.
➢ The second one is for the routing scheme to shift traffic across routes slowly enough that it is able to converge.
➢ Given these difficulties, Internet routing protocols do not generally adjust their routes depending on the load.
➢ Instead, adjustments are made outside the routing protocol by slowly changing its
inputs.
Admission Control
➢ The idea is simple: do not set up a new virtual circuit unless the network can carry the added traffic without becoming congested.
➢ This is better than the alternative, as letting more people in when the network is busy just makes matters worse.
➢ The trick with this approach is working out when a new virtual circuit will lead to
congestion.
➢ The task is straightforward in the telephone network because of the fixed bandwidth of calls (64 kbps for uncompressed audio).
➢ However, virtual circuits in computer networks come in all shapes and sizes.
➢ Thus, the circuit must come with some characterization of its traffic if we are to apply
admission control.
➢ The problem of how to describe traffic in a simple yet meaningful way is difficult because traffic is often bursty: the average rate is only half the story.
➢ For example, traffic that varies while browsing the Web is more difficult to handle than a streaming movie with the same long-term throughput, because the bursts of Web traffic are more likely to congest routers in the network.
➢ A commonly used descriptor that captures this effect is the leaky bucket or token
bucket.
➢ A leaky bucket has two parameters that bound the average rate and the instantaneous burst size of traffic.
➢ Given such traffic descriptions, the network can decide whether to admit the new virtual circuit.
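The token-bucket variant of this descriptor can be sketched as follows. The class and its parameters are our own illustration: a bucket that refills at `rate` bytes/sec up to a capacity of `burst` bytes, so that conforming traffic sends at most rate × t + burst bytes in any interval of length t.

```python
# Illustrative token bucket: bounds both the long-term average rate and the
# maximum burst size, the two parameters mentioned in the text.

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate = rate      # token refill rate (bytes/sec) — bounds average rate
        self.burst = burst    # bucket capacity (bytes) — bounds burst size
        self.tokens = burst   # bucket starts full
        self.last = 0.0

    def allow(self, nbytes, now):
        # Refill tokens for the elapsed time, capped at the bucket capacity.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True       # packet conforms to the traffic description
        return False          # packet exceeds the description

tb = TokenBucket(rate=1000, burst=500)   # 1000 B/s average, 500 B bursts
print(tb.allow(500, now=0.0))  # True: the initial burst fits
print(tb.allow(100, now=0.0))  # False: bucket is now empty
print(tb.allow(100, now=0.1))  # True: 0.1 s refilled 100 tokens
```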
➢ One possibility is for the network to reserve enough capacity along the paths of each of its virtual circuits that congestion will not occur.
➢ In this case, the traffic description is a service agreement for what the network will guarantee its users.
➢ We have prevented congestion, but doing so veers into the related topic of quality of service.
➢ Even without making guarantees, the network can use traffic descriptions for
admission control.
➢ The task is then to estimate how many circuits will fit within the carrying capacity of the network without congestion.
➢ Suppose that virtual circuits that may blast traffic at rates up to 10 Mbps all pass through the same 100-Mbps physical link. How many circuits should be admitted?
➢ Clearly, 10 circuits can be admitted without risking congestion, but this is wasteful in the normal case, since it may rarely happen that all 10 are transmitting at full blast at the same time.
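The case for admitting more than the worst case allows can be made concrete with a back-of-the-envelope model. The specific numbers here are entirely our assumptions for illustration: a 100-Mbps link carrying 10-Mbps circuits, each circuit independently bursting at full rate 20% of the time. Overbooking to n circuits then congests the link only when more than 10 burst at once, and the binomial tail gives the probability of that event.

```python
# Toy admission-control model: probability that more than k_max of n admitted
# circuits transmit at full rate simultaneously (binomial tail).

from math import comb

def p_congestion(n, k_max=10, p_burst=0.2):
    """P(more than k_max of n independent circuits burst at the same time)."""
    return sum(comb(n, k) * p_burst**k * (1 - p_burst)**(n - k)
               for k in range(k_max + 1, n + 1))

for n in (10, 20, 30):
    print(n, p_congestion(n))
# Admitting 10 circuits can never congest the link; overbooking to 20 or 30
# carries a small but growing risk, which the operator can trade off against
# the wasted capacity of worst-case admission.
```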
➢ Admission control can also be combined with traffic-aware routing by considering routes around traffic hotspots as part of the setup procedure.
➢ For example, consider the network illustrated in Fig. 5-24(a), in which two routers are congested, and suppose that a host attached to router A wants to set up a connection to a host attached to router B.
➢ Normally, this connection would pass through one of the congested routers.
➢ To avoid this situation, we can redraw the network as shown in Fig. 5-24(b), omitting the congested routers and all of their lines.
➢ The dashed line shows a possible route for the virtual circuit that avoids the
congested routers.
Traffic Throttling
➢ In the Internet and many other computer networks, senders adjust their transmissions to send as much traffic as the network can readily deliver.
➢ In this setting, the network aims to operate just before the onset of congestion.
➢ When congestion is imminent, it must tell the senders to throttle back their transmissions and slow down.
➢ The term congestion avoidance is sometimes used to contrast this operating point
with the one in which the network has become (overly) congested.
➢ Let us now look at some approaches to throttling traffic that can be used in both datagram and virtual-circuit networks.
➢ First, routers must determine when congestion is approaching, ideally before it has
arrived.
➢ Three possibilities are the utilization of the output links, the buffering of queued
packets inside the router, and the number of packets that are lost due to insufficient
buffering.
➢ Of these possibilities, the second one is the most useful. Averages of utilization do
not directly account for the burstiness of most traffic—a utilization of 50% may be
low for smooth traffic and too high for highly variable traffic.
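The point that average utilization hides burstiness can be seen with a toy queue model (ours, not the text's): two arrival patterns with the same 50% mean load, fed through a queue drained at an assumed service rate of 0.75 units per time slot.

```python
# Toy discrete-time queue: arrivals join each slot, up to `service` units drain.

def max_queue(arrivals, service=0.75):
    """Worst-case backlog seen while feeding `arrivals` through the queue."""
    q, worst = 0.0, 0.0
    for a in arrivals:
        q = max(0.0, q + a - service)  # leftover work after this slot
        worst = max(worst, q)
    return worst

smooth = [0.5] * 8          # steady traffic, 50% average utilization
bursty = [1.0, 0.0] * 4     # alternating line-rate bursts, same 50% average

print(max_queue(smooth))    # 0.0  — the queue never forms
print(max_queue(bursty))    # 0.25 — a backlog builds during each burst
# Same average load, very different queueing behavior: this is why queue
# length is a more useful congestion signal than utilization averages.
```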
➢ Congestion has already set in by the time that packets are lost.
➢ The queueing delay inside routers directly captures any congestion experienced by
packets.
➢ It should be low most of the time, but will jump when there is a burst of traffic that generates a backlog.
➢ A good estimate of the queueing delay, d, can be maintained by periodically sampling the instantaneous queue length, s, and computing d_new = α·d_old + (1 − α)·s, where the constant α determines how fast the router forgets recent history.
➢ Whenever d moves above the threshold, the router notes the onset of congestion.
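This kind of smoothed estimate is an exponentially weighted moving average, d_new = α·d_old + (1 − α)·s, where s is the sampled instantaneous queue length. A minimal sketch (our own code; the sample values and α = 0.9 are illustrative):

```python
# EWMA queue estimate: alpha close to 1 means the router forgets slowly,
# so a single momentary spike does not immediately trigger congestion action.

def update_estimate(d_old, sample, alpha=0.9):
    return alpha * d_old + (1 - alpha) * sample

d = 0.0
samples = [0, 0, 0, 50, 50, 50]   # a sudden burst builds a backlog of 50
for s in samples:
    d = update_estimate(d, s)

print(round(d, 2))   # 13.55 — rising toward 50, but the jump is smoothed
# The router would note the onset of congestion once d crosses its threshold.
```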
➢ The second problem is that routers must deliver timely feedback to the senders that are causing the congestion.
➢ They must warn the senders carefully, without sending many more packets into the already congested network.
Choke Packets
➢ In this approach, the router selects a congested packet and sends a choke packet
back to the source host, giving it the destination found in the packet.
➢ The original packet may be tagged (a header bit is turned on) so that it will not
generate any more choke packets farther along the path and then forwarded in the
usual way.
➢ To avoid increasing the load on the network during a time of congestion, the router may send choke packets only at a low rate.
➢ This sampling is likely to cause choke packets to be sent to fast senders, because they will have the most packets in the queue.
➢ The feedback implicit in this protocol can help prevent congestion, yet not throttle any sender unless it is causing trouble.
➢ For the same reason, it is likely that multiple choke packets will be sent to a given host.
➢ The host should ignore these additional chokes for a fixed time interval until its reduction in traffic takes effect.
➢ After that period, further choke packets indicate that the network is still congested.
➢ An early choke packet used on the Internet was the ICMP SOURCE QUENCH message. It never caught on, though, partly because the circumstances in which it was generated and the effect it had were not clearly specified.
Explicit Congestion Notification
➢ Instead of generating additional packets to warn of congestion, a router can tag any
packet it forwards (by setting a bit in the packet’s header) to signal that it is
experiencing congestion.
➢ When the network delivers the packet, the destination can note that there is congestion and inform the sender when it sends a reply packet.
➢ This design is called ECN (Explicit Congestion Notification) and is used in the Internet.
➢ Two bits in the IP packet header are used to record whether the packet has
experienced congestion.
➢ Packets are unmarked when they are sent, as illustrated in Fig. 5-25.
➢ If any of the routers they pass through is congested, that router will then mark the packet as having experienced congestion as it is forwarded.
➢ The destination will then echo any marks back to the sender as an explicit congestion signal in its next reply packet.
➢ This is shown with a dashed line in the figure to indicate that it happens above the IP level (e.g., in TCP).
➢ The sender must then throttle its transmissions, as in the case of choke packets.
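The ECN round trip can be sketched as a toy model. This is our own simplification, not a real TCP/IP stack: the dictionary fields and function names are invented, and the halving of the sender's rate stands in for whatever throttling policy the transport uses.

```python
# Toy ECN loop: a congested router marks the packet instead of dropping it,
# the receiver echoes the mark, and the sender throttles on seeing the echo.

def forward(packet, routers):
    for congested in routers:        # one flag per hop on the path
        if congested:
            packet["ce"] = True      # set the Congestion Experienced mark
    return packet

def receiver_ack(packet):
    # The destination echoes the congestion mark back in its reply.
    return {"echo_congestion": packet.get("ce", False)}

pkt = forward({"ce": False}, routers=[False, True, False])  # middle hop congested
ack = receiver_ack(pkt)

sender_rate = 100
if ack["echo_congestion"]:
    sender_rate //= 2                # e.g., multiplicative decrease
print(sender_rate)   # 50 — the sender slowed down without any packet loss
```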
Hop-by-Hop Backpressure
➢ At high speeds or over long distances, many new packets may be transmitted after
congestion has been signaled because of the delay before the signal takes effect.
➢ Consider, for example, a host in San Francisco (router A in Fig. 5-26) that is sending
traffic to a host in New York (router D in Fig. 5-26) at the OC-3 speed of 155 Mbps.
➢ An ECN indication will take even longer because it is delivered via the destination.
➢ If the host in New York begins to run out of buffers, it will take about 40 msec for a choke packet to get back to San Francisco to tell the sender to slow down.
➢ Choke packet propagation is illustrated as the second, third, and fourth steps in Fig. 5-26(a).
➢ In those 40 msec, another 6.2 megabits will have been sent. Even if the host in San Francisco completely shuts down immediately, the 6.2 megabits in the pipe will continue to pour in and have to be dealt with.
➢ Only in the seventh diagram in Fig. 5-26(a) will the New York router notice a slower flow.
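The 6.2-megabit figure is just the bandwidth-delay product, which can be checked directly:

```python
# Traffic already in flight when a congestion signal arrives = rate × delay.

rate_bps = 155e6   # OC-3 line rate, 155 Mbps
delay_s = 40e-3    # one-way signalling time, New York back to San Francisco

in_flight_bits = rate_bps * delay_s
print(in_flight_bits / 1e6)   # 6.2 (megabits) — matches the figure in the text
```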
➢ An alternative approach is to have the choke packet take effect at every hop it passes through, as shown in the sequence of Fig. 5-26(b).
➢ Here, as soon as the choke packet reaches F, F is required to reduce the flow to D.
➢ Doing so requires F to devote more buffers to the connection, since the source is still sending away at full blast, but it gives D immediate relief, like a headache remedy in a television commercial.
➢ In the next step, the choke packet reaches E, which tells E to reduce the flow to F.
This action puts a greater demand on E’s buffers but gives F immediate relief.
➢ Finally, the choke packet reaches A and the flow genuinely slows down.
➢ The net effect of this hop-by-hop scheme is to provide quick relief at the point of congestion, at the price of using up more buffers upstream.
➢ In this way, congestion can be nipped in the bud without losing any packets.
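The hop-by-hop sequence can be sketched as a toy simulation. The hop names follow Fig. 5-26 (source A, intermediate routers E and F, destination D); the 155-Mbps full rate and the 55-Mbps throttled rate are our own illustrative numbers.

```python
# Toy hop-by-hop backpressure: the choke packet travels D -> F -> E -> A.
# Each hop throttles as soon as the choke reaches it, temporarily buffering
# the surplus still arriving from its (not yet throttled) upstream neighbor.

path = ["D", "F", "E", "A"]              # order the choke packet visits hops
rates = {"A": 155, "E": 155, "F": 155}   # Mbps each sending hop currently uses
buffers = {"E": 0, "F": 0}               # extra backlog absorbed at each router

SLOW = 55
for i, hop in enumerate(path[1:], start=1):
    if hop in rates:
        rates[hop] = SLOW                # this hop reduces its flow immediately
    upstream = path[i + 1] if i + 1 < len(path) else None
    if hop in buffers and upstream in rates:
        # Surplus from upstream, which is still sending at its old rate.
        buffers[hop] += rates[upstream] - SLOW

print(rates)    # every hop has slowed to 55 by the time the choke reaches A
print(buffers)  # F and E each absorbed the transient surplus meanwhile
```

The final state captures the trade-off in the text: D gets relief as soon as F throttles, but F and then E pay for it with extra buffering until the choke packet finally reaches A and the flow genuinely slows down.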