
CONGESTION CONTROL ALGORITHMS

➢ Too many packets present in (a part of) the network cause packet delay and loss that degrade performance.

➢ This situation is called congestion.

➢ The network and transport layers share the responsibility for handling congestion.

➢ Since congestion occurs within the network, it is the network layer that directly

experiences it and must ultimately determine what to do with the excess packets.

➢ However, the most effective way to control congestion is to reduce the load that the

transport layer is placing on the network.

➢ This requires the network and transport layers to work together.

➢ Figure 5-21 depicts the onset of congestion.

➢ When the number of packets hosts send into the network is well within its carrying

capacity, the number delivered is proportional to the number sent.

DR V RAMA KRISHNA TOTTEMPUDI 1


➢ If twice as many are sent, twice as many are delivered.

➢ However, as the offered load approaches the carrying capacity, bursts of traffic

occasionally fill up the buffers inside routers and some packets are lost.

➢ These lost packets consume some of the capacity, so the number of delivered

packets falls below the ideal curve.

➢ The network is now congested.

➢ Unless the network is well designed, it may experience a congestion collapse, in

which performance plummets as the offered load increases beyond the capacity.

➢ This can happen because packets can be sufficiently delayed inside the network that

they are no longer useful when they leave the network.

➢ For example, in the early Internet, the time a packet spent waiting for a backlog of

packets ahead of it to be sent over a slow 56-kbps link could reach the maximum

time it was allowed to remain in the network.

➢ It then had to be thrown away.

➢ A different failure mode occurs when senders retransmit packets that are greatly

delayed, thinking that they have been lost.

➢ In this case, copies of the same packet will be delivered by the network, again

wasting its capacity.

➢ To capture these factors, the y-axis of Fig. 5-21 is given as goodput, which is the rate

at which useful packets are delivered by the network.

➢ We would like to design networks that avoid congestion where possible and do not

suffer from congestion collapse if they do become congested.

➢ Unfortunately, congestion cannot wholly be avoided.

➢ If all of a sudden, streams of packets begin arriving on three or four input lines and

all need the same output line, a queue will build up.

➢ If there is insufficient memory to hold all of them, packets will be lost.

➢ Adding more memory may help up to a point, but Nagle realized that if routers have

an infinite amount of memory, congestion gets worse, not better.

➢ This is because by the time packets get to the front of the queue, they have already

timed out (repeatedly) and duplicates have been sent.

➢ This makes matters worse, not better—it leads to congestion collapse.

➢ Low-bandwidth links or routers that process packets more slowly than the line rate

can also become congested.

➢ In this case, the situation can be improved by directing some of the traffic away from

the bottleneck to other parts of the network.

➢ Eventually, however, all regions of the network will be congested.

➢ In this situation, there is no alternative but to shed load or build a faster network.

➢ It is worth pointing out the difference between congestion control and flow control,

as the relationship is a very subtle one.

➢ Congestion control has to do with making sure the network is able to carry the

offered traffic.

➢ It is a global issue, involving the behavior of all the hosts and routers.

➢ Flow control, in contrast, relates to the traffic between a particular sender and a

particular receiver.

➢ Its job is to make sure that a fast sender cannot continually transmit data faster than

the receiver is able to absorb it.

Approaches to Congestion Control

➢ The presence of congestion means that the load is (temporarily) greater than the

resources (in a part of the network) can handle.

➢ Two solutions come to mind: increase the resources or decrease the load.

➢ As shown in Fig. 5-22, these solutions are usually applied on different time scales to

either prevent congestion or react to it once it has occurred.

➢ The most basic way to avoid congestion is to build a network that is well matched to

the traffic that it carries.

➢ If there is a low-bandwidth link on the path along which most traffic is directed,

congestion is likely.

➢ Sometimes resources can be added dynamically when there is serious congestion,

for example, turning on spare routers or enabling lines that are normally used only

as backups (to make the system fault tolerant) or purchasing bandwidth on the open

market.

➢ More often, links and routers that are regularly heavily utilized are upgraded at the

earliest opportunity.

➢ This is called provisioning and happens on a time scale of months, driven by long-term traffic trends.

➢ To make the most of the existing network capacity, routes can be tailored to traffic

patterns that change during the day as network users wake and sleep in different

time zones.

➢ For example, routes may be changed to shift traffic away from heavily used paths by

changing the shortest path weights.

➢ Some local radio stations have helicopters flying around their cities to report on road

congestion to make it possible for their mobile listeners to route their packets (cars)

around hotspots.

➢ This is called traffic-aware routing.

➢ Splitting traffic across multiple paths is also helpful.

➢ However, sometimes it is not possible to increase capacity.

➢ The only way then to beat back the congestion is to decrease the load.

➢ In a virtual-circuit network, new connections can be refused if they would cause the

network to become congested.

➢ This is called admission control.

➢ At a finer granularity, when congestion is imminent the network can deliver feedback

to the sources whose traffic flows are responsible for the problem.

➢ The network can request these sources to throttle their traffic, or it can slow down

the traffic itself.

➢ Two difficulties with this approach are how to identify the onset of congestion, and

how to inform the source that needs to slow down.

➢ To tackle the first issue, routers can monitor the average load, queueing delay, or

packet loss.

➢ In all cases, rising numbers indicate growing congestion.

➢ To tackle the second issue, routers must participate in a feedback loop with the

sources.

➢ For a scheme to work correctly, the time scale must be adjusted carefully.

➢ If every time two packets arrive in a row, a router yells STOP and every time a router

is idle for 20 μsec, it yells GO, the system will oscillate wildly and never converge.

➢ On the other hand, if it waits 30 minutes to make sure before saying anything, the

congestion-control mechanism will react too sluggishly to be of any use.

➢ Delivering timely feedback is a nontrivial matter.

➢ An added concern is having routers send more messages when the network is

already congested.

➢ Finally, when all else fails, the network is forced to discard packets that it cannot

deliver.

➢ The general name for this is load shedding.

➢ A good policy for choosing which packets to discard can help to prevent congestion

collapse.

Traffic-Aware Routing

➢ The first approach we will examine is traffic-aware routing.

➢ The routing schemes considered so far adapted to changes in topology, but not to changes in load.

➢ The goal in taking load into account when computing routes is to shift traffic away

from hotspots that will be the first places in the network to experience congestion.

➢ The most direct way to do this is to set the link weight to be a function of the (fixed)

link bandwidth and propagation delay plus the (variable) measured load or average

queuing delay.

➢ Least-weight paths will then favor paths that are more lightly loaded, all else being equal.
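As a rough sketch of this idea, the link weight below combines a fixed cost (propagation delay plus a per-bit transmission cost) with the variable measured queueing delay. The function name, units, and the sample numbers for links CF and EI are illustrative, not taken from the figure:

```python
def link_weight(prop_delay_ms, bandwidth_mbps, queue_delay_ms, load_factor=1.0):
    # Fixed part: propagation delay plus a small per-bit transmission cost.
    # Variable part: the measured queueing delay, scaled by load_factor.
    transmission_cost = 1.0 / bandwidth_mbps
    return prop_delay_ms + transmission_cost + load_factor * queue_delay_ms

# Hypothetical numbers for the two East-West links of Fig. 5-23:
cf = link_weight(prop_delay_ms=5.0, bandwidth_mbps=100.0, queue_delay_ms=40.0)
ei = link_weight(prop_delay_ms=8.0, bandwidth_mbps=100.0, queue_delay_ms=2.0)
```

With the load term included, the congested link CF gets the larger weight, so traffic shifts to EI; with load_factor set to 0, only the fixed costs are compared and the weights cannot flip back and forth as traffic moves.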

➢ Consider the network of Fig. 5-23, which is divided into two parts, East and West,

connected by two links, CF and EI.

➢ Suppose that most of the traffic between East and West is using link CF, and, as a

result, this link is heavily loaded with long delays.

➢ Including queueing delay in the weight used for the shortest path calculation will

make EI more attractive.

➢ After the new routing tables have been installed, most of the East-West traffic will

now go over EI, loading this link.

➢ Consequently, in the next update, CF will appear to be the shortest path.

➢ As a result, the routing tables may oscillate wildly, leading to erratic routing and

many potential problems.

➢ If load is ignored and only bandwidth and propagation delay are considered, this

problem does not occur.

➢ Attempts to include load but change weights within a narrow range only slow down

routing oscillations.

➢ Two techniques can contribute to a successful solution.

➢ The first is multipath routing, in which there can be multiple paths from a source to

a destination.

➢ In our example this means that the traffic can be spread across both of the East to

West links.

➢ The second one is for the routing scheme to shift traffic across routes slowly enough that it is able to converge, as in the scheme of Gallager (1977).

➢ Given these difficulties, in the Internet, routing protocols do not generally adjust their routes depending on the load.

➢ Instead, adjustments are made outside the routing protocol by slowly changing its

inputs.

➢ This is called traffic engineering.

Admission Control

➢ One technique that is widely used in virtual-circuit networks to keep congestion at

bay is admission control.

➢ The idea is simple: do not set up a new virtual circuit unless the network can carry

the added traffic without becoming congested.

➢ Thus, attempts to set up a virtual circuit may fail.

➢ This is better than the alternative, as letting more people in when the network is

busy just makes matters worse.

➢ By analogy, in the telephone system, when a switch gets overloaded it practices

admission control by not giving dial tones.

➢ The trick with this approach is working out when a new virtual circuit will lead to

congestion.

➢ The task is straightforward in the telephone network because of the fixed bandwidth

of calls (64 kbps for uncompressed audio).

➢ However, virtual circuits in computer networks come in all shapes and sizes.

➢ Thus, the circuit must come with some characterization of its traffic if we are to apply

admission control.

➢ Traffic is often described in terms of its rate and shape.

➢ The problem of how to describe it in a simple yet meaningful way is difficult because

traffic is typically bursty—the average rate is only half the story.

➢ For example, traffic that varies while browsing the Web is more difficult to handle

than a streaming movie with the same long-term throughput because the bursts of

Web traffic are more likely to congest routers in the network.

➢ A commonly used descriptor that captures this effect is the leaky bucket or token

bucket.

➢ A leaky bucket has two parameters that bound the average rate and the

instantaneous burst size of traffic.
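A minimal token-bucket sketch shows how these two parameters interact; the class and parameter names here are illustrative, not a standard API:

```python
class TokenBucket:
    """Bounds the long-term average to `rate` tokens/sec while
    permitting instantaneous bursts of up to `burst` tokens."""
    def __init__(self, rate, burst):
        self.rate = rate          # average rate bound (tokens per second)
        self.burst = burst        # maximum burst size (tokens)
        self.tokens = burst       # bucket starts full
        self.last = 0.0

    def allow(self, now, size=1):
        # Refill in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

# 100 packets/sec on average, bursts of at most 10 packets:
bucket = TokenBucket(rate=100.0, burst=10)
```

A burst of 10 packets at the same instant is admitted, the 11th is not, and after a short pause the bucket refills and traffic flows again.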

➢ Armed with traffic descriptions, the network can decide whether to admit the new

virtual circuit.

➢ One possibility is for the network to reserve enough capacity along the paths of each

of its virtual circuits that congestion will not occur.

➢ In this case, the traffic description is a service agreement for what the network will

guarantee its users.

➢ We have prevented congestion, but have veered into the related topic of quality of service a little too early.

➢ Even without making guarantees, the network can use traffic descriptions for

admission control.

➢ The task is then to estimate how many circuits will fit within the carrying capacity of

the network without congestion.

➢ Suppose that virtual circuits that may blast traffic at rates up to 10 Mbps all pass through the same 100-Mbps physical link.

➢ How many circuits should be admitted? Clearly, 10 circuits can be admitted without

risking congestion, but this is wasteful in the normal case since it may rarely happen

that all 10 are transmitting full blast at the same time.

➢ In real networks, measurements of past behavior that capture the statistics of

transmissions can be used to estimate the number of circuits to admit, to trade

better performance for acceptable risk.
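The trade-off can be made concrete with a small calculation; the 2-Mbps measured average below is a hypothetical figure, used only to illustrate the gain from statistical multiplexing:

```python
LINK_MBPS = 100.0
PEAK_MBPS = 10.0   # worst-case rate each virtual circuit may blast
MEAN_MBPS = 2.0    # hypothetical measured long-term average per circuit

# Worst-case provisioning: congestion is impossible, but capacity is wasted
# whenever circuits are not all transmitting full blast at once.
conservative = int(LINK_MBPS // PEAK_MBPS)

# Statistical multiplexing: admit based on measured averages instead,
# trading a small risk of overload for much better utilization.
statistical = int(LINK_MBPS // MEAN_MBPS)
```

Here the conservative rule admits 10 circuits while the statistical estimate admits 50, a five-fold improvement at some acceptable risk.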

➢ Admission control can also be combined with traffic-aware routing by considering

routes around traffic hotspots as part of the setup procedure.

➢ For example, consider the network illustrated in Fig. 5-24(a), in which two routers

are congested, as indicated

➢ Suppose that a host attached to router A wants to set up a connection to a host

attached to router B.

➢ Normally, this connection would pass through one of the congested routers.

➢ To avoid this situation, we can redraw the network as shown in Fig. 5-24(b), omitting

the congested routers and all of their lines.

➢ The dashed line shows a possible route for the virtual circuit that avoids the

congested routers.

Traffic Throttling

➢ In the Internet and many other computer networks, senders adjust their

transmissions to send as much traffic as the network can readily deliver.

➢ In this setting, the network aims to operate just before the onset of congestion.

➢ When congestion is imminent, it must tell the senders to throttle back their

transmissions and slow down.

➢ This feedback is business as usual rather than an exceptional situation.

➢ The term congestion avoidance is sometimes used to contrast this operating point

with the one in which the network has become (overly) congested.

➢ Let us now look at some approaches to throttling traffic that can be used in both

datagram networks and virtual-circuit networks.

➢ Each approach must solve two problems.

➢ First, routers must determine when congestion is approaching, ideally before it has

arrived.

➢ To do so, each router can continuously monitor the resources it is using.

➢ Three possibilities are the utilization of the output links, the buffering of queued

packets inside the router, and the number of packets that are lost due to insufficient

buffering.

➢ Of these possibilities, the second one is the most useful. Averages of utilization do

not directly account for the burstiness of most traffic—a utilization of 50% may be

low for smooth traffic and too high for highly variable traffic.

➢ Counts of packet losses come too late.

➢ Congestion has already set in by the time that packets are lost.

➢ The queueing delay inside routers directly captures any congestion experienced by

packets.

➢ It should be low most of the time, but will jump when there is a burst of traffic that generates a backlog.

➢ To maintain a good estimate of the queueing delay, d, a sample of the instantaneous queue length, s, can be made periodically and d updated according to

d_new = α·d_old + (1 − α)·s

➢ where the constant α determines how fast the router forgets recent history.

➢ This is called an EWMA (Exponentially Weighted Moving Average).

➢ It smoothes out fluctuations and is equivalent to a low-pass filter.

➢ Whenever d moves above the threshold, the router notes the onset of congestion.
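A short sketch of this detector follows; the threshold value and sample sequences are made up for illustration:

```python
def ewma_update(d, s, alpha=0.9):
    # d_new = alpha * d_old + (1 - alpha) * s : a low-pass filter on samples
    return alpha * d + (1 - alpha) * s

THRESHOLD = 5.0   # hypothetical queueing threshold (queue-length samples)

def detect(samples, alpha=0.9):
    d, congested = 0.0, False
    for s in samples:
        d = ewma_update(d, s, alpha)
        if d > THRESHOLD:
            congested = True   # router notes the onset of congestion
    return congested

sustained = [0, 1, 0, 2, 40, 45, 50, 48]   # quiet, then a sustained backlog
one_spike = [0, 40, 0, 0]                  # a single transient burst
```

The smoothing does exactly what the text promises: a sustained backlog pushes d over the threshold, while an isolated burst is filtered out and raises no alarm.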

➢ The second problem is that routers must deliver timely feedback to the senders that

are causing the congestion.

➢ Congestion is experienced in the network, but relieving congestion requires action

on behalf of the senders that are using the network.

➢ To deliver feedback, the router must identify the appropriate senders.

➢ It must then warn them carefully, without sending many more packets into the

already congested network.

Choke Packets

➢ The most direct way to notify a sender of congestion is to tell it directly.

➢ In this approach, the router selects a congested packet and sends a choke packet

back to the source host, giving it the destination found in the packet.

➢ The original packet may be tagged (a header bit is turned on) so that it will not

generate any more choke packets farther along the path and then forwarded in the

usual way.

➢ To avoid increasing load on the network during a time of congestion, the router may

only send choke packets at a low rate.

➢ When the source host gets the choke packet, it is required to reduce the traffic sent

to the specified destination, for example, by 50%.

➢ In a datagram network, simply picking packets at random when there is congestion

is likely to cause choke packets to be sent to fast senders, because they will have the

most packets in the queue.

➢ The feedback implicit in this protocol can help prevent congestion yet not throttle

any sender unless it causes trouble.

➢ For the same reason, it is likely that multiple choke packets will be sent to a given

host and destination.

➢ The host should ignore these additional chokes for a fixed time interval, until its reduction in traffic has taken effect.

➢ After that period, further choke packets indicate that the network is still congested.
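The sender-side behavior just described can be sketched as follows; the class name, the 50% cut, and the hold interval are illustrative choices, not a specified protocol:

```python
class ChokeAwareSender:
    """Halve the sending rate on a choke packet, then ignore further
    chokes until the reduction has had time to take effect."""
    def __init__(self, rate_mbps, hold_time=2.0):
        self.rate = rate_mbps
        self.hold_time = hold_time     # ignore-duplicates interval (seconds)
        self.ignore_until = -1.0

    def on_choke(self, now):
        if now < self.ignore_until:
            return                     # reduction already in flight; ignore
        self.rate *= 0.5               # e.g., a 50% cut, as the text suggests
        self.ignore_until = now + self.hold_time

sender = ChokeAwareSender(rate_mbps=8.0)
sender.on_choke(0.0)   # first choke: rate drops to 4.0 Mbps
sender.on_choke(0.5)   # duplicate within the hold interval: ignored
sender.on_choke(3.0)   # network still congested: rate drops to 2.0 Mbps
```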

➢ An early Internet example of this mechanism was the ICMP SOURCE QUENCH message. It never caught on, though, partly because the circumstances in which it was generated and the effect it had were not clearly specified.

Explicit Congestion Notification

➢ Instead of generating additional packets to warn of congestion, a router can tag any

packet it forwards (by setting a bit in the packet’s header) to signal that it is

experiencing congestion.

➢ When the network delivers the packet, the destination can note that there is

congestion and inform the sender when it sends a reply packet.

➢ The sender can then throttle its transmissions as before.

➢ This design is called ECN (Explicit Congestion Notification) and is used in the Internet.

➢ Two bits in the IP packet header are used to record whether the packet has

experienced congestion.

➢ Packets are unmarked when they are sent, as illustrated in Fig. 5-25.

➢ If any of the routers they pass through is congested, that router will then mark the

packet as having experienced congestion as it is forwarded.

➢ The destination will then echo any marks back to the sender as an explicit congestion

signal in its next reply packet.

➢ This is shown with a dashed line in the figure to indicate that it happens above the

IP level (e.g., in TCP).

➢ The sender must then throttle its transmissions, as in the case of choke packets.
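The marking and echo logic can be sketched in a few lines; the codepoint values for the two-bit field are those defined in RFC 3168, while the function names are illustrative:

```python
# ECN codepoints in the two-bit IP header field (RFC 3168):
NOT_ECT, ECT_1, ECT_0, CE = 0b00, 0b01, 0b10, 0b11

def router_forward(ecn_bits, congested):
    """A congested router marks an ECN-capable packet instead of dropping it."""
    if congested and ecn_bits in (ECT_0, ECT_1):
        return CE                # Congestion Experienced
    return ecn_bits              # non-ECN packets pass through unmarked

def receiver_echo(ecn_bits):
    """The destination echoes the mark back to the sender in its next
    reply packet (in TCP, via the ECE flag above the IP level)."""
    return ecn_bits == CE
```

Note that a packet not marked as ECN-capable (NOT_ECT) gains nothing from this path; a congested router can only drop it.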

Hop-by-Hop Backpressure

➢ At high speeds or over long distances, many new packets may be transmitted after

congestion has been signaled because of the delay before the signal takes effect.

➢ Consider, for example, a host in San Francisco (router A in Fig. 5-26) that is sending

traffic to a host in New York (router D in Fig. 5-26) at the OC-3 speed of 155 Mbps.

➢ If the New York host begins to run out of buffers, it will take about 40 msec for a

choke packet to get back to San Francisco to tell it to slow down.

➢ An ECN indication will take even longer because it is delivered via the destination.

➢ Choke packet propagation is illustrated as the second, third, and fourth steps in Fig. 5-26(a).

➢ In those 40 msec, another 6.2 megabits will have been sent. Even if the host in San Francisco completely shuts down immediately, the 6.2 megabits in the pipe will continue to pour in and have to be dealt with.

➢ Only in the seventh diagram in Fig. 5-26(a) will the New York router notice a slower flow.
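The 6.2-megabit figure is simply the bandwidth-delay product of the example:

```python
RATE_BPS = 155e6          # OC-3 line rate from the example
FEEDBACK_DELAY_S = 0.040  # time for the choke packet to reach San Francisco

# Bits transmitted before the feedback signal can take effect:
in_flight_bits = RATE_BPS * FEEDBACK_DELAY_S   # 6.2 megabits
```

The faster the link or the longer the path, the larger this product becomes, which is why end-to-end feedback alone reacts so slowly at high speeds.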

➢ An alternative approach is to have the choke packet take effect at every hop it passes

through, as shown in the sequence of Fig. 5-26(b).

➢ Here, as soon as the choke packet reaches F, F is required to reduce the flow to D.

➢ Doing so will require F to devote more buffers to the connection, since the source is

still sending away at full blast, but it gives D immediate relief, like a headache remedy

in a television commercial.

➢ In the next step, the choke packet reaches E, which tells E to reduce the flow to F.

This action puts a greater demand on E’s buffers but gives F immediate relief.

➢ Finally, the choke packet reaches A and the flow genuinely slows down.

➢ The net effect of this hop-by-hop scheme is to provide quick relief at the point of

congestion, at the price of using up more buffers upstream.

➢ In this way, congestion can be nipped in the bud without losing any packets.
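A toy simulation of the path A → E → F → D makes the trade-off visible; the rates, the per-step timing, and the 50% cut are all illustrative assumptions:

```python
FULL, HALF = 155.0, 77.5                 # Mbps, before and after throttling
out = {"A": FULL, "E": FULL, "F": FULL}  # each hop's output toward D
backlog = {"E": 0.0, "F": 0.0}           # queue growth at intermediate hops
arrives = {"F": 0, "E": 1, "A": 2}       # step at which the choke packet arrives

for step in range(3):
    for hop in ("F", "E", "A"):
        if arrives[hop] == step:
            out[hop] = HALF              # throttle as soon as the choke arrives
    # A hop's queue grows while its input (the upstream hop's output)
    # exceeds its own, already throttled, output.
    backlog["F"] += max(0.0, out["E"] - out["F"])
    backlog["E"] += max(0.0, out["A"] - out["E"])
```

D gets relief at step 0, as soon as F throttles, but F and then E each absorb a burst of excess traffic into their buffers while their upstream neighbors are still sending at full blast: quick relief downstream, paid for with buffer space upstream.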
