
UNIT 3

Congestion Control Algorithms


Congestion refers to the state when network nodes or links become overloaded with data,
leading to a degradation in performance and increased delays. Network congestion can occur
due to high demand, insufficient bandwidth, or inefficient routing.

When a network becomes congested, several issues may arise:

1. Increased Latency: Congestion can cause delays in the transmission of data packets,
resulting in increased latency or lag. This can impact real-time applications, such as
video conferencing or online gaming, where timely delivery of data is crucial.
2. Packet Loss: In congested networks, packets may be dropped or lost due to limited
buffer space or overwhelmed network devices. Packet loss can lead to data
retransmissions, affecting the overall efficiency and reliability of network
communication.
3. Reduced Throughput: Network congestion can decrease the available bandwidth for
each user or device, resulting in reduced data transfer rates. This can impact tasks that
require high data throughput, such as large file transfers or streaming media.
4. Unfair Resource Allocation: Congestion can lead to an unfair distribution of network
resources among users or applications. Certain connections or services may dominate
the available bandwidth, causing others to suffer from limited access and poor
performance.

1. General Principles of Congestion Control


Many problems in complex systems, such as computer networks, can be viewed from a control
theory point of view. This approach leads to dividing all solutions into two groups:
open loop and closed loop.

(i) Open-loop solutions attempt to solve the problem by good design, in essence, to make sure
it does not occur in the first place. Once the system is up and running, midcourse corrections
are not made. Tools for doing open-loop control include deciding when to accept new traffic,
deciding when to discard packets and which ones, and making scheduling decisions at various
points in the network. All of these have in common the fact that they make decisions without
regard to the current state of the network.
(ii) Closed-loop solutions are based on the concept of a feedback loop. This approach has three
parts when applied to congestion control:
1. Monitor the system to detect when and where congestion occurs.
2. Pass this information to places where action can be taken.
3. Adjust system operation to correct the problem.
1. A variety of metrics can be used to monitor the subnet for congestion. Chief among
these are the percentage of all packets discarded for lack of buffer space, the average
queue lengths, the number of packets that time out and are retransmitted, the average
packet delay, and the standard deviation of packet delay. In all cases, rising numbers
indicate growing congestion.
2. The second step in the feedback loop is to transfer the information about the congestion
from the point where it is detected to the point where something can be done about it.
The obvious way is for the router detecting the congestion to send a packet to the traffic
source or sources, announcing the problem. Of course, these extra packets increase the
load at precisely the moment that more load is not needed, namely, when the subnet is
congested.
3. Another way to transfer the information is to have hosts or routers periodically send probe packets out to
explicitly ask about congestion. This information can then be used to route traffic
around problem areas. Some radio stations have helicopters flying around their cities to
report on road congestion to make it possible for their mobile listeners to route their
packets (cars) around hot spots.

The open-loop algorithms are further divided into ones that act at the source versus ones that
act at the destination. The closed-loop algorithms are also divided into two subcategories:
explicit feedback versus implicit feedback.
• In explicit feedback algorithms, packets are sent back from the point of congestion to
warn the source.
• In implicit algorithms, the source deduces the existence of congestion by making local
observations, such as the time needed for acknowledgements to come back.

The presence of congestion means that the load is (temporarily) greater than the resources (in
part of the system) can handle. Two solutions come to mind: increase the resources or decrease
the load. For example, the subnet may start using dial-up telephone lines to temporarily
increase the bandwidth between certain points. On satellite systems, increasing transmission
power often gives higher bandwidth. Splitting traffic over multiple routes instead of always
using the best one may also effectively increase the bandwidth. Finally, spare routers that are
normally used only as backups (to make the system fault tolerant) can be put on-line to give
more capacity when serious congestion appears.
However, sometimes it is not possible to increase the capacity, or it has already been increased
to the limit. The only way then to beat back the congestion is to decrease the load. Several ways
exist to reduce the load, including denying service to some users, degrading service to some or
all users, and having users schedule their demands in a more predictable way.
2. Congestion Prevention Policies
Open-loop systems are designed to minimize congestion in the first place, rather than letting
it happen and reacting after the fact. They try to achieve this goal by applying appropriate
policies at the various layers.

i) At the data link layer the retransmission policy is concerned with how fast a sender times out
and what it transmits upon timeout. A jumpy sender that times out quickly and retransmits all
outstanding packets using go-back-n will put a heavier load on the system than will a leisurely
sender that uses selective repeat. Closely related to this is the buffering policy. If receivers
routinely discard all out-of-order packets, these packets will have to be transmitted again later,
creating extra load. With respect to congestion control, selective repeat is clearly better than
go-back-n. The acknowledgement policy also affects congestion. If each packet is acknowledged
immediately, the acknowledgement packets generate extra traffic. However, if
acknowledgements are saved up to piggyback onto reverse traffic, extra timeouts and
retransmissions may result. A tight flow control scheme (e.g., a small window) reduces the
data rate and thus helps fight congestion.

ii) At the network layer, the choice between using virtual circuits and using datagrams affects
congestion since many congestion control algorithms work only with virtual-circuit subnets.
Packet queueing and service policy relates to whether routers have one queue per input line,
one queue per output line, or both. It also relates to the order in which packets are processed.
Discard policy is the rule telling which packet is dropped when there is no space. A good
policy can help alleviate congestion and a bad one can make it worse. A good routing
algorithm can help avoid congestion by spreading the traffic over all the lines, whereas a bad
one can send too much traffic over already congested lines. Finally, packet lifetime
management deals with how long a packet may live before being discarded. If it is too long,
lost packets may clog up the works for a long time, but if it is too short, packets may
sometimes time out before reaching their destination, thus inducing retransmissions.

iii) At the transport layer, the same issues occur as in the data link layer, but in addition,
determining the timeout interval is harder because the transit time across the network is less
predictable than the transit time over a wire between two routers. If the timeout interval is too
short, extra packets will be sent unnecessarily. If it is too long, congestion will be reduced but
the response time will suffer whenever a packet is lost.

3. Congestion Control in Virtual-Circuit Subnets


One technique that is widely used to keep congestion that has already started from getting worse
is admission control. The idea is simple: once congestion has been signaled, no more virtual
circuits are set up until the problem has gone away. Thus, attempts to set up new transport layer
connections fail. Letting more people in just makes matters worse. While this approach is crude,
it is simple and easy to carry out. In the telephone system, when a switch gets overloaded, it also
practices admission control by not giving dial tones. An alternative approach is to allow new
virtual circuits but carefully route all new virtual circuits around problem areas.

Suppose that a host attached to router A wants to set up a connection to a host attached to router
B, and that part of the subnet between them is congested. Normally, this connection would pass
through one of the congested routers. To avoid this situation, we can redraw the subnet as shown
in Fig. 5-27(b), omitting the congested routers and all of their lines. The dashed line in that
figure shows a possible route for the virtual circuit that avoids the congested routers.
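As a toy illustration of the basic admission rule, consider the sketch below; the congestion flag and the circuit representation are placeholders, not part of any real signaling protocol.

```python
# Minimal sketch of admission control at virtual-circuit setup time:
# once congestion has been signaled, refuse all new circuits.

congestion_signaled = True   # set by whatever detection scheme is in use

def setup_virtual_circuit(destination):
    if congestion_signaled:
        raise ConnectionRefusedError(
            "no new virtual circuits while congestion persists")
    return {"dst": destination, "state": "established"}

try:
    setup_virtual_circuit("host-B")
except ConnectionRefusedError as err:
    print(err)   # transport-layer setup attempts fail until it clears
```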
Another strategy relating to virtual circuits is to negotiate an agreement between the host and
subnet when a virtual circuit is set up. This agreement normally specifies the volume and shape of
the traffic, quality of service required, and other parameters. To keep its part of the agreement, the
subnet will typically reserve resources along the path when the circuit is set up. These resources
can include table and buffer space in the routers and bandwidth on the lines. In this way, congestion
is unlikely to occur on the new virtual circuits because all the necessary resources are guaranteed
to be available.

This kind of reservation can be done all the time as standard operating procedure or only when the
subnet is congested. A disadvantage of doing it all the time is that it tends to waste resources. If
six virtual circuits that might use 1 Mbps all pass through the same physical 6-Mbps line, the line
has to be marked as full, even though it may rarely happen that all six virtual circuits are
transmitting full blast at the same time. Consequently, the price of the congestion control is unused
(i.e., wasted) bandwidth in the normal case.

4. Congestion Control in Datagram Subnets


Let us now turn to some approaches that can be used in datagram subnets (and also in virtual
circuit subnets). Each router can easily monitor the utilization of its output lines and other
resources. For example, it can associate with each line a real variable, u, whose value, between
0.0 and 1.0, reflects the recent utilization of that line. To maintain a good estimate of u, a
sample of the instantaneous line utilization, f (either 0 or 1), can be made periodically and u
updated according to

u_new = a * u_old + (1 - a) * f

where the constant a determines how fast the router forgets recent history.
Whenever u moves above the threshold, the output line enters a "warning" state. Each newly
arriving packet is checked to see if its output line is in warning state. If it is, some action is
taken. The action taken can be one of several alternatives.
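The update rule takes only a few lines of code. A minimal sketch follows; the threshold and the forgetting factor used in the demo are illustrative values, not prescribed ones.

```python
# Sketch of the line-utilization estimator described above. Each periodic
# sample f is 0 or 1 (line idle or busy during the sample); the constant a
# controls how fast old history is forgotten.

THRESHOLD = 0.8

def update_utilization(u, f, a):
    # u_new = a * u_old + (1 - a) * f
    return a * u + (1 - a) * f

u = 0.0
for f in (1, 1, 0, 1, 1, 1, 1, 1):
    u = update_utilization(u, f, a=0.5)
    state = "warning" if u > THRESHOLD else "ok"
    print(f"sample f={f} -> u={u:.3f} ({state})")
```

A larger a makes the estimate smoother but slower to react; a smaller a makes it jumpy but quick to notice a sustained burst.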
The Warning Bit
The old DECnet architecture signaled the warning state by setting a special bit in the packet's
header, and frame relay does the same. When the packet arrived at its destination, the transport entity
copied the bit into the next acknowledgement sent back to the source. The source then cut back
on traffic.
As long as the router was in the warning state, it continued to set the warning bit, which meant
that the source continued to get acknowledgements with it set. The source monitored the
fraction of acknowledgements with the bit set and adjusted its transmission rate accordingly.
As long as the warning bits continued to flow in, the source continued to decrease its
transmission rate. When they slowed to a trickle, it increased its transmission rate. Note that
since every router along the path could set the warning bit, traffic increased only when no router
was in trouble.
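The source-side reaction can be sketched as follows. The cut, growth, and trigger values are illustrative, not values prescribed by DECnet or frame relay.

```python
# Sketch of a source reacting to warning bits: watch the fraction of
# acknowledgements arriving with the bit set and nudge the rate down
# while it stays high, up once the bits slow to a trickle.

def adjust_rate(rate, acks_with_bit, total_acks,
                cut=0.75, grow=1.05, trigger=0.5):
    fraction = acks_with_bit / total_acks
    return rate * cut if fraction >= trigger else rate * grow

rate = 10.0  # Mbps, say
for warned, total in ((8, 10), (6, 10), (2, 10), (1, 10)):
    rate = adjust_rate(rate, warned, total)
    print(f"{warned}/{total} acks warned -> rate {rate:.2f} Mbps")
```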
Choke Packets
The previous congestion control algorithm is fairly subtle. It uses a roundabout means to tell
the source to slow down. Why not just tell it directly? In this approach, the router sends a choke
packet back to the source host, giving it the destination found in the packet. The original packet
is tagged (a header bit is turned on) so that it will not generate any more choke packets farther
along the path and is then forwarded in the usual way.
The purpose of choke packets is to provide feedback to the sender, notifying it of the
network congestion and signaling it to reduce its data transmission rate.
When the source host gets the choke packet, it is required to reduce the traffic sent to the
specified destination by X percent. Since other packets aimed at the same destination are
probably already under way and will generate yet more choke packets, the host should ignore
choke packets referring to that destination for a fixed time interval. After that period has
expired, the host listens for more choke packets for another interval. If one arrives, the line is
still congested, so the host reduces the flow still more and begins ignoring choke packets again.
If no choke packets arrive during the listening period, the host may increase the flow again.
The feedback implicit in this protocol can help prevent congestion yet not throttle any flow
unless trouble occurs.
Hosts can reduce traffic by adjusting their policy parameters, for example, their window size.
Typically, the first choke packet causes the data rate to be reduced to 0.50 of its previous rate,
the next one causes a reduction to 0.25, and so on. Increases are done in smaller increments to
prevent congestion from reoccurring quickly.
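This ignore-then-listen cycle for a single destination can be sketched as below; the interval length and adjustment factors are illustrative.

```python
# Sketch of a host's reaction to choke packets for one destination.
# After each rate cut the host ignores further choke packets for a fixed
# interval, then listens again.

import time

class ChokeResponder:
    IGNORE_SECS = 0.5    # ignore further choke packets after a cut
    CUT = 0.5            # each heeded choke halves the rate: 1.0 -> 0.5 -> 0.25

    def __init__(self, rate):
        self.rate = rate
        self.ignore_until = 0.0

    def on_choke_packet(self, now=None):
        now = time.monotonic() if now is None else now
        if now < self.ignore_until:
            return               # still inside the ignore interval: drop it
        self.rate *= self.CUT
        self.ignore_until = now + self.IGNORE_SECS

    def on_quiet_period(self):
        self.rate *= 1.1         # recover in smaller increments

r = ChokeResponder(rate=8.0)
r.on_choke_packet(now=0.0)       # rate -> 4.0
r.on_choke_packet(now=0.1)       # ignored (within the 0.5 s window)
r.on_choke_packet(now=0.7)       # rate -> 2.0
print(r.rate)                    # 2.0
```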

Hop-by-Hop Choke Packets


At high speeds or over long distances, sending a choke packet to the source hosts does not work
well because the reaction is so slow. Consider, for example, a host in San Francisco (router A
in Fig. 5-28) that is sending traffic to a host in New York (router D in Fig. 5-28) at 155 Mbps.
If the New York host begins to run out of buffers, it will take about 30 msec for a choke packet
to get back to San Francisco to tell it to slow down. The choke packet propagation is shown as
the second, third, and fourth steps in Fig. 5-28(a). In those 30 msec, another 4.6 megabits will
have been sent. Even if the host in San Francisco completely shuts down immediately, the 4.6
megabits in the pipe will continue to pour in and have to be dealt with. Only in the seventh
diagram in Fig. 5-28(a) will the New York router notice a slower flow.
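The 4.6-megabit figure is just the bandwidth-delay product of the path, as this quick check shows:

```python
# In-flight data during the feedback delay: bandwidth-delay product.

rate_bps = 155e6          # 155 Mbps sending rate
feedback_delay_s = 0.030  # 30 msec for the choke packet to reach the source

in_flight_bits = rate_bps * feedback_delay_s
print(f"{in_flight_bits / 1e6:.2f} megabits in flight")  # ~4.65
```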
An alternative approach is to have the choke packet take effect at every hop it passes through,
as shown in the sequence of Fig. 5-28(b). Here, as soon as the choke packet reaches F, F is
required to reduce the flow to D. Doing so will require F to devote more buffers to the flow,
since the source is still sending away at full blast, but it gives D immediate relief. In the next
step, the choke packet reaches E, which tells E to reduce the flow to F. This action puts a greater
demand on E's buffers but gives F immediate relief. Finally, the choke packet reaches A and
the flow genuinely slows down. The net effect of this hop-by-hop scheme is to provide quick
relief at the point of congestion at the price of using up more buffers upstream. In this way,
congestion can be nipped in the bud without losing any packets.
5. Load Shedding, Jitter Control
Load Shedding
When none of the above methods make the congestion disappear, routers can bring out the
heavy artillery: load shedding. Load shedding is a fancy way of saying that when routers are
being flooded by packets that they cannot handle, they just throw them away.
A router drowning in packets can just pick packets at random to drop, but usually it can do
better than that. Which packet to discard may depend on the applications running. For file
transfer, an old packet is worth more than a new one because dropping packet 6 and keeping
packets 7 through 10 will cause a gap at the receiver that may force packets 6 through 10 to be
retransmitted (if the receiver routinely discards out-of-order packets). In a 12-packet file,
dropping 6 may require 7 through 12 to be retransmitted, whereas dropping 10 may require
only 10 through 12 to be retransmitted. In contrast, for multimedia, a new packet is more
important than an old one. The former policy (old is better than new) is often called wine and
the latter (new is better than old) is often called milk.
A step above this in intelligence requires cooperation from the senders. For many applications,
some packets are more important than others. For example, certain algorithms for compressing
video periodically transmit an entire frame and then send subsequent frames as differences
from the last full frame. In this case, dropping a packet that is part of a difference is preferable
to dropping one that is part of a full frame. As another example, consider transmitting a
document containing ASCII text and pictures. Losing a line of pixels in some image is far less
damaging than losing a line of readable text.
To implement an intelligent discard policy, applications must mark their packets in priority
classes to indicate how important they are. If they do this, then when packets have to be
discarded, routers can first drop packets from the lowest class, then the next lowest class, and
so on. Of course, unless there is some significant incentive to mark packets as anything other
than VERY IMPORTANT— NEVER, EVER DISCARD, nobody will do it.
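A minimal sketch of class-based discard, assuming each packet carries a numeric priority (higher meaning more important):

```python
# Priority-aware load shedding: when the buffer overflows, discard from
# the lowest priority class upward. The (priority, payload) tuples are an
# illustrative packet representation.

def shed_load(queue, capacity):
    """Drop lowest-priority packets until the queue fits in `capacity`."""
    while len(queue) > capacity:
        victim = min(queue, key=lambda pkt: pkt[0])  # lowest class first
        queue.remove(victim)
    return queue

q = [(2, "video-diff"), (3, "video-full-frame"), (1, "bulk"), (3, "text")]
print(shed_load(q, capacity=2))   # keeps the two class-3 packets
```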
The incentive might be in the form of money, with the low-priority packets being cheaper to
send than the high-priority ones. Alternatively, senders might be allowed to send high-priority
packets under conditions of light load, but as the load increased they would be discarded, thus
encouraging the users to stop sending them.
Another option is to allow hosts to exceed the limits specified in the agreement negotiated
when the virtual circuit was set up (e.g., use a higher bandwidth than allowed), but subject to
the condition that all excess traffic be marked as low priority. Such a strategy is actually not a
bad idea, because it makes more efficient use of idle resources, allowing hosts to use them as
long as nobody else is interested, but without establishing a right to them when times get tough.
Random Early Detection
It is well known that dealing with congestion when it is first detected is more effective than
letting it gum up the works and then trying to deal with it. This observation leads to the idea of
discarding packets before all the buffer space is really exhausted. A popular algorithm for doing
this is called RED (Random Early Detection). In some transport protocols (including TCP), the
response to lost packets is for the source to slow down. The reasoning behind this logic is that
TCP was designed for wired networks and wired networks are very reliable, so lost packets are
mostly due to buffer overruns rather than transmission errors. This fact can be exploited to help
reduce congestion.
By having routers drop packets before the situation has become hopeless (hence the "early" in
the name), the idea is that there is time for action to be taken before it is too late. To determine
when to start discarding, routers maintain a running average of their queue lengths.
When the average queue length on some line exceeds a threshold, the line is said to be
congested and action is taken. Since the router probably cannot tell which source is causing
most of the trouble, picking a packet at random from the queue that triggered the action is
probably as good as it can do.
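A minimal RED-flavored sketch follows. Real RED uses two thresholds with a drop probability that ramps up between them, so the single threshold and fixed probability here are deliberate simplifications; all parameter values are illustrative.

```python
import random

# Keep an exponentially weighted average of the queue length and start
# dropping arrivals at random once it passes a threshold.

class RedQueue:
    def __init__(self, threshold=5.0, drop_prob=0.2, weight=0.2):
        self.avg = 0.0
        self.queue = []
        self.threshold = threshold
        self.drop_prob = drop_prob
        self.weight = weight

    def enqueue(self, packet):
        # update the running average of the instantaneous queue length
        self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)
        if self.avg > self.threshold and random.random() < self.drop_prob:
            return False   # drop an arrival before the buffer actually fills
        self.queue.append(packet)
        return True

q = RedQueue()
accepted = sum(q.enqueue(i) for i in range(100))
print(f"accepted {accepted} of 100 packets")
```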
How should the router tell the source about the problem? One way is to send it a choke packet.
A problem with that approach is that it puts even more load on the already congested network.
A different strategy is to just discard the selected packet and not report it. The source will
eventually notice the lack of acknowledgement and take action. Since it knows that lost packets
are generally caused by congestion and discards, it will respond by slowing down instead of
trying harder. This implicit form of feedback only works when sources respond to lost packets
by slowing down their transmission rate. In wireless networks, where most losses are due to
noise on the air link, this approach cannot be used.

Jitter Control
For applications such as audio and video streaming, it does not matter much if the packets take
20 msec or 30 msec to be delivered, as long as the transit time is constant. The variation (i.e.,
standard deviation) in the packet arrival times is called jitter. High jitter, for example, some
packets taking 20 msec and others taking 30 msec to arrive, will give an uneven quality to the
sound or movie. Jitter is illustrated in Fig. 5-29. In contrast, an agreement that 99 percent
of the packets be delivered with a delay in the range of 24.5 msec to 25.5 msec might be
acceptable.
The range chosen must be feasible, of course. It must take into account the speed-of-light transit
time and the minimum delay through the routers and perhaps leave a little slack for some
inevitable delays.
The jitter can be bounded by computing the expected transit time for each hop along the path.
When a packet arrives at a router, the router checks to see how much the packet is behind or
ahead of its schedule. This information is stored in the packet and updated at each hop. If the
packet is ahead of schedule, it is held just long enough to get it back on schedule. If it is behind
schedule, the router tries to get it out the door quickly.
In fact, the algorithm for determining which of several packets competing for an output line
should go next can always choose the packet furthest behind in its schedule. In this way, packets
that are ahead of schedule get slowed down and packets that are behind schedule get speeded
up, in both cases reducing the amount of jitter.
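The scheduling rule itself fits in a few lines, assuming each packet records how its actual transit time so far compares with its expected schedule; the tuple layout is illustrative.

```python
# Jitter-reducing scheduling: of the packets competing for an output
# line, always send the one furthest behind its schedule.

def next_to_send(waiting):
    """waiting: list of (name, expected_ms, actual_ms) tuples."""
    # lateness = actual - expected; most-late packet goes first
    return max(waiting, key=lambda p: p[2] - p[1])

packets = [("a", 25.0, 24.0),   # 1 ms ahead of schedule
           ("b", 25.0, 27.5),   # 2.5 ms behind
           ("c", 25.0, 25.4)]   # 0.4 ms behind
print(next_to_send(packets))    # ('b', 25.0, 27.5)
```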
In some applications, such as video on demand, jitter can be eliminated by buffering at the
receiver and then fetching data for display from the buffer instead of from the network in real
time. However, for other applications, especially those that require real-time interaction
between people such as Internet telephony and videoconferencing, the delay inherent in
buffering is not acceptable.

6. Quality Of Service: Requirements


The techniques we looked at in the previous sections are designed to reduce congestion and
improve network performance. However, with the growth of multimedia networking, often
these ad hoc measures are not enough. Serious attempts at guaranteeing quality of service
through network and protocol design are needed. A stream of packets from a source to a
destination is called a flow. In a connection-oriented network, all the packets belonging to a
flow follow the same route; in a connectionless network, they may follow different routes. The
needs of each flow can be characterized by four primary parameters: reliability, delay, jitter,
and bandwidth. Together these determine the QoS (Quality of Service) the flow requires.
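These four parameters might be captured in a flow specification record along the following lines; the field names and units are illustrative, not drawn from any standard.

```python
from dataclasses import dataclass

# The four primary flow parameters as a simple record. A real QoS
# architecture would carry something like this during flow setup.

@dataclass
class FlowRequirements:
    reliability: float   # tolerable loss rate, e.g. 0.0 for file transfer
    delay_ms: float      # maximum acceptable one-way delay
    jitter_ms: float     # maximum acceptable delay variation
    bandwidth_kbps: int  # sustained rate the flow needs

videoconference = FlowRequirements(reliability=0.01, delay_ms=100,
                                   jitter_ms=10, bandwidth_kbps=512)
print(videoconference)
```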

You might also like