Unit 3
1. Increased Latency: Congestion can cause delays in the transmission of data packets,
resulting in increased latency or lag. This can impact real-time applications, such as
video conferencing or online gaming, where timely delivery of data is crucial.
2. Packet Loss: In congested networks, packets may be dropped or lost due to limited
buffer space or overwhelmed network devices. Packet loss can lead to data
retransmissions, affecting the overall efficiency and reliability of network
communication.
3. Reduced Throughput: Network congestion can decrease the available bandwidth for
each user or device, resulting in reduced data transfer rates. This can impact tasks that
require high data throughput, such as large file transfers or streaming media.
4. Unfair Resource Allocation: Congestion can lead to an unfair distribution of network
resources among users or applications. Certain connections or services may dominate
the available bandwidth, causing others to suffer from limited access and poor
performance.
(i) Open loop solutions attempt to solve the problem by good design, in essence, to make sure
it does not occur in the first place. Once the system is up and running, midcourse corrections
are not made. Tools for doing open-loop control include deciding when to accept new traffic,
deciding when to discard packets and which ones, and making scheduling decisions at various
points in the network. All of these have in common the fact that they make decisions without
regard to the current state of the network.
(ii) Closed loop solutions are based on the concept of a feedback loop. This approach has three
parts when applied to congestion control:
1. Monitor the system to detect when and where congestion occurs.
2. Pass this information to places where action can be taken.
3. Adjust system operation to correct the problem.
1. A variety of metrics can be used to monitor the subnet for congestion. Chief among
these are the percentage of all packets discarded for lack of buffer space, the average
queue lengths, the number of packets that time out and are retransmitted, the average
packet delay, and the standard deviation of packet delay. In all cases, rising numbers
indicate growing congestion. (A short sketch of computing these metrics appears after
this list.)
2. The second step in the feedback loop is to transfer the information about the congestion
from the point where it is detected to the point where something can be done about it.
The obvious way is for the router detecting the congestion to send a packet to the traffic
source or sources, announcing the problem. Of course, these extra packets increase the
load at precisely the moment that more load is not needed, namely, when the subnet is
congested.
3. Still another approach is to have hosts or routers periodically send probe packets out to
explicitly ask about congestion. This information can then be used to route traffic
around problem areas. Some radio stations have helicopters flying around their cities to
report on road congestion to make it possible for their mobile listeners to route their
packets (cars) around hot spots.
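As a concrete illustration of the monitoring step (item 1 above), the following Python sketch computes the listed metrics from hypothetical per-interval counters; the function and parameter names are illustrative only, not part of any real router implementation.

import statistics

def congestion_metrics(discarded, forwarded, queue_samples, delays_ms):
    # Returns the monitoring metrics for one measurement interval.
    total = discarded + forwarded
    loss_ratio = discarded / total if total else 0.0     # packets dropped for lack of buffers
    avg_queue = sum(queue_samples) / len(queue_samples)  # average queue length
    mean_delay = statistics.mean(delays_ms)              # average packet delay
    delay_stdev = statistics.pstdev(delays_ms)           # standard deviation of delay
    return loss_ratio, avg_queue, mean_delay, delay_stdev

# Rising values over successive intervals indicate growing congestion.
print(congestion_metrics(discarded=12, forwarded=988,
                         queue_samples=[3, 8, 15, 22],
                         delays_ms=[20.1, 23.4, 31.7, 45.0]))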
The open loop algorithms are further divided into ones that act at the source versus ones that
act at the destination. The closed loop algorithms are also divided into two subcategories:
explicit feedback versus implicit feedback.
• In explicit feedback algorithms, packets are sent back from the point of congestion to
warn the source.
• In implicit algorithms, the source deduces the existence of congestion by making local
observations, such as the time needed for acknowledgements to come back.
The presence of congestion means that the load is (temporarily) greater than the resources (in
part of the system) can handle. Two solutions come to mind: increase the resources or decrease
the load. For example, the subnet may start using dial-up telephone lines to temporarily
increase the bandwidth between certain points. On satellite systems, increasing transmission
power often gives higher bandwidth. Splitting traffic over multiple routes instead of always
using the best one may also effectively increase the bandwidth. Finally, spare routers that are
normally used only as backups (to make the system fault tolerant) can be put on-line to give
more capacity when serious congestion appears.
However, sometimes it is not possible to increase the capacity, or it has already been increased
to the limit. The only way then to beat back the congestion is to decrease the load. Several ways
exist to reduce the load, including denying service to some users, degrading service to some or
all users, and having users schedule their demands in a more predictable way.
2. Congestion Prevention Policies
The open loop systems are designed to minimize congestion in the first place, rather than letting
it happen and reacting after the fact. They try to achieve their goal by using appropriate policies
at various levels.
i) At the data link layer, the retransmission policy is concerned with how fast a sender times out
and what it transmits upon timeout. A jumpy sender that times out quickly and retransmits all
outstanding packets using go back n will put a heavier load on the system than will a leisurely
sender that uses selective repeat. Closely related to this is the buffering policy. If receivers
routinely discard all out-of-order packets, these packets will have to be transmitted again later,
creating extra load. With respect to congestion control, selective repeat is clearly better than
go back n. Acknowledgement policy also affects congestion. If each packet is acknowledged
immediately, the acknowledgement packets generate extra traffic. However, if
acknowledgements are saved up to piggyback onto reverse traffic, extra timeouts and
retransmissions may result. A tight flow control scheme (e.g., a small window) reduces the
data rate and thus helps fight congestion.
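To make the retransmission-policy point concrete, here is a small illustrative Python sketch (with hypothetical window contents) of how much extra load each policy injects after a single timeout: go back n resends every outstanding packet from the lost one onward, while selective repeat resends only the packet that was actually lost.

def go_back_n_retransmit(outstanding, first_lost):
    # Go back n resends the lost packet and everything sent after it.
    return outstanding[first_lost:]

def selective_repeat_retransmit(outstanding, first_lost):
    # Selective repeat resends only the packet that was actually lost.
    return [outstanding[first_lost]]

window = [101, 102, 103, 104, 105, 106, 107, 108]     # outstanding sequence numbers
print(len(go_back_n_retransmit(window, 2)))           # 6 packets re-injected
print(len(selective_repeat_retransmit(window, 2)))    # 1 packet re-injected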
ii) At the network layer, the choice between using virtual circuits and using datagrams affects
congestion since many congestion control algorithms work only with virtual-circuit subnets.
Packet queueing and service policy relates to whether routers have one queue per input line,
one queue per output line, or both. It also relates to the order in which packets are processed.
Discard policy is the rule telling which packet is dropped when there is no space. A good
policy can help alleviate congestion and a bad one can make it worse. A good routing
algorithm can help avoid congestion by spreading the traffic over all the lines, whereas a bad
one can send too much traffic over already congested lines. Finally, packet lifetime
management deals with how long a packet may live before being discarded. If it is too long,
lost packets may clog up the works for a long time, but if it is too short, packets may
sometimes time out before reaching their destination, thus inducing retransmissions.
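The following minimal Python sketch illustrates a discard policy (simple tail drop) combined with packet lifetime management (a hop-count/TTL field); the field names and buffer size are assumptions for illustration only.

from collections import deque

MAX_QUEUE = 64     # assumed buffer size, in packets

def enqueue(queue, packet):
    packet["ttl"] -= 1                    # packet lifetime management
    if packet["ttl"] <= 0:
        return False                      # lifetime expired: discard
    if len(queue) >= MAX_QUEUE:
        return False                      # no buffer space left: tail drop
    queue.append(packet)
    return True

q = deque()
print(enqueue(q, {"dst": "B", "ttl": 5}))    # True: queued for forwarding
print(enqueue(q, {"dst": "B", "ttl": 1}))    # False: lifetime expired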
iii) In the transport layer, the same issues occur as in the data link layer, but in addition,
determining the timeout interval is harder because the transit time across the network is less
predictable than the transit time over a wire between two routers. If the timeout interval is too
short, extra packets will be sent unnecessarily. If it is too long, congestion will be reduced but
the response time will suffer whenever a packet is lost.
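One widely used way to determine the timeout interval is the adaptive scheme TCP uses (Jacobson's algorithm): keep a smoothed round-trip-time estimate and a smoothed deviation, and set the timeout well above the expected round-trip time. The Python sketch below uses the usual TCP constants; the variable names and initial values are illustrative.

ALPHA, BETA = 1 / 8, 1 / 4               # the usual TCP smoothing constants

def update_rto(srtt, rttvar, sample_rtt):
    # Update the deviation first (using the old smoothed RTT), then the smoothed RTT.
    rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - sample_rtt)
    srtt = (1 - ALPHA) * srtt + ALPHA * sample_rtt
    rto = srtt + 4 * rttvar               # retransmission timeout interval
    return srtt, rttvar, rto

srtt, rttvar = 100.0, 25.0                # illustrative initial estimates, in msec
for sample in (90.0, 110.0, 300.0):       # a sudden RTT spike hints at congestion
    srtt, rttvar, rto = update_rto(srtt, rttvar, sample)
    print(round(srtt, 1), round(rto, 1))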
Suppose that part of the subnet is congested, and a host attached to router A wants to set up a
connection to a host attached to router B. Normally, this connection would pass through one of
the congested routers. To avoid this situation, we can redraw the subnet as shown in Fig. 5-27(b),
omitting the congested routers and all of their lines. The dashed line shows a possible route for
the virtual circuit that avoids the congested routers.
Another strategy relating to virtual circuits is to negotiate an agreement between the host and
subnet when a virtual circuit is set up. This agreement normally specifies the volume and shape of
the traffic, quality of service required, and other parameters. To keep its part of the agreement, the
subnet will typically reserve resources along the path when the circuit is set up. These resources
can include table and buffer space in the routers and bandwidth on the lines. In this way, congestion
is unlikely to occur on the new virtual circuits because all the necessary resources are guaranteed
to be available.
This kind of reservation can be done all the time as standard operating procedure or only when the
subnet is congested. A disadvantage of doing it all the time is that it tends to waste resources. If
six virtual circuits that might use 1 Mbps all pass through the same physical 6-Mbps line, the line
has to be marked as full, even though it may rarely happen that all six virtual circuits are
transmitting full blast at the same time. Consequently, the price of the congestion control is unused
(i.e., wasted) bandwidth in the normal case.
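The sketch below illustrates the reservation idea using the 6-Mbps example above: a hypothetical admission check that accepts a new virtual circuit only if every line on its path still has the requested bandwidth to spare. The data structures and names are assumptions, not any particular router's implementation.

def admit_circuit(path, reserved, capacity, request_bps):
    # Check every line first so that a refused circuit leaves no partial reservation.
    if any(reserved[line] + request_bps > capacity[line] for line in path):
        return False
    for line in path:
        reserved[line] += request_bps
    return True

capacity = {"A-B": 6_000_000}             # one 6-Mbps physical line
reserved = {"A-B": 0}
for i in range(7):                        # the seventh 1-Mbps circuit is refused
    print(i + 1, admit_circuit(["A-B"], reserved, capacity, 1_000_000))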
Each router can monitor the utilization u of each of its output lines as a weighted average of the
instantaneous line utilization f (which is either 0 or 1): u_new = a * u_old + (1 - a) * f, where the
constant a determines how fast the router forgets recent history.
Whenever u moves above a threshold, the output line enters a "warning" state. Each newly
arriving packet is checked to see if its output line is in the warning state. If it is, some action is
taken. The action taken can be one of several alternatives.
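A minimal Python sketch of this utilization estimate, assuming illustrative values for the constant a and the warning threshold:

A = 0.8            # forgetting factor 'a'; assumed value for illustration
THRESHOLD = 0.8    # warning threshold; assumed value

def update_utilization(u, f):
    # f is the instantaneous line utilization for the last sample period (0 or 1).
    return A * u + (1 - A) * f

u = 0.0
for f in (0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1):    # the line becomes busy
    u = update_utilization(u, f)
    warning = u > THRESHOLD
    print(round(u, 2), warning)                   # warning turns True as u climbs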
The Warning Bit
The old DECNET architecture signaled the warning state by setting a special bit in the packet's
header (as does frame relay). When the packet arrived at its destination, the transport entity
copied the bit into the next acknowledgement sent back to the source. The source then cut back
on traffic.
As long as the router was in the warning state, it continued to set the warning bit, which meant
that the source continued to get acknowledgements with it set. The source monitored the
fraction of acknowledgements with the bit set and adjusted its transmission rate accordingly.
As long as the warning bits continued to flow in, the source continued to decrease its
transmission rate. When they slowed to a trickle, it increased its transmission rate. Note that
since every router along the path could set the warning bit, traffic increased only when no router
was in trouble.
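Here is a small sketch of how a source might adjust its transmission rate based on the fraction of acknowledgements carrying the warning bit; the cut factor, increase step, and trigger fraction are assumed policy values, not part of DECNET or frame relay.

def adjust_rate(rate_bps, acks_with_bit, total_acks,
                cut_factor=0.75, step_bps=10_000, trigger=0.5):
    # Assumed policy: multiplicative decrease while warnings are common,
    # small additive increase once they slow to a trickle.
    fraction = acks_with_bit / total_acks if total_acks else 0.0
    if fraction >= trigger:
        return rate_bps * cut_factor     # congestion signalled: cut back
    return rate_bps + step_bps           # warnings rare: probe upward

rate = 1_000_000
rate = adjust_rate(rate, acks_with_bit=8, total_acks=10)   # many warnings set
print(int(rate))                                           # 750000
rate = adjust_rate(rate, acks_with_bit=0, total_acks=10)   # none set
print(int(rate))                                           # 760000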
Choke Packets
The previous congestion control algorithm is fairly subtle. It uses a roundabout means to tell
the source to slow down. Why not just tell it directly? In this approach, the router sends a choke
packet back to the source host, giving it the destination found in the packet. The original packet
is tagged (a header bit is turned on) so that it will not generate any more choke packets farther
along the path and is then forwarded in the usual way.
The purpose of choke packets is to provide feedback to the sender, notifying it of the
network congestion and signalling it to reduce its data transmission rate.
When the source host gets the choke packet, it is required to reduce the traffic sent to the
specified destination by X percent. Since other packets aimed at the same destination are
probably already under way and will generate yet more choke packets, the host should ignore
choke packets referring to that destination for a fixed time interval. After that period has
expired, the host listens for more choke packets for another interval. If one arrives, the line is
still congested, so the host reduces the flow still more and begins ignoring choke packets again.
If no choke packets arrive during the listening period, the host may increase the flow again.
The feedback implicit in this protocol can help prevent congestion yet not throttle any flow
unless trouble occurs.
Hosts can reduce traffic by adjusting their policy parameters, for example, their window size.
Typically, the first choke packet causes the data rate to be reduced to 0.50 of its previous rate,
the next one causes a reduction to 0.25, and so on. Increases are done in smaller increments to
prevent congestion from reoccurring quickly.
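The following sketch models a source's choke-packet handling as described above: halve the rate when a choke packet arrives, ignore further choke packets for a fixed interval, and increase slowly when none are heard. The interval length and increase step are assumptions, and the listening period is simplified to a single method call.

import time

class ChokeThrottle:
    # Assumed policy values: ignore choke packets for ignore_s seconds after a
    # reduction, and add step_bps when a listening period passes quietly.
    def __init__(self, rate_bps, ignore_s=2.0, step_bps=50_000):
        self.rate = rate_bps
        self.ignore_s = ignore_s
        self.step = step_bps
        self.ignore_until = 0.0

    def on_choke_packet(self, now):
        if now < self.ignore_until:
            return                        # still in the ignore interval: drop it
        self.rate *= 0.5                  # first choke -> 0.50, next -> 0.25, ...
        self.ignore_until = now + self.ignore_s

    def on_quiet_listening_period(self):
        self.rate += self.step            # no choke packets heard: increase slowly

t = time.time()
src = ChokeThrottle(rate_bps=1_000_000)
src.on_choke_packet(t)            # rate drops to 500 kbps
src.on_choke_packet(t + 0.5)      # ignored: inside the ignore interval
src.on_choke_packet(t + 5.0)      # rate drops to 250 kbps
src.on_quiet_listening_period()   # congestion eases: rate creeps back up
print(int(src.rate))              # 300000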
Jitter Control
For applications such as audio and video streaming, it does not matter much if the packets take
20 msec or 30 msec to be delivered, as long as the transit time is constant. The variation (i.e.,
standard deviation) in the packet arrival times is called jitter. High jitter, for example, having
some packets taking 20 msec and others taking 30 msec to arrive, will give an uneven quality
to the sound or movie. Jitter is illustrated in Fig. 5-29. In contrast, an agreement that 99 percent
of the packets be delivered with a delay in the range of 24.5 msec to 25.5 msec might be
acceptable.
The range chosen must be feasible, of course. It must take into account the speed-of-light transit
time and the minimum delay through the routers and perhaps leave a little slack for some
inevitable delays.
The jitter can be bounded by computing the expected transit time for each hop along the path.
When a packet arrives at a router, the router checks to see how much the packet is behind or
ahead of its schedule. This information is stored in the packet and updated at each hop. If the
packet is ahead of schedule, it is held just long enough to get it back on schedule. If it is behind
schedule, the router tries to get it out the door quickly.
In fact, the algorithm for determining which of several packets competing for an output line
should go next can always choose the packet furthest behind in its schedule. In this way, packets
that are ahead of schedule get slowed down and packets that are behind schedule get speeded
up, in both cases reducing the amount of jitter.
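A small sketch of this scheduling rule, with illustrative packet fields: among the packets queued for an output line, the one furthest behind its schedule is chosen first, while packets ahead of schedule have negative lateness and so wait.

def lateness(packet):
    # Positive means behind schedule, negative means ahead of schedule.
    return packet["elapsed_ms"] - packet["expected_ms"]

def next_to_send(queue):
    # Always send the packet that is furthest behind its schedule.
    return max(queue, key=lateness)

queue = [
    {"id": 1, "expected_ms": 25.0, "elapsed_ms": 31.0},    # 6.0 ms late
    {"id": 2, "expected_ms": 25.0, "elapsed_ms": 22.0},    # 3.0 ms early
    {"id": 3, "expected_ms": 25.0, "elapsed_ms": 27.5},    # 2.5 ms late
]
print(next_to_send(queue)["id"])    # packet 1 goes first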
In some applications, such as video on demand, jitter can be eliminated by buffering at the
receiver and then fetching data for display from the buffer instead of from the network in real
time. However, for other applications, especially those that require real-time interaction
between people such as Internet telephony and videoconferencing, the delay inherent in
buffering is not acceptable.
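For the buffering approach just mentioned, here is a minimal sketch of a receiver-side playout buffer: each packet is held until a fixed playout time, so variable network delay is hidden from the application as long as no packet arrives after its playout point. The playout delay value and field names are assumptions for illustration.

PLAYOUT_DELAY_MS = 100.0    # assumed receiver buffering delay

def playout_time(send_time_ms):
    return send_time_ms + PLAYOUT_DELAY_MS

def hold_time(send_time_ms, arrival_time_ms):
    # How long the receiver keeps the packet in its buffer before playing it;
    # a negative result would mean the packet arrived too late to be played on time.
    return playout_time(send_time_ms) - arrival_time_ms

print(hold_time(send_time_ms=0.0, arrival_time_ms=40.0))    # held for 60 ms
print(hold_time(send_time_ms=20.0, arrival_time_ms=75.0))   # held for 45 ms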