CN 4th Unit
The document discusses congestion control algorithms, outlining general principles, prevention policies, and various approaches to managing network congestion. Key strategies include network provisioning, traffic-aware routing, admission control, and traffic throttling, with specific algorithms like the leaky bucket and token bucket for traffic regulation. It emphasizes the importance of monitoring, feedback mechanisms, and load shedding to maintain network performance amidst congestion.
Congestion Control Algorithms
= General Principles of Congestion Control
= Congestion Prevention Policies
= Approaches to Congestion Control
  * Network Provisioning
  * Traffic-Aware Routing
  * Admission Control
  * Traffic Throttling
  * Load Shedding
= Traffic Control Algorithms: Leaky Bucket & Token Bucket

Congestion
= Too many packets present in (a part of) the network causes packet delay and loss that degrades performance. This situation is called congestion.
* The network and transport layers share the responsibility for handling congestion. Since congestion occurs within the network, it is the network layer that directly experiences it and must ultimately determine what to do with the excess packets.
Figure: packets delivered versus packets sent, showing the desirable region, the onset of congestion, and the carrying capacity of the subnet.

= When too much traffic is offered, congestion sets in and performance degrades sharply.

General Principles of Congestion Control
A. Monitor the system to detect when and where congestion occurs.
B. Pass information to where action can be taken.
C. Adjust system operation to correct the problem.

Congestion Prevention Policies
Policies that affect congestion, by layer:

Transport layer
  * Retransmission policy
  * Out-of-order caching policy
  * Acknowledgement policy
  * Flow control policy
  * Timeout determination
Network layer
  * Virtual circuits versus datagrams inside the subnet
  * Packet queueing and service policy
  * Packet discard policy
  * Routing algorithm
  * Packet lifetime management
Data link layer
  * Retransmission policy
  * Out-of-order caching policy
  * Acknowledgement policy
  * Flow control policy

Approaches to Congestion Control
* The presence of congestion means that the load is (temporarily) greater than the resources (in a part of the network) can handle. Two solutions come to mind: increase the resources or decrease the load. As shown in the figure, these solutions are usually applied on different time scales to either prevent congestion or react to it once it has occurred.

Figure: approaches to congestion control on a time scale from slower (preventative) to faster (reactive): network provisioning, traffic-aware routing, admission control, traffic throttling, load shedding.
* The most basic way to avoid congestion is to build a network that is well matched to the traffic that it carries. If there is a low-bandwidth link on the path along which most traffic is directed, congestion is likely.
* Sometimes resources can be added dynamically when there is serious congestion, for example by turning on spare routers, enabling lines that are normally used only as backups (to make the system fault tolerant), or purchasing bandwidth on the open market.
* More often, links and routers that are regularly heavily utilized are upgraded at the earliest opportunity. This is called provisioning and happens on a time scale of months, driven by long-term traffic trends.
* To make the most of the existing network capacity, routes can be tailored to traffic patterns that change during the day as network users wake and sleep in different time zones. For example, routes may be changed to shift traffic away from heavily used paths by changing the shortest-path weights (a small sketch of this idea follows this bullet). Some local radio stations have helicopters flying around their cities to report on road congestion, making it possible for their mobile listeners to route their packets (cars) around hotspots. This is called traffic-aware routing. Splitting traffic across multiple paths is also helpful.
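As a rough illustration of the idea (not from the slides): inflate each link weight by its measured utilization and rerun a shortest-path computation, so new traffic is steered away from heavily loaded links. The topology, the weight function base_delay / (1 - utilization), and all names below are illustrative assumptions.

```python
import heapq

# Hypothetical topology: link -> (base delay in ms, measured utilization 0..1).
links = {
    ("A", "B"): (10, 0.90),   # short path, but heavily loaded
    ("A", "C"): (15, 0.20),
    ("C", "B"): (15, 0.30),
}

def weight(base_delay, utilization):
    # Load-sensitive metric: the nominal delay inflated as the link fills up.
    return base_delay / max(1e-6, 1.0 - utilization)

def shortest_path(links, src, dst):
    # Build an undirected graph with load-aware weights, then run Dijkstra.
    graph = {}
    for (u, v), (d, rho) in links.items():
        graph.setdefault(u, []).append((v, weight(d, rho)))
        graph.setdefault(v, []).append((u, weight(d, rho)))
    dist, heap, prev = {src: 0.0}, [(0.0, src)], {}
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

print(shortest_path(links, "A", "B"))  # ['A', 'C', 'B']: avoids the 90%-loaded link
```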
* However, sometimes it is not possible to increase capacity. The only way then to beat back
the congestion is to decrease the load. In a virtual-circuit network, new connections can be
refused if they would cause the network to become congested. This is called admission
control.
* At a finer granularity, when congestion is imminent the network can deliver feedback to the sources whose traffic flows are responsible for the problem. The network can request these sources to throttle their traffic, or it can slow down the traffic itself.
Two difficulties with this approach are how to identify the onset of congestion, and how to inform the source that needs to slow down.

To tackle the first issue, routers can monitor the average load, queueing delay, or packet loss. In all cases, rising numbers indicate growing congestion.

To tackle the second issue, routers must participate in a feedback loop with the sources. For a scheme to work correctly, the time scale must be adjusted carefully. If a router yells STOP every time two packets arrive in a row and yells GO every time it has been idle for 20 μsec, the system will oscillate wildly and never converge. On the other hand, if it waits 30 minutes to be sure before saying anything, the congestion-control mechanism will react too sluggishly to be of any use. Delivering timely feedback is a nontrivial matter. An added concern is having routers send more messages when the network is already congested.

Finally, when all else fails, the network is forced to discard packets that it cannot deliver. The general name for this is load shedding. A good policy for choosing which packets to discard can help to prevent congestion collapse.

Congestion Control in Virtual-Circuit Subnets: Admission Control
Figure: (a) A congested subnet. (b) A redrawn subnet that eliminates the congestion, with a virtual circuit from A to B.

Admission Control
= We have just seen resource reservation, but how can the sender specify the required resources?
= Also, some applications are tolerant of occasional lapses in QoS, and an application might not even know what its CPU requirements are.
* Hence routers must convert a set of specifications into resource requirements and then decide whether to accept or reject the flow.
Parameter            Unit
Token bucket rate    Bytes/sec
Token bucket size    Bytes
Peak data rate       Bytes/sec
Minimum packet size  Bytes
Maximum packet size  Bytes

An example flow specification.
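The slides do not give an admission algorithm, but a minimal sketch of the idea might compare the requested token-bucket rate and burst size against the unreserved bandwidth and buffer space on the outgoing line. The FlowSpec fields mirror the table above; the class names and numbers are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class FlowSpec:
    """Fields mirroring the flow specification above (rates in bytes/sec, sizes in bytes)."""
    token_rate: float      # token bucket rate
    bucket_size: float     # token bucket size (maximum burst)
    peak_rate: float       # peak data rate
    min_packet: int        # minimum packet size
    max_packet: int        # maximum packet size

class AdmissionController:
    """Illustrative admission control: accept a flow only if the line can carry
    its sustained rate and buffer its worst-case burst."""
    def __init__(self, line_rate, buffer_space):
        self.remaining_bw = line_rate        # bytes/sec still unreserved
        self.remaining_buf = buffer_space    # bytes of queue space still unreserved

    def admit(self, spec: FlowSpec) -> bool:
        if spec.token_rate > self.remaining_bw:
            return False                     # sustained rate does not fit
        if spec.bucket_size > self.remaining_buf:
            return False                     # a full burst could overflow the queue
        self.remaining_bw -= spec.token_rate
        self.remaining_buf -= spec.bucket_size
        return True

ac = AdmissionController(line_rate=10_000_000, buffer_space=1_000_000)
flow = FlowSpec(token_rate=2_000_000, bucket_size=64_000,
                peak_rate=4_000_000, min_packet=64, max_packet=1500)
print(ac.admit(flow))   # True: 2 MB/s and a 64 KB burst fit the remaining capacity
```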
Congestion Control in Datagram Subnets: Traffic Throttling

= The old DECNET and frame relay networks used a warning bit: in the case of congestion, a warning bit is sent back to the source in the acknowledgement. Every router on the path can set the warning bit.
= Each router monitors its utilization u:

    u_new = a * u_old + (1 - a) * f

  where f is the instantaneous line utilization (either 0 or 1) and a is a forgetting factor that determines how fast the router forgets recent history.
= If u rises above a threshold, the output line enters a warning state.
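A minimal sketch of this warning-bit scheme, assuming the router samples the line once per tick (f = 1 if busy, 0 if idle) and sets the warning bit in forwarded packets while u stays above the threshold; the values of a and the threshold are illustrative.

```python
class WarningBitMonitor:
    """Exponentially weighted moving average of line utilization,
    u_new = a * u_old + (1 - a) * f, as in the formula above."""
    def __init__(self, a=0.9, threshold=0.8):
        self.a = a                  # how slowly old history is forgotten
        self.threshold = threshold  # utilization above which we warn
        self.u = 0.0

    def sample(self, line_busy: bool) -> bool:
        f = 1.0 if line_busy else 0.0           # instantaneous utilization
        self.u = self.a * self.u + (1 - self.a) * f
        return self.u > self.threshold          # True => set the warning bit

mon = WarningBitMonitor()
for busy in [True] * 30 + [False] * 10:
    warn = mon.sample(busy)
print(round(mon.u, 3), warn)   # utilization decays once the line goes idle
```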
Hop-by-Hop Choke Packets

= In high-speed networks, reacting end to end is slow: it takes about 30 ms for a choke packet to get from New York to San Francisco, and at 155 Mbps roughly 4.6 megabits enter the pipe in that time.
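As a quick check of that figure, the data already in flight is simply rate times delay:

```python
rate_bps = 155e6        # 155 Mbps line rate
delay_s = 0.030         # ~30 ms for the choke packet to reach the source
bits_in_flight = rate_bps * delay_s
print(bits_in_flight / 1e6, "megabits")   # about 4.6 megabits already in the pipe
```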
(a) A choke packet that affects only the source. (b) A choke packet that affects each hop it passes through.

Dropping Packets
Load shedding: wine vs. milk
* Wine: drop new packets and keep old ones; good for file transfer.
* Milk: drop old packets and keep new ones; good for multimedia.

Random Early Detection (RED)
* When the average queue length exceeds a threshold, packets are picked at random from the queue and discarded.
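A minimal sketch of RED, assuming the common formulation in which arriving packets are dropped with a probability that rises as the averaged queue length moves between a minimum and a maximum threshold (rather than pulling packets out of the queue, as the bullet above words it); all parameter values are illustrative.

```python
import random

class RedQueue:
    """Random Early Detection sketch: drop arriving packets with a probability
    that grows as the *average* queue length rises between two thresholds."""
    def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.2):
        self.queue = []
        self.avg = 0.0            # EWMA of the queue length
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.weight = max_p, weight

    def enqueue(self, packet) -> bool:
        # Update the averaged queue length on every arrival.
        self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)
        if self.avg >= self.max_th:
            return False                          # drop: persistent congestion
        if self.avg >= self.min_th:
            p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            if random.random() < p:
                return False                      # early random drop
        self.queue.append(packet)
        return True

q = RedQueue()
accepted = sum(q.enqueue(i) for i in range(100))  # no departures: the queue just builds
print(accepted, "of 100 packets accepted")
```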
Jitter Control

Figure: fraction of packets versus delay, where the minimum delay is due to the speed of light. (a) High jitter: delays spread widely. (b) Low jitter: delays tightly clustered around the minimum.
Requirements

Application        Reliability  Delay   Jitter  Bandwidth
E-mail             High         Low     Low     Low
File transfer      High         Low     Low     Medium
Web access         High         Medium  Low     Medium
Remote login       High         Medium  Medium  Low
Audio on demand    Low          Low     High    Medium
Video on demand    Low          Low     High    High
Telephony          Low          High    High    Low
Videoconferencing  Low          High    High    High

= The table shows how stringent the quality-of-service requirements of these applications are.

Traffic Control
The Leaky Bucket Algorithm
Figure: (a) A leaky bucket with water: an unregulated flow pours in, the bucket holds the water, and water drips out of the hole at a constant rate. (b) A leaky bucket with packets: the host's interface contains a leaky bucket (a finite queue) that holds packets and releases them onto the network as a regulated flow.
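A minimal sketch of the leaky bucket in its packet form, assuming a finite queue drained at a fixed number of packets per clock tick; the tick-driven interface and parameter values are illustrative.

```python
from collections import deque

class LeakyBucket:
    """Finite queue that releases packets at a constant rate; packets that
    arrive when the queue is full are discarded (they spill over)."""
    def __init__(self, capacity=4, drain_per_tick=1):
        self.capacity = capacity
        self.drain_per_tick = drain_per_tick
        self.queue = deque()

    def arrive(self, packet) -> bool:
        if len(self.queue) >= self.capacity:
            return False               # bucket full: packet is dropped
        self.queue.append(packet)
        return True

    def tick(self):
        # Called once per clock tick: emit a fixed number of packets
        # regardless of how bursty the arrivals were.
        out = []
        for _ in range(min(self.drain_per_tick, len(self.queue))):
            out.append(self.queue.popleft())
        return out

bucket = LeakyBucket()
dropped = sum(not bucket.arrive(i) for i in range(8))   # a burst of 8 packets
sent = [bucket.tick() for _ in range(4)]                # one packet leaves per tick
print(dropped, sent)   # 4 dropped, the rest trickle out one per tick
```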
The Token Bucket Algorithm

Figure: a host computer with a token bucket; one token is added to the bucket every ΔT. (a) Before: tokens have accumulated in the bucket. (b) After: tokens have been consumed by the packets sent.
= The token bucket allows some burstiness (up to the number of tokens the bucket can hold).
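A minimal sketch of a token bucket, assuming one token admits one packet and tokens accrue at a fixed rate up to the bucket size, which is what bounds the burst; the rate, capacity, and method names are illustrative.

```python
import time

class TokenBucket:
    """Tokens accumulate at `rate` per second up to `capacity`; sending a
    packet consumes one token, so bursts of up to `capacity` packets are allowed."""
    def __init__(self, rate=10.0, capacity=5):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum tokens the bucket can hold
        self.tokens = capacity      # start full
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill based on elapsed time, capped at the bucket size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0      # spend one token to send one packet
            return True
        return False                # no token: packet must wait (or be dropped)

tb = TokenBucket()
burst = [tb.allow() for _ in range(8)]   # first 5 pass immediately, the rest are throttled
print(burst)
```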