Congestion Control and QoS

Congestion in computer networks occurs when too many packets are present, leading to performance degradation. Effective congestion control requires collaboration between the network and transport layers, utilizing various strategies such as traffic-aware routing, admission control, and feedback mechanisms to manage load and resources. Approaches like load shedding and Random Early Detection (RED) help mitigate congestion by intelligently managing packet flow and prioritizing traffic.


Congestion Control

Computer Networks, Fifth Edition by Andrew Tanenbaum and David Wetherall, © Pearson Education-Prentice Hall, 2011
Congestion
a) Congestion: too many packets present in (a part of) the network.
b) The layers responsible for handling congestion are the network and transport layers.
c) The network layer directly experiences congestion, since congestion occurs within the network and the network layer must determine what to do with the excess packets.
d) But the most effective way to control congestion is to reduce the load that the transport layer is placing on the network.
e) This requires the network and transport layers to work together.

Congestion Control Algorithms (2)

When too much traffic is offered, congestion sets in and performance degrades sharply.
a) When the number of packets hosts send into the network is
well within its carrying capacity, the number delivered is
proportional to the number sent.
b) If twice as many are sent, twice as many are delivered.
c) However, as the offered load approaches the carrying
capacity, bursts of traffic occasionally fill up the buffers inside
routers and some packets are lost.
d) These lost packets consume some of the capacity, so the
number of delivered packets falls below the ideal curve.
e) The network is now congested.

a) The network may experience congestion collapse, in which performance falls very quickly as the offered load increases beyond the capacity.
b) Reason 1: packets can be sufficiently delayed inside the network that they are no longer useful when they leave the network.
c) Reason 2: senders retransmit packets that are greatly delayed, thinking that they have been lost. In this case, copies of the same packet will be delivered by the network, again wasting its capacity.
d) The measure of interest is goodput, which is the rate at which useful packets are delivered by the network.

a) The goal is to design networks that avoid congestion where possible and do not suffer from congestion collapse if they do become congested. Unfortunately, congestion cannot wholly be avoided.
b) It is important to distinguish congestion control from flow control.
c) Congestion control has to do with making sure the network is
able to carry the offered traffic. It is a global issue, involving
the behavior of all the hosts and routers.
d) Flow control, in contrast, relates to the traffic between a
particular sender and a particular receiver. Its job is to make
sure that a fast sender cannot continually transmit data faster
than the receiver is able to absorb it.
e) The reason congestion control and flow control are often
confused is that the best way to handle both problems is to
get the host to slow down. Thus, a host can get a ‘‘slow
down’’ message either because the receiver cannot handle
the load or because the network cannot handle it.
Approaches to Congestion Control

The presence of congestion means that the load is (temporarily) greater than the resources (in a part of the network) can handle. Two solutions come to mind: increase the resources or decrease the load.

Timescales of approaches to congestion control
These solutions are usually applied on different time scales to either prevent congestion or react to it once it has occurred:
a) Network provisioning
b) Traffic-aware routing
c) Admission control
d) Traffic throttling: when congestion is imminent, the network can deliver feedback to the sources whose traffic flows are responsible for the problem. The network can request these sources to throttle their traffic, or it can slow the traffic down itself. Two questions arise: how to identify the onset of congestion, and how to inform the source that needs to slow down. To tackle the first, routers can monitor the average load, queueing delay, or packet loss; in all cases, rising numbers indicate growing congestion. To tackle the second, routers must participate in a feedback loop with the sources.
e) Load shedding

Network Provisioning
• The most basic way to avoid congestion is to build a
network that is well matched to the traffic that it
carries
• If there is a low-bandwidth link on the path along which
most traffic is directed, congestion is likely
• Sometimes resources can be added dynamically when there is serious
congestion, for example, turning on spare routers or enabling lines
that are normally used only as backups (to make the system fault
tolerant) or purchasing bandwidth on the open market.
• More often, links and routers that are regularly heavily utilized are
upgraded at the earliest opportunity.
• This is called provisioning and happens on a time scale of months,
driven by long-term traffic trends.

Traffic-Aware Routing

• The routing schemes we discussed used fixed link weights.


These schemes adapted to changes in topology, but not to
changes in load.
• The goal in taking load into account when computing routes is
to shift traffic away from hotspots that will be the first places in
the network to experience congestion.
• The most direct way to do this is to set the link weight to be a
function of the (fixed) link bandwidth and propagation delay
plus the (variable) measured load or average queuing delay.
Least-weight paths will then favor paths that are more lightly loaded, all else being equal (see the sketch below).
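A minimal sketch in Python of one way to form such a load-sensitive weight; the nominal packet size, the use of average queueing delay as the load term, and the function name are illustrative assumptions, not part of the book's scheme.

PACKET_BITS = 8_000  # assumed nominal packet size, used to turn bandwidth into a delay

def link_weight(bandwidth_bps, prop_delay_s, avg_queue_delay_s):
    # Fixed part of the weight: transmission time on this link plus propagation delay.
    fixed = PACKET_BITS / bandwidth_bps + prop_delay_s
    # Variable part: the measured load, expressed here as average queueing delay.
    return fixed + avg_queue_delay_s

# Feeding these weights into a least-weight path computation (e.g., Dijkstra)
# shifts traffic toward the more lightly loaded of two otherwise equal links.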

Traffic-Aware Routing

A network in which the East and West parts are connected by two links.
Admission Control
a) Admission control is widely used in virtual-circuit networks.
b) The idea: do not set up a new virtual circuit unless the network can carry the added traffic without becoming congested. Thus, attempts to set up a virtual circuit may fail.
c) The problem is to identify when a new virtual circuit will lead to congestion. Virtual circuits in computer networks come in all shapes and sizes, so the circuit must come with some characterization of its traffic if we are to apply admission control. Traffic is often described in terms of its rate and shape.
d) Armed with traffic descriptions, the network can decide
whether to admit the new virtual circuit. One possibility is for
the network to reserve enough capacity along the paths of
each of its virtual circuits that congestion will not occur.
a) Even without making guarantees, the network can use traffic
descriptions for admission control. The task is then to
estimate how many circuits will fit within the carrying capacity
of the network without congestion.
b) Suppose that virtual circuits that may blast traffic at rates up
to 10 Mbps all pass through the same 100-Mbps physical link.
How many circuits should be admitted? Clearly, 10 circuits
can be admitted without risking congestion, but this is
wasteful in the normal case since it may rarely happen that all
10 are transmitting full blast at the same time.
c) In real networks, measurements of past behavior that capture the statistics of transmissions can be used to estimate the number of circuits to admit, trading better performance for acceptable risk (see the sketch below).
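One way to make this trade-off concrete, sketched in Python; the on/off model in which each circuit independently transmits at its 10-Mbps peak with probability p, and the 1% risk bound, are illustrative assumptions rather than the book's method.

from math import comb

PEAK_MBPS = 10.0
LINK_MBPS = 100.0

def overload_probability(n, p):
    # Probability that more of the n admitted circuits are active at once
    # than the link can carry at full blast.
    limit = int(LINK_MBPS // PEAK_MBPS)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(limit + 1, n + 1))

def max_admissible(p, risk=0.01):
    n = int(LINK_MBPS // PEAK_MBPS)      # admitting this many is always safe
    while overload_probability(n + 1, p) <= risk:
        n += 1
    return n

print(max_admissible(p=0.1))  # circuits active 10% of the time: far more than 10 admitted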

Admission control can also be combined with traffic-
aware routing by considering routes around traffic
hotspots as part of the setup procedure.

(a) A congested network. (b) The portion of the network that is not congested. A virtual circuit from A to B is also shown.
Traffic Throttling

• In the Internet and many other computer networks, senders adjust their
transmissions to send as much traffic as the network can readily deliver.
• In this setting, the network aims to operate just before the onset of
congestion.
• When congestion is imminent, it must tell the senders to throttle back
their transmissions and slow down.
• This feedback is business as usual rather than an exceptional situation.
The term congestion avoidance is sometimes used to contrast this
operating point with the one in which the network has become (overly)
congested.

Approaches to throttling traffic apply in both datagram networks and virtual-circuit networks.
These approaches must solve two problems.
Problem 1: routers must determine when congestion is approaching, ideally before it has arrived.
To do so, each router continuously monitors the resources it is using.
Three possibilities:
utilization of the output links,
the buffering of queued packets inside the router,
the number of packets that are lost due to insufficient buffering.
Of these possibilities, the second one is the most useful.

a) Averages of utilization do not directly account for the burstiness of
most traffic—a utilization of 50% may be low for smooth traffic and
too high for highly variable traffic.
b) Counts of packet losses come too late. Congestion has already set
in by the time that packets are lost.
c) The queueing delay inside routers directly captures any congestion
experienced by packets. It should be low most of time, but will jump
when there is a burst of traffic that generates a backlog.
d) To maintain a good estimate of the queueing delay, d, a sample of the instantaneous queue length, s, can be made periodically and d updated according to
d_new = α·d_old + (1 − α)·s
where the constant α determines how fast the router forgets recent history. This is called an EWMA (Exponentially Weighted Moving Average). It smooths out fluctuations and is equivalent to a low-pass filter. Whenever d moves above the threshold, the router notes the onset of congestion (see the sketch below).
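A minimal sketch of this EWMA update in Python; the value of α and the congestion threshold are illustrative choices, not values from the book.

ALPHA = 0.9       # how slowly the router forgets recent history (assumed value)
THRESHOLD = 8     # estimate above which the router notes congestion (assumed value)

class QueueEstimator:
    def __init__(self):
        self.d = 0.0                        # smoothed estimate (the EWMA)

    def update(self, s):
        # Fold in a periodic sample s of the instantaneous queue length:
        # d_new = α·d_old + (1 − α)·s
        self.d = ALPHA * self.d + (1 - ALPHA) * s
        return self.d > THRESHOLD           # True once the estimate crosses the threshold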

a) Problem 2: routers must deliver timely feedback to the
senders that are causing the congestion.
b) Congestion is experienced in the network, but relieving
congestion requires action on behalf of the senders that are
using the network.
c) To deliver feedback, the router must identify the appropriate
senders.
d) It must then warn them carefully, without sending many more
packets into the already congested network.
e) Different schemes use different feedback mechanisms: choke packets, explicit congestion notification, and hop-by-hop backpressure.

Choke Packets
a) The most direct way to notify a sender of congestion is to tell
it directly.
b) In this approach, the router selects a congested packet and
sends a choke packet back to the source host, giving it the
destination found in the packet.
c) The original packet may be tagged (a header bit is turned on)
so that it will not generate any more choke packets farther
along the path and then forwarded in the usual way.
d) To avoid increasing load on the network during a time of
congestion, the router may only send choke packets at a low
rate.
e) When the source host gets the choke packet, it is required to reduce the traffic sent to the specified destination, for example, by 50% (see the sketch below).
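A sketch of the host's reaction in Python; the per-destination rate table is an illustrative data structure, while the 50% cut is the example figure given above.

# Allowed sending rate per destination, in packets/sec (illustrative structure).
allowed_rate = {}

def on_choke_packet(destination, default_rate=1000.0):
    # Cut the rate toward the destination named in the choke packet, e.g. by 50%.
    current = allowed_rate.get(destination, default_rate)
    allowed_rate[destination] = current * 0.5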
Explicit Congestion Notification
a) Instead of generating additional packets to warn of
congestion, a router can tag any packet it forwards (by
setting a bit in the packet’s header) to signal that it is
experiencing congestion.
b) When the network delivers the packet, the destination can
note that there is congestion and inform the sender when it
sends a reply packet.
c) The sender can then throttle its transmissions as before (a rough sketch follows).
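A rough sketch of this feedback loop in Python; the dictionary field names and the halving response are illustrative, not the actual ECN bits defined for IP and TCP.

def router_forward(packet, congested):
    # A congested router tags the packet it forwards instead of generating a new one.
    if congested:
        packet["ce"] = True                  # "congestion experienced" mark (assumed field)
    return packet

def receiver_build_reply(packet, reply):
    # The destination echoes the congestion mark back to the sender in its reply.
    reply["echo_ce"] = packet.get("ce", False)
    return reply

def sender_on_reply(reply, send_rate):
    # The sender throttles its transmissions when the echoed mark indicates congestion.
    return send_rate * 0.5 if reply.get("echo_ce") else send_rate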

Hop-by-Hop Backpressure
At high speeds or over long distances, many new packets may
be transmitted after congestion has been signaled because of
the delay before the signal takes effect.

A choke packet that affects only the source.


A choke packet that affects each hop it passes through.
Load Shedding
a) Load shedding: when routers cannot handle the packets arriving, they just throw some of them away.
b) The key question: which packets to drop.
c) The preferred choice may depend on the type of applications
that use the network.
d) For a file transfer, an old packet is worth more than a new one. This is because dropping old packets and keeping new ones will only force the receiver to do more work to buffer data that it cannot yet use.
e) In contrast, for real-time media, a new packet is worth more
than an old one. This is because packets become useless if
they are delayed and miss the time at which they must be
played out to the user.

a) More intelligent load shedding requires cooperation from the
senders. An example is packets that carry routing information.
These packets are more important than regular data packets
because they establish routes; if they are lost, the network
may lose connectivity.
b) To implement an intelligent discard policy, applications must
mark their packets to indicate to the network how important
they are.
c) Then, when packets have to be discarded, routers can first drop packets from the least important class, then the next most important class, and so on (see the sketch below).
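A sketch of such a class-based discard policy in Python; the number of classes, the buffer capacity, and the data structure are assumptions made for illustration.

from collections import deque

NUM_CLASSES = 3   # 0 = most important (e.g., routing packets), 2 = least important

class PriorityDropQueue:
    def __init__(self, capacity):
        self.capacity = capacity
        self.queues = [deque() for _ in range(NUM_CLASSES)]

    def size(self):
        return sum(len(q) for q in self.queues)

    def enqueue(self, packet, pclass):
        if self.size() >= self.capacity:
            # Buffer full: shed load from the least important class first.
            for victim in range(NUM_CLASSES - 1, pclass, -1):
                if self.queues[victim]:
                    self.queues[victim].pop()    # discard a less important packet
                    break
            else:
                return False                     # nothing less important: drop the arrival
        self.queues[pclass].append(packet)
        return True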

Random Early Detection
a) Dealing with congestion when it first starts is more effective than
letting it gum up the works and then trying to deal with it.
b) By having routers drop packets early, before the situation has
become hopeless, there is time for the source to take action before it
is too late. RED (Random Early Detection)
c) To determine when to start discarding, routers maintain a running
average of their queue lengths.
d) When the average queue length on some link exceeds a threshold,
the link is said to be congested and a small fraction of the packets
are dropped at random.
e) The affected sender will notice the loss when there is no
acknowledgement, and then the transport protocol will slow down.
f) The lost packet thus delivers the same message as a choke packet, but implicitly, without the router sending any explicit signal (see the sketch below).
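A simplified RED sketch in Python; real implementations also track the count since the last drop and use carefully tuned parameters, so the constants here are only illustrative.

import random

ALPHA = 0.9        # EWMA weight on history
MIN_THRESH = 5     # start probabilistic dropping above this average queue length
MAX_THRESH = 15    # drop every arrival above this average queue length
MAX_DROP_P = 0.1   # drop probability as the average approaches MAX_THRESH

class RedQueue:
    def __init__(self):
        self.avg = 0.0

    def on_arrival(self, current_queue_len):
        # Running average of the queue length, as described above.
        self.avg = ALPHA * self.avg + (1 - ALPHA) * current_queue_len
        if self.avg < MIN_THRESH:
            return False                     # not congested: accept the packet
        if self.avg >= MAX_THRESH:
            return True                      # badly congested: drop the packet
        # In between: drop a small, linearly growing fraction of packets at random.
        p = MAX_DROP_P * (self.avg - MIN_THRESH) / (MAX_THRESH - MIN_THRESH)
        return random.random() < p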
Quality of Service
a) An easy solution to provide good quality of service is to
build a network with enough capacity for whatever traffic
will be thrown at it.
b) The name for this solution is overprovisioning.
c) The resulting network will carry application traffic without
significant loss and, assuming a decent routing scheme, will
deliver packets with low latency.
d) Performance doesn’t get any better than this.
e) The trouble with this solution is that it is expensive. It is
basically solving a problem by throwing money at it.
f) With quality of service mechanisms, the network can honor
the performance guarantees that it makes even when traffic
spikes, at the cost of turning down some requests.

Four issues must be addressed to ensure quality of service:
a) What applications need from the network.
b) How to regulate the traffic that enters the network.
c) How to reserve resources at routers to guarantee performance.
d) Whether the network can safely accept more traffic.
No single technique deals efficiently with all these issues.
Instead, a variety of techniques have been developed for use at the
network (and transport) layer.
Practical quality-of-service solutions combine multiple techniques.

Application Requirements

a) A stream of packets from a source to a destination is called a flow.


b) A flow might be all the packets of a connection in a connection-
oriented network, or all the packets sent from one process to
another process in a connectionless network.
c) The needs of each flow can be characterized by four primary
parameters: bandwidth, delay, jitter, and loss.
d) Together, these determine the QoS (Quality of Service) the flow
requires.
e) The variation (i.e., standard deviation) in the delay or packet arrival
times is called jitter.

Traffic Shaping
a) The network must know what traffic is being guaranteed.
b) In the telephone network, this characterization is simple. For
example, a voice call (in uncompressed format) needs 64 kbps and
consists of one 8-bit sample every 125 μsec.
c) However, traffic in data networks is bursty. It typically arrives at
nonuniform rates as the traffic rate varies (e.g., videoconferencing
with compression), users interact with applications (e.g., browsing a
new Web page), and computers switch between tasks.
d) Bursts of traffic are more difficult to handle than constant-rate
traffic because they can fill buffers and cause packets to be lost.

a) Traffic shaping is a technique for regulating the average rate and
burstiness of a flow of data that enters the network.
b) The goal is to allow applications to transmit a wide variety of traffic
that suits their needs, including some bursts, yet have a simple and
useful way to describe the possible traffic patterns to the network.
c) When a flow is set up, the user and the network (i.e., the customer
and the provider) agree on a certain traffic pattern (i.e., shape) for
that flow.
d) Sometimes this agreement is called an SLA (Service Level
Agreement), especially when it is made over aggregate flows and
long periods of time, such as all of the traffic for a given customer.

a) Traffic shaping reduces congestion and thus helps the network live
up to its promise.
b) Issue: how the provider can tell if the customer is following the
agreement and what to do if the customer is not.
c) Packets in excess of the agreed pattern might be dropped by the
network, or they might be marked as having lower priority.
Monitoring a traffic flow is called traffic policing.

The Leaky Bucket Algorithm

(a) A leaky bucket with water. (b) A leaky bucket with packets.
a) Imagine a bucket with a small hole in the bottom. No matter the rate
at which water enters the bucket, the outflow is at a constant rate, R,
when there is any water in the bucket and zero when the bucket is
empty.
b) Also, once the bucket is full to capacity B, any additional water
entering it spills over the sides and is lost.
c) This bucket can be used to shape or police packets entering the
network.

a) Each host is connected to the network by an interface containing a leaky bucket.
b) To send a packet into the network, it must be possible to put more
water into the bucket.
c) If a packet arrives when the bucket is full, the packet must either be
queued until enough water leaks out to hold it or be discarded.
d) The former might happen at a host shaping its traffic for the
network as part of the operating system.
e) The latter might happen in hardware at a provider network interface
that is policing traffic entering the network.
f) This technique is called the leaky bucket algorithm (a sketch follows below).
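A packet-level leaky bucket sketch in Python; expressing the rate and capacity in whole packets (rather than bytes of "water") is a simplifying assumption.

from collections import deque

class LeakyBucket:
    def __init__(self, capacity):
        self.capacity = capacity             # bucket size B, in packets
        self.queue = deque()

    def arrive(self, packet):
        # A packet arriving at a full bucket spills over: discard it (policing)
        # or make the sending host wait (shaping).
        if len(self.queue) >= self.capacity:
            return False
        self.queue.append(packet)
        return True

    def leak(self):
        # Called once every 1/R seconds by a timer: constant outflow at rate R.
        return self.queue.popleft() if self.queue else None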

The Token Bucket Algorithm


(a) Before. (b) After.


a) The tap is running at rate R and the bucket has a capacity of B, as
before.
b) Now, to send a packet we must be able to take water, or tokens, as
the contents are commonly called, out of the bucket (rather than
putting water into the bucket).
c) No more than a fixed number of tokens, B, can accumulate in the
bucket, and if the bucket is empty, we must wait until more tokens
arrive before we can send another packet.
d) This algorithm is called the token bucket algorithm (see the sketch below).
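A token bucket sketch in Python; spending one token per byte and using a monotonic clock are assumptions made for illustration.

import time

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate = rate                     # R: tokens added per second
        self.burst = burst                   # B: maximum tokens the bucket can hold
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        # Accumulate tokens for the elapsed time, capped at the bucket size B.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes      # take tokens out to send this packet
            return True
        return False                         # bucket empty: wait for more tokens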

