Congestion Control Notes
Taxonomy
In a host-centric design, the end hosts observe the network conditions (e.g.,
how many packets they are successfully getting through the network) and
adjust their behavior accordingly.
In a reservation-based system, some entity (e.g., the end host) asks the network
for a certain amount of capacity to be allocated for a flow.
In a feedback-based approach, the end hosts begin sending data without first
reserving any capacity and then adjust their sending rate according to the
feedback they receive.
This feedback can either be explicit (i.e., a congested router sends a “please slow down” message to the
host) or it can be implicit (i.e., the end host adjusts its sending rate according to the externally
observable behavior of the network, such as packet losses).
Queuing Disciplines
The idea of FIFO queuing, also called first-come-first-served (FCFS) queuing, is simple:
The first packet that arrives at a router is the first packet to be transmitted
Given that the amount of buffer space at each router is finite, if a packet arrives and the
queue (buffer space) is full, then the router discards that packet
This is done without regard to which flow the packet belongs to or how important the
packet is. This is sometimes called tail drop, since packets that arrive at the tail end of
the FIFO are dropped
Note that tail drop and FIFO are two separable ideas. FIFO is a scheduling discipline—it
determines the order in which packets are transmitted. Tail drop is a drop policy—it
determines which packets get dropped.
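A minimal sketch of FIFO with tail drop as described above (the capacity value and class names are illustrative, not from the notes):

```python
from collections import deque

class FifoTailDropQueue:
    """FIFO queue with tail drop: arrivals beyond capacity are discarded."""
    def __init__(self, capacity):
        self.capacity = capacity  # finite buffer space (illustrative parameter)
        self.queue = deque()

    def enqueue(self, packet):
        if len(self.queue) >= self.capacity:
            return False  # buffer full: drop the arriving packet (tail drop)
        self.queue.append(packet)
        return True

    def dequeue(self):
        # The first packet to arrive is the first to be transmitted (FCFS)
        return self.queue.popleft() if self.queue else None
```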
A simple variation on basic FIFO queuing is priority queuing. The idea is to mark each packet with a
priority; the mark could be carried, for example, in the IP header.
The routers then implement multiple FIFO queues, one for each priority class. The router always
transmits packets out of the highest-priority queue if that queue is nonempty before moving on to the
next priority queue.
Within each priority, packets are still managed in a FIFO manner.
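A sketch of strict priority queuing over per-class FIFO queues (the number of classes and method names are illustrative):

```python
from collections import deque

class PriorityQueuing:
    """One FIFO queue per priority class; always serve the highest-priority
    nonempty queue, FIFO within each class."""
    def __init__(self, num_classes):
        self.queues = [deque() for _ in range(num_classes)]  # 0 = highest priority

    def enqueue(self, packet, priority):
        self.queues[priority].append(packet)

    def dequeue(self):
        for q in self.queues:       # scan from highest to lowest priority
            if q:
                return q.popleft()  # FIFO within the priority class
        return None
```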
Fair Queuing
The main problem with FIFO queuing is that it does not discriminate between different
traffic sources; that is, it does not separate packets according to the flow to which they
belong.
Fair queuing (FQ) is an algorithm that has been proposed to address this problem. The
idea of FQ is to maintain a separate queue for each flow currently being handled by the
router. The router then services these queues in a sort of round-robin fashion.
The main complication with Fair Queuing is that the packets being processed at a router
are not necessarily the same length.
To truly allocate the bandwidth of the outgoing link in a fair manner, it is necessary to
take packet length into consideration.
For example, if a router is managing two flows, one with 1000-byte packets and
the other with 500-byte packets (perhaps because of fragmentation upstream
from this router), then a simple round-robin servicing of packets from each
flow's queue will give the first flow two-thirds of the link's bandwidth and the
second flow only one-third.
What we really want is bit-by-bit round-robin; that is, the router transmits a bit from
flow 1, then a bit from flow 2, and so on.
To understand the algorithm for approximating bit-by-bit round-robin, consider the
behavior of a single flow. For each packet i, let A_i denote its arrival time, P_i the time
needed to transmit it, S_i the time the router begins transmitting it, and F_i the time it
finishes being transmitted.
Clearly, F_i = S_i + P_i.
Since the router cannot begin transmitting packet i until the previous packet of the flow
has finished and packet i has arrived, S_i = max(F_{i-1}, A_i), and therefore
F_i = max(F_{i-1}, A_i) + P_i
Now for every flow, we calculate F_i for each packet that arrives using this formula.
The next packet to transmit is always the packet that has the lowest timestamp F_i.
Figure: example of fair queuing in action; packets with earlier finishing times are sent first.
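A sketch of the finish-time bookkeeping implied by the formula above, assuming the simplified single-flow view from the notes (the full FQ algorithm computes F_i in a shared virtual time across flows; names here are illustrative):

```python
class FairQueuing:
    """Approximate bit-by-bit round-robin: timestamp each packet with its
    finish time F_i = max(F_{i-1}, A_i) + P_i; send the lowest timestamp first."""
    def __init__(self):
        self.last_finish = {}  # F_{i-1} for each flow
        self.queued = []       # (finish_time, flow, packet)

    def arrive(self, flow, packet, arrival_time, transmit_time):
        prev = self.last_finish.get(flow, 0.0)
        finish = max(prev, arrival_time) + transmit_time  # F_i
        self.last_finish[flow] = finish
        self.queued.append((finish, flow, packet))

    def next_to_send(self):
        if not self.queued:
            return None
        item = min(self.queued, key=lambda t: t[0])  # lowest finish time
        self.queued.remove(item)
        return item
```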
In fact, TCP repeatedly increases the load it imposes on the network in an effort to find
the point at which congestion occurs, and then it backs off from this point.
An appealing alternative, but one that has not yet been widely adopted, is to predict
when congestion is about to happen and then to reduce the rate at which hosts send
data just before packets start being discarded.
Congestion control refers to techniques and mechanisms that can either prevent
congestion before it happens or remove congestion after it has happened. In
general, we can divide congestion control mechanisms into two broad categories:
open-loop congestion control (prevention) and closed-loop congestion control
(removal).
AIMD
• Tradeoff: over-sized windows (causing loss) are much worse than under-sized windows
(causing lower throughput).
• Congestion control therefore probes for capacity with a cautious additive increase and
backs off with a multiplicative decrease.
• Timeout: a timeout is taken as a sign of loss and triggers the multiplicative decrease.
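A minimal sketch of an AIMD window update under these rules (the halving factor and units are the classic TCP-style assumptions, not taken from the notes):

```python
def aimd_update(cwnd, loss_detected, mss=1, min_cwnd=1):
    """Additive increase / multiplicative decrease of a congestion window."""
    if loss_detected:
        return max(min_cwnd, cwnd / 2)  # multiplicative decrease on loss/timeout
    return cwnd + mss                   # additive increase otherwise (per RTT)
```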
DECbit
The first mechanism was developed for use on the Digital Network Architecture
(DNA), a connectionless network with a connection-oriented transport protocol.
As noted above, the idea here is to more evenly split the responsibility for
congestion control between the routers and the end nodes.
Each router monitors the load it is experiencing and explicitly notifies the end
nodes when congestion is about to occur. It does this by setting a binary
congestion bit, which came to be known as the DECbit, in packets that flow
through it whenever its average queue length is 1 or greater.
The destination host then copies this congestion bit into the ACK it sends back to
the source.
This average queue length is measured over a time interval that spans the last
busy+idle cycle, plus the current busy cycle.
Essentially, the router calculates the area under the curve and divides this value
by the time interval to compute the average queue length.
Using a queue length of 1 as the trigger for setting the congestion bit is a trade-
off between significant queuing (and hence higher throughput) and increased
idle time (and hence lower delay).
The source records how many of its packets resulted in some router setting the
congestion bit. If less than 50% of the packets in the last window had the bit set,
the source increases its congestion window by one packet; otherwise, it decreases
the window to 0.875 times its previous value.
The value 50% was chosen as the threshold based on analysis that showed it to correspond to
the peak of the power curve. The “increase by 1, decrease by 0.875” rule was selected because
additive increase/multiplicative decrease makes the mechanism stable.
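A sketch of the source-side adjustment just described (function name and window units are illustrative):

```python
def decbit_adjust(cwnd, acks_with_bit, acks_total):
    """Adjust the congestion window once per window of ACKs, per the DECbit rules."""
    if acks_total and acks_with_bit / acks_total < 0.5:
        return cwnd + 1    # less than 50% marked: additive increase
    return cwnd * 0.875    # 50% or more marked: multiplicative decrease
```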
A second mechanism, called random early detection (RED), is similar to the DECbit
scheme in that each router is programmed to monitor its own queue length, and when
it detects that congestion is imminent, to notify the source to adjust its congestion
window. RED, invented by Sally Floyd and Van Jacobson in the early 1990s, differs from
the DECbit scheme in two major ways:
The first is that rather than explicitly sending a congestion notification message to the
source, RED is most commonly implemented such that it implicitly notifies the source of
congestion by dropping one of its packets.
The source is, therefore, effectively notified by the subsequent timeout or duplicate
ACK.
RED is designed to be used in conjunction with TCP, which currently detects congestion
by means of timeouts (or some other means of detecting packet loss such as duplicate
ACKs).
As the “early” part of the RED acronym suggests, the gateway drops the packet earlier
than it would have to, so as to notify the source that it should decrease its congestion
window sooner than it would normally have.
In other words, the router drops a few packets before it has exhausted its buffer space
completely, so as to cause the source to slow down, with the hope that this will mean it
does not have to drop lots of packets later on.
The second difference between RED and DECbit is in the details of how RED decides
when to drop a packet and which packet to drop.
To understand the basic idea, consider a simple FIFO queue. Rather than wait for the
queue to become completely full and then be forced to drop each arriving packet, we
could decide to drop each arriving packet with some drop probability whenever the
queue length exceeds some drop level.
This idea is called early random drop. The RED algorithm defines the details of how to
monitor the queue length and when to drop a packet.
First, RED computes an average queue length using a weighted running average similar
to the one used in the original TCP timeout computation. That is, AvgLen is computed as
AvgLen = (1 − Weight) × AvgLen + Weight × SampleLen
where 0 < Weight < 1 and SampleLen is the length of the queue when a sample
measurement is made.
In most software implementations, the queue length is measured every time a new
packet arrives at the gateway.
Second, RED has two queue length thresholds that trigger certain activity: MinThreshold
and MaxThreshold.
When a packet arrives at the gateway, RED compares the current AvgLen with these two
thresholds, according to the following rules:
if AvgLen <= MinThreshold
    queue the packet
if MinThreshold < AvgLen < MaxThreshold
    calculate probability P
    drop the arriving packet with probability P
if MaxThreshold <= AvgLen
    drop the arriving packet
P is a function of both AvgLen and how long it has been since the last packet was
dropped.
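A sketch combining the RED pieces above. The drop probability is simplified to a linear ramp up to an assumed MaxP, and the refinement where P also depends on the time since the last drop is omitted; all parameter values are illustrative:

```python
import random

class RedGateway:
    """Simplified RED: EWMA of queue length plus probabilistic early drop."""
    def __init__(self, min_th=5, max_th=15, weight=0.002, max_p=0.02):
        # illustrative parameter values, not from the notes
        self.min_th, self.max_th = min_th, max_th
        self.weight, self.max_p = weight, max_p
        self.avg_len = 0.0

    def on_arrival(self, queue_len):
        """Return True to enqueue the arriving packet, False to drop it."""
        # AvgLen = (1 - Weight) * AvgLen + Weight * SampleLen
        self.avg_len = (1 - self.weight) * self.avg_len + self.weight * queue_len
        if self.avg_len <= self.min_th:
            return True                    # queue the packet
        if self.avg_len < self.max_th:
            # drop probability ramps linearly from 0 at MinThreshold to max_p
            p = self.max_p * (self.avg_len - self.min_th) / (self.max_th - self.min_th)
            return random.random() >= p    # drop with probability p
        return False                       # AvgLen >= MaxThreshold: drop
```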
An ISP is granted a block of addresses starting with 190.100.0.0/16. The ISP needs to distribute these
addresses to three groups of customers as follows:
1. The first group has 64 customers; each needs 256 addresses.
2. The second group has 128 customers; each needs 128 addresses.
3. The third group has 128 customers; each needs 64 addresses.
Design the subblocks and give the slash notation for each subblock. Find out how many addresses are
still available after these allocations.
Group 1
For this group, each customer needs 256 addresses. This means the suffix length is 8 (2^8 = 256). The
prefix length is then 32 − 8 = 24. The addresses are:
001: 190.100.0.0/24 190.100.0.255/24
…
064: 190.100.63.0/24 190.100.63.255/24
Group 2
For this group, each customer needs 128 addresses. This means the suffix length is 7 (2^7 = 128). The
prefix length is then 32 − 7 = 25. The addresses are:
001: 190.100.64.0/25 190.100.64.127/25
…
128: 190.100.127.128/25 190.100.127.255/25
Group 3
For this group, each customer needs 64 addresses. This means the suffix length is 6 (2^6 = 64). The prefix
length is then 32 − 6 = 26. The addresses are:
001: 190.100.128.0/26 190.100.128.63/26
002: 190.100.128.64/26 190.100.128.127/26
…
128: 190.100.159.192/26 190.100.159.255/26
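To answer the final part of the exercise:
Granted block: 2^16 = 65536 addresses (190.100.0.0 to 190.100.255.255).
Group 1 uses 64 × 256 = 16384, Group 2 uses 128 × 128 = 16384, Group 3 uses 128 × 64 = 8192;
total allocated = 40960.
Addresses still available: 65536 − 40960 = 24576 (the range 190.100.160.0 to 190.100.255.255).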
1. Reliability
If a packet is lost or its acknowledgment is not received at the sender, the data must be
retransmitted; a flow that suffers such losses lacks reliability.
The importance of reliability differs according to the application.
For example:
E-mail and file transfer need reliable transmission, whereas audio conferencing can
tolerate occasional loss.
2. Delay
The delay of a message from source to destination is a very important characteristic. However,
different applications tolerate delay differently.
For example:
Audio conferencing cannot tolerate delay (it needs minimum delay), while delay matters
far less for e-mail or file transfer.
3. Jitter
Jitter is the variation in packet delay.
If the variation between delays is large, it is called high jitter; if the variation is small,
it is called low jitter.
Example:
Case 1: Three packets are sent at times 0, 1, 2 and received at 10, 11, 12. The delay is the
same (10 units) for all packets, which is acceptable for a telephone conversation.
Case 2: Three packets are sent at times 0, 1, 2 and received at 31, 34, 39. The delays differ
(31, 33, 37 units), which is not acceptable for a telephone conversation.
4. Bandwidth
Different applications need different amounts of bandwidth.
For example:
Video conferencing needs far more bandwidth than sending an e-mail.
Integrated Services and Differentiated Service
These two models are designed to provide Quality of Service (QoS) in the network.
i) Scalability
In Integrated Services, every router must keep state for each flow. This becomes
impractical as the network grows.
1. Scalability
Differentiated Services achieves scalability by moving the main processing from the core
to the edge of the network. Routers do not need to store per-flow information; instead,
the applications (or the hosts) define the type of service they want each time they send
packets.
RSVP is a signaling protocol that helps IP create a flow and make a resource
reservation.
It is an independent protocol and can also be used in other models.
RSVP supports multicasting (one-to-many or many-to-many distribution), where
data can be sent to a group of destination computers simultaneously.
For example: IP multicast is a technique for one-to-many communication
through an IP infrastructure in the network.
RSVP can also be used for unicasting (transmitting data to a single destination)
to provide resource reservation for all types of traffic.
The two important types of RSVP messages are:
1. Path messages:
The receivers in a flow make the reservation in RSVP, but the receivers do not
know the path traveled by the packets before the reservation is made. The path
is required for the reservation. To solve this problem, RSVP uses Path messages.
A Path message travels from the sender and reaches all receivers by
multicasting, and along the way it stores the information the receivers need.
2. Resv messages:
After receiving a Path message, the receiver sends a Resv message. The Resv
message travels back toward the sender and makes a resource reservation on the
routers that support RSVP.
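A toy sketch of the Path/Resv exchange over a chain of routers (the classes and the state kept are illustrative simplifications, not the real protocol encoding):

```python
class Router:
    def __init__(self, name, rsvp_capable=True):
        self.name = name
        self.rsvp_capable = rsvp_capable
        self.path_state = {}    # flow -> previous hop (learned from Path messages)
        self.reservations = {}  # flow -> reserved capacity (from Resv messages)

route = [Router("R1"), Router("R2"), Router("R3")]

def send_path(flow, route):
    """Path message travels sender -> receivers, recording the reverse path."""
    prev_hop = "sender"
    for router in route:
        router.path_state[flow] = prev_hop
        prev_hop = router.name

def send_resv(flow, capacity, route):
    """Resv message travels receiver -> sender along the recorded path,
    reserving resources on each RSVP-capable router."""
    for router in reversed(route):
        if router.rsvp_capable:
            router.reservations[flow] = capacity

send_path("flow1", route)
send_resv("flow1", capacity=1.5, route=route)
print({r.name: r.reservations for r in route})
```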