Congestion Control Notes

Issues in Resource Allocation

Taxonomy

• Router-centric versus Host-centric

  • In a router-centric design, each router takes responsibility for deciding when packets are forwarded and selecting which packets are to be dropped, as well as for informing the hosts that are generating the network traffic how many packets they are allowed to send.

  • In a host-centric design, the end hosts observe the network conditions (e.g., how many packets they are successfully getting through the network) and adjust their behavior accordingly.

  • Note that these two groups are not mutually exclusive.

• Reservation-based versus Feedback-based

  • In a reservation-based system, some entity (e.g., the end host) asks the network for a certain amount of capacity to be allocated for a flow.

    • Each router then allocates enough resources (buffers and/or a percentage of the link's bandwidth) to satisfy this request. If the request cannot be satisfied at some router, because doing so would overcommit its resources, then the router rejects the reservation.

  • In a feedback-based approach, the end hosts begin sending data without first reserving any capacity and then adjust their sending rate according to the feedback they receive.

This feedback can either be explicit (i.e., a congested router sends a “please slow down” message to the
host) or it can be implicit (i.e., the end host adjusts its sending rate according to the externally
observable behavior of the network, such as packet losses).
Queuing Disciplines

• The idea of FIFO queuing, also called first-come-first-served (FCFS) queuing, is simple:

  • The first packet that arrives at a router is the first packet to be transmitted.

  • Given that the amount of buffer space at each router is finite, if a packet arrives and the queue (buffer space) is full, then the router discards that packet.

  • This is done without regard to which flow the packet belongs to or how important the packet is. This is sometimes called tail drop, since packets that arrive at the tail end of the FIFO are dropped.

  • Note that tail drop and FIFO are two separable ideas. FIFO is a scheduling discipline—it determines the order in which packets are transmitted. Tail drop is a drop policy—it determines which packets get dropped.

(a) FIFO queuing; (b) tail drop at a FIFO queue.

A simple variation on basic FIFO queuing is priority queuing. The idea is to mark each packet with a
priority; the mark could be carried, for example, in the IP header.

The routers then implement multiple FIFO queues, one for each priority class. The router always
transmits packets out of the highest-priority queue if that queue is nonempty before moving on to the
next priority queue.
Within each priority, packets are still managed in a FIFO manner.
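
As a minimal sketch (with assumed names, not from the text), priority queuing with one FIFO per class might look like this in Python:

    from collections import deque

    class PriorityQueues:
        """Strict priority queuing: one FIFO queue per priority class."""

        def __init__(self, num_classes=4):
            # Queue 0 is the highest-priority class.
            self.queues = [deque() for _ in range(num_classes)]

        def enqueue(self, packet, priority):
            self.queues[priority].append(packet)

        def dequeue(self):
            # Always serve the highest-priority nonempty queue;
            # within a class, packets still leave in FIFO order.
            for q in self.queues:
                if q:
                    return q.popleft()
            return None  # all queues empty

For instance, after enqueue("A", 2) and enqueue("B", 0), dequeue() returns "B" first, because class 0 has the higher priority.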

• Fair Queuing

  • The main problem with FIFO queuing is that it does not discriminate between different traffic sources; that is, it does not separate packets according to the flow to which they belong.

  • Fair queuing (FQ) is an algorithm that has been proposed to address this problem. The idea of FQ is to maintain a separate queue for each flow currently being handled by the router. The router then services these queues in a sort of round-robin fashion.

Round-robin service of four flows at a router

  • The main complication with fair queuing is that the packets being processed at a router are not necessarily the same length.

  • To truly allocate the bandwidth of the outgoing link in a fair manner, it is necessary to take packet length into consideration.

    • For example, if a router is managing two flows, one with 1000-byte packets and the other with 500-byte packets (perhaps because of fragmentation upstream from this router), then a simple round-robin servicing of packets from each flow's queue will give the first flow two-thirds of the link's bandwidth and the second flow only one-third.

  • What we really want is bit-by-bit round-robin; that is, the router transmits a bit from flow 1, then a bit from flow 2, and so on.

  • Clearly, it is not feasible to interleave the bits from different packets.

  • The FQ mechanism therefore simulates this behavior by first determining when a given packet would finish being transmitted if it were being sent using bit-by-bit round-robin, and then using this finishing time to sequence the packets for transmission.

• To understand the algorithm for approximating bit-by-bit round-robin, consider the behavior of a single flow.

• For this flow, let

  • Pi denote the length of packet i

  • Si denote the time when the router starts to transmit packet i

  • Fi denote the time when the router finishes transmitting packet i

  • Clearly, Fi = Si + Pi

• When do we start transmitting packet i?

  • It depends on whether packet i arrived before or after the router finished transmitting packet i-1 for the flow.

  • Let Ai denote the time that packet i arrives at the router.

  • Then Si = max(Fi-1, Ai)

  • Fi = max(Fi-1, Ai) + Pi

• Now, for every flow, we calculate Fi for each packet that arrives using our formula.

• We then treat all the Fi as timestamps.

• The next packet to transmit is always the packet with the lowest timestamp: the packet that should finish transmission before all others.

Example of fair queuing in action: (a) packets with earlier finishing times are sent first;

(b) sending of a packet already in progress is completed.
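
A sketch of this bookkeeping in Python, following the formulas above (Si = max(Fi-1, Ai), Fi = Si + Pi). The class and names are illustrative; a real implementation advances a virtual clock rather than using raw arrival times:

    import heapq

    class FairQueue:
        def __init__(self):
            self.last_finish = {}  # F_{i-1} per flow (0 if no earlier packet)
            self.heap = []         # (F_i, seq, packet), ordered by timestamp
            self.seq = 0           # tie-breaker for equal timestamps

        def arrive(self, flow, packet, a_i, p_i):
            f_prev = self.last_finish.get(flow, 0)
            s_i = max(f_prev, a_i)   # S_i = max(F_{i-1}, A_i)
            f_i = s_i + p_i          # F_i = S_i + P_i
            self.last_finish[flow] = f_i
            heapq.heappush(self.heap, (f_i, self.seq, packet))
            self.seq += 1

        def next_packet(self):
            # The next packet to transmit is the one with the lowest timestamp.
            return heapq.heappop(self.heap)[2] if self.heap else None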


Congestion Control Mechanism

• It is important to understand that TCP's strategy is to control congestion once it happens, as opposed to trying to avoid congestion in the first place.

• In fact, TCP repeatedly increases the load it imposes on the network in an effort to find the point at which congestion occurs, and then it backs off from this point.

• An appealing alternative, but one that has not yet been widely adopted, is to predict when congestion is about to happen and then to reduce the rate at which hosts send data just before packets start being discarded.

• We call such a strategy congestion avoidance, to distinguish it from congestion control.

• For congestion control in TCP and the AIMD mechanism, refer to TCP/IP Protocol Suite by Forouzan.

Congestion occurs when the number of packets sent into the network exceeds the carrying capacity of the network.

Congestion control refers to techniques and mechanisms that can either prevent
congestion, before it happens, or remove congestion, after it has happened. In
general, we can divide congestion control mechanisms into two broad categories:
open-loop congestion control (prevention) and closed-loop congestion control
(removal).

TCP Congestion window

• Each TCP sender maintains a congestion window

– Max number of bytes to have in transit (not yet ACK’d)


• Adapting the congestion window

– Decrease upon losing a packet: backing off

– Increase upon success: optimistically exploring

– Always struggling to find right transfer rate

• Tradeoff

  – Pro: avoids needing explicit network feedback

  – Con: continually under- and over-shoots the "right" rate

AIMD

• How much to adapt?

– Additive increase: On success of last window of data, increase window by 1 Max Segment Size (MSS)

– Multiplicative decrease: On loss of packet, divide congestion window in half

• Much quicker to slow than speed up!

– Over-sized windows (causing loss) are much worse than under-sized windows
(causing lower throughput)

– AIMD: A necessary condition for stability of TCP
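
A minimal sketch of these two rules in Python, with the window tracked in bytes (the MSS value is an assumption for illustration, not from the text):

    MSS = 1460  # bytes; a typical value, assumed here

    def on_window_success(cwnd):
        # Additive increase: grow by one MSS per window of acknowledged data.
        return cwnd + MSS

    def on_packet_loss(cwnd):
        # Multiplicative decrease: halve the window, never below one MSS.
        return max(cwnd // 2, MSS)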

Slow Start Exponential Increase


In the slow-start algorithm, the size of the congestion window increases
exponentially until it reaches a threshold

Receiver Window vs. Congestion Window


• Flow control

– Keep a fast sender from overwhelming a slow receiver

• Congestion control

– Keep a set of senders from overloading the network

• Different concepts, but similar mechanisms

– TCP flow control: receiver window

– TCP congestion control: congestion window

– Sender TCP window =

min { congestion window, receiver window }

Start slow (a small CWND) to avoid overloading the network

• Start with a small congestion window

– Initially, CWND is 1 MSS


– So, initial sending rate is MSS / RTT

• Could be pretty wasteful

– Might be much less than actual bandwidth

– Linear increase takes a long time to accelerate

• Slow-start phase (really “fast start”)

– Sender starts at a slow rate (hence the name)

– … but increases rate exponentially until the first loss

• Double CWND per round-trip time
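
The doubling falls out of a per-ACK rule: if CWND grows by one MSS for every ACK received, and a full window's worth of ACKs arrives each RTT, the window doubles per RTT. A minimal sketch, assuming CWND and the slow-start threshold (ssthresh) are tracked in bytes:

    MSS = 1460  # bytes, a common assumption

    def on_ack(cwnd, ssthresh):
        if cwnd < ssthresh:
            # Slow start: +1 MSS per ACK, so CWND doubles every RTT.
            return cwnd + MSS
        # Congestion avoidance: roughly +1 MSS per RTT (linear growth).
        return cwnd + MSS * MSS // cwnd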

• An implementation reacts to congestion detection in one of the following ways:

  ❏ If detection is by time-out, a new slow start phase starts.

  ❏ If detection is by three duplicate ACKs, a new congestion avoidance phase starts.

• Timeout

– Packet n is lost and detected via a timeout

• When? n is last packet in window, or all packets in flight lost

– After timeout, blasting entire CWND would cause another burst

– Better to start over with a low CWND

• Triple duplicate ACK

  – Packet n is lost, but packets n+1, n+2, etc. arrive

    • How detected? Multiple duplicate ACKs showing the receiver is still waiting for packet n

    • When? Later packets after n are received

  – After a triple duplicate ACK, the sender quickly resends packet n

  – Do a multiplicative decrease and keep going
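
A sketch combining the two reactions above, in the Reno style the notes describe (window values in bytes; the 2-MSS floor on ssthresh is a common convention, assumed here):

    MSS = 1460  # bytes

    def on_timeout(cwnd):
        # Severe signal: remember half the old window, restart slow start.
        ssthresh = max(cwnd // 2, 2 * MSS)
        return MSS, ssthresh          # (new cwnd, new ssthresh)

    def on_triple_dup_ack(cwnd):
        # Milder signal: resend packet n, halve the window, and keep going
        # in congestion avoidance rather than restarting slow start.
        ssthresh = max(cwnd // 2, 2 * MSS)
        return ssthresh, ssthresh     # (new cwnd, new ssthresh)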

• DECbit

  • The first mechanism was developed for use on the Digital Network Architecture (DNA), a connectionless network with a connection-oriented transport protocol.

  • This mechanism could, therefore, also be applied to TCP and IP.

  • As noted above, the idea here is to more evenly split the responsibility for congestion control between the routers and the end nodes.

  • Each router monitors the load it is experiencing and explicitly notifies the end nodes when congestion is about to occur.

  • This notification is implemented by setting a binary congestion bit in the packets that flow through the router; hence the name DECbit.

  • The destination host then copies this congestion bit into the ACK it sends back to the source.

  • Finally, the source adjusts its sending rate so as to avoid congestion.


• A single congestion bit is added to the packet header. A router sets this bit in a packet if its average queue length is greater than or equal to 1 at the time the packet arrives.

• This average queue length is measured over a time interval that spans the last busy+idle cycle, plus the current busy cycle.

• Essentially, the router calculates the area under the queue-length curve and divides this value by the time interval to compute the average queue length.

• Using a queue length of 1 as the trigger for setting the congestion bit is a trade-off between significant queuing (and hence higher throughput) and increased idle time (and hence lower delay).

In other words, a queue length of 1 seems to optimize the power function.

• The source records how many of its packets resulted in some router setting the congestion bit.

• In particular, the source maintains a congestion window, just as in TCP, and watches to see what fraction of the last window's worth of packets resulted in the bit being set.

• If less than 50 percent of those packets had the bit set, the source increases its congestion window by one packet; if 50 percent or more had the bit set, it decreases its congestion window to 0.875 times its previous value.

The value 50% was chosen as the threshold based on analysis that showed it to correspond to the peak of the power curve. The "increase by 1, decrease by 0.875" rule was selected because additive increase/multiplicative decrease makes the mechanism stable.
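
A sketch of this source-side policy in Python, with the window counted in packets; the function name is illustrative, while the 50% threshold and 0.875 factor come from the text:

    def decbit_adjust(window, bits_set, window_size):
        """Adjust the congestion window after a window's worth of ACKs."""
        if bits_set / window_size < 0.5:
            return window + 1              # additive increase: one packet
        return max(1, window * 0.875)      # multiplicative decrease to 0.875x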

• Random Early Detection (RED)

  • A second mechanism, called random early detection (RED), is similar to the DECbit scheme in that each router is programmed to monitor its own queue length, and when it detects that congestion is imminent, to notify the source to adjust its congestion window. RED, invented by Sally Floyd and Van Jacobson in the early 1990s, differs from the DECbit scheme in two major ways:

  • The first is that rather than explicitly sending a congestion notification message to the source, RED is most commonly implemented such that it implicitly notifies the source of congestion by dropping one of its packets.

  • The source is, therefore, effectively notified by the subsequent timeout or duplicate ACK.

  • RED is designed to be used in conjunction with TCP, which currently detects congestion by means of timeouts (or some other means of detecting packet loss, such as duplicate ACKs).

  • As the "early" part of the RED acronym suggests, the gateway drops the packet earlier than it would have to, so as to notify the source that it should decrease its congestion window sooner than it would normally have to.

  • In other words, the router drops a few packets before it has exhausted its buffer space completely, so as to cause the source to slow down, with the hope that this will mean it does not have to drop lots of packets later on.

  • The second difference between RED and DECbit is in the details of how RED decides when to drop a packet and which packet it decides to drop.

  • To understand the basic idea, consider a simple FIFO queue. Rather than wait for the queue to become completely full and then be forced to drop each arriving packet, we could decide to drop each arriving packet with some drop probability whenever the queue length exceeds some drop level.

  • This idea is called early random drop. The RED algorithm defines the details of how to monitor the queue length and when to drop a packet.

• First, RED computes an average queue length using a weighted running average similar to the one used in the original TCP timeout computation. That is, AvgLen is computed as

  AvgLen = (1 − Weight) × AvgLen + Weight × SampleLen

  where 0 < Weight < 1 and SampleLen is the length of the queue when a sample measurement is made.

• In most software implementations, the queue length is measured every time a new packet arrives at the gateway.

• In hardware, it might be calculated at some fixed sampling interval.

• Second, RED has two queue length thresholds that trigger certain activity: MinThreshold and MaxThreshold.

• When a packet arrives at the gateway, RED compares the current AvgLen with these two thresholds, according to the following rules:

  if AvgLen ≤ MinThreshold
      → queue the packet

  if MinThreshold < AvgLen < MaxThreshold
      → calculate probability P
      → drop the arriving packet with probability P

  if MaxThreshold ≤ AvgLen
      → drop the arriving packet

• P is a function of both AvgLen and how long it has been since the last packet was dropped.

• Specifically, it is computed as follows:

  TempP = MaxP × (AvgLen − MinThreshold) / (MaxThreshold − MinThreshold)

  P = TempP / (1 − count × TempP)
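
A sketch of the whole arrival-time decision in Python. The parameter values are illustrative (real deployments tune Weight, the thresholds, and MaxP), and the division for P is guarded so the probability stays in [0, 1]:

    import random

    WEIGHT, MIN_TH, MAX_TH, MAX_P = 0.002, 5, 15, 0.02  # illustrative values

    avg_len = 0.0
    count = 0  # packets queued since the last drop

    def on_arrival(sample_len):
        """Return 'queue' or 'drop' for an arriving packet."""
        global avg_len, count
        # Weighted running average: AvgLen = (1-Weight)*AvgLen + Weight*SampleLen
        avg_len = (1 - WEIGHT) * avg_len + WEIGHT * sample_len
        if avg_len <= MIN_TH:
            count = 0
            return "queue"
        if avg_len < MAX_TH:
            count += 1
            temp_p = MAX_P * (avg_len - MIN_TH) / (MAX_TH - MIN_TH)
            denom = 1 - count * temp_p
            p = 1.0 if denom <= 0 else min(1.0, temp_p / denom)
            if random.random() < p:
                count = 0
                return "drop"
            return "queue"
        count = 0
        return "drop"  # AvgLen >= MaxThreshold: always drop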

An ISP is granted a block of addresses starting with 190.100.0.0/16. The ISP needs to distribute these
addresses to three groups of customers as follows:

1. The first group has 64 customers; each needs 256 addresses.

2. The second group has 128 customers; each needs 128 addresses.

3. The third group has 128 customers; each needs 64 addresses.

Design the subblocks and give the slash notation for each subblock. Find out how many addresses are
still available after these allocations.

Group 1

For this group, each customer needs 256 addresses. This means the suffix length is 8 (2^8 = 256). The prefix length is then 32 − 8 = 24. The addresses are:

01: 190.100.0.0/24 to 190.100.0.255/24

02: 190.100.1.0/24 to 190.100.1.255/24

…

64: 190.100.63.0/24 to 190.100.63.255/24

Total = 64 × 256 = 16,384

Group 2

For this group, each customer needs 128 addresses. This means the suffix length is 7 (2^7 = 128). The prefix length is then 32 − 7 = 25. The addresses are:

001: 190.100.64.0/25 to 190.100.64.127/25

002: 190.100.64.128/25 to 190.100.64.255/25

…

128: 190.100.127.128/25 to 190.100.127.255/25

Total = 128 × 128 = 16,384

Group 3

For this group, each customer needs 64 addresses. This means the suffix length is 6 (2^6 = 64). The prefix length is then 32 − 6 = 26. The addresses are:

001: 190.100.128.0/26 to 190.100.128.63/26

002: 190.100.128.64/26 to 190.100.128.127/26

…

128: 190.100.159.192/26 to 190.100.159.255/26

Total = 128 × 64 = 8,192

Number of granted addresses: 65,536

Number of allocated addresses: 40,960

Number of available addresses: 24,576
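
The arithmetic can be double-checked with Python's standard ipaddress module:

    import ipaddress

    block = ipaddress.ip_network("190.100.0.0/16")
    granted = block.num_addresses                    # 65,536
    allocated = 64 * 256 + 128 * 128 + 128 * 64      # 16,384 + 16,384 + 8,192
    print(granted, allocated, granted - allocated)   # 65536 40960 24576

    # The last subblock of group 3 ends exactly where the allocation stops:
    last = ipaddress.ip_network("190.100.159.192/26")
    print(last[-1])                                  # 190.100.159.255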


QoS (Quality of Service) is an overall performance measure of a computer network.

Important flow characteristics of the QoS are given below:

1. Reliability
If a packet gets lost or an acknowledgement is not received at the sender, the data must be retransmitted. This decreases reliability.
The importance of reliability differs according to the application.
For example:
E-mail and file transfer need more reliable transmission than audio conferencing.

2. Delay
The delay of a message from source to destination is a very important characteristic. However, different applications tolerate delay differently.
For example:
Audio conferencing cannot tolerate much delay (it needs the delay kept to a minimum), while delay in e-mail or file transfer matters far less.

3. Jitter
Jitter is the variation in packet delay.
If the difference between delays is large, it is called high jitter; if the difference is small, it is called low jitter.
Example:
Case 1: Three packets are sent at times 0, 1, 2 and received at 10, 11, 12. The delay is the same for every packet, which is acceptable for a telephone conversation.
Case 2: Three packets are sent at times 0, 1, 2 and received at 31, 34, 39, so the delay differs from packet to packet. This is not acceptable for a telephone conversation.
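
Computing the per-packet delays directly makes the difference plain:

    send_times = [0, 1, 2]
    case1_recv = [10, 11, 12]
    case2_recv = [31, 34, 39]
    print([r - s for s, r in zip(send_times, case1_recv)])  # [10, 10, 10]: low jitter
    print([r - s for s, r in zip(send_times, case2_recv)])  # [31, 33, 37]: high jitter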

4. Bandwidth
Different applications need different amounts of bandwidth.
For example:
Video conferencing needs far more bandwidth than sending an e-mail.
Integrated Services and Differentiated Services

These two models are designed to provide Quality of Service (QoS) in the network.

1. Integrated Services (IntServ)

Integrated Services is a flow-based QoS model designed for IP.

In Integrated Services, the user needs to create a flow in the network from source to destination and must inform all routers (every router in the system implements IntServ) of the resource requirement.

The following steps describe how Integrated Services works.

i) Resource Reservation Protocol (RSVP)

IP is a connectionless, datagram, packet-switching protocol. To implement a flow-based model, a signaling protocol runs over IP to provide the mechanism for making reservations (every application needs assurance that its reservation holds); this protocol is called RSVP.

ii) Flow Specification

While making a reservation, the source needs to define the flow specification. The flow specification has two parts:
a) Resource specification
It defines the resources that the flow needs to reserve, for example buffers and bandwidth.
b) Traffic specification
It defines the traffic characterization of the flow.

iii) Admit or Deny

After receiving the flow specification from an application, the router decides to admit or deny the service. The decision is based on the previous commitments of the router and the current availability of resources.
Classification of Services

The two classes of service defined by Integrated Services are:

a) Guaranteed Service Class

This service guarantees that packets arrive within a specific delivery time and are not discarded, provided the flow's traffic stays within the boundary of its traffic specification.
This type of service is designed for real-time traffic, which needs a guaranteed bound on end-to-end delay.
For example: Audio conferencing.

b) Controlled-Load Service Class

This type of service is designed for applications that can accept some delay but are sensitive to an overloaded network and to the possibility of losing packets.
For example: E-mail or file transfer.

Problems with Integrated Services

The two problems with Integrated Services are:

i) Scalability
In Integrated Services, each router must keep state for every flow. This is not always possible as the network grows.

ii) Service-Type Limitation

The Integrated Services model provides only two types of service: guaranteed and controlled-load.

2. Differentiated Services (DS or DiffServ)

• DS is a computer networking model designed to achieve scalability in managing network traffic.
• DS is a class-based QoS model specially designed for IP.
• DS was designed by the IETF (Internet Engineering Task Force) to handle the problems of Integrated Services.

The solutions to the problems of Integrated Services are explained below:

1. Scalability
The main processing is moved from the core of the network to its edge to achieve scalability. Routers do not need to store information about flows; instead, the applications (or the hosts) define the type of service they want each time they send packets.

2. Service-Type Limitation

Routers route packets on the basis of the class of service defined in the packet, not by flow. The classes are defined based on the requirements of the applications.

Resource Reservation Protocol (RSVP)

• RSVP is a signaling protocol that helps IP create a flow and make a resource reservation.
• It is an independent protocol and can also be used in other models.
• RSVP supports multicasting (one-to-many or many-to-many distribution), where data can be sent to a group of destination computers simultaneously.
  For example: IP multicast is a technique for one-to-many communication over an IP infrastructure in the network.
• RSVP can also be used for unicasting (transmitting data to a single destination) to provide resource reservation for all types of traffic.
The two important types of RSVP messages are:

1. Path messages:
• The receivers in a flow make the reservation in RSVP, but they do not know the path traveled by the packets before the reservation is made, and the path is required for the reservation. To solve this problem, RSVP uses Path messages.
• A Path message travels from the sender and reaches all receivers by multicasting, storing along the way the information the receivers need.

2. Resv messages:
After receiving a Path message, the receiver sends a Resv message. The Resv message travels toward the sender and makes a resource reservation on the routers that support RSVP.
