
KIBABII UNIVERSITY

NAME: MAKOKHA TUNAI AND SAMUEL SIMIYU

COURSE: COMPUTER NETWORKS

COURSE CODE: MIT 814

LECTURER: DR. BARASA

GROUP ONE
Table of Contents
Quality of Service (QoS)
Fundamentals of congestion control and traffic management
Congestion Control
SDN (Software Defined Networking)
Network Function Virtualization (NFV)
Benefits
Integrated and Differentiated Services
Integrated Services
Differentiated Services
Protocol Support for QoS
Resource Reservation Protocol (RSVP)
Multiprotocol Label Switching (MPLS)
TCP Congestion Control
Quality of Service (QoS)

Quality of Service (QoS) refers to any technology that manages data traffic to reduce packet loss, latency, and jitter on a network. QoS controls and manages network resources by setting priorities for specific types of data on the network.

Packet loss occurs when one or more packets fail to reach their intended destination.

Latency is an expression of how much time it takes for a data packet to travel from one
designated point to another.

Jitter is the variation in the time between data packets arriving, caused by network congestion or route changes. The longer data packets take to arrive, the more jitter affects audio quality. The standard jitter measurement is in milliseconds (ms).

Techniques to improve quality of service

Several techniques can be used to improve the quality of service. They include scheduling, traffic shaping, admission control, and resource reservation.

1. Scheduling

There are three common scheduling techniques: FIFO queuing, priority queuing, and weighted fair queuing.

a. FIFO queuing – Here packets wait in a buffer (queue) until the node (router or
switch) is ready to process them. If the average arrival rate is higher than the
average processing rate, the queue fills up and new packets are discarded. A
FIFO queue is familiar to anyone who has had to wait for a bus at a bus stop.
b. Priority queuing – Here packets are first assigned to a priority class. The packets
in the highest-priority queue are processed first; packets in the lowest-priority
queue are processed last. A priority queue can provide better QoS than a FIFO
queue because higher-priority traffic, such as multimedia, can reach the destination
with less delay. However, there is a potential drawback: if there is a continuous
flow in a high-priority queue, the packets in the lower-priority queues will never
get a chance to be processed, a condition called starvation.

c. Weighted fair queuing – A better scheduling method is weighted fair queuing. In
this technique, the packets are still assigned to different classes and admitted to
different queues. The queues, however, are weighted based on the priority of the
queues; higher priority means a higher weight. A sketch of the idea appears below.
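A minimal sketch in Python of weighted round-robin, a simple approximation of weighted fair queuing: in each round, queue i may transmit up to weights[i] packets, so a higher weight buys a larger share of the link. The queue contents and weights below are made up for illustration.

from collections import deque

def weighted_round_robin(queues, weights):
    # Serve the queues in rounds; queue i may send up to weights[i]
    # packets per round, so a higher weight buys a larger share.
    while any(queues):
        for q, w in zip(queues, weights):
            for _ in range(w):
                if not q:
                    break
                yield q.popleft()

# Example: class A (weight 3) gets three times class B's share.
a = deque(["A1", "A2", "A3", "A4"])
b = deque(["B1", "B2"])
print(list(weighted_round_robin([a, b], [3, 1])))
# ['A1', 'A2', 'A3', 'B1', 'A4', 'B2']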

2. Traffic Shaping – Traffic shaping is a mechanism to control the amount and the rate of
the traffic sent into the network. Two techniques can shape traffic: leaky bucket and token bucket.

a. Leaky bucket - If a bucket has a small hole at the bottom, the water leaks from
the bucket at a constant rate as long as there is water in the bucket. The rate at
which the water leaks does not depend on the rate at which the water is input to
the bucket unless the bucket is empty. The input rate can vary, but the output rate
remains constant. Similarly, in networking, a technique called leaky bucket can
smooth out bursty traffic. Bursty chunks are stored in the bucket and sent out at an
average rate. A leaky bucket algorithm shapes bursty traffic into fixed-rate traffic
by averaging the data rate. It may drop the packets if the bucket is full.

b. Token bucket - The leaky bucket is very restrictive: it does not credit an idle
host. For example, if a host is not sending for a while, its bucket becomes empty.
Now if the host has bursty data, the leaky bucket allows only an average rate; the
time when the host was idle is not taken into account. The token bucket
algorithm, on the other hand, allows idle hosts to accumulate credit for the future
in the form of tokens. For each tick of the clock, the system adds n tokens to the
bucket, and it removes one token for every cell (or byte) of data sent. For
example, if n is 100 and the host is idle for 100 ticks, the bucket collects 10,000
tokens. The token bucket can easily be implemented with a counter. The counter
is initialized to zero; each time a token is added, the counter is incremented by 1,
and each time a unit of data is sent, the counter is decremented by 1. When the
counter is zero, the host cannot send data. The token bucket thus allows bursty
traffic at a regulated maximum rate. A sketch contrasting both shapers appears below.
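To make the contrast concrete, here is a minimal tick-based sketch in Python of both shapers; the parameter values are made up for illustration. The leaky bucket drains at a constant rate regardless of idle time, while the token bucket lets an idle host accumulate credit and burst later.

def leaky_bucket(arrivals, capacity, rate):
    # Bursty chunks are stored in the bucket (up to capacity) and
    # drained at a constant rate; overflow is dropped.
    level, sent_log = 0, []
    for a in arrivals:
        accepted = min(a, capacity - level)   # a - accepted is dropped
        level += accepted
        sent = min(level, rate)               # constant output rate
        level -= sent
        sent_log.append(sent)
    return sent_log

def token_bucket(arrivals, n, bucket_size):
    # The counter implementation from the text: n tokens are added per
    # tick (capped at bucket_size); sending one unit costs one token.
    tokens, sent_log = 0, []
    for a in arrivals:
        tokens = min(tokens + n, bucket_size)
        sent = min(a, tokens)                 # idle ticks build up credit
        tokens -= sent
        sent_log.append(sent)
    return sent_log

burst = [0, 0, 0, 9, 0, 0]                    # host idle, then a 9-unit burst
print(leaky_bucket(burst, capacity=10, rate=3))   # [0, 0, 0, 3, 3, 3]
print(token_bucket(burst, n=3, bucket_size=10))   # [0, 0, 0, 9, 0, 0]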

3. Resource Reservation - A flow of data needs resources such as a buffer, bandwidth,
CPU time, and so on. The quality of service is improved if these resources are reserved
beforehand. One QoS model discussed later in this document, Integrated Services,
depends heavily on resource reservation to improve the quality of service.

4. Admission Control - Admission control refers to the mechanism used by a router or a
switch to accept or reject a flow based on predefined parameters called flow
specifications. Before a router accepts a flow for processing, it checks the flow
specifications to see whether its capacity (in terms of bandwidth, buffer size, CPU speed,
etc.) and its previous commitments to other flows can handle the new flow. A sketch of
such a check appears below.
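A minimal sketch in Python of such an admission check, assuming a hypothetical flow specification with just two resources (bandwidth and buffer space); all values are made up.

from dataclasses import dataclass

@dataclass
class FlowSpec:
    bandwidth: float   # bits per second requested
    buffer: int        # bytes of queue space requested

def admit(new_flow, reserved, capacity):
    # Accept the flow only if existing commitments plus the new
    # request stay within capacity for every resource.
    return (reserved.bandwidth + new_flow.bandwidth <= capacity.bandwidth
            and reserved.buffer + new_flow.buffer <= capacity.buffer)

capacity = FlowSpec(bandwidth=100e6, buffer=1_000_000)
reserved = FlowSpec(bandwidth=80e6, buffer=400_000)
print(admit(FlowSpec(10e6, 100_000), reserved, capacity))  # True
print(admit(FlowSpec(30e6, 100_000), reserved, capacity))  # False: bandwidth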

Fundamentals of congestion control and traffic management

The objective of traffic management is to ensure that each connection gets the quality of service
it was promised.

Congestion

Congestion in a network may occur if the load on the network (number of packets sent to the
network) is greater than the capacity of the network (number of packets a network can handle).

Causes of Congestion

 Congestion occurs when a router receives data faster than it can send it: insufficient
bandwidth, slow hosts, or data from several input lines arriving simultaneously and
needing the same outgoing line.

 The system is not balanced; correcting the problem at one router will probably just move
it to another router.

Congestion causes more congestion

 Senders that are trying to transmit to a congested destination also become congested; they
must continually resend packets that have been dropped or have timed out, and they
must continue to hold unacknowledged messages in memory.
 Queues have a finite size; overflowing queues cause packets to be dropped, long
queueing delays cause packets to be resent, and dropped packets cause packets to
be resent.

Congestion Control

Congestion control refers to techniques and mechanisms that can either prevent congestion
before it happens or remove congestion after it has happened.

In general, congestion control mechanisms are divided into two broad categories: open-loop
(prevention) and closed-loop (removal) congestion control.

1. Open-Loop Congestion Control

In open-loop congestion control, policies are applied to prevent congestion before it happens. In
these mechanisms, congestion control is handled by either the source or the destination.

a. Retransmission Policy - Retransmission is sometimes unavoidable. If the sender believes
that a sent packet is lost or corrupted, the packet needs to be retransmitted.
Retransmission in general may increase congestion in the network. However, a good
retransmission policy can prevent congestion. The retransmission policy and the
retransmission timers must be designed to optimize efficiency and at the same time
prevent congestion.
b. Acknowledgment Policy - The acknowledgment policy imposed by the receiver may also
affect congestion. If the receiver does not acknowledge every packet it receives, it may
slow down the sender and help prevent congestion. Several approaches are used in this
case: a receiver may send an acknowledgment only if it has a packet to send or a
special timer expires, or it may decide to acknowledge only N packets at a time.
Acknowledgments are also part of the load in a network, so sending fewer
acknowledgments imposes a lighter load on the network.

c. Discarding Policy - A good discarding policy by the routers may prevent congestion and
at the same time may not harm the integrity of the transmission. For example, in audio
transmission, if the policy is to discard less sensitive packets when congestion is likely to
happen, the quality of sound is still preserved and congestion is prevented or alleviated.

d. Admission Policy - An admission policy, which is a quality-of-service mechanism, can
also prevent congestion in virtual-circuit networks. Switches in a flow first check the
resource requirement of a flow before admitting it to the network. A router can deny
establishing a virtual-circuit connection if there is congestion in the network or if there is
a possibility of future congestion.

e. Window Policy - The type of window at the sender may also affect congestion. The
Selective Repeat window is better than the Go-Back-N window for congestion control.
In the Go-Back-N window, when the timer for a packet times out, several packets may be
resent, although some may have arrived safe and sound at the receiver. This duplication
may make the congestion worse. The Selective Repeat window, on the other hand,
resends only the specific packets that have been lost or corrupted.

2. Closed loop solutions

Closed-loop congestion control mechanisms try to alleviate congestion after it happens. Several
mechanisms have been used by different protocols.

Warning bit / Backpressure

A special bit in the packet header is set by the router to warn the source when congestion is
detected. The bit is copied and piggybacked on the ACK sent back to the sender. The sender
monitors the number of ACK packets it receives with the warning bit set and adjusts its
transmission rate accordingly.
Choke Packets

A more direct way of telling the source to slow down. A choke packet is a control packet
generated at a congested node and transmitted to restrict traffic flow. On receiving the choke
packet, the source must reduce its transmission rate by a certain percentage.

Implicit Signaling

In implicit signaling, there is no communication between the congested node or nodes and the
source. The source guesses that there is congestion somewhere in the network from other
symptoms. For example, when a source sends several packets and there is no acknowledgment
for a while, one assumption is that the network is congested. The delay in receiving an
acknowledgment is interpreted as congestion in the network; the source should slow down.

Explicit Signaling

The node that experiences congestion can explicitly send a signal to the source or destination.
The explicit signaling method, however, is different from the choke packet method. In the choke
packet method, a separate packet is used for this purpose; in the explicit signaling method, the
signal is included in the packets that carry data. Explicit signaling can occur in either the
forward or the backward direction.

i. Backward Signaling- A bit can be set in a packet moving in the direction opposite to the
congestion. This bit can warn the source that there is congestion and that it needs to slow down
to avoid the discarding of packets.

ii. Forward Signaling- A bit can be set in a packet moving in the direction of the congestion.
This bit can warn the destination that there is congestion. The receiver in this case can use
policies, such as slowing down the acknowledgments, to alleviate the congestion.

Categories of explicit signaling:

 Binary – a bit set in a packet indicates congestion.
 Credit-based – indicates how many packets the source may send; common for end-to-end
flow control.
 Rate-based – supplies an explicit data-rate limit.

SDN (Software Defined Networking)

It is an emerging networking technology that greatly simplifies network management tasks by
allowing the network to be centrally controlled. It also opens the door for network innovation
through a programmable, flexible interface controlling the behavior of the entire network. By
contrast, traditional IP networks are hard to manage, error-prone, and difficult to extend with
new functionality.
SDN Architecture

The SDN architecture comprises three layers: application, control, and infrastructure.

Application layer – Contains the typical network applications that organizations use, e.g.,
intrusion detection systems, load balancers, or firewalls.

Control layer – Represents the centralized SDN controller, which acts as the brain of the SDN
network. This controller resides on a server and manages policies and traffic flow throughout the
network.

Infrastructure layer – Made up of the physical switches in the network. A sketch of how the
controller's rules drive switch forwarding appears below.
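A minimal sketch in Python of the controller/switch split, using a toy match-action table in the spirit of OpenFlow rules; the field names and actions are illustrative, not the OpenFlow wire format. The controller installs the rules; the switch merely matches and acts.

# A toy match-action flow table installed by the controller.
flow_table = [
    ({"dst_port": 22},       "drop"),        # block SSH traffic
    ({"dst_ip": "10.0.0.5"}, "out:2"),       # forward to switch port 2
    ({},                     "controller"),  # table miss: ask the controller
]

def forward(packet):
    # The switch applies the first rule whose fields all match.
    for match, action in flow_table:
        if all(packet.get(k) == v for k, v in match.items()):
            return action

print(forward({"dst_ip": "10.0.0.5", "dst_port": 80}))  # out:2
print(forward({"dst_ip": "10.0.0.9", "dst_port": 22}))  # drop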

Benefits of SDN

 A system administrator can change any network switch's rules when necessary –
prioritizing, deprioritizing, or even blocking specific packets for security purposes.
 Centralized network provisioning

 Lower operating costs - SDN also virtualizes hardware and services that were previously
carried out by dedicated hardware.

 More granular Security - SDN Controller provides a central point of control to distribute
security and policy information consistently throughout the enterprise
Challenges with SDN

 The centralized SDN controller presents a single point of failure and an attractive target
for attackers.
 Migration to SDN – Aside from the technical and cost implications involved, there is also
the challenge of convincing users to switch from legacy networks to SDN, especially if
those networks are performing optimally.
 Controller design/performance – The SDN control plane may have multiple controllers
depending on the network topology design; if not properly designed, there could be
failures in the network.
 Integration of SDN with legacy networks

Network Function Virtualization (NFV)

NFV allows network operators to manage and expand their network capabilities on demand
using virtual, software-based applications where physical boxes once stood in the network
architecture. NFV technology offers the ability to virtualize a network service over the
cloud in its entirety.

This makes it easier to load-balance, scale up and down, and move functions across distributed
hardware resources. With continual updates, operators can keep things running on the latest
software without interruption to their customers. NFV uses virtualized networking components
to support an infrastructure independent of dedicated hardware.

Firewalls, traffic control, and virtual routing are three of the most common virtual network
functions (VNFs); VNFs can also replace dedicated load balancers and routers. This framework
not only makes it straightforward to deploy software across network locations but also reduces
the need for proprietary hardware appliances.

Benefits

Benefits of NFV to network operators include:

 Reduced costs in purchasing network equipment via migration to software on standard
servers
 Efficiencies in space, power, and cooling

 Faster time to deployment

 Flexibility – elastic scale up and scale down of capacity

 Access to a broad independent software community, including open source

Differences between SDN and NFV

 SDN operates in a campus, data center, and/or cloud environment, while NFV targets the
service provider network.
 SDN commonly uses OpenFlow as a communication protocol, while NFV does not define
a specific protocol.
 SDN reduces the cost of the network because there is no need for expensive switches and
routers, while NFV increases scalability and agility and speeds up time-to-market, as it
dynamically allocates hardware capacity to the network functions that need it at a
particular time.

Integrated and Differentiated Services

Integrated Services

Integrated services refer to an architecture that ensures Quality of Service (QoS) on a
network; for example, these services allow the receiver to watch and listen to video and sound
without any interruption. Each router in the network implements integrated services, and
every application that requires some kind of guarantee has to make an individual
reservation.

Two key features of integrated services:

 Reserved resources: the router must know the amount of its resources (link capacity,
router buffers) currently reserved for ongoing sessions.
 Call setup: a session requiring QoS guarantees must first be able to reserve sufficient
resources at each network router on its source-to-destination path to ensure that its end-to-
end QoS requirement is met.
Call Setup Process
 Traffic characterization and specification of the desired QoS:
- Traffic specification (Tspec): characterizes/defines the traffic the sender will be
sending into the network.
- Resource specification (Rspec): characterizes/defines the QoS being requested by the
connection.
 Signaling for call setup: a session's Tspec and Rspec must be carried to the
intermediate routers, typically via RSVP.
 Per-element call admission: once a router receives the Tspec and Rspec for a session, it
determines whether or not it can admit the call.

Service Classes

The IntServ architecture defines two major service classes:

 Guaranteed QoS: provides firm (mathematically provable) bounds on the queuing delay
that a packet will experience in a router. The source's traffic characterization is given by a
leaky bucket, and the requested service is characterized by a transmission rate R bps.
 Controlled-load network service: provides a service closely approximating the QoS
that the same flow would receive from an unloaded network element. No quantitative
guarantees are made; hence the phrase "closely approximating" is deliberately non-quantifiable.

Differentiated Services

Differentiated services is an architecture for providing scalable and flexible service
differentiation, that is, the ability to handle different classes of traffic in different ways within
the Internet.
It is a multi-class model that can satisfy many requirements, i.e., it supports multiple
mission-critical applications.
Moreover, these services help to minimize the burden on network devices and also support
the scaling of the network. Some major differentiated-services functions are as follows.

 Traffic conditioning – Ensures that the traffic entering the Differentiated Services
domain conforms to its profile.
 Packet classification – Categorizes the packet within a specific group using the traffic
descriptor.
 Packet marking – Marks the packet with its class, e.g., in the DSCP field of the IP
header (see the sketch after this list).
 Congestion management – Achieves queuing and traffic scheduling.
 Congestion avoidance – Monitors traffic loads to minimize congestion; it involves packet
dropping.
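As an illustration of packet marking, a sender can request a DSCP value on its outgoing packets via the IP TOS byte. A minimal sketch in Python: DSCP 46 (Expedited Forwarding) is standard, but the destination address and port below are placeholders, and the IP_TOS socket option is available on Unix-like systems.

import socket

# DSCP 46 (Expedited Forwarding) occupies the upper six bits of the
# IP TOS byte, so the byte value is 46 << 2 == 0xB8.
EF = 46 << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF)
sock.sendto(b"voice payload", ("192.0.2.1", 5004))  # placeholder destination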

Differences between integrated services and differentiated services

The main distinction between integrated services and differentiated services is that integrated
services reserve resources in advance to achieve the desired quality of service, whereas
differentiated services mark packets with a class and send them into the network without any
prior reservation.
Other differences:
 Integrated services are not scalable, while differentiated services are scalable.
 Integrated services involve per-flow setup, while differentiated services involve long-term
setup.
 Integrated services have an end-to-end service scope, while differentiated services have a
per-domain service scope.

Protocol Support for QoS

Resource Reservation Protocol (RSVP)


RSVP provides QoS by reserving resources on the network. It is used by routers to deliver
quality-of-service (QoS) requests to all nodes along the data flow path(s) and to establish and
maintain state for the requested service. RSVP requests generally result in resource reservations
in each node along the data path. RSVP has the following attributes:
 RSVP is a receiver oriented signaling protocol. The receiver initiates and maintains
resource reservation.
 It is used both for unicasting (sending data from one source to one destination) and
multicasting (sending data simultaneously to a group of destination computers).
 RSVP supports dynamic, automatic adaptation to changes in the network.
 It provides a number of reservation styles. It also provides support for addition of future
styles.
 Supports both IPv4 and IPv6 packets that can be sent over RSVP-signaled LSPs.

Multiprotocol Label Switching (MPLS)


It is a routing technique in telecommunications networks that directs data from one node to the
next based on short path labels rather than long network addresses, thus avoiding complex
lookups in a routing table and speeding traffic flows. The labels identify virtual links (paths)
between distant nodes rather than endpoints.
MPLS can encapsulate packets of various network protocols, hence the "multiprotocol" in
its name.
MPLS supports a range of access technologies, including; Asynchronous Transfer Mode
(ATM), Frame Relay, and Digital subscriber line (DSL).

Role and Functioning


i. MPLS is scalable and protocol-independent. In an MPLS network, data packets are
assigned labels. Packet-forwarding decisions are made solely on the contents of this label,
without the need to examine the packet itself. This allows one to create end-to-end
circuits across any type of transport medium, using any protocol. The primary benefit is
to eliminate dependence on a particular OSI model data link layer (layer 2)
technology, such as Asynchronous Transfer Mode (ATM), Frame Relay, Synchronous
Optical Networking (SONET) or Ethernet, and eliminate the need for multiple layer-2
networks to satisfy different types of traffic.
ii. Multiprotocol label switching belongs to the family of packet-switched networks.
iii. MPLS operates at a layer that is generally considered to lie between traditional
definitions of OSI Layer 2 (data link layer) and Layer 3 (network layer), and thus is often
referred to as a layer 2.5 protocol.
iv. It was designed to provide a unified data-carrying service for both circuit-based clients
and packet-switching clients which provide a datagram service model.
v. It can be used to carry many different kinds of traffic, including IP packets, as well as
native ATM, SONET, and Ethernet frames.

A number of different technologies were previously deployed with essentially identical goals,
such as Frame Relay and ATM.

Frame Relay and ATM use "labels" to move frames or cells throughout a network. The header of
the Frame Relay frame and the ATM cell refers to the virtual circuit that the frame or cell resides
on.

The similarity between Frame Relay, ATM, and MPLS is that at each hop throughout the
network, the “label” value in the header is changed.

This is different from the forwarding of IP packets. MPLS technologies have evolved with the
strengths and weaknesses of ATM in mind.

MPLS is designed to have lower overhead than ATM while providing connection-oriented
services for variable-length frames, and has replaced much use of ATM in the market.
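A minimal sketch in Python of label swapping at a single label-switching router (LSR); the labels and interface names are made up for illustration. The point is that forwarding is one exact-match lookup on the incoming label, with the label rewritten at each hop, rather than a longest-prefix match on an IP address.

# A toy label-forwarding table for one LSR: the incoming label alone
# selects the outgoing interface and the new (swapped) label.
lfib = {
    17: ("eth1", 42),   # in-label 17 -> out on eth1 with label 42
    18: ("eth2", 99),
}

def switch(in_label, payload):
    # One exact-match lookup; no longest-prefix match on addresses.
    out_if, out_label = lfib[in_label]
    return out_if, (out_label, payload)

print(switch(17, "IP packet"))  # ('eth1', (42, 'IP packet'))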

TCP Congestion Control

Network congestion may occur when a sender floods the network with too many packets. At
the time of congestion, the network cannot handle this traffic properly, which results in
degraded quality of service (QoS). The typical symptoms of congestion are:
 excessive packet delay,
 packet loss, and
 retransmission.
The common causes of congestion are:
 insufficient link bandwidth,
 legacy network devices, and
 greedy network applications and poorly designed or configured network infrastructure.
Greedy network applications or services, such as file sharing or video streaming over
UDP, which lack TCP's flow and congestion control mechanisms, can contribute
significantly to congestion.
The function of TCP is to control the transfer of data so that it is reliable. The main TCP features
are:
 Connection management
Connection management includes connection initialization (a 3-way handshake) and
termination. The source and destination TCP ports are used for creating multiple virtual
connections.
 Reliability
Reliable point-to-point transfer between hosts is achieved with sequence numbers (used
for reordering segments) and retransmission. A TCP segment is retransmitted after a
timeout, when the acknowledgement (ACK) is not received by the sender, or when three
duplicate ACKs are received (called fast retransmission, because the sender does not
wait for the timeout to expire).
 Flow control
Flow control ensures that a sender does not overwhelm the receiving host. Inside its
ACK messages, the receiver informs the sender how much data the sender may send
without receiving a further ACK. This advertised amount is called the receive window
(rwnd), and its size is specified in bytes. Thanks to this sliding window, the receiving
host dynamically adjusts the amount of data that can be received from the sender.
 Congestion control
Congestion control ensures that the sender does not overflow the network. Compared
with flow control, which ensures that the source host does not overwhelm the destination
host, congestion control is more global: it ensures that the capacity of the routers along
the path is not exceeded.

TCP uses end-to-end congestion control, which addresses two fundamental questions:
 How do senders react to congestion when it occurs, as signified by dropped packets?
 How does the sender determine the available capacity for a flow at any point in time?

Techniques for TCP to control congestion:

 RTT (round-trip time) variance estimation
 Exponential RTO (retransmission timeout) backoff
 Slow start
 Congestion avoidance
 Fast recovery

A TCP sender limits its sending rate based on its perception of network congestion, i.e.,
 a loss event is taken to be an indication of congestion: a timeout is a sign of strong
congestion in the network, while the receipt of three duplicate ACKs is a sign of
light congestion.
TCP congestion control techniques prevent congestion or help mitigate it after it occurs. Unlike
the receive window (rwnd) used in the flow control mechanism and maintained by the receiver,
TCP uses the congestion window (cwnd) maintained by the sender. While rwnd is present in the
TCP header, cwnd is known only to the sender and is not sent over the links. Cwnd is maintained
for each TCP session and represents the maximum amount of data that can be sent into the
network without being acknowledged. Different variants of TCP use different approaches to
calculate cwnd, based on the amount of congestion on the link. For instance, the oldest TCP
variant, Old Tahoe, initially sets cwnd to one Maximum Segment Size (MSS). After each ACK
packet is received, the sender increases the cwnd size by one MSS (cwnd = cwnd + MSS), so
cwnd grows exponentially per RTT. This phase is known as "slow start" and lasts while the
cwnd value is less than the ssthresh value.
 A congestion window (cwnd) imposes a constraint on the sending rate:
LastByteSent − LastByteAcked ≤ min(cwnd, rwnd)
- cwnd is typically measured in units of the MSS (maximum segment size).
- By adjusting the value of cwnd, the sender can adjust its sending rate.
- TCP uses ACKs to trigger increases in cwnd (TCP's self-clocking behavior).
Strategy: TCP increases its rate in response to arriving ACKs until a loss event
occurs, at which point the rate is decreased (bandwidth probing).
(The congestion window (cwnd) is a TCP state variable that limits the amount of data TCP can
send into the network before receiving an ACK. The receive window (rwnd) is a variable that
advertises the amount of data that the destination side can receive.)
RTT and RTO
If reliable communication is required, a sender needs to retransmit packets when they are lost
due to congestion in the network. The general method is to number packets consecutively and
have the receiver acknowledge each of them; whenever an ACK (acknowledgement) is not
received within the expected time, the sender assumes that the packet is lost and retransmits it.
This method is called ARQ (Automatic Repeat reQuest). ARQ requires a timer that is initialized
to the RTO (retransmission timeout) value when a packet is sent. Finding the correct RTO value
is important for congestion control because packet loss is interpreted as a sign of congestion.
Since the time from sending a packet to receiving the corresponding ACK is an RTT, the ideal
RTO should be a function of the RTT. But the RTT depends on conditions inside the network
(queueing delay, path changes, etc.), so relying solely on the most recent measurement is too
simplistic; a prediction must be made using the history of RTT samples. As a common rule of
thumb, the RTT prediction should be conservative, i.e., overestimating the RTT causes less harm
than underestimating it. A sketch of a standard estimator follows.
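A minimal sketch in Python of the standard smoothed estimator in the style of RFC 6298 (derived from Jacobson's work): a moving average of the RTT plus a safety margin of four times its smoothed variation. The RTT samples below are made up.

ALPHA, BETA = 1/8, 1/4   # standard smoothing gains (RFC 6298)

def update_rto(srtt, rttvar, sample):
    # Fold one RTT measurement into the smoothed RTT and its variation,
    # then derive a deliberately conservative retransmission timeout.
    if srtt is None:                 # first measurement
        srtt, rttvar = sample, sample / 2
    else:
        rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - sample)
        srtt = (1 - ALPHA) * srtt + ALPHA * sample
    rto = srtt + 4 * rttvar          # overestimate rather than underestimate
    return srtt, rttvar, rto

srtt = rttvar = None
for sample in [100, 120, 90, 300]:   # RTT samples in milliseconds
    srtt, rttvar, rto = update_rto(srtt, rttvar, sample)
    print(f"sample={sample}  srtt={srtt:.1f}  rto={rto:.1f}")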

TCP Slow Start


TCP slow start is an algorithm which balances the speed of a network connection. Slow start
gradually increases the amount of data transmitted until it finds the network’s maximum carrying
capacity.
Slow start prevents a network from becoming congested by regulating the amount of data that’s
sent over it. It negotiates the connection between a sender and receiver by defining the amount of
data that can be transmitted with each packet, and slowly increases the amount of data until the
network’s capacity is reached. This ensures that as much data is transmitted as possible without
clogging the network.
When a connection begins, TCP enters a slow-start state, that is:
 the congestion window starts at one MSS and increases by one MSS each time an
ACK arrives; thus cwnd is doubled every RTT (exponential growth).
(Figure: slow start's exponential increase.)
When the slow-start threshold (ssthresh) is reached, TCP switches from the slow-start phase to
the congestion-avoidance phase. The cwnd is then changed according to the formula cwnd =
cwnd + MSS*(MSS/cwnd) after each received ACK packet. This makes the cwnd growth linear,
i.e., slower than during slow start. However, when the TCP sender detects packet loss (receipt of
duplicate ACKs, or the retransmission timeout when an ACK is not received), cwnd is decreased
to one MSS, the slow-start threshold is set to half of the current cwnd size, and TCP resumes the
slow-start phase. A simulation sketch follows.
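A minimal simulation sketch in Python of a Tahoe-style sender, counting cwnd in segments and reacting per ACK (a simplification of the per-RTT behavior described above); the event sequence is made up.

MSS = 1   # count cwnd in segments for simplicity

def tahoe_step(cwnd, ssthresh, event):
    # One reaction of a Tahoe-style sender: 'ack' for a new ACK,
    # 'loss' for a timeout.
    if event == "loss":
        ssthresh = max(cwnd // 2, 2 * MSS)   # halve the threshold
        cwnd = 1 * MSS                       # restart slow start
    elif cwnd < ssthresh:
        cwnd += MSS                          # slow start: +1 MSS per ACK
    else:
        cwnd += MSS * MSS / cwnd             # congestion avoidance: linear
    return cwnd, ssthresh

cwnd, ssthresh = 1, 8
for event in ["ack"] * 10 + ["loss"] + ["ack"] * 3:
    cwnd, ssthresh = tahoe_step(cwnd, ssthresh, event)
    print(event, round(cwnd, 2), ssthresh)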
TCP Tahoe
When a loss occurs, a fast retransmit is sent, half of the current cwnd is saved
as ssthresh, and slow start begins again from the initial cwnd. Once cwnd
reaches ssthresh, TCP changes to the congestion-avoidance algorithm, in which each new
ACK increases cwnd by MSS*(MSS/cwnd). This results in a linear increase of
cwnd.
TCP Reno
A fast retransmit is sent, and half of the current cwnd is saved both as ssthresh and as the
new cwnd, thus skipping slow start and going directly to the congestion-avoidance
algorithm. The overall algorithm here is called fast recovery.
Summary
TCP slow start ends if:
 the congestion window reaches or surpasses the slow-start threshold (ssthresh); at this
point TCP enters the congestion-avoidance state;
 there is a timeout; in this case cwnd is set to 1, ssthresh is set to cwnd/2, and the slow-
start process is performed again;
 three duplicate ACKs are detected; at this point fast retransmit is performed, and
ssthresh is set to cwnd/2 afterwards.
- Tahoe TCP always enters the slow-start state.
- Reno TCP (the most common version today) enters the fast-recovery state.
Congestion Avoidance
On entry to the congestion-avoidance state, cwnd is about half its value when congestion was
last encountered. More conservatively than in slow start, TCP increases cwnd by one MSS every
RTT; that is, cwnd is increased by MSS*(MSS/cwnd) for each ACK.
Fast Recovery

Fast Recovery is the most recent of these improvements to TCP. With fast retransmit alone, the
congestion window is dropped down to 1 each time network congestion is detected, so it
takes a long time to reach high link utilization again. Fast recovery alleviates this problem by
removing the slow-start phase: slow start is used only at the beginning of a connection and
whenever an RTO period expires.

The reason for not performing slow start after receiving 3 duplicate ACKs is that duplicate
ACKs tell the sending side more than that a packet has been lost. Since the receiving side can
create a duplicate ACK only when it receives an out-of-order packet, a duplicate ACK shows the
sending side that a later packet has left the network. Thus, the sending side does not need to
drastically decrease cwnd down to 1 and restart slow start; instead, the sender decreases cwnd to
one-half of the current cwnd and then increases cwnd by one MSS for each further duplicate
ACK it receives.
Summary

 Fast recovery is optional in TCP.
 When the third duplicate ACK arrives:
- set ssthresh = cwnd/2;
- retransmit the missing segment (fast retransmit);
- set cwnd = ssthresh + 3.
 Adding 3 to ssthresh accounts for the number of segments that have left the network and
that the other end has buffered.
A sketch of these adjustments appears below.
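A minimal sketch in Python of these Reno window adjustments, with cwnd measured in segments; the starting value is made up.

def enter_fast_recovery(cwnd, mss=1):
    # Third duplicate ACK: halve the window, retransmit the missing
    # segment, and inflate by the 3 segments known to have left the net.
    ssthresh = max(cwnd / 2, 2 * mss)
    cwnd = ssthresh + 3 * mss            # cwnd = ssthresh + 3
    return cwnd, ssthresh

def on_extra_dup_ack(cwnd, mss=1):
    return cwnd + mss                    # one more segment has left the net

def on_new_ack(ssthresh):
    return ssthresh                      # deflate; resume congestion avoidance

cwnd, ssthresh = enter_fast_recovery(cwnd=16)
print(cwnd, ssthresh)          # 11.0 8.0
print(on_extra_dup_ack(cwnd))  # 12.0
print(on_new_ack(ssthresh))    # 8.0, back to congestion avoidance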
