
CS301: Computer Networks

Assignment 04: Understanding TCP Congestion Control Algorithms using NS-3

Dr. Anand Baswade
Rajeev Goel 12241460
DSAI

Part 1: Study of TCP Congestion Control Algorithms

1. Algorithm Details

Along with TCP Cubic, I have chosen:

• TCP HighSpeed for loss-based
• TCP Vegas for delay-based
• TCP Veno for hybrid (loss + delay based)

TCP Cubic
Algorithm Details

• It adjusts the congestion window size based on a cubic function of time
  since the last congestion event, allowing for fast recovery while being
  TCP-friendly.
• At the start of a connection, TCP Cubic behaves similarly to other TCP
algorithms like TCP Reno. It uses the traditional slow start mechanism,
where the congestion window (Cwnd) grows exponentially until a packet
loss is detected or a threshold (called ssthresh, or slow start threshold) is
reached.
• When a loss event is detected (e.g., packet loss due to congestion), TCP
  Cubic records the value of the congestion window at the time of the loss,
  denoted as Wmax (maximum congestion window). Upon detecting packet loss,
  TCP Cubic performs a multiplicative decrease, where the Cwnd is reduced by
  a factor β (e.g., halved, β = 0.5):
  Cwnd = Wmax × β
• TCP Cubic's unique feature is its cubic window growth function. After the
  loss event, the congestion window grows following a cubic function over
  time:
  W(t) = C × (t − K)³ + Wmax
Here:
  – W(t) is the congestion window size at time t.
  – C is a scaling factor that determines the aggressiveness of window growth.
  – K is the time at which the window regrows back to Wmax (the size of the
    window before the last packet loss).
The cubic function is designed to grow slowly when the window is close
to Wmax and faster as it moves further away from Wmax , both after packet
loss and during recovery. This smoothes the window growth near Wmax
and allows for faster recovery when the network is underutilized.

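To make the growth rule concrete, the short standalone sketch below evaluates the cubic curve above for an example Wmax. β = 0.5 matches the decrease factor used above, while C = 0.4 and the helper name CubicWindow are illustrative assumptions rather than the exact ns-3 internals.

#include <cmath>
#include <cstdio>

// Sketch of CUBIC's window regrowth after a loss event, using
// W(t) = C * (t - K)^3 + Wmax from above. beta = 0.5 matches the
// multiplicative decrease in the text; C = 0.4 is an illustrative
// scaling constant (an assumption here).
double CubicWindow (double t, double wMax, double c = 0.4, double beta = 0.5)
{
  // K is chosen so the window regrows back to Wmax: with W(0) = Wmax * beta,
  // solving C * (0 - K)^3 + Wmax = Wmax * beta gives K = cbrt(Wmax * (1 - beta) / C).
  double k = std::cbrt (wMax * (1.0 - beta) / c);
  return c * std::pow (t - k, 3.0) + wMax;
}

int main ()
{
  double wMax = 100.0; // hypothetical window (in segments) at the last loss
  for (double t = 0.0; t <= 8.0; t += 1.0)
    std::printf ("t = %.1f s, W(t) = %.1f segments\n", t, CubicWindow (t, wMax));
  return 0;
}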

Suitable scenarios

• High Bandwidth-Delay Product Networks: it efficiently utilizes the
  available capacity.
• Packet Loss Resilience: its cubic window function maintains a slower
  growth after detecting congestion, allowing a smoother recovery and
  preventing sharp rate drops, making it suitable for networks where random
  packet losses are common.

Limiting scenarios


• Short, Bursty Flows: TCP Cubic's slower convergence limits performance
  with short, quick flows, as it doesn't reach optimal speeds in time.
• Applications Requiring Minimal Latency: for applications extremely
  sensitive to delay (e.g., real-time gaming), TCP Cubic's growth pattern
  might introduce unnecessary latency.

TCP HighSpeed
Algorithm Details

• Congestion Window (Cwnd): set to a larger initial value than traditional
  TCP (e.g., 4-10 MSS). ssthresh is typically set to a large value.
• During slow start, the congestion window (Cwnd) increases exponentially:
  Cwnd = Cwnd + N
  where N is the number of ACKs received in the current round-trip time
  (RTT). This phase continues until Cwnd ≥ ssthresh.
• Upon detecting packet loss (either through timeouts or receiving three
  duplicate ACKs), the algorithm reduces the congestion window:
  Cwnd_new = 0.875 × Cwnd_old
  It then enters a fast recovery state, sending new packets without returning
  to slow start immediately.
• TCP HighSpeed supports ECN to adjust its behavior without waiting for
  packet loss. When an ECN mark is received, it performs the following
  adjustment:
  Cwnd = Cwnd − MSS
  This proactive approach helps maintain throughput during congestion.
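The standalone sketch below mirrors the simplified description above (slow start, the 0.875 loss backoff and the ECN decrement); it is not HighSpeed's full response-function table, and the MSS value, initial window and function names are illustrative.

#include <cstdint>
#include <cstdio>

// Simplified HighSpeed-style window bookkeeping, per the description above.
// Values are in bytes; kMss and the initial values are assumptions.
struct HsState
{
  uint32_t cwnd;     // congestion window
  uint32_t ssthresh; // slow start threshold
};

const uint32_t kMss = 1460;

void OnAckRound (HsState &s, uint32_t acksThisRtt)
{
  if (s.cwnd < s.ssthresh)
    s.cwnd += acksThisRtt * kMss;                  // slow start: Cwnd = Cwnd + N
  else
    s.cwnd += kMss;                                // simplified growth afterwards
}

void OnPacketLoss (HsState &s)
{
  s.cwnd = static_cast<uint32_t> (0.875 * s.cwnd); // Cwnd_new = 0.875 × Cwnd_old
}

void OnEcnMark (HsState &s)
{
  if (s.cwnd > kMss)
    s.cwnd -= kMss;                                // Cwnd = Cwnd − MSS
}

int main ()
{
  HsState s{4 * kMss, 64 * kMss};                  // e.g., initial window of 4 MSS
  OnAckRound (s, 4);
  OnPacketLoss (s);
  OnEcnMark (s);
  std::printf ("cwnd = %u bytes\n", s.cwnd);
  return 0;
}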


Suitable scenarios

• Bulk Data Transfers: its capability to maintain a larger congestion window
  and recover quickly from losses helps reduce transfer times significantly.
• Streaming Media Applications: TCP HighSpeed is effective for streaming
  applications, such as video conferencing and live broadcasts, where a
  stable and consistent data stream is essential.

Limiting scenarios

• Short, Bursty Flows: TCP HighSpeed may not achieve optimal performance;
  its slower response time in adjusting the congestion window can result in
  lower throughput before the transfer completes.
• Unfairness in Mixed Environments: in scenarios where multiple TCP variants
  operate together, TCP HighSpeed can be less friendly to other flows due to
  its aggressive nature.

TCP Vegas
Algorithm Details

• TCP Vegas relies on round-trip time (RTT) measurements to adjust the
  congestion window size.
• Congestion Window (Cwnd) is initialized to a small value, typically 1 MSS;
  ssthresh is set to a large initial value. During the initial transmission,
  TCP Vegas records the smallest RTT observed, assuming it represents the
  path delay under no-congestion conditions.
• During slow start, the congestion window (Cwnd) again increases
  exponentially:
  Cwnd = Cwnd × 2 (per RTT)
  and this phase continues until Cwnd ≥ ssthresh.


• For each RTT, TCP Vegas calculates an Expected Throughput and a Measured
  Throughput:
  Expected Throughput = Cwnd / RTTbase
  Measured Throughput = Cwnd / RTTcurrent
  and computes their difference:
  δ = Expected Throughput − Measured Throughput

• If δ < α (where α ≈ 1), the network is not congested, and TCP Vegas can
  increase the window size by 1.
  If δ > β (where β ≈ 3), it indicates congestion, and TCP Vegas should
  decrease the window size by 1 to prevent further queuing.
  Otherwise, if α ≤ δ ≤ β, the Cwnd is left unchanged.
• When the estimated RTT begins to increase or packet queuing is detected,
  TCP Vegas exits slow start. This happens earlier than in the traditional
  TCP Reno algorithm.
• TCP Vegas does not rely on packet loss as the primary signal for congestion
  control, but it still incorporates a mechanism to handle loss if duplicate
  ACKs are received:
  ssthresh = Cwnd / 2
  Cwnd = ssthresh
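The per-RTT decision above can be sketched as below. As in the standard Vegas formulation, the throughput difference is scaled by RTTbase so that δ is expressed in segments of queued data before being compared with α and β; the function name and example values are illustrative.

#include <cstdio>

// Per-RTT Vegas-style adjustment following the expected/measured throughput
// comparison above. cwnd is in segments; alpha and beta are the thresholds
// from the text (segments of queued data).
double VegasAdjust (double cwnd, double baseRtt, double currentRtt,
                    double alpha = 1.0, double beta = 3.0)
{
  double expected = cwnd / baseRtt;               // Expected Throughput
  double measured = cwnd / currentRtt;            // Measured Throughput
  double delta = (expected - measured) * baseRtt; // backlog in segments

  if (delta < alpha)
    return cwnd + 1.0;  // little queuing: grow by 1
  if (delta > beta)
    return cwnd - 1.0;  // too much queuing: back off by 1
  return cwnd;          // alpha <= delta <= beta: hold steady
}

int main ()
{
  double cwnd = 40.0;                         // segments
  cwnd = VegasAdjust (cwnd, 0.020, 0.021);    // baseRtt 20 ms, current 21 ms
  std::printf ("new cwnd = %.1f segments\n", cwnd);
  return 0;
}
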
Suitable scenarios
• Congested Backbone Networks: TCP Vegas can be advantageous on heavily
  utilized backbone links. By responding to early congestion signs before
  packet loss, it helps reduce overall packet drops and smooths traffic flow.
• Wireless Networks with Variable Latency: TCP Vegas can be beneficial in
  wireless networks with fluctuating latencies because it adapts based on
  RTT rather than packet loss.


Limiting scenarios

• High-Latency, High-Bandwidth Networks: TCP Vegas may underperform in
  networks with high bandwidth-delay products (BDP), such as
  transcontinental fiber links.
• Fairness in Mixed TCP Environments: when TCP Vegas shares a network with
  more aggressive TCP variants like TCP Cubic or Reno, it can experience
  fairness issues.

TCP Veno
Algorithm Details
• TCP Veno is a hybrid of TCP Reno and TCP Vegas, aiming to dis-
tinguish between congestion-based and random packet losses to respond
more effectively.
• Congestion Window (Cwnd) is initialized to 1 MSS, and ssthresh is set to a
  large initial value. The algorithm also initializes a threshold, β, used to
  distinguish between congestion-induced and random packet losses.
• During slow start, the congestion window (Cwnd) increases exponentially:
  Cwnd = Cwnd × 2 (per RTT)
  and this phase continues until Cwnd ≥ ssthresh.
• For each RTT, TCP Veno, just like TCP Vegas, calculates an Expected
  Throughput and a Measured Throughput:
  Expected Throughput = Cwnd / RTTbase
  Measured Throughput = Cwnd / RTTcurrent
  and computes their difference:
  δ = Expected Throughput − Measured Throughput


• If δ < β, it signifies the network is not congested, hence:
  Cwnd = Cwnd + MSS / Cwnd
  If δ ≥ β, it signifies congestion, and TCP Veno does not alter the Cwnd.

• When packet loss is detected, TCP Veno attempts to determine the cause
  (random loss vs. congestion loss) using δ.
  If the loss is likely due to congestion (δ ≥ β):
  ssthresh = Cwnd / 2
  Cwnd = ssthresh
  If the loss is likely due to random errors, the response is less aggressive,
  reducing the congestion window less dramatically in order to sustain
  throughput.
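The loss differentiation above can be sketched as below. The backlog estimate is scaled by RTTbase as in Veno's formulation, and the milder 4/5 reduction used for random losses is the commonly cited Veno value (an assumption here, since the text above only says the reduction is less dramatic).

#include <cstdio>

// Veno-style loss response: use the estimated backlog to decide whether the
// loss looks congestive (halve) or random (milder 4/5 reduction, an assumed
// value). beta = 3 segments is the threshold used in the text.
struct VenoState
{
  double cwnd;      // segments
  double ssthresh;  // segments
};

double Backlog (double cwnd, double baseRtt, double currentRtt)
{
  double expected = cwnd / baseRtt;
  double measured = cwnd / currentRtt;
  return (expected - measured) * baseRtt;  // queued segments
}

void OnLoss (VenoState &s, double baseRtt, double currentRtt, double beta = 3.0)
{
  double n = Backlog (s.cwnd, baseRtt, currentRtt);
  if (n >= beta)
    s.ssthresh = s.cwnd / 2.0;          // congestive loss: halve
  else
    s.ssthresh = s.cwnd * 4.0 / 5.0;    // random loss: milder reduction
  s.cwnd = s.ssthresh;
}

int main ()
{
  VenoState s{60.0, 64.0};
  OnLoss (s, 0.020, 0.022);             // base 20 ms, current 22 ms
  std::printf ("cwnd after loss = %.1f segments\n", s.cwnd);
  return 0;
}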

Suitable scenarios

• Wireless Networks with High Random Packet Loss: by avoiding drastic
  reductions in the congestion window for random losses, TCP Veno maintains
  better throughput.
• Hybrid Networks (Mixed Wired and Wireless): TCP Veno can outperform other
  algorithms by adjusting more flexibly to varying delay and loss
  characteristics across segments.

Limiting scenarios

• Low-Latency, High-Bandwidth Networks: Veno's delay-based congestion
  control can become overly conservative; its adjustments based on small
  changes in RTT can result in lower utilization of available bandwidth.
• Congestion-Dominated Networks: TCP Veno may struggle, as its congestion
  avoidance mechanism is optimized for random, non-congestion losses.


Part 2: Understanding TCP Congestion Window using NS-3


Part A

For each of the given congestion control algorithms,


• NewReno
• HighSpeed
• Veno
• Vegas
perform the simulations and answer the following questions.

Explanation of the code used for solving this question.

• A personal class MyApp has been built to control the packet transfer
  independently of the OnOffApplication of NS-3. The functions shown in the
  image below are used to achieve this.
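In outline, such an application class follows the standard ns-3 tutorial pattern (fifth.cc); the sketch below is illustrative and the code in the image may differ in its details.

#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/applications-module.h"

using namespace ns3;

class MyApp : public Application
{
public:
  MyApp () : m_socket (0), m_packetSize (0), m_nPackets (0),
             m_running (false), m_packetsSent (0) {}

  // Hand the application a pre-created TCP socket plus the traffic parameters.
  void Setup (Ptr<Socket> socket, Address address, uint32_t packetSize,
              uint32_t nPackets, DataRate dataRate)
  {
    m_socket = socket; m_peer = address; m_packetSize = packetSize;
    m_nPackets = nPackets; m_dataRate = dataRate;
  }

private:
  virtual void StartApplication (void)
  {
    m_running = true; m_packetsSent = 0;
    m_socket->Bind ();
    m_socket->Connect (m_peer);
    SendPacket ();
  }

  virtual void StopApplication (void)
  {
    m_running = false;
    if (m_sendEvent.IsRunning ()) Simulator::Cancel (m_sendEvent);
    if (m_socket) m_socket->Close ();
  }

  void SendPacket (void)
  {
    m_socket->Send (Create<Packet> (m_packetSize));
    if (++m_packetsSent < m_nPackets) ScheduleTx ();
  }

  void ScheduleTx (void)
  {
    if (m_running)
      {
        // Pace packets at the requested application data rate.
        Time tNext (Seconds (m_packetSize * 8 /
                             static_cast<double> (m_dataRate.GetBitRate ())));
        m_sendEvent = Simulator::Schedule (tNext, &MyApp::SendPacket, this);
      }
  }

  Ptr<Socket> m_socket;
  Address     m_peer;
  uint32_t    m_packetSize;
  uint32_t    m_nPackets;
  DataRate    m_dataRate;
  EventId     m_sendEvent;
  bool        m_running;
  uint32_t    m_packetsSent;
};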


• The following 3 functions are used to store the congestion window size, the
  RTT value and the ssthresh whenever they change during the transfer.
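These trace sinks typically look like the sketch below, using the signatures of the CongestionWindow, RTT and SlowStartThreshold trace sources of ns-3's TcpSocketBase; the stream and function names are illustrative, and the same ns-3 includes as in the MyApp sketch are assumed.

// Trace sinks writing "time <tab> value" lines to the output streams.
static void
CwndChange (Ptr<OutputStreamWrapper> stream, uint32_t oldCwnd, uint32_t newCwnd)
{
  *stream->GetStream () << Simulator::Now ().GetSeconds () << "\t" << newCwnd << std::endl;
}

static void
RttChange (Ptr<OutputStreamWrapper> stream, Time oldRtt, Time newRtt)
{
  *stream->GetStream () << Simulator::Now ().GetSeconds () << "\t" << newRtt.GetSeconds () << std::endl;
}

static void
SsThreshChange (Ptr<OutputStreamWrapper> stream, uint32_t oldVal, uint32_t newVal)
{
  *stream->GetStream () << Simulator::Now ().GetSeconds () << "\t" << newVal << std::endl;
}

// Hooked up after the TCP socket is created, e.g.:
//   socket->TraceConnectWithoutContext ("CongestionWindow",
//       MakeBoundCallback (&CwndChange, cwndStream));
//   socket->TraceConnectWithoutContext ("RTT",
//       MakeBoundCallback (&RttChange, rttStream));
//   socket->TraceConnectWithoutContext ("SlowStartThreshold",
//       MakeBoundCallback (&SsThreshChange, ssThreshStream));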

• Inside the main function, this is the main snippet responsible for selecting
  a particular congestion control algorithm. Here TcpNewReno is used;
  changing it to TcpHighSpeed will make the simulation use the TCP HighSpeed
  CCA.

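The usual ns-3 way of doing this is a Config::SetDefault on the TCP socket type, roughly as below (the snippet in the image may differ slightly):

// Select the congestion control algorithm for every TCP socket in the
// simulation. Changing TcpNewReno to TcpHighSpeed, TcpVegas or TcpVeno
// switches the CCA (fragment; assumes the ns-3 includes shown earlier).
Config::SetDefault ("ns3::TcpL4Protocol::SocketType",
                    TypeIdValue (TypeId::LookupByName ("ns3::TcpNewReno")));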

• Now, upon setting up the nodes with a client and a server using
  InternetStackHelper, we set the data rate (bandwidth) and delay (latency)
  as follows:
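Roughly, this configuration looks like the sketch below, using the base values of this report (5 Mbps, 2 ms, MTU 1500); the NodeContainer name is an illustrative assumption.

// Fragment from main(): the point-to-point link between the client and the
// server with the base parameters of this report (5 Mbps, 2 ms, MTU 1500).
// "nodes" is assumed to be the NodeContainer that already has the Internet
// stack installed via InternetStackHelper.
PointToPointHelper pointToPoint;
pointToPoint.SetDeviceAttribute ("DataRate", StringValue ("5Mbps"));
pointToPoint.SetChannelAttribute ("Delay", StringValue ("2ms"));
pointToPoint.SetDeviceAttribute ("Mtu", UintegerValue (1500));
NetDeviceContainer devices = pointToPoint.Install (nodes);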

• Finally, using AsciiTraceHelper, we stored the data in the following files,
  running the simulation for 20 seconds. The commands ./ns3 build (to
  compile) and ./ns3 run [algo-name].cc (to execute) are used.
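Roughly, the trace and pcap setup and the 20-second run look like this (file names are illustrative):

// Fragment from main(): write the cwnd / RTT / ssthresh traces to ASCII
// files, enable pcap capture for the tshark throughput measurement, and
// run the simulation for 20 seconds.
AsciiTraceHelper asciiTraceHelper;
Ptr<OutputStreamWrapper> cwndStream =
    asciiTraceHelper.CreateFileStream ("NewReno-cwnd.dat");
Ptr<OutputStreamWrapper> rttStream =
    asciiTraceHelper.CreateFileStream ("NewReno-rtt.dat");
Ptr<OutputStreamWrapper> ssThreshStream =
    asciiTraceHelper.CreateFileStream ("NewReno-ssthresh.dat");

pointToPoint.EnablePcapAll ("NewReno");   // pcap files consumed by the shell scripts

Simulator::Stop (Seconds (20.0));
Simulator::Run ();
Simulator::Destroy ();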


For questions no. 1, 2, 3 and 6, outputs from the files below are used:
– NewReno.cc
– Highspeed.cc
– Veno.cc
– Vegas.cc

• In questions 4 and 5 we need to vary the bandwidth, latency and MTU size.
For this I made 4 separate files:
– NewReno test.cc
– Highspeed test.cc
– Veno test.cc
– Vegas test.cc
Each file consists of a function that is called with 3 different values of
these parameters:
– first, bandwidth = 10 Mbps, rest remained the same.
– second, latency = 5 ms, rest remained the same.
– third, MTU = 3000, rest remained the same.
By observing the throughput in these 3 scenarios and the original one, we
can easily see the effect of these parameters on throughput.


• I am using 2 shell scripts that run tshark to calculate the throughput from
  all the pcap files generated as output:
  – throughput script.sh
  – throughput script test.sh
  throughput script test.sh is more dynamic, for dealing with a larger number
  of pcap files.
  Shown below is the throughput script.sh file with the important snippets
  marked in red boxes.

This concludes all the code used in solving this part.

Apart from this, I am using matplotlib in Python to generate the plots asked
for in the question. The code for these can be found in the CN 4 Plots.ipynb
file.

1. Plot the cwnd vs time graph, and describe what you observed, like slow start
and congestion avoidance, in detail.

• For TCP NewReno:


The congestion window starts from 1 MSS and increases exponentially until it
reaches ssthresh, then increases linearly (congestion avoidance). When packet
loss is detected, the Cwnd is reduced and growth resumes from the lower value.

• Upon zooming in on the initial section of the plot, we can notice the
  exponential growth from 1.40 s to 1.45 s and then linear growth.

• For TCP HighSpeed:


The congestion window starts from 4 MSS and increases exponentially until it
reaches ssthresh, then increases linearly. When packet loss is detected, the
Cwnd is reduced and the cycle repeats.


• We can notice a lot more fluctuations for HighSpeed because it focuses
  mainly on utilizing the available bandwidth.

• For TCP Veno:


The congestion window starts from 1 MSS and grows exponentially by a factor
of 2 until a random loss or congestion is detected.


• Upon zooming in, we can notice that TCP Veno is mostly reacting to packet
  losses rather than congestion, as the dips bring the Cwnd value down to
  half of its previous value.

• For TCP Vegas:


The congestion window starts from 1 MSS and grows exponentially by a factor
of 2 until a random loss or congestion is detected. After that, we can see
that it tries to maintain the flow by keeping the Cwnd around 3000-4000.


2. Find the average throughput for each of the congestion control algorithms using
tshark from the pcap files generated, and state which algorithm performed the
best.

Upon running the throughput script.sh we got the following results:

TCP Variant    Average Throughput (KBps)
NewReno        527.29
HighSpeed      525.24
Veno           520.46
Vegas          506.05


So we can say TCP NewReno performed the best, followed by HighSpeed.

3. How many times did the TCP algo reduce the cwnd and why?

Upon running the snippets in CN 4 Plots.ipynb we get the results as:

TCP Variant    No. of times Cwnd reduced
NewReno        991
HighSpeed      1513
Veno           971
Vegas          224

• TCP NewReno drops the Cwnd size due to packet loss.
• TCP HighSpeed shows aggressive behavior, with the highest number of Cwnd
  reductions.
• TCP Veno reduces the window size mostly due to packet losses, hence results
  similar to NewReno, but slightly fewer reductions because of its congestion
  avoidance.
• TCP Vegas reduces the Cwnd due to congestion (rising RTT) rather than loss.

Here, TCP Vegas stands out as the most consistent congestion control
algorithm, reducing the Cwnd value the fewest number of times.

4. I am answering 4 and 5 together.


• Check the effect of changing the bandwidth and latency of point-to-point
connection and explain its effect on average throughput.
• Explain in short what is the effect of changing the default MTU size.

I considered 4 scenarios:

• Base: bandwidth = 5 Mbps, latency = 2 ms and MTU = 1500.

• Test 1: bandwidth = 10 Mbps, rest remained the same.


• Test 2: latency = 5 ms, rest remained the same.

• Test 3: MTU = 3000, rest remained the same.

Upon executing these 4 cases for each of the CCAs, we get the following
average throughputs (in KBps):

TCP Variant    Base      Test 1 (b.w.)   Test 2 (lat.)   Test 3 (MTU)
NewReno        527.29    839.10          386.25          510.08
HighSpeed      525.24    963.31          501.75          525.43
Veno           520.46    881.03          395.52          530.04
Vegas          506.05    666.23          280.04          532.45

Observations

• Increasing the bandwidth increases the throughput by a lot, especially for
  aggressive algorithms like HighSpeed, where the result almost doubles. For
  Vegas the impact is not as large as for the others.
• Increasing the latency negatively affects the network throughput in every
  case, penalizing Vegas the most.
• Increasing the MTU does not affect the loss-focused algorithms much.
  However, the delay-based algorithms like Vegas and Veno benefit from the
  larger MTU, as they now send more data in one go while avoiding packet
  loss.


6. Plot the rtt vs time graph and explain your inferences and observations.

• In TCP NewReno, the maximum RTT reached was about 0.028 s, which is quite
  good, and the number of fluctuations signifies how the congestion in the
  network is varying.

• In TCP HighSpeed, the maximum RTT reached was around 0.045 s, which is very
  high for a 2-node point-to-point network. It focuses mainly on sending
  packets and hence adds congestion to the network.


• TCP Veno is similar to NewReno, having a maximum RTT of 0.033 s. This
  should not be the case, as the algorithm is meant to enhance NewReno by
  incorporating delay-based congestion avoidance, so this result is somewhat
  contradictory.

• TCP Vegas's maximum RTT value is around 0.035 s, reached initially as a
  sudden rise; later on it remains at about 0.007 s, which is remarkable, as
  it controls congestion very efficiently.

