
CS549: Performance Analysis of Computer Networks

Instructor: Dr. Sreelakshmi Manjunath

Lab Assignment 3 : Creating Virtual Network Containers on a Linux OS

Name : Sandeep N Kundalwal Date of Submission : 1/05/2023


Roll No.: T22051

Experiment 01

Aim : To create network containers using netns and ping between them on a Linux OS.
Description : Create two network namespaces, NetNsA and NetNsB, and ping from one namespace to the other (here, from NetNsA to NetNsB). The namespaces are pinged in three different ways (the corresponding netem commands are shown below):
(i) simply run ping between NetNsA and NetNsB, with no added delay;
(ii) run ping with a fixed delay of 50 ms;
(iii) run ping with a variable delay of 50 ms (a 50 ms mean with up to ±50 ms of random variation).
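The delays are imposed with the netem queue discipline on NetNsA's eth0 interface; these are the same two commands used in the setup script further below. The second time parameter in the variable-delay case tells netem to add random variation of up to ±50 ms around the 50 ms mean:

ip netns exec NetNsA tc qdisc add dev eth0 root netem delay 50ms
ip netns exec NetNsA tc qdisc add dev eth0 root netem delay 50ms 50ms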

Machine Used : Raspberry Pi 3 Model B+.

Procedure : Measuring average RTT using three methods mentioned in the description (in ms).

Iteration | RTT using ping (ms) | RTT using ping with fixed 50 ms delay (ms) | RTT using ping with variable 50 ms delay (ms)
1  | 0.114 | 37.631 | 53.354
2  | 0.114 | 50.139 | 63.738
3  | 0.109 | 50.127 | 51.808
4  | 0.114 | 50.119 | 10.782
5  | 0.106 | 50.130 | 80.876
6  | 0.113 | 50.132 | 57.119
7  | 0.118 | 50.130 | 68.095
8  | 0.113 | 50.136 | 22.923
9  | 0.108 | 50.130 | 39.548
10 | 0.113 | 50.133 | 41.765


Note:
➔ For this experiment, 10 readings were taken for each method specified above. For each reading, the ping command sends 4 echo requests and the average round-trip time across them is recorded.

Fig. Line Chart: No. of Iterations v/s Average RTT

Python Script:
import subprocess

setup = [
    'ip netns add NetNsA',
    'ip netns add NetNsB',
    'ip -n NetNsA link add eth0 type veth peer name eth0 netns NetNsB',
    'ip -n NetNsA addr add 192.168.1.1/24 dev eth0',
    'ip -n NetNsB addr add 192.168.2.1/24 dev eth0',
    'ip netns exec NetNsA ip link set eth0 up',
    'ip netns exec NetNsB ip link set eth0 up',
    'ip netns exec NetNsA ip route add default via 192.168.1.1 dev eth0',
    'ip netns exec NetNsB ip route add default via 192.168.2.1 dev eth0'
]

queuingDisciplines = [
    'true',  # no-op placeholder: plain ping with no added delay
    'ip netns exec NetNsA tc qdisc add dev eth0 root netem delay 50ms',
    'ip netns exec NetNsA tc qdisc add dev eth0 root netem delay 50ms 50ms'
]

teardown = [
    'ip netns del NetNsA',
    'ip netns del NetNsB'
]

# Run each shell command in sequence; optionally ignore failures (used for teardown).
def run_steps(setup_steps, ignore_errors=False):
    for step in setup_steps:
        try:
            print('+ {}'.format(step))
            subprocess.check_call(step, shell=True)
        except subprocess.CalledProcessError:
            if ignore_errors:
                pass
            else:
                raise

if __name__ == '__main__':
    try:
        run_steps(setup)

        iteration = 1
        pingCmd = "ip netns exec NetNsA ping -c 4 192.168.2.1"
        delQueuingDisciplineCmd = "ip netns exec NetNsA tc qdisc del dev eth0 root"

        for queuingDiscipline in queuingDisciplines:
            averages = []
            print(queuingDiscipline)

            # Remove the fixed-delay qdisc before installing the variable-delay one.
            if iteration == 3:
                subprocess.call(delQueuingDisciplineCmd, shell=True)

            # subprocess.call blocks, so the qdisc is in place before pinging starts.
            subprocess.call(queuingDiscipline, shell=True)

            for i in range(10):
                pingOutput = subprocess.run(pingCmd, shell=True,
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE, encoding='utf-8')
                with open("log.txt", "a") as log:
                    log.write(pingOutput.stdout)

                # Parse the 'rtt min/avg/max/mdev = ...' summary line;
                # field 4 of the '/'-split line is the average RTT.
                for line in pingOutput.stdout.splitlines():
                    if line.find("avg") != -1:
                        splittedLine = line.split("/")
                        avg = splittedLine[4]
                        averages.append(avg)

            iteration += 1
            print(averages)

    finally:
        run_steps(teardown, ignore_errors=True)
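For reference, here is the parsing step in isolation; the summary line below is illustrative ping output in the standard format, not a recorded measurement:

line = "rtt min/avg/max/mdev = 0.105/0.114/0.125/0.007 ms"
fields = line.split("/")
# fields: ['rtt min', 'avg', 'max', 'mdev = 0.105', '0.114', '0.125', '0.007 ms']
print(fields[4])   # -> '0.114', the average RTT in ms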

Note:
The auto-generated logs of the above experiment are available at the link below:
➔ https://drive.google.com/drive/folders/154plgfbPVs75IbNSmBqQJXPkmXP3bYcT?usp=share_link
Observations :
1. Ping with a fixed delay of 50 ms between the two network namespaces shows minimal fluctuation in round-trip time after the first iteration; the same holds for ping with no added delay.
2. Ping with variable delay fluctuates heavily, signifying lower accuracy.
3. After the first iteration, ping with fixed delay stabilizes at around 50.13 ms.
4. The average RTTs for all three pings are as follows:

Average RTT for regular ping (ms) | Average RTT for fixed delay (ms) | Average RTT for variable delay (ms)
0.1122 | 48.8807 | 49.0008
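These averages can be verified directly from the readings in the table above; a minimal check in Python for the regular-ping column:

rtts = [0.114, 0.114, 0.109, 0.114, 0.106, 0.113, 0.118, 0.113, 0.108, 0.113]
print(sum(rtts) / len(rtts))   # -> approximately 0.1122 ms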

Inferences:
1. From the average RTTs, we can infer that
Regular Ping < Ping with Fixed Delay < Ping with Variable Delay
2. Pings with a fixed delay have the highest accuracy, whereas pings with a variable delay have the least.
3. The increased latency and jitter introduced by the queue discipline can degrade network performance and may cause issues such as packet loss and congestion.
4. The ‘netem’ queue discipline is a powerful tool for simulating network conditions and testing the behavior of applications and protocols under different network scenarios.
Experiment No. 02

Aim : To analyze the performance of file transfer between the two network namespaces using ftp and scp commands.
Description: Create different flows to examine the performance of file transfer between the two network namespaces connected via Ethernet. Two types of flows need to be created:

1. Elephant Flow (large file size transfer)
2. Mouse Flow (small file size transfer)

Machine used : Raspberry Pi 3 Model B+

Task 1:
Write a Python script titled ‘elephant’ that transfers a large file using ftp. Execution time should be on the order of tens of seconds. Write another Python script titled ‘mouse’ that transfers a small file using scp. Execution time should be on the order of tens of seconds. Record the start time and end time for each file transfer.

elephant.py for large file size transfer using scp.


# ip netns exec NetNsA scp /path-to-file-to-copy/file.pdf root@192.168.2.1:destination-path
# run command - sudo python3 elephant.py
import time
import subprocess

if __name__ == "__main__":
    file = '100MB.8'
    startTime = time.time()
    # sshpass supplies the SSH password non-interactively; NetNsB is reachable at 192.168.2.1
    scpCmd = ('ip netns exec NetNsA sshpass -p "root" scp /home/rpi/Desktop/NetNsA/' + file +
              ' root@192.168.2.1:/home/rpi/Desktop/NetNsB/' + file)
    scpElephantProcess = subprocess.Popen(scpCmd, shell=True,
            stdout=subprocess.PIPE, stderr=subprocess.PIPE, encoding='utf-8')
    scpElephantProcess.communicate()   # block until the transfer completes
    endTime = time.time()
    executionTime = endTime - startTime
    print("File Size = {}, ExecutionTime = {}".format(file, executionTime))

mouse.py for small file size transfer using scp with exponential inter-file gap.
#run command - sudo python3 mouse.py <NmIndices> <FileIndices>
import sys
import time
import random
import subprocess

Nm = [5, 10, 15, 20]   # number of mouse transfers per run (matches the Nm values reported below)

mouse_files = ['1B.1', '10KB.3', '100KB.4', '500KB.5']

if __name__ == "__main__":
    N = Nm[int(sys.argv[1])]
    file = mouse_files[int(sys.argv[2])]
    startTime = time.time()
    for i in range(N):
        scpCmd = ('ip netns exec NetNsA sshpass -p "root" scp /home/rpi/Desktop/NetNsA/' + file
                  + ' root@192.168.2.1:/home/rpi/Desktop/NetNsB/' + str(i) + '_' + file)
        scpMouseProcess = subprocess.Popen(scpCmd, shell=True,
                stdout=subprocess.PIPE, stderr=subprocess.PIPE, encoding='utf-8')
        scpMouseProcess.communicate()
        # Exponentially distributed inter-file gap with rate 0.20 (mean 5 s)
        time.sleep(random.expovariate(0.20))

    endTime = time.time()
    executionTime = endTime - startTime
    print("File Size = {}, Nm = {}, ExecutionTime = {}".format(file, N, executionTime))

Task 2:
Write a control script that runs elephant.py and Nm mouse.py in parallel.

Bash Script for running elephant.py and Nm mouse.py in parallel.


#!/bin/bash
# For each (Nm index, mouse-file index) pair, run one elephant transfer and
# one batch of Nm mouse transfers in parallel, then wait for both to finish.
for nm in 0 1 2 3; do
    for f in 0 1 2 3; do
        python3 mouse.py $nm $f &
        python3 elephant.py &
        wait
    done
done

Task 3:
Repeat task 2 for various values of Nm. Plot elephant throughput Xe v/s Nm mouse transfers.

Table 2 : Readings observed after running the control script

Mouse File Size | No. of mouse transfers (Nm) | Elephant execution time (s) | Mouse execution time (s) | Total execution time (s) | Elephant throughput Xe (MB/s)
1 B    | 5  | 48.854 | 21.496  | 70.35   | 10.235
1 B    | 10 | 56.454 | 68.169  | 124.623 | 8.857
1 B    | 15 | 56.622 | 95.83   | 152.452 | 8.83
1 B    | 20 | 53.425 | 111.217 | 164.642 | 9.359
10 KB  | 5  | 57.457 | 43.324  | 100.781 | 8.702
10 KB  | 10 | 52.460 | 59.556  | 112.016 | 9.531
10 KB  | 15 | 52.637 | 70.011  | 122.648 | 9.499
10 KB  | 20 | 53.027 | 123.248 | 176.275 | 9.429
100 KB | 5  | 57.493 | 31.811  | 89.304  | 8.697
100 KB | 10 | 52.423 | 50.663  | 103.086 | 9.538
100 KB | 15 | 52.284 | 67.763  | 120.047 | 9.563
100 KB | 20 | 51.915 | 159.498 | 211.413 | 9.631
500 KB | 5  | 52.993 | 23.672  | 76.665  | 9.435
500 KB | 10 | 56.019 | 80.172  | 136.191 | 8.926
500 KB | 15 | 56.644 | 83.016  | 139.66  | 8.827
500 KB | 20 | 52.166 | 143.642 | 195.808 | 9.585
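The throughput column is consistent with Xe = (elephant file size) / (elephant execution time), using the 500 MB elephant file listed in the note below: for the first row, 500 MB / 48.854 s ≈ 10.235 MB/s, matching the tabulated value (which is also why the throughput unit is MB/s rather than Mbps).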


Note:
The following considerations were employed while performing the above experiment:
➔ File Sizes -
● Elephant File - 500 MB
● Mouse Files - [ 1 B, 10 KB, 100 KB, 500 KB ]
➔ Number of runs for the Mouse script (Nm) - [ 5, 10, 15, 20 ]

Fig. Line Chart: Elephant Throughput v/s Nm for various mouse file sizes
Fig. Line Chart: Execution Time v/s Number of Iterations

Observations:
1. As the number of mouse transfers (Nm) increases, the average execution time for the mouse files increases drastically.
2. The average execution time for the elephant file fluctuates very little throughout the experiment.
3. The average execution times for the elephant file and the Nm mouse files of various sizes are tabulated below:

Number of Iterations (Nm) | Average Elephant Execution Time (s) | Mouse Execution Time for 1 B (s) | Mouse Execution Time for 10 KB (s) | Mouse Execution Time for 100 KB (s) | Mouse Execution Time for 500 KB (s)
5  | 54.199 | 21.496  | 43.324  | 31.811  | 23.672
10 | 54.339 | 68.169  | 59.556  | 50.663  | 80.172
15 | 54.547 | 95.83   | 70.011  | 67.763  | 83.016
20 | 52.633 | 111.217 | 123.248 | 159.498 | 143.642
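The elephant column above is the mean of the four elephant execution times for that Nm across the mouse file sizes in Table 2; a minimal check in Python for the Nm = 5 row:

times = [48.854, 57.457, 57.493, 52.993]   # elephant times for Nm = 5, from Table 2
print(sum(times) / len(times))   # -> approximately 54.199 s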

Inferences:
1. As the number of mouse flows (Nm) increases, the total throughput of the system increases up to a certain point, beyond which the system becomes congested and the throughput decreases due to increased packet loss and delay.
2. For the 500 KB mouse file, the execution time is almost always higher than for the rest of the mouse files, and for every file size the mouse transfer time grows roughly in proportion to the number of iterations.
3. The average throughput of the elephant file doesn’t fluctuate much, since its average execution time doesn’t vary much.
4. Overall, we can infer that the performance of a network with a mix of elephant and mouse flows depends on the number and size of the mouse flows, as well as the available bandwidth and capacity of the network. By optimizing these parameters, we can improve the overall throughput and performance of the network.
