
Using iPerf to Troubleshoot Speed/Throughput Issues

December 29, 2011

Posted by Andrew Tyler in Customer Service, SoftLayer, Technology, Tips and Tricks

Two of the most common network characteristics we look at when investigating network-related
concerns in the NOC are speed and throughput. You may have experienced the following
scenario yourself: You just provisioned a new bad-boy server with a gigabit connection in a data
center on the opposite side of the globe. You begin to upload your data and, to your shock, you
see "Time Remaining: 10 Hours." "What's wrong with the network?" you wonder. The traceroute
and MTR look fine, so where's the performance and bandwidth you're paying for?

This issue is all too common, and it has nothing to do with the network: the culprits are none
other than TCP and the laws of physics.

In data transmission, TCP sends a certain amount of data and then pauses. To ensure proper
delivery, it doesn't send more until it receives an acknowledgement from the remote host that all
of that data was received. The amount it sends before waiting is called the "TCP window."
Signals travel at roughly the speed of light, and typically most hosts are fairly close together, so
this "windowing" happens so fast we don't even notice it. But as the distance between two hosts
grows, the speed of light stays constant, so the further apart the hosts are, the longer it takes for
the sender to receive the acknowledgement from the remote host, reducing overall throughput.
This relationship is captured by the "Bandwidth Delay Product," or BDP: the amount of data
that must be in flight to keep a link of a given bandwidth full over a given round-trip time.

We can overcome BDP to some degree by sending more data at a time. We do this by adjusting
the "TCP window," telling TCP to keep more data in flight per stream than the default
parameters allow. Each OS is different and the default values will vary, but almost all operating
systems allow tweaking of the TCP stack and/or using parallel data streams. So what is iPerf,
and how does it fit into all of this?

What is iPerf?
iPerf is a simple, open-source, command-line network diagnostic tool that runs on Linux, BSD,
or Windows and that you install on two endpoints. One side runs in 'server' mode, listening for
requests; the other runs in 'client' mode, sending data. When activated, it tries to send as much
data down your pipe as it can, spitting out transfer statistics as it does. What's so cool about
iPerf is that you can test any number of TCP window settings and parallel-stream combinations
in real time. There's even a Java-based GUI that runs on top of it, called JPerf (JPerf is beyond
the scope of this article, but I recommend looking into it). What's even cooler is that because
iPerf resides in memory, there are no files to clean up.

How do I use iPerf?


iPerf can be downloaded quickly from SourceForge and installed on both hosts. It uses port 5001
by default, and the bandwidth it reports is measured from the client to the server. Each test runs
for 10 seconds by default, but virtually every setting is adjustable. Once it's installed, simply
bring up the command line on both hosts and run these commands.

On the server side:


iperf -s

On the client side:


iperf -c [server_ip]

The output on the client side will look like this:

#iperf -c 10.10.10.5
------------------------------------------------------------
Client connecting to 10.10.10.5, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.10.10.10 port 46956 connected with 10.10.10.5 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 10.0 sec 10.0 MBytes 1.00 Mbits/sec

There's a lot we can do to make this output more meaningful. For example, let's say we want the
test to run for 20 seconds instead of 10 (-t 20), we want transfer data displayed every 2 seconds
instead of only at the end (-i 2), and we want to test on port 8000 instead of 5001 (-p 8000). For
the purposes of this exercise, let's use those customizations as our baseline. This is what the
command strings would look like on both ends:

Client Side:
#iperf -c 10.10.10.5 -p 8000 -t 20 -i 2
------------------------------------------------------------
Client connecting to 10.10.10.5, TCP port 8000
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.10.10.10 port 46956 connected with 10.10.10.5 port 8000
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 2.0 sec 6.00 MBytes 25.2 Mbits/sec
[ 3] 2.0- 4.0 sec 7.12 MBytes 29.9 Mbits/sec
[ 3] 4.0- 6.0 sec 7.00 MBytes 29.4 Mbits/sec
[ 3] 6.0- 8.0 sec 7.12 MBytes 29.9 Mbits/sec
[ 3] 8.0-10.0 sec 7.25 MBytes 30.4 Mbits/sec
[ 3] 10.0-12.0 sec 7.00 MBytes 29.4 Mbits/sec
[ 3] 12.0-14.0 sec 7.12 MBytes 29.9 Mbits/sec
[ 3] 14.0-16.0 sec 7.25 MBytes 30.4 Mbits/sec
[ 3] 16.0-18.0 sec 6.88 MBytes 28.8 Mbits/sec
[ 3] 18.0-20.0 sec 7.25 MBytes 30.4 Mbits/sec
[ 3] 0.0-20.0 sec 70.1 MBytes 29.4 Mbits/sec

Server Side:
#iperf -s -p 8000 -i 2
------------------------------------------------------------
Server listening on TCP port 8000
TCP window size: 8.00 KByte (default)
------------------------------------------------------------
[852] local 10.10.10.5 port 8000 connected with 10.10.10.10 port 58316
[ ID] Interval Transfer Bandwidth
[ 4] 0.0- 2.0 sec 6.05 MBytes 25.4 Mbits/sec
[ 4] 2.0- 4.0 sec 7.19 MBytes 30.1 Mbits/sec
[ 4] 4.0- 6.0 sec 6.94 MBytes 29.1 Mbits/sec
[ 4] 6.0- 8.0 sec 7.19 MBytes 30.2 Mbits/sec
[ 4] 8.0-10.0 sec 7.19 MBytes 30.1 Mbits/sec
[ 4] 10.0-12.0 sec 6.95 MBytes 29.1 Mbits/sec
[ 4] 12.0-14.0 sec 7.19 MBytes 30.2 Mbits/sec
[ 4] 14.0-16.0 sec 7.19 MBytes 30.2 Mbits/sec
[ 4] 16.0-18.0 sec 6.95 MBytes 29.1 Mbits/sec
[ 4] 18.0-20.0 sec 7.19 MBytes 30.1 Mbits/sec
[ 4] 0.0-20.0 sec 70.1 MBytes 29.4 Mbits/sec

There are many, many other parameters you can set that are beyond the scope of this article, but
for our purposes, the main use is to prove out our bandwidth. This is where we'll use the TCP
window options and parallel streams. To set a new TCP window you use the -w switch and you
can set the parallel streams by using -P.

Increased TCP window commands:

Server side:
#iperf -s -w 1024k -i 2

Client side:
#iperf -i 2 -t 20 -c 10.10.10.5 -w 1024k

And here are the iPerf results from two SoftLayer file servers, one in Washington, D.C., acting
as the client and the other in Seattle acting as the server:

Client Side:
# iperf -i 2 -t 20 -c 10.10.10.5 -p 8000 -w 1024k
------------------------------------------------------------
Client connecting to 10.10.10.5, TCP port 8000
TCP window size: 1.00 MByte (WARNING: requested 1.00 MByte)
------------------------------------------------------------
[ 3] local 10.10.10.10 port 53903 connected with 10.10.10.5 port 8000
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 2.0 sec 25.9 MBytes 109 Mbits/sec
[ 3] 2.0- 4.0 sec 28.5 MBytes 120 Mbits/sec
[ 3] 4.0- 6.0 sec 28.4 MBytes 119 Mbits/sec
[ 3] 6.0- 8.0 sec 28.9 MBytes 121 Mbits/sec
[ 3] 8.0-10.0 sec 28.0 MBytes 117 Mbits/sec
[ 3] 10.0-12.0 sec 29.0 MBytes 122 Mbits/sec
[ 3] 12.0-14.0 sec 28.0 MBytes 117 Mbits/sec
[ 3] 14.0-16.0 sec 29.0 MBytes 122 Mbits/sec
[ 3] 16.0-18.0 sec 27.9 MBytes 117 Mbits/sec
[ 3] 18.0-20.0 sec 29.0 MBytes 122 Mbits/sec
[ 3] 0.0-20.0 sec 283 MBytes 118 Mbits/sec

Server Side:
#iperf -s -w 1024k -i 2 -p 8000
------------------------------------------------------------
Server listening on TCP port 8000
TCP window size: 1.00 MByte
------------------------------------------------------------
[ 4] local 10.10.10.5 port 8000 connected with 10.10.10.10 port 53903
[ ID] Interval Transfer Bandwidth
[ 4] 0.0- 2.0 sec 25.9 MBytes 109 Mbits/sec
[ 4] 2.0- 4.0 sec 28.6 MBytes 120 Mbits/sec
[ 4] 4.0- 6.0 sec 28.3 MBytes 119 Mbits/sec
[ 4] 6.0- 8.0 sec 28.9 MBytes 121 Mbits/sec
[ 4] 8.0-10.0 sec 28.0 MBytes 117 Mbits/sec
[ 4] 10.0-12.0 sec 29.0 MBytes 121 Mbits/sec
[ 4] 12.0-14.0 sec 28.0 MBytes 117 Mbits/sec
[ 4] 14.0-16.0 sec 29.0 MBytes 122 Mbits/sec
[ 4] 16.0-18.0 sec 28.0 MBytes 117 Mbits/sec
[ 4] 18.0-20.0 sec 29.0 MBytes 121 Mbits/sec
[ 4] 0.0-20.0 sec 283 MBytes 118 Mbits/sec

We can see here that by increasing the TCP window from the default value to 1MB (1024k), we
achieved around a 400% increase in throughput over our baseline. Unfortunately, that's the
largest window this OS will allow. So what more can we do? Parallel streams! With multiple
simultaneous streams, we can fill the pipe close to its maximum usable capacity.
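
The arithmetic behind this is simple: each stream negotiates its own window, so the aggregate
amount of data in flight scales with the stream count. Sticking with the illustrative 70 ms
round-trip figure assumed earlier (not a measured value from these tests):

7 streams x 1 MB window ≈ 7 MBytes in flight

That's close to the ~8.75 MByte BDP of a gigabit, long-haul path, which is why a handful of
parallel streams can fill a pipe that a single stream cannot.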

Parallel Stream Command:


#iperf -i 2 -t 20 -c 10.10.10.5 -p 8000 -w 1024k -P 7

Client Side:
#iperf -i 2 -t 20 -c 10.10.10.5 -p 8000 -w 1024k -P 7
------------------------------------------------------------
Client connecting to 10.10.10.5, TCP port 8000
TCP window size: 1.00 MByte (WARNING: requested 1.00 MByte)
------------------------------------------------------------
[ ID] Interval Transfer Bandwidth
[ 9] 0.0- 2.0 sec 24.9 MBytes 104 Mbits/sec
[ 4] 0.0- 2.0 sec 24.9 MBytes 104 Mbits/sec
[ 7] 0.0- 2.0 sec 25.6 MBytes 107 Mbits/sec
[ 8] 0.0- 2.0 sec 24.9 MBytes 104 Mbits/sec
[ 5] 0.0- 2.0 sec 25.8 MBytes 108 Mbits/sec
[ 3] 0.0- 2.0 sec 25.9 MBytes 109 Mbits/sec
[ 6] 0.0- 2.0 sec 25.9 MBytes 109 Mbits/sec
[SUM] 0.0- 2.0 sec 178 MBytes 746 Mbits/sec

(output omitted for brevity on server & client)

[ 7] 18.0-20.0 sec 28.2 MBytes 118 Mbits/sec
[ 8] 18.0-20.0 sec 28.8 MBytes 121 Mbits/sec
[ 5] 18.0-20.0 sec 28.0 MBytes 117 Mbits/sec
[ 4] 18.0-20.0 sec 28.0 MBytes 117 Mbits/sec
[ 3] 18.0-20.0 sec 28.9 MBytes 121 Mbits/sec
[ 9] 18.0-20.0 sec 28.8 MBytes 121 Mbits/sec
[ 6] 18.0-20.0 sec 28.9 MBytes 121 Mbits/sec
[SUM] 18.0-20.0 sec 200 MBytes 837 Mbits/sec
[SUM] 0.0-20.0 sec 1.93 GBytes 826 Mbits/sec
Server Side:
#iperf -s -w 1024k -i 2 -p 8000
------------------------------------------------------------
Server listening on TCP port 8000
TCP window size: 1.00 MByte
------------------------------------------------------------
[ 4] local 10.10.10.5 port 8000 connected with 10.10.10.10 port 53903
[ ID] Interval Transfer Bandwidth
[ 5] 0.0- 2.0 sec 25.7 MBytes 108 Mbits/sec
[ 8] 0.0- 2.0 sec 24.9 MBytes 104 Mbits/sec
[ 4] 0.0- 2.0 sec 24.9 MBytes 104 Mbits/sec
[ 9] 0.0- 2.0 sec 24.9 MBytes 104 Mbits/sec
[ 10] 0.0- 2.0 sec 25.9 MBytes 108 Mbits/sec
[ 7] 0.0- 2.0 sec 25.9 MBytes 109 Mbits/sec
[ 6] 0.0- 2.0 sec 25.9 MBytes 109 Mbits/sec
[SUM] 0.0- 2.0 sec 178 MBytes 747 Mbits/sec

[ 4] 18.0-20.0 sec 28.8 MBytes 121 Mbits/sec
[ 5] 18.0-20.0 sec 28.3 MBytes 119 Mbits/sec
[ 7] 18.0-20.0 sec 28.8 MBytes 121 Mbits/sec
[ 10] 18.0-20.0 sec 28.1 MBytes 118 Mbits/sec
[ 9] 18.0-20.0 sec 28.0 MBytes 118 Mbits/sec
[ 8] 18.0-20.0 sec 28.8 MBytes 121 Mbits/sec
[ 6] 18.0-20.0 sec 29.0 MBytes 121 Mbits/sec
[SUM] 18.0-20.0 sec 200 MBytes 838 Mbits/sec
[SUM] 0.0-20.1 sec 1.93 GBytes 825 Mbits/sec

As you can see from the tests above, we were able to increase throughput from 29Mb/s with a
single stream and the default TCP window to 826Mb/s using a larger window and parallel
streams. On a gigabit link, this is about the maximum throughput one could hope to achieve
before saturating the link and causing packet loss. The bottom line is that I was able to prove out
the network and verify that bandwidth capacity was not the issue. From that conclusion, I could
focus on tweaking TCP to get the most out of my network.
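
If your endpoints run Linux, a common place to start tweaking is the kernel's TCP buffer limits,
since raising the maximums lets the stack auto-tune up to a larger window. The 16MB ceiling
below is just an illustrative example, not a recommendation specific to these servers; size it to
your own BDP and re-run iPerf before and after to confirm the effect:

#sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem
#sysctl -w net.core.rmem_max=16777216
#sysctl -w net.core.wmem_max=16777216
#sysctl -w net.ipv4.tcp_rmem='4096 87380 16777216'
#sysctl -w net.ipv4.tcp_wmem='4096 65536 16777216'

The first command simply prints the current minimum, default, and maximum buffer sizes (in
bytes); the -w commands raise the maximums for the current boot. Add the same settings to
/etc/sysctl.conf if you want them to persist across reboots.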

I'd like to point out that we will never get 100% out of any link. Typically, 90% utilization is
about the real-world maximum anyone will achieve. Push much beyond that and you'll begin to
saturate the link and incur packet loss. I should also point out that SoftLayer doesn't directly
support iPerf, so it's up to you to install it and play around with it. It's such a versatile and
easy-to-use little piece of software that it's become invaluable to me, and I think it will become
invaluable to you as well!
