The Effects of Latency on Throughput

I still remember 2009, when I was assigned the task of validating a satellite internet link with 16 Mbps of bandwidth. At that time, that was a looooot in my country.

My biggest surprise was that no matter how many speed tests and download tests I ran, I simply couldn’t reach the full 16 Mbps and saturate the link. So off I went to complain to the service provider.

This lasted until a very patient young engineer explained it to me:

“Mário, it is a satellite link. The latency is too high; you have to run many simultaneous downloads or use a download accelerator.”

And that was when, for the first time, I learned that high-latency links like those satellite links (ping latencies of ~500-600 ms) affect not only real-time voice/video communication, but also throughput… the “speed”.

Let’s verify that…

To test it, I used two Lubuntu Linux virtual machines and iperf3.

iperf3 is an excellent utility for testing throughput, packet loss, delay, jitter, etc. It is full of options and allows changing buffer sizes, packet sizes, the transport protocol (TCP/UDP), the number of connections… a lot!

The VMs were connected to each other using a VirtualBox Host-only Network.

Important note: in all tests the delay shown is round-trip time. That means it measures the two-way delay, the time between a request being sent and the response being received back.

Delay ~ 0

For the first test, the delay was close to 0 ms, as can be seen below.

---------------------------------------------------------
user@lubuntu3:~$ ping -c10 -i0.2 lubuntu2
 PING lubuntu2 (192.168.56.2) 56(84) bytes of data.
 --- lubuntu2 ping statistics ---
 10 packets transmitted, 10 received, 0% packet loss, time 1811ms
 rtt min/avg/max/mdev = 0.370/0.613/0.716/0.103 ms

user@lubuntu2:~$  ping -c10 -i0.2 lubuntu3
 PING lubuntu3 (192.168.56.3) 56(84) bytes of data.
 --- lubuntu3 ping statistics ---
 10 packets transmitted, 10 received, 0% packet loss, time 1821ms
 rtt min/avg/max/mdev = 0.522/0.647/0.751/0.075 ms
---------------------------------------------------------

After validating the delay between the hosts, I started the iperf3 server on the lubuntu3 VM. The client will upload data to it unless we run the test with the --reverse option.

-----------------------------------------------------------
 user@lubuntu3:~$ iperf3 --server
 -----------------------------------------------------------
 Server listening on 5201
 -----------------------------------------------------------
[Diagram: Network setup]

The test with a TCP session running for 30 seconds and default configurations shows very good results:

user@lubuntu2:~$ iperf3  --time 30 --interval 0 --omit 1 --client  lubuntu3
 Connecting to host lubuntu3, port 5201
 [  4] local 192.168.56.2 port 51230 connected to 192.168.56.3 port 5201
 [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
 [  4]   0.00-30.00  sec  8.58 GBytes  2.46 Gbits/sec  61531    191 KBytes
 - - - - - - - - - - - - - - - - - - - - - - - - -
 [ ID] Interval           Transfer     Bandwidth       Retr
 [  4]   0.00-30.00  sec  8.58 GBytes  2.46 Gbits/sec  61531             sender
 [  4]   0.00-30.00  sec  8.59 GBytes  2.46 Gbits/sec                  receiver
 iperf Done.

What we see in the results is 2.46 Gbps reported by both the client and the server in the lubuntu2>lubuntu3 direction (client>server).

In the reverse direction:

user@lubuntu2:~$ iperf3  --time 30 --interval 0 --omit 1 --client  lubuntu3 --reverse
 Connecting to host lubuntu3, port 5201
 Reverse mode, remote host lubuntu3 is sending
 [  4] local 192.168.56.2 port 51234 connected to 192.168.56.3 port 5201
 [ ID] Interval           Transfer     Bandwidth
 [  4]   0.00-30.00  sec  8.09 GBytes  2.32 Gbits/sec
 - - - - - - - - - - - - - - - - - - - - - - - - -
 [ ID] Interval           Transfer     Bandwidth       Retr
 [  4]   0.00-30.00  sec  8.09 GBytes  2.32 Gbits/sec  53872             sender
 [  4]   0.00-30.00  sec  8.09 GBytes  2.32 Gbits/sec                  receiver
 iperf Done.

For some reason, the test in the reverse direction (server>client, lubuntu3>lubuntu2) is always slower: 2.32 Gbps.

Delay +10ms

After the baseline test, I added a modest delay of 10 ms.
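
The post doesn’t show how the extra delay was injected; on Linux this is commonly done with tc and the netem qdisc. A sketch, assuming the host-only interface is named enp0s8 (an assumption — interface names vary) and that half the delay is added on each VM:

```shell
# Add 5 ms of one-way delay on each VM => ~10 ms round-trip between them
sudo tc qdisc add dev enp0s8 root netem delay 5ms

# For the later tests, change the value instead of re-adding the qdisc
sudo tc qdisc change dev enp0s8 root netem delay 15ms    # ~30 ms RTT
sudo tc qdisc change dev enp0s8 root netem delay 75ms    # ~150 ms RTT
sudo tc qdisc change dev enp0s8 root netem delay 275ms   # ~550 ms RTT

# Remove the emulated delay when done
sudo tc qdisc del dev enp0s8 root
```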

user@lubuntu3:~$ ping -c10 -i0.2  lubuntu2
 PING lubuntu2 (192.168.56.2) 56(84) bytes of data.
 64 bytes from lubuntu2 (192.168.56.2): icmp_seq=1 ttl=64 time=11.2 ms
...
 64 bytes from lubuntu2 (192.168.56.2): icmp_seq=10 ttl=64 time=11.0 ms
 --- lubuntu2 ping statistics ---
 10 packets transmitted, 10 received, 0% packet loss, time 1810ms
 rtt min/avg/max/mdev = 10.836/11.303/11.980/0.323 ms

user@lubuntu2:~$  ping -c10 -i0.2  lubuntu3
 PING lubuntu3 (192.168.56.3) 56(84) bytes of data.
 64 bytes from lubuntu3 (192.168.56.3): icmp_seq=1 ttl=64 time=11.1 ms
...
 64 bytes from lubuntu3 (192.168.56.3): icmp_seq=10 ttl=64 time=11.3 ms
 --- lubuntu3 ping statistics ---
 10 packets transmitted, 10 received, 0% packet loss, time 1808ms
 rtt min/avg/max/mdev = 10.679/11.137/11.714/0.309 ms

The results of the TCP test already show a considerable throughput reduction.

user@lubuntu2:~$ iperf3  --time 30 --interval 0 --omit 1 --client  lubuntu3
 Connecting to host lubuntu3, port 5201
 [  4] local 192.168.56.2 port 51238 connected to 192.168.56.3 port 5201
 [ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
 [  4]   0.00-30.00  sec  1.29 GBytes   370 Mbits/sec  446    547 KBytes
 - - - - - - - - - - - - - - - - - - - - - - - - -
 [ ID] Interval           Transfer     Bandwidth       Retr
 [  4]   0.00-30.00  sec  1.29 GBytes   370 Mbits/sec  446             sender
 [  4]   0.00-30.00  sec  1.30 GBytes   371 Mbits/sec                  receiver
 iperf Done.

user@lubuntu2:~$ iperf3  --time 30 --interval 0 --omit 1 --client  lubuntu3 --reverse
 Connecting to host lubuntu3, port 5201
 Reverse mode, remote host lubuntu3 is sending
 [  4] local 192.168.56.2 port 51242 connected to 192.168.56.3 port 5201
 [ ID] Interval           Transfer     Bandwidth
 [  4]   0.00-30.00  sec   853 MBytes   238 Mbits/sec
 - - - - - - - - - - - - - - - - - - - - - - - - -
 [ ID] Interval           Transfer     Bandwidth       Retr
 [  4]   0.00-30.00  sec   854 MBytes   239 Mbits/sec  463             sender
 [  4]   0.00-30.00  sec   853 MBytes   239 Mbits/sec                  receiver
 iperf Done.

With ~10ms delay, we got 370Mbps (client>server) and 239Mbps (server>client).

Delay = 30ms

For the next test, I added 29 ms on top of the baseline to reach an average delay of 30 ms.

 --- lubuntu2 ping statistics ---
 10 packets transmitted, 10 received, 0% packet loss, time 1812ms
 rtt min/avg/max/mdev = 29.754/30.038/30.187/0.219 ms

 --- lubuntu3 ping statistics ---
 10 packets transmitted, 10 received, 0% packet loss, time 1811ms
 rtt min/avg/max/mdev = 29.556/29.994/30.331/0.330 ms

These were the results of the TCP tests:

user@lubuntu2:~$ iperf3 -t 30 -i 0 -O 1 -c  lubuntu3
 [ ID] Interval           Transfer     Bandwidth       Retr
 [  4]   0.00-30.00  sec  1.13 GBytes   323 Mbits/sec   90             sender
 [  4]   0.00-30.00  sec  1.13 GBytes   324 Mbits/sec                  receiver
 iperf Done.

user@lubuntu2:~$ iperf3 -t 30 -i 0 -O 1 -c  lubuntu3 -R
 [ ID] Interval           Transfer     Bandwidth       Retr
 [  4]   0.00-30.00  sec   304 MBytes  84.9 Mbits/sec  181             sender
 [  4]   0.00-30.00  sec   304 MBytes  85.1 Mbits/sec                  receiver
 iperf Done.

Once again, a reduction: 323 Mbps and 85 Mbps.

Delay 150ms

 --- lubuntu3 ping statistics ---
 10 packets transmitted, 10 received, 0% packet loss, time 1804ms
 rtt min/avg/max/mdev = 149.677/150.130/150.410/0.382 ms

These were the results for 150 ms:

 user@lubuntu2:~$ iperf3 -t 30 -i 0 -O 1 -c  lubuntu3
 [ ID] Interval           Transfer     Bandwidth       Retr
 [  4]   0.00-30.00  sec   487 MBytes   136 Mbits/sec  467             sender
 [  4]   0.00-30.00  sec   489 MBytes   137 Mbits/sec                  receiver
 iperf Done.

user@lubuntu2:~$ iperf3 -t 30 -i 0 -O 1 -c  lubuntu3 -R
 [ ID] Interval           Transfer     Bandwidth       Retr
 [  4]   0.00-30.00  sec   115 MBytes  32.2 Mbits/sec  630             sender
 [  4]   0.00-30.00  sec   115 MBytes  32.1 Mbits/sec                  receiver
 iperf Done.

Delay 550ms

I finally got to 550ms, a value equivalent to a geostationary satellite link.

 --- lubuntu3 ping statistics ---
 10 packets transmitted, 10 received, 0% packet loss, time 1811ms
 rtt min/avg/max/mdev = 549.841/550.164/550.638/0.846 ms

I ran the same TCP tests.

 user@lubuntu2:~$ iperf3 -t 30 -i 0 -O 1 -c  lubuntu3
 [ ID] Interval           Transfer     Bandwidth       Retr
 [  4]   0.00-30.00  sec   124 MBytes  34.7 Mbits/sec    0             sender
 [  4]   0.00-30.00  sec   123 MBytes  34.5 Mbits/sec                  receiver
 iperf Done.

user@lubuntu2:~$ iperf3 -t 30 -i 0 -O 1 -c  lubuntu3 -R
 - - - - - - - - - - - - - - - - - - - - - - - - -
 [ ID] Interval           Transfer     Bandwidth       Retr
 [  4]   0.00-30.00  sec  39.4 MBytes  11.0 Mbits/sec  390             sender
 [  4]   0.00-30.00  sec  37.2 MBytes  10.4 Mbits/sec                  receiver
 iperf Done.

Clearly, you can’t expect very high speeds on a satellite link unless you use TCP optimizations or caching. The results were 34.7 Mbps (client>server) and 11.0 Mbps (server>client).

Summary

There is nothing better than some Excel graphs to summarize the numbers.

[Chart: Throughput vs. Latency 1]

To better understand the impact, the graph below excludes the first tests, which had a delay smaller than 1 ms.

[Chart: Throughput vs. Latency 2]

If you were not convinced before, these graphs will certainly change that.

But why?

Why does additional delay (latency) reduce one-way throughput? Because a TCP download or upload is never really just one-way.

TCP is what we call a reliable protocol. To be reliable, the endpoint receiving the data must regularly send packets acknowledging reception of the previous packets, and the sender can only keep a limited amount of unacknowledged data (its window) in flight before it has to stop and wait for those acknowledgements. The longer the round trip, the longer the sender waits. And since links are rarely perfect (they have losses), those imperfections hurt even more, because the hosts waste additional round trips trying to “understand each other” while waiting for those important acknowledgements.
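
This limit can be put into numbers with the bandwidth-delay product: a single TCP stream can transfer at most one window of data per round trip. A quick sketch, assuming a fixed 64 KiB window purely for illustration (real stacks scale the window dynamically):

```shell
# Upper bound for a single TCP stream: throughput <= window / RTT.
# A 64 KiB window is 64 * 1024 * 8 = 524288 bits.
for rtt_ms in 10 30 150 550; do
    awk -v rtt="$rtt_ms" 'BEGIN {
        printf "%3d ms RTT -> at most %6.2f Mbit/s\n",
               rtt, 524288 / (rtt / 1000) / 1e6
    }'
done
# Prints:
#  10 ms RTT -> at most  52.43 Mbit/s
#  30 ms RTT -> at most  17.48 Mbit/s
# 150 ms RTT -> at most   3.50 Mbit/s
# 550 ms RTT -> at most   0.95 Mbit/s
```

Note that a fixed 64 KiB window at 550 ms would allow less than 1 Mbit/s; the 34.7 Mbps measured above was only possible because the kernel scaled the window far beyond that.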

Is there a solution?

As I mentioned in a different post, unless the latency is caused by some cheap equipment or by saturation, there is no way to reduce it. However, there are ways to make life better:

  • Caching
  • Protocol optimization/acceleration (especially for TCP)
  • Parallel TCP sessions in the same data transfer
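
The third option is easy to try with iperf3 itself, which has a --parallel (-P) flag; each stream gets its own window, so the aggregate throughput can approach N times the single-stream result. A sketch, using the same hostname as in the tests above:

```shell
# Open 10 parallel TCP streams instead of 1; iperf3 prints a [SUM] line
# with the aggregate throughput at the end of the test
iperf3 -c lubuntu3 -t 30 -P 10
```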

————

Host Tuning Background Information
http://fasterdata.es.net/host-tuning/background/

Minimizing Latency in Satellite Networks
http://www.satellitetoday.com/telecom/2009/09/01/minimizing-latency-in-satellite-networks/

Host Tuning
http://fasterdata.es.net/host-tuning/

Network Tuning
http://fasterdata.es.net/network-tuning/

TCP Throughput Calculator
https://www.switch.ch/network/tools/tcp_throughput/

IPerf3 Download
https://iperf.fr/iperf-download.php

IPerf2/3 Basic Commands
https://fasterdata.es.net/performance-testing/network-troubleshooting-tools/iperf/

OWAMP (One-Way Active Measurement Protocol)
http://software.internet2.edu/owamp/
