Thursday, May 19, 2011

Home network benchmarking with iperf

Having recently purchased a new NAS device and upgraded the network cables in my house to cat6 (1 Gbps), I started wondering just how fast and healthy my network setup was. Are all cables in order? Are any of my switches a bottleneck? Are there any particularly slow endpoints?
[Image: "performance challenged"]
There is of course commercial software to assist with this, but after having compiled and loaded a bunch of Linux software onto my small NAS, it was only natural to give "iperf" a try. It's a small open source client-server utility: you start it in listening mode on one device and in transmission mode on another, and have it transfer a bunch of data over TCP. You can obtain iperf from SourceForge, from your Linux package manager, or just search for precompiled binaries if you are lazy.

Simple unidirectional test
The process is simple: log onto the receiving device and run iperf in listening mode. In the following case it's my Synology NAS:


NAS> iperf -s -p 5555
------------------------------------------------------------
Server listening on TCP port 5555
TCP window size: 85.3 kByte  (default)
------------------------------------------------------------

Then run iperf again, this time in client mode, targeting the server you just started:


casper@laptop:~/$ iperf -c 192.168.0.100 -p 5555
------------------------------------------------------------
Client connecting to 192.168.0.100, TCP port 5555
TCP window size: 19.0 kByte  (default)
------------------------------------------------------------
[  3] local 192.168.0.234 port 52742 connected with 192.168.0.100 port 5555
[ ID] Interval       Transfer     Bandwidth
[  3] 0.0-10.3 sec   26.2MBytes   21.4 Mbit/sec

Note that the above result is quite slow, a consequence of running over congested WiFi and benchmarking the slowest node in my household, the NAS.
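As a sanity check, the reported bandwidth can be recomputed from the raw numbers in the transcript (iperf reports MBytes in binary units of 2^20 bytes, but Mbit/sec in decimal units of 10^6 bits):

```shell
# Recompute the bandwidth reported above: 26.2 MBytes in 10.3 seconds.
# iperf's MBytes are 2^20 bytes; its Mbit/sec are 10^6 bits.
awk 'BEGIN { printf "%.1f Mbit/sec\n", 26.2 * 1048576 * 8 / 10.3 / 1e6 }'
```

This prints 21.3 Mbit/sec; the small discrepancy from the reported 21.4 comes from iperf rounding the transfer size and interval before printing them.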

This gives you a bits-per-second metric for how fast the client could transfer data to the server. To benchmark both upstream and downstream, you must reverse the client-server pair and repeat the process.
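Classic iperf (version 2) can also drive both directions from the client side, which saves swapping the roles manually. A sketch, reusing the server started above:

```shell
# Run the upstream test, then have the server connect back (sequential):
iperf -c 192.168.0.100 -p 5555 -r

# Or load both directions at the same time:
iperf -c 192.168.0.100 -p 5555 -d
```

Keep in mind that -d stresses both directions simultaneously, so its numbers are not directly comparable to two independent unidirectional runs.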

Complex bidirectional test case
By running tests between every possible pair of nodes, it becomes possible to draw some interesting conclusions about the setup in general. The wired setup in my household looks roughly like this (without smartphones, TV, PS3, IP phone etc.):


With Gigabit connections between all major nodes, I expected to see throughput at around 90% of the theoretical maximum bandwidth. What I saw was hardly consistent, however, as the following graph shows:


Note that solid edges signify physical cat6 cables, and dashed edges logical connections. Edges are labeled with directional throughput in Mbps, and nodes contain summarized throughput in the form in/downstream and out/upstream.

Lessons learned
Infrastructure nodes like the router and switches appear to have performed optimally, and there is no hint of faulty cabling. On several occasions, one can observe throughput beyond 940 Mbps, or 94% of the theoretical maximum.
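The ~940 Mbps ceiling is no accident: per-packet Ethernet, IP and TCP overhead eats roughly 5% of the raw Gigabit line rate. A rough back-of-the-envelope calculation, assuming standard 1500-byte frames:

```shell
# TCP payload per frame: 1500 - 20 (IP header) - 20 (TCP header) = 1460 bytes.
# Bytes on the wire per frame: 1500 + 14 (Ethernet header) + 4 (FCS)
#   + 8 (preamble) + 12 (inter-frame gap) = 1538 bytes.
awk 'BEGIN { printf "%.0f Mbit/sec\n", 1000 * 1460 / 1538 }'
```

This prints 949 Mbit/sec; TCP options such as timestamps shave off a few more Mbps, which lands right around the 940 Mbps observed.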
I was a little surprised to see that the overall best performer was actually my 3-year-old sub-notebook; its network stack (hardware + software) is obviously very well implemented. The PCs, particularly one of them, would probably benefit from a dedicated Gigabit adapter rather than relying on the one built into the motherboard.
However, the greatest surprise was to see how the NAS struggled, particularly when receiving data and when having to cross two switches. In all fairness, the slow CPU of the NAS (a 1.6 GHz ARM) could very likely be key to its poor iperf performance - loopback tests on all endpoints, including the NAS, suggest this to be the case.
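A loopback test runs both ends of iperf on the same machine, taking cables and switches out of the equation so that only the CPU and network stack limit throughput. Roughly:

```shell
# On the device under test, start a server in the background...
iperf -s -p 5555 &

# ...then connect to it over the loopback interface:
iperf -c 127.0.0.1 -p 5555
```

If the loopback number is barely above what the device achieves over the wire, the CPU is the bottleneck rather than the network.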

There are of course many other parameters one could care about - ping time, jitter, packet loss, UDP throughput, jumbo frames etc. - but I was mostly concerned with TCP bandwidth. I've learned that perhaps I should flash my DD-WRT router back to the original firmware, or try another open source image like Tomato, to see whether I can mitigate some of the observed bandwidth loss. It also seems abundantly clear that the network stack of my NAS device is more optimized for delivering data than receiving it.
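For reference, iperf can measure jitter and packet loss too by switching to UDP mode, where the client must also be told how fast to send:

```shell
# Server side, UDP mode:
iperf -s -u -p 5555

# Client side: push 100 Mbit/sec of UDP; the report then includes
# jitter and the percentage of datagrams lost:
iperf -c 192.168.0.100 -u -p 5555 -b 100M
```

This is the more interesting mode for real-time traffic like the IP phone, where steady latency matters more than raw bandwidth.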

I'd like to build a plugin for Synology's management interface, having it run native iperf against a Java implementation hosted on the client as an applet. I'm currently looking into whether this is possible; I fear it's too low-level for Java, as this was one of the reasons I switched to C# for dealing with raw sockets some years ago.
