
Jumbo Frames on vSphere 5 Update 1

I previously posted an article on Jumbo Frames on vSphere 5, but at the time I was unable to test Jumbo Frames performance on Windows 2008 R2 because of a bug in the VMware VMXNET3 driver then available (see my article Windows VMXNET3 Performance Issues and Instability with vSphere 5.0). VMware has since released Update 1 for vSphere 5, which fixes the bug. I'll share the latest results from my testing below.

My lab setup for this round of testing was much the same as for the previous tests, with the exception of the ESXi 5 build, which is now 623860 due to Update 1. My results show that vSphere 5 Update 1 has not only fixed the Jumbo Frames bug on Windows 2008 and 2008 R2, it appears to have boosted performance across the board for all of the OSes tested. The graph below also shows that, based on my testing, Windows 2008 R2 beats Linux when using Jumbo Frames.

This is the first time I've seen a Windows OS beat Linux for network throughput when the Linux system has been tuned (which mine has). It should also be noted that very little tuning is required on Windows 2008 R2 to get this result: I simply enabled Jumbo Frames and RSS in the VMXNET3 driver, and that was it. The tuning required on Linux was a bit more involved. Even so, the win is only 9Mb/s, well within the margin of error.
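For reference, here is roughly what that tuning amounts to; treat it as a sketch with example values rather than the exact settings from my lab. On Windows 2008 R2 the Jumbo Packet (9000) and Receive Side Scaling settings live in the VMXNET3 adapter's Advanced properties in Device Manager, and RSS can also be checked at the TCP stack level with netsh:

    rem Check/enable RSS in the Windows TCP stack
    netsh int tcp set global rss=enabled
    netsh int tcp show global

On the Linux side (SLES 11 SP1 in my case), both the interface MTU and the TCP buffer sizes need attention; eth0 and the buffer values below are illustrative only:

    # Raise the MTU on the guest interface
    ifconfig eth0 mtu 9000

    # Increase the TCP send/receive buffer limits (example values)
    sysctl -w net.core.rmem_max=16777216
    sysctl -w net.core.wmem_max=16777216
    sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
    sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"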

[Graph: Jumbo Frames vs Non-Jumbo Frames Performance of Different OSes]

Results Summary

In this round of testing I had only very limited time available, so I have not re-tested vMotion performance. If the above results are anything to go by, it has probably improved slightly. The improvements between vSphere 5 GA and vSphere 5 Update 1 appear to be around 10% when using Jumbo Frames, and slightly more when using the standard MTU of 1500.

All of my tests were run using a 1MB window size and a single 10G link between two hosts. There was minimal other activity on the hosts at the time the tests were executed. Network IO Control is enabled on the hosts, as is teaming based on physical NIC load. Neither appears to have had any significant impact on the throughput results, as there was no contention at the time of the testing.

SLES 11 SP1 is still slightly ahead of Windows 2008 R2 with the standard MTU of 1500, but Windows 2008 R2 comes out ahead by 11Mb/s when using Jumbo Frames with an MTU of 9000. The biggest percentage improvement over vSphere 5 GA was with Windows 2003.
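For completeness, enabling Jumbo Frames on the host side of an ESXi 5 standard vSwitch amounts to something like the following sketch; vSwitch0 and vmk1 are example names, and a distributed vSwitch (which is what Network IO Control and load based teaming require) is configured through the vSphere Client instead:

    # Raise the MTU on the standard vSwitch (example name vSwitch0)
    esxcfg-vswitch -m 9000 vSwitch0

    # Raise the MTU on the vmkernel interface carrying the traffic (example vmk1)
    esxcli network ip interface set -i vmk1 -m 9000

    # Verify the new MTU settings
    esxcfg-vswitch -l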

Note: To achieve the quoted performance on Windows 2003 I needed to use 14 concurrent TCP streams; a single stream produced very poor results indeed. The Linux test required 2 concurrent TCP streams to produce the best performance, although the difference between 1 stream and 2 was only around 10%. Windows 2008 R2 required only a single TCP stream from iPerf to achieve the tested results.
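If you want to reproduce the tests, the iPerf invocations would look along these lines (a sketch assuming the classic iperf2 flags; the IP address and 60-second duration are placeholders, not my exact lab values):

    # On the receiving VM: listen with a 1MB TCP window
    iperf -s -w 1M

    # On the sending VM: a single stream was enough for Windows 2008 R2
    iperf -c 10.0.0.1 -w 1M -t 60

    # Windows 2003 needed 14 parallel streams to reach its best result
    iperf -c 10.0.0.1 -w 1M -t 60 -P 14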

Previous Test Results

Here is the graph from my previous test results. Comparing the two graphs, you can see the slight differences in performance.

[Graph: Jumbo Frames vs No Jumbo on ESXi 5]

Conclusions

My conclusions from my previous article haven't changed: I still see a benefit in using Jumbo Frames on 10G networks. A 10% performance gain with lower latency is worth the effort in many situations. If you are currently running vSphere 5 GA, I would highly recommend upgrading to Update 1 and to the latest version of VMware Tools, which contains the new VMXNET3 driver. Apart from the network performance boost, there are a whole lot of other improvements that make the upgrade worthwhile.

The original inspiration to write this article came from reading Jason Boche’s article Jumbo Frames Comparison Testing with IP Storage and vMotion.

This post first appeared on the Long White Virtual Clouds blog at longwhiteclouds.com, by Michael Webster. Copyright © 2012 – IT Solutions 2000 Ltd and Michael Webster. All rights reserved. Not to be reproduced for commercial purposes without written permission.
