I just noticed that a new whitepaper was released, and as scoopmeister Eric Sloof hasn’t blogged about it yet (he’s probably sleeping), I figured I would blog about it. I just read the paper and it is a very good read; it is interesting to know that a single VM can actually saturate the bandwidth of a 10Gbps NIC. Also note the VM to Native comparisons!
Source: VMware vSphere 4.1 Networking Performance
Download: http://www.vmware.com/files/pdf/techpaper/Performance-Networking-vSphere4-1-WP.pdf
Description
This paper demonstrates that vSphere 4.1 is capable of meeting the performance demands of today’s throughput-intensive networking applications. The paper presents the results of experiments that used standard benchmarks to measure the networking performance of different operating systems in various configurations. These experiments examine the performance of VMs communicating with external hosts as well as with each other, demonstrate how varying the number of vCPUs and vNICs per VM influences performance, and show the scalability results of overcommitting the number of physical cores on a system by adding four 1-vCPU VMs for every core.
KyleMcM says
Page 8, last paragraph, “vSphere 4.1 supports up to four virtual NICs per VM”. Should this not be ‘ten virtual NICs per VM’?
Jaime says
The last sentence of Page 6 mentions something about this as well:
“Guest VMs were only scaled up to only four virtual NICs because that is the maximum supported by vSphere 4.1.”
Some clarification is definitely needed as to whether that limitation refers to four 10Gb NICs for the VM guest or to ten 1Gb NICs.
The Configuration Maximums document states 4 physical Intel or Broadcom 10Gb NICs.
Brandon says
It appears to be worded poorly.
The physical host is limited to four 10Gb NICs, so they matched that in the virtual machine and created a 1:1 relationship between the virtual NICs and the physical NICs. Their test setup says they set each virtual NIC to use a different physical NIC, I suppose by putting each virtual NIC in the VM’s configuration on a different port group, where the port group was specified to use a specific vmnic as active.
You can have 10 VMXNET3 adapters on a VM, but they wanted a 1:1 relationship between the virtual and physical NICs (it appears); the paper just wasn’t 100% clear on how they achieved it.
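If you want to reproduce that kind of pinning yourself, something like the pyVmomi sketch below would do it: it creates one port group per physical uplink on a standard vSwitch and makes that uplink the only active NIC, so any vNIC attached to the port group is effectively bound 1:1 to a pNIC. This is just my illustration of the idea, not how VMware say they did it in the paper; the host name, credentials, vSwitch and vmnic names are placeholders.

# Hypothetical sketch: pin standard-vSwitch port groups to single active uplinks
# with pyVmomi so that each vNIC attached to them maps 1:1 to a pNIC.
# Host name, credentials, vSwitch and vmnic names below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim


def add_pinned_portgroup(host_system, pg_name, vswitch_name, active_vmnic):
    """Create a port group on a standard vSwitch with a single active uplink."""
    teaming = vim.host.NetworkPolicy.NicTeamingPolicy()
    teaming.nicOrder = vim.host.NetworkPolicy.NicOrderPolicy()
    teaming.nicOrder.activeNic = [active_vmnic]  # only this pNIC carries traffic
    teaming.nicOrder.standbyNic = []             # no standby NICs for the test

    policy = vim.host.NetworkPolicy()
    policy.nicTeaming = teaming

    spec = vim.host.PortGroup.Specification()
    spec.name = pg_name
    spec.vlanId = 0
    spec.vswitchName = vswitch_name
    spec.policy = policy

    host_system.configManager.networkSystem.AddPortGroup(portgrp=spec)


if __name__ == "__main__":
    ctx = ssl._create_unverified_context()  # lab host with a self-signed cert
    si = SmartConnect(host="esx01.lab.local", user="root", pwd="password",
                      sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        host = view.view[0]
        # One port group per physical uplink; attach each of the VM's vNICs
        # to a different port group to get the 1:1 mapping described above.
        for i, vmnic in enumerate(["vmnic2", "vmnic3", "vmnic4", "vmnic5"]):
            add_pinned_portgroup(host, "PG-10G-%d" % i, "vSwitch0", vmnic)
    finally:
        Disconnect(si)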
KyleMcM says
I wouldn’t say it was worded poorly, I would say it was worded incorrectly. The paper does explain the 1:1 relationship, and that I get, but the statement about vSphere 4.1 supporting up to four virtual NICs is just, well… incorrect. I certainly think it needs to be corrected.
Ernest says
Hi Duncan,
I liked your humor about Eric Sloof. ;-)
Andrew Fidel says
One thing I don’t understand is that they enabled LRO for Linux but haven’t changed the VMXNET3 driver to be a TOE; coalescing transactions that way would essentially achieve much of the same performance increase.
Brian Knutsson says
It would also be interesting to see vSwitch vs dvSwitch.
Bilal Hashmi says
Thanks Duncan! Really a very good paper to read. Another reason why Windows… oh well, I won’t go there… but this paper inspired me to blog about how Windows VMs with multiple vCPUs are at times left with RSS not enabled… while the scheduler still has to schedule all of the vCPUs, not all of them play any role in processing received packets if RSS is not enabled…
Thanks Duncan for sharing, keep up the good work…