I was listening to some VMworld talks over the weekend and something caught my attention that I hadn’t realized before. The talk I was listening to was VSP2122, “VMware vMotion in vSphere 5.0, Architecture and Performance”. Now this probably doesn’t apply to most of the people reading this, so let me set the scenario first:
- Different hosts from a CPU/memory perspective in a single cluster (different NUMA topologies)
- VMs with more than 8 vCPUs
Now the thing is that the vNUMA topology for a given VM is set during power-on, based on the NUMA topology of the physical host that received the power-on request. When you then move the VM to a host with a different NUMA topology, it could result in reduced performance. This is also described in the Performance Best Practices whitepaper for vSphere 5.0. A nice example of how you can benefit from vNUMA is given in the recently released academic paper “Performance Evaluation of HPC Benchmarks on VMware’s ESXi Server”.
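As a side note, this power-on behaviour maps to a couple of advanced settings in the VM’s configuration file. Treat the lines below as a sketch based on the defaults as I understand them for vSphere 5.0, and validate against the documentation for your build:

numa.autosize.once = "TRUE"
numa.autosize = "FALSE"

With numa.autosize.once at its default of TRUE, the vNUMA topology is sized at the first power-on and reused for subsequent power-ons as long as the vCPU count stays the same; flipping numa.autosize to TRUE makes the topology get recalculated at every power-on instead.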
I’ve never been a huge fan of mixed clusters due to the complications they add around resource management and availability, but this is definitely another argument for avoiding them where and when possible.
Jason Boche says
Good article. I would almost always recommend against different host configurations in a cluster. It’s asking for trouble.
Paul Nothard says
Also worth noting that it’s no longer just AMD that has NUMA. A surprising number of people I meet still believe this to be the case.
Another good reason to get the experts in (PSO) when doing design work.
Yann Bizeul says
Interesting aspect of memory performance optimization. That leads me to a question:
How does VMware handle vMotion between sibling hosts with regard to NUMA optimization? I mean, some VMs may already have an ideal placement on a particular CPU/RAM bus, while a VM migrated onto that host may not be able to get the same ideal placement. And the same question arises when simply booting a VM on a host that cannot provide fully optimized CPU/RAM placement.
Is there a way to rebalance while running, to avoid “fragmentation”, or am I completely missing something?
Jay Weinshenker says
So I admit, I haven’t had the chance to test and post about it, but this brings up an inconsistency I came across. According to that presentation and other ESXi 5 documentation I’ve seen, vNUMA is only enabled automatically on VMs with at least 8 vCPUs. Yet if you go read the Best Practices for Performance Tuning of Latency-Sensitive Workloads in vSphere VMs (http://www.vmware.com/files/pdf/techpaper/VMW-Tuning-Latency-Sensitive-Workloads.pdf) there’s this:
“vNUMA is automatically enabled for VMs with more vCPUs than the number of cores per socket.”
Which is it?
Finally, I realize your post is on vNUMA (in ESXi 5) and not NUMA, but you need to consider NUMA as well when doing vMotions in ESX(i) 4 and 4.1; see kb.vmware.com/kb/2000740
Supposedly that issue is fixed in 4.1u2, which came out in the last 24 hours (http://www.vmware.com/support/vsphere4/doc/vsp_esxi41_u2_rel_notes.html, search on 2000740).
Duncan says
Thanks for the comment. I wasn’t sure either and asked one of the engineers. They told me “more than 8”.
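For completeness, this also lines up with the numa.vcpu.min advanced setting, which as far as I know defaults to 9 in vSphere 5.0:

numa.vcpu.min = "9"

In other words, a virtual NUMA topology is only exposed to VMs with 9 or more vCPUs unless you lower this value yourself.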
Vishal says
Hi Duncan
Why is vNUMA by default only enabled for a VM which has more than 8 cores? Why not for VMs with fewer than 8 cores?
Roberto Neigenfind says
Why is vNUMA by default not enabled for a VM which has fewer than 8 cores?