
Yellow Bricks

by Duncan Epping


numa

vNUMA and vMotion

Duncan Epping · Oct 28, 2011 ·

I was listening to some VMworld talks over the weekend and something caught my attention that I hadn't realized before. The talk was VSP2122, "VMware vMotion in vSphere 5.0, Architecture and Performance". This probably doesn't apply to most of the people reading this, so let me set the scenario first:

  • Different hosts from a CPU/Memory perspective in a single cluster (different NUMA topology)
  • VMs with more than 8 vCPUs

Now the thing is that the vNUMA topology for a given VM is set during power-on, based on the NUMA topology of the physical host that received the power-on request. When you then move that VM to a host with a different NUMA topology, the vNUMA topology presented to the guest no longer matches the underlying hardware, which can result in reduced performance. This is also described in the Performance Best Practices whitepaper for vSphere 5.0. A nice example of how you can benefit from vNUMA is given in the recently released academic paper "Performance Evaluation of HPC Benchmarks on VMware's ESXi Server".
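To make the scenario easy to spot, here is a minimal sketch that flags hosts whose NUMA topology differs from the rest of the cluster. The host names and topology tuples are hypothetical; in practice you would pull these values from the vSphere API or esxtop rather than hardcoding them.

```python
# Minimal sketch: flag hosts in a cluster whose NUMA topology differs,
# since a VM gets its vNUMA layout at power-on and may land on a host
# with a different layout after vMotion. Host data below is hypothetical.

from collections import Counter

# (sockets, cores per socket, memory per NUMA node in GB) per host
hosts = {
    "esx01": (2, 8, 96),
    "esx02": (2, 8, 96),
    "esx03": (4, 6, 64),  # different topology: the problematic case
}

topologies = Counter(hosts.values())
if len(topologies) > 1:
    common, _ = topologies.most_common(1)[0]
    for name, topo in hosts.items():
        if topo != common:
            print(f"{name}: NUMA topology {topo} differs from majority {common}")
else:
    print("Cluster is homogeneous from a NUMA perspective")
```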

I've never been a huge fan of mixed clusters due to the complications they add around resource management and availability, but this is definitely another argument for avoiding them where and when possible.

Disabling TPS hurting performance?

Duncan Epping · May 11, 2010 ·

On the internal mailing list there was a discussion today around how disabling TPS (Transparent Page Sharing) could negatively impact performance. It is something I hadn't thought about yet, but when you do think about it, it actually makes sense and is definitely something to keep in mind.

Most new servers have some sort of NUMA architecture today. As hopefully all of you know, TPS does not cross a NUMA node boundary, which basically means that pages will not be shared between NUMA nodes. Another thing Frank Denneman already described in his article here is that there is a memory penalty associated with remotely allocated pages. (Did you know there is an esxtop metric, N%L, which shows the percentage of memory that is local?) Remote pages are accessed across an interconnect bus, which is always slower than so-called local memory.
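A quick back-of-the-envelope model shows what that penalty does to average latency as N%L drops. The latency numbers below are made-up placeholders for illustration, not measured values for any platform:

```python
# Back-of-the-envelope model of the remote-memory penalty.
# Latencies are illustrative assumptions, not measured values.

LOCAL_NS = 100.0   # hypothetical local access latency (ns)
REMOTE_NS = 160.0  # hypothetical remote access latency (ns)

def effective_latency_ns(pct_local: float) -> float:
    """Average memory latency given N%L, the percentage of local pages."""
    local = pct_local / 100.0
    return local * LOCAL_NS + (1.0 - local) * REMOTE_NS

for n_pct_l in (100, 90, 70, 50):
    print(f"N%L={n_pct_l:3d} -> ~{effective_latency_ns(n_pct_l):.0f} ns per access")
```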

Now you might ask: what is the link between NUMA, TPS, and degraded performance? Think about it for a second… TPS decreases the number of physical pages needed. If TPS is disabled there is no sharing, so more physical pages are needed, and the chance that a NUMA node fills up and pages end up on a remote node increases; as stated, this will impact performance. Funny how disabling a mechanism (TPS) which is often associated with "CPU overhead" can have a negative impact on memory latency.
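A toy calculation makes the chain of reasoning visible. The node size, footprint, and sharing ratio below are all assumptions chosen for illustration:

```python
# Toy model: how disabling page sharing can push memory onto a remote node.
# All numbers are hypothetical assumptions for illustration.

NODE_GB = 64          # memory per NUMA node (assumed)
VM_FOOTPRINT_GB = 80  # mapped guest memory for VMs on this node (assumed)
SHARE_RATIO = 0.25    # fraction of pages reclaimed by TPS (assumed)

def remote_gb(tps_enabled: bool) -> float:
    """Memory that no longer fits in the local node and spills remotely."""
    needed = VM_FOOTPRINT_GB * ((1.0 - SHARE_RATIO) if tps_enabled else 1.0)
    return max(0.0, needed - NODE_GB)

print(f"TPS on : {remote_gb(True):.0f} GB remote")   # 0 GB, everything fits
print(f"TPS off: {remote_gb(False):.0f} GB remote")  # 16 GB forced remote
```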

Memory incorrectly balanced

Duncan Epping · Dec 28, 2007 ·

During a VMware health check at one of my customers, I ran across the following warning in /var/log/vmkwarning: "Memory is incorrectly balanced between the NUMA nodes of this system which will lead to poor performance. See /proc/vmware/NUMA/hardware for details on your current memory configuration."
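As a rough illustration of what "incorrectly balanced" means, the sketch below compares per-node memory sizes and flags a skewed configuration. The node sizes are hypothetical; on classic ESX the real values would come from /proc/vmware/NUMA/hardware:

```python
# Rough sketch: detect skewed memory placement across NUMA nodes.
# Node sizes are hypothetical examples, e.g. DIMMs unevenly populated.

node_mem_gb = {0: 24, 1: 8}

avg = sum(node_mem_gb.values()) / len(node_mem_gb)
for node, mem in node_mem_gb.items():
    deviation = (mem - avg) / avg * 100
    flag = "  <-- imbalanced" if abs(deviation) > 20 else ""
    print(f"node {node}: {mem} GB ({deviation:+.0f}% vs average){flag}")
```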

