We had a discussion internally on static overhead memory. Coincidentally, I spoke with Aashish Parikh from the DRS team on this topic a couple of weeks ago when I was in Palo Alto. Aashish is working on improving the overhead memory estimation calculation so that both HA and DRS can be even more efficient when it comes to placing virtual machines. The question was around what determines the static memory overhead, and this is the answer that Aashish provided. I found it very useful, hence the reason I asked Aashish if it was okay to share it with the world. I added some bits and pieces where I felt additional details were needed though.
First of all, what are static overhead and dynamic overhead?
- When a VM is powered-off, the amount of overhead memory required to power it on is called static overhead memory.
- Once a VM is powered-on, the amount of overhead memory required to keep it running is called dynamic or runtime overhead memory.
Static overhead memory of a VM depends upon various factors:
- Several virtual machine configuration parameters, like the number of vCPUs, amount of vRAM, number of devices, etc.
- The enabling/disabling of various VMware features (FT, CBRC, etc.)
- The ESXi build number
Note that the static overhead memory estimation is calculated fairly conservatively, taking a worst-case scenario into account. This is the reason why engineering is exploring ways of improving it. One of the areas that can be improved is, for instance, including host configuration parameters. These parameters are things like CPU model, family & stepping, various CPUID bits, etc. As a result, two identical VMs residing on different hosts could have different overhead values.
But what about dynamic overhead? Dynamic overhead seems to be more accurate today, right? Well, there is a good reason for that: with dynamic overhead it is known where the VM is running, so the cost of running the VM on that host can easily be calculated. It is no longer a matter of estimating, but a matter of doing the math. That is the big difference: dynamic = the VM is running and we know where, versus static = the VM is powered off and we don't know where it might be powered on!
The same applies, for instance, to vMotion scenarios. Although the platform knows what the target destination will be, it still doesn't know how the target will treat that virtual machine. As such, the vMotion process aims to be conservative and uses static overhead memory instead of dynamic. One of the things, for instance, that changes the amount of overhead memory needed is the "monitor mode" used (BT, HV or HWMMU).
So what is being explored to improve it? First of all, including the additional host-side parameters mentioned above. Secondly, and equally important, calculating the overhead memory based on the VM / "target host" combination, or as engineering calls it, calculating the "static overhead of VM v on host h".
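To make the difference between today's conservative estimate and the proposed per-host calculation concrete, here is a minimal sketch in Python. The `overhead_mb` formula and the `monitor_factor` numbers are entirely made up for illustration — the real VMkernel estimation formula is not public — but the structure shows the idea: today's static overhead is effectively a worst case across all hosts the VM might land on, while "overhead of VM v on host h" is a single, tighter per-host number.

```python
# Illustrative sketch only -- the real VMkernel overhead formula is not public.
# The base formula and the per-host "monitor_factor" values are invented here
# purely to show the worst-case vs. per-host difference.

def overhead_mb(vm, host):
    """Hypothetical static overhead of VM v on host h, in MB."""
    base = 100 + 8 * vm["vcpus"] + 0.006 * vm["vram_mb"]
    # Host-side parameters (CPU family, monitor mode, etc.) shift the cost.
    return base * host["monitor_factor"]

def conservative_overhead_mb(vm, hosts):
    """Today's approach: take the worst case across all candidate hosts,
    since a powered-off VM could be powered on anywhere."""
    return max(overhead_mb(vm, h) for h in hosts)

vm = {"vcpus": 2, "vram_mb": 4096}
hosts = [
    {"name": "esx01", "monitor_factor": 1.0},  # e.g. HWMMU, cheapest
    {"name": "esx02", "monitor_factor": 1.4},  # e.g. BT, more expensive
]

for h in hosts:
    print(h["name"], round(overhead_mb(vm, h), 1))
print("conservative:", round(conservative_overhead_mb(vm, hosts), 1))
```

With a per-host calculation, placement on `esx01` would only need to reserve the lower figure, instead of the cluster-wide worst case.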
Now, why is this important? When is static overhead memory used? Static overhead memory is used by both HA and DRS. HA, for instance, uses it with admission control when calculating how many VMs can be powered on before unreserved resources are depleted. When you power on a virtual machine, the host-side "admission control" validates whether it has sufficient unreserved resources available for the static memory overhead to be guaranteed. DRS and vMotion also use the static memory overhead metric, for instance to ensure a virtual machine can be placed on a target host during a vMotion process, as the static memory overhead needs to be guaranteed.
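The host-side admission control check described above boils down to simple arithmetic. The sketch below is my own simplification, not the actual HA code: the point is that the static overhead is added on top of the VM's memory reservation, and both together must fit in the host's unreserved memory before the power-on is admitted.

```python
# Minimal sketch of host-side admission control for a power-on request.
# Function name and numbers are illustrative, not the actual HA implementation.

def can_power_on(host_unreserved_mb, vm_reservation_mb, static_overhead_mb):
    """Admit the power-on only if the reservation *plus* the static
    overhead can be guaranteed from the host's unreserved memory."""
    required = vm_reservation_mb + static_overhead_mb
    return host_unreserved_mb >= required

# A host with 2048 MB unreserved, a VM reserving 1536 MB with an
# estimated 242 MB static overhead (the 4 GB / 2 vCPU ESXi 4.1 figure
# mentioned in the comments below): 1536 + 242 = 1778 <= 2048.
print(can_power_on(2048, 1536, 242))  # True
```

Note how a few hundred MB of overhead per VM adds up quickly across a cluster, which is exactly why a less conservative estimate matters for consolidation ratios.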
As you can see, a fairly lengthy chunk of info on just a single simple metric in vCenter / ESXTOP… but very nice to know!
Mike Foley says
Hmmmm, I wonder if the hosts could use vCOPS data in their calculations? That could possibly show trends in static memory usage on a per-VM basis? (another reason to use vCOPS :))
Gabrie van Zanten says
If I understand your post correctly, static overhead becomes dynamic overhead when the VM is powered on. Would you have an estimated guess on the difference in MB between the two? In ESXi 4.1 a 4GB / 2 vCPU VM would have 242MB memory overhead. If this VM moves to different hosts, how much change could there be in overhead, worst case / best case? (Rough figure)
@Gabe: I did some tests and found that using the swMMU VMM gives you static overhead and the hwMMU VMM gives you dynamic overhead. Here are the results (I render them in two steps for clarity):
@Duncan, does vMotion priority have an impact on that?
Gabrie van Zanten says
Those are big differences! Thanks for checking this.
Abdullah Abdullah says
So a VM needs static memory overhead to be powered on (and this is what HA and vMotion use), and dynamic memory overhead for its management while it is running.
Is this why it is necessary to manage DRS well?