Yellow Bricks

by Duncan Epping


What is static overhead memory?

Duncan Epping · May 6, 2013

We had a discussion internally on static overhead memory. Coincidentally, I spoke with Aashish Parikh from the DRS team on this topic a couple of weeks ago when I was in Palo Alto. Aashish is working on improving the overhead memory estimation calculation so that both HA and DRS can be even more efficient when it comes to placing virtual machines. The question was around what determines static overhead memory, and this is the answer Aashish provided. I found it very useful, which is why I asked Aashish if it was okay to share it with the world. I have added some bits and pieces where I felt additional details were needed.

First of all, what is static overhead and what is dynamic overhead:

  • When a VM is powered off, the amount of overhead memory required to power it on is called static overhead memory.
  • Once a VM is powered on, the amount of overhead memory required to keep it running is called dynamic, or runtime, overhead memory.

Static overhead memory of a VM depends upon various factors:

  1. Several virtual machine configuration parameters, like the number of vCPUs, the amount of vRAM, the number of devices, etc.
  2. The enabling/disabling of various VMware features (FT, CBRC, etc.)
  3. The ESXi build number

Note that the static overhead memory estimation is calculated fairly conservatively; a worst-case scenario is taken into account. This is the reason engineering is exploring ways of improving it. One of the areas that can be improved is, for instance, including host configuration parameters. These parameters are things like the CPU model, family & stepping, various CPUID bits, etc. As a result, two similar VMs residing on different hosts would have different overhead values.
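
Purely to make those inputs concrete, here is a minimal Python sketch of what such an estimator conceptually looks like. To be clear: the function, the parameter names, and every constant in it are invented for this example; the actual calculation is internal to ESXi and differs per build.

```python
from dataclasses import dataclass


@dataclass
class VMConfig:
    """Hypothetical container for the VM settings that drive the estimate."""
    num_vcpus: int
    vram_mb: int
    num_devices: int
    ft_enabled: bool = False
    cbrc_enabled: bool = False


def estimate_static_overhead_mb(vm: VMConfig) -> int:
    """Conservative (worst-case) overhead estimate for a powered-off VM.

    Every constant below is invented purely for illustration; the real
    values are internal to ESXi and change per build.
    """
    overhead_mb = 64                         # hypothetical fixed base cost
    overhead_mb += vm.num_vcpus * 32         # hypothetical per-vCPU cost
    overhead_mb += (vm.vram_mb // 1024) * 8  # hypothetical per-GB-of-vRAM cost
    overhead_mb += vm.num_devices * 4        # hypothetical per-device cost
    if vm.ft_enabled:                        # enabled features add their own cost
        overhead_mb += 128
    if vm.cbrc_enabled:
        overhead_mb += 64
    # The improvement engineering is exploring: also feed in host-side
    # parameters (CPU model, family & stepping, CPUID bits), turning this
    # into "static overhead of VM v on host h" instead of a single
    # host-independent worst case.
    return overhead_mb


print(estimate_static_overhead_mb(VMConfig(num_vcpus=2, vram_mb=4096, num_devices=6)))
```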

But what about dynamic overhead? It seems to be more accurate today, right? Well, there is a good reason for that: with dynamic overhead it is “known” where the VM is running, and the cost of running the VM on that host can easily be calculated. It is no longer a matter of estimating, but a matter of doing the math. That is the big difference: dynamic = the VM is running and we know where, versus static = the VM is powered off and we don’t know where it might be powered on!

The same applies, for instance, to vMotion scenarios. Although the platform knows what the target destination will be, it still doesn’t know how the target will treat that virtual machine. As such, the vMotion process aims to be conservative and uses static overhead memory instead of dynamic. One of the things, for instance, that changes the amount of overhead memory needed is the “monitor mode” used (BT, HV or HWMMU).

So what is being explored to improve it? First of all, including the additional host-side parameters mentioned above. Secondly, and equally important, calculating the overhead memory based on the VM -> “target host” combination, or as engineering calls it, calculating the “static overhead of VM v on host h”.

Now why is this important? When is static overhead memory used? Static overhead memory is used by both HA and DRS. HA, for instance, uses it in Admission Control when calculating how many VMs can be powered on before unreserved resources are depleted. When you power on a virtual machine, the host-side “admission control” will validate whether it has sufficient unreserved resources available for the “static memory overhead” to be guaranteed. But DRS and vMotion also use the static memory overhead metric, for instance to ensure a virtual machine can be placed on a target host during a vMotion process, as the static memory overhead needs to be guaranteed there as well.
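
Conceptually, that host-side check boils down to a single comparison. Here is a minimal Python sketch, again with invented function and parameter names, of the rule that the reservation plus the static overhead must fit within the host's unreserved capacity:

```python
def can_admit(host_unreserved_mb: int, vm_reservation_mb: int,
              static_overhead_mb: int) -> bool:
    """Illustrative host-side admission check for power-on or vMotion placement.

    A VM is only admitted when its memory reservation *plus* its static
    overhead can be guaranteed out of the host's unreserved capacity.
    (A sketch of the rule, not the actual ESXi implementation.)
    """
    return host_unreserved_mb >= vm_reservation_mb + static_overhead_mb


# Example: a host with 10 GB of unreserved memory, and a VM with a 2 GB
# reservation plus ~242 MB of static overhead (the ESXi 4.1 figure for a
# 2 vCPU / 4 GB VM mentioned in the comments below).
print(can_admit(10240, 2048, 242))  # True
```

HA Admission Control effectively repeats this arithmetic across the cluster to work out how many more VMs can still be powered on before unreserved resources run out.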

As you can see, a fairly lengthy chunk of info on just a single simple metric in vCenter / ESXTOP… but very nice to know!


Comments

  1. Mike Foley says

    6 May, 2013 at 15:49

    Hmmmm, I wonder if the hosts could use vCOPS data in their calculations? That could possibly show trends in static memory usage on a per-VM basis? (another reason to use vCOPS :))

    mike

  2. Gabrie van Zanten says

    7 May, 2013 at 00:31

    If I understand your post correctly, static overhead becomes dynamic overhead when the VM is powered on. Would you have an estimated guess on the difference in MB between the two? In ESXi 4.1 a 4GB / 2 vCPU VM would have 242MB memory overhead. If this VM moves to different hosts, how much change could there be in overhead, worst case / best case? (Rough figure)

    • NiTRo says

      11 May, 2013 at 17:37

      @Gabe: I did some tests and found that using the swMMU VMM gives you the static overhead and the hwMMU VMM gives you the dynamic overhead. Here are the results (I rendered them in two steps for clarity):
      http://files.hypervisor.fr/img/OVHD/ovhd_swmmu_32gb.png
      http://files.hypervisor.fr/img/OVHD/ovhd_hwmmu_32gb.png
      http://files.hypervisor.fr/img/OVHD/ovhd_swmmu_1011gb.png
      http://files.hypervisor.fr/img/OVHD/ovhd_hwmmu_1011gb.png
      @Duncan, does vMotion priority have an impact on that?

      • Gabrie van Zanten says

        12 May, 2013 at 12:17

        Those are big differences!!! Thanks for checking this.

  3. Abdullah Abdullah says

    8 May, 2013 at 14:56

    Like++

  4. Wellington says

    10 May, 2013 at 01:39

    A VM uses “two types” of overhead memory: static memory overhead to power it on (used by HA and vMotion), and dynamic memory overhead while it is running.
    Correct?
    Is that the reason it is necessary to manage DRS well?
