
Yellow Bricks

by Duncan Epping


HA admission control, the answers…

Duncan Epping · Nov 9, 2009 ·

I received a whole bunch of questions about my two latest posts on HA admission control. I have added all the info to my HA Deepdive page, but in case you don't regularly read that section, I will post the answers here as well:

  1. The default of 256MHz when no reservations are set is too conservative in my environment. What happens if you set a 100MHz reservation?
    Nothing. The minimum CPU value VMware HA uses in its calculations is 256MHz. Keep in mind that this applies both to slot sizes and to the percentage-based admission control policy. Of course, this can be overruled with an advanced setting (das.slotCpuInMHz), but I don't recommend doing this.
  2. What happens if you have an unbalanced cluster and the largest host fails?
    If your admission control policy is based on the number of host failures, VMware HA will take this into account. However, when you select a percentage, this is not the case. You will need to make sure that you specify a percentage which is equal to, or preferably larger than, the percentage of resources provided by the largest host in the cluster. Otherwise there's a chance that VMware HA can't restart all virtual machines.
  3. What would your recommendation be: reserve a specific percentage, or set the number of host failures VMware HA can tolerate?
    It depends. Yes, I know that is the obvious answer, but it actually does. There are three options, and each has its own advantages and disadvantages. Here you go:

    • Number of host failures
      Pros: Fully automated; when a host is added to the cluster, HA recalculates how many slots are available.
      Cons: Can be very conservative and inflexible when reservations are used, as the largest reservation dictates the slot size.
    • Percentage reserved
      Pros: Flexible. Although reservations still affect the amount of available resources, the impact on the environment is smaller.
      Cons: Manual calculations need to be done when adding additional hosts to the cluster. Unbalanced clusters can be a problem when the chosen percentage is too low.
    • Designated failover host
      Pros: What you see is what you get.
      Cons: What you see is what you get.
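To make the first two answers concrete, here is a minimal sketch of both checks: the 256MHz slot-size floor and the largest-host percentage rule. This is purely illustrative; the function names and the example cluster sizes are my own, not VMware's actual implementation.

```python
# Illustrative only: mimics the two admission control rules discussed above.
DEFAULT_SLOT_CPU_MHZ = 256  # minimum CPU slot size HA uses, even with smaller reservations

def cpu_slot_size(largest_cpu_reservation_mhz):
    """Slot size follows the largest reservation, but never drops below 256MHz."""
    return max(largest_cpu_reservation_mhz, DEFAULT_SLOT_CPU_MHZ)

def safe_percentage(host_cpu_mhz):
    """Minimum percentage to reserve so that failure of the largest host is covered."""
    return 100.0 * max(host_cpu_mhz) / sum(host_cpu_mhz)

# A 100MHz reservation still results in a 256MHz slot:
print(cpu_slot_size(100))                       # 256

# Unbalanced cluster: two 10GHz hosts and one 20GHz host. The largest host
# provides half the resources, so reserve at least 50%:
print(safe_percentage([10000, 10000, 20000]))   # 50.0
```

As the example shows, in an unbalanced cluster the percentage you need can be a lot higher than an even split across hosts would suggest.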


Server best practice, ESX, esxi, ha, vcenter, vSphere

About the Author

Duncan Epping is a Chief Technologist and Distinguished Engineering Architect at Broadcom. Besides writing on Yellow-Bricks, Duncan is the co-author of the vSAN Deep Dive and the vSphere Clustering Deep Dive book series. Duncan is also the host of the Unexplored Territory Podcast.

