
Yellow Bricks

by Duncan Epping


Insufficient resources to satisfy HA failover level on cluster

Duncan Epping · Dec 4, 2012 ·

Yesterday I was asked where the error “Insufficient resources to satisfy HA failover level on cluster” comes from. Although it is hopefully clear to all of my regular readers that this is caused by vSphere HA Admission Control, I figured I would reemphasize it and make sure people can easily find it when they search my website.

When vSphere HA Admission Control is enabled, vCenter Server validates whether enough resources are available to guarantee that all virtual machines can be restarted. If this is not the case, the error about the HA failover level will appear. So what could cause this to happen, and how do you solve it?

  • Are all hosts in your cluster still available (any hosts down)?
    • If a host is down, insufficient resources may be available to guarantee restarts
  • Check which admission control policy has been selected
    • Depending on which policy has been selected, a single large reservation could skew the admission control algorithm (primarily the “host failures” policy is impacted by this)
  • Admission Control was recently enabled
    • The cluster could have been overcommitted, or various reservations could be in use, causing the policy to be violated the moment it was enabled

In most cases when this error pops up it is caused by a large reservation on memory or CPU, and that should always be the first thing to check. There are probably a million scripts out there to check this, but I prefer to use either the CloudPhysics appliance (a cloud-based, flexible solution with new reports weekly) or RVTools, a nice Windows-based utility that produces quick reports. If you are interested in more in-depth info on admission control, I suggest reading this section of my vSphere HA deepdive page.
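To see why one big reservation can trigger this error under the “host failures” policy, here is a rough Python sketch of the slot-size idea (my own simplification for illustration, not VMware's exact implementation; the defaults and the single-host-failure assumption are mine). The slot size is driven by the largest reservation in the cluster, so a single large reservation shrinks the number of slots for every VM:

```python
def slot_counts(hosts, vms, cpu_default_mhz=32):
    """Simplified "host failures tolerates" slot math (N=1 host failure).

    hosts: list of (cpu_capacity_mhz, mem_capacity_mb)
    vms:   list of (cpu_reservation_mhz, mem_reservation_mb, mem_overhead_mb)
    Returns (total_slots, slots_left_after_failover_capacity).
    """
    # Slot size: largest CPU reservation (with a default floor) and
    # largest memory reservation plus overhead.
    cpu_slot = max([cpu_default_mhz] + [v[0] for v in vms])
    mem_slot = max(v[1] + v[2] for v in vms)
    # Slots per host: the most constrained of the two resources.
    per_host = [min(cpu // cpu_slot, mem // mem_slot) for cpu, mem in hosts]
    total = sum(per_host)
    # Reserve the largest host's worth of slots as failover capacity.
    return total, total - max(per_host)

# Two 10 GHz / 32 GB hosts, five VMs with no reservations (64 MB overhead each):
hosts = [(10000, 32768), (10000, 32768)]
vms = [(0, 0, 64)] * 5
print(slot_counts(hosts, vms))                      # plenty of slots

# Add one VM with an 8 GB memory reservation: the memory slot size jumps
# to ~8 GB and the usable slot count collapses.
print(slot_counts(hosts, vms + [(0, 8192, 64)]))
```

In the second case the whole cluster is down to a handful of slots because of a single reservation, which is exactly the skew described above and why checking reservations first usually pays off.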



Reader Interactions

Comments

  1. ben @ geekswing says

    4 June, 2013 at 23:08

The host failures policy definitely impacted us. We had the default of “1” set and couldn’t turn any VMs on even though we were barely using 20% of our resources. Turns out the calculations for the host failures policy are pretty conservative. Good idea to go through all your VMs to check reservations, or to change to the percentage policy. It took me a while to get through it.

    Nice post.

  2. ben @ geekswing says

    4 June, 2013 at 23:40

    Just went through a bit briefly on your deep dives. HOLY MOLY! Will have to check those out when I have more time!

    • Duncan Epping says

      5 June, 2013 at 09:01

      It is called deepdive for a reason right 🙂
