Yellow Bricks

by Duncan Epping



HA Admission control: How can I check how much reserved resources are used?

Duncan Epping · Sep 14, 2018 ·

I have had this question twice in the past three months, so apparently this is not clear to everyone. With HA Admission Control you set aside capacity for fail-over scenarios; this capacity is reserved. But VMs use reservations as well, so how can you see the total reserved capacity in use, including what Admission Control sets aside?

Well, it turns out that is actually pretty simple. Open the Web Client (or the H5 client) and go to your cluster. Under the “Monitor” tab there is an option called “Resource Reservation”, which has graphs for both CPU and Memory. These graphs include admission control. To verify this, I took a screenshot in the H5 client before enabling admission control and another one after enabling it. As you can see, there is a big increase in “reserved resources”, which indicates that admission control is taken into consideration.

Before: [screenshot of the Resource Reservation graph with admission control disabled]

After: [screenshot of the Resource Reservation graph with admission control enabled]

This is also covered in our Deep Dive book, by the way; if you want to know more, pick it up.
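If you prefer to check the VM side of the equation programmatically, here is a minimal pyVmomi sketch that sums the CPU and memory reservations of all powered-on VMs in a cluster. The hostname, credentials, and cluster name are placeholders, and note that this only covers VM-level reservations; the admission-control capacity shown in the UI graph comes on top of this.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only, skips cert checks
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Find the cluster by name (placeholder name).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Cluster-01")

# Sum the configured reservations of all powered-on VMs.
cpu_mhz, mem_mb = 0, 0
for host in cluster.host:
    for vm in host.vm:
        if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
            cpu_mhz += vm.config.cpuAllocation.reservation
            mem_mb += vm.config.memoryAllocation.reservation

print(f"VM-level reservations: {cpu_mhz} MHz CPU, {mem_mb} MB memory")
Disconnect(si)
```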

What happened to MaxCostPerEsx41DS? It doesn’t seem to work in vSphere 6.x?

Duncan Epping · Aug 13, 2018 ·

Today I received a question that caught me by surprise: someone who upgraded from vSphere 5.0 noticed that, when putting a datastore into SDRS Maintenance Mode, the setting MaxCostPerEsx41DS no longer worked. This setting limits the number of active SvMotions on a single datastore, which you can imagine is desirable when you are “limited” in terms of performance. I was a bit surprised, as I had not heard that these settings had changed at all. A quick search, both on internal pages and externally, did not deliver any results either. After a discussion with some support folks and some more digging, I found a reference to a naming change. Not surprising, I guess, but as of vSphere 6.0 the setting is called MaxCostPerEsx6xDS. So if you would like to limit the number of SvMotions active at the same time, please note the name change.

For more background, I would like to refer to Frank’s excellent blog post on this topic here.
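If you want to see which of the two names your vCenter actually knows about, a quick pyVmomi sketch like the one below can help. Note that the full advanced-setting key path (the config.vpxd.ResourceManager prefix) is my assumption based on how vpxd settings are commonly exposed; verify it against your own vCenter before relying on it.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only, skips cert checks
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

# content.setting is vCenter's OptionManager (the advanced settings).
# The key paths below are assumptions; adjust to what your vCenter exposes.
for key in ("config.vpxd.ResourceManager.MaxCostPerEsx41DS",
            "config.vpxd.ResourceManager.MaxCostPerEsx6xDS"):
    try:
        value = si.content.setting.QueryOptions(key)[0].value
        print(f"{key} = {value}")
    except vim.fault.InvalidName:
        print(f"{key} is not a known setting on this vCenter")

Disconnect(si)
```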

Insufficient configured resources to satisfy the desired vSphere HA failover level on the cluster

Duncan Epping · Jul 12, 2018 ·

I was talking to the HA team this week, specifically about the upcoming HA book. One thing they mentioned is that people still seem to struggle with the concept of admission control. This is the reason I wrote a whole chapter on it years ago, yet there still seems to be a lot of confusion. One thing that is not clear to people is the percentage calculation. Some of our customers have VMs with extremely large reservations; in that case they typically switch from the “slot policy” to the “percentage based policy”, simply because the percentage based policy is a lot more flexible.

However, recently we have had some customers that hit the following error message:

Insufficient configured resources to satisfy the desired vSphere HA failover level on the cluster

In these particular situations (yes, there was a bug as well; read this article on that), the error appeared because the percentage was set lower than the equivalent of a full host. In other words, in a 4-host environment a single host equals 25%. In some cases customers set the percentage to a value lower than 25%. I am personally not sure why anyone would do this, as it contradicts the whole essence of admission control. Nevertheless, it happens.

This message indicates that, in the case of a host failure, you may not have sufficient resources to restart all the VMs. This, of course, is the result of the percentage being set lower than the value that equals a single host. Note, though, that this does not stop you from powering on new VMs. You will only be stopped from powering on new VMs once you exceed the available unreserved resources.

So if you are seeing this error message, please verify the configured percentage if you set it manually. Ensure that at a minimum it equals the largest host in the cluster.
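To make the arithmetic concrete, here is a small Python sketch of that sanity check; the host capacities are made-up numbers.

```python
def min_failover_percentage(host_capacities_mhz):
    """Smallest admission-control percentage that still covers the
    largest host in the cluster."""
    return 100.0 * max(host_capacities_mhz) / sum(host_capacities_mhz)

# Four identical hosts: each host represents 25% of the cluster, so any
# configured failover percentage below 25 can trigger the error above.
hosts = [20000, 20000, 20000, 20000]  # per-host CPU capacity in MHz (made up)
print(min_failover_percentage(hosts))  # -> 25.0
```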

** back to finalizing the book **

 

vSphere 6.5 and limits, do they still apply to SvMotion?

Duncan Epping · Jul 11, 2018 ·

I had a question this week about something I thought I had written about before, but apparently I had not. Hopefully by now most of you know that the I/O scheduler has changed over the past couple of years. Within ESXi we moved to a new scheduler, which is often referred to as mClock. This new scheduler, together with a new version of SIOC (Storage I/O Control), also resulted in some behavioral changes. Some of these may be obvious, others are not. I want to explicitly point out two things I have discussed in the past which changed with vSphere 6.5 (and 6.7 as well), both around the use of limits.

First of all: limits and SvMotion. In the past, when a limit was applied to a VM, this would also artificially limit SvMotion. As of vSphere 6.5 (this may apply to 6.0 as well, but I have not verified it) that is no longer the case. Starting with vSphere 6.0, the I/O scheduler creates a queue for every file on the file system (VMFS), to avoid, for instance, a VM stalling other types of I/O such as metadata I/O. These queues are called SchedQ and are briefly described by Cormac Hogan here. Of course, there is a lot more to it than Cormac or I discuss here, but I am not sure how much I can share, so I am not going to go there. Either way, if you were used to limits being applied to SvMotion as well, you are warned… this is no longer the case.
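For those who want to check which limit, if any, is configured on a VM’s disks, below is a minimal pyVmomi sketch that reads the per-VMDK IOPS limit. The connection details and VM name are placeholders; a value of -1 means unlimited.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only, skips cert checks
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Find the VM by name (placeholder name).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "test-vm-01")

for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualDisk):
        # storageIOAllocation.limit is the IOPS limit; -1 means unlimited.
        print(dev.deviceInfo.label, "IOPS limit:", dev.storageIOAllocation.limit)

Disconnect(si)
```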

Secondly, the normalization of I/Os under limits changed. In the past, when a limit was applied, I/Os were normalized at 32KB, meaning that a 64KB I/O would count as 2 I/Os and a 4KB I/O would count as 1. This was confusing for a lot of people, and as of vSphere 6.5 it is no longer the case. When you place a limit of 100 IOPS, the VMDKs will be limited to 100 IOPS regardless of the I/O size. This, by the way, was documented here on storagehub; I am not sure, though, how many people realized this.
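A tiny worked example of the difference, using the 32KB normalization unit from the old behavior:

```python
import math

def consumed_iops_old(io_size_kb, io_count, unit_kb=32):
    """Pre-6.5: I/Os were normalized to 32KB units, so a 64KB I/O
    counted as two I/Os against the limit."""
    return io_count * max(1, math.ceil(io_size_kb / unit_kb))

def consumed_iops_new(io_size_kb, io_count):
    """vSphere 6.5 and later: one I/O is one I/O, regardless of size."""
    return io_count

# A VM issuing 100 x 64KB I/Os per second against a 100 IOPS limit:
print(consumed_iops_old(64, 100))  # 200 -> throttled under the old scheme
print(consumed_iops_new(64, 100))  # 100 -> fits the limit as of 6.5
```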

Adding a fifth (virtual) ESXi host to vCenter Foundation

Duncan Epping · Jul 6, 2018 ·

When running a 4-node stretched cluster environment, it should be possible to use a “cheaper” vCenter Server license, namely vCenter Foundation. One of the limitations of vCenter Foundation is that you can only manage 4 hosts with it, and this is where some customers who wanted to manage a stretched cluster hit an issue. The issue occurs at the point where you want to add the Witness VM to the inventory. Deploying the VM, of course, works fine, but it becomes problematic when you add the virtual ESXi host (the Witness Appliance) to the vCenter Foundation instance, as vCenter simply will not allow you to add a 5th host. Yes, this 5th host would be a witness, will not be running any VMs, and even has a special license. Yet the “add host” wizard does not differentiate between a regular host and a virtual witness appliance.

Fortunately, there is a workaround, and it is fairly straightforward: it has to do with the order in which you add hosts to vCenter Foundation. If you add the witness VM before the physical hosts, then the appliance is not counted against the license. The license count (and allocation) apparently happens after the host has been added, but vCenter does validate beforehand. I guess we do this to avoid abuse.

So if you have vCenter Foundation and want to build a stretched cluster leveraging a 2+2+1 configuration, meaning 4 physical hosts and 1 witness VM, simply add the Witness VM to the inventory as a host first and then add the rest. For those wondering: yes, this is documented in the release notes of vSphere 6.5 Update, all the way at the bottom.
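In pyVmomi terms, the ordering could look roughly like the sketch below: the witness appliance is added as a standalone host first, and only then are the physical hosts joined to the cluster. All names and credentials are placeholders, and error and SSL-thumbprint handling is omitted for brevity.

```python
from pyVim.task import WaitForTask
from pyVmomi import vim

def connect_spec(fqdn, user, pwd):
    # force=True takes over a host that is already managed elsewhere;
    # in production you would also set sslThumbprint instead of forcing.
    return vim.host.ConnectSpec(hostName=fqdn, userName=user,
                                password=pwd, force=True)

def add_stretched_cluster_hosts(datacenter, cluster):
    # 1) Witness appliance first, as a standalone host outside the cluster,
    #    so it is not counted against the Foundation host limit.
    WaitForTask(datacenter.hostFolder.AddStandaloneHost_Task(
        spec=connect_spec("witness.lab.local", "root", "password"),
        compResSpec=None, addConnected=True))

    # 2) Then the four physical hosts, into the (stretched) cluster.
    for fqdn in ("esx01.lab.local", "esx02.lab.local",
                 "esx03.lab.local", "esx04.lab.local"):
        WaitForTask(cluster.AddHost_Task(
            spec=connect_spec(fqdn, "root", "password"), asConnected=True))
```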
