
Yellow Bricks

by Duncan Epping



This host has no isolation addresses defined as required by vSphere HA

Duncan Epping · Dec 19, 2018 ·

I received a comment on one of my 2-node vSAN cluster articles that there was an issue with HA when disabling the Isolation Response. The Isolation Response is not required for 2-node configurations, as it is impossible to properly detect an isolation event, and vSAN has a mechanism that does exactly what the Isolation Response does: kill the VMs when they have become useless. The error witnessed was “This host has no isolation addresses defined as required by vSphere HA”, as shown in the screenshot below.

[Screenshot: “This host has no isolation addresses defined as required by vSphere HA” warning in the vSphere Client]

So now what? Well, first of all, as mentioned in the comments section as well, vSphere HA always checks whether an isolation address is specified. That could be the default gateway of the management network, or it could be the isolation address that you specified through the advanced setting das.isolationaddress. When you use das.isolationaddress, it often goes hand in hand with das.usedefaultisolationaddress set to false. That last setting, das.usedefaultisolationaddress, is what causes the error above to be triggered. What you should do in a 2-node configuration is the following:

  1. Do not configure the Isolation Response; the explanation can be found in the above-mentioned article.
  2. Do not configure das.usedefaultisolationaddress; if it is configured, set it to true.
  3. Make sure you have a gateway on the management VMkernel interface; if that is not the case, you can set das.isolationaddress to 127.0.0.1 to prevent the error from popping up.
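For those who prefer to set these HA advanced options through the API instead of the vSphere Client, below is a minimal pyVmomi sketch. The vCenter address, credentials and cluster name are placeholders for your own environment, and note that the isolation address setting is numbered (das.isolationaddress0 through das.isolationaddress9); treat this as a starting point rather than a production-ready script.

```python
# Minimal pyVmomi sketch: set the HA advanced options described above on a cluster.
# Hostname, credentials and cluster name are placeholders for your environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab only, use proper certificates in production
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="your-password", sslContext=context)
content = si.RetrieveContent()

# Find the cluster object by name
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "2-node-cluster")
view.Destroy()

# Only touch the HA (das) advanced options, leave the rest of the cluster config alone
das_options = [
    vim.option.OptionValue(key="das.usedefaultisolationaddress", value="true"),
    vim.option.OptionValue(key="das.isolationaddress0", value="127.0.0.1"),
]
spec = vim.cluster.ConfigSpecEx(dasConfig=vim.cluster.DasConfigInfo(option=das_options))
task = cluster.ReconfigureComputeResource_Task(spec, modify=True)
# Wait for the task to complete before disconnecting (polling omitted for brevity)
Disconnect(si)
```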

Hope this helps those hitting this error message.

Black Friday Gift: Free copy of the vSphere 6.7 Clustering Deep Dive, thanks Rubrik (ebook)

Duncan Epping · Nov 23, 2018 ·

Many asked us if the ebook would be made available for free again. Today I have the pleasure of announcing that Frank, Niels and I have worked once again with Rubrik and the VMUG organization to make the vSphere 6.7 Clustering Deep Dive book available for free! Yes, that is 0 USD / EUR, or whatever your currency is. The book signing at VMworld was wildly popular, which resulted in the follow-up discussion about the ebook.

Ready to up your vSphere game? Join us at #VMworld booth #P305 for a complimentary copy of @ClusterDeepDive + the chance to meet authors @DuncanYB @FrankDenneman @NHagoort! More info: https://t.co/0DQ7nI1wzX pic.twitter.com/7nIGEvjdBF

— Rubrik (@rubrikInc) November 2, 2018

You want a copy? All that we expect you to do is register on Rubrik’s website using your own email address. So register, start your download engines, and pick up a fresh copy of the vSphere Clustering Deep Dive here!

HA Futures: Per VM Admission Control – Part 4 of 4 – (Please comment!)

Duncan Epping · Nov 2, 2018 ·

As admission control hasn’t evolved in the past years, we figured we would include another potential Admission Control change. Right now, when you define admission control, you do this cluster-wide. You can define that you want to tolerate 1 host failure, for instance, but some VMs simply may be more important than other VMs. What do you do in that case?

Well, if that is the case, then with today’s implementation you are stuck. This became very clear when customers started using vSAN policies and defined different “failures to tolerate” for different workloads; it just makes sense. But as mentioned, HA does not allow you to do this. So our proposal is the following: Per VM FTT Admission Control.

In this case you would be able to define Host Failures To Tolerate on a per VM basis. This would provide a couple of benefits in my opinion:

  • You can set a higher Host Failures To Tolerate for critical workloads, increasing the chances of being able to restart them when a failure has occurred
  • Aligning the HA Host Failures To Tolerate with the vSAN Host Failures To Tolerate, resulting in similar availability from a compute and storage point of view
  • Lower resource fragmentation by doing Admission Control on a per-VM basis, even when using the “slot based algorithm”
  • Of course you can use the new admission control types as mentioned in my earlier post.
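To make this a bit more tangible, here is a small back-of-the-envelope sketch in Python of what per-VM FTT admission control could look like. This is purely my own simplified model (identical hosts, memory reservations only), not the actual proposal or implementation: for every failure count that some VM wants to tolerate, the VMs demanding that level of protection must still fit on the hosts that remain.

```python
# Illustrative only: a simplified per-VM FTT admission check, assuming
# identical hosts and memory reservations only.
def admit_cluster(vms, host_count, host_memory_gb):
    max_ftt = max(vm["ftt"] for vm in vms)
    for failures in range(1, max_ftt + 1):
        # VMs that want to tolerate this many host failures must still fit
        # on the hosts that remain after those failures.
        protected_gb = sum(vm["reservation_gb"] for vm in vms if vm["ftt"] >= failures)
        if protected_gb > (host_count - failures) * host_memory_gb:
            return False
    return True

vms = [
    {"name": "critical-db", "ftt": 2, "reservation_gb": 32},
    {"name": "test-vm",     "ftt": 1, "reservation_gb": 4},
]
print(admit_cluster(vms, host_count=4, host_memory_gb=128))  # True
```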

Hopefully that is clear, and hopefully it is a proposal you appreciate. Please leave a comment whether you find this useful or not. Please help shape the future of HA!

HA Futures: VMCP for Networking – Part 3 of 4 – (Please comment!)

Duncan Epping · Oct 30, 2018 ·

VMCP, or VM Component Protection, has been around for a while. Many of you are probably using this to mitigate storage issues. However, what if the VM network fails? Well, that is a problem right now… if the VM network fails, there is no response from HA. Many customers consider this to be a problem. So what would we like to propose? VM Component Protection for Networking!

How would this work? Well, the plan would be to allow you to enable VM Component Protection for Networking for any network on your host. This could be the vMotion network, different VM networks, etc. On this network, HA would of course need an IP address it could check “liveness” against, very similar to how it uses the default gateway to verify “host isolation”.

On top of that, besides validating liveness through an IP address, HA should of course also monitor the physical NIC. If either of the two fails, HA should take action immediately. What this action will be depends on the type of failure that has occurred. We are considering the following two types of responses to a failure:

  1. If vMotion still works, migrate the VM from impacted host to a healthy host
  2. If vMotion doesn’t work, restart the impacted VM on a healthy host
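Purely as an illustration of the proposed flow (my own sketch, with hypothetical helper inputs, not how the HA/FDM agent would actually implement it), the response logic could look roughly like this:

```python
# Illustrative sketch of the proposed VMCP-for-networking responses.
# The inputs (link state, liveness ping result, vMotion availability) are
# hypothetical stand-ins for checks the HA agent would perform on the host.
def respond_to_network_failure(vm_name, nic_link_up, liveness_ping_ok, vmotion_ok):
    if nic_link_up and liveness_ping_ok:
        return "no action needed"                      # network is healthy
    if vmotion_ok:
        return f"vMotion {vm_name} to a healthy host"  # response 1 above
    return f"restart {vm_name} on a healthy host"      # response 2 above

print(respond_to_network_failure("web01", nic_link_up=False,
                                 liveness_ping_ok=False, vmotion_ok=True))
```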

In addition to monitoring the health of the physical NIC, HA can also use in guest/VM monitoring techniques to monitor the network route from within the VM to a certain address/gateway. Would this technique be useful?

What do you think? Please provide feedback/comments below, even if it is just a “yes, please!” Please help shape the future of HA!

HA Futures: Admission Control – Part 2 of 4 – (Please comment, feedback needed!)

Duncan Epping · Oct 23, 2018 ·

Admission Control is always a difficult topic when I talk to customers. It seems that many people still don’t fully grasp the concept, or simply misunderstand how it works. To be honest, I can’t blame them. It doesn’t always make sense when you think things through. Most recently, for Admission Control we introduced a mechanism in which you can specify what the “tolerated performance loss” should be for any given VM. Unfortunately, this isn’t really admission control, as it doesn’t stop you from powering on new VMs; it does, however, warn you when you reach the threshold where a host failure would lead to the specified performance degradation.

After various discussions with the HA team over the past couple of years, we are now exploring what we can change about Admission Control to give you, as a user, more options to ensure VMs are not only restarted but also receive the resources you expect them to receive. As such, the HA team is proposing 3 different ways of doing Admission Control, and we would like to have your feedback on this potential change:

  • Admission Control based on reserved resources and VM overheads
    This is what you have today, nothing changes here. We use the static reservations and ensure that all VMs can be powered on!
  • Admission Control based on consumed resources
    This is similar to the “performance degradation tolerated” option. We will look at the average consumed CPU and memory resources (let’s say over the past 24 hours) and base our admission control calculations on that. This will allow you to guarantee that workload performance will be similar after a failure.
  • Admission Control based on configured resources
    This is a static way of doing admission control, similar to the first. The only difference is that here Admission Control will do the calculations based on the resources configured. So if you configured a VM with 24GB of memory, then we will do the math with 24GB of memory for that VM. The big advantage, of course, is that the VMs will always be able to claim the resources they have been assigned.
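To make the difference between the three options concrete, here is a small Python sketch that compares the failover capacity each model would require for a single host failure. This is purely my own simplified illustration (identical hosts, memory only, made-up numbers), not the actual HA admission control math:

```python
# Illustrative only: required failover memory under the three proposed
# admission control models, for a single host failure.
vms = [
    # reserved = static reservation, consumed = average over the past 24 hours,
    # configured = memory assigned to the VM
    {"name": "db01",  "reserved_gb": 8, "consumed_gb": 20, "configured_gb": 24},
    {"name": "web01", "reserved_gb": 0, "consumed_gb": 3,  "configured_gb": 8},
    {"name": "app01", "reserved_gb": 4, "consumed_gb": 10, "configured_gb": 16},
]

hosts, host_memory_gb = 4, 128
available_after_failure = (hosts - 1) * host_memory_gb  # capacity left after losing one host

for basis in ("reserved_gb", "consumed_gb", "configured_gb"):
    needed = sum(vm[basis] for vm in vms)
    print(f"{basis:<13} need {needed:>3} GB of {available_after_failure} GB "
          f"-> within limits: {needed <= available_after_failure}")
```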

In our opinion, adding these options should help to ensure that VMs will receive the resources you (or your customers) would expect them to get. Please help us by leaving a comment/providing feedback. If you agree that this would be helpful then let us know, if you have serious concerns then we would also like to know. Please help shape the future of HA!

