
Yellow Bricks

by Duncan Epping



vSAN Component vote recalculation with Witness Resilience, the follow up!

Duncan Epping · Mar 21, 2025

I wrote about the Witness Resilience feature a few years ago, and a question on the topic came up today. I ran some tests and then realized I already had an article describing how it works, but as I also tested a different scenario, I figured I would write a follow-up. In this case we are talking specifically about a 2-node configuration, but this also applies to a stretched cluster.

In a stretched cluster or 2-node configuration, when a data site goes down (or is placed into maintenance mode), a vote recalculation is automatically done on each object/component. This ensures that if the witness subsequently fails, the objects/VMs remain accessible. How that works I’ve explained here, and demonstrated for a 2-node cluster here.
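To make the vote math concrete, here is a minimal sketch in plain Python. The recalculated vote counts are illustrative assumptions, not the exact values vSAN assigns; the point is simply that after the recalculation the surviving data copy holds a majority on its own:

def has_quorum(alive, total):
    # An object stays accessible while the surviving components
    # hold a strict majority of all votes.
    return alive > total / 2

# Starting layout in a 2-node cluster: 1 vote per data component,
# 1 for the witness (3 votes in total).
votes = {"host1": 1, "host2": 1, "witness": 1}

# A data host fails (or is placed into maintenance mode) first...
votes.pop("host2")

# ...and vSAN recalculates, giving the surviving data copy a majority
# by itself (3 is an illustrative number).
votes["host1"] = 3
total = sum(votes.values())                       # 3 + 1 = 4

# If the witness now fails as well, the object still has quorum:
print(has_quorum(votes["host1"], total))          # True -> stays accessible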

But what if the Witness fails first? That one is easy to explain: the votes are not recalculated in this scenario, so if a data host then fails as well, the VMs become inaccessible. Of course, I tested this, and the screenshots below demonstrate it.

This screenshot shows the witness as Absent, while both “data” components still have 1 vote each. This means that if we now fail one of those hosts, the component will become inaccessible. Let’s do that next and then check the UI for more details.

As you can see below, the VM is now inaccessible. This is because there’s no longer a quorum: 2 out of 3 votes are gone.
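The same arithmetic, starting from the three votes visible in the screenshots (one per data component, one for the witness), shows why the object becomes inaccessible when the failures happen in this order; again a plain Python sketch, not actual vSAN code:

def has_quorum(alive, total):
    return alive > total / 2

# Witness fails first: no vote recalculation takes place.
votes = {"host1": 1, "host2": 1, "witness": 1}
total = sum(votes.values())                                 # 3

# Witness absent: 2 of 3 votes alive, the object is still accessible.
print(has_quorum(votes["host1"] + votes["host2"], total))   # True

# Now a data host fails as well: 1 of 3 votes left, quorum is lost.
print(has_quorum(votes["host1"], total))                    # False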

I hope that explains how this works.

Isolation Address in a 2-node direct connect vSAN environment?

Duncan Epping · Nov 22, 2017

As most of you know by now, when vSAN is enabled, vSphere HA uses the vSAN network for heartbeating. I recently wrote an article about the isolation address and its relationship with heartbeat datastores. In the comment section, Johann asked what the settings should be for 2-node Direct Connect with vSAN. A very valid question, as an isolation event is still possible, although less likely than in a stretched cluster, considering you do not have a network switch for vSAN in this configuration. Either way, you would still like the VMs that are impacted by the isolation to be powered off, and you would like the remaining host to power them back on.

So the question remains: which IP address do you select? Well, there’s no IP address to select in this particular case. As it is “direct connect”, there are probably only 2 IP addresses on that segment (one for host 1 and another for host 2). You cannot use the default gateway either, as that is the gateway for the management interface, which is the wrong network. So here is what I recommend:

  • Disable the Isolation Response >> set it to “leave powered on” or “disabled” (depending on the version used)
  • Disable the use of the default gateway by setting the following HA advanced setting (a scripted example follows this list):
    • das.usedefaultisolationaddress = false
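For those who prefer to script these two changes, a pyVmomi sketch along the following lines should work. The vCenter address, credentials, and cluster name are placeholders; treat it as a starting point and validate it in your own environment:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details.
si = SmartConnect(host="vcenter.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())

# Look up the cluster by name (placeholder name).
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "2node-cluster")
view.Destroy()

# Build a reconfigure spec: leave VMs powered on during isolation, and
# stop HA from using the default gateway as the isolation address.
spec = vim.cluster.ConfigSpecEx()
spec.dasConfig = vim.cluster.DasConfigInfo()
spec.dasConfig.defaultVmSettings = vim.cluster.DasVmSettings(
    isolationResponse="none")  # "none" means leave powered on
spec.dasConfig.option = [vim.option.OptionValue(
    key="das.usedefaultisolationaddress", value="false")]

task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
# Wait for the task to finish with your own tooling, then:
Disconnect(si)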

That probably makes you wonder what will happen when a host is isolated from the rest of the cluster (the other host and the witness). Well, when this happens, the VMs are still killed, not as a result of the isolation response kicking in, but as a result of vSAN kicking in. Here’s the process (a toy model in code follows the list):

  • Heartbeats are not received
  • Host elects itself primary
  • Host pings the isolation address
    • If the host can’t ping the gateway of the management interface then the host declares itself isolated
    • If the host can ping the gateway of the management interface then the host doesn’t declare itself isolated
  • Either way, the isolation response is not triggered as it is set to “Leave powered on”
  • vSAN will now automatically kill all VMs which have lost access to their components
    • The isolated host will lose quorum
    • vSAN objects will become isolated
    • The advanced setting “VSAN.AutoTerminateGhostVm=1” allows vSAN to kill the “ghosted” VMs (with all components inaccessible).
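
To summarize the sequence above as a toy model (plain Python, not the actual FDM/vSAN logic):

def on_heartbeat_loss(can_ping_isolation_address):
    # Toy model of the sequence described in the list above.
    steps = ["heartbeats are not received", "host elects itself primary"]
    if can_ping_isolation_address:
        steps.append("isolation address responds -> host does not declare itself isolated")
    else:
        steps.append("no response -> host declares itself isolated")
    # Either way HA takes no action, as the response is "leave powered on".
    steps.append('isolation response "leave powered on" -> HA takes no action')
    # It is vSAN, not HA, that terminates VMs whose objects are inaccessible.
    steps.append("vSAN kills the ghost VMs (VSAN.AutoTerminateGhostVm = 1)")
    return steps

for step in on_heartbeat_loss(can_ping_isolation_address=False):
    print(step)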

In other words, don’t worry about the isolation address in a 2-node configuration; vSAN has this situation covered! Note that “VSAN.AutoTerminateGhostVm=1” only works for 2-node and stretched vSAN configurations at this time.
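If you want to verify the setting on a host, a pyVmomi query along these lines should do it; the connection details and host name are again placeholders:

import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())

# Look up an ESXi host by name (placeholder name).
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi-01.local")
view.Destroy()

# Query the host advanced option by key (raises InvalidName if absent).
opt = host.configManager.advancedOption.QueryOptions("VSAN.AutoTerminateGhostVm")[0]
print(opt.key, "=", opt.value)   # expect: VSAN.AutoTerminateGhostVm = 1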

UPDATE:

I triggered a failure in my lab (which is 2-node, but not direct connect), and for those who are wondering, this is what you should be seeing in your syslog.log:

syslog.log:2017-11-29T13:45:28Z killInaccessibleVms.py [INFO]: Following VMs are powered on and HA protected in this host.
syslog.log:2017-11-29T13:45:28Z killInaccessibleVms.py [INFO]: * ['vm-01', 'vm-03', 'vm-04']
syslog.log:2017-11-29T13:45:32Z killInaccessibleVms.py [INFO]: List inaccessible VMs at round 1
syslog.log:2017-11-29T13:45:32Z killInaccessibleVms.py [INFO]: * ['vim.VirtualMachine:1', 'vim.VirtualMachine:2', 'vim.VirtualMachine:3']
syslog.log:2017-11-29T13:46:06Z killInaccessibleVms.py [INFO]: List inaccessible VMs at round 2
syslog.log:2017-11-29T13:46:06Z killInaccessibleVms.py [INFO]: * ['vim.VirtualMachine:1', 'vim.VirtualMachine:2', 'vim.VirtualMachine:3']
syslog.log:2017-11-29T13:46:06Z killInaccessibleVms.py [INFO]: Following VMs are found to have all objects inaccessible, and will be terminated.
syslog.log:2017-11-29T13:46:06Z killInaccessibleVms.py [INFO]: * ['vim.VirtualMachine:1', 'vim.VirtualMachine:2', 'vim.VirtualMachine:3']
syslog.log:2017-11-29T13:46:06Z killInaccessibleVms.py [INFO]: Start terminating VMs.
syslog.log:2017-11-29T13:46:06Z killInaccessibleVms.py [INFO]: Successfully terminated inaccessible VM: vm-01
syslog.log:2017-11-29T13:46:06Z killInaccessibleVms.py [INFO]: Successfully terminated inaccessible VM: vm-03
syslog.log:2017-11-29T13:46:06Z killInaccessibleVms.py [INFO]: Successfully terminated inaccessible VM: vm-04
syslog.log:2017-11-29T13:46:06Z killInaccessibleVms.py [INFO]: Finished killing the ghost vms
