I’ve written about vSAN and vSphere HA various times, but I don’t think this behavior has ever been explicitly called out. Cormac and I were doing some tests this week and noticed something. Looking at the results, I realized I had described it in my HA book a long time ago, but it is tucked away so deeply that probably no one has noticed.
In a traditional environment when you enable HA you will automatically have HA heartbeat datastores selected. These heartbeat datastores are used by the HA primary host to determine what has happened to a host which is no longer reachable over the management network. In other words, when a host is isolated it will communicate this to the HA primary using the heartbeat datastores. It will also inform the HA primary which VMs were powered off as the result of this isolation event (or not powered off when the isolation response is not configured).
Now, with vSAN, the hosts do not communicate over the management network but over the vSAN network. Typically in a vSAN environment there is only vSAN storage, so there are no heartbeat datastores. As such, when a host is isolated, it cannot communicate this to the HA primary. Remember, the network is down, and without access to the vSAN datastore the host cannot communicate through that path either. HA will still function as expected though: you can set the isolation response to power-off, and the VMs will be killed and restarted. That is, if isolation is declared.
So when is isolation declared? A host declares itself isolated only when both of the following are true (a small sketch follows the list):
- It is not receiving any communication from the primary
- It cannot ping the isolation address
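To make the combined condition explicit, here is a minimal Python sketch of that decision logic. This is an illustration of the two checks only, not VMware’s actual FDM implementation; `receiving_primary_heartbeats()` is a hypothetical placeholder, and the gateway IP in the example is an assumption.

```python
import subprocess

# Minimal illustration of the two-condition isolation check described above.
# NOT VMware's FDM implementation; receiving_primary_heartbeats() is a
# hypothetical placeholder for the HA heartbeat/election traffic check.

def receiving_primary_heartbeats() -> bool:
    """Placeholder: would return True while the host still receives
    HA traffic from the primary (or can take part in a new election)."""
    return False

def can_ping(address: str) -> bool:
    """True if the isolation address answers ICMP echo (Linux 'ping' flags)."""
    result = subprocess.run(
        ["ping", "-c", "3", "-W", "2", address],
        capture_output=True,
    )
    return result.returncode == 0

def should_declare_isolation(isolation_addresses: list[str]) -> bool:
    # Both conditions must hold: if the host still hears the primary,
    # or any isolation address responds, it is NOT isolated.
    if receiving_primary_heartbeats():
        return False
    return not any(can_ping(a) for a in isolation_addresses)

# By default the isolation address is the default gateway of the
# management network (192.168.1.1 here is just an example value).
print(should_declare_isolation(["192.168.1.1"]))
```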
Now, if you have not set any advanced settings, the default gateway of the management network is used as the isolation address. Imagine the vSAN network being isolated on a given host while, for whatever reason, the management network is not. In that scenario isolation is not declared: the host can still ping the isolation address using the management network VMkernel interface. HOWEVER… vSphere HA will restart the VMs anyway. The VMs have lost access to disk, so the locks on the VMDKs are lost. The rest of the cluster notices the host is gone, concludes the VMs must be dead since the locks are released, and restarts them.
That is when you could end up in the situation where the VMs are running on the isolated host and also somewhere else in the cluster, both with the same MAC address and the same name / IP address. Not a good situation. Had datastore heartbeats been enabled, this would have been prevented: the isolated host would inform the primary that it is isolated, but it would also inform the primary about the state of the VMs, which would be powered on. The primary would then decide not to restart the VMs. However, the VMs running on the isolated host are more or less useless, as they can no longer write to disk.
Let’s describe what we tested and what the outcome was in a format that is a bit easier to consume: a table.
| Isolation Address | Datastore Heartbeats | Observed behavior |
|---|---|---|
| IP on vSAN network | Not configured | Isolated host cannot ping the isolation address; isolation declared; VMs killed and restarted |
| Management network | Not configured | Isolated host can ping the isolation address; isolation not declared; yet the rest of the cluster restarts the VMs even though they are still running on the isolated host |
| IP on vSAN network | Configured | Isolated host cannot ping the isolation address; isolation declared; VMs killed and restarted |
| Management network | Configured | VMs are not powered off and not restarted: the “isolated host” can still ping the isolation address on the management network, and the datastore heartbeat mechanism informs the primary about the VM state. So the primary knows the HA network is not working, but the VMs are not powered off. |
So what did we learn, and what should you do when you have vSAN?
- Always use an isolation address that is in the same network as vSAN! This way, during an isolation event, the isolation is validated using the vSAN VMkernel interface.
- Always set the isolation response to power-off. (My personal opinion, based on testing.) This avoids the scenario of duplicate MAC addresses / IP addresses / names on the network when a single network is isolated for a specific host.
- If you also have traditional storage, enable heartbeat datastores. It doesn’t add much in terms of availability, but it does allow the HA hosts to communicate state through the datastore.

For those who want to script these settings, a sketch follows below.
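The sketch below shows how the first two recommendations could be applied through the vSphere API using pyVmomi. The vCenter hostname, credentials, cluster name, and the isolation address (an SVI on the vSAN segment is assumed) are all placeholders; treat this as an illustration under those assumptions, not a supported configuration script.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Hypothetical connection details -- replace with your own.
ctx = ssl._create_unverified_context()  # lab only; validate certs in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="VMware1!",
                  sslContext=ctx)
content = si.RetrieveContent()

# Find the cluster by name ("vSAN-Cluster" is an assumed name).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "vSAN-Cluster")
view.Destroy()

das = vim.cluster.DasConfigInfo()
das.enabled = True
# Recommendation: isolation response = power off.
das.defaultVmSettings = vim.cluster.DasVmSettings(isolationResponse="powerOff")
# Recommendation: skip the management gateway and point
# das.isolationaddress0 at an IP on the vSAN network (e.g. an SVI).
das.option = [
    vim.option.OptionValue(key="das.usedefaultisolationaddress", value="false"),
    vim.option.OptionValue(key="das.isolationaddress0", value="172.16.10.1"),
]

# Returns a Task; wait on it as needed.
task = cluster.ReconfigureComputeResource_Task(
    vim.cluster.ConfigSpecEx(dasConfig=das), modify=True)

Disconnect(si)
```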
PS1: For those who don’t know: by default, HA automatically selects heartbeat datastores. In a vSAN-only environment you can disable this by selecting “Use datastore from only the specified list” in the HA interface (while selecting no datastores) and then setting “das.ignoreInsufficientHbDatastore = true” in the advanced HA settings. A sketch of the API-level equivalent follows.
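As a rough API-level equivalent of those two UI steps, the same DasConfigInfo spec can carry them. This is a sketch under the assumption that “Use datastore from only the specified list” maps to the userSelectedDs heartbeat-datastore candidate policy; connect and locate `cluster` as in the previous sketch.

```python
from pyVmomi import vim

# Build only the HA spec fragment; apply it with
# cluster.ReconfigureComputeResource_Task(spec, modify=True).
das = vim.cluster.DasConfigInfo()
# "Use datastore from only the specified list" == userSelectedDs policy;
# with an empty list, no heartbeat datastore is ever selected.
das.hBDatastoreCandidatePolicy = "userSelectedDs"
das.heartbeatDatastore = []
# Suppress the "number of heartbeat datastores is insufficient" warning.
das.option = [vim.option.OptionValue(
    key="das.ignoreInsufficientHbDatastore", value="true")]

spec = vim.cluster.ConfigSpecEx(dasConfig=das)
```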
PS2: In a non-routable vSAN network environment you could create a Switch Virtual Interface (SVI) on the physical switch. This gives you an IP address on the vSAN segment that can be used as the isolation address via the advanced setting das.isolationaddress0.
Marco says
Hi Duncan
Thanks for your post! But what is a good isolation address in a dedicated (isolated) Layer 2 vSAN network? Especially in a stretched cluster environment? The gateway that is needed for the witness communication? I can’t find verified examples anywhere.
Duncan Epping says
that could be an option indeed, or another option would be a Switch Virtual Interface (SVI).
José says
What about when there’s an external datastore (iSCSI/FC/etc)? What’s the correct vSphere HA heartbeat datastore configuration?
Russ says
I am curious about this as well… the only other datastores we have are NFS. Can I select them as an option?
Johann says
Hi Duncan, what would be your recommended settings if you are using vSAN ROBO with cross-connect for both vSAN and vMotion traffic?
Johann says
Sorry, I want to provide more detail on my previous question: when using vSAN ROBO with direct-connect, you do not have any IP addresses available on the vSAN network to specify as isolation addresses. In this scenario, what would be your recommendation for the HA settings?
Duncan says
Very valid question, I will post a new blog later today to share it with others as well.
Duncan Epping says
http://www.yellow-bricks.com/2017/11/22/isolation-address-2-node-direct-connect-vsan-environment/
kartikay says
Hi Duncan,
If we encounter vSAN partitioning on a host with a Linux VM (CentOS 7) running on it, what would be the correct approach to follow?
After recovery from the partitioning and rebooting the VM, the system goes into maintenance mode due to a failing filesystem check. Is this because of the “ghost” VM?
Thanks,
Kartikay
Duncan Epping says
I am not sure why this is, can’t say I have ever seen this. I would recommend contacting support.
Asaf Blubstein says
Hi Duncan,
Thanks a lot for the post, this is extremely helpful.
Would you recommend disabling the default gateway address for isolation check by setting das.usedefaultisolationaddress to false?
I know this is a best practice for a stretched cluster, but I was wondering if it should also be disabled in a regular cluster with separated vSAN and management networks.
Thanks,
Asaf
duncan@yellow-bricks says
For vSAN it probably should be disabled by default, as it is unlikely that the default gateway of the management interface would be accessible on the vSAN network. I don’t like to give “default recommendations” in this case, as I prefer customers to think these situations through.
Fred says
Hi Duncan
There is a scenario: the management network is lost but the vSAN network is fine, and some VMs share an uplink with the management network. The VMs cannot be reached and vCenter cannot connect to the ESXi node, but since the vSAN network is not lost, HA and the isolation response will not be triggered.
How should the cluster configuration be designed in this scenario to avoid losing the VMs?
Duncan Epping says
Normally people don’t have VMs sharing the management network, to be honest. Not sure how to get around what you are describing right now. I have filed a feature request that would solve the problem, but it is not available today, and I do not know how long it will take. More on this later.
Cathal Prendeville says
Hi Duncan,
This is an interesting read, and it raises a particular issue with stretched cluster configurations using WTS (Witness Traffic Separation).
1. With WTS we create dedicated vmks on the data nodes at each site, with two different VLANs and subnets per site, so that each data node site has an independent path to the witness site.
2. We create static routes on the host pointing to the local WTS SVI at each site.
3. Based on this document we still need to create a vSAN SVI for the HA isolation addresses, per your comment above:
“Always use an isolation address that is in the same network as vSAN!”
4. Using the WTS SVIs would not be as reliable, because a vSAN VLAN issue on one site would not be detected through them.
Would that be a fair assumption?
Thanks,
Cathal.