I got a question after the previous demo: what would happen if, after a Site Takeover, the two failed sites came back online again? I had completely ignored this part of the scenario so far, and I am not even sure why. I knew what would happen, but I wanted to test it anyway to confirm that what engineering had described actually happened. For those who cannot be bothered to watch a demo, what happens when the two failed sites come back online again is pretty straightforward. The “old” components of the impacted VMs are discarded, vSAN recreates the RAID configuration as specified in the associated vSAN Storage Policy, and then a full resync occurs so that the VM is compliant with the policy again. Let me repeat one part: a full resync will occur! So if you do a Site Takeover, I hope you understand what the impact will be. A full resync will take time, of course, depending on the connection between the data locations.
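To get a feel for what “will take time” means, here is a back-of-the-envelope sketch. The data size, link speed, and efficiency factor below are made-up assumptions for illustration only; the actual duration depends on how much data really has to move, resync throttling, and competing workload traffic.

```python
def resync_hours(data_tb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Rough full-resync duration in hours for a given data set and inter-site link.

    efficiency is an assumed factor for protocol overhead and competing traffic.
    """
    data_bits = data_tb * 1e12 * 8             # decimal TB -> bits
    usable_bps = link_gbps * 1e9 * efficiency  # usable bits per second
    return data_bits / usable_bps / 3600

# Example: 50 TB of impacted VM data over a 10 Gbps inter-site link
print(f"{resync_hours(50, 10):.1f} hours")     # roughly 15.9 hours
```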
Does a Site Takeover work with a 2-node configuration?
I got a question last week asking whether vSAN Site Takeover also works with a 2-node configuration, and my answer was: yes, it should work. However, I had never tested it, so I figured I would build a quick lab environment and see if I was right. I recorded the result; here it is! The demo is pretty straightforward, so let me describe what you will see:
- 2-node vSAN environment
- 1 VM named “photon-001”
- Photon-001 VM is “stretched” across both hosts and has a witness component on the witness host
- Host “.245” and the witness fail, and the components on those hosts go “absent”
- The Photon-001 VM becomes inaccessible
- We run the site-takeover command, which reconfigures the Photon-001 VM (sketched conceptually after this list)
- The Photon-001 VM becomes available again and is automatically restarted
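For those wondering why the VM becomes inaccessible in the first place, and why the takeover brings it back, here is a very loose conceptual model in Python. The vote counts and the “keep only the survivors” step are simplifications for illustration only, not actual vSAN internals or the real site-takeover command.

```python
from dataclasses import dataclass

@dataclass
class Component:
    host: str
    votes: int
    state: str = "active"          # "active" or "absent"

def accessible(components) -> bool:
    """An object stays accessible while more than half of all votes are active."""
    total = sum(c.votes for c in components)
    alive = sum(c.votes for c in components if c.state == "active")
    return alive > total / 2

# 2-node layout: one data component per host plus a witness component
photon_001 = [Component(".244", 1), Component(".245", 1), Component("witness", 1)]
print(accessible(photon_001))      # True: all 3 votes present

# Host ".245" and the witness fail, so their components go absent
photon_001[1].state = photon_001[2].state = "absent"
print(accessible(photon_001))      # False: 1 of 3 votes left, VM inaccessible

# The takeover rebuilds the object from the surviving components only,
# so the remaining copy holds all of the (new) votes again
photon_001 = [c for c in photon_001 if c.state == "active"]
print(accessible(photon_001))      # True: vSphere HA can restart the VM
```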
What do I do after a vSAN Stretched Cluster Site Takeover?
Over the last couple of months, various new vSAN features were announced. Two of those features are around the Stretched Cluster configuration and have probably been the number one feature request for a few years. Now that we have Site Takeover and Site Maintenance functionality available, I am starting to get questions about their impact, and the Site Takeover functionality in particular raises questions.
For those who don’t know what these features are, let me describe them briefly:
Site Maintenance = The ability to place a full vSAN stretched cluster Fault Domain into maintenance mode at once. This ensures that the data is stored consistently across the hosts within the fault domain, and that all hosts go into maintenance mode at the same time.
Site Takeover = The ability to bring the remaining site back into service through a command-line interface when both the Witness and a Data Site have failed. This reconstructs the remaining “site local” RAID configuration, making the objects available again, which then allows vSphere HA to restart the VMs.
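To make “reconstructing the site-local RAID configuration” a bit more concrete, here is a loose illustration. The nested per-site RAID-5 layout is just an example policy, and the pruning logic is a simplification of what the takeover does, not actual vSAN code or output.

```python
# Example stretched object: RAID-1 across the two data sites, plus a witness,
# with a nested RAID-5 set within each data site (illustrative layout only).
object_layout = {
    "type": "RAID-1",
    "children": [
        {"site": "preferred", "type": "RAID-5", "state": "absent"},   # failed data site
        {"site": "secondary", "type": "RAID-5", "state": "active"},   # surviving data site
        {"site": "witness",   "type": "witness", "state": "absent"},  # failed witness
    ],
}

def take_over(layout: dict) -> dict:
    """Keep only the surviving site-local RAID tree so the object becomes accessible."""
    survivors = [c for c in layout["children"] if c["state"] == "active"]
    # With a single surviving site, the object is reduced to that site's local RAID set
    return survivors[0] if len(survivors) == 1 else {**layout, "children": survivors}

print(take_over(object_layout))
# {'site': 'secondary', 'type': 'RAID-5', 'state': 'active'}
```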
Now, the question the above typically raises is: what happens to the Witness and the Data Site that failed when you do the Site Takeover? If you look at the VM’s RAID configuration, you will notice that the Witness and Data Site components of the failed sites completely disappear from the RAID configuration.
But what do you do next? Because even after you run the Site Takeover, you still see your hosts and the witness in vCenter Server, and you still see a stretched cluster configuration in the UI. At first I thought that once the environment was completely up and running again, you had to go through some manual effort to reconstruct the stretched cluster: remove the failed hosts, wipe the disks, and recreate the stretched cluster. This is, however, not the case.
In the example above, if the Preferred site and the Witness site return for duty, vSAN will automatically discard the stale components in those previously failed sites. It will create new components for all objects, and it will do a full resync of the data.
If you end up in a situation where your hosts are completely gone (let’s say as a result of a fire), then you will have to do some manual cleanup as follows before you rebuild and add hosts back (a scripted sketch of the inventory cleanup follows the list):
- Remove the failed hosts from the vCenter inventory
- Remove the witness from the vCenter inventory
- Delete the witness appliance from the vCenter Server it is running on, a real delete!
- Delete the surviving Fault Domain; this should be the only Fault Domain still listed in the vCenter interface
- You now have a normal cluster again
- Rebuild hosts and recreate the stretched cluster
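For those who prefer to script the inventory part of this cleanup, here is a minimal pyVmomi sketch. The vCenter addresses, hostnames, and credentials are hypothetical, and the fault domain removal itself is not shown, as that is done through the vSphere Client (or the vSAN management API).

```python
# Minimal pyVmomi sketch of the inventory cleanup above, assuming the failed
# data hosts and the witness host are listed as disconnected in vCenter Server.
# All names and credentials below are hypothetical placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim


def find(content, name, vimtype):
    """Look up a managed object (host or VM) by name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()


ctx = ssl._create_unverified_context()

# Steps 1 and 2: remove the failed data hosts and the witness host from the
# inventory of the vCenter Server that manages the (former) stretched cluster.
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()
for name in ["esxi-01.lab.local", "esxi-02.lab.local", "witness.lab.local"]:
    WaitForTask(find(content, name, vim.HostSystem).Destroy_Task())  # "Remove from Inventory"
Disconnect(si)

# Step 3: the witness appliance itself runs as a VM under (potentially) another
# vCenter Server; delete it there for real (Destroy_Task removes it from disk).
si = SmartConnect(host="mgmt-vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
WaitForTask(find(si.RetrieveContent(), "witness-appliance", vim.VirtualMachine).Destroy_Task())
Disconnect(si)
```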
I hope that helps,
vSAN Stretched Cluster vs Fault Domains in a “campus” setting?
I got this question internally recently: should we create a vSAN Stretched Cluster configuration or a vSAN Fault Domains configuration when we have multiple datacenters in close proximity on our campus? In this case, we are talking about less than 1 ms RTT latency between buildings, maybe a few hundred meters at most. I think it is a very valid question, and I guess it kind of depends on what you are looking to get out of the infrastructure. I wrote down the pros and cons and wanted to share them with the rest of the world as well, as they may be useful for some of you out there. If anyone has additional pros and cons, feel free to share those in the comments!
vSAN Stretched Clusters:
- Pro: You can replicate across fault domains AND protect additionally within a fault domain with R1/R5/R6 if required.
- Pro: You can decide whether VMs should be stretched across Fault Domains or not, or just protected within a fault domain/site
- Pro: Requires less than 5 ms RTT latency, which is easily achievable in this scenario
- Con/Pro: You probably also need to think about DRS/HA groups (VM-to-Host)
- Con: From an operational perspective, it also introduces a witness host and sites, which may complicate things, and at the very least requires a bit more thinking
- Con: Witness needs to be hosted somewhere
- Con: Limited to 3 Fault Domains (2x data + 1x witness)
- Con: Limited to a 20+20+1 configuration
vSAN Fault Domains:
- Pro: No real considerations around VM-to-host rules usually, although you can still use them to ensure certain VMs are spread across buildings
- Pro: No Witness Appliance to manage, update or upgrade. No overhead of running a witness somewhere
- Pro: No design considerations around “dedicated” witness sites and “data sites”; each site has the same function
- Pro: Can also be used with more than 3 Fault Domains or Datacenters, so this could even be 6 Fault Domains, for instance (see the sizing sketch after these lists)
- Pro: Theoretically can go up to 64 hosts
- Con: No ability to protect additionally within a fault domain
- Con: No ability to specify that you don’t want to replicate VMs across Fault Domains
- Con/Pro: Requires sub-1 ms RTT latency at all times, which is low, but usually achievable in a campus cluster
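As a small aside on the fault domain count: here is a quick helper that shows how many fault domains a given policy needs. The RAID-5/6 widths assumed here (3+1 and 4+2) are the classic vSAN OSA widths; vSAN ESA can use different stripe widths, so treat this purely as a rough sizing sketch.

```python
def fault_domains_needed(ftt: int, raid: str) -> int:
    """Minimum number of fault domains for a policy (data plus parity/witness)."""
    if raid == "RAID-1":
        return 2 * ftt + 1          # e.g. FTT=1 -> 3, FTT=2 -> 5, FTT=3 -> 7
    if raid == "RAID-5" and ftt == 1:
        return 4                    # 3 data + 1 parity (classic OSA width)
    if raid == "RAID-6" and ftt == 2:
        return 6                    # 4 data + 2 parity (classic OSA width)
    raise ValueError("combination not covered in this sketch")

# Six buildings would, for example, allow RAID-6 across fault domains,
# something a 2(+witness)-site stretched cluster cannot offer.
print(fault_domains_needed(2, "RAID-6"))    # 6
```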
#099 – Introducing vSAN 9.0 featuring Pete Koehler
VMware Cloud Foundation 9.0 was recently launched, and that means vSAN 9.0 is also available. There are many new features introduced in 9.0, so it was the perfect time to ask Pete Koehler to join the podcast once again and go over some of these key enhancements. Below you can find the links we discussed during the episode, as well as the embedded player. Alternatively, you can also listen via Spotify, Apple, or any other podcast app you may use. Make sure to like and subscribe!