
Yellow Bricks

by Duncan Epping


vSAN Stretched

Can I replicate, or snapshot, my vSAN Stretched Cluster Witness appliance for fast recovery?

Duncan Epping · Jan 20, 2026

I’ve been seeing this question pop up more frequently: can I replicate or snapshot my vSAN Stretched Cluster Witness appliance for fast recovery? Usually, people ask this question because they cannot adhere to the three-site requirement for a vSAN Stretched Cluster, so they try to mitigate that risk by setting up some kind of replication mechanism with a low RPO.

I guess the question stems from a lack of understanding of what the witness does. The witness provides a quorum mechanism, which helps determine which site retains access to the data in the case of a failure of the inter-site link (ISL) between the data locations.


So why can the Witness Appliance not be snapshotted or replicated then? Well, in order to provide this quorum mechanism, the Witness Appliance stores a witness component for each object. This is not per site, or per VM, but for every object… So if you have a VM with multiple VMDKs, you will have multiple witness components per VM stored on the Witness Appliance. Each witness component holds metadata and, through a log sequence number, knows which copy of the object holds the most recent data. This is where the issue arises. If you revert a Witness Appliance to an earlier point in time, the witness components also revert to an earlier point in time and will have a different log sequence number than expected. As a result, vSAN is unable to make the object available to the surviving site, or the site that is expected to hold quorum.
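To make the log sequence number problem a bit more concrete, here is a purely conceptual Python sketch. It is not vSAN’s actual implementation, data structures, or quorum algorithm, just an illustration of why a witness that was reverted to an older point in time can no longer help form quorum:

```python
# Purely conceptual sketch; names and rules are illustrative only and are
# NOT vSAN's internal data structures or its actual quorum algorithm.
from dataclasses import dataclass

@dataclass
class Component:
    location: str   # "site-a", "site-b", or "witness"
    lsn: int        # log sequence number recorded for this object

def can_make_object_available(surviving, total_components):
    """A surviving partition needs a majority of components, and those
    components must agree on the expected (latest) log sequence number."""
    if len(surviving) <= total_components // 2:
        return False                       # no majority, no quorum
    expected_lsn = max(c.lsn for c in surviving)
    return all(c.lsn == expected_lsn for c in surviving)

# Healthy case: site B is unreachable, site A + witness survive with matching LSNs.
site_a = Component("site-a", lsn=42)
witness = Component("witness", lsn=42)
print(can_make_object_available([site_a, witness], total_components=3))        # True

# Reverted witness: restored from a snapshot taken when the LSN was still 37.
stale_witness = Component("witness", lsn=37)
print(can_make_object_available([site_a, stale_witness], total_components=3))  # False
```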

So in short, should you replicate or snapshot the Witness Appliance? No!


What happens after a Site Takeover when my failed sites come back online again?

Duncan Epping · Dec 4, 2025

I got a question after the previous demo: what would happen if, after a Site Takeover, the two failed sites came back online again? I had completely ignored this part of the scenario so far; I am not even sure why. I knew what would happen, but I wanted to test it anyway to confirm that what engineering had described actually happens. For those who cannot be bothered to watch a demo, what happens when the two failed sites come back online again is pretty straightforward. The “old” components of the impacted VMs are discarded, vSAN recreates the RAID configuration as specified within the associated vSAN Storage Policy, and then a full resync occurs so that the VM is compliant with the policy again. Let me repeat one part: a full resync will occur! So if you do a Site Takeover, I hope you understand what the impact will be. A full resync will take time, of course, depending on the connection between the data locations.
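To give a feel for that last point, here is a rough back-of-the-envelope estimate in Python. All of the numbers are assumptions I picked for illustration, not measurements from the demo environment; plug in your own capacity and inter-site bandwidth:

```python
# Back-of-the-envelope resync time estimate; every input below is an assumption.
resync_tb = 50           # data that needs to resync after the takeover (TB)
isl_gbit = 10            # inter-site link bandwidth (Gbit/s)
usable_fraction = 0.6    # share of the ISL realistically available for resync

bytes_to_move = resync_tb * 1e12
effective_bytes_per_s = isl_gbit * 1e9 / 8 * usable_fraction
hours = bytes_to_move / effective_bytes_per_s / 3600
print(f"~{hours:.1f} hours to resync {resync_tb} TB over a {isl_gbit} Gbit/s link")
# ~18.5 hours with these example numbers
```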

vSAN OSA 9.0 Site Takeover demo!

Duncan Epping · Nov 28, 2025

I posted the Site Maintenance demo, so I figured I would also do a post for the Site Takeover feature. I described these features in a few posts, so make sure to read those if you don’t know what this is about. If you already know, but haven’t seen a demo yet, here you go:

vSAN Stretched Cluster vs Fault Domains in a “campus” setting?

Duncan Epping · Sep 25, 2025

I got this question internally recently: should we create a vSAN Stretched Cluster configuration or a vSAN Fault Domains configuration when we have multiple datacenters within close proximity on our campus? In this case, we are talking about less than 1ms RTT latency between buildings, maybe a few hundred meters at most. I think it is a very valid question, and I guess it kind of depends on what you are looking to get out of the infrastructure. I wrote down the pros and cons, and wanted to share those with the rest of the world as well, as it may be useful for some of you out there; a small decision sketch follows the two lists below. If anyone has additional pros and cons, feel free to share those in the comments!

vSAN Stretched Clusters:

  • Pro: You can replicate across fault domains AND additionally protect within a fault domain with R1/R5/R6 if required.
  • Pro: You can decide whether VMs should be stretched across Fault Domains or not, or just protected within a fault domain/site.
  • Pro: Requires less than 5ms RTT latency, which is easily achievable in this scenario.
  • Con/Pro: You probably also need to think about DRS/HA groups (VM-to-Host).
  • Con: From an operational perspective, it also introduces a witness host and sites, which may complicate things, and at the very least requires a bit more thinking.
  • Con: Witness needs to be hosted somewhere.
  • Con: Limited to 3 Fault Domains (2x data + 1x witness).
  • Con: Limited to a 20+20+1 configuration.

vSAN Fault Domains:

  • Pro: No real considerations around VM-to-Host rules usually, although you can still use them to ensure certain VMs are spread across buildings.
  • Pro: No Witness Appliance to manage, update, or upgrade, and no overhead of running a witness somewhere.
  • Pro: No design considerations around a “dedicated” witness site and “data sites”; each site has the same function.
  • Pro: Can also be used with more than 3 Fault Domains or datacenters, so could even be 6 Fault Domains, for instance.
  • Pro: Theoretically can go up to 64 hosts.
  • Con: No ability to protect additionally within a fault domain.
  • Con: No ability to specify that you don’t want to replicate VMs across Fault Domains.
  • Con/Pro: Requires sub-1ms RTT latency at all times, which is low, but usually achievable in a campus cluster.
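To make the comparison a bit more tangible, here is a small, purely illustrative Python sketch that encodes the main constraints from the two lists above. The thresholds come straight from the pros and cons; nothing here is an official sizing or support statement, and everything else about such a design obviously still requires proper judgement:

```python
# Illustrative decision helper; the thresholds mirror the pros/cons above.

def suggest_topology(rtt_ms, fault_domains, hosts,
                     need_local_protection, need_site_local_vms):
    if need_local_protection or need_site_local_vms:
        # Only a stretched cluster offers secondary protection within a site
        # and per-VM control over whether data is stretched at all.
        if fault_domains > 2 or hosts > 40 or rtt_ms >= 5:
            return "Neither fits cleanly; revisit the requirements"
        return "vSAN Stretched Cluster (2 data sites + witness, max 20+20+1, <5ms RTT)"
    if rtt_ms < 1 and hosts <= 64:
        return "vSAN Fault Domains (no witness, 3+ fault domains, sub-1ms RTT)"
    if rtt_ms < 5 and fault_domains <= 2:
        return "vSAN Stretched Cluster"
    return "Neither fits cleanly; revisit the requirements"

# Campus example from this post: <1ms RTT between a handful of buildings.
print(suggest_topology(rtt_ms=0.5, fault_domains=3, hosts=24,
                       need_local_protection=False, need_site_local_vms=False))
```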

Doing site maintenance in a vSAN Stretched Cluster configuration

Duncan Epping · Jan 15, 2025

I thought I had written an article about this years ago, but it appears I wrote an article about doing maintenance mode with a 2-node configuration instead. As I’ve received some questions on this topic, I figured I would write a quick article that describes the concept of site maintenance. Note that in a future version of vSAN, we will have an option in the UI that helps with this, as described here.

First and foremost, you will need to validate whether all data is replicated. In some cases, we see customers pinning data (VMs) to a single location without replication, and those VMs will be directly impacted if a whole site is placed in maintenance mode. Those VMs will need to be powered off, or, if they need to stay running, moved to the location that remains running. Do note that if you flip “Preferred / Secondary” and there are many VMs that are site local, this could lead to a huge amount of resync traffic. If those VMs need to stay running, you may also want to reconsider your decision not to replicate those VMs though!

These are the steps I would take when placing a site into maintenance mode:

  1. Verify the vSAN Witness is up and running and healthy (see health checks)
  2. Check compliance of VMs that are replicated
  3. Configure DRS to “Partially Automated” or “Manual” instead of “Fully Automated”
  4. Manually vMotion all VMs from Site X to Site Y
  5. Place each ESXi host in Site X into maintenance mode with the option “No data migration” (see the sketch at the end of this article)
  6. Power off all the ESXi hosts in Site X
  7. Set DRS back to “Fully Automated” mode so that the environment within Site Y stays balanced
  8. Do whatever needs to be done in terms of maintenance
  9. Power on all the ESXi hosts in Site X
  10. Exit maintenance mode for each host

Do note that VMs will not automatically migrate back until the resync for a given VM has fully completed; DRS and vSAN are aware of the replication state! Additionally, if VMs are actively doing IO when hosts in Site X are going into maintenance mode, the state of the data stored on hosts within Site X will differ. This concern will be resolved in the future by providing a “site maintenance” feature, as discussed at the start of this article.
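For those who like to script this, the fragment below sketches what step 5, entering maintenance mode with “No data migration”, roughly looks like through pyVmomi. The vCenter address, credentials, and host name are placeholders, and you should validate the behaviour in a lab before using anything like it against a stretched cluster:

```python
# Sketch of step 5: enter maintenance mode with "No data migration" via pyVmomi.
# Connection details and the host name are placeholders; test in a lab first.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="***",
                  sslContext=ssl._create_unverified_context())  # lab only
try:
    # Find the ESXi host by name (placeholder name).
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == "esxi-sitex-01.example.local")

    # vSAN decommission mode "noAction" corresponds to "No data migration" in the UI.
    spec = vim.host.MaintenanceSpec(
        vsanMode=vim.vsan.host.DecommissionMode(objectAction="noAction"))
    WaitForTask(host.EnterMaintenanceMode_Task(
        timeout=0, evacuatePoweredOffVms=False, maintenanceSpec=spec))
finally:
    Disconnect(si)
```

Repeat this for every host in Site X, and use ExitMaintenanceMode_Task later for step 10.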

