
Yellow Bricks

by Duncan Epping


vSAN Stretched Cluster

vSAN ESA Witness memory and CPU resources?

Duncan Epping · Mar 10, 2026 · Leave a Comment

Not sure when this happened, but somehow the resource requirements for the vSAN Witness VM disappeared. Someone asked me last week how much memory is allocated to the Witness VM, and how many vCPUs. Of course, this depends on the profile you select, as the Witness VM comes in M, L, and XL profiles. Which profile you pick is determined by the number of VMs you will be provisioning, and yes, it is smart to take a growth factor into account. The deployment wizard doesn’t give a hint either, but you can figure out the size by simply looking at the OVF descriptor file. This is what I got from the vSAN ESA Witness OVF:

  • vSAN ESA Witness XL – 8 vCPUs – 64 GB memory
  • vSAN ESA Witness L – 4 vCPUs – 32 GB memory
  • vSAN ESA Witness M – 4 vCPUs – 16 GB memory

And for those who were wondering, with vSAN OSA the requirements are:

  • vSAN OSA Witness XL – 6 vCPUs – 32 GB memory
  • vSAN OSA Witness L – 2 vCPUs – 32 GB memory
  • vSAN OSA Witness Normal – 2 vCPUs – 16 GB memory
  • vSAN OSA Witness Tiny – 2 vCPUs – 8 GB memory
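If you want to check this yourself, the sizing lives in the VirtualHardwareSection of the OVF descriptor. Here is a minimal Python sketch run against a trimmed, hypothetical OVF snippet in the standard OVF/CIM schema (not the actual Witness OVF, which contains many more Items, but the pattern is the same):

```python
import xml.etree.ElementTree as ET

# Trimmed, hypothetical OVF snippet for illustration only.
OVF_SNIPPET = """\
<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1"
  xmlns:rasd="http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_ResourceAllocationSettingData">
  <VirtualHardwareSection>
    <Item>
      <rasd:ResourceType>3</rasd:ResourceType>
      <rasd:VirtualQuantity>4</rasd:VirtualQuantity>
    </Item>
    <Item>
      <rasd:AllocationUnits>byte * 2^20</rasd:AllocationUnits>
      <rasd:ResourceType>4</rasd:ResourceType>
      <rasd:VirtualQuantity>32768</rasd:VirtualQuantity>
    </Item>
  </VirtualHardwareSection>
</Envelope>"""

NS = {"rasd": "http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/"
              "CIM_ResourceAllocationSettingData"}

def hardware_profile(ovf_xml: str) -> dict:
    """Return vCPU count and memory (GB) found in an OVF descriptor."""
    profile = {}
    for item in ET.fromstring(ovf_xml).iter():
        rtype = item.find("rasd:ResourceType", NS)
        qty = item.find("rasd:VirtualQuantity", NS)
        if rtype is None or qty is None:
            continue
        if rtype.text == "3":    # CIM resource type 3 = processor
            profile["vcpus"] = int(qty.text)
        elif rtype.text == "4":  # CIM resource type 4 = memory (MB here,
            profile["memory_gb"] = int(qty.text) // 1024  # per AllocationUnits)
    return profile

print(hardware_profile(OVF_SNIPPET))  # → {'vcpus': 4, 'memory_gb': 32}
```

The sample values happen to match the L profile from the list above; point the same logic at the descriptor inside the downloaded OVA to get the real numbers for your version.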

I hope that helps, and also please do note… if you read this article a few years from now, things may have changed!

What happens after a Site Takeover when my failed sites come back online again?

Duncan Epping · Dec 4, 2025 · Leave a Comment

I got a question after the previous demo: what would happen if, after a Site Takeover, the two failed sites came back online again? I had completely ignored this part of the scenario so far, and I am not even sure why. I knew what would happen, but I wanted to test it anyway to confirm that what engineering had described actually happens. For those who cannot be bothered to watch a demo: what happens when the two failed sites come back online is pretty straightforward. The “old” components of the impacted VMs are discarded, vSAN recreates the RAID configuration specified in the associated vSAN storage policy, and then a full resync occurs so that the VM is compliant with the policy again. Let me repeat one part: a full resync will occur! So if you do a Site Takeover, I hope you understand what the impact will be. A full resync will take time, depending, of course, on the connection between the data locations.
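To put “a full resync will take time” into perspective, here is a back-of-the-envelope sketch. This is not a vSAN tool, and the data size, link speed, and efficiency factor are all illustrative assumptions; it simply divides the data to rebuild by the usable inter-site bandwidth:

```python
# Rough estimate only: real resync throughput depends on many factors
# (host count, disk groups, resync throttling, competing traffic, etc.).

def resync_hours(data_tb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Hours to push `data_tb` terabytes over a `link_gbps` inter-site
    link that sustains `efficiency` of its line rate for resync traffic."""
    data_bits = data_tb * 1e12 * 8             # TB -> bits
    usable_bps = link_gbps * 1e9 * efficiency  # usable bits per second
    return data_bits / usable_bps / 3600       # seconds -> hours

# e.g. 50 TB to rebuild over a 10 Gbps link at 70% efficiency
print(f"{resync_hours(50, 10):.1f} hours")  # → 15.9 hours
```

Even with generous assumptions, rebuilding tens of terabytes across sites is measured in hours, which is exactly why you should know this before triggering a Site Takeover.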

vSAN OSA 9.0 Site Takeover demo!

Duncan Epping · Nov 28, 2025 · Leave a Comment

I posted the Site Maintenance demo, so I figured I would also do a post for the Site Takeover feature. I described both features in a few earlier posts, so make sure to read those if you don’t know what this is about. If you already know, but haven’t seen a demo yet, here you go:

vSAN 9.0 Site Maintenance Mode demo!

Duncan Epping · Nov 27, 2025 · Leave a Comment

I had a few questions about this, so I figured I would record a quick demo showing Site Maintenance. In the demo, I have a stretched cluster configured with vSAN 9.0, and I place the Preferred Site into maintenance mode. First, a pre-check verifies that all workloads are replicated between locations, and then the site is placed into maintenance mode while data consistency is maintained across hosts. The next demo I record will show the Manual Site Takeover command, which was also introduced in 9.0 for OSA and will soon be available for ESA as well.

vSAN Stretched Cluster vs Fault Domains in a “campus” setting?

Duncan Epping · Sep 25, 2025 · 2 Comments

I got this question internally recently: should we create a vSAN Stretched Cluster configuration or a vSAN Fault Domains configuration when we have multiple datacenters in close proximity on our campus? In this case, we are talking about less than 1 ms RTT latency between buildings, maybe a few hundred meters at most. I think it is a very valid question, and it kind of depends on what you are looking to get out of the infrastructure. I wrote down the pros and cons and wanted to share those with the rest of the world as well, as they may be useful for some of you out there. If anyone has additional pros and cons, feel free to share those in the comments!

vSAN Stretched Clusters:

  • Pro: You can replicate across fault domains AND additionally protect within a fault domain with RAID-1/5/6 if required.
  • Pro: You can decide whether VMs should be stretched across fault domains or just protected within a single fault domain/site.
  • Pro: Requires less than 5 ms RTT latency, which is easily achievable in this scenario.
  • Con/Pro: You probably also need to think about DRS/HA (VM-to-Host) groups.
  • Con: From an operational perspective, it introduces a witness host and site concepts, which may complicate things and at the very least require a bit more thinking.
  • Con: The witness needs to be hosted somewhere.
  • Con: Limited to 3 fault domains (2x data + 1x witness).
  • Con: Limited to a 20+20+1 configuration.

vSAN Fault Domains:

  • Pro: Usually no real considerations around VM-to-Host rules, although you can still use them to ensure certain VMs are spread across buildings.
  • Pro: No Witness Appliance to manage, update, or upgrade, and no overhead of running a witness somewhere.
  • Pro: No design considerations around a “dedicated” witness site and “data” sites; each site has the same function.
  • Pro: Can be used with more than 3 fault domains or datacenters, so you could even have 6 fault domains, for instance.
  • Pro: Theoretically can go up to 64 hosts.
  • Con: No ability to additionally protect within a fault domain.
  • Con: No ability to specify that you don’t want to replicate VMs across fault domains.
  • Con/Pro: Requires sub-1 ms RTT latency at all times, which is low, but usually achievable in a campus setting.
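Just to make the tradeoff concrete, the lists above could be boiled down to a toy decision helper. The 5 ms and sub-1 ms RTT thresholds and the witness/secondary-protection differences come from the post; the function itself and its inputs are purely illustrative, not an official sizing rule:

```python
# Illustrative sketch only: encodes the campus-design tradeoffs discussed
# above, nothing more.

def suggest_topology(rtt_ms: float, data_sites: int,
                     need_local_protection: bool) -> str:
    """Suggest a vSAN topology for a campus given inter-site RTT (ms),
    the number of data sites, and whether VMs also need RAID protection
    *within* a site."""
    if data_sites == 2 and rtt_ms < 5.0 and need_local_protection:
        # Stretched cluster: secondary RAID-1/5/6 protection within a
        # site and a 5 ms RTT budget, but a witness host is required.
        return "stretched-cluster"
    if rtt_ms < 1.0:
        # Fault domains: no witness, 3+ sites possible, every site has
        # the same role, but no extra protection inside a fault domain.
        return "fault-domains"
    return "neither-fits-review-requirements"

print(suggest_topology(rtt_ms=0.5, data_sites=2, need_local_protection=True))
# → stretched-cluster
```

In reality you would of course weigh the operational points (witness lifecycle, VM-to-Host rules, host count limits) as well, which do not reduce to two numbers and a boolean.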


About the Author

Duncan Epping is a Chief Technologist and Distinguished Engineering Architect at Broadcom. Besides writing on Yellow-Bricks, Duncan is the co-author of the vSAN Deep Dive and the vSphere Clustering Deep Dive book series. Duncan is also the host of the Unexplored Territory Podcast.


Copyright Yellow-Bricks.com © 2026