Yellow Bricks

by Duncan Epping

cloud

Doing site maintenance in a vSAN Stretched Cluster configuration

Duncan Epping · Jan 15, 2025 · Leave a Comment

I thought I wrote an article about this years ago, but it appears I wrote an article about doing maintenance mode with a 2-node configuration instead. As I’ve received some questions on this topic, I figured I would write a quick article that describes the concept of site maintenance. Note that in a future version of vSAN, we will have an option in the UI that helps with this, as described here.

First and foremost, you will need to validate whether all data is replicated. In some cases we see customers pinning data (VMs) to a single location without replication, and those VMs will be directly impacted if the whole site is placed in maintenance mode. Those VMs will either need to be powered off, or, if they need to stay running, you will need to make sure they are moved to the location that remains running. Do note that if you flip “Preferred / Secondary” and there are many VMs that are site local, this could lead to a huge amount of resync traffic. If those VMs need to stay running, you may also want to reconsider the decision not to replicate them!

These are the steps I would take when placing a site into maintenance mode:

  1. Verify the vSAN Witness is up and running and healthy (see health checks)
  2. Check compliance of VMs that are replicated
  3. Configure DRS to “Partially Automated” or “Manual” instead of “Fully Automated”
  4. Manually vMotion all VMs from Site X to Site Y
  5. Place each ESXi host in Site X into maintenance mode with the option “no data migration”
  6. Power Off all the ESXi hosts in Site X
  7. Enable DRS again in “fully automated” mode so that within Site Y the environment stays balanced
  8. Do whatever needs to be done in terms of maintenance
  9. Power On all the ESXi hosts in Site X
  10. Exit maintenance mode for each host

Do note that VMs will not automatically migrate back until the resync for a given VM has fully completed. DRS and vSAN are aware of the replication state! Additionally, if VMs are actively doing IO while the hosts in Site X are going into maintenance mode, the state of the data stored on the hosts within Site X will differ, which is why the resync is needed. This concern will be resolved in the future by the “site maintenance” feature discussed at the start of this article.
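For those who prefer to script this, below is a minimal pyVmomi sketch of steps 3, 5, and 6. This is just an illustration, not an official procedure: the vCenter address, credentials, cluster name, and host names are placeholders, error handling is stripped, and the per-VM vMotion of step 4 is left out for brevity.

```python
# Minimal pyVmomi sketch of steps 3, 5 and 6 -- placeholder names, no error handling.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only, use proper certificates in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first managed object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

# Step 3: switch DRS to "Partially Automated" for the stretched cluster
cluster = find_by_name(vim.ClusterComputeResource, "Stretched-Cluster")
drs_spec = vim.cluster.ConfigSpecEx(
    drsConfig=vim.cluster.DrsConfigInfo(defaultVmBehavior="partiallyAutomated"))
WaitForTask(cluster.ReconfigureComputeResource_Task(drs_spec, True), si=si)

# Steps 5 and 6: per Site X host, enter maintenance mode with
# "no data migration" and then power the host off
for hostname in ["esxi-siteX-01.example.local", "esxi-siteX-02.example.local"]:
    host = find_by_name(vim.HostSystem, hostname)
    maint_spec = vim.host.MaintenanceSpec(
        vsanMode=vim.vsan.host.DecommissionMode(objectAction="noAction"))
    WaitForTask(host.EnterMaintenanceMode_Task(timeout=0,
                                               evacuatePoweredOffVms=False,
                                               maintenanceSpec=maint_spec), si=si)
    WaitForTask(host.ShutdownHost_Task(force=False), si=si)

Disconnect(si)
```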

Unexplored Territory Episode 087 – Microsoft on VMware VCF featuring Deji Akomolafe

Duncan Epping · Dec 16, 2024 · Leave a Comment

For the last episode of 2024, I invited Deji Akomolafe to discuss running Microsoft workloads on top of VCF. I’ve known Deji for a long time, and if anyone is passionate about VMware and Microsoft technology, it is him. Deji went over the many caveats and best practices when it comes to running, for instance, SQL on top of VMware VCF (or vSphere for that matter). NUMA, CPU scheduling, latency-sensitive settings, power settings, and virtual disk controllers are just some of the things you can expect in this episode. You can listen to the episode on Spotify, Apple, or via the embedded player below.

VCF-9 Vision for a Federated Storage View and vSAN (stretched cluster) visualizations!

Duncan Epping · Nov 19, 2024 · Leave a Comment

As mentioned last week, the sessions at Explore Barcelona were not recorded. I still wanted to share with you what we are working on, so I decided to record and share a few demos and some of the slides we presented. In this video, I show our vision for a Federated Storage View for both vSAN and more traditional storage systems. This federated view will not only provide insights into capacity and performance capabilities, but will also provide you with a visualization of a stretched cluster configuration. This is something I have been asking for for a while now, and it looks like it will become a reality in VCF 9 at some point. As this all revolves around visualization, I would urge you to watch the video below. And as always, if you have feedback, please leave a comment!

Why vSAN Max aka disaggregated storage?

Duncan Epping · Sep 6, 2024 · 2 Comments

At VMware Explore 2024 in Las Vegas, I had many customer meetings. Last year my calendar was also swamped, and one of the things we spent a lot of time on was explaining to customers where vSAN Max would fit into the picture. vSAN Max was originally positioned as a “storage only vSAN platform for petabyte scale use cases”. I guess this still somewhat applies, but since then a lot has changed.

First, the vSAN Max ReadyNode configurations have changed substantially, and you can start at a much smaller capacity scale than when originally launched. We start at 20TB for an XS ReadyNode configuration, which means that with a 4-node minimum you have 80TB. That is something completely different than the petabytes we originally discussed. The other big difference is the networking requirements; depending on the capacity needs, those have also come down substantially.

Now, as said, originally the platform was positioned as a solution for customers running at petabyte scale. The reason I wanted to write a quick blog post is that this is no longer the main argument customers have today for adopting vSAN Max or considering vSAN Max in their environment. The reason is something I personally did not expect to hear, but it is all about operations and, sometimes, politics.


In a traditional environment, of course, depending on the size, you typically see a separation of duties. You have virtualization admins, networking admins, and storage admins. We have worked hard over the past decade to try to create these full-stack engineers, but the reality is that many companies still have these silos, and they will likely still exist 20 years from now.

This is where vSAN Max can help. The HCI model typically means that the virtualization administrator takes on the storage responsibilities when they implement vSAN, but with vSAN Max this doesn’t necessarily need to be the case. As various customers mentioned last week, with vSAN Max you could have a fully separated environment that is managed by a different team. It is funny how often this was brought up as a great use case for vSAN. Especially with the amount of vSAN capacity included in VCF, this makes more and more sense!

You could even have a different authentication service connected to the vCenter Server that manages your vSAN Max clusters! You could have other types of hosts, cluster sizes, best practices, naming schemes, etc. This will all be up to the team managing that particular, euuh, silo. I know, sometimes a silo is seen as something negative, but for a lot of organizations this is how they operate, and how they prefer to operate for the foreseeable future. If so, vSAN Max can cater to that use case as well!

What does Datastore Sharing/HCI Mesh/vSAN Max support when stretched?

Duncan Epping · Oct 31, 2023

This question has come up a few times now: what do Datastore Sharing/HCI Mesh/vSAN Max support when stretched? It is a question which keeps coming up somehow, and I personally had some challenges finding the statements in our documentation as well. I just found the statement, so I wanted to first of all point people to it, and then also clarify it so there is no question: if I am using Datastore Sharing / HCI Mesh, or will be using vSAN Max, and my vSAN datastore is stretched, what does VMware support (and not support)?

We have multiple potential combinations, so let me list them and add whether each is supported or not. Note that this is at the time of writing, with the currently available version (vSAN 8.0 U2).

  • vSAN Stretched Cluster datastore shared with vSAN Stretched Cluster –> Supported
  • vSAN Stretched Cluster datastore shared with vSAN Cluster (not stretched) –> Supported
  • vSAN Stretched Cluster datastore shared with Compute Only Cluster (not stretched) –> Supported
  • vSAN Stretched Cluster datastore shared with Compute Only Cluster (stretched, symmetric) –> Supported
  • vSAN Stretched Cluster datastore shared with Compute Only Cluster (stretched, asymmetric) –> Not Supported

So what is the difference between symmetric and asymmetric? The below image, which comes from the vSAN stretched cluster documentation, explains it best. I think the asymmetric scenario is the most likely in this case, so if you are running a stretched vSAN cluster and a stretched compute-only cluster, it most likely is not supported.

This also applies to vSAN Max, by the way. I hope that helps. Oh, and before anyone asks: if the “server side” is not stretched, it can be connected to a stretched environment and that is supported.

 

