
Yellow Bricks

by Duncan Epping


Can I disable the vSAN service if the cluster is running production workloads?

Duncan Epping · Feb 7, 2025 · Leave a Comment

I just had a discussion with someone who had to disable the vSAN service while the cluster was running a production workload. All their VMs were running on 3rd party storage, so vSAN was empty, but when they went to the vSAN Configuration UI the “Turn Off” option was grayed out. The reason this option is grayed out is that vSphere HA was enabled, which is the case for most customers (probably 99.9%). If you need to turn off vSAN, temporarily disable vSphere HA first, and of course enable it again after you have turned off vSAN! This ensures that HA is reconfigured to use the Management Network instead of the vSAN Network.

Another thing to consider: you may have manually configured the “HA Isolation Address” for the vSAN Network, so make sure to change that back to an IP address on the Management Network as well. Lastly, if anything is still stored on vSAN, it will become inaccessible when you disable the vSAN service. Of course, if nothing is running on vSAN, there will be no impact to the workload.
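The order of operations above can be codified as a small pre-flight helper. This is purely illustrative decision logic, not a vSphere or vSAN API; the function name and inputs are hypothetical.

```python
# Hypothetical pre-flight check that codifies the order of operations
# described above. This is NOT a vSphere API, just the decision logic.

def turn_off_vsan_plan(ha_enabled: bool, vsan_has_data: bool) -> list[str]:
    """Return the ordered steps needed to safely turn off the vSAN service."""
    if vsan_has_data:
        # Anything still stored on vSAN becomes inaccessible once the
        # service is disabled, so refuse to produce a plan.
        raise RuntimeError("vSAN datastore still holds data; evacuate it first")
    steps = []
    if ha_enabled:
        # The "Turn Off" option is grayed out while vSphere HA is enabled.
        steps.append("disable vSphere HA")
    steps.append("turn off the vSAN service")
    if ha_enabled:
        # Re-enabling HA reconfigures it to use the Management Network
        # instead of the vSAN Network.
        steps.append("re-enable vSphere HA")
    return steps
```

With HA enabled and an empty vSAN datastore, the plan comes out as the three steps in the order described above: disable HA, turn off vSAN, re-enable HA.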


Unexplored Territory Episode 089 – Discussing the VCP-VVF and VCP-VCF certification with Bart Peeters!

Duncan Epping · Jan 27, 2025 · Leave a Comment

I’ve seen many folks on X and Reddit asking how difficult the VCP-VCF and VCP-VVF exams are, so I figured I would invite someone who has actually taken both exams, was involved in the creation of various VMware exams in the past, and is working on the development of an upcoming exam! The podcast is available on all platforms, and of course it can also be listened to below via the embedded player. You can also find links to the discussed topics here:

  • VMUG Advantage with free exam voucher details
  • VCP-VVF Admin experience
  • VCP-VCF Admin experience
  • VCP-VCF Architect experience
  • VCF Course
  • Exam pricing announcement

vSAN ESA supported with 10GbE networking?

Duncan Epping · Jan 22, 2025 · 2 Comments

Somehow I get this question a lot lately, and it seems there’s still some conflicting documentation and messaging out there: is vSAN ESA supported with 10GbE networking or not? The answer is simple: yes, it is officially supported. Although you may see some outdated blogs and docs state that 25GbE is the minimum, this was actually revised with the introduction of the AF-0 Ready Node configuration for vSAN ESA.

Now, I can fully understand people want this in writing, and after some digging I actually found an updated documentation page on the Broadcom website that describes the vSAN ESA and vSAN OSA networking requirements. You can find it here: VMware vSAN – Physical NIC Requirements.
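To make the revision concrete, here is a minimal sketch of the check. It encodes only the two numbers from this post (the old blanket 25GbE minimum, revised to 10GbE by the AF-0 profile); treat the mapping as my reading and verify it against the official documentation page linked above.

```python
# Illustrative sketch: older guidance listed 25GbE as the blanket minimum
# for vSAN ESA; the AF-0 Ready Node profile revised this to 10GbE.
# The mapping below is an assumption based on this post -- verify against
# the official "Physical NIC Requirements" documentation.
DEFAULT_ESA_MIN_GBPS = 25            # pre-AF-0 blanket minimum
ESA_PROFILE_MIN_GBPS = {"AF-0": 10}  # AF-0 officially allows 10GbE

def esa_nic_supported(profile: str, nic_gbps: int) -> bool:
    """True if the NIC speed meets the (assumed) minimum for the profile."""
    minimum = ESA_PROFILE_MIN_GBPS.get(profile, DEFAULT_ESA_MIN_GBPS)
    return nic_gbps >= minimum
```

So a 10GbE NIC passes for an AF-0 configuration but not under the old blanket 25GbE rule applied to other profiles.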

Do files stored on vSAN with vSAN File Services count against the max object count?

Duncan Epping · Jan 20, 2025 · Leave a Comment

Today I got an interesting question internally: do files stored on vSAN with vSAN File Services count against the max object count? As I haven’t really discussed this in the past few years, I figured I would do a quick refresher. With vSAN File Services, the files people store on a file share are stored inside a vSAN object. The components of that object count towards the maximum component count you can have in a cluster, but of course the individual files do not.

When it comes to vSAN File Services, for each share you create you will have to select a policy. The policy is applied to the object that is created for the file share. Each object, as always, consists of one or multiple components, and those components count towards the maximum number of components a vSAN cluster can have. For a vSAN ESA host the maximum is 27k components; for a vSAN OSA host the maximum is 9k components. Do take into consideration that RAID-1 results in a different number of components than RAID-6, for instance, but in general this should not be a huge concern for most customers, unless you have a very large environment (or a small environment and you are pushing the boundaries in terms of shares etc.).
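To make the arithmetic concrete, here is a back-of-the-envelope sketch. The per-policy counts are the textbook minimums per object (RAID-1 with FTT=1: two mirror replicas plus a witness = 3 components; RAID-6: four data plus two parity = 6 components) and deliberately ignore striping, large-object splitting, and resync overhead.

```python
# Back-of-the-envelope component math for vSAN File Services shares.
# Minimum components per object for two common policies (ignores
# striping, large-object splitting, and resync overhead):
#   RAID-1 (FTT=1): 2 mirror replicas + 1 witness = 3 components
#   RAID-6 (FTT=2): 4 data + 2 parity        = 6 components
MIN_COMPONENTS = {"RAID-1": 3, "RAID-6": 6}

# Per-host component maximums mentioned above.
MAX_COMPONENTS_PER_HOST = {"ESA": 27_000, "OSA": 9_000}

def cluster_component_budget(arch: str, hosts: int) -> int:
    """Total component budget for a cluster of the given architecture."""
    return MAX_COMPONENTS_PER_HOST[arch] * hosts

def shares_component_cost(policy: str, num_shares: int) -> int:
    """Minimum components consumed by N file shares using one policy."""
    return MIN_COMPONENTS[policy] * num_shares
```

For example, a 4-host OSA cluster has a 36,000-component budget, and 100 RAID-6 file shares consume at least 600 of those, which illustrates why this only becomes a concern at the extremes.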

I hope this helps. PS: The video below shows a demo I gave a few years back in which I inspect these components in the UI and CLI.

Doing site maintenance in a vSAN Stretched Cluster configuration

Duncan Epping · Jan 15, 2025 · Leave a Comment

I thought I wrote an article about this years ago, but it appears I wrote an article about placing a 2-node configuration into maintenance mode instead. As I’ve received some questions on this topic, I figured I would write a quick article describing the concept of site maintenance. Note that a future version of vSAN will have an option in the UI that helps with this, as described here.

First and foremost, you will need to validate that all data is replicated. In some cases we see customers pinning data (VMs) to a single location without replication, and those VMs will be directly impacted when a whole site is placed in maintenance mode. Those VMs will need to be powered off, or, if they need to stay running, you will need to make sure they are moved to the location that remains running. Do note that if you flip “Preferred / Secondary” and there are many VMs that are site local, this could lead to a huge amount of resync traffic. If those VMs need to stay running, you may also want to reconsider your decision not to replicate them!

These are the steps I would take when placing a site into maintenance mode:

  1. Verify the vSAN Witness is up and running and healthy (see health checks)
  2. Check compliance of VMs that are replicated
  3. Configure DRS to “partially automated” or “Manual” instead of “Fully automated”
  4. Manually vMotion all VMs from Site X to Site Y
  5. Place each ESXi host in Site X into maintenance mode with the option “no data migration”
  6. Power Off all the ESXi hosts in Site X
  7. Enable DRS again in “fully automated” mode so that within Site Y the environment stays balanced
  8. Do whatever needs to be done in terms of maintenance
  9. Power On all the ESXi hosts in Site X
  10. Exit maintenance mode for each host

Do note that VMs will not automatically migrate back until the resync for a given VM has fully completed; DRS and vSAN are aware of the replication state! Additionally, if VMs are actively doing IO while the hosts in Site X are going into maintenance mode, the state of the data stored on the hosts within Site X will differ. This concern will be resolved in the future by a “site maintenance” feature, as discussed at the start of this article.
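The steps above can be captured as a simple ordered runbook. The helper below is purely illustrative, generating the checklist per site pair; it makes no vSphere API calls.

```python
# Illustrative runbook generator for stretched-cluster site maintenance.
# It only produces the ordered checklist described above; it does not
# call any vSphere or vSAN API.

def site_maintenance_runbook(source_site: str, target_site: str) -> list[str]:
    """Ordered steps for placing source_site into maintenance mode."""
    return [
        "verify the vSAN Witness is up, running, and healthy",
        "check compliance of VMs that are replicated",
        'set DRS to "partially automated" or "manual"',
        f"vMotion all VMs from {source_site} to {target_site}",
        f'place each ESXi host in {source_site} into maintenance mode '
        '("no data migration")',
        f"power off all ESXi hosts in {source_site}",
        f'set DRS back to "fully automated" so {target_site} stays balanced',
        "perform the maintenance",
        f"power on all ESXi hosts in {source_site}",
        "exit maintenance mode for each host",
    ]
```

Scripting the checklist this way makes it easy to swap the site names when you later do maintenance on the other site.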



About the Author

Duncan Epping is a Chief Technologist and Distinguished Engineering Architect at Broadcom. Besides writing on Yellow-Bricks, Duncan is the co-author of the vSAN Deep Dive and the vSphere Clustering Deep Dive book series. Duncan is also the host of the Unexplored Territory Podcast.


Copyright Yellow-Bricks.com © 2025