
Yellow Bricks

by Duncan Epping



vSAN File Services fails to create a file share with the error “Failed to create the VDFS File System”

Duncan Epping · Oct 4, 2022 ·

Last week on our internal Slack channel one of the field folks had a question. He was hitting a situation where vSAN File Services failed when creating a file share with the error “Failed to create the VDFS File System”. We went back and forth a bit, and after a while I jumped on Zoom to look at the issue and troubleshoot the environment. After testing various combinations of policies I noticed that a particular policy worked, while another policy did not. At first it looked as if stretched cluster policies would not work, but after creating a new policy with a different name it did work. That left one thing: the name of the policy. It appears that the use of special characters in the VM Storage Policy name results in the error “Failed to create the VDFS File System”. In this particular case the VM Storage Policy that was used was “stretched – mirrored FTT=1 RAID-1”. The character that was causing the issue was the “=” character.

How do you resolve it? Simply change the name of the policy. For instance, the following would work: “stretched – mirrored FTT1 RAID-1”.
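If you want to catch this up front, a quick sanity check on the policy name can help before you create the share. Below is a purely illustrative sketch; the “safe” character set is my own assumption, not an official list:

# Illustrative pre-check (not an official tool): flag characters
# outside a conservative set before using the name with File Services
POLICY_NAME="stretched – mirrored FTT=1 RAID-1"
if echo "$POLICY_NAME" | grep -q '[^A-Za-z0-9 _.-]'; then
  echo "Policy name contains special characters, consider renaming it"
fi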

Which vSAN Maintenance Mode option was used on a host?

Duncan Epping · Jun 7, 2021 ·

Last week there was a question on VMTN around maintenance mode. This customer wanted to find out which vSAN Maintenance Mode option was used while the host was placed in maintenance mode. The person who placed the host into maintenance wasn’t around, and I guess the customer wanted to know whether data was moved from the host or not. They looked at the events and tasks, but unfortunately it wasn’t obvious from that vCenter section, so next up were the logs. I also looked at the logs, and you would expect this info to be stored in hostd.log, but it wasn’t. So where is it? Well, it actually is a configuration item, and you can dig it up here:

Pre-ESXi 7.0 the info is stored in the “esx.conf” file; just grep for “hostDecommission”. Let me show you:

$ grep "/vsan/hostDecommission" /etc/vmware/esx.conf
/vsan/hostDecommissionVersion = "10"
/vsan/hostDecommissionState = "decom-state-decommissioned"
/vsan/hostDecommissionMode = "decom-mode-ensure-object-accessibility"

If the host is running ESXi 7.0 or later, just run the below configstorecli command:

$ configstorecli config current get -c vsan -g system -k host_state
{
   "decom_mode": "ENSUREOBJECT_ACCESSIBILITY",
   "decom_state": "DECOMMISSIONED",
   "decom_version": 0
}
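If you don’t know up front which version the host runs, a small sketch like the following (assuming shell access to the ESXi host) checks both locations:

# Sketch: report the decommission state regardless of ESXi version
if grep -q "/vsan/hostDecommission" /etc/vmware/esx.conf 2>/dev/null; then
  grep "/vsan/hostDecommission" /etc/vmware/esx.conf
else
  configstorecli config current get -c vsan -g system -k host_state
fi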

I hope that helps others who have a similar question!

vSphere HA configuration for HCI Mesh!

Duncan Epping · Oct 29, 2020 ·

I wrote a vSAN HCI Mesh Considerations blog post a few weeks ago. Based on that post I received some questions, and one of them was around vSphere HA configurations. Interestingly, I also had some internal discussions around how vSAN HCI Mesh and HA were integrated, and based on those discussions I did some testing just to validate my understanding of the implementation.

Now, when it comes to vSphere HA and vSAN, the majority of you will be following the vSAN Design Guide and understand that having HA enabled is crucial for vSAN. Also, when it comes to vSAN, configuring the Isolation Response is crucial, as is setting the correct Isolation Address. However, so far there has been one HA feature which you did not have to configure for vSAN and HA to function correctly, and that feature is VM Component Protection, aka the APD / PDL responses.

Now, this changes with HCI Mesh. Specifically for HCI Mesh, the HA and vSAN teams have worked together to detect APD (All Paths Down) scenarios! When would this happen? Well, if you look at the diagram below you can see that we have “Client Clusters” and a “Server Cluster”. The “Client Cluster” consumes storage from the “Server Cluster”. If for whatever reason a host in the “Client Cluster” loses access to the “Server Cluster”, the VMs on that host which consume storage on the “Server Cluster” lose access to their datastore. This is essentially an APD (All Paths Down) scenario.

Now, to ensure the VMs are protected by HA in this situation, you only need to enable the APD response. This is very straightforward. You simply go to the HA cluster settings and set the “Datastore with APD” setting to either “Power off and restart VMs – Conservative” or “Power off and restart VMs – Aggressive”. The difference is that with Conservative, HA will only kill the VMs when it knows for sure the VMs can be restarted, whereas with Aggressive it will also kill the VMs on a host impacted by an APD even when it isn’t sure it can restart the VMs. Most customers will use the Conservative restart policy, by the way.

As I also mentioned in the HCI Mesh Considerations blog, one thing I would like to call out is the timing of the APD scenario: the APD is declared after 60 seconds, after which the APD response (restart) is triggered automatically after 180 seconds. Mind that this is different from an APD response with traditional storage, where it takes 140 seconds before the APD is declared. You can, of course, see in the log file that an APD is detected and declared, and that VMs are killed as a result. Note that the “fdm.log” is quite verbose, so I copied only the relevant lines from my tests.

APD detected for remote vSAN Datastore /vmfs/volumes/vsan:52eba6db0ade8dd9-c04b1d8866d14ce5
Go to terminate state for VM /vmfs/volumes/vsan:52eba6db0ade8dd9-c04b1d8866d14ce5/a57d9a5f-a222-786a-19c8-0c42a162f9d0/YellowBricks.vmx due to APD timeout (CheckCapacity:false)
Failover operation in progress on 1 Vms: 1 VMs being restarted, 0 VMs waiting for a retry, 0 VMs waiting for resources, 0 inaccessible vSAN VMs.
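If you want to dig these entries out of your own environment, something along the following lines should surface them on the impacted host (a sketch; adjust the patterns to taste):

$ grep -E "APD|terminate state|Failover operation" /var/log/fdm.log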

Now, for those wondering if it actually works: of course, I tested it a few times and recorded a demo, which can be watched on YouTube (easier to follow in full screen). (Make sure to subscribe to the channel for the latest videos!)

I hope this helps!

VMware vSphere Cluster Services (vCLS) considerations, questions and answers.

Duncan Epping · Oct 9, 2020 ·

In the vSphere 7.0 Update 1 release, VMware introduced a new service called the VMware vSphere Cluster Services (vCLS). vCLS provides a mechanism that allows VMware to decouple both vSphere DRS and vSphere HA from vCenter Server. Niels Hagoort wrote a lengthy article on this topic here. You may wonder why VMware introduced this; well, as Niels states, by decoupling the clustering services (DRS and HA) from vCenter Server via vCLS, we ensure the availability of critical services even when vCenter Server is impacted by a failure.

vCLS is a collection of multiple VMs which, over time, will be the backbone for all clustering services. In the 7.0 U1 release a subset of DRS functionality is enabled through vCLS. Over the past weeks I have seen many questions coming in, and I wanted to create a blog with answers to these questions. When new questions or considerations come up, I will add them to the list below.

[Read more…] about VMware vSphere Cluster Services (vCLS) considerations, questions and answers.

Site locality in a vSAN Stretched Cluster?

Duncan Epping · May 28, 2019 ·

On the community forums, a question was asked around the use of site locality in a vSAN Stretched Cluster. When you create a stretched cluster in vSAN, you can define within a policy how the data needs to be protected. Do you want to replicate across datacenters? Do you want to protect the “site local data” with RAID-1 or RAID-5/6? All of these options are available within the UI.

What if you decide to not stretch your object across locations, is it mandatory to specify which datacenter the object should reside in?

The answer is simple: no, it is not. The real question, of course, is: should you define the location? Most definitely! If you wonder how to do this, simply specify it within the policy you define for these objects, as follows:

The above screenshot is taken from the H5 client, if you are still using the Web Client it probably looks slightly different (Thanks Seamus for the screenshot):

Why would you do this? Well, that is easy to explain. When the objects of a VM get provisioned, the decision where to place them is made per object. If you have multiple disks and you haven’t specified the location, you could find yourself in the situation where the disks of a single non-stretched VM are located in different datacenters. This is, first of all, terrible for performance, but maybe more importantly it would also impact availability when anything happens to the network between the datacenters. So when you use non-stretched VMs, make sure to also configure the site locality so that your VM and its objects will align, as demonstrated in the diagram below.
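If you want to verify where the objects of a VM actually ended up, you can dump the object layout from any host in the cluster and check on which hosts the components live (a sketch; the output is verbose, so pipe it through a pager or grep for the VM in question):

$ esxcli vsan debug object list | less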


