Yellow Bricks

by Duncan Epping


vSAN File Services fails to create file share with the error “Failed to create the VDFS File System”

Duncan Epping · Oct 4, 2022

Last week on our internal Slack channel one of the field folks had a question. He was hitting a situation where vSAN File Services failed when creating a file share with the error “Failed to create the VDFS File System”. We went back and forth a bit, and after a while I jumped on Zoom to look at the issue and troubleshoot the environment. After testing various combinations of policies I noticed that a particular policy worked, while another did not. At first it looked like stretched cluster policies would not work, but after creating a new policy with a different name it did work. That left one thing: the name of the policy. It appears that the use of special characters in the VM Storage Policy name results in the error “Failed to create the VDFS File System”. In this particular case the VM Storage Policy that was used was “stretched – mirrored FTT=1 RAID-1”, and the character causing the issue was the “=” character.

How do you resolve it? Simply change the name of the policy. For instance, the following would work: “stretched – mirrored FTT1 RAID-1”.
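
By the way, if you want to quickly scan a policy name for characters that could trip this up, a one-liner from any shell does the trick. This is a minimal sketch, and note that the “safe” character set used here (letters, digits, spaces, underscores, and hyphens) is my own conservative assumption, not an official list:

$ echo "stretched - mirrored FTT=1 RAID-1" | grep -o '[^A-Za-z0-9 _-]'
=

Anything grep prints is a character you probably want to remove from the policy name before using the policy with vSAN File Services.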

vSAN Storage Rules policy capability allows you to set dedupe per VM?

Duncan Epping · Aug 24, 2021

There was a question posted on the VMware Community Forums, and as this is something I have been asked regularly, I figured I would do a quick blog post about it. Although I have covered this before, it doesn’t hurt to repeat, as it appears to be somewhat confusing for people. When you create a VM Storage Policy, starting with vSAN 7.0 U2 you have the ability to specify whether a VM needs to be encrypted, needs Dedupe and Compression enabled, needs Compression-Only enabled, and/or needs to be stored on all-flash or hybrid vSAN. Never noticed it? Look at the screenshot below.

In the screenshot, you see that you have the ability to specify which data service needs to be enabled. I guess this is where the confusion comes into play, as this functionality is not about enabling the data service for the VM to which you assign the policy. This is about which data service needs to be enabled on the datastore to which the VM can be provisioned. Huh, what? Okay, let’s explain.

If you are using vSAN as your storage platform, and you are sharing vSAN Datastores between clusters leveraging the HCI Mesh feature, then you could find yourself in a situation where some clusters are hybrid and some are all-flash. Some may have data services enabled like Encryption or Deduplication, some may not. In that scenario you want to be able to specify which features need to be enabled for the datastore the VM is provisioned to. So what this “storage rules” feature does is ensure that the datastore which is shown as “compatible” actually has the specified capabilities enabled! In other words, if you tick “data-at-rest encryption” in a policy and assign the policy to a VM, then only the datastores which have “data-at-rest encryption” enabled will be shown as compatible with your VM!

So again, “storage rules” apply to the data services that should be enabled on the vSAN Datastore, and do not enable data services on a per VM/VMDK basis.
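
As a side note, if you want to double-check from the ESXi shell which of these data services are actually enabled on a cluster’s capacity devices, esxcli can show you. The output below is illustrative and trimmed (it is reported per disk, and the exact fields can vary by vSAN version):

$ esxcli vsan storage list | grep -E 'Deduplication|Compression'
   Deduplication: true
   Compression: true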

<Update for vSAN 8.0 ESA>

With vSAN 8.0 ESA the above has changed. With vSAN 8.0 ESA you can set compression per VM, and you actually do that using the policy storage rules. I discussed this in this blog post.

Which vSAN Maintenance Mode option was used on a host?

Duncan Epping · Jun 7, 2021

Last week there was a question on VMTN around maintenance mode. The customer wanted to find out which vSAN Maintenance Mode option was used while a host was placed in maintenance mode. The person who placed the host into maintenance mode wasn’t around, and I guess the customer wanted to know whether data was moved off the host or not. They looked at the events and tasks, but unfortunately it wasn’t obvious from that section of vCenter, so next up were the logs. I also looked at the logs, and you would expect this info to be stored in hostd.log, but it wasn’t. So where is it? Well, it actually is a configuration item, and you can dig it up here:

Pre ESXi 7.0, the info is stored in the “esx.conf” file; just grep for “hostDecommission”. Let me show you:

$ grep "/vsan/hostDecommission" /etc/vmware/esx.conf
/vsan/hostDecommissionVersion = "10"
/vsan/hostDecommissionState = "decom-state-decommissioned"
/vsan/hostDecommissionMode = "decom-mode-ensure-object-accessibility"

If the ESXi host is at 7.0 or later, just run the ConfigStore command below:

$ configstorecli config current get -c vsan -g system -k host_state
{
   "decom_mode": "ENSUREOBJECT_ACCESSIBILITY",
   "decom_state": "DECOMMISSIONED",
   "decom_version": 0
}
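
If you manage hosts on both sides of the 7.0 boundary and don’t want to remember which command applies where, a small dispatch in the ESXi shell helps. Treating everything from 7.x onwards as ConfigStore-based is my own assumption here, so adjust the version test as needed:

# Query the vSAN decommission state, picking the right source per version
if vmware -v | grep -qE 'ESXi [7-9]\.'; then
  configstorecli config current get -c vsan -g system -k host_state
else
  grep "/vsan/hostDecommission" /etc/vmware/esx.conf
fi

Either way, the decom mode value tells you which of the three maintenance mode options (Full data migration, Ensure accessibility, or No data migration) was used; in the example output above it was Ensure accessibility.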

I hope that helps others who have a similar question!

HCI Mesh error: Failed to run the remote datastore mount pre-checks

Duncan Epping · Apr 21, 2021

I had two comments on my HCI Mesh compute-only blog post, both reporting the same error when trying to mount a remote datastore. The error that popped up was the following:

Failed to run the remote datastore mount pre-checks.

I tried to reproduce it in my lab; as both had upgraded from 7.0 to U2, I did the same, but that did not result in the same error. Unfortunately, the error itself doesn’t provide any details around why the pre-check fails. After some digging I found out that the solution is simple though: you need to make sure IPv6 is enabled on your hosts. Yes, even when you are not using IPv6, it still needs to be enabled to pass the pre-checks. Thanks, Jiří and Reza, for raising the issue!
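
To check whether IPv6 is enabled on a host, and to enable it when it isn’t, the below should do the trick from the ESXi shell. The output is trimmed to the relevant line, and keep in mind that enabling IPv6 requires a reboot to take effect:

$ esxcli network ip get | grep -i ipv6
   IPv6 Enabled: true
$ esxcli network ip set --ipv6-enabled=true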


Using HCI Mesh with a stand-alone vSphere host?

Duncan Epping · Apr 13, 2021

Last week at the French VMUG we received a great question. The question was whether you can use HCI Mesh (datastore sharing) with a stand-alone vSphere host. The answer is simple: no, you cannot. VMware does not support enabling vSAN, and HCI Mesh, on a single stand-alone host. However, if you still want to mount a vSAN Datastore from a single vSphere host, there is a way around this limitation.

First, let’s list the requirements:

  1. The host needs to be managed by the same vCenter Server as the vSAN Cluster
  2. The host needs to be in the same virtual datacenter as the vSAN Cluster
  3. A low-latency, high-bandwidth connection between the host and the vSAN Cluster

If you meet these requirements, then what you can do to mount the vSAN Datastore to a single host is the following:

  1. Create a cluster without any services enabled
  2. Add the stand-alone host to the cluster
  3. Enable vSAN, select “vSAN HCI Mesh Compute Cluster”
  4. Mount the datastore

Note: when you create a cluster and add a host, vCenter/EAM will try to provision the vCLS VM. Of course, this VM is not really needed, as HA and DRS are not useful with a single-host cluster. So what you can do is enable “retreat mode”. For those who don’t know how to do this, or those who want to know more about vCLS, read this article.
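
For reference, retreat mode boils down to a single vCenter Server advanced setting, keyed to the cluster’s domain ID (you can find the domain-c#### ID in the vSphere Client URL when the cluster is selected). The ID below is just a placeholder:

# vCenter Server > Advanced Settings; replace domain-c1234 with your cluster's ID
config.vcls.clusters.domain-c1234.enabled = false

Set it back to true whenever you want the vCLS VMs to be redeployed.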

As I had to test the above in my lab, I also created a short video demonstrating the workflow.
