
Yellow Bricks

by Duncan Epping



HCI Mesh error: Failed to run the remote datastore mount pre-checks

Duncan Epping · Apr 21, 2021 ·

I received two comments on my HCI Mesh compute-only blog post, both reporting the same error when trying to mount a remote datastore. The error that popped up was the following:

Failed to run the remote datastore mount pre-checks.

I tried to reproduce it in my lab. As both readers had upgraded from 7.0 to U2, I did the same, but that didn’t result in the same error. The error doesn’t provide any details on why the pre-check fails, as shown in the screenshot below. After some digging I found out that the solution is simple though: you need to make sure IPv6 is enabled on your hosts. Yes, even when you are not using IPv6, it still needs to be enabled to pass the pre-checks. Thanks, Jiří and Reza, for raising the issue!

[Screenshot: the "Failed to run the remote datastore mount pre-checks" error]
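For those who want to quickly verify this across all hosts before retrying the mount, below is a minimal pyVmomi sketch. The vCenter name and credentials are placeholders, and I am assuming the ipV6Enabled / atBootIpV6Enabled properties of the host network configuration here; treat it as a starting point, not a finished tool.

    # A minimal sketch: report the IPv6 state of every ESXi host known to vCenter.
    # The vCenter address and credentials are placeholders for your own environment.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only; use verified certificates in production
    si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                      pwd="VMware1!", sslContext=ctx)
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        net = host.config.network
        print(f"{host.name}: IPv6 enabled = {net.ipV6Enabled}, enabled at boot = {net.atBootIpV6Enabled}")
        # To enable IPv6, run "esxcli network ip set --ipv6-enabled=true" on the host and reboot it.
    Disconnect(si)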

Using HCI Mesh with a stand-alone vSphere host?

Duncan Epping · Apr 13, 2021 ·

Last week at the French VMUG we received a great question: can you use HCI Mesh (datastore sharing) with a stand-alone vSphere host? The answer is simple: no, you cannot. VMware does not support enabling vSAN, and with it HCI Mesh, on a single stand-alone host. However, if you still want to mount a vSAN datastore from a single vSphere host, there is a way around this limitation.

First, let’s list the requirements:

  1. The host needs to be managed by the same vCenter Server as the vSAN Cluster
  2. The host needs to be under the same virtual datacenter as the vSAN Cluster
  3. Low latency, high bandwidth connection between the host and the vSAN Cluster

If you meet these requirements, then what you need to do to mount the vSAN Datastore to the single host is the following (a scripted sketch of the first two steps follows the list):

  1. Create a cluster without any services enabled
  2. Add the stand-alone host to the cluster
  3. Enable vSAN, select “vSAN HCI Mesh Compute Cluster”
  4. Mount the datastore
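For those who prefer to script this, here is a minimal pyVmomi sketch of steps 1 and 2. All names and credentials are placeholders, and steps 3 and 4 (enabling the "vSAN HCI Mesh Compute Cluster" option and mounting the datastore) go through the vSAN management API or the UI and are not shown here.

    # A minimal sketch of steps 1 and 2: create an empty cluster (no services enabled)
    # and add the stand-alone host to it. All names and credentials are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only
    si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                      pwd="VMware1!", sslContext=ctx)
    content = si.RetrieveContent()

    # Requirement 2: use the virtual datacenter that also contains the vSAN cluster.
    datacenter = [dc for dc in content.rootFolder.childEntity
                  if isinstance(dc, vim.Datacenter) and dc.name == "Datacenter"][0]

    # Step 1: create a cluster without enabling any services (empty ConfigSpecEx).
    cluster = datacenter.hostFolder.CreateClusterEx(name="HCI-Mesh-Compute",
                                                    spec=vim.cluster.ConfigSpecEx())

    # Step 2: add the stand-alone host to the new cluster. If vCenter does not trust the
    # host certificate, the task will fail until sslThumbprint is set in the ConnectSpec.
    connect_spec = vim.host.ConnectSpec(hostName="esxi01.lab.local", userName="root",
                                        password="VMware1!", force=False)
    WaitForTask(cluster.AddHost_Task(spec=connect_spec, asConnected=True))

    Disconnect(si)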

Note that when you create a cluster and add a host, vCenter/EAM will try to provision the vCLS VM. Of course, this VM is not really needed, as HA and DRS are not useful in a single-host cluster. So what you can do is enable “retreat mode”. For those who don’t know how to do this, or who want to know more about vCLS, read this article.
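As a pointer, retreat mode boils down to a vCenter advanced setting whose key embeds the cluster’s managed object ID. The sketch below only derives and prints that key (names and credentials are placeholders, and the cluster name is the hypothetical one from the sketch above); creating that key with the value "false" in the vCenter Advanced Settings puts the cluster in retreat mode, as covered in the linked article.

    # A minimal sketch: derive the vCenter advanced setting key used for vCLS retreat mode
    # from the cluster's managed object ID. Names and credentials are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only
    si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                      pwd="VMware1!", sslContext=ctx)
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.ClusterComputeResource], True)
    for cluster in view.view:
        if cluster.name == "HCI-Mesh-Compute":  # hypothetical cluster name from the sketch above
            # _moId looks like "domain-c1234"; create the key below as a vCenter advanced
            # setting with the value "false" to put vCLS in retreat mode for this cluster.
            print(f"config.vcls.clusters.{cluster._moId}.enabled")
    Disconnect(si)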

As I had to test the above in my lab, I also created a short video demonstrating the workflow; watch it below.

Site locality in a vSAN Stretched Cluster?

Duncan Epping · May 28, 2019 ·

On the community forums, a question was asked around the use of site locality in a vSAN Stretched Cluster. When you create a stretched cluster in vSAN, you can define within a policy how the data needs to be protected. Do you want to replicate across datacenters? Do you want to protect the “site-local data” with RAID-1 or RAID-5/6? All of these options are available within the UI.

What if you decide not to stretch your object across locations? Is it mandatory to specify which datacenter the object should reside in?

The answer is simple: no, it is not. The real question, of course, is: should you define the location? Most definitely! If you wonder how to do this, simply specify it within the policy you define for these objects as follows:

The above screenshot is taken from the H5 client; if you are still using the Web Client, it probably looks slightly different (thanks, Seamus, for the screenshot):

Why would you do this? Well, that is easy to explain. When the objects of a VM get provisioned, the decision where to place them is made per object. If you have multiple disks and you haven’t specified the location, you could find yourself in the situation where the disks of a single non-stretched VM are located in different datacenters. This is, first of all, terrible for performance, but maybe more importantly, it would also impact availability when anything happens to the network between the datacenters. So when you use site locality for non-stretched VMs, make sure to also configure the location so that your VM and objects align, as demonstrated in the diagram below.


Impact of adding Persistent Memory / Optane Memory devices to your VM

Duncan Epping · May 22, 2019 ·

I had some questions around this in the past month, so I figured I would share some details. As persistent memory (Intel Optane memory devices, for instance) is getting more affordable and readily available, more and more customers are looking to use it. Some are already using it for very specific use cases, usually in situations where the OS and the app actually understand the type of device being presented. What does that mean? At VMworld 2018 there was a great session on this topic, and I captured the session in a post. Let me copy/paste the important bit for you, which discusses the different modes in which a persistent memory device can be presented to a VM.

  • vPMEMDisk = exposed to guest as a regular SCSI/NVMe device, VMDKs are stored on PMEM Datastore
  • vPMEM = Exposes the NVDIMM device in a “passthrough” manner; the guest can use it as a block device or a byte-addressable direct access device (DAX). This is the fastest mode, and most modern OSes support it
  • vPMEM-aware = This is similar to the mode above, but the difference is that the application understands how to take advantage of vPMEM

But what is the problem with this? What is the impact? Well, when you expose a Persistent Memory device to the VM, it is currently not protected by vSphere HA, even though HA may be enabled on your cluster. Say what? Yes indeed, the VM which has the PMEM device presented to it will be disabled for vSphere HA! I had to dig deep to find this documented anywhere; it is documented in this paper (page 47, at the bottom). So what works and what doesn’t? Well, if I understand it correctly:

  • vSphere HA >> Not supported on vPMEM enabled VMs, regardless of the mode
  • vSphere DRS >> Does not consider vPMEM enabled VMs, regardless of the mode
  • Migration of a VM with vPMEM / vPMEM-aware >> Only possible when migrating to a host which has PMEM
  • Migration of VM with vPMEMDISK >> Possible to a host without PMEM

Also note that, as the data is not replicated/mirrored, a failure could potentially lead to loss of data. Although Persistent Memory is a great mechanism to increase performance, this is something that should be taken into consideration when you are thinking about introducing it into your environment.
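If you want to check whether any VMs in your environment already fall into this category, below is a minimal pyVmomi sketch that lists VMs with a virtual NVDIMM (vPMEM) device and, where reported, their HA protection state. The vCenter name and credentials are placeholders, and this is an illustration under those assumptions rather than a supported tool.

    # A minimal sketch: list VMs that have a virtual NVDIMM (vPMEM) device and, where
    # available, their reported HA protection state. Names and credentials are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only
    si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                      pwd="VMware1!", sslContext=ctx)
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        devices = vm.config.hardware.device if vm.config else []
        nvdimms = [d for d in devices if isinstance(d, vim.vm.device.VirtualNVDIMM)]
        if nvdimms:
            protection = vm.runtime.dasVmProtection  # only populated on HA-enabled clusters
            protected = protection.dasProtected if protection else "unknown"
            print(f"{vm.name}: {len(nvdimms)} NVDIMM device(s), HA protected = {protected}")
    Disconnect(si)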

Oh, if you are wondering why people are taking these risks in terms of availability, Niels Hagoort just posted a blog with a pointer to a new PMEM Perf paper which is worth reading.



DRS Advanced Setting IsClusterManaged

Duncan Epping · May 7, 2019 ·

On Reddit, someone asked what the DRS advanced setting IsClusterManaged does and whether it is even legit. I can confirm it is legit: it is a setting which was introduced to prevent customers from disabling DRS while the cluster is managed by, for instance, vCloud Director. Disabling DRS would lead to deleting resource pools, which would be a very bad situation to find yourself in when you run vCloud Director, as it leans heavily on DRS resource pools. So if you see the advanced setting IsClusterManaged for DRS in your environment, just leave it alone; it is there for a reason. (Most likely because you are using something like vCloud Director…)
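If you are curious which DRS advanced options are configured on your clusters, the short pyVmomi sketch below prints them, which makes settings such as IsClusterManaged easy to spot. The vCenter name and credentials are placeholders; it only reads the configuration and changes nothing.

    # A minimal sketch: print the DRS advanced options configured on each cluster.
    # Names and credentials are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab only
    si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                      pwd="VMware1!", sslContext=ctx)
    content = si.RetrieveContent()

    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.ClusterComputeResource], True)
    for cluster in view.view:
        for opt in cluster.configurationEx.drsConfig.option:
            print(f"{cluster.name}: {opt.key} = {opt.value}")
    Disconnect(si)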
