
Yellow Bricks

by Duncan Epping



vSAN File Services and the different communication layers

Duncan Epping · Apr 21, 2020 ·

I received a bunch of questions based on my vSAN File Services posts over the past couple of days. Most questions were around how the different layers talk to each other, and where vSAN comes into play in this platform. I understand why; I haven't discussed this aspect yet, primarily because I wasn't sure what I could/should talk about. Let's start with a description of how communication works, top to bottom.

  • The NFS Client connects to the vSAN File Services NFS Server
  • The NFS Server runs within the protocol stack container, the IPs provided during the configuration are assigned to the protocol stack container
  • The protocol stack container runs within FS VM, the FS VM has no IP address assigned
  • The FS VM has a VMCI device (vSocket interface), which is used to communicate with the ESXi host securely
  • The ESXi host has VDFS kernel modules
  • VDFS communicates with vSAN layer and SPBM
  • vSAN is responsible for the lifecycle management of objects
  • A file share has a 1:1 relationship with a VDFS volume and is formed out of vSAN objects
  • Each file share / VDFS volume has a policy assigned, and the layout of the vSAN objects is determined by this policy
  • Objects are formatted with the VDFS file system and presented as a single VDFS volume
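The relationships in the list above can be sketched as a small toy model. To be clear, none of the classes or functions below are real VMware APIs; they only illustrate the 1:1 relationship between a file share and a VDFS volume, and how the storage policy assigned to the share drives the layout of the backing vSAN objects.

```python
from dataclasses import dataclass

@dataclass
class VsanObject:
    uuid: str
    policy: str  # the layout (FTT, RAID level, ...) is derived from this policy

@dataclass
class VdfsVolume:
    name: str
    policy: str
    objects: list  # the vSAN objects that are formatted with VDFS

@dataclass
class FileShare:
    name: str
    volume: VdfsVolume  # exactly one VDFS volume per file share

def create_share(name: str, policy: str, object_uuids: list) -> FileShare:
    """Toy helper: every backing object inherits the policy assigned to the share."""
    objects = [VsanObject(uuid=u, policy=policy) for u in object_uuids]
    return FileShare(name=name,
                     volume=VdfsVolume(name=name, policy=policy, objects=objects))

share = create_share("share01", "RAID-1 FTT=1", ["obj-1", "obj-2"])
print(share.volume.policy)        # the policy assigned to the share/volume
print(len(share.volume.objects))  # number of vSAN objects backing the volume
```

Again, this is purely to visualize the mapping: one share, one VDFS volume, multiple vSAN objects, one policy flowing down the whole stack.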

I guess a visual may help clarify things a bit, as for me it also took a while to wrap my head around this. Look at the diagram below.

So in other words, every FS VM allows for communication to the kernel using the vSockets library through the VMCI device. I am not going to explain what vSocket is, as the previous link refers to a lengthy document on this topic. The VDFS layer leverages vSAN and SPBM for the lifecycle management of the objects that form a file share. So what is this VDFS layer then? Well, VDFS is the layer that exposes a (distributed) file system that resides within the vSAN object(s) and allows the protocol stack container to share it as NFS v3 or v4.1. As mentioned, the objects are presented as a single VDFS volume.

So even though vSAN File Services uses a VM to ultimately allow a client to connect to a share, the important part here is that the VM is only used for the protocol stack container. All of the distributed file system logic lives within the vSphere layer. I hope that helps to explain the architecture a bit and how the layers communicate. I also recorded a quick demo, including the diagram above with the explanation of the layers, that shows how a protocol stack container is moved from one FS VM to another when a host goes into maintenance mode. This allows NFS clients to stay connected to the same IP address for their file shares with NFS v3; for NFS v4.1, we also provide the ability to connect to a primary IP address and load balance automatically.

 

Enabling vSAN File Services in a vSAN cluster larger than 8 hosts

Duncan Epping · Apr 20, 2020 ·

I noticed something over the weekend, and I want to make sure customers do not run into this problem. If you have more than 8 hosts in your vSAN cluster and enable vSAN File Services, the H5 client will ask you for more than 8 IP addresses. These IP addresses are used by the protocol stack containers. However, as described in this post, vSAN File Services will only ever instantiate 8 protocol stack containers in the current release. So do not provide more than 8 IPs. I tried it, and I ran into a scenario where vSAN File Services was not configured completely and properly as a result. You can simply click the "x" as pointed out in the screenshot below to remove the extra IP address entry line(s) to work around this issue. Hopefully it will be fixed soon in the UI.
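The check the UI is currently missing is trivial. Here is an illustrative sketch (my own helper, not a VMware tool) of the validation you effectively have to do by hand: cap the IP pool at the number of protocol stack containers that will ever be instantiated.

```python
# Current release only ever instantiates 8 protocol stack containers,
# regardless of how many hosts are in the vSAN cluster.
MAX_PROTOCOL_CONTAINERS = 8

def validate_ip_pool(ip_addresses):
    """Illustrative check: reject an IP pool larger than the number of
    protocol stack containers vSAN File Services can actually use."""
    if len(ip_addresses) > MAX_PROTOCOL_CONTAINERS:
        extra = len(ip_addresses) - MAX_PROTOCOL_CONTAINERS
        raise ValueError(
            f"{len(ip_addresses)} IPs supplied, but only "
            f"{MAX_PROTOCOL_CONTAINERS} protocol stack containers are ever "
            f"instantiated; remove the last {extra} entries via the 'x' in the UI"
        )
    return ip_addresses

# 8 IPs: fine, even on a 12-host cluster
validate_ip_pool([f"10.0.0.{i}" for i in range(1, 9)])
```

In other words: on a cluster larger than 8 hosts, stop typing after the 8th address, whatever the wizard asks for.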

vSAN FS: Existing domain information has been pre-populated below

Duncan Epping · Apr 16, 2020 ·

I have been playing with vSAN File Services a lot the past couple of weeks, configuring and re-configuring it a few times. At some point, I found myself in a situation where, when I wanted to enable vSAN File Services and provide new IP details, I received the following error: "Existing domain information has been pre-populated below", as shown in the screenshot below.

Why did this happen? Well, the configuration details are stored in the objects that form the file shares. I disabled vSAN File Services while I still had file shares running. As a result, when you enable vSAN File Services again, it detects the file share objects, reads the configuration details, and assumes that you will want to configure it with the same Domain/Network details so that you can access the existing shares. But what if you don't? What if you want a brand new shiny environment? Well, that is also possible, and you can do it as follows:

  • Enable vSAN File Services with existing domain information
  • When configured, go to File Service Shares and delete all existing file shares
  • When all are deleted, disable vSAN File Services
  • When all tasks are complete, enable vSAN File Services again
  • Enter new Domain and Networking details

Pretty simple, right?
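To make the behavior concrete, here is a toy model (pseudo-API, not PowerCLI or the vSphere SDK) of why the old domain keeps coming back: the domain config is persisted inside the file share objects, so disabling File Services alone never forgets it; only deleting the shares does.

```python
class Cluster:
    """Toy model: domain config lives inside the file share objects,
    which is why disabling FS alone does not forget the old domain."""
    def __init__(self):
        self.fs_enabled = False
        self.domain = None
        self.shares = []  # each share carries the domain it was created with

    def enable_fs(self, new_domain):
        # mirror the UI behaviour: pre-populate from surviving share objects
        self.domain = self.shares[0]["domain"] if self.shares else new_domain
        self.fs_enabled = True

    def create_share(self, name):
        self.shares.append({"name": name, "domain": self.domain})

    def delete_all_shares(self):
        self.shares.clear()

    def disable_fs(self):
        self.fs_enabled = False

# the procedure from the bullet list above:
c = Cluster()
c.enable_fs("old.domain")
c.create_share("share01")
c.disable_fs()
c.enable_fs("new.domain")   # shares still exist: old domain is pre-populated
assert c.domain == "old.domain"
c.delete_all_shares()       # delete all existing file shares
c.disable_fs()              # then disable vSAN File Services
c.enable_fs("new.domain")   # re-enable: now the new details stick
assert c.domain == "new.domain"
```

The delete-then-disable order matters: as long as a single share object survives, a re-enable will read the old details back out of it.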

vSAN File Services considerations

Duncan Epping · Apr 15, 2020 ·

I was looking into vSAN File Services this week as I had some customers asking about requirements and constraints. I wanted to list some of the things to understand about vSAN File Services, as they are important when you are designing and configuring it. First of all, it is good to have at least some understanding of the implementation, as vSAN File Services is managed/upgraded/updated as part of vSAN. It is not an entity you manage as an admin; you don't manage the appliance you see deployed. I created a quick demo about vSAN File Services, which you can find here.

If you look at the diagram (borrowed from the VMware documentation) above, you can see that vSAN File Service leverages Agent/Appliance VMs and within each Agent VM a container, or "protocol stack", is running. The protocol stack exposes the file system as an NFS file share.

Scaling out your vSAN File Services Cluster

Duncan Epping · Apr 10, 2020 ·

This week I have been testing with vSAN File Services and one of the procedures I wanted to run through was scaling out my vSAN File Services cluster. In my case, I have a cluster of 5 hosts and what I want to do is add a host to my vSAN cluster, expand the vSAN Datastore and also grow my vSAN File Services cluster.

First of all, when you add a host to the cluster you need to make sure it is in maintenance mode. If it is not in maintenance mode, vSAN FS will instantly try to clone a vSAN File Services agent VM (FS VM) onto it, and that process will fail as there is no disk group yet. So make sure to place the host into maintenance mode before adding it to the cluster.

After you have added it to the cluster, you first have to create the disk group: claim all the disks that need to be part of the disk group and create it. When you have done that, you can take the host out of maintenance mode. Now the FS VM will be cloned and powered on. However, you will also need to expand the IP Pool for the vSAN FS protocol stack container. You can do this as follows:

  • Go to your cluster
  • Click on vSAN / Services
  • Go to File Service and click Edit on the right
  • Go to the IP Pool page by clicking Next twice
  • Add that additional IP address and DNS Name
  • Click Next / Finish

Now a new Protocol Stack Container can be instantiated in that new FS VM, and your vSAN File Services cluster has been scaled out properly. I created a simple demo showing what the process looks like; make sure to check it out below!
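The ordering constraint in this scale-out procedure can be sketched as a toy sequence (again, not a real API): add the host while it is in maintenance mode, create the disk group, exit maintenance mode, then grow the IP pool.

```python
class Host:
    def __init__(self, name):
        self.name = name
        self.in_maintenance = True  # enter maintenance mode BEFORE adding
        self.disk_group = False
        self.fsvm = False

def add_host_to_cluster(host):
    if not host.in_maintenance:
        # the failure mode described above: the FS VM clone fires
        # immediately and fails because there is no disk group yet
        raise RuntimeError("FS VM clone attempted before a disk group exists")

def exit_maintenance(host):
    host.in_maintenance = False
    if host.disk_group:
        host.fsvm = True  # FS VM is cloned and powered on now

h = Host("esxi06")                    # hypothetical 6th host
add_host_to_cluster(h)                # ok: host is in maintenance mode
h.disk_group = True                   # claim disks, create the disk group
exit_maintenance(h)                   # FS VM gets cloned and powered on
ip_pool = [f"10.0.0.{i}" for i in range(1, 6)]  # existing 5-host pool
ip_pool.append("10.0.0.6")            # expand the pool for the new container
assert h.fsvm and len(ip_pool) == 6
```

The key takeaway is the guard in `add_host_to_cluster`: skip maintenance mode and the FS VM clone races ahead of the disk group creation.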

