
Yellow Bricks

by Duncan Epping


VMware Cloud Foundation

vSAN File Services: Seeing an imbalance between protocol stack containers and FS VMs

Duncan Epping · Apr 22, 2020 ·

Last week I got a question about vSAN File Services and an imbalance between protocol stack containers and FS VMs. I had personally witnessed the same thing and wasn't even sure what to expect. So what does this even mean, an imbalance? Well, as I have already explained, every host in a vSAN cluster which has vSAN File Services enabled will have a File Services VM. Within these VMs you will have protocol stack containers running, up to a maximum of 8 protocol stack containers per cluster. Just look at the diagram below.

Now this means that if you have 8 hosts or fewer in your cluster, by default every FS VM in your cluster will have a protocol stack container running. But what happens when a host goes into maintenance mode? When a host goes into maintenance mode, its protocol stack container moves to a different FS VM, so you end up in a situation where 2 (or even more) protocol stack containers run within 1 FS VM. That is the imbalance I just mentioned: more than 1 protocol stack container per FS VM, while another FS VM has 0 protocol stack containers. If you look at the screenshot below, I have 6 protocol stack containers, but as you can see the bottom two are on the same ESXi host, and there's no protocol stack container on host "dell-f".

How do you even this out? Well, it is simple, it just takes some time. vSAN File Services looks at the distribution of protocol stack containers every 30 minutes. Do note, it takes the number of file shares associated with each protocol stack container into consideration. If a protocol stack container has 0 file shares associated with it, vSAN isn't going to bother balancing it, as there's no point at that stage. However, if you have a number of shares and each protocol stack container owns one or more shares, then balancing will happen automatically. That is exactly what I witnessed in my lab: within 30 minutes the situation changed as shown in the screenshot below, a nicely balanced file services environment! (The protocol stack container ending with .215 moved to host "dell-f".)
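For those who prefer reasoning about this behavior in code, below is a minimal Python sketch of the rebalancing rule just described. It is purely illustrative: the class and function names are mine, the IPs and the "dell-e" host name are made up (only the .215 suffix and "dell-f" come from the post), and the real placement logic inside vSAN File Services is naturally more involved.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Container:
        ip: str
        share_count: int          # file shares owned by this protocol stack container

    @dataclass
    class FSVM:
        host: str
        containers: List[Container] = field(default_factory=list)

    def rebalance(fsvms: List[FSVM]) -> None:
        """Hypothetical version of the 30-minute pass: move a container that
        owns at least one share from an FS VM running more than one container
        to an FS VM running none. Containers with 0 shares are left alone,
        as there is no point in moving them."""
        empty = [vm for vm in fsvms if not vm.containers]
        for vm in [vm for vm in fsvms if len(vm.containers) > 1]:
            movable = [c for c in vm.containers if c.share_count > 0]
            if movable and empty:
                target = empty.pop(0)
                container = movable[0]
                vm.containers.remove(container)
                target.containers.append(container)  # the IP travels with the container

    # The situation from the screenshot: two containers on one host, none on "dell-f".
    fsvms = [FSVM("dell-e", [Container("10.0.0.215", 1), Container("10.0.0.214", 1)]),
             FSVM("dell-f")]
    rebalance(fsvms)
    print([(vm.host, [c.ip for c in vm.containers]) for vm in fsvms])
    # -> [('dell-e', ['10.0.0.214']), ('dell-f', ['10.0.0.215'])]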

vSAN File Services and the different communication layers

Duncan Epping · Apr 21, 2020 ·

I received a bunch of questions based on my vSAN File Services posts over the past couple of days. Most questions were about how the different layers talk to each other, and where vSAN comes into play in this platform. I understand why; I haven't discussed this aspect yet, primarily because I wasn't sure what I could/should talk about. Let's start with a description of how communication works, top to bottom.

  • The NFS Client connects to the vSAN File Services NFS Server
  • The NFS Server runs within the protocol stack container, the IPs provided during the configuration are assigned to the protocol stack container
  • The protocol stack container runs within FS VM, the FS VM has no IP address assigned
  • The FS VM has a VMCI device (vSocket interface), which is used to communicate with the ESXi host securely
  • The ESXi host has VDFS kernel modules
  • VDFS communicates with vSAN layer and SPBM
  • vSAN is responsible for the lifecycle management of objects
  • A file share has a 1:1 relationship with a VDFS volume and is formed out of vSAN objects
  • Each file share / VDFS volume has a policy assigned, and the layout of the vSAN objects is determined by this policy
  • Objects are formatted with the VDFS file system and presented as a single VDFS volume

I guess a visual may help clarify things a bit, as for me it also took a while to wrap my head around this. Look at the diagram below.
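And for those who parse code more easily than diagrams, here is the same layering expressed as plain Python data structures. All of the names below are mine, not VMware's; the sketch merely encodes who sits on top of whom, per the bullet list above.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class VsanObject:              # lifecycle managed by the vSAN layer
        uuid: str

    @dataclass
    class VdfsVolume:              # 1:1 with a file share, formed out of vSAN objects
        policy: str                # the SPBM policy determines the object layout
        objects: List[VsanObject]  # formatted with VDFS, presented as one volume

    @dataclass
    class FileShare:
        name: str
        volume: VdfsVolume

    @dataclass
    class ProtocolStackContainer:  # runs the NFS server; owns the configured IPs
        ip: str
        shares: List[FileShare]

    @dataclass
    class FSVM:                    # no IP of its own; talks to ESXi over VMCI/vSockets
        transport: str = "VMCI (vSockets)"
        container: Optional[ProtocolStackContainer] = None

    # A share with a hypothetical RAID-1 policy, reachable through the container's IP:
    share = FileShare("demo", VdfsVolume("RAID-1", [VsanObject("obj-a"), VsanObject("obj-b")]))
    fsvm = FSVM(container=ProtocolStackContainer("10.0.0.214", [share]))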

So in other words, every FS VM communicates with the kernel using the vSockets library through the VMCI device. I am not going to explain what vSockets is, as the previous link refers to a lengthy document on this topic. The VDFS layer leverages vSAN and SPBM for the lifecycle management of the objects that form a file share. So what is this VDFS layer then? Well, VDFS is the layer that exposes a (distributed) file system that resides within the vSAN object(s) and allows the protocol stack container to share it as NFS v3 or v4.1. As mentioned, the objects are presented as a single VDFS volume.

So even though vSAN File Services uses a VM to ultimately allow a client to connect to a share, the important part here is that the VM is only used for the protocol stack container. All of the distributed file system logic lives within the vSphere layer. I hope that helps to explain the architecture a bit and how the layers communicate. I also recorded a quick demo, including the diagram above with the explanation of the layers, that shows how a protocol stack container is moved from one FS VM to another when a host goes into maintenance mode. This allows NFS clients to stay connected to the same IP address for their file shares with NFS v3; for NFS v4.1 we also provide the ability to connect to a primary IP address and load balance automatically.
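A trivial sketch of what that means for a client, with made-up addresses: an NFS v3 client keeps mounting the IP that belongs to its share's protocol stack container (and that IP travels with the container during maintenance mode), while an NFS v4.1 client can simply point at the primary IP.

    def mount_target(nfs_version: str, share_ip: str, primary_ip: str) -> str:
        """Per the post: v3 clients stay on the share's own container IP,
        which moves along with the container; v4.1 clients may use the
        primary IP and let vSAN File Services load balance."""
        return primary_ip if nfs_version == "4.1" else share_ip

    print(mount_target("3", "10.0.0.215", "10.0.0.210"))    # -> 10.0.0.215
    print(mount_target("4.1", "10.0.0.215", "10.0.0.210"))  # -> 10.0.0.210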


vSAN File Services considerations

Duncan Epping · Apr 15, 2020 ·

I was looking into vSAN File Services this week as I had some customers asking about requirements and constraints. I wanted to list some of the things to understand about vSAN File Services, as they are important when you are designing and configuring it. First of all, it is good to have an understanding of the implementation, at least somewhat, as vSAN File Services is managed/upgraded/updated as part of vSAN. It is not an entity you as an admin manage; you don't manage the appliances you see deployed. I created a quick demo about vSAN File Services, which you can find here.

If you look at the diagram above (borrowed from the VMware documentation), you can see that vSAN File Services leverages agent/appliance VMs, and within each agent VM a container, or "protocol stack", is running. The protocol stack exposes the file system as an NFS file share.

Creating a vSAN 7.0 Stretched Cluster

Duncan Epping · Mar 31, 2020 ·

A while ago I wrote this article and created this demo showing the creation of a vSAN Stretched Cluster. I bumped into the article/video today while looking for something and figured it was time to recreate the vSAN Stretched Cluster demo. I recorded it today, using vSphere / vSAN 7, and just wanted to share it with you here. Hope you enjoy it!

Introducing vSAN File Services as part of vSAN 7.0

Duncan Epping · Mar 17, 2020 ·

There was a great article by Cormac about vSAN File Services published last week. I had some articles planned but hadn't started yet, so when I saw the article I figured I would do something different. No point in rehashing what Cormac has already shared, right? I figured I would shoot another demo and do a brief write-up so people know what vSAN File Services is all about. vSAN/vSphere 7.0 introduces a new feature: vSAN File Services. vSAN File Services can simply be enabled at the cluster level and provides you with NFS 3 and 4.1 capabilities. The great thing about the solution is that you can create file shares and associate policies with them. In other words, you can have a file share with RAID-1, RAID-5, or maybe even striping, or stretched across locations when that is supported.

When you enable File Services, vSAN deploys a number of "agent VMs", and these VMs/appliances are fully managed by vSAN. These agent VMs run Photon OS and have the file service capabilities enabled through docker/container technology. After these File Services agent VMs have been deployed and the docker container instances have been instantiated and configured, vSAN File Services will be up and running and available for use. Next, you can simply create a file share and start consuming it. But before I reveal everything, let's just head over to the demo below. I hope you enjoy it!
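To make that flow concrete, here is a hedged Python sketch of the steps just described. None of these function names correspond to a real vSAN API, and the hosts and IPs are invented; enabling File Services is done through the vSphere Client, and the deployment itself is handled by vSAN.

    from typing import Dict, List, Tuple

    def deploy_agent_vm(host: str) -> str:
        # Stand-in for vSAN deploying a Photon OS based agent VM on a host.
        return f"fsvm-{host}"

    def start_container(ip: str) -> Dict:
        # Stand-in for instantiating the docker-based protocol stack container.
        return {"ip": ip, "shares": []}

    def enable_file_services(hosts: List[str], ips: List[str]) -> Tuple[List[str], List[Dict]]:
        """Sketch of what enabling File Services triggers: an agent VM per
        host, then up to 8 protocol stack containers using the IPs supplied
        during configuration."""
        fsvms = [deploy_agent_vm(h) for h in hosts]
        containers = [start_container(ip) for ip in ips[:8]]
        return fsvms, containers

    def create_file_share(container: Dict, name: str, policy: str) -> Dict:
        """Each share maps 1:1 to a VDFS volume whose vSAN object layout is
        driven by the assigned storage policy (RAID-1, RAID-5, ...)."""
        share = {"name": name, "policy": policy}
        container["shares"].append(share)
        return share

    # Hypothetical hosts and IPs, purely for illustration:
    fsvms, containers = enable_file_services(["esxi-1", "esxi-2", "esxi-3"],
                                             ["10.0.0.211", "10.0.0.212", "10.0.0.213"])
    create_file_share(containers[0], "projects", policy="RAID-5")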


