
Yellow Bricks

by Duncan Epping


file services

Inspecting vSAN File Services share objects

Duncan Epping · Apr 28, 2020 ·

Today I was looking at vSAN File Services a bit more, and I had some challenges figuring out the details of the objects associated with a File Share. Somehow I had never noticed this, but fortunately Cormac pointed it out. In the Virtual Objects section of the UI you have the ability to filter, and it now includes the option to filter for objects associated with File Shares and with Persistent Volumes for containers as well. If you click on the different categories in the top right you will only see those specific objects, which is what the screenshot below shows.

Something really simple, but useful to know. I created a quick YouTube video going over it for those who prefer to see it “in action”. Note that at the end of the demo I also show how you can inspect the object using RVC. Although it is not a tool I would recommend for most users, it is interesting to see that RVC does identify the object as “VDFS”.

vSAN File Services: Seeing an imbalance between protocol stack containers and FS VMs

Duncan Epping · Apr 22, 2020 ·

Last week I had a question around vSAN File Services and an imbalance between protocol stack containers and FS VMs. I had personally witnessed the same thing and wasn’t even sure what to expect. So what does this even mean, an imbalance? Well, as I have already explained, every host in a vSAN Cluster which has vSAN File Services enabled will have a File Services VM. Within these VMs you will have protocol stack containers running, up to a maximum of 8 protocol stack containers per cluster. Just look at the diagram below.

Now this means that if you have 8 hosts or fewer in your cluster, by default every FS VM in your cluster will have a protocol stack container running. But what happens when a host goes into maintenance mode? When that happens the protocol stack container moves to a different FS VM, so you end up in a situation where you have 2 (or even more) protocol stack containers running within 1 FS VM. That is the imbalance I just mentioned: more than 1 protocol stack container per FS VM, while you have an FS VM with 0 protocol stack containers. If you look at the screenshot below, I have 6 protocol stack containers, but as you can see the bottom two are on the same ESXi host, and there’s no protocol stack container on host “dell-f”.

How do you even this out? Well, it is simple, but it takes some time. vSAN File Services will look at the distribution of protocol stack containers every 30 minutes. Do note that it takes the number of file shares associated with the protocol stack containers into consideration. If you have 0 file shares associated with a protocol stack container, then vSAN isn’t going to bother balancing it, as there’s no point at that stage. However, if you have a number of shares and each protocol stack container owns one or more shares, then balancing will happen automatically, which is what I witnessed in my lab. Within 30 minutes I saw the situation change as shown in the screenshot below: a nicely balanced file services environment! (The protocol stack container ending in .215 moved to host “dell-f”.)
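To make that balancing behavior a bit more concrete, below is a minimal Python sketch of how such a pass could work. This is purely my own illustration of the behavior described above, not the actual vSAN File Services code, and the container IPs and host names are made up.

    from collections import defaultdict

    def rebalance(containers, fs_vms):
        """containers: dict of container IP -> (current FS VM, number of file shares)
           fs_vms:     list of FS VM names (one per host with File Services enabled)"""
        # Containers without any file shares are not worth moving
        active = [ip for ip, (vm, shares) in containers.items() if shares > 0]

        # FS VMs that currently run at least one active container
        occupied = {containers[ip][0] for ip in active}
        idle_vms = [vm for vm in fs_vms if vm not in occupied]

        # Group active containers per FS VM and move the surplus to idle FS VMs
        per_vm = defaultdict(list)
        for ip in active:
            per_vm[containers[ip][0]].append(ip)

        moves = []
        for vm, ips in per_vm.items():
            while len(ips) > 1 and idle_vms:
                moves.append((ips.pop(), vm, idle_vms.pop(0)))
        return moves

    # Example: two active containers ended up on "dell-e" after maintenance mode,
    # while "dell-f" runs none -> the balancer moves the ".215" container over.
    containers = {"10.0.0.214": ("dell-e", 2), "10.0.0.215": ("dell-e", 1)}
    print(rebalance(containers, ["dell-e", "dell-f"]))
    # -> [('10.0.0.215', 'dell-e', 'dell-f')]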

vSAN File Services and the different communication layers

Duncan Epping · Apr 21, 2020 ·

I received a bunch of questions based on my vSAN File Services posts of the past couple of days. Most questions were around how the different layers talk to each other, and where vSAN comes into play in this platform. I understand why; I haven’t discussed this aspect yet, but that is primarily because I wasn’t sure what I could/should talk about. Let’s start with a description of how communication works, top to bottom; a small conceptual sketch follows the list.

  • The NFS Client connects to the vSAN File Services NFS Server
  • The NFS Server runs within the protocol stack container, the IPs provided during the configuration are assigned to the protocol stack container
  • The protocol stack container runs within FS VM, the FS VM has no IP address assigned
  • The FS VM has a VMCI device (vSocket interface), which is used to communicate with the ESXi host securely
  • The ESXi host has VDFS kernel modules
  • VDFS communicates with the vSAN layer and SPBM
  • vSAN is responsible for the lifecycle management of objects
  • A file share has a 1:1 relationship with a VDFS volume and is formed out of vSAN objects
  • Each file share / VDFS volume has a policy assigned, and the layout of the vSAN objects is determined by this policy
  • Objects are formatted with the VDFS file system and presented as a single VDFS volume
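To summarize the last few bullets, here is a minimal Python sketch of the relationships between a file share, its VDFS volume, the assigned policy, and the backing vSAN objects. This is just my own conceptual model of the description above, not VMware code; the share name, policy name, and UUIDs are made up.

    # Conceptual sketch of the 1:1 relationship between a file share and a VDFS
    # volume, which is formed out of vSAN objects whose layout follows the policy.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class VsanObject:
        uuid: str            # vSAN is responsible for the lifecycle of these objects
        policy: str          # layout is determined by the assigned SPBM policy

    @dataclass
    class VdfsVolume:
        name: str
        policy: str
        objects: List[VsanObject] = field(default_factory=list)  # formatted with VDFS, presented as one volume

    @dataclass
    class FileShare:
        name: str
        protocol: str        # "NFSv3" or "NFSv4.1", served by a protocol stack container
        volume: VdfsVolume   # a file share has a 1:1 relationship with a VDFS volume

    # Example: one share backed by two vSAN objects, all using the same policy
    share = FileShare(
        name="iso-repo",
        protocol="NFSv4.1",
        volume=VdfsVolume(
            name="iso-repo",
            policy="vSAN Default Storage Policy",
            objects=[VsanObject("uuid-1", "vSAN Default Storage Policy"),
                     VsanObject("uuid-2", "vSAN Default Storage Policy")],
        ),
    )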

I guess a visual may help clarify things a bit, as for me it also took a while to wrap my head around this. Look at the diagram below.

So in other words, every FS VM allows for communication to the kernel using the vSockets library through the VMCI device. I am not going to explain what vSocket is as the previous link refers to a lengthy document on this topic. The VDFS layer leverages vSAN and SPBM for the lifecycle management of the objects that form a file share. So what is this VDFS layer then? Well VDFS is the layer that exposes a (distributed) file system that resides within the vSAN object(s) and allows the protocol stack container to share it as NFS v3 or v4.1. As mentioned, the objects are presented as a single VDFS volume.

So even though vSAN File Services uses a VM to ultimately allow a client to connect to a share, the important part here is that the VM is only used for the protocol stack container. All of the distributed file system logic lives within the vSphere layer. I hope that helps to explain the architecture a bit and how the layers communicate. I also recorded a quick demo, including the diagram above with the explanation of the layers, that shows how a protocol stack container is moved from one FS VM to another when a host goes into maintenance mode. This allows NFS clients to stay connected to the same IP address for their file shares with NFS v3; for NFS v4.1 we also provide the ability to connect to a primary IP address and load balance automatically.

 

Enabling vSAN File Services in a vSAN cluster larger than 8 hosts

Duncan Epping · Apr 20, 2020 ·

I noticed something over the weekend, and I want to make sure customers do not run into this problem. If you have more than 8 hosts in your vSAN Cluster and enable vSAN File Services, the H5 client will ask you for more than 8 IP addresses. These IP addresses are used by the protocol stack containers. However, as described in this post, vSAN File Services will only ever instantiate 8 protocol stack containers in the current release. So do not provide more than 8 IPs. I tried it, and I ran into a scenario where vSAN File Services was not configured completely and properly as a result. You can simply click the “x” as pointed out in the screenshot below to remove the extra IP address entry line(s) to work around this issue. Hopefully it will be fixed soon in the UI.
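As a rule of thumb (my own sketch, not a VMware tool), the number of IP addresses worth entering is capped by the 8-protocol-stack-container maximum, regardless of how many entry lines the wizard shows:

    def ip_addresses_needed(hosts_in_cluster: int, max_containers: int = 8) -> int:
        # vSAN File Services only ever instantiates up to 8 protocol stack
        # containers per cluster in the current release, one per FS VM at most
        return min(hosts_in_cluster, max_containers)

    print(ip_addresses_needed(12))  # -> 8, even though the UI asks for 12 IPs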

vSAN FS: Existing domain information has been pre-populated below

Duncan Epping · Apr 16, 2020 ·

I have been playing with vSAN File Services a lot over the past couple of weeks, configuring and re-configuring it a few times. At some point I found myself in the situation where, when I wanted to enable vSAN File Services and provide new IP details, I received the following error: “Existing domain information has been pre-populated below”, as shown in the screenshot below.

Why did this happen? Well, the configuration details are stored in the objects that form the file shares. I disabled vSAN File Services while I still had file shares running. This results in the scenario where, when you enable vSAN File Services again, it detects the file share objects, reads the configuration details, and assumes that you will want to configure it with the same Domain/Network details so that you can access the existing shares. But what if you don’t? What if you want a brand new shiny environment? Well, that is also possible, and you can do it as follows:

  • Enable vSAN File Services with existing domain information
  • When configured, go to File Service Shares and delete all existing file shares
  • When all are deleted, disable vSAN File Services
  • When all tasks are complete, enable vSAN File Services again
  • Enter new Domain and Networking details

Pretty simple right?

