I was looking into vSAN File Services this week as I had some customers asking about requirements and constraints. I wanted to list some of the things to understand about vSAN File Services, as they are important when you are designing and configuring it. First of all, it is good to have at least some understanding of the implementation: vSAN File Services is managed, upgraded, and updated as part of vSAN. It is not a separate entity; as an admin you do not manage the appliances you see deployed. I created a quick demo about vSAN File Services which you can find here.
If you look at the diagram (borrowed from docs.vmware.com) above you can see that vSAN File Service leverages Agent/Appliance VMs and within each Agent VM a container, or “protocol stack”, is running. The protocol stack is what exposes the file system as an NFS file share. That has a few implications, and I want to make sure that people understand those before they start with vSAN File Services. Let’s list the requirements, constraints, and some of the things to know so they are obvious.
- NFS v3 and NFS v4.1 are both supported for 7.0
- Kerberos authentication is supported with NFS for 7.0 U1
- SMB v2.1 and v3 are supported for 7.0 U1
- Active Directory authentication is supported with SMB for 7.0 U1
- Use of spaces in OU names is not supported at this moment
- A minimum of 3 hosts within a cluster
- A maximum of 64 hosts within a cluster
- Supported on 2-node starting vSAN 7.0 U2
- Supported on a stretched cluster starting vSAN 7.0 U2
- Data-in-transit encryption is supported starting vSAN 7.0 U2
- Unmap is supported starting vSAN 7.0 U2
- Access Based Enumeration is supported starting with vSAN 7.0 U3
- Not supported today on a cluster with “compute only” nodes
- It is supported to mount the NFS share from your ESXi host, but you are not allowed to run VMs on it!
- Maximum of 64 active FS containers/protocol stacks are provisioned when using vSAN 7.0 U2 and up
- With 7.0 U1 it was 32 active FS containers at most
- With 7.0 it was 8 active FS containers at most
- Maximum number of shares per cluster is 100 starting vSAN 7.0 U2
- Maximum size of the file share is equal to the maximum available capacity of the vSAN cluster
- FS VMs are provisioned with 4 vCPUs and 8GB of memory
- FS VMs are provisioned by vSphere ESX Agent Manager
- You will have one FS VM per host, for up to 32 hosts, with 7.0 U1
- You will have one FS VM per host, for up to 64 hosts, with 7.0 U2
- FS VMs are tied to a specific host from a compute and storage perspective, and compute and storage are of course aligned on the same host!
- FS VMs are not integrated with vSAN Fault Domains
- FS VMs are powered off and deleted when the host goes into maintenance mode (with 7.0 U1 the deletion no longer happens!)
- FS VMs are provisioned and powered on when the host exits maintenance mode
- The IP addresses assigned to file services need to be on the same L2 segment
- On a standard and distributed (v)Switch, the following settings are enabled on the port group automatically: Forged Transmits, Promiscuous Mode
- For NSX-T you will only need to enable MAC Learning on the Segment Profile
- vSAN automatically downloads the OVF for the appliance; if vCenter Server cannot connect to the internet, you can manually download it
- The OVF is stored on the vCenter Server Appliance here, if you ever want to delete it: /storage/updatemgr/vsan/fileService/
- The FS VM has its own policy (FSVM_Profile_DO_NOT_MODIFY), which should not be modified!
- The appliance is not protected across hosts, it is RAID-0 as resiliency is handled by the container layer!
- Can I increase the memory size or the number of vCPUs of the FS VM?
Please contact VMware Global Support Services for details on how to do this.
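As mentioned in the list above, you are allowed to mount a vSAN file share from an ESXi host (just not to run VMs on it). A minimal sketch from the ESXi shell, where the IP address, share path, and volume names are hypothetical placeholders for your own environment:

```shell
# Mount a vSAN file share on an ESXi host over NFS v3
# (hypothetical file service IP and share path)
esxcli storage nfs add --host=192.168.1.50 --share=/vsanfs/myshare --volume-name=myshare-v3

# Alternatively, mount it over NFS v4.1, which vSAN File Services also supports
esxcli storage nfs41 add --hosts=192.168.1.50 --share=/vsanfs/myshare --volume-name=myshare-v41

# Verify the mounts
esxcli storage nfs list
esxcli storage nfs41 list
```

Again, this is only supported for mounting the share as an NFS datastore for non-VM data; running VMs on it is not allowed.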
I would highly recommend creating a dedicated port group for vSAN File Services! Why? Because Forged Transmits and Promiscuous Mode (or MAC Learning) are enabled automatically during configuration on the port group you selected for the vSAN File Services deployment. You may ask why this needs to be enabled: basically because a MAC address and IP address are assigned to the container within the FS VM. This allows for resiliency at the container layer, but it means that from a networking perspective the environment needs to be aware of it.
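If you want to see exactly what was changed on the port group, you can inspect its security policy from the ESXi shell. A sketch for a standard vSwitch, assuming a hypothetical dedicated port group named "vSAN-FS" on vSwitch0:

```shell
# Create a dedicated port group for vSAN File Services on a standard vSwitch
# (hypothetical names: vSwitch0, vSAN-FS)
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name="vSAN-FS"

# Inspect the security policy of that port group; after File Services is
# configured against it, AllowPromiscuous and ForgedTransmits should be true
esxcli network vswitch standard portgroup policy security get --portgroup-name="vSAN-FS"
```

For a distributed switch you would check the equivalent port group security settings in the vSphere Client or via PowerCLI instead, as esxcli only manages standard vSwitches.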
I hope the above details will help folks when deploying vSAN File Services in their environment. Remember, some of the limitations and constructs will definitely change with upcoming releases!
This sounds like vSAN File Services is not vSAN at all. It is a new solution using local vSAN disks.
It is vSAN. It is not using local disks for the File Shares.
If the (max 8) hosts go into maintenance mode, you lose FS, right?
You cannot use a policy to control vSAN (do not modify).
You are using local disks (tied storage, aligned) across the vSAN layer, but you are not using vSAN disk management or the resiliency features?
You are looking at the FS VM, which is a host for the container. The FS VM has no storage for the File Shares attached to it.
Each file share has a policy associated with it, and as such each file share is resilient, etc.
Cool, I was confused by the “aligned” comment. Thanks!
No problem, I realized it is a bit confusing indeed.
Hi Duncan
Re: “On a Distributed Switch the following settings are enabled on the port group automatically: Forged Transmits, MAC Learning”
I just checked in my lab and I do not see MAC learning enabled on a distributed port group. Only promisc mode and forged transmits.
PS C:\Users\Administrator\Desktop> Get-MacLearn -DVPortgroupName @(“management”)
DVPortgroup : management
MacLearning : False
NewAllowPromiscuous : True
NewForgedTransmits : True
NewMacChanges : False
Limit :
LimitPolicy :
LegacyAllowPromiscuous : True
LegacyForgedTransmits : True
LegacyMacChanges : False
Thanks for sharing!
Hi Duncan,
I have configured File Services in my lab environment and ran into some problems!
We have 172.20.x.x as our internal network; can this be an issue?
I cannot talk to my vSAN file share nodes other than from the same class C network!
Is there a conflict between my internal network and the File Services node container network?
Thanks!
Is it really a “vSAN 7 only” feature? I’m a bit confused: I have vCenter 7, but the ESXi hosts are 6.7 (so vSAN should also be 6.7, right?). Nevertheless, the hosts had the feature “vSAN File Services” listed when they were installed and vSAN was still running on an eval license…
Adding the vSAN 6 license key failed, until I found out I needed to convert the key to vSAN 7. Now the license key is accepted, and File Services is offered to be configured, but it failed in the end with the error “Cannot enable vSAN file service because compatible OVF is not found”.
So is the error in vCenter, which offers to configure it even though the hosts cannot do it? Or do I have “vSAN 7” because I have vCenter 7 and vSAN 7 license keys (which is the only way to assign them), and should it work?
Yes, vSAN File Services is vSAN 7.0 and up only.
Is the “maximum of 32 active FS containers/protocol stacks are provisioned” limit per cluster or per vCenter Server?
This is a limit per cluster!
Hello Duncan
I want to ask about Stretched Clusters with File Services.
I read this section. https://core.vmware.com/resource/vsan-frequently-asked-questions#section6
With vSAN 7 and vSAN 7 Update 1, vSAN File Service is not supported on stretched clusters and 2 node deployments.
Is a 4-node deployment as a stretched cluster supported? 2 nodes at one site, 2 nodes at the other site.
Or is a stretched cluster with File Services not supported regardless?
The following two principles apply:
vSAN File Services is not supported with vSAN Stretched clusters.
vSAN File Services is not supported with vSAN 2 node configuration.
Thank you Duncan. Could we use Fault Domains with File Services? 3 nodes at one site, 3 at the other?
Not sure what you mean; vSAN File Services is not supported with Stretched Clusters right now.
Sorry for the misunderstanding. Could we use Fault Domains in File Services enabled clusters?
Not a stretched cluster. We plan to deploy 3 nodes in one rack and 3 nodes in another rack, so that if one rack fails, the other rack continues operation.
With 7.0 U1 we create a maximum of 32 vSAN FS VMs with FS containers. So yes, this should work just fine, especially when you stay below 32 hosts per cluster.
Hi Duncan,
is a distributed virtual switch mandatory, or can a standard one be used?
you can also use a standard switch
Thank you
Hello Duncan
Hope you are doing fine. We are working on a design where vCenter/ESXi are deployed on a VLAN network and the vSAN FS VMs on an overlay segment. Between the VLAN and the overlay segment we have a firewall. I could not find which ports should be opened between vCenter/ESXi and the vSAN FS VMs so that communication between them works uninterrupted. It would be helpful if you could suggest something on this.