I felt I needed to get this out there, as it is not something many seem to be aware of. More and more people are starting to use storage solutions that offer one large shared datastore; examples are solutions like Virtual SAN, Tintri and Nutanix. I have seen various folks claim "an unlimited number of VMs per datastore", but of course there are limits to everything! If you are planning to build a big cluster (HA enabled), keep in mind that per cluster the limit is 2048 powered-on virtual machines per datastore! Say what? Yes, that is right: per cluster you are limited to 2048 powered-on VMs on a single datastore. This is documented in the Configuration Maximums guide of both vSphere 5.5 and vSphere 5.1. Please note it says datastore, not VMFS or NFS explicitly; this applies to both!
The reason for this today is the vSphere HA power-on list. I described that list in this article; in short, it keeps track of the power state of your virtual machines. If you need more than 2048 VMs in your cluster, you will need to create multiple datastores for now. (More details in the blog post.) Do note that this is a known limitation, and I have been told that the engineering team is researching a solution to this problem. Hopefully it will land in one of the upcoming releases.
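The sizing math above is simple but worth making explicit. A minimal sketch, assuming the 2048 powered-on-VMs-per-datastore limit from the Configuration Maximums guide (the function name and structure are mine, purely for illustration):

```python
import math

# Powered-on VMs per datastore in an HA-enabled cluster
# (per the vSphere 5.1/5.5 Configuration Maximums guide).
HA_POWERED_ON_LIMIT = 2048

def min_datastores(powered_on_vms: int, limit: int = HA_POWERED_ON_LIMIT) -> int:
    """Minimum number of datastores needed to stay within the
    per-datastore powered-on VM limit for an HA cluster."""
    if powered_on_vms <= 0:
        return 0
    return math.ceil(powered_on_vms / limit)

# Examples: 2048 VMs still fit on one datastore, 2049 need two,
# and a 5000-VM cluster would need at least three.
print(min_datastores(2048))  # 1
print(min_datastores(2049))  # 2
print(min_datastores(5000))  # 3
```

This is only the HA constraint, of course; in practice you would also size datastores for capacity and performance, which usually forces you well below this ceiling anyway.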
Mimmo says
I’m still tied to the old-but-good rule of thumb: no more than 16 VMs for datastore 🙂
dgfallon says
If HA is disabled on the cluster, does the limit still apply? Thinking about VDI environments, where HA isn't always used.
Duncan Epping says
No, as per the title: "vSphere HA and VMs per Datastore limit". So only when vSphere HA is enabled are you limited to 2048.
Gica Livada says
In both Max Config Guides we have:
– Powered-on virtual machines per VMFS volume: 2048
– Powered-on virtual machine config files per datastore in an HA cluster: 2048
IMHO, with HA disabled, we still have the limitation for VMFS (but not for NFS).