If I asked you what the maximum number of VMs per host is for vSphere, what would your answer be?
My bet is that your answer would be 320 VMs. This is, of course, based on the “virtual machines per host” number shown on page 5 of the Configuration Maximums for vSphere.
But is this actually the correct answer? No, it’s not. The correct answer is: it depends. Yes… it depends on whether you are using HA or not. The following restrictions apply to an HA cluster (page 7):
- Max 32 Hosts per HA Cluster.
- Max 1280 VMs per Cluster.
- Max 100 VMs per Host.
- If the number of Hosts exceeds 8 in a cluster, the limit of VMs per host is 40.
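To see how those numbers interact, here is a minimal Python sketch of the effective ceilings (the constants come straight from the list above; the function names are my own illustration, not any VMware API):

```python
# Minimal sketch of the vSphere 4.0 HA maximums listed above.
# Function names are illustrative only, not part of a VMware API.

MAX_HOSTS_PER_HA_CLUSTER = 32
MAX_VMS_PER_HA_CLUSTER = 1280

def ha_max_vms_per_host(num_hosts: int) -> int:
    """Per-host VM ceiling in an HA cluster of the given size."""
    if not 1 <= num_hosts <= MAX_HOSTS_PER_HA_CLUSTER:
        raise ValueError("an HA cluster supports 1 to 32 hosts")
    return 100 if num_hosts <= 8 else 40

def ha_max_vms_per_cluster(num_hosts: int) -> int:
    """Cluster-wide VM ceiling, capped at 1280 VMs."""
    return min(num_hosts * ha_max_vms_per_host(num_hosts), MAX_VMS_PER_HA_CLUSTER)

print(ha_max_vms_per_cluster(8))   # 800  (8 x 100)
print(ha_max_vms_per_cluster(9))   # 360  (9 x 40)
print(ha_max_vms_per_cluster(32))  # 1280 (32 x 40, exactly the cluster cap)
```

Note how adding a ninth host actually lowers the cluster-wide ceiling from 800 to 360 VMs.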
These are serious restrictions that need to be taken into account when designing a virtual environment. They touch literally everything, from your cluster size down to the hardware you’ve selected. I know these configuration maximums get revised with every update, but it is most definitely something one needs to consider and discuss with the customer…
Just wondering what your thoughts are,
Andrew Storrs says
Figured I’d repost my tweets here so you could respond to non-Twitter users too :)
astorrs: @depping one thing you didn’t mention… the “why”. Why are we limited to 40 per host in 8+ node clusters? Makes the UCS VM #’s useless, etc
I understand the cluster=host no more than 320 thing, I’m questioning the technical reason why this exists.
Jason Boche says
Limit of 40 VMs per host isn’t very appealing considering the horsepower today’s commodity hardware has under the hood. With VI3 I’ve heard of scaling issues in clusters beyond 12 nodes. Lowering the number to 8 so that we can achieve higher consolidation ratios is kind of a bummer and could really drive smaller density host hardware. I guess much of what this boils down to is going to depend on the architectural and implementation philosophies of each company.
Rawlinson says
Good info Duncan, this is the kind of stuff that is useful to be aware of, since customers always ask about this, especially on VMware design topics.
superted says
@Jason we have run a 17-host cluster on VI3 for over a year now without any issues at all on IBM blades.
I really don’t understand why the 40 limit exists though. Is it due to limitations in the HA protocol VMware implemented?
It won’t affect me directly right now, but it might have a bearing on future implementations.
Ed
Arnim van Lieshout says
I agree with Andrew that the reasons for these limits are unclear.
However, going beyond 40 VMs per host raises other problems too.
What about risk and impact?
When a host goes down it takes the running VMs with it (leaving FT out of the picture here). So the question I always ask is: what is the maximum number of VMs that may go down at the same time? If that many servers go down, do we still meet the SLA requirements?
Even with HA, the VMs are down for a couple of minutes!
So IMHO there is no single answer to Duncan’s question, and it all comes down to a combination of technical and business limits/requirements. Therefore the answer will be different for each company.
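To make Arnim’s point concrete, here is a small sketch (my own illustration, with made-up numbers) that derives host density from the acceptable failure impact rather than from the configuration maximums:

```python
import math

# Size from the business side: a host failure takes down everything
# running on it, so per-host density must not exceed the number of
# VMs the SLA allows to be down at the same time.

def min_hosts_needed(total_vms: int, max_vms_down_per_failure: int,
                     spare_hosts: int = 1) -> int:
    """Hosts required to run total_vms at an SLA-driven density,
    plus N+1 spare capacity for failover."""
    return math.ceil(total_vms / max_vms_down_per_failure) + spare_hosts

# Example: the SLA tolerates at most 25 VMs down per host failure,
# and there are 300 VMs to place.
print(min_hosts_needed(300, 25))  # 13 hosts (12 to run the VMs + 1 spare)
```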
superted says
@ Arnim
Very true. It all depends on availability etc.
As we use ours in a production environment, we have tried to strike a balance between the number of VMs per host and the number of hosts.
cskow says
I absolutely agree on the scaling point above, but has anyone else noticed that while little blades and 2U servers are reaching fairly high density in terms of cores, the more DDR3 you put in a server these days, the slower the memory gets? This is not VMware-friendly.
As it stands, I personally probably wouldn’t want to run more than about 45 VMs on a machine except in the event of a failure, just because that’s a large part of your business that could get restarted if there’s a failure. Too much density on a server can be as much of a problem as too little.
aarondelp says
Hey all – I actually have some insight into this one. The reason for the 40 VM max over 8 hosts is probably a holdover from the 3.5 days. We have a customer that ran into problems running 60+ VMs on a host in a cluster. VMotion and HA would quit working. It was a bug confirmed by VMware in 3.5 and was going to be corrected in vSphere. VMware also stated the limit would be raised to 100 VMs per host. I don’t know if that limit is still in place for 3.5 clusters today.
It appears that the correction to the HA code doesn’t scale past 8 hosts for some reason though.
I don’t know the why behind it, I just know the limit is there.
Troy Clavell says
I would believe this is based around HA. I still think VMware is trying to get their hands wrapped around HA and just what the proper number of guests per host is to ensure HA is as functional as possible.
Dave Convery says
Duncan –
Great points. But so far, I have not seen anyone putting more than 40 guests per host anyway. Obviously, that is using ESX 3.5, but there always seem to be “other” deciding factors as well, like backup strategies, replication, storage limitations, general policy, etc.
AC says
Well, we are well over 40 VMs per host in our VDI environment and approaching that on our server environment as well (big DL580s). Factor in n+1 for each cluster and things just got a lot more expensive since we have a handful of 10-node clusters already. I reviewed the 3.5 Configuration Maximums document and can’t find any mention of this sort of limit. Obviously the 4.0 Config Maximums document does mention it. So my questions are:
1) Does this limit actually exist in 3.5 (sounds like it is a hypothetical soft limit according to aarondelp’s comment)?
2) Is this a hard limit in ESX 4?
Between buying lots of new hardware to replace all the DL580 G3s (about ready for retirement anyway, granted) and G4s we have and likely having to buy different hardware (more, smaller clusters with n+1 capacity), vSphere is getting mighty pricey real fast.
Duncan Epping says
1) it’s not mentioned in the max config guide in 3.5.
2) it’s not a hard limit
Anton Zhbankov says
Duncan, thanks a lot for pointing at this paragraph.
But I don’t see any workloads except VDI that require a consolidation ratio of more than 40:1.
Duncan says
it’s not a requirement… but with Nehalem it is something you will run into. These hosts can easily carry 40+ VMs, which might lead to issues.
JoshBryan says
So if we are running 8 hosts with 100 VMs per host and we lose 4 hosts, will HA properly boot the VMs onto the remaining 4 hosts if the resources are available? Or will this soft/hard limit prevent HA from functioning if we are hitting the ceiling before failures?
Duncan Epping says
Yes, according to the max config guide that’s a fully tested and certified situation and should work fine.
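As back-of-the-envelope arithmetic, Josh’s scenario works out as follows (an illustrative sketch only; real HA admission control also factors in reservations and slot sizes):

```python
# Post-failover VM density in Josh's scenario (illustrative arithmetic).

def vms_per_surviving_host(hosts: int, vms_per_host: int, failed_hosts: int) -> float:
    """Average number of VMs each surviving host carries after failover."""
    survivors = hosts - failed_hosts
    if survivors <= 0:
        raise ValueError("no hosts left to restart the VMs on")
    return (hosts * vms_per_host) / survivors

# 8 hosts at 100 VMs each, with 4 of them failing:
print(vms_per_surviving_host(8, 100, 4))  # 200.0 VMs per surviving host
```

So each survivor would carry roughly 200 VMs, provided the resources are there.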
anonymous says
Regardless of VDI…
HA limit of VMs: the actual sentence is “Configurations exceeding 40 virtual machines per host are limited to cluster size no greater than 8 nodes.”
This means that if you have a cluster with 9 nodes or more, you can only have 40 VMs per node.
But if you have 1 to 8 nodes, that restriction doesn’t apply; the per-host limit there is 100 VMs: “Virtual machines per host in HA cluster: 100”.
- 1 to 8 nodes: HA limit of 100 VMs per host (800 VMs max)
- 9 to 32 nodes: HA limit of 40 VMs per host (1280 VMs max)
Also, when you think about VMware View, don’t forget the View servers (Connection Broker, Security Server, Replicas); they can and should reside within the virtual environment (these are the only virtual servers included in the licensing), so you should count those as well when calculating the limits.
However, there are other limits imposed by whatever features you are using… vCPUs, vRAM, DRS, DPM, Connection Broker, etc.
And obviously, remember to size your View pools with those limits in mind.
But in the end, be careful not only with the limits but also with the capacity of your hosts, network, and storage. Remember that concurrent operations might also be an issue if you haven’t dimensioned for them…
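As a quick worked example of the View sizing point above (a sketch with hypothetical numbers; the helper function is my own):

```python
# Desktops left after reserving HA budget for View infrastructure VMs
# (connection brokers, security servers, replicas). Illustrative only.

def desktops_available(hosts: int, infra_vms: int) -> int:
    """HA VM ceiling for the cluster minus the infrastructure VMs."""
    per_host = 100 if hosts <= 8 else 40
    return min(hosts * per_host, 1280) - infra_vms

# Example: a 10-node View cluster with 6 infrastructure VMs:
print(desktops_available(10, 6))  # 394 desktops (10 x 40 - 6)
```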
Glad to help!
Regards.
Cap.
Adrian says
Duncan
With the release of ESX 4.1, has this limit now been removed? The Configuration Maximums document for vSphere 4.1 no longer mentions a limit of 40 VMs for an HA cluster of 9 or more hosts.
I am not sure if the limit has been removed, or if it is a design limit that still exists and VMware no longer wants to highlight.
Regards
Adrian
Duncan Epping says
doesn’t exist anymore.
Sander says
Can anyone confirm that this limit is removed?
Duncan Epping says
confirmed.