One of the things that always keeps people busy is the list of “max config” numbers for a release. I figured I would dedicate a blog post to vSphere 5.5 platform scalability now that it has been released. A couple of things stand out, if you ask me, when it comes to platform scalability:
- 320 Logical CPUs per host
- 4TB of memory per host
- 16 NUMA nodes per host
- 4096 vCPUs maximum per host
- 40 GbE NIC support
- 62TB VMDK and Virtual RDM support
- 16Gb Fibre Channel end-to-end support
- VMFS heap increase: up to 64TB of open VMDK capacity per host
Some nice key improvements when it comes to platform scalability, right? I think so! Not that I expect to need a host with 4TB of memory and 320 pCPUs in the near future, but you never know, right? More details can be found in the vSphere 5.5 Platform what’s new whitepaper.
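As a quick illustration, the per-host maximums listed above can be turned into a small sanity check for a planned host build. This is a hypothetical helper I sketched for this post, not a VMware tool; the limit values come straight from the list above.

```python
# vSphere 5.5 per-host maximums as quoted in the post above.
VSPHERE_55_HOST_MAXIMUMS = {
    "logical_cpus": 320,
    "memory_tb": 4,
    "numa_nodes": 16,
    "vcpus": 4096,
}

def check_host_config(config):
    """Return (key, value, limit) tuples for every exceeded maximum."""
    violations = []
    for key, limit in VSPHERE_55_HOST_MAXIMUMS.items():
        value = config.get(key, 0)
        if value > limit:
            violations.append((key, value, limit))
    return violations

# Example: a 12TB box (like the Ivy Bridge-EX platform mentioned in the
# comments) fits the CPU limit but blows past the 4TB memory maximum.
print(check_host_config({"logical_cpus": 256, "memory_tb": 12}))
# -> [('memory_tb', 12, 4)]
```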
Lewis says
if you were using 320 pCPUs, you’d probably want a little more than 4TB of RAM in your box
Lewis says
or you’d want to learn how to load balance
Andrew Fidel says
I believe the 320 pCPU limit is a doubling of the previous 160 because Ivy Bridge-EX will support 16 cores * 8 sockets * 2 (Hyper-Threading) = 256 pCPUs, which exceeded the previous maximum. Of course that platform also supports 12TB of memory, so the 4TB is going to be a big limit there.
The interesting thing to me is the 16 NUMA nodes: is Unisys planning a glue-logic based 16-socket box (though they’ve announced their own hypervisor for their upcoming E7-based box)?
Finally, the lack of hot extend on >2TB VMDKs is a major miss.
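The logical CPU math in the comment above can be checked in a couple of lines. The core, socket, and thread counts are the ones Andrew quotes for an 8-socket Ivy Bridge-EX system:

```python
# Back-of-the-envelope check of the logical CPU count for an
# 8-socket Ivy Bridge-EX box (numbers from the comment above).
cores_per_socket = 16
sockets = 8
threads_per_core = 2  # Hyper-Threading

logical_cpus = cores_per_socket * sockets * threads_per_core
print(logical_cpus)  # 256, which exceeded the previous 160-pCPU maximum
```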
Ralf says
But still no FT with more than one vCPU.
Duncan says
Correct, no multi vCPU FT yet
Garret says
Is the maximum number of paths to storage for a host still 1024? With larger hosts you would think this would have increased.
Duncan says
It has not changed
Independent_forever says
All these new releases are always nice, but there are so many features we’ve found we’ll never use. Plus, I would like to see the limits in FT lifted, and the naming issue when migrating datastores corrected: we still have the issue where the VM files themselves show the old name. They fixed the folder piece, but the files still show the old names. That makes a new version kind of bittersweet when some existing annoyances and/or limitations still exist even after all this time. They keep increasing capabilities, but who uses VMs or hosts with this many CPUs or TBs of memory? It’s almost ridiculous. I still think VMware is king and will be for some time, but remove the limits and missing pieces from existing versions; it makes upgrades more meaningful.
Ralf says
Renaming of VM file names was fixed in 5.1 U1.
http://www.yellow-bricks.com/2013/01/25/storage-vmotion-does-not-rename-files/
jeff says
I too am curious whether the maximum paths per host has been increased from 1024. With 8 paths per datastore now (due to how our virtual SAN backend works), I am stuck with 128 LUNs.
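The LUN budget jeff describes is simple division: the per-host path maximum divided by the paths consumed per datastore. A minimal sketch, using the numbers from his comment:

```python
# LUN budget under the unchanged 1024-paths-per-host maximum,
# with 8 paths consumed per datastore (numbers from the comment above).
max_paths_per_host = 1024
paths_per_datastore = 8

max_luns = max_paths_per_host // paths_per_datastore
print(max_luns)  # 128
```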
Duncan says
It has not changed
bigiron says
Can someone confirm what the maximum vCPU count is for a single VM deployed with a vSphere 5.5 Standard license? It’s currently 8 vCPUs with 5.1 Standard; does this increase in vSphere 5.5? Thanks.
Andrew Fidel says
The vCPU entitlements have been removed from all editions. You can configure up to the maximum supported number of vCPUs in a VM in all editions.
http://up2v.nl/2013/08/26/what-is-new-in-vmware-vsphere-5-5/
bigiron says
Does that include Essentials? I know it mentions “All Editions”, but that might refer only to Standard, Enterprise, and Enterprise Plus.
Nico says
To avoid any confusion, 320 “pCPUs” per host means 320 “logical” processors.
Nico says
It’s a question and not a confirmation…
Duncan Epping says
Yes, you are correct, that should say “logical” instead of pCPUs indeed. Thanks for that.
Jason singh says
Does anyone know how many VMs per second can be booted? Are there known limits/recommendations? I realize the answer could be “it depends”. Any configuration guidance on maximum VM boot rates? Thanks.