A lot has changed with vSphere 5.0, and so has one of the most under-appreciated “features”: VMFS. VMFS has been substantially changed, and I wanted to list some of the major changes and express my appreciation for the great work the VMFS team has done! (A few quick ways to verify these on a host follow the list below.)
- VMFS-5 uses GPT instead of MBR
- VMFS-5 supports volumes up to 64TB
- This includes Pass-through RDMs!
- VMFS-5 uses a unified block size of 1MB
- VMFS-5 uses smaller Sub-Blocks
- ~30,000 8KB sub-blocks versus ~3,000 64KB sub-blocks with VMFS-3
- VMFS-5 has support for very small files (1KB)
- Non-disruptive upgrade from VMFS-3 to VMFS-5
- ATS locking enhancements (as part of VAAI)
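For those who want to see these changes on a host, here is a minimal read-only sketch from the ESXi shell; the datastore name and device identifier are placeholders, so adjust them for your environment:

```
# List mounted filesystems and their type (VMFS-3 vs VMFS-5)
esxcli storage filesystem list

# Show capacity and block size details for one datastore
# ("datastore1" is a placeholder name)
vmkfstools -Ph /vmfs/volumes/datastore1

# Check whether the backing device carries a GPT or MBR ("msdos") partition table
# (replace naa.xxxx with your device identifier)
partedUtil getptbl /vmfs/devices/disks/naa.xxxx
```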
Although some of these enhancements might seem “minor”, I beg to differ. These enhancements and new capabilities will reduce the number of volumes needed in your environment and will increase the VM-to-volume density, ultimately leading to less management! Yes, I can hear the skeptics thinking “do I really want to introduce such a large failure domain? My standard is a 500GB LUN.” Think about it for a second: although that standard might have been valid years ago, it probably isn’t today. The world has changed, recovery times have decreased, disks continue to grow, and locking mechanisms have been improved and can be offloaded through VAAI. Max 10 VMs on a volume? I don’t think so!
Craig says
This is all good, but the 2TB minus 512 bytes VMDK limit is still a downer.
Duncan Epping says
Don’t forget that RDMs can do 64TB… And we are working on the VMDK side as well; unfortunately I can’t say when that will be fixed.
Michael Audet says
Hello,
Does that mean if we configure a VM to use RDM disks, it can use one that is larger than the 2TB -512 limit?
Currently, a pointer VMDK is created that must stay under this limit even with RDMs… has this changed in vSphere 5?
If so, that is a good first step toward eliminating this 2TB limit, because we have a (physical) file server that would require up to 6TB of space for data, and this limit has kept us from converting it to a VM.
I understand the VMDK is still limited when creating standard virtual disks; I’m just wondering whether this limit is bypassed when using RDMs?
I can’t wait for this 2TB limit to be removed because it is starting to impact how we use VMs. Disk space and LUNs are just huge now and this 2TB limit is, well, antiquated now.
Thanks for any input you might have.
Michael
Duncan Epping says
Yes, physical RDMs can be larger than 2TB!
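As a rough illustration (not an official procedure), this is roughly what mapping a large LUN as a physical RDM looks like from the ESXi shell; the device identifier, datastore, and VM folder names are placeholders:

```
# Create a physical compatibility (pass-through) RDM pointer for a large LUN;
# the small pointer VMDK lives on a VMFS datastore, the data stays on the LUN
vmkfstools -z /vmfs/devices/disks/naa.xxxx \
    /vmfs/volumes/datastore1/fileserver01/fileserver01_rdm.vmdk

# Attach the pointer to the VM like any other existing disk;
# the guest OS then formats the mapped LUN itself (NTFS, ext3, and so on)
```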
cwjking says
Man, using RDMs is IMO more of a problem than they are worth… They are a “use case” thing with us; there has to be a really damn good reason for one. What I can recommend instead of a pRDM is to just allocate multiple large VMDKs to a single VM.
What we did is migrate all our shares to a VSA filer or NFS/CIFS with NetApp. Sure, it’s probably not something you want to do, but it will keep you 100% flexible and get you closer to being a fully virtualized shop.
pRDMs don’t allow you to do a lot of things. For example, you cannot back up the VM in an easy fashion at all. If you are an “agentless” shop like us, that’s a problem… that is why we stay away from pRDMs. To each their own.
Duncan says
Many customers still use in-guest agents, in my experience, especially for large file servers or database servers if they don’t have in-guest NFS mounts.
tom says
I haven’t researched this, but does the use of RDMs prevent a VM from moving around via vMotion or DRS? I heard that and would like to know.
Duncan Epping says
no it does not.
Mark Gabbs says
Ah yes… the deprecation of the 500GB volume.
As long as my sync/async volume replication software (depending on my SAN vendor) can handle those large volumes, we ARE going to change the architecture of our SAN LUN sizing…
Andy says
Great info Duncan. Great to see the 2 TB volume limit eclipsed and to see design recommendations moving away from the 500GB/10VMs per volume. Can’t wait to get my hands on vSphere 5.
Albin says
Is the 2TB minus 512 bytes limit still valid in VMFS-5?
Duncan Epping says
For VMDKs 2TB is still the limit… but you can have RDMs up to 64TB!
Albin says
This is good news. Tnx Duncan
Eugene says
So I guess now there is a good reason to use RDMs 🙂
Marcel Mertens says
Ha, months ago Duncan told us to scale up (http://www.yellow-bricks.com/2010/03/17/scale-up/) which means fewer but bigger hosts.
The result is that VMware changed the licensing to make more money. The hosts became bigger and bigger, but they are still 2- or 4-socket systems (and VMware licenses are per socket). Now VMware has cut into this and also charges you for the amount of memory used. vSphere 5 is a brand new product, but the vRAM delivered per CPU is sized for somewhere around 2009.
In 2012 or 2013 vSphere 5 will still be the latest version, but a “normal” ESX host will have 256 or 512GB of memory. So you will have to spend 3 or 4 times more for your licenses or SnS.
With vSphere 6 you will be limited in storage pool size..
Tom says
The new licensing also penalizes small businesses, which VMware doesn’t really care about; nor do they care whether we switch over to XenServer. Big business doesn’t care because they will just charge Main Street more money to pay VMware, and Maritz gets richer. This was a bad licensing decision. If VMware really wants to outsell MS/Citrix, remove the memory thing and just lower the price per socket. It will be very difficult to explain this to non-techie people.
Bilal Hashmi says
I don’t mean to advertise my blog on Duncan’s blog; that would be a stupid thing to do. However, I have noticed a lot of folks freaking out about vRAM, and it really depends on how your environment is set up. I wrote a little blurb about this using some extreme cases. IMO it has been made out to be more than it really is, and most importantly, it’s not a technical flaw in the hypervisor but a pricing model that can be adjusted easily if need be. Would you feel better if the vRAM entitlement on Enterprise Plus was increased to 64GB? How many VMs can you really push on a single host before you start seeing CPU ready issues?
http://www.cloud-buddy.com/?p=391
Duncan Epping says
You really think that Paul or Steve informed me about licensing changes roughly a year before the release, or based their changes on my articles? I’m not that influential… although I wish I was.
Henk Arts says
Guys, what does this license discussion have to do with this great blog post? Please appreciate Duncan’s effort in sharing this great knowledge!
tHENKs!
Mxx says
“VMFS-5 uses a unified block size of 1MB”
What happened to the previous recommendation of upping that to 8MB?
Duncan Epping says
All constraints around the block size have been lifted, so there is no need to change it.
Chris Nakagaki says
I love the fact that we will no longer need to worry about block size!! Thanks for this tidbit of info.
Ryan B says
Can we do an online upgrade of a volume with 8 MB blocks to VMFS-5?
Thank you.
Michael says
Yes, you can. Block size will remain 8MB.
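For reference, a minimal sketch of the in-place upgrade from the ESXi shell, assuming “datastore1” stands in for your datastore name and that vmkfstools’ upgrade option behaves as documented for vSphere 5.0:

```
# Upgrade a mounted VMFS-3 volume to VMFS-5 in place; VMs can keep running
vmkfstools -T /vmfs/volumes/datastore1

# Verify the result; the type should now show VMFS-5, and the original
# block size (8MB in this example) is retained on upgraded volumes
esxcli storage filesystem list
vmkfstools -Ph /vmfs/volumes/datastore1
```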
Kelvin says
Duncan,
Great post, thanks 🙂
Does the 64TB RDM limit apply to both physical and virtual mode RDM’s?
On the subject of licensing, I remain undecided about the new model. My first impressions were positive in that I thought it afforded more flexibility and removed some of the more complex nuances of the previous license model (number of cores per processor, amount of installed RAM etc.). Having taken a second look today though I can see some fairly serious cost implications when compared with vSphere 4 licensing for some scenarios (take this post for example: http://itblog.rogerlund.net/2011/07/vmware-vsphere-5-licensing-customer.html). I think I need to see some more real world examples and the potential impact on some of our existing customers before I make up my mind.
Michael says
AFAIK, virtual RDM is still limited to 2TB.
Kelvin says
“Guys, what does this license discussion have to do with this great blog post? Please appreciate Duncan’s effort in sharing this great knowledge!”
Quite agree by the way… didn’t intend for my last post to detract from this great information. Was just posting my thoughts in response to some of the others 🙂
Joris says
We’re running our datastores on NFS (on a NetApp filer) only. We’ve had some interesting results with thin provisioning and deduplication, saving us upwards of 60% on storage capacity.
Any word on whether VMFS-5 is now more interesting than NFS?
Fabio says
So with all the goodies in VMFS-5 plus VAAI locking, what will be the new “standard” size for a VMFS datastore? 1TB?
[I know that the answer is “It depends” but I am looking at the average datastore…]
Duncan Epping says
It depends, but I have been doing 1TB datastores for a while now and don’t see the problem at all.
joss says
— VMFS-5 uses GPT instead of MBR
So does that mean VMware Converter will be able to convert a GPT-partitioned server?
Duncan Epping says
No, it means that VMFS will start using GPT instead of MBR… Converter is not the topic here; I don’t know, to be honest.
Cwjking says
Cool Duncan, I am just happy to get bigger damn datastores. The VMDK size is not a concern for us because usually we scale out with VM’s. 😉
craig says
Even though the size has been increased, I am just wondering whether there are any changes in terms of performance, especially if there is a high number of virtual machines in place, say something like 40 VMs on a single LUN? Any idea?
Duncan says
Most of the performance improvements will come from VAAI to be honest…
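If you want to confirm that a given LUN can actually use those offloads, including the ATS locking primitive, a quick check from the ESXi shell looks roughly like this (the device identifier is a placeholder):

```
# Show VAAI primitive support for one device (ATS, Clone, Zero, Delete status)
# replace naa.xxxx with the identifier of the LUN backing your datastore
esxcli storage core device vaai status get -d naa.xxxx
```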
craig says
Can we still define an 8MB block size as we did previously?
Duncan Epping says
No, unified blocksize… no need to define it.
cwjking says
Sorry Duncan, I couldn’t reply to your last comment to me.
I agree some do use it, but the future is the vStorage APIs. We don’t design our ESXi hosts with a backup network in mind anymore, but as you stated, I guess most people aren’t with the times. Perhaps it would be better said that not everyone uses Veeam :).
Ravi Shanghavi says
Duncan, I’m sorry, but to clarify… we can’t vMotion any machines with RDMs attached in physical compatibility mode, which is a must for shared disks in clusters. Br, Ravi
Brian says
Okay, I’m reading over your new Deepdive book and thought this was the best place to ask. I am wondering whether, instead of the multiple 2TB LUNs in my current (test) environment, I should create a single new VMFS-5 LUN of maybe 12TB. Now keep in mind the storage I am speaking of is an IBM XIV; this is not something I would do if I were connecting to something like an IBM 8700. Am I not thinking this through correctly? I don’t see a negative with that approach.
Brian says
Why use VEEAM when you can use TSM-VE? 😉
Matt says
Brian,
Did you ever find any more guidance regarding your XIV and VMFS sizing? Same boat here.
Brian says
Not really, and I haven’t had time to really think more about it either. I guess I need to better understand how going from, say, 20 to 25 VMs per LUN to something like 100+ VMs on a single large LUN would work in regard to the VMs making requests to the storage. Since you have one pool of storage now, will the requests queue up with 100 VMs trying to hit it, versus it being broken up across multiple LUNs? I dunno, just thinking out loud really. Let me know what you end up doing.
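One way to sanity-check that kind of consolidation is to watch the device queue while the consolidated VMs are running; a rough sketch with esxtop (treat the exact column names as an assumption from memory):

```
# Start esxtop and press 'u' to switch to the disk device view
esxtop

# Columns to watch for the large LUN:
#   DQLEN - device queue depth
#   ACTV  - commands actively being serviced by the device
#   QUED  - commands waiting in the queue behind them
# Sustained non-zero QUED suggests the single big LUN is becoming a queuing bottleneck.
```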
Mike Donley says
What is the maximum size of an actual virtual disk on an RDM in vSphere 5? I know the RDM volume size is 2TB, but how large can you create an actual virtual disk on that volume? Also, are RDMs formatted with VMFS? Thanks!
Duncan Epping says
2TB for a virtual RDM, 64TB for a physical RDM. RDMs are not formatted with VMFS; they are formatted by the guest in the desired format. Judging by your questions, I wonder if you really have a need for them… Typically VMFS is the way to go!
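If you are ever unsure which compatibility mode an existing RDM pointer uses, a small sketch (the file path is a placeholder):

```
# Query an RDM pointer file; the output states whether it is a
# passthrough (physical) or non-passthrough (virtual) raw device mapping
vmkfstools -q /vmfs/volumes/datastore1/bigvm/bigvm_rdm.vmdk
```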
Alex says
Hi Duncan,
I know that VMFS is the way to go, but what if I need to install 2 VMs (Windows Server 2012) as file servers with a total of about 90TB of disk space? I can see that my only option is to use pRDMs, attaching from 4 to 8 disks to my VMs. Backup of those disks will be done by an agent writing to tape. pRDMs are still compatible with vMotion and HA. My only concern: if I need to back up only the system disk (C:\), would I be allowed to take snapshots of those VMs? Can you give me your opinion on using a few very large disks (pRDMs) versus using a lot more disks (VMDKs) and concatenating them inside the VM (dynamic disks)?
thanks,
alex
Anthony says
@Ravi Shanghavi … pRDMs can be vMotioned from host to host. What is preventing your VMs from vMotioning is that the clustering requires you to set the SCSI controller to physical SCSI bus sharing mode. Once SCSI bus sharing is set to physical mode, you lose the ability to vMotion a VM between hosts. This is the big pitfall of Windows clustering in a VM environment.
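For what it’s worth, the setting Anthony describes ends up in the VM’s configuration file; a quick, hedged way to spot it (paths are placeholders):

```
# Look for the SCSI bus sharing setting in the VM's .vmx file;
# a value of "physical" (or "virtual") is what blocks vMotion for that VM
grep -i sharedBus /vmfs/volumes/datastore1/clusternode1/clusternode1.vmx
# expected output along the lines of: scsi1.sharedBus = "physical"
```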
Ravi Shanghavi says
@Anthony This isn’t particular to Windows clustering; it in fact applies to real clustering solutions like Veritas Cluster Server as well. Either way, a PITA. But an offline (cold) migration is still possible, so you’re not completely SOL.
Nikki says
Recently I reinstalled an ESXi 5.0 server and accidentally selected a shared FC datastore which was running VMFS-3 and also held 5 VMs.
As far as I can see, all the VMs are no longer accessible, and a new datastore named “datastore (1)” has been created; the original FC datastore name was “datastore”.
Obviously I won’t be able to see the VMs in a new datastore. But I also heard that if there is already a VMFS partition, the new installation won’t delete the VMDK files. Is this right?
Is there any way I can get my VMs/VMDKs back?
Thanks
Nikki