VMFS block sizes have always been a hot topic when it comes to storage performance. The subject has been discussed by many, including Eric Siebert on ITKE, and Gabe opened a topic on VMTN and answered his own question at the bottom. Steve Chambers wrote a great article about disk alignment and block size on VI:OPS, which clearly states: "the VMFS block size is irrelevant for guest I/O." Reading these articles and topics, we can conclude that an 8MB block size, as opposed to a 1MB block size, doesn't increase performance.
But, is this really the case? Isn’t there more to it than meets the eye?
Think about thin provisioning for a second. If you create a thin-provisioned disk on a datastore with a 1MB block size, the disk will grow in increments of 1MB. Hopefully you can see where I'm going: a thin-provisioned disk on a datastore with an 8MB block size will grow in 8MB increments. Each time the thin-provisioned disk grows, a SCSI reservation takes place because of the metadata changes. As you can imagine, an 8MB block size reduces the number of metadata changes needed, which means fewer SCSI reservations. Fewer SCSI reservations equals better performance in my book.
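As a back-of-the-envelope illustration of the argument above (the growth figure is hypothetical; the one-reservation-per-growth assumption follows the reasoning in this post):

```python
def metadata_updates(growth_mb, block_size_mb):
    """Number of growth operations (each one triggering a SCSI
    reservation for the metadata update) needed for a thin disk
    to grow by growth_mb on a VMFS datastore with the given
    block size."""
    # ceiling division: a partial block still costs one update
    return -(-growth_mb // block_size_mb)

growth = 40 * 1024  # assume the thin disk grows by 40 GB over its lifetime
print(metadata_updates(growth, 1))  # 1MB block size -> 40960 reservations
print(metadata_updates(growth, 8))  # 8MB block size ->  5120 reservations
```

Same amount of data written, but the 8MB block size needs one eighth of the metadata updates, and therefore one eighth of the SCSI reservations.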
For current VI3 environments, VDI aside, I hardly have any customers using thin-provisioned VMDKs. But with the upcoming version of ESX/vCenter this is likely to change, because the GUI will make it possible to create thin-provisioned VMDKs. Thin provisioning will be an option not only when creating a VMDK, but also when you initiate a Storage VMotion: you will have the option to migrate to a thin-provisioned disk. It's more than likely that thin-provisioned disks will become standard in most environments to reduce storage costs. If they do, remember that when a thin-provisioned disk grows, a SCSI reservation takes place, and fewer reservations are definitely beneficial for the stability and performance of your environment.
Ken Cline says
So…is thin provisioning an argument for the use of NFS as your backing storage? No worries about SCSI reservations…
Carlo Costanzo says
Based on this and the referenced articles, it seems that an 8MB block size would be a smart option regardless of actual LUN size.
Of course remembering Ken Cline’s KISS article, http://kensvirtualreality.wordpress.com/2009/03/09/when-is-it-ok-to-default-on-your-vi/
I have to wonder why VMware chose to default to 1MB block sizes.
frank denneman says
I’m wrestling with this exact question myself. A client of mine is ready to deploy a LeftHand iSCSI SAN with thin provisioning enabled.
Maybe I can try to do some testing on the new SAN.
Kix says
Also remember that unless a VM’s disks are eager zeroed thick (available via the CLI or when deployed from a template only), it will incur the dreaded double I/O hit for each new block that gets written to as well. One more reason people think I/O sucks in VMware, when if you do it right it doesn’t…
Also on thin disks: enterprise-class storage from NetApp/EMC/etc. generally doesn’t suffer much, so you should see perhaps a 2% degradation in performance for thin vs. thick when having to grow and write zeroes. If your storage array is some random SATA box with a little cache, the performance hit is going to be significantly higher.
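The "double I/O hit" Kix describes can be sketched with a toy cost model (the workload numbers are made up; on ESX an eager zeroed thick disk can be created up front with `vmkfstools -c <size> -d eagerzeroedthick`, which is why it avoids the first-touch penalty):

```python
def write_cost(writes_to_new_blocks, writes_to_touched_blocks,
               eager_zeroed=False):
    """Rough I/O-operation count for a workload. A lazily zeroed
    (or thin) disk must zero each block on first write, so the
    first touch costs two operations; an eager zeroed thick disk
    was zeroed entirely at creation time, so every write costs one."""
    first_touch_cost = 1 if eager_zeroed else 2
    return (writes_to_new_blocks * first_touch_cost
            + writes_to_touched_blocks)

# hypothetical workload: 1000 writes land on fresh blocks,
# 9000 on blocks that have been written to before
print(write_cost(1000, 9000))                     # lazy zeroed: 11000 ops
print(write_cost(1000, 9000, eager_zeroed=True))  # eager zeroed: 10000 ops
```

The penalty only applies to first writes, which is why the steady-state difference is small on decent arrays, as Kix notes.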
Frank Denneman says
I agree, Kix, and I/O doesn’t suck indeed; it’s just a matter of taking everything into account when creating a design.
Thanks for the comments guys!!
I thought the VM disk files grew in increments larger than the block size? I have always seen VMs grow in increments of 16MB regardless of block size. Is this changed in vSphere or have I just been hallucinating?
frank denneman says
I think you are thinking of snapshots.
A snapshot will grow in blocks of 16MB.
Frank is right, you are talking about snapshots. Snapshots grow in chunks of 16MB.
Attila Bognar says
There was a VMworld Europe 2009 session with a note regarding VMFS: it seems there will be improvements to the reservation mechanism (maybe that is the reason thin-provisioned disks will be offered). Can you confirm this?
The locking mechanism has been improved with VMFS 3.31; you can find more details in this topic over on the VMTN forums: http://communities.vmware.com/message/963043
Kix, can you explain what you mean by eager zeroed thick? We may be deploying SQL Server to our VMware cluster later this year, so I have to start thinking about maximizing I/O speed.
Frank Denneman says
Page 12 of the document “Performance Tuning Best Practices for ESX Server 3”
http://www.vmware.com/pdf/vi_performance_tuning.pdf explains the four vmdk disk types pretty well.
But what if I need greater file size capability on my local VMFS datastore? In ESX 4 it won’t let you choose a block size, unless I’m missing something.
thanks for the quick response, I will do that in future!!