I have written about block sizes a couple of times already, but I had the same discussion twice over the last couple of weeks, once at a customer site and once on Twitter (@VirtualKenneth), so let's recap. First, the three articles that started these discussions: vSphere VM Snapshots and block size, That’s why I love blogging… and Block sizes and growing your VMFS.
I think the key takeaways are:
- Block sizes do not impact performance, neither large nor small, as the OS dictates the block sizes used.
- Large block sizes do not increase storage overhead, as sub-blocks are used for small files. The sub-blocks are always 64KB.
- With thin provisioning there are theoretically more locks while a thin disk is growing, but the locking mechanism has been vastly improved in vSphere, which means this can be neglected. A thin-provisioned VMDK on a VMFS volume with a 1MB block size grows in chunks of 1MB, and so on…
- When separating OS from data it is important to select the same block size for both VMFS volumes, as otherwise it might be impossible to create snapshots: the snapshot files land on the volume holding the VM's working directory, where the maximum file size is dictated by that volume's block size.
- When using a virtual RDM for data, the OS VMFS volume must have an appropriate block size. In other words, its maximum file size must accommodate the RDM size.
- When growing a VMFS volume there is no way to increase the block size. You may need to grow the volume in order to grow a VMDK, but that VMDK could still run into the maximum file size dictated by the original block size.
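To make the RDM and maximum-file-size points above concrete, here is a minimal sketch of the VMFS-3 block size to maximum file size relationship. The figures are the commonly documented VMFS-3 limits (roughly 256GB per 1MB of block size); the helper function and its name are mine, for illustration only:

```python
# Commonly documented VMFS-3 limits: block size (MB) -> max file size (GB).
# Real limits are a few hundred bytes short of these round numbers.
VMFS3_MAX_FILE_GB = {1: 256, 2: 512, 4: 1024, 8: 2048}

def min_block_size_mb(file_size_gb):
    """Smallest VMFS-3 block size (MB) whose max file size fits the given file."""
    for block_mb in sorted(VMFS3_MAX_FILE_GB):
        if file_size_gb <= VMFS3_MAX_FILE_GB[block_mb]:
            return block_mb
    raise ValueError("file exceeds the 2TB VMFS-3 maximum file size")

# A 500GB virtual RDM needs at least a 2MB block size on the OS volume:
print(min_block_size_mb(500))  # -> 2
```

This is also why "go big" is the safe default: an 8MB block size covers every file size VMFS-3 can hold, while a 1MB volume caps you at 256GB per file no matter how large you grow the volume.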
My recommendation would be to stop worrying about block size. Make your life easier and standardize: go big and make sure you have the flexibility you need now and in the future.
Rick Vanover says
Good post, Duncan as always. I too agree to go with the larger block size at all times to use the built-in efficiencies, standardize, and avoid future compatibility issues.
Great post, and I agree with your conclusion. The only thing that has kept me on edge about always using 8MB blocks is that I was bitten by the old 16MB VMFS2 -> VMFS3 upgrade. It cost us weeks of additional billable hours, and raised many questions about my expertise, to migrate, far beyond what it should have been, because we only used 16MB block sizing under VMFS2.
I don’t anticipate that this will ever occur again, but in those days I said the same thing. I would like to hear your thoughts on this. Am I just being paranoid because of an old traumatizing event?
That might happen in the future, who knows… The problem is you need flexibility today, so my recommendation would be to pick a large block size.
Hi Duncan. Great information as always. My assumption is that your post, “When using a virtual RDM for Data the OS VMFS volume must have an appropriate block size. In other words the maximum file size must match the RDM size,” was specific to the snapshot discussion? Otherwise, the VMFS block size has no bearing on the size limitation of the RDM.
I had a question.
I do understand that the OS dictates the block size, even though the application workload block size may be different, so the workload block size inside the VM is constrained by the OS. vscsiStats will also help in understanding the VM workload IO pattern.
Question: now when we go down the stack, IO from the app to the OS has to go through the VMFS file system to the virtual storage stack and on to the physical storage stack. In that case, won't the VMFS block size be a limiting factor? Assuming the OS can drive a larger workload, would a 1MB VMFS block limit the amount of IO driven through the stack? Is that correct to assume, or is the VMFS block size just for allocating disk extents, with ALL VM IO passing through to the storage stack without any restriction?