I had a discussion about thin provisioning with a colleague last week. One of the reasons I did not yet recommend it for high-I/O VMs was performance: I had not seen a whitepaper or test that showed there was little impact from growing the VMDK. Eric Gray of Vcritical.com had the scoop: VMware just published an excellent whitepaper called “Performance study of VMware vStorage Thin Provisioning”. I highly recommend it!
Surprisingly enough, there is no performance penalty for writing to a thin-provisioned VMDK when it comes to locking. I expected that SCSI reservations would cause at least some sort of hit, but there isn't one. (Except for zeroing of course; see the paragraph below.) The key takeaway for me is still: operational procedures.
Make sure you set the correct alarms when thin provisioning a VMDK. You need to regularly check the level of overcommitment, the total capacity of the datastore, and the percentage of disk space still available.
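To make the arithmetic behind those alarms concrete, here is a minimal sketch. The datastore numbers and the 120% threshold are purely hypothetical; in practice you would pull the real capacity and provisioned figures from vCenter and set the threshold to match your own comfort level.

```shell
# Hypothetical datastore figures (GB) - replace with real values from vCenter.
capacity_gb=1000      # total datastore capacity
provisioned_gb=1400   # sum of all provisioned (thin + thick) VMDK sizes
free_gb=250           # actual free space remaining

# Overcommitment: provisioned space as a percentage of capacity.
overcommit_pct=$(( provisioned_gb * 100 / capacity_gb ))
# Free space as a percentage of capacity.
free_pct=$(( free_gb * 100 / capacity_gb ))

echo "Provisioned: ${overcommit_pct}% of capacity, ${free_pct}% free"

# Example alarm conditions - both thresholds are assumptions, tune to taste.
if [ "$overcommit_pct" -gt 120 ]; then
  echo "ALERT: datastore overcommitted beyond 120%"
fi
if [ "$free_pct" -lt 10 ]; then
  echo "ALERT: less than 10% free space remaining"
fi
```

With the sample numbers above the datastore is 140% provisioned with 25% free, so only the overcommitment alert fires. vCenter can raise equivalent datastore alarms for you; the point of the sketch is simply that both numbers, not just free space, need watching once thin disks are in play.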
Another key take away is around performance though:
The figure shows that the aggregate throughput of the workload is around 180MBps in the post-zeroing phase of both thin and thick disks, and around 60MBps when the disks are in zeroing phase.
In other words, when the disk is zeroed out while writing there's a HUGE, and I mean HUGE, performance hit. To avoid this for thick disks there's an option called “eager zeroed thick”. Although this disk type is currently only available from the command line and takes longer to provision, as it zeroes out the entire disk at creation time, it can lead to a substantial performance increase. This is only beneficial for write-intensive VMs of course, but it definitely is something that needs to be taken into account.
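For reference, creating an eager-zeroed-thick disk from the command line is done with vmkfstools on the ESXi host. The size, datastore name, and VMDK path below are hypothetical; this only runs on an ESXi host, so no expected output is shown.

```shell
# Sketch: pre-zero a 40 GB VMDK at creation time (ESXi console / vmkfstools).
# Zeroing happens up front, so the provisioning step is slow,
# but the guest never pays the first-write zeroing penalty later.
vmkfstools -c 40G -d eagerzeroedthick /vmfs/volumes/datastore1/vm1/vm1.vmdk
```

The trade-off is exactly the one the whitepaper measures: you pay the ~60MBps zeroing cost once at provisioning time instead of during the VM's first writes.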
Please note: at the bottom of page two, it states that VMDKs on NFS are thin by default. This is not the case; it's the NFS server that dictates the type of disk used. (Source: page 99)