Performance: Thin Provisioning

Duncan Epping · Nov 15, 2009

I had a discussion about Thin Provisioning with a colleague last week. One of the reasons I did not yet recommend it for high I/O VMs was performance: I had not seen a whitepaper or test that showed there was little impact from growing the VMDK. Eric Gray of VCritical.com had the scoop: VMware just published an excellent whitepaper called “Performance study of VMware vStorage Thin Provisioning“. I highly recommend it!

Surprisingly enough, there is no locking-related performance penalty for writing to a Thin Provisioned VMDK. I expected that due to SCSI reservations there would at least be some sort of hit, but there isn’t (except for zeroing of course, see the paragraph below). The key takeaway for me is still: operational procedures.

Make sure you set the correct alarms when thin provisioning a VMDK. You need to regularly check the level of “overcommitment”, the total capacity, and the percentage of disk space still available.
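As a rough sketch of what such a check could look like (this is not from the post; pyVmomi and the vCenter hostname and credentials below are purely illustrative assumptions), the datastore summary in the vSphere API exposes capacity, freeSpace and uncommitted, which is enough to calculate an overcommitment percentage per datastore:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Connect to vCenter; hostname and credentials are placeholders.
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    datastores = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in datastores.view:
        s = ds.summary
        # Provisioned space = what is already used plus what has been promised
        # to thin disks but not written yet (uncommitted).
        provisioned = (s.capacity - s.freeSpace) + (s.uncommitted or 0)
        pct = 100.0 * provisioned / s.capacity
        print(f"{s.name}: {pct:.0f}% provisioned, "
              f"{s.freeSpace / 2**30:.1f} GB free of {s.capacity / 2**30:.1f} GB")
finally:
    Disconnect(si)

An alarm threshold on exactly that percentage (and on free space) is what keeps a thin provisioned datastore from filling up unnoticed.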

The other key takeaway, though, is around performance:

The figure shows that the aggregate throughput of the workload is around 180MBps in the post-zeroing phase of both thin and thick disks, and around 60MBps when the disks are in zeroing phase.

In other words, when the disk is zeroed out while writing there’s a HUGE, and I mean HUGE, performance hit. To avoid this for thick disks there’s an option called “eager zeroed thick”. Although this type is currently only available from the command line and takes longer to provision, as it zeroes out the disk on creation, it could lead to a substantial performance increase. This would of course only be beneficial for write-intensive VMs, but it definitely is something that needs to be taken into account.
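To put the whitepaper’s numbers in perspective, here is a back-of-the-envelope calculation; the 100 GB of first-time writes is an arbitrary example, and the throughput figures are the aggregate numbers quoted above:

# ~60 MBps while blocks are being zeroed on first write, ~180 MBps afterwards.
size_gb = 100
zeroing_mbps, post_zeroing_mbps = 60, 180
t_zeroing = size_gb * 1024 / zeroing_mbps / 60       # minutes
t_post = size_gb * 1024 / post_zeroing_mbps / 60
print(f"first-time writes (zeroing phase): {t_zeroing:.0f} min")
print(f"same writes on pre-zeroed blocks:  {t_post:.0f} min")
# Roughly 28 minutes versus 9: about a third of the steady-state throughput.
# That is the cost eager zeroed thick pays once, at creation, instead of at run time.

Keep in mind this only applies to the first write to each block; once a block has been zeroed, both disk types perform the same.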

Please note: On page two, bottom, it states that VMDKs on NFS are thin by default. This is not the case. It’s the NFS server that dictates the type of disks used. (Source: ESX Configuration Guide, page 99)


Comments

  1. mb says

    15 November, 2009 at 17:09

    Where is it possible to find this PDF on the VMware site?
    I always look for performance tests in Technical Papers (http://www.vmware.com/resources/techresources/cat/91 ) but I do not find the PDF you are describing there, so I guess there are other places to get useful docs.

  2. Jason Smith says

    15 November, 2009 at 17:58

    Selecting “Support clustering features such as Fault Tolerance” when creating a VMDK will provision it as eager zeroed thick.

  3. Duncan says

    16 November, 2009 at 00:42

    No clue, I noticed the link on another blog…

  4. Eric Gray says

    16 November, 2009 at 00:53

    The PDF will be listed on the VMware Technical Resource Center soon; it was just published Friday evening. I know the writer and she tipped me off early… sometimes it pays to be a VCritical subscriber. 🙂

    Eric

  5. Daragh Naughton says

    17 November, 2009 at 15:19

    Hi Duncan,

    Regarding NFS and VMDK provisioning, the ESX Config Guide on p.99 states (as you have quoted here):

    “The virtual disks that you create on NFS-based datastores use a disk format dictated by the NFS server”

    How can the NFS Server dictate this? Surely it’s the ESX Server’s choice, based on the type of NFS, etc.?

  6. Duncan says

    17 November, 2009 at 16:10

    No clue; somehow we must be checking a setting which is passed on to the hypervisor. But this is what people have always told me. I actually never looked into it and took it for granted.

  7. Daragh Naughton says

    18 November, 2009 at 17:55

    A bit of digging revealed:

    “This behaviour of the VMware Infrastructure with a virtually provisioned NFS file system is a result of the NFS protocol, not of virtual provisioning. With NFS, storage for a Virtual Machine is not reserved in advance; it is reserved when data is actually written to the virtual machine. This is because the NFS protocol is thinly provisioned by default. Data blocks in the file system are allocated to the NFS client (ESX in this case) only when they are needed.”

    Source: “Implementing Virtual Provisioning on EMC CLARiiON and Celerra with Virtual Infrastructure”
    http://www.emc.com/collateral/software/white-papers/h6131-implementing-vp-with-clariion-celerra-vmware-infrastructure-wp.pdf

    So it is the NFS server that only allocates the blocks when they are needed – it has absolutely nothing to do with the hypervisor. Interesting.

  8. Dennis says

    18 November, 2009 at 20:27

    What about fragmentation when using Thin Provisioned disks? In my opinion there must be fragmentation of VMFS after some time, especially when overallocating.

  9. Duncan Epping says

    18 November, 2009 at 21:34

    Fragmentation is not an issue, as the block (read/write) requests from the OS are usually much smaller than the thin chunks.
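    As a rough illustration (the sizes below are assumptions, not numbers from this thread): a thin VMDK on VMFS-3 grows in units of the datastore block size, while guest I/O requests are far smaller, so one allocation serves many writes before the disk has to grow again.

    vmfs_block = 1 * 1024 * 1024   # assumed 1 MB VMFS-3 block size
    guest_io = 4 * 1024            # assumed 4 KB guest write
    print(f"{vmfs_block // guest_io} guest writes fit in one thin-disk grow unit")
    # -> 256 guest writes per allocation with these assumed sizes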

  10. Dennis says

    18 November, 2009 at 23:20

    Any idea if there is a big difference between VMFS version 3.31 and 3.33? Most customers will not upgrade VMFS but will keep using the ESX 3.5 VMFS version. Many documents show that some of the metadata layout changes that minimize SCSI reservations were introduced in 3.33.

  11. Carl L says

    23 November, 2009 at 16:57

    On the NFS question: I suspect that if you create an eager zeroed thick disk the NFS server will have to fully allocate the space, since the VMkernel will write zeros to the entire disk. Never tried it though.


  12. Peter W says

    13 March, 2013 at 09:03

    The VMware whitepaper doesn’t claim that there is no penalty for writing to a thin VMDK. There is indeed a penalty. Comparing the zeroing phase of a thick VMDK to the zeroing phase of a thin VMDK illustrates that thick VMDKs should be eager zeroed, not that there is no penalty for writing to a thin VMDK. Thin VMDKs in their zeroing phase will induce performance hits every time the disk grows, until the disk reaches the maximum provisioned size. To make the best use of thin VMDKs they should be created on datastores that are completely pre-zeroed, thus negating the performance hits related to zeroing during thin VMDK disk growth (as the underlying storage on the datastore has been pre-zeroed).

    • Duncan Epping says

      13 March, 2013 at 10:24

      That is what the paragraph at the bottom states…
