Yellow Bricks

by Duncan Epping

Death to false myths: The type of virtual disk used determines your performance

Duncan Epping · Dec 19, 2012

I was reading Eric Sloof's article last week about how the virtual disk type impacts your performance. First of all, let me be clear that I am not trying to bash Eric here… I think Eric has a valid point, at least in some cases, and let me explain why.

On his quest to determine whether the virtual disk type determines performance, Eric tested the following in his environment:

  • Thin
  • Lazy zero thick
  • Eager zero thick

These are the three disk types you can choose from when creating a new virtual disk. The difference between them is simple. Thick disks are fully allocated virtual disk files. Lazy zero means that the disk is not zeroed out yet; eager zero means the full disk is zeroed out during the provisioning process. Thin, well I guess you know what it means… not fully allocated and also not zeroed. This implies that in the case of "thin" and "lazy zero thick" something needs to happen the first time a new "block" is accessed. This is what Eric showed in his test. But is that relevant?
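
For those who want to repeat the comparison, the three formats can also be created by hand with vmkfstools from the ESXi shell. A minimal sketch (the 10GB size and the file names are just examples, and options may differ slightly per ESXi version):

  vmkfstools -c 10G -d thin             thin.vmdk
  vmkfstools -c 10G -d zeroedthick      lazythick.vmdk
  vmkfstools -c 10G -d eagerzeroedthick eagerthick.vmdk

The eager zeroed disk is the only one that takes a while to create, as that is where the zeroing (or the Write Same offload discussed below) happens up front.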

First of all, the test was conducted with an Iomega PX6-300d. One thing to point out here is that this device isn't VAAI capable and is limited from a performance perspective due to its CPU power and limited set of spindles. The lack of VAAI, however, impacts the outcome the most. This, in my opinion, means that the outcome cannot really be used by those who have VAAI capable arrays. The outcome would be different when one (or both) of the following two VAAI primitives is supported and used by the array:

  • Write Same aka “Zero Out”
  • ATS aka “Hardware Offloaded Locking”

Cormac Hogan wrote an excellent article on this topic as well as an excellent white paper; if you want to know more about ATS and Write Same, make sure to read them.
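
Whether your array supports these primitives is easy to check from the ESXi shell, by the way; something along these lines should work on vSphere 5.x (the device identifier below is just a placeholder):

  esxcli storage core device vaai status get
  esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxx

The output lists per device whether ATS, Clone, Zero and Delete are supported, which tells you right away whether the offloads discussed here can even kick in.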

Secondly, it is important to realize that most applications don't have the same write pattern as the benchmarking tool Eric used. In his case the tool basically fills up an X amount of blocks sequentially to determine the maximum performance. Only if you fill up a disk at once, or write to very large unwritten sections, could you potentially see a similar result. Let me emphasize that: could, potentially.

I guess this myth was once a fact; back in 2009 a white paper was released about thin/thick disks. In this paper they demonstrate the difference between thin, lazy zero and eager zero thick… yes, they do prove there is a difference, but this was pre-VAAI. Now if you look at another example, a nice extreme one, a performance test done by my friends at Pure Storage, you will notice there is absolutely no difference. This is an extreme example considering it is an all-flash VAAI-based storage system; nevertheless it proves a point. But it is not just all-flash arrays that see a huge improvement: take a look at this article by Derek Seaman about 3Par's "write same" (zeroing) implementation. I am pretty sure that in his environment he would also not see the huge discrepancy Eric witnessed.

I am not going to dispel this myth, as it is a case of "it depends". It depends on the type of array used and, for instance, on how VAAI was implemented, as that could make a difference. In most cases, however, it is safe to say that the performance difference will not be big, if noticeable at all, during normal usage. I am not even discussing all the operational implications of using eager-zero thick… (Frank Denneman, who will respond to this blog post soon, has a nice article about that. :-))

Comments

  1. Ken.C says

    19 December, 2012 at 15:55

    This is fascinating. I will have to point at it when my users ask me for thick disks. One caveat that took me a lot of digging and pain:

    Certain VAAI functions (HardwareAcceleratedInit and HardwareAcceleratedMove) do not work when RecoverPoint is in the equation – and in fact they can crash your SPs. There are also similar issues with a different VAAI primitive (EnableBlockDelete) and earlier versions of the EMC VNX code.

    • Duncan Epping says

      19 December, 2012 at 16:02

      Correct, hence the reason I am not dispelling this myth… just stating that results will vary based on the hardware used. Simply test it 🙂

      And you can use ESXTOP by the way to see if VAAI primitives are actually being used as it holds counters for them.
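
      A rough recipe, in case you want to try it yourself (exact field names may differ per esxtop version):

        esxtop   # start esxtop in the ESXi shell (or use resxtop remotely)
        u        # switch to the disk device view
        f        # enable the VAAI statistics field
        # then watch counters such as ATS, ZERO and CLONE_WR while the VM is writing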

  2. Phil says

    19 December, 2012 at 15:57

    Tell this to Cisco please…

  3. Eric says

    19 December, 2012 at 16:00

    I just got asked this question by a client yesterday. I wasn’t willing to tell them that there wasn’t a performance impact of thin provisioning but urged them to use it.

  4. Martin says

    19 December, 2012 at 16:31

    I think the focus here should be placed on understanding the read/write pattern of the application first, and then on considering the benefits of 1) the disk provisioning type, and 2) the benefits/features of technology such as VAAI.

    An OLTP database versus a file server will have very different patterns which will have huge influence on these choices.

    What I do agree on is that if you have the choice to deploy on best-of-breed technology, the performance impact might not be easily noticeable.

  5. Totie Bash says

    19 December, 2012 at 17:15

    For me it is even simpler: if it is a database, SharePoint or Exchange server and performance is a must, allocate the disk accordingly and use thick. If it is just a file server, McAfee ePO, WSUS or VDI where the users won't even notice the difference, I use thin. Granted, you do have to guard the datastore against filling up, but that's the job of SDRS. As techs we know the differences, but from the end user's standpoint they care about the perceived performance.

    • Duncan Epping says

      19 December, 2012 at 17:50

      Why would an end-user need to know which disk type is being provisioned…

      • Phil says

        19 December, 2012 at 21:40

        They don't need to know, they want to know. As Totie said, it is perceived. The end user will think that by following certain "best" practices their VM will have enhanced powers. It is a juggling act that has to be performed in some environments to make groups such as DBAs comfortable with virtualizing.

  6. Craig says

    19 December, 2012 at 19:24

    This is a really interesting point. I deal primarily with NetApp infrastructure, and with a zeroed-out aggregate and a write-once file system, does performing an eager zero do anything at all other than waste your time zeroing out an already zeroed block that will not be overwritten…

  7. Tomas Fojta says

    19 December, 2012 at 20:10

    Are you saying that if I use a thick lazy zero disk, the VAAI write zero primitive is used on first access? I do not think that is the case. And if you are not, then how would this VAAI primitive change the result? And how would VAAI ATS help in this case (compared to lazy zero thick), as there is no need for locking because the full VMDK is already reserved?

    I think Eric has a point, and VMware agrees with him, as FT, virtual MSCS clustering and vCenter Operations have eager zero disks as a requirement. Especially in the vC Ops case it is pretty clear that for the first 6 months, while the database grows from 0 to its maximum size, every write requires two IOs instead of one frontend IO, and you would basically halve the performance of your storage.

    • Duncan Epping says

      19 December, 2012 at 22:51

      1) Yes, I am saying that when a block of a (lazy-zero) thick or thin disk needs to be zeroed, the VAAI write same primitive is used
      2) VAAI ATS would help in the case of thin, as locking is offloaded
      3) FT and MSCS require eager-zero thick, not for performance but because the multi-writer mode that is enabled for these disks requires it
      4) I am guessing the VC Ops requirement is a case of "better safe than sorry" and they picked the safest option, which I can understand.
      5) With regards to zeroing out, I am pretty sure not every write would require two IOs, as the zero out probably happens at the VMFS block level, which means that a large chunk is zeroed when needed and many writes can take advantage of this. Also, "half the performance" is probably exaggerated considering caching etc.

      So I wouldn't say VMware agrees with him; you might agree with him, but that is something different.

      • Tomas Fojta says

        20 December, 2012 at 12:53

        Good discussion!

        re 1: As you write below, this would help only in those situations where the storage does not actually zero the blocks on the back-end but marks the blocks as unused in its memory.
        re 2: True, but I was specifically mentioning the comparison of thick lazy zero to eager zero.
        re 3: I learned something new; that's why I like these discussions.
        re 4: BTW, the VMware SQL virtualization best practices also state that eager zero disks should be used.
        re 5: But even the lowly Iomega does caching.
        I agree with you that it is not a myth but a 'consideration'. I just think that VAAI will not help much, especially when comparing with lazy zeroed thick.

        • Duncan Epping says

          20 December, 2012 at 13:24

          1) As I said, test it. Even if the storage writes the zeroes, the impact is not going to be as big as Eric showed. That is the whole point of the discussion.
          2) Not sure what you are referring to; I don't think I said anywhere that VAAI ATS would help with that.
          4) Best practice –> safest option.
          5) Not sure you can call that caching.

        • Duncan Epping says

          20 December, 2012 at 13:27

          And I think the VAAI results speak for themselves:

          http://www.yellow-bricks.com/2011/03/24/vaai-sweetness/

          http://www.purestorage.com/blog/vm-performance-on-flash-part-2-thin-vs-thick-provisioning-does-it-matter/

          http://derek858.blogspot.nl/2010/12/3par-vaai-write-same-test-results-upto.html

          http://blog.synology.com/blog/?p=1364

          Anyway, believe what you want to believe, as I have said multiple times in my article… results will vary based on the type of array used and the way that vendor has implemented VAAI.

    • James Hess says

      20 December, 2012 at 04:44

      “I think Eric has a point and VMware agrees with him as FT, virtual MSCS clustering or vCenter Operations have eager zero disk as requirement.”

      Did it not occur to you, that the reason VMware FT absolutely requires eager zero thick, and can’t take lazy thick or thin, is unrelated to performance, and is because the disk will be accessed in a multiple-writer mode that cannot cope with one of the hosts (writers) dynamically updating VMDK metadata?

      As for MSCS: there are only a small number of validated, supported configuration scenarios, the very simplest ones. Even VMDKs accessed using iSCSI, NFS, or FCoE are not supported.

      I am sure that the software vendor's official word that they won't support thin disks with MSCS, again, has little or nothing to do with technical performance characteristics.

      Ditto for Microsoft's official word of not supporting the use of thin disks, vMotion, or disks hosted on NFS NAS devices for Exchange VMs, of not supporting the use of the VMware snapshot capability to take backups, etc., and of not supporting the use of VMware HA on any server in a DAG cluster.

      It may work just fine, but most software vendors will want to be as conservative as possible and spend as few resources and as little time as possible validating complex configurations; they overspecify the requirements and restrictions to minimize their customers' use of paid-up support resources.

      If there is just one support case that doesn't get opened because of the "use a thick VMDK" requirement, then the software vendor has reduced the cost of their customer's paid-up support.

      Support cases of "my MSCS cluster crashed" (underlying reason: the datastore with a few other thin-provisioned VMs and snapshots ran out of space) are included in that.

      One thing to note… if the MSCS server has thick-provisioned VMDKs, then when the datastore runs out of space it will likely be other VMs (not the MSCS VMs) that error out and stop in their tracks first.

  8. Andrea says

    19 December, 2012 at 23:10

    I make sure to use eager thick zero for copy-on-write guest filesystems like ZFS.

  9. Alex Tanner says

    20 December, 2012 at 09:42

    Hi

    My understanding of the VAAI primitives came from a slightly different perspective, which could be confirmed with Cormac, and may have changed since the original VAAI implementation.

    My understanding of write-same was that it introduces an offload of the function from the ESXi host: when the ESXi host wants to zero a disk, rather than sending a repeated command of thousands of zeros – which consumes fabric traffic – a command is sent to a VAAI-enabled array to zero from this LBA reference to that LBA reference, and the array goes off and executes that command. This approach clearly cuts down on fabric traffic, BUT I am not sure it changes the fundamental approach of what the array does when it zeros out space (other than the fact that it does it in one efficient hit, and probably as one queued operation, rather than the array being interrupted by demands for other activities and the zeroing activity being interlaced with them).

    Critically, as Duncan points out, this very much depends on the workload and how it accesses those disk blocks – few of the standard workloads are sequential in nature, most being small-block random workloads with varying degrees of skew (or re-hitting existing block references for either read or write) – and this will impact the efficiency of the zeroing process to varying degrees over the real estate claimed by that application. Layer array auto-tiering activity on top, and the whole approach in a modern enterprise array is a world away from that of a unit using standard RAID groups of a single tier fronted by small amounts of array cache.

    • Duncan Epping says

      20 December, 2012 at 10:26

      Thanks for the comment Alex,

      I want to point out that every array has a different implementation. Some will go out and write those zeroes for you, others… well they don’t, they just update their meta-data. I guess it comes down to how smart your array is 🙂

      Also, write-same is not only used during the initial creation of an eager-zero thick disk… but also when a lazy-zero thick or a thin disk needs to expand. You can simply check this by creating a new VM and installing an OS on it. Open up ESXTOP and check the VAAI stats. (Blog post on how to do that later today.)

  10. Jeremy says

    20 December, 2012 at 10:37

    I had an idea recently to thin provision disks but at night run a script that would write 10GB (this could be independently set) of data and then scrub it. This would remove the performance issue during operational hours whilst still maintaining a thin provisioned environment.
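
    Something along these lines inside a Linux guest, for example (path and size are arbitrary):

      dd if=/dev/zero of=/var/tmp/prefill.bin bs=1M count=10240   # force the thin VMDK to allocate/zero roughly 10GB of new blocks
      rm /var/tmp/prefill.bin                                     # the file goes away, but the allocated blocks in the VMDK remain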

    What would you think? Good idea bad idea?

    Of course if your array supported VAAI it wouldn’t matter but for older arrays it might help.

    Jeremy

  11. Jose says

    23 January, 2013 at 18:46

    Hi there,
    Interesting discussion. I have another question.
    I am performing a test on how thin/thick provisioning affects performance, also with and without VAAI activated.
    I realised that once you create, configure and run the VM, VAAI does not do anything. I mean, there is no difference in results (IOPS, throughput and delay) even for a write-intensive profile.
    So, my question is: should VAAI improve the performance of a running VM even when all the management operations on the VM have already been done? Or does it ONLY speed up those operations (cloning, migrating, creating vDisks, …)?
