vSphere 5.0: UNMAP (VAAI feature)

With vSphere 5.0, a brand new primitive called Dead Space Reclamation has been introduced as part of the overall thin provisioning primitive. Dead Space Reclamation, also sometimes referred to as unmap, enables you to reclaim blocks on thin-provisioned LUNs by telling the array that specific blocks are obsolete; and yes, the command used is the SCSI UNMAP command.

Now you might wonder when you would need this, but think about it for a second: what happens when you enable Storage DRS? Indeed, virtual machines might be moved around. When a virtual machine is migrated from a thin-provisioned LUN to a different LUN, you would probably like to reclaim the blocks on the source LUN that the array had originally allocated to this virtual machine, as they are no longer needed there. That is what unmap does. Of course, this applies not only when a virtual machine is Storage vMotioned, but also when a virtual machine or, for instance, a virtual disk is deleted. One thing I need to point out is that this is about unmapping blocks associated with a VMFS volume; if you delete files within a VMDK, those blocks will not be unmapped!

When playing around with this I got a question from one of my colleagues: he had no need to unmap blocks on these thin-provisioned LUNs, so he asked whether you could disable it. Yes, you can, by setting EnableBlockDelete to 0:

esxcli system settings advanced set --int-value 0 --option /VMFS3/EnableBlockDelete
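A quick sketch of checking the setting and turning it back on from the ESXi Shell. As one of the readers notes in the comments, the parameter appears to be hidden from the esxcli "list" output, so querying it with the legacy esxcfg-advcfg command is the way that is known to work:

```shell
# Query the current value (the parameter is hidden from
# "esxcli system settings advanced list", so use esxcfg-advcfg)
esxcfg-advcfg -g /VMFS3/EnableBlockDelete

# Re-enable the primitive later by setting the value back to 1
esxcli system settings advanced set --int-value 1 --option /VMFS3/EnableBlockDelete
```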

The cool thing is that it works not only with net-new VMFS-5 volumes but also with VMFS-3 volumes that have been upgraded to VMFS-5:

  1. Open the command line and go to the folder of the datastore:
    cd /vmfs/volumes/datastore_name
  2. Reclaim a percentage of free capacity on the VMFS5 datastore for the thin-provisioned device by running:
    vmkfstools -y <value>

The value should be between 0 and 100, with 60 being the maximum recommended value. I ran it on a thin-provisioned LUN with 60 as the percentage to reclaim. Unfortunately, I didn't have access to the back-end of the array, so I could not validate whether any disk space was actually reclaimed.

/vmfs/volumes/4ddea74d-5a6eb3bc-f95e-0025b5000217 # vmkfstools -y 60
Attempting to reclaim 60% of free capacity 3.9 TB on VMFS-5 file system 'tm-pod04-sas600-sp-4t'.
Done.
/vmfs/volumes/4ddea74d-5a6eb3bc-f95e-0025b5000217 #
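If you have many datastores, the per-datastore steps above could be wrapped in a rough loop. This is only a sketch: it assumes every directory under /vmfs/volumes is a VMFS-5 datastore you actually want to reclaim on, and keep in mind that vmkfstools -y works by temporarily inflating a balloon file that consumes the given percentage of free space, so be careful running it broadly or on nearly full datastores:

```shell
# Rough sketch: run the reclaim on every datastore. Assumes every entry
# under /vmfs/volumes is a VMFS-5 datastore you want to process.
# Note: vmkfstools -y temporarily fills the given percentage of free
# space with a balloon file before unmapping it.
for ds in /vmfs/volumes/*/ ; do
  cd "$ds" || continue
  vmkfstools -y 60
done
```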


    Comments

    1. says

      Is this advanced parameter meant to be hidden, meaning you can’t query the current value by using esxcli? I noticed I was not able to see this parameter when running the “list” operation but as you pointed out in the article, you can update the value to either 0 or 1.

      The only way I’ve been able to query it is using the legacy esxcfg-advcfg option:
      ~ # esxcfg-advcfg -g /VMFS3/EnableBlockDelete
      Value of EnableBlockDelete is 0

      OR using vim-cmd:

      ~ # vim-cmd hostsvc/advopt/view VMFS3.EnableBlockDelete
      (vim.option.OptionValue) [
      (vim.option.OptionValue) {
      dynamicType = ,
      key = "VMFS3.EnableBlockDelete",
      value = 0,
      }
      ]

Also, you can use the hidden parameter /VMFS3/BlockDeleteThreshold, which allows you to set a pre-defined threshold; by default it's 10% … I'm assuming you know this already :)
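For completeness, querying and changing that hidden threshold should work the same way as the query shown above; a sketch (the parameter name comes from this comment, and the exact value below is a hypothetical example):

```shell
# Query the hidden threshold (reportedly 10% by default)
esxcfg-advcfg -g /VMFS3/BlockDeleteThreshold

# Set it to, say, 25% -- hypothetical value, adjust to taste
esxcfg-advcfg -s 25 /VMFS3/BlockDeleteThreshold
```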

    2. says

      I was just playing with the unmap and an EMC VNX tonight (disclaimer: I am an EMC employee). The question of how to recover if the VM moved popped in my head and 15 seconds later I have my answer.

      thanks man

      nick

    3. Ole Andre Schistad says

      One major piece of this puzzle is reclamation within the VMDK itself.

      It does not really help much that VMFS is able to reclaim its own space, if the VMDKs just keep growing.

Now the supported method for shrinking a VMDK is to run the VM through VMware Converter. I think we can all agree that that is a non-option for medium businesses and up.

      But now that EXT4 (and RHEL 6) supports issuing Discards, it really is time to demand that the VMDK layer begins honoring this standard by unmapping blocks from VMFS.
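For reference, this is what issuing discards looks like from the guest side on a distribution that supports it; a sketch assuming an ext4 filesystem on a hypothetical device /dev/sdb1:

```shell
# Mount ext4 with online discard so deletes issue TRIM/UNMAP down the stack
mount -o discard /dev/sdb1 /mnt/data

# Alternatively, trim free space in one batch (fstrim is part of util-linux)
fstrim -v /mnt/data
```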

      Any chance of seeing that on the roadmap soon?

    4. Patrick says

vSphere cannot natively zero the blocks. Some third-party defragmentation utilities are starting to perform this for you (Diskeeper 2012 comes to mind). Windows Server 2012, at least in its current form, is supposed to do this on its new file system. Considering the demand for the feature, I would be surprised if it weren't a checkbox feature in a given hypervisor within a few years. It's just a matter of when and how well it works (or doesn't).

    5. anon says

Ran into this issue recently. Why isn't this automated? Why the requirement to use vmkfstools? Also, why is the delete operation synchronous vs. asynchronous? Does this work without svmotion, i.e. just deleting a VM from a datastore? Shouldn't it just use the vStorage APIs, tell the SAN to go reclaim this space, and keep going while the storage does the work of reclaiming?

    6. Nabarun Dey says

      Hi Duncan,

      I had a query.
      I do sdelete to zero out the blocks.
      Now if VAAI is enabled and the UNMAP primitive is working will the space which was used by the VMDK be reclaimed by the storage?
      Or do we need to de-allocate the space manually using vmkfstools -K after which the storage can reclaim?
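To illustrate the second option raised here: after zeroing free space in the guest with sdelete, the zeroed blocks can be punched out of a thin VMDK with vmkfstools -K (the VM has to be powered off); the path below is a placeholder:

```shell
# Punch out zeroed blocks from a thin-provisioned VMDK (VM powered off).
# Path is a placeholder -- substitute your own datastore and VM name.
vmkfstools -K /vmfs/volumes/datastore_name/myvm/myvm.vmdk
```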
