vSphere 5.0: UNMAP (VAAI feature)

With vSphere 5.0 a brand new primitive has been introduced called Dead Space Reclamation, part of the overall thin provisioning primitive. Dead Space Reclamation is also sometimes referred to as UNMAP, and it enables you to reclaim blocks on thin-provisioned LUNs by telling the array that specific blocks are obsolete, and yes, the command used is the SCSI UNMAP command.

Now you might wonder when you would need this, but think about it for a second: what happens when you enable Storage DRS? Indeed, virtual machines might be moved around. When a virtual machine is migrated from a thin-provisioned LUN to a different LUN, you probably would like to reclaim the blocks that were originally allocated by the array to this volume, as they are no longer needed on the source LUN. That is what UNMAP does. Of course this applies not only when a virtual machine is Storage vMotioned, but also when a virtual machine or, for instance, a virtual disk is deleted. One thing I need to point out is that this is about unmapping blocks associated with a VMFS volume; if you delete files within a VMDK, those blocks will not be unmapped!
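A quick way to check whether a device actually advertises support for this primitive is to look at the per-device VAAI status with esxcli; the Delete status is the one that corresponds to UNMAP. Just a sketch, and the device identifier is a placeholder for your own naa ID:

esxcli storage core device vaai status get
esxcli storage core device vaai status get -d <naa_device_id>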

When playing around with this, I had a question from one of my colleagues. He did not have the need to unmap blocks from these thin-provisioned LUNs, so he asked if you could disable it, and yes you can:

esxcli system settings advanced set --int-value 0 --option /VMFS3/EnableBlockDelete
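To verify what the option is currently set to, or to switch the behavior back on later, the same esxcli namespace can be used; as far as I am aware 0 disables the automated block delete and 1 enables it again:

esxcli system settings advanced list --option /VMFS3/EnableBlockDelete
esxcli system settings advanced set --int-value 1 --option /VMFS3/EnableBlockDelete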

The cool thing is that it works not only with net-new VMFS-5 volumes but also with VMFS-3 volumes that have been upgraded to VMFS-5:

  1. Open the command line and go to the folder of the datastore:
    cd /vmfs/volumes/datastore_name
  2. Reclaim a percentage of free capacity on the VMFS-5 datastore for the thin-provisioned device by running:
    vmkfstools -y <value>

The value should be between 0 and 100, with 60 being the maximum recommended value. I ran it on a thin-provisioned LUN with 60% as the percentage to reclaim. Unfortunately I didn't have access to the back-end of the array, so I could not validate whether any disk space was actually reclaimed.

/vmfs/volumes/4ddea74d-5a6eb3bc-f95e-0025b5000217 # vmkfstools -y 60
Attempting to reclaim 60% of free capacity 3.9 TB on VMFS-5 file system 'tm-pod04-sas600-sp-4t'.
Done.
/vmfs/volumes/4ddea74d-5a6eb3bc-f95e-0025b5000217 #
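By the way, if you are not sure whether a given volume is VMFS-5 (net-new or upgraded) before running the reclaim, vmkfstools can query the file system and report the version and capacity for you. The datastore name below is of course just a placeholder:

vmkfstools -Ph /vmfs/volumes/datastore_name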

Storage DRS interoperability

I was asked about this a couple of times over the last few days, so I figured it might be an interesting topic. This is described in our book as well, in the Datastore Cluster chapter, but I decided to rewrite it and put some of it into a table to make it easier to digest. Let's start off with the table and then explain the why/where/what… Keep in mind that this is my opinion and not necessarily the best practice or recommendation of your storage vendor. When you implement Storage DRS, make sure to validate this against their recommendations. I have marked the areas where I feel caution needs to be taken with an asterisk (*).

Capability           Mode         Space     I/O Metric
Thin Provisioning    Manual       Yes (*)   Yes
Deduplication        Manual       Yes (*)   Yes
Replication          Manual (*)   Yes       Yes
Auto-tiering         Manual       Yes       No (*)

Yes, you are reading that correctly: Storage DRS can be enabled with all of them, and even with the I/O metric enabled, except for auto-tiering. Although I said “Manual” for all of them, I believe that in some of these cases Fully Automated mode would be perfectly fine. As it will of course depend on the environment, I would suggest starting out in Manual mode if any of these four storage capabilities are used, so you can see what the impact is after applying a recommendation.

First of all, “Manual Mode”… what is it? Manual Mode basically means that Storage DRS will make recommendations when the configured thresholds for latency or space utilization have been exceeded. It will also provide recommendations for placement during the provisioning process of a virtual machine or a virtual disk. In other words, when setting Storage DRS to manual you will still benefit from it, as it will monitor your environment for you and, based on that, recommend where to place or migrate virtual disks.

In the case of thin provisioning I would like to expand on this a bit. Before migrating virtual machines, I would recommend making sure that the “dead space” left behind on the source datastore after the migration can be reclaimed through the UNMAP primitive that is part of VAAI.

Deduplication is a difficult one. The question is: will the deduplication process be as efficient after the migration as it was before? Will it be able to deduplicate the same amount of data? There is always a chance that this is not the case… But then again, do you really care all that much about that when you are running out of disk space on your datastore or are exceeding your latency threshold? Those are very valid reasons to move a virtual disk, as both can lead to degradation of service.

In an environment where replication is used, care should be taken when balancing recommendations are applied. The reason for this is that the full virtual disk that is migrated will need to be replicated again after the migration. This temporarily leads to an “unprotected state”, and as such it is recommended to migrate replicated (protected) virtual disks only during scheduled maintenance windows.

Auto-tiering arrays have been a hot debate lately. Not many seem to agree with my stance, but up until today no one has managed to give me a great argument or explain to me exactly why I would not want to enable Storage DRS on auto-tiering solutions. Yes, I fully understand that when I move a virtual machine from datastore A to datastore B, the virtual machine will more than likely end up on relatively slow storage and the auto-tiering solution will need to optimize the placement again. However, when you are running out of disk space, what would you prefer: downtime or a temporary slowdown? In the case of I/O balancing this is different, and in a follow-up post I will explain why this is not supported.

** This article is based on vSphere 5.0 information **

Punch Zeros!

I was just playing around with vSphere 5.0 and noticed something cool which I hadn’t spotted before. I logged in to the ESXi Shell and typed a command I used a lot in the past, vmkfstools, and noticed an option called -K. (I’ve just been informed that 4.1 has it as well; I never noticed it, though…)

-K --punchzero
This option deallocates all zeroed-out blocks and leaves only those blocks that were allocated previously and contain valid data. The resulting virtual disk is in thin format.

This is one of those options which many have asked for, as re-“thinning” a disk would normally require a Storage vMotion. Unfortunately it currently only works when the virtual machine is powered off, but I guess that is just the next hurdle that needs to be taken.
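For those who want to give it a spin, a minimal example; the datastore, folder and VMDK names below are just placeholders for your own environment, and as mentioned the virtual machine that owns the disk needs to be powered off first:

vmkfstools -K /vmfs/volumes/datastore_name/vm_name/vm_name.vmdk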

vSphere 5.0: Profile-Driven Storage, what is it good for?

By now most of you have heard about this new feature called Profile-Driven Storage that will be introduced with vSphere 5.0, but what is it good for? Some of you, depending on the size of the environment, currently have a nice long operational procedure to deploy virtual machines. The procedure usually consists of gathering information about the requirements of the virtual machine’s disks, finding the right datastore to meet these requirements, deploying the virtual machine, and occasionally checking whether the virtual machine’s disks are still placed correctly. This is what Profile-Driven Storage aims to solve.

Profile-Driven Storage, referred to in the vCenter UI as VM Storage Profiles, decreases the amount of administration required to properly deploy virtual machines by allowing for the creation of profiles. These profiles typically list the storage requirements and can be linked to a virtual machine. I know it all sounds a bit vague, so let me visualize it:

In this scenario a virtual machine requires “Gold Storage”; let’s just assume for now that this means RAID-10 and replicated. By linking the profile to this virtual machine it is possible to validate whether the virtual machine is actually located on the right tier of storage. This profile can of course be linked to a virtual machine or virtual disk after it has been provisioned, but even more importantly it can be used during the provisioning of the virtual machine to ensure the user picks a datastore (cluster) which is compatible with the requirements! Just check the following screenshot of what that would look like:

Now you might wonder where this storage tier comes from. It is a VM Storage Profile containing storage capabilities provided by:

  • VASA aka vSphere Storage APIs – Storage Awareness
  • User defined capabilities

User-defined capabilities are fairly simple to explain: the profile you create (gold / silver / bronze) will be linked to a user-defined “tag” you define on a datastore. For instance, you could tag a datastore as “RAID-10”. When would you do this? Well, typically when your storage vendor doesn’t offer a Storage Provider for VASA (yet). That takes us to the second method of selecting storage capabilities for your VM Storage Profile: VASA. VASA is a new “API” which enables you to see the characteristics of a datastore through vCenter. With characteristics I am referring to things like RAID level, deduplication, replication, etc. You know what, maybe a step-by-step guide makes it clearer:

  • Go to VM Storage Profiles
  • Create a VM Storage Profile
  • Provide a Name
  • Select the correct Capabilities
  • Finish the creation
  • Create a new VM and select the correct VM Storage Profile; note that only one datastore is compatible
  • After creation you can easily check whether it is compliant or not by going to the VM’s Summary tab

A couple of simple initial steps, as you can clearly see, but a huge help when provisioning virtual machines and when validating storage / VM requirements!