VMFS-5 LUN Sizing

I received a question on my old VMFS LUN Sizing article from back in 2009… The question was how valid the formula and values used there still are in today’s environment, especially considering VMFS-5 is around the corner. It is a very valid question, so I decided to take my previous article and rewrite it. One thing to keep in mind is that I tried to make it usable for generic consumption; you will still need to figure out some things yourself, as I simply don’t have all the information needed to make it cookie-cutter, but I guess this is as close as it can get.

Parameters:

MinSize = 1.2GB
MaxVMs = 40
SlackSpace = 20%
AvgSizeVMDK = 30GB
AvgDisksVMs = 2
AvgMemSize = 3GB

Before I drop the formula I want to explain the MaxVMs parameter. You will need to figure out how many IOps your LUN can handle first; for a hint, check this article. Besides IOps you will also need to take burst headroom into account, and of course the RTO defined for this environment:

((IOpsPerLUN – 20%) / AVGIOpsPerVM) ≤ (MaxVMsWithinRTO)

Keep in mind that the article I pointed out just a second ago is geared towards worst-case numbers, so no cache or other benefits. Secondly, I subtracted 20%, which is room for bursting. This is by no means a best practice, and the number will need to be tweaked based on the size of your LUN and the total amount of IOps your LUN can handle. For instance, with 8 SATA spindles that 20% might only be 80 IOps, depending on the RAID level used; with SAS it could be 280 IOps with just 8 spindles, and that is a huge difference. Anyway, I leave that up to you to decide, but I used 20% headroom for both disk space (for snapshots and the memory overhead swap files) and performance, just to keep it simple. The second part of this one is MaxVMsWithinRTO. In short: make sure that you can recover the number of VMs on the datastore within the defined recovery time objective (RTO). You don’t want to find yourself in a situation where the RTO is 4 hours but the total amount of time needed for the restore is 24 hours.
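To make that part a bit more tangible, here is a minimal sketch of the calculation in Python. All input numbers below (the IOps per LUN, the average IOps per VM and the RTO-based cap) are purely hypothetical examples and will need to be replaced with your own figures:

# Minimal sketch of the IOps part of the sizing exercise.
# All input values below are hypothetical and environment specific.

iops_per_lun = 2000        # worst-case IOps the LUN can deliver (assumption)
burst_headroom = 0.20      # 20% headroom for bursting, as used above
avg_iops_per_vm = 45       # average IOps per VM (assumption)
max_vms_within_rto = 50    # max VMs restorable within your RTO (assumption)

# ((IOpsPerLUN - 20%) / AvgIOpsPerVM)
iops_based_limit = int(iops_per_lun * (1 - burst_headroom) / avg_iops_per_vm)

# The lower of the two limits determines MaxVMs for this LUN
max_vms = min(iops_based_limit, max_vms_within_rto)
print(f"IOps-based limit: {iops_based_limit} VMs")
print(f"RTO-based limit:  {max_vms_within_rto} VMs")
print(f"MaxVMs:           {max_vms} VMs")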

Formula, aaahhh yes, here we go. Note that I did not take the traditional constraints around “SCSI reservation conflicts” into account, as with VMFS-5 and the VAAI SCSI Locking Offload (ATS) these are lifted. If you have an array which doesn’t support the ATS primitive, make sure you take this into account as well. Although the SCSI locking mechanism has been improved over the last years, it could still limit you when you have a lot of power-on events, vMotion events etc.

(((MaxVMs * AvgDisksVMs) * AvgSizeVMDK) + ( MaxVMs * AvgMemSize)) + SlackSpace ≥ MinSize

Let’s use the numbers defined in the parameters above and do the math:

(((40 * 2) * 30GB) + (40 * 3GB)) + 20% = (2400GB + 120GB) * 1.2 = 3024 GB
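If you prefer to let a script do the math, here is a minimal sketch of the same formula in Python, using exactly the parameters listed at the top of this article:

# Minimal sketch of the datastore sizing formula, using the parameters
# defined at the top of the article (GB is simply treated as a unit).

max_vms = 40           # MaxVMs
avg_disks_per_vm = 2   # AvgDisksVMs
avg_vmdk_size_gb = 30  # AvgSizeVMDK
avg_mem_size_gb = 3    # AvgMemSize (swap file per VM)
slack_space = 0.20     # SlackSpace: 20% for snapshots and overhead

vmdk_capacity_gb = max_vms * avg_disks_per_vm * avg_vmdk_size_gb  # 2400 GB
swap_capacity_gb = max_vms * avg_mem_size_gb                      # 120 GB

datastore_size_gb = (vmdk_capacity_gb + swap_capacity_gb) * (1 + slack_space)
print(f"Required datastore size: {datastore_size_gb:.0f} GB")     # 3024 GB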

I hope this helps with your storage design decisions. One thing to keep in mind, of course, is that most storage arrays have optimal configurations for LUN sizes in terms of performance. Depending on your IOps requirements you might want to make sure that these align.

What’s new?

I had a lot of trouble finding the vSphere 5.0 What’s New whitepapers, so I figured I would list all of them, as I am probably not the only one finding it challenging to track them all down. These are useful to quickly scan what has been introduced for a specific category. I would recommend reading them, as it will give you a better understanding of what is coming up!

vSphere 5.0: Storage initiatives

Storage has been my primary focus for the 5.0 launch. The question often asked when talking about the separate components is how it all fits together. Let’s first list some of the new or enhanced features:

  • VMFS-5
  • vSphere Storage APIs – Array Integration aka VAAI
  • vSphere Storage APIs – Storage Awareness aka VASA
  • Profile-Driven Storage (VM Storage Profiles in the GUI)
  • Storage I/O Control
  • Storage DRS

I wrote separate articles about all of these features and hopefully you have read them and already see the big picture. If you don’t, then this is a good opportunity to read them, or head over to the vSphere Storage Blog for more details on some of these. I guess the best way to explain it is by using an example of what life could be like when using all of these new or enhanced features compared to what it used to be like:

The Old Way: Mr Admin is managing a large environment and currently has 300 LUNs of 500GB each, divided across three 8-host clusters. He is maintaining a massive spreadsheet with storage characteristics and runs scripts to validate that virtual machines are placed on the correct tier of storage. He is leveraging SIOC to avoid the noisy neighbor problem and leveraging the VAAI primitives to offload some of the tasks to the array. Still, he spends a lot of time waiting, monitoring, and managing virtual machines and datastores.

vSphere 5.0: Mr Admin is managing a large environment and currently has 60 thin-provisioned 2.5TB LUNs presented to a single cluster. Mr Admin defines several storage tiers using VM Storage Profiles, detailing the storage characteristics provided through VASA. Per tier, a Datastore Cluster is created based on the information provided through VASA. Datastore Clusters form the basis of Storage DRS, and Storage DRS will be responsible for initial placement and for preventing both I/O and disk space bottlenecks in your environment. As Storage I/O Control is automatically enabled when SDRS I/O balancing is enabled, the noisy neighbor problem will also be eliminated. When provisioning a new virtual machine, Mr Admin simply picks the appropriate VM Storage Profile and selects the compliant Datastore Cluster. If Storage DRS moves things around, the “Dead Space Reclamation” feature of VAAI is used to unmap the blocks from the source datastore so that they can be re-used if and when needed.

No more spreadsheets, no more extensive monitoring of disk space / latency, no more manual validation of virtual machine placement… It is all about ease of management, reducing operational effort, and offloading tasks to vCenter or even your storage array!

vSphere 5.0: UNMAP (VAAI feature)

With vSphere 5.0 a brand new primitive has been introduced called Dead Space Reclamation, part of the overall thin provisioning primitive. Dead Space Reclamation is also sometimes referred to as unmap, and it enables you to reclaim blocks on thin-provisioned LUNs by telling the array that specific blocks are obsolete; and yes, the command used is the SCSI UNMAP command.

Now you might wonder when you would need this, but think about it for a second… what happens when you enable Storage DRS? Indeed, virtual machines might be moved around. When a virtual machine is migrated from a thin-provisioned LUN to a different LUN, you probably would like to reclaim the blocks that were originally allocated by the array to this volume, as they are no longer needed on the source LUN. That is what unmap does. Of course this applies not only when a virtual machine is Storage vMotioned, but also when a virtual machine or, for instance, a virtual disk is deleted. One thing I need to point out is that this is about unmapping blocks associated with a VMFS volume; if you delete files within a VMDK, those blocks will not be unmapped!

When playing around with this I got a question from one of my colleagues. He did not have the need to unmap blocks from these thin-provisioned LUNs, so he asked if you could disable it, and yes you can:

esxcli system settings advanced set --int-value 0 --option /VMFS3/EnableBlockDelete

The cool thing is that this works not only with net-new VMFS-5 volumes, but also with VMFS-3 volumes that have been upgraded to VMFS-5:

  1. Open the command line and go to the folder of the datastore:
    cd /vmfs/volumes/datastore_name
  2. Reclaim a percentage of free capacity on the VMFS-5 datastore for the thin-provisioned device by running:
    vmkfstools -y <value>

The value should be between 0 and 100, with 60 being the maximum recommended value. I ran it on a thin-provisioned LUN with 60% as the percentage to reclaim. Unfortunately I didn’t have access to the back end of the array, so I could not validate whether any disk space was actually reclaimed.

/vmfs/volumes/4ddea74d-5a6eb3bc-f95e-0025b5000217 # vmkfstools -y 60
Attempting to reclaim 60% of free capacity 3.9 TB on VMFS-5 file system 'tm-pod04-sas600-sp-4t'.
Done.
/vmfs/volumes/4ddea74d-5a6eb3bc-f95e-0025b5000217 #

Storage DRS interoperability

I was asked about this a couple of times over the last few days, so I figured it might be an interesting topic. This is described in our book as well, in the Datastore Cluster chapter, but I decided to rewrite it and put some of it into a table to make it easier to digest. Let’s start off with the table and then explain the why/where/what… Keep in mind that this is my opinion and not necessarily the best practice or recommendation of your storage vendor. When you implement Storage DRS, make sure to validate this against their recommendations. I have marked the areas where I feel caution needs to be taken with an asterisk (*).

Capability            Recommended Mode   Space Balancing   I/O Metric Balancing
Thin Provisioning     Manual             Yes (*)           Yes
Deduplication         Manual             Yes (*)           Yes
Replication           Manual (*)         Yes               Yes
Auto-tiering          Manual             Yes               No (*)

Yes, you are reading that correctly: Storage DRS can be enabled with all of them, and even with the I/O metric enabled, except for auto-tiering. Although I listed “Manual” for all of them, I believe that in some of these cases Fully Automated mode would be perfectly fine. As it will of course depend on the environment, I would suggest starting out in Manual mode if any of these four storage capabilities are used, to see what the impact is after applying a recommendation.

First of all, “Manual Mode”… What is it? Manual mode basically means that Storage DRS will make recommendations when the configured thresholds for latency or space utilization have been exceeded. It will also provide recommendations for placement during the provisioning process of a virtual machine or a virtual disk. In other words, when setting Storage DRS to manual you will still benefit from it, as it will monitor your environment for you and, based on that, recommend where to place or migrate virtual disks.

In the case of thin provisioning I would like to expand a bit. Before migrating virtual machines, I would recommend making sure that the “dead space” left behind on the source datastore after the migration can be reclaimed through the unmap primitive that is part of VAAI.

Deduplication is a difficult one. The question is: will the deduplication process be as efficient after the migration as it was before the migration? Will it be able to deduplicate the same amount of data? There is always a chance that this is not the case… But then again, do you really care all that much about it when you are running out of disk space on your datastore or are exceeding your latency threshold? Those are very valid reasons to move a virtual disk, as both can lead to degradation of service.

In an environment where replication is used, care should be taken when balancing recommendations are applied. The reason for this is that the full virtual disk that is migrated will need to be replicated again after the migration. This temporarily leads to an “unprotected state”, and as such it is recommended to only migrate replicated virtual disks during scheduled maintenance windows.

Auto-tiering arrays have been a hot debate lately. Not many seem to agree with my stance, but up until today no one has managed to give me a great argument or explain to me exactly why I would not want to enable Storage DRS on auto-tiering solutions. Yes, I fully understand that when I move a virtual machine from datastore A to datastore B, the virtual machine will more than likely end up on relatively slow storage and the auto-tiering solution will need to optimize the placement again. However, when you are running out of disk space, what would you prefer: downtime or a temporary slowdown? In the case of I/O balancing this is different, and in a follow-up post I will explain why that is not supported.

** This article is based on vSphere 5.0 information **