Storage DRS is a brand new feature of vSphere 5.0. It has been one of my focus areas for the last 6 months and is probably one of the coolest features of vSphere 5.0. Storage DRS enables you to aggregate datastores into a single object, called a datastore cluster. This new object is what you will be managing from now on. Storage DRS enables smart placement of virtual machines based on utilized disk space, latency, and LUN performance capabilities. In other words, when you create a new virtual machine you will select a datastore cluster instead of a datastore, and Storage DRS will place the virtual machine on one of the datastores in that datastore cluster. This is where the strength of Storage DRS lies: reducing the operational effort associated with provisioning virtual machines…
But that’s not all there is; Storage DRS is a lot more than just initial placement… Let’s sum up the core functionality of Storage DRS:
- Initial Placement
- Migration Recommendations (Manual / Fully Automated)
- Affinity Rules
- Maintenance Mode
These, in my opinion, are the four core pieces of functionality that Storage DRS provides. Initial placement, as stated, will reduce the amount of operational effort required to provision virtual machines. Storage DRS will figure out which datastore the virtual machine should be placed on; there is no need anymore to manually monitor each datastore and figure out which one has the most available disk space and relatively low latency. On top of that, SDRS also provides Migration Recommendations if and when thresholds are exceeded; it can either generate them (manual mode) or generate and apply them (fully automated mode). These thresholds are utilized disk space (80%) and latency (15ms). This helps prevent bottlenecks in terms of disk space and hot spots in terms of latency.
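To make that a bit more concrete, here is a minimal sketch of what such a placement decision boils down to. This is not the actual SDRS algorithm (SDRS models I/O load, growth and much more); the datastore names and numbers are made up purely for illustration:

```python
# Simplified sketch of threshold-based initial placement; NOT the real SDRS
# algorithm. Default thresholds: 80% utilized disk space and 15ms latency.

SPACE_THRESHOLD = 0.80    # utilized disk space threshold
LATENCY_THRESHOLD = 15.0  # latency threshold in milliseconds

# Hypothetical datastores in a datastore cluster (example numbers only)
datastores = [
    {"name": "DS01", "capacity_gb": 2048, "used_gb": 1500, "latency_ms": 9.0},
    {"name": "DS02", "capacity_gb": 2048, "used_gb": 1100, "latency_ms": 18.0},
    {"name": "DS03", "capacity_gb": 2048, "used_gb": 900,  "latency_ms": 6.0},
]

def utilization(ds, extra_gb=0):
    """Utilized disk space as a fraction, optionally after adding a new VM."""
    return (ds["used_gb"] + extra_gb) / ds["capacity_gb"]

def place_vm(datastores, vm_size_gb):
    """Prefer datastores that stay below both thresholds after placement,
    then pick the one that ends up least utilized."""
    candidates = [
        ds for ds in datastores
        if utilization(ds, vm_size_gb) < SPACE_THRESHOLD
        and ds["latency_ms"] < LATENCY_THRESHOLD
    ]
    if not candidates:
        return None  # nothing satisfies the thresholds -> manual intervention
    return min(candidates, key=lambda ds: utilization(ds, vm_size_gb))

print(place_vm(datastores, vm_size_gb=100)["name"])  # -> DS03
```

In this toy example DS02 is excluded because its latency exceeds 15ms, and DS03 wins because it ends up with the lowest utilization after the 100GB VM is added.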
Affinity Rules and Maintenance Mode are very similar to what DRS offers today. You have the ability to keep virtual machines and virtual disks apart with Affinity Rules, or to keep them together. With Maintenance Mode it will be very easy to migrate to new LUNs or to do planned maintenance on a volume: a couple of clicks and all VMs will be moved off.
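Conceptually, an anti-affinity rule is simply a constraint the placement has to satisfy. A toy sketch (again, not how SDRS actually implements it; the placement mapping is made-up example data):

```python
# Toy check for a VMDK anti-affinity rule: all disks of one VM must end up
# on different datastores. The placement mapping is hypothetical example data.
def violates_vmdk_anti_affinity(placement):
    """placement maps VMDK name -> datastore name for a single VM."""
    targets = list(placement.values())
    return len(targets) != len(set(targets))

print(violates_vmdk_anti_affinity({"disk1.vmdk": "DS01", "disk2.vmdk": "DS01"}))  # True
print(violates_vmdk_anti_affinity({"disk1.vmdk": "DS01", "disk2.vmdk": "DS02"}))  # False
```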
Once again I would like to stress that although the Migration Recommendations (especially in Fully Automated mode) sound really sexy, and they are, it will more than likely be the Initial Placement recommendations where you will benefit the most. More technical information will follow soon here and on frankdenneman.nl.
Parikshith Reddy says
Great stuff Duncan! Keep them coming… Enjoying these posts on vSphere 5 a lot.
Gustavo ossandon says
Will it work with VMware Cloud Director?
Magnus says
Hi Duncan,
do you know how the VMware HA/FDM slot sizes are calculated for the different admission control policies in vSphere 5?
Duncan Epping says
I’ll do an article about this soon.
Rickard Nobel says
Is there a need for specific support from the SAN to be able to get its characteristics, that is, to have it report the RAID level and similar?
Duncan Epping says
No, SDRS works independently of that. It figures out the capabilities of the LUN in terms of IOps/latency and balances based on that info.
Jonathan Meier says
Duncan,
How does SDRS deal with VM density per LUN? Were there changes to the LUN locking mechanism? I have not been able to find much on the VMFS changes outside of GPT and block size. By the way, I finished the new Clustering Deep Dive book last night and I thought it was excellent.
AB says
Will this affect deduplication? We have datastores dedicated to specific OSes so that deduplication can save some space.
Duncan says
@AB: Yes it will. See this article: http://www.yellow-bricks.com/2011/07/15/storage-drs-interoperability/
Ed says
I’m finding that storage DRS is hurting more than helping in my environment. I’m using NetApp de-duped volumes and storage DRS is making recommendations not on the de-duped space but on the sum of used + de-duped. This means that in some cases the datastore cluster has NO free space even though there are many terabytes available in the cluster. Initial placement of VMs, storage vMotions, and creation of new VMDKs all fail even though there is plenty of free space.
This is with 5.0 Update 1 on the hosts and vCenter 5.0 Update 1a.
My NetApp rep is suggesting that I turn Storage DRS off completely, not just set it to manual, so I can’t even use it for initial placement of VMs. Not good at all.
Duncan Epping says
Not sure why your NetApp rep would suggest turning it off instead of Manual. It is fully supported, and I definitely would recommend setting it to manual in your situation. I would like to know his justification for this recommendation.
Ed says
The problem I’ve seen is even with it set to manual, the datastore cluster can’t find a datastore with space, even though there is free space in the datastores within the cluster.
It may be fully supported, but it’s not working as I think it should. The free space it seems to be using is not the same as the free space in the datastore.
Duncan says
I suggest filing a bug / support request for this, Ed. If it cannot find free space when there is sufficient free space available, then something is wrong. (Pre-dedupe free space needs to be equal to or more than the required free space for the VM.)
Ed says
I’m confused by the comment that “pre-dedupe free space needs to be equal to or more than the required free space for the VM”.
Let’s say I have a 2TB datastore with a 50% de-dupe ratio (this ratio is common for virtual machines). I over-provision this to 3TB, de-dupe it down to 1.5TB, and therefore have 500GB free. Should I not be able to create another 50GB VM on this datastore?
Why is there a pre-dedupe space requirement?
I can put in a ticket but your last comment suggests that it’s working as it was designed to work.
Duncan Epping says
Yes, you should be able to create that VM. What I mean is that the newly created VM will need to fit “pre-dedupe”.
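In other words: the datastore reports its free space after the array has deduped, and the new VM has to fit in that reported free space at its full, pre-dedupe size. A rough illustration with the numbers from your example (not the actual internal calculation):

```python
# Rough illustration using Ed's numbers; not the actual SDRS internals.
capacity_gb       = 2048   # 2TB datastore
used_after_dedupe = 1536   # 3TB provisioned, deduped down to ~1.5TB
free_gb           = capacity_gb - used_after_dedupe  # ~512 GB reported free

vm_size_gb = 50            # new VM counted at its full size, no dedupe credit

print(vm_size_gb <= free_gb)  # True -> the 50GB VM should fit
```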
Ed says
It looks like what I found is a bug: SDRS is broken in the current release. KB 2017605 documents something similar, and my issue might have the same underlying bug even though my VMDKs are eager-zeroed thick, not thin provisioned.
Ralf says
I’ve seen some strange sDRS initial placement recommendations lately (vCenter 5.1b). My sDRS usage threshold is 88% (manual sDRS). I have 27 connected datastores; some still have 400 GB of free space, some only 250 GB. sDRS always recommends putting a new VM on the datastore that is nearly full. Example: new VM_1 -> sDRS recommends placing it on the DS that is at 87% usage before and will be at 91% after placement. Next VM_2: sDRS recommends the same DS that is already at 91% and therefore already exceeds the threshold; after placement it would be at 99.3%. There are other datastores with more free space and even better IO values (though I’ve disabled IO metrics now).
There is nothing in the KB that describes this problem and no solution from support yet.