
Yellow Bricks

by Duncan Epping


storage drs

What is new for Storage DRS in vSphere 6.0?

Duncan Epping · Feb 9, 2015 ·

Storage DRS must be one of the most under-appreciated features of vSphere. For whatever reason it doesn’t get the airtime it deserves, not even from VMware folks, which is a shame if you ask me. I was reading the What’s New material for vSphere 6.0 and I noticed that “What is new for Storage DRS in vSphere 6.0” was completely missing. I figured I would do a quick write-up of what has been improved and introduced for SDRS in 6.0, as some of the enhancements are quite significant! Let’s start with a list and then look at these enhancements in more detail:

  • Deep integration with vSphere APIs for Storage Awareness (VASA)
  • Site Recovery Manager integration
  • vSphere Replication integration
  • Integration with Storage Policy Based Management

Let’s start with the top one, deep integration with vSphere APIs for Storage Awareness (VASA), as that is the biggest improvement if you ask me. What the integration with VASA results in is fairly straightforward: when the VASA plugin for your storage system is configured, Storage DRS will understand which capabilities are enabled on your storage system and, more specifically, on your datastores. For example: when using Storage DRS on a deduplicated datastore in previous versions, it could happen that a migration initiated by Storage DRS had a negative impact on the total available capacity on your storage system. This would be caused by the deduplication ratio being lower on the destination than it was on the source. Not a very pleasant surprise, you can imagine. The same goes for VMs that are snapshotted from a storage system point of view, or datastores that are replicated… you can imagine there would be an impact when moving a VM around in those scenarios. With 6.0 Storage DRS is capable of understanding:

  • Array-based thin-provisioning
  • Array-based deduplication
  • Array-based auto-tiering
  • Array-based snapshot
  • Array-based replication

I guess you get the drill: SDRS is now fully capable of understanding the array capabilities and will take these capabilities into consideration when making balancing decisions. For instance, in the case of replication, when replication is enabled and your datastore is part of a consistency group, SDRS will ensure that the VM is only migrated to a datastore which belongs to the same consistency group! For deduplication it works the other way around: in this case SDRS will be informed about which datastores belong to which deduplication domains, and when datastores belong to the same domain it will know that moving between those datastores will have little to no effect on capacity. Depending on the level of detail the storage vendor provides through VASA, SDRS will even be aware of how efficient the deduplication process is for a given datastore. (This is not a VASA requirement but rather a recommendation, so results may vary per vendor implementation.) Auto-tiering is also an interesting one, as it comes up regularly. With previous versions of SDRS it could happen that SDRS was moving VMs while the auto-tiering array was just promoting or demoting blocks to a higher or lower tier. As you can imagine that is not a desired scenario, and with the VASA integration this can be prevented from happening.
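To make this a bit more tangible, here is a minimal sketch of the kind of filtering logic described above, in plain Python. To be clear: the data model (a Datastore class with consistency_group and dedupe_domain fields) is invented purely for illustration; the real logic lives inside vCenter and is not exposed like this.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical model of what SDRS learns about a datastore through VASA;
# the field names are invented for this illustration.
@dataclass
class Datastore:
    name: str
    free_gb: int
    consistency_group: Optional[str] = None  # array-based replication
    dedupe_domain: Optional[str] = None      # array-based deduplication

def placement_candidates(source: Datastore,
                         cluster: List[Datastore]) -> List[Datastore]:
    """Filter and order destinations the way the text describes."""
    candidates = [ds for ds in cluster if ds is not source]
    # Replication: never leave the consistency group.
    if source.consistency_group is not None:
        candidates = [ds for ds in candidates
                      if ds.consistency_group == source.consistency_group]
    # Deduplication: prefer the same dedupe domain, where a move has
    # little to no effect on consumed capacity (False sorts before True).
    candidates.sort(key=lambda ds: ds.dedupe_domain != source.dedupe_domain)
    return candidates

ds_a = Datastore("ds-a", 500, consistency_group="cg-1", dedupe_domain="pool-1")
ds_b = Datastore("ds-b", 900, consistency_group="cg-1", dedupe_domain="pool-1")
ds_c = Datastore("ds-c", 950, consistency_group="cg-2", dedupe_domain="pool-2")
print([d.name for d in placement_candidates(ds_a, [ds_a, ds_b, ds_c])])  # ['ds-b']
```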

The second big thing is Site Recovery Manager and vSphere Replication integration. I already mentioned the consistency group awareness; of course this is also part of the SRM integration, and when VMs are protected by SRM, SDRS will make sure that those VMs are only moved within their consistency group. If for whatever reason there is no way to move within a consistency group, then as a second option SDRS can move VMs between datastores which are part of the same SRM Protection Group. Note that this could have an impact on your workloads though! SDRS will of course never automatically move a VM from a replicated to a non-replicated datastore. In fact, there is a strict hierarchy of which types of moves can be recommended:

  1. Moves within the same consistency group
  2. Moves across consistency groups, but within the same protection group
  3. Moves across protection groups
  4. Moves from a replicated datastore to non-replicated

Note that SDRS will try option 1 first; if that fails it will try option 2, if that fails option 3, and so on. Under no circumstances is a recommendation in category 2, 3 or 4 executed automatically. You will receive a warning, after which you can manually apply the recommendation. This is done to ensure the administrator has full control of and full awareness of the migration, and can apply it during maintenance or during non-peak hours.
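Expressed as a small sketch, the ordering looks something like the following. Again, the field names (consistency_group, protection_group, replicated) are made up for the example; this is an illustration of the hierarchy, not the SDRS source code.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

# Invented fields for illustration only.
@dataclass
class Candidate:
    name: str
    consistency_group: Optional[str]
    protection_group: Optional[str]
    replicated: bool

def move_category(src: Candidate, dst: Candidate) -> int:
    """Rank a move according to the four-tier hierarchy above."""
    if dst.consistency_group and dst.consistency_group == src.consistency_group:
        return 1  # within the same consistency group
    if dst.protection_group and dst.protection_group == src.protection_group:
        return 2  # across consistency groups, same SRM protection group
    if dst.replicated:
        return 3  # across protection groups
    return 4      # replicated -> non-replicated

def recommend(src: Candidate, candidates: List[Candidate]) -> Tuple[str, int, bool]:
    best = min(candidates, key=lambda dst: move_category(src, dst))
    category = move_category(src, best)
    # Only category 1 is ever applied automatically; categories 2-4
    # surface as a warning the administrator has to apply manually.
    return best.name, category, category == 1
```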

With regard to vSphere Replication a lot has changed as well. Until now there was no support for vSphere Replication enabled VMs being part of an SDRS datastore cluster, but with 6.0 it is fully supported. As of 6.0, Storage DRS will recognize replica VMs (which are replicated using vSphere Replication), and when thresholds have been exceeded SDRS will query vSphere Replication and be able to migrate replicas to solve the resource constraint.

Up next, the integration with Storage Policy Based Management. In the past, when you had different tiers of datastores as part of the same Datastore Cluster, SDRS could potentially move a VM which was assigned the policy “gold” to a datastore which was associated with a “silver” policy. With vSphere 6.0, SDRS is aware of storage policies in SPBM and will only move or place VMs on a datastore that can satisfy the VM’s storage policy.
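Conceptually it boils down to a compatibility filter like the toy example below. The policy names and the capability sets are made up; in reality this comes from SPBM compatibility checks against what each datastore advertises, not from a static dictionary.

```python
# Toy illustration of policy-aware placement: a datastore is a placement
# candidate only if it can satisfy the VM's assigned storage policy.
datastore_policies = {
    "ds-ssd-01": {"gold", "silver"},   # hypothetical names and policies
    "ds-ssd-02": {"gold", "silver"},
    "ds-sata-01": {"silver"},
}

def compatible_datastores(vm_policy: str) -> list:
    return [ds for ds, satisfied in datastore_policies.items()
            if vm_policy in satisfied]

print(compatible_datastores("gold"))  # ['ds-ssd-01', 'ds-ssd-02']
```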

Oh, and before I forget, there is also the introduction of IOPS reservations at the per virtual disk level. This isn’t really part of Storage DRS but a function of the mClock scheduler, integrated with Storage IO Control and SDRS where needed. It isn’t available in the UI in this release and is only exposed through the VIM API, so I doubt many of you will use it… I figured I would mention it already though, and I will probably do a deeper write-up later this week.
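Since it is only exposed through the VIM API, setting it comes down to a reconfigure call against the virtual disk. Below is a rough pyVmomi sketch of what that could look like; the storageIOAllocation property on VirtualDisk and its reservation field are in the 6.0 API as far as I can tell, but treat this as a sketch and verify against the API reference. Connection setup and the VM lookup are assumed to have happened already.

```python
from pyVmomi import vim

def set_disk_iops_reservation(vm, disk_label, iops):
    """Sketch: set an IOPS reservation on one virtual disk.

    'vm' is assumed to be an already-retrieved vim.VirtualMachine
    (SmartConnect session setup and inventory lookup omitted).
    """
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk) \
                and dev.deviceInfo.label == disk_label:
            dev_spec = vim.vm.device.VirtualDeviceSpec()
            dev_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
            dev_spec.device = dev
            alloc = dev.storageIOAllocation \
                or vim.StorageResourceManager.IOAllocationInfo()
            alloc.reservation = iops  # 'reservation' was added in the 6.0 API
            dev_spec.device.storageIOAllocation = alloc
            return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(
                deviceChange=[dev_spec]))
    raise ValueError("disk %r not found" % disk_label)
```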

Different tiers of storage in a single Storage DRS datastore cluster?

Duncan Epping · Aug 6, 2013 ·

This question about adding different tiers of storage to a single Storage DRS datastore cluster keeps popping up every once in a while. I can understand where it is coming from, as one would think that VM Storage Profiles combined with Storage DRS would allow you to have all types of tiers in one cluster, and then balance within the pool of datastores belonging to that “tier”.

Truth is that this does not work with vSphere 5.1 and lower, unfortunately. Storage DRS and VM Storage Profiles (Profile Driven Storage) are not tightly integrated. Meaning that when you provision a virtual machine into a datastore cluster and Storage DRS needs to rebalance the cluster at some point, it will consider ANY datastore within that datastore cluster as a possible placement destination. Yes, I agree, it is not what you hoped for… it is what it is. (Feature request filed.) Frank visualized this nicely in his article a while back.

So when you architect your datastore clusters, there are a couple of things you will need to keep in mind. These are the design rules at a minimum, if you ask me:

  • LUNs of the same storage tier
    • See above
  • More LUNs = more balancing options
    • Do note that size matters: a single LUN will need to be able to fit your largest VM!
  • Preferably LUNs of the same array (so VAAI offload works properly)
    • VAAI XCOPY (used by SvMotion for instance) doesn’t work when going from Array-A to Array-B
  • When replication is used, LUNs that are part of the same consistency group
    • You will want to make sure that VMs that need to be consistent from a replication perspective are not moved to a LUN that is outside of the consistency group
  • Similar availability characteristics and performance characteristics
    • You don’t want potential performance or availability to degrade when a VM is moved

Hope this helps,

vSphere 5.1 Storage DRS Interoperability

Duncan Epping · May 13, 2013 ·

A while back I did an article on Storage DRS interoperability. I got questions about this last week, so I figured I would write a new article which reflects the current state (vSphere 5.1). I also included some details that are part of the interoperability white paper Frank and I did, so that we have a fairly complete picture. That white paper is based on 5.0; it will probably be updated at some point in the future.

The first column describes the feature or functionality, the second column the recommended or supported automation mode, and the third and fourth columns show which types of balancing are supported.

Capability                        | Automation Mode     | Space Balancing | I/O Metric Balancing
----------------------------------|---------------------|-----------------|---------------------
Array-based Snapshots             | Manual              | Yes             | Yes
Array-based Deduplication         | Manual              | Yes             | Yes
Array-based Thin provisioning     | Manual              | Yes             | Yes
Array-based Auto-Tiering          | Manual              | Yes             | No
Array-based Replication           | Manual              | Yes             | Yes
vSphere Raw Device Mappings       | Fully Automated     | Yes             | Yes
vSphere Replication               | Fully Automated     | Yes             | Yes
vSphere Snapshots                 | Fully Automated     | Yes             | Yes
vSphere Thin provisioned disks    | Fully Automated     | Yes             | Yes
vSphere Linked Clones             | Fully Automated (*) | Yes             | Yes
vSphere Storage Metro Clustering  | Manual              | Yes             | Yes
vSphere Site Recovery Manager     | Not supported       | n/a             | n/a
VMware vCloud Director            | Fully Automated (*) | Yes             | Yes
VMware View (Linked Clones)       | Not supported       | n/a             | n/a
VMware View (Full Clones)         | Fully Automated     | Yes             | Yes

(*) = Change from 5.0

Death to false myths: Storage IO Control = Storage DRS IO load balancing

Duncan Epping · Dec 17, 2012 ·

I often hear people making comments that Storage IO Control and Storage DRS IO load balancing are one and the same thing. It is one of those myths that have been floating around for a long time now, and with this article I am going to try to stop it.

I guess where this myth comes from is that when you create a Datastore Cluster and enable Storage DRS IO load balancing, it automatically configures Storage IO Control on all datastores which are part of that particular Datastore Cluster. This seems to give people the impression that they are the same thing.

I have heard people making these claims especially in interoperability discussions. For example, one of the commonly made mistakes is that you should not enable Storage IO Control on a datastore which has auto-tiering (like EMC FAST, for instance) enabled. Now, the thing is that the Storage DRS interoperability white paper does state that when using an auto-tiering array you should disable IO load balancing when using Storage DRS. However, let it be clear: Storage IO Control and Storage DRS load balancing are not one and the same thing, and Storage IO Control is supported in those scenarios!

Storage DRS uses Storage IO Control to retrieve the IO metrics required to create load balancing recommendations. So let’s repeat that: Storage DRS leverages Storage IO Control. Storage IO Control works perfectly fine without Storage DRS. Storage IO Control is all about handling queues and limiting the impact of short IO spikes. Storage DRS is about sustained latency, moving virtual machines around to balance out the environment.
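To underline that SIOC stands on its own: enabling it is a per-datastore setting, no datastore cluster required. Here is a hedged pyVmomi sketch; the ConfigureDatastoreIORM_Task method and the StorageIORMConfigSpec data object exist in the vSphere API, but double-check the exact pyVmomi type names against your version, and note that 'si' and 'ds' are assumed to be a connected ServiceInstance and an already looked-up datastore.

```python
from pyVmomi import vim

def enable_sioc(si, ds, threshold_ms=30):
    """Sketch: enable Storage IO Control on one datastore, no SDRS involved.

    'si' is an already-connected vim.ServiceInstance and 'ds' a
    vim.Datastore; session setup and inventory lookup are omitted.
    """
    spec = vim.StorageResourceManager.IORMConfigSpec()
    spec.enabled = True
    spec.congestionThreshold = threshold_ms  # latency threshold in ms
    return si.content.storageResourceManager.ConfigureDatastoreIORM_Task(
        datastore=ds, spec=spec)
```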

I guess I can summarize this article in just one sentence:
Storage IO Control != Storage DRS IO Load Balancing


Should I use many small LUNs or a couple large LUNs for Storage DRS?

Duncan Epping · Dec 6, 2012 ·

At several VMUGs where I presented, a question that always came up was the following: “Should I use many small LUNs or a couple of large LUNs for Storage DRS? What are the benefits of either?”

I posted about VMFS-5 LUN sizing a while ago and I suggest reading that first if you haven’t yet, just to get an idea of the considerations taken when sizing datastores. I guess that article already more or less answers the question… I personally prefer many “small LUNs” over a couple of large LUNs, but let me explain why. As an example, let’s say you need 128TB of storage in total. What are your options?

You could create 2x 64TB LUNs, 4x 32TB LUNs, 16x 8TB LUNs or 32x 4TB LUNs. What would be easiest? Well, I guess 2x 64TB LUNs would be easiest, right? You only need to request 2 LUNs, and adding them to a datastore cluster will be easy. The same goes for the 4x 32TB LUNs… but with 16x 8TB and 32x 4TB the amount of effort increases.

However, that is just a one-time effort. You format them with VMFS, add them to the datastore cluster and you are done. Yes, it seems like a lot of work, but in reality it might take you 20-30 minutes to do this for 32 LUNs. Now if you take a step back and think about it for a second… why did I want to use Storage DRS in the first place?

Storage DRS (and Storage IO Control for that matter) is all about minimizing risk. In storage, two big risks are hitting an “out of space” scenario or extremely degraded performance. Those happen to be the two pain points that Storage DRS targets. In order to prevent these problems from occurring Storage DRS will try to balance the environment, when a certain threshold is reached that is. You can imagine that things will be “easier” for Storage DRS when it has multiple options to balance. When you have one option (2 datastores – source datastore) you won’t get very far. However, when you have 31 options (32 datastores – source datastore) that increases the chances of finding the right fit for your virtual machine or virtual disk while minimizing the impact on your environment.
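The arithmetic is simple enough to put in a couple of lines; this trivial sketch just restates the point that, for a fixed 128TB, more and smaller LUNs means more possible destinations for any single move:

```python
# For a fixed total capacity, smaller LUNs = more possible destinations
# for a Storage DRS migration (all datastores minus the source).
total_tb = 128
for lun_tb in (64, 32, 8, 4):
    count = total_tb // lun_tb
    print("%2dx %2dTB LUNs -> %2d balancing options per move"
          % (count, lun_tb, count - 1))
```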

I already dropped the name Storage IO Control (SIOC); this is another feature to take into account. Storage IO Control is all about managing your queues, and you don’t want to do that yourself. Believe me, it is complex, and no one likes queues, right? (If you have Enterprise Plus, enable SIOC!) Reality is, though, that there are many queues in between the application and the spindles your data sits on. The question is: would you prefer to have 2 device queues with many workloads potentially queuing up, or would you prefer to have 32 device queues? Think about the impact this could have.

Please don’t get me wrong… I am not advocating going really small and creating many tiny LUNs, nor am I saying you should create a couple of really large LUNs. Try to find the sweet spot for your environment by taking the failure domain (backup/restore time), IOPS, queues (SIOC) and load balancing options for Storage DRS into account.

