
Yellow Bricks

by Duncan Epping


Storage IO Control

vSphere 6.5 what’s new – Storage IO Control

Duncan Epping · Oct 26, 2016 ·

There are many enhancements in vSphere 6.5, and an overhaul of Storage IO Control is one of them. In vSphere 6.5, Storage IO Control has been reimplemented by leveraging the VAIO framework. For those who don't know, VAIO stands for vSphere APIs for IO Filtering. It is basically a framework that allows you to filter (storage) IO and do things with it. So far we have seen caching and replication filters from 3rd-party vendors, and now a Quality of Service filter from VMware.

Storage IO Control has been around for a while and hasn't really changed much since its inception. It is one of those features people take for granted, and in most cases you don't even realize you have it turned on. Why? Well, Storage IO Control (SIOC) only comes into play when there is contention. When it does come into play, it ensures that every VM gets its fair share of storage resources. (I am not going to explain the basics; read this post for more details.) Why the change in SIOC design/implementation? Fairly simple: the VAIO framework enables policy-based management. This goes for caching, replication and indeed also QoS. Instead of configuring disks or VMs individually, you now have the ability to specify configuration details in a VM Storage Policy and assign that policy to a VM or VMDK. Before you do, though, make sure you first enable SIOC on the datastore level and set the appropriate latency threshold. Let's take a look at how all of this works:

  • Go to your VMFS or NFS datastore, right-click the datastore, click “Configure SIOC” and enable SIOC
  • By default the congestion threshold is set to 90% of peak throughput; you can change this percentage, or specify a latency threshold manually by defining the number of milliseconds of latency
  • Now go to the VM Storage Policy section
  • Go to the Storage Policy Components section first and check out the pre-created policy components; below I show “Normal” as an example
  • If you want, you can also create a Storage Policy Component yourself and specify custom shares, a limit and a reservation. Personally, this is what I would prefer to do when using SIOC, and I would probably remove the limit if there is no need for it. Why limit a VM on IOPS when there is no contention? And if there is contention, well, then SIOC's share-based distribution of resources will be applied.
  • The next thing you will need to do is create a VM Storage Policy: click that tab and click “Create VM storage policy”
  • Give the policy a name and next you can select which components to use
  • In my case I select the “Normal IO shares allocation” and I also add Encryption, just so we know what that looks like. If you have other data services, like replication for instance, you could even stack those on top. That is what I love about the VAIO Framework.
  • Now you can add rules; I am not going to do this. Next, the compatible datastores will be presented, and that is it.
  • You can now assign the policy to a VM. You do this by right-clicking a particular VM and selecting “VM Policies” and then “Edit VM Storage Policies”
  • Now you can decide which VM Storage Policy to apply to the disk. Select your policy from the dropdown and then click “apply to all”, at least when that is what you would like to do.
  • When you have applied the policy to the VM, simply click “OK” and the policy will be applied to the VM and, for instance, the VMDK. (For those who would rather script these steps, see the PowerCLI sketch right below this list.)
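Here is a minimal PowerCLI sketch of those steps. The vCenter, datastore and VM names are placeholders, and I am assuming the “Normal IO shares allocation” policy was already created in the UI as described above; treat this as a sketch rather than a definitive implementation.

  # Connect to vCenter (placeholder server name)
  Connect-VIServer -Server "vcenter.local"

  # Enable SIOC on the datastore and set a manual congestion
  # threshold of 30 milliseconds
  Get-Datastore -Name "Datastore01" |
      Set-Datastore -StorageIOControlEnabled $true -CongestionThresholdMillisecond 30

  # Apply the VM Storage Policy to the VM home object and all of its disks
  $policy = Get-SpbmStoragePolicy -Name "Normal IO shares allocation"
  $vm = Get-VM -Name "VM01"
  $vm | Set-SpbmEntityConfiguration -StoragePolicy $policy
  $vm | Get-HardDisk | Set-SpbmEntityConfiguration -StoragePolicy $policy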

And that is it, you are done. Compared to previous versions of Storage Policy Based Management, this may all be a bit confusing with the Storage Policy Components, but believe me, it is very powerful and it will make your life easier. Mix and match whatever you require for a workload and simply apply it.

Before I forget: note that the filter driver is, at this point, used to enforce limits only. Shares and the reservation still leverage the mClock scheduler.


SIOControlFlag2 what is it?

Duncan Epping · Dec 19, 2014 ·

I had a question this week about what Misc SIOControlFlag2 is. Some refer to it as SIOControlFlag2, and I've also seen Misc.SIOControlFlag2; in the end it is the same thing. It is something that sometimes pops up in the log files, or that some may stumble into in the “advanced settings” on a host level. The question I had was why the value is 0 on some hosts, 2 on others, or even 34 on other hosts.

Let me start by saying that it is nothing to worry about, even when you are not using Storage IO Control. It is an internal setting which is set by ESXi (hostd sets it) when an operation is done that opens the disk files on a volume (vMotion, power-on, etc.). It is there to ensure that, when Storage IO Control is used, the “SIOC injector” knows when to use the volume to characterize it and when not to. Do not worry about this setting being different on the hosts in your cluster; it is an internal setting which has no impact on your environment itself, other than that, when you use SIOC, it helps SIOC make the right decision.
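If you are curious what the flag is set to across your hosts, you can read it without changing anything. A quick read-only PowerCLI sketch (host selection is up to you):

  # List the Misc.SIOControlFlag2 value for every connected host (read-only)
  Get-VMHost | Select-Object Name,
      @{N="SIOControlFlag2"; E={ (Get-AdvancedSetting -Entity $_ -Name "Misc.SIOControlFlag2").Value }}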

Increase Storage IO Control logging level

Duncan Epping · May 2, 2013 ·

I received a question today about how to increase the Storage IO Control logging level. I knew either Frank or I had written about this in the past, but I couldn't find it... which made sense, as it was actually documented in our book. I figured I would dump the blurb into an article so that everyone who needs it, for whatever reason, can use it.

Sometimes it is necessary to troubleshoot your environment and having logs to review is helpful in determining what is actually happening. By default, SIOC logging is disabled, but it should be enabled before collecting logs. To enable logging:

  1. Click Host Advanced Settings.
  2. In the Misc section, select the Misc.SIOControlLogLevel parameter and set the value to 7 for complete logging (min value: 0 (no logging), max value: 7). A scripted way to do this is sketched below the list.
  3. SIOC needs to be restarted for the new log level to take effect. To stop and start SIOC manually, use: /etc/init.d/storageRM {start|stop|status|restart}
  4. After changing the log level, you will see the change logged in /var/log/vmkernel.

Please note that SIOC log files are saved in /var/log/vmkernel.
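If you prefer not to click through the UI, the same setting can be changed with PowerCLI. A minimal sketch, assuming a host named "esxi01.local" (the storageRM restart from step 3 still happens in the ESXi shell):

  # Set SIOC logging to the maximum level (7) on a single host
  $esx = Get-VMHost -Name "esxi01.local"
  Get-AdvancedSetting -Entity $esx -Name "Misc.SIOControlLogLevel" |
      Set-AdvancedSetting -Value 7 -Confirm:$false

  # Remember to restart SIOC on the host afterwards (ESXi shell):
  #   /etc/init.d/storageRM restart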

Death to false myths: Storage IO Control = Storage DRS IO load balancing

Duncan Epping · Dec 17, 2012 ·

I often hear people making comments about Storage IO Control and Storage DRS IO load balancing being one and the same thing. It is one of those myths that have been floating around for a long time now, and with this article I am going to try to put a stop to it.

I guess where this myth comes from is that when you create a Datastore Cluster and you enable Storage DRS IO Load Balancing then it configures Storage IO Control for you automatically on all datastores which are part of that particular Datastore Cluster. This seems to give people the impression that they are the same thing.

I have heard people making these claims especially in interoperability discussions. For example, one of the commonly made mistakes is the claim that you should not enable Storage IO Control on a datastore which has auto-tiering (like EMC FAST, for instance) enabled. Now, the thing is that the Storage DRS Interop white paper lists that, when using an auto-tiering array, you should disable IO load balancing when using Storage DRS. However, let's be clear: Storage IO Control and Storage DRS load balancing are not one and the same thing, and Storage IO Control is supported in those scenarios!

Storage DRS uses Storage IO Control to retrieve the IO metrics required to create load-balancing recommendations. So let's repeat that: Storage DRS leverages Storage IO Control. Storage IO Control works perfectly fine without Storage DRS. Storage IO Control is all about handling queues and limiting the impact of short IO spikes. Storage DRS is about sustained latency and moving virtual machines around to balance out the environment. The sketch below shows that the two are configured independently.
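To underline that these are two separate knobs, here is a hedged PowerCLI sketch (the datastore and datastore cluster names are placeholders): SIOC is enabled per datastore, while IO load balancing is a property of the datastore cluster, so you can switch the latter off, as recommended for auto-tiering arrays, while leaving the former on.

  # SIOC is a per-datastore setting...
  Get-Datastore -Name "Datastore01" |
      Set-Datastore -StorageIOControlEnabled $true

  # ...while IO load balancing is a per-datastore-cluster setting,
  # disabled here while SIOC stays enabled on the member datastores
  Get-DatastoreCluster -Name "DatastoreCluster01" |
      Set-DatastoreCluster -IOLoadBalanceEnabled $false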

I guess I can summarize this article in just one sentence:
Storage IO Control != Storage DRS IO Load Balancing


Should I use many small LUNs or a couple large LUNs for Storage DRS?

Duncan Epping · Dec 6, 2012 ·

At several VMUGs where I presented, a question that always came up was the following: “Should I use many small LUNs or a couple of large LUNs for Storage DRS? What are the benefits of either?”

I posted about VMFS-5 LUN sizing a while ago, and I suggest reading that first if you haven't yet, just to get some idea of the considerations that go into sizing datastores. I guess that article already more or less answers the question... I personally prefer many “small LUNs” over a couple of large LUNs, but let me explain why. As an example, let's say you need 128TB of storage in total. What are your options?

You could create 2x 64TB LUNs, 4x 32TB LUNs, 16x 8TB LUNs or 32x 4TB LUNs. What would be easiest? Well, I guess 2x 64TB LUNs would be easiest, right? You only need to request 2 LUNs, and adding them to a datastore cluster will be easy. The same goes for the 4x 32TB LUNs... but with 16x 8TB and 32x 4TB the amount of effort increases.

However, that is just a one-time effort. You format them with VMFS, add them to the datastore cluster and you are done. Yes, it seems like a lot of work, but in reality it might take you 20-30 minutes to do this for 32 LUNs, and it is easily scripted (see the sketch below). Now, if you take a step back and think about it for a second... why did I want to use Storage DRS in the first place?
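To back that up, here is a rough PowerCLI sketch that formats a batch of LUNs with VMFS and moves the resulting datastores into a datastore cluster. The host name, datastore cluster name, naming scheme and the LUN filter (by canonical-name prefix) are all assumptions you would adapt to your own array, and the sketch does not check whether a LUN is already in use.

  $esx = Get-VMHost -Name "esxi01.local"
  $dsc = Get-DatastoreCluster -Name "DatastoreCluster01"

  # Select the LUNs to format; the canonical-name prefix is a placeholder
  $luns = Get-ScsiLun -VmHost $esx -LunType disk |
      Where-Object { $_.CanonicalName -like "naa.6000*" }

  $i = 1
  foreach ($lun in $luns) {
      # Create a VMFS datastore on the LUN and move it into the cluster
      $ds = New-Datastore -VMHost $esx -Name ("Datastore{0:D2}" -f $i) -Path $lun.CanonicalName -Vmfs
      Move-Datastore -Datastore $ds -Destination $dsc
      $i++
  }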

Storage DRS (and Storage IO Control, for that matter) is all about minimizing risk. In storage, two big risks are hitting an “out of space” scenario and extremely degraded performance. Those happen to be the two pain points that Storage DRS targets. In order to prevent these problems from occurring, Storage DRS will try to balance the environment, when a certain threshold is reached that is. You can imagine that things will be “easier” for Storage DRS when it has multiple options to balance. When you have one option (2 datastores – the source datastore) you won't get very far. However, when you have 31 options (32 datastores – the source datastore), that increases the chance of finding the right fit for your virtual machine or virtual disk while minimizing the impact on your environment.

I already dropped the name Storage IO Control (SIOC); this is another feature to take into account. Storage IO Control is all about managing your queues, and you don't want to do that yourself. Believe me, it is complex, and no one likes queues, right? (If you have Enterprise Plus, enable SIOC!) The reality is, though, that there are many queues between the application and the spindles your data sits on. The question is: would you prefer to have 2 device queues with many workloads potentially queuing up, or would you prefer to have 32 device queues? Look at the impact this could have; the sketch below shows how to inspect those device queues.
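If you want to see those device queues for yourself, you can list the maximum queue depth per device through the esxcli interface that PowerCLI exposes. A small sketch, assuming a host named "esxi01.local"; note that the exact property name may vary between ESXi versions:

  # Expose esxcli for the host and list each device's maximum queue depth
  $esxcli = Get-EsxCli -VMHost (Get-VMHost -Name "esxi01.local") -V2
  $esxcli.storage.core.device.list.Invoke() |
      Select-Object Device, DeviceMaxQueueDepth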

Please don’t get me wrong… I am not advocating to go really small and create many small LUNs. Neither am I saying you should create a couple of really large LUNs. Try to find the the sweetspot for your environment by taking failure domain (backup restore time), IOps, queues (SIOC) and load balancing options for Storage DRS in to account.

