
Yellow Bricks

by Duncan Epping


sioc

Increase Storage IO Control logging level

Duncan Epping · May 2, 2013 ·

I received a question today about how to increase the Storage IO Control logging level. I knew either Frank or I had written about this in the past, but I couldn't find it… which made sense, as it was actually documented in our book. I figured I would dump the blurb into an article so that everyone who needs it, for whatever reason, can use it.

Sometimes it is necessary to troubleshoot your environment, and having logs to review helps determine what is actually happening. By default, SIOC logging is disabled; it should be enabled before collecting logs. To enable logging:

  1. Open the host's Advanced Settings.
  2. In the Misc section, select the Misc.SIOControlLogLevel parameter and set the value to 7 for complete logging. (Min value: 0 (no logging), Max value: 7.)
  3. SIOC needs to be restarted for the new log level to take effect. To stop and start SIOC manually, use: /etc/init.d/storageRM {start|stop|status|restart}
  4. After changing the log level, you will see the change logged in /var/log/vmkernel.

Please note that SIOC log files are saved in /var/log/vmkernel.
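For those who prefer the command line, the same change can be made from the ESXi shell. Below is a minimal sketch, assuming ESXi 5.x with the option exposed under /Misc; verify the option name and the exact log file path on your build before relying on it:

  # Check the current SIOC log level
  esxcli system settings advanced list -o /Misc/SIOControlLogLevel

  # Set the log level to 7 (complete logging)
  esxcli system settings advanced set -o /Misc/SIOControlLogLevel -i 7

  # Restart SIOC so the new log level takes effect
  /etc/init.d/storageRM restart

  # Follow SIOC entries in the vmkernel log (on ESXi 5.x: /var/log/vmkernel.log)
  tail -f /var/log/vmkernel.log | grep -i sioc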

Limit a VM from an IOps perspective

Duncan Epping · Apr 29, 2013 ·

Over the last couple of weeks I have heard people either asking how to limit a VM from an IOps perspective or claiming that Storage IO Control (SIOC) is what allows you to limit VMs. As I pointed at least three folks to this info, I figured I would share it publicly.

There is an IOps limit setting available on the virtual disk… this is what allows you to limit a virtual machine / virtual disk to a specific number of IOps. It should be noted that this limit is enforced (in vSphere 5.1 and prior) by the local host scheduler, also known as SFQ (start-time fair queueing). One thing to realize, though, is that when you set a limit on multiple virtual disks of a virtual machine, all of these limits are added up, and that combined total becomes your threshold. In other words:

  • Disk01 – 50 IOps limit
  • Disk02 – 200 IOps limit
  • Combined total: 250 IOps limit
  • If Disk01 only uses 5 IOps then Disk02 can use 245 IOps!

There is one caveat though: the "combined total" only applies to disks stored on the same datastore. So if you have 4 disks stored across 4 datastores, each individual limit applies separately.
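To make this concrete, here is a hypothetical sketch of what the per-disk limits look like in a VM's configuration file. The parameter name (sched.scsiX:Y.throughputCap), the datastore path, and the VM name are illustrative assumptions on my part; check the KB article below for the authoritative procedure:

  # Hypothetical: inspect the per-disk IOps limits stored in a VM's .vmx file.
  # Both disks live on the same datastore, so the local host scheduler
  # enforces a combined 250 IOps threshold across them.
  grep throughputCap /vmfs/volumes/datastore01/vm01/vm01.vmx
  sched.scsi0:0.throughputCap = "50"
  sched.scsi0:1.throughputCap = "200"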

More details can be found in this KB article: http://kb.vmware.com/kb/1038241

Death to false myths: Storage IO Control = Storage DRS IO load balancing

Duncan Epping · Dec 17, 2012 ·

I often hear people claim that Storage IO Control and Storage DRS IO load balancing are one and the same thing. It is one of those myths that have been floating around for a long time now, and with this article I am going to try to put a stop to it.

I guess this myth stems from the fact that when you create a datastore cluster and enable Storage DRS IO load balancing, Storage IO Control is automatically configured on all datastores which are part of that particular datastore cluster. This seems to give people the impression that they are the same thing.

I have heard people make these claims especially in interoperability discussions. For example, one of the commonly made mistakes is claiming that you should not enable Storage IO Control on a datastore which has auto-tiering (like EMC FAST, for instance) enabled. Now, the Storage DRS Interop white paper does state that when using an auto-tiering array you should disable IO load balancing when using Storage DRS. However, let's be clear: Storage IO Control and Storage DRS load balancing are not one and the same thing, and Storage IO Control is supported in those scenarios!

Storage DRS uses Storage IO Control to retrieve the IO metrics required to create load-balancing recommendations. So let's repeat that: Storage DRS leverages Storage IO Control. Storage IO Control works perfectly fine without Storage DRS. Storage IO Control is all about handling queues and limiting the impact of short IO spikes; Storage DRS is about sustained latency and moving virtual machines around to balance out the environment.

I guess I can summarize this article in just one sentence:
Storage IO Control != Storage DRS IO Load Balancing


Should I use many small LUNs or a couple of large LUNs for Storage DRS?

Duncan Epping · Dec 6, 2012 ·

At several VMUGs where I presented, a question that always came up was the following: "Should I use many small LUNs or a couple of large LUNs for Storage DRS? What are the benefits of either?"

I posted about VMFS-5 LUN sizing a while ago, and I suggest reading that first if you haven't yet, just to get an idea of the considerations taken when sizing datastores. I guess that article already more or less answers the question… I personally prefer many "small LUNs" over a couple of large LUNs, but let me explain why. As an example, let's say you need 128TB of storage in total. What are your options?

You could create 2x 64TB LUNs, 4x 32TB LUNs, 16x 8TB LUNs, or 32x 4TB LUNs. What would be easiest? Well, I guess 2x 64TB LUNs would be easiest, right? You only need to request 2 LUNs, and adding them to a datastore cluster will be easy. The same goes for the 4x 32TB LUNs… but with 16x 8TB and 32x 4TB the amount of effort increases.

However, that is just a one-time effort. You format them with VMFS, add them to the datastore cluster, and you are done. Yes, it seems like a lot of work, but in reality it might take you 20-30 minutes to do this for 32 LUNs. Now, if you take a step back and think about it for a second… why did I want to use Storage DRS in the first place?

Storage DRS (and Storage IO Control, for that matter) is all about minimizing risk. In storage, two big risks are hitting an "out of space" scenario or extremely degraded performance. Those happen to be the two pain points that Storage DRS targets. In order to prevent these problems from occurring, Storage DRS will try to balance the environment, when a certain threshold is reached that is. You can imagine that things will be "easier" for Storage DRS when it has multiple options to balance. When you have only one option (2 datastores minus the source datastore), you won't get very far. However, when you have 31 options (32 datastores minus the source datastore), that increases the chances of finding the right fit for your virtual machine or virtual disk while minimizing the impact on your environment.

I already dropped the name Storage IO Control (SIOC); this is another feature to take into account. Storage IO Control is all about managing your queues, and that is not something you want to do yourself. Believe me, it is complex, and no one likes queues, right? (If you have Enterprise Plus, enable SIOC!) The reality is, though, that there are many queues between the application and the spindles your data sits on. The question is: would you prefer to have 2 device queues with many workloads potentially queuing up, or would you prefer to have 32 device queues? Look at the impact that this could have.
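To put some rough numbers behind the queue argument, here is a back-of-the-envelope sketch. It assumes a device queue depth of 32 per LUN, which is a common default but by no means universal; check the actual value on your own hosts before drawing conclusions:

  # List the maximum queue depth per device (look for "Device Max Queue Depth")
  esxcli storage core device list | grep -E "Display Name|Queue Depth"

  # Assuming a queue depth of 32 per LUN, per host:
  #   2 x 64TB LUNs :  2 * 32 = 64 outstanding IO slots
  #  32 x 4TB LUNs  : 32 * 32 = 1024 outstanding IO slots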

Please don’t get me wrong… I am not advocating going really small and creating many small LUNs, nor am I saying you should create a couple of really large LUNs. Try to find the sweet spot for your environment by taking the failure domain (backup/restore time), IOps, queues (SIOC), and load-balancing options for Storage DRS into account.

Using Storage IO Control and Network IO Control together?

Duncan Epping · Dec 7, 2011 ·

I had a question today from someone who asked if there is any point in enabling SIOC (Storage IO Control) when you have NIOC (Network IO Control) enabled and configured. Let's start with the answer: yes, there is! NIOC controls traffic at the level of a single NIC port. In other words, when you have 10GbE NIC ports and vMotion, VMs, and NFS (for instance) use the same NIC port, it will prevent one of the streams from claiming all the bandwidth while others need it. It basically is the police officer who controls a group of people getting too loud in a single room.

As not many people realize this, let's repeat it… NIOC controls traffic at the NIC port level. Not on a NIC pair, not on a host level, and not on a cluster-wide level. On a NIC port level!

SIOC does IO control at the datastore-VM layer, meaning that when a certain threshold is reached, it will determine on a datastore-wide level which hosts, and essentially which VMs, get a specific chunk of the resources. SIOC prevents a single VM from claiming all the IO resources for a datastore in a cluster. SIOC is cluster-wide on a datastore level! It basically is the police officer who asks your neighbor to tone it down as he is bothering the rest of the street.

Yes, enabling SIOC and NIOC together makes a lot of sense!
