
Yellow Bricks

by Duncan Epping



Enable Storage IO Control on all Datastores!

Duncan Epping · Jan 20, 2011 ·

This week I received an email from one of my readers about some weird Storage IO Control behavior in their environment. On a regular basis he would receive an error stating that an “external I/O workload has been detected on shared datastore running Storage I/O Control (SIOC) for congestion management”. He did a quick scan of his complete environment and couldn’t find any other hosts connecting to those volumes. After exchanging a couple of emails about the environment I managed to figure out what triggered this alert.

Now this all sounds very logical, but it is probably one of the most commonly made mistakes… sharing spindles. Some storage platforms carve a volume out of a specific set of spindles, which means those spindles are dedicated solely to that particular volume. Other storage platforms, however, group spindles and layer volumes across them. Simply put, they share spindles to increase performance. NetApp’s “aggregates” and HP’s “disk groups” are good examples.

This can and probably will cause the alarm to be triggered, as essentially an unknown workload is impacting your datastore performance. If you are designing your environment from the ground up, make sure that SIOC is enabled on all VMFS volumes backed by the same set of spindles.

In an existing environment, however, this will be difficult to achieve. But don’t worry that SIOC will be overly conservative and unnecessarily throttle your virtualized workload: if and when SIOC detects an external workload, it will stop throttling the virtualized workload, to avoid giving the external workload more bandwidth while negatively impacting the virtualized workload. From a throttling perspective that looks as follows:

32 29 28 27 25 24 22 20 (non-VI workload detected -> back to max queue depth)
32 31 29 28 26 25 (non-VI workload detected -> back to max queue depth)
32 30 29 27 25 24 (non-VI workload detected -> back to max queue depth)
…

Please note that the above example depicts a scenario where SIOC notices that the latency threshold is still being exceeded, so the cycle starts again; SIOC checks latency values every 4 seconds. The question of course remains how SIOC knows that there is an external workload accessing the datastore. SIOC uses what we call a “self-learning algorithm”: it keeps track of historically observed latency, outstanding I/Os and window sizes. Based on that information it can identify anomalies, and that is what triggers the alarm.
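To make that cycle a bit more tangible, here is a toy Python sketch of the throttle and back-off behavior. This is emphatically not the actual PARDA control algorithm (see the PARDA whitepaper in the references for the real equations); the step sizes and function name are my own illustrative assumptions.

MAX_QDEPTH = 32       # default device queue depth
MIN_QDEPTH = 4        # SIOC never throttles below this
THRESHOLD_MS = 30     # default congestion threshold
INTERVAL_S = 4        # SIOC evaluates latency every 4 seconds

def next_qdepth(qdepth, avg_latency_ms, external_detected):
    """Return the device queue depth for the next 4-second interval."""
    if external_detected:
        # Stop throttling: backing off further would only hand the
        # external workload the bandwidth our VMs give up.
        return MAX_QDEPTH
    if avg_latency_ms > THRESHOLD_MS:
        # Latency threshold exceeded: step the queue depth down.
        # The step size here is illustrative, not the real algorithm.
        return max(MIN_QDEPTH, qdepth - 2)
    # Below the threshold: slowly grow back towards the maximum.
    return min(MAX_QDEPTH, qdepth + 1)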

To summarize:

  • Enable SIOC on all datastores that are backed by the same set of spindles
  • If you are designing a greenfield implementation, try to avoid sharing spindles between non-VMware and VMware workloads

More details about when this event could be triggered can be found in this KB article.
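For those who want to enforce that first bullet across a large environment, here is a minimal pyVmomi sketch that enables SIOC on every VMFS datastore vCenter knows about. The hostname and credentials are placeholders, the 30 ms threshold is simply the default mentioned elsewhere on this page, and you should verify the behavior against your own vCenter before trusting it.

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.local", user="administrator@vsphere.local",
                  pwd="secret")  # certificate handling omitted for brevity
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        if ds.summary.type != "VMFS":
            continue  # SIOC can currently only be enabled on VMFS volumes
        spec = vim.StorageIORMConfigSpec(enabled=True,
                                         congestionThreshold=30)  # 30 ms
        content.storageResourceManager.ConfigureDatastoreIORM_Task(
            datastore=ds, spec=spec)
    view.Destroy()
finally:
    Disconnect(si)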

Storage IO Control and Storage vMotion?

Duncan Epping · Jan 14, 2011 ·

I received a very good question this week to which I did not have the answer; I had a feeling, but that is not enough. The question was whether Storage vMotion would be “throttled” by Storage IO Control. As I happened to have a couple of meetings scheduled this week with the actual engineers, I asked the question, and this was their answer:

Storage IO Control can throttle Storage vMotion when the latency threshold is exceeded. The reason for this is that Storage vMotion is “billed” to the virtual machine.

This basically means that if you initiate a Storage vMotion, the “process” belongs to the VM, and as such, if the host is throttled, the Storage vMotion process might be throttled as well by the local scheduler (SFQ), depending on the amount of shares originally allocated to this virtual machine. That is definitely something to keep in mind when doing a Storage vMotion of a large virtual machine, as it could potentially increase the amount of time it takes for the Storage vMotion to complete. Don’t get me wrong, that is not necessarily a negative thing, because at the same time it prevents that particular Storage vMotion from consuming all available bandwidth.

Storage IO Control Best Practices

Duncan Epping · Oct 19, 2010 ·

After attending Irfan Ahmad’s session on Storage IO Control at VMworld I had the pleasure to sit down with Irfan and discuss SIOC. Irfan was kind enough to review my SIOC articles (1, 2) and we discussed a couple of other things as well. The discussion and the Storage IO Control session contained some real gems, and before my brain resets itself I wanted to have these documented.

Storage IO Control Best Practices:

  • Enable Storage IO Control on all datastores
  • Avoid external (non-vSphere) access to SIOC-enabled datastores
    • When SIOC detects an external workload it will stop throttling to avoid any interference, more info here.
  • When multiple datastores share the same set of spindles, ensure all of them have SIOC enabled with comparable settings
  • Change the latency threshold based on the storage media type used:
    • For FC storage the recommended latency threshold is 20–30 ms
    • For SAS storage the recommended latency threshold is 20–30 ms
    • For SATA storage the recommended latency threshold is 30–50 ms
    • For SSD storage the recommended latency threshold is 15–20 ms
  • Define a per-VM IOPS limit to avoid a single VM flooding the array
    • For instance, limit each VM to 1000 IOPS (see the sketch below)
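As a companion to that last bullet, here is a minimal pyVmomi sketch that caps every virtual disk of a given VM at 1000 IOPS. It assumes an existing connection “si” as in the earlier sketch; the function name and the 1000 IOPS figure are mine, so treat this as a starting point rather than the definitive way to do it.

from pyVmomi import vim

def limit_disk_iops(vm, iops=1000):
    """Apply a per-VMDK IOPS limit to every virtual disk of a VM."""
    changes = []
    for dev in vm.config.hardware.device:
        if not isinstance(dev, vim.vm.device.VirtualDisk):
            continue
        alloc = dev.storageIOAllocation or \
            vim.StorageResourceManager.IOAllocationInfo()
        alloc.limit = iops  # a limit of -1 means unlimited
        dev.storageIOAllocation = alloc
        changes.append(vim.vm.device.VirtualDeviceSpec(
            operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
            device=dev))
    if changes:
        vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=changes))

# Usage, assuming "si" from the earlier sketch:
# vm = si.content.searchIndex.FindByDnsName(None, "myvm.local", True)
# limit_disk_iops(vm, iops=1000)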

SIOC, tying up some loose ends

Duncan Epping · Oct 8, 2010 ·

After my initial post about Storage IO Control I received a whole bunch of questions. Instead of replying via the commenting system I decided to add them to a blog post, as it would be useful for everyone to read this. Now, I figured this stuff out by reading the PARDA whitepaper 6 times and by going through the log files and CLI of my ESXi host, so this is not cast in stone. If anyone has any additional questions, don’t hesitate to ask them and I’ll be happy to add them and try to answer them!

Here are the questions with the answers underneath in italic:

  1. Q: Why is SIOC not enabled by default?
    A: Datastores can be shared between clusters, and those clusters could be licensed differently; as such, SIOC is not enabled by default.
  2. Q: If vCenter is only needed when enabling the feature, who will keep track of latencies when a datastore is shared between multiple hosts?
    A: Latency values are actually stored on the datastore itself. From the PARDA academic paper I figured two methods could be used for this: either network communication or, as stated, using the datastore. Notice the file “iormstat.sf” in green in the screenshot below; I guess that answers the question… the datastore itself is used to communicate the latency of a datastore. I also confirmed with Irfan that my assessment was correct.
  3. Q: Where does datastore-wide disk scheduler run from?
    A: The datastore-wide disk scheduler is essentially SIOC, also known as the “PARDA Control Algorithm”, and runs on each host sharing that datastore. PARDA consists of two key components: “latency estimation” and “window size computation”. Latency estimation is used to detect whether SIOC needs to throttle queues to ensure each VM gets its fair share. Window size computation is used to calculate what the queue depth should be for your host.
  4. Q: Is PARDA also responsible for throttling the VM?
    A: No. PARDA itself, or better said the two major components that form PARDA (latency estimation and window size computation), does not control “host local” fairness; the local scheduler (SFQ) is responsible for that.
  5. Q: Can we in any way control I/O contention in a vCloud Director environment (say, one VM running high I/O impacting another VM on the same host/datastore)?
    A: I would highly recommend enabling this in vCloud environments to prevent storage-based DoS attacks (or just noisy neighbors) and to ensure I/O fairness can be preserved. This is one of the reasons VMware developed this mechanism.
  6. Q: I can’t enable SIOC with an Enterprise license – “License not available to perform the operation”. Is it Enterprise Plus only?
    A: Yes, SIOC requires Enterprise Plus.
  7. Q: Can I verify what the latency is?
    A: Yes you can. Go to the Host – Performance tab and select “Datastore”, “Real Time”, select the datastore and select “Storage I/O Control normalized latency”. Please note that the unit of measurement is microseconds! A sketch for pulling this counter programmatically follows after this list.
  8. Q: This doesn’t appear to work on NFS?
    A: SIOC can currently only be enabled on VMFS volumes.

If you happen to be at VMworld next week, make sure to attend this session: TA8233 Prioritizing Storage Resource Allocation in ESX Based Virtual Environments Using Storage I/O Control!
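As a follow-up to question 7, here is a hedged pyVmomi sketch for pulling that same statistic programmatically. I am assuming the counter behind the chart is the one vCenter exposes as datastore.sizeNormalizedDatastoreLatency (reported in microseconds) and that “si” is an existing connection as in the earlier sketches; verify the counter name against your vCenter’s counter list before relying on this.

from pyVmomi import vim

def sioc_normalized_latency(si, host):
    """Query the SIOC normalized latency counter for one host (real-time)."""
    perf = si.RetrieveContent().perfManager
    counter_id = None
    for c in perf.perfCounter:
        if "%s.%s" % (c.groupInfo.key, c.nameInfo.key) == \
                "datastore.sizeNormalizedDatastoreLatency":
            counter_id = c.key
            break
    if counter_id is None:
        raise LookupError("counter not found on this vCenter")
    query = vim.PerformanceManager.QuerySpec(
        entity=host,  # datastore counters are collected per host
        metricId=[vim.PerformanceManager.MetricId(counterId=counter_id,
                                                  instance="*")],
        intervalId=20)  # 20 seconds is the real-time sampling interval
    return perf.QueryPerf(querySpec=[query])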

Storage I/O Fairness

Duncan Epping · Sep 29, 2010 ·

I was preparing a post on Storage I/O Control (SIOC) when I noticed this article by Alex Bakman. Alex managed to capture the essence of SIOC in just two sentences.

Without setting the shares you can simply enable Storage I/O Control on each datastore. This will prevent any one VM from monopolizing the datastore by leveling out all requests for I/O that the datastore receives.

This is exactly the reason why I would recommend anyone who has a large environment, and even more specifically anyone running cloud environments, to enable SIOC. Especially in very large environments, where compute, storage and network resources are designed to accommodate the highest common factor, it is important to ensure that all entities can claim their fair share of resources, and SIOC will do just that.

Now the question is: how does this actually work? I already wrote a short article on it a while back, but I guess it can’t hurt to reiterate things and to expand a bit.

First a bunch of facts I wanted to make sure were documented:

  • SIOC is disabled by default
  • SIOC needs to be enabled on a per Datastore level
  • SIOC only engages when a specific level of latency has been reached
  • SIOC has a default latency threshold of 30 ms
  • SIOC uses an average latency across hosts
  • SIOC uses disk shares to assign I/O queue slots
  • SIOC does not use vCenter, except for enabling the feature

When SIOC is enabled, disk shares are used to give each VM its fair share of resources in times of contention. Contention in this case is measured in latency. As stated above, when latency is equal to or higher than 30 ms (the statistics around this are computed every 4 seconds), the “datastore-wide disk scheduler” will determine which action to take to reduce the overall/average latency and increase fairness. I guess the best way to explain what happens is by using an example.

As stated earlier, I want to keep this post fairly simple, so I am using the example of an environment where every VM has the same amount of shares. I have also limited the number of VMs and hosts in the diagrams. Those of you who attended VMworld session TA8233 (Ajay and Chethan) will recognize these diagrams; I recreated and slightly modified them.

The first diagram shows three virtual machines. VM001 and VM002 are hosted on ESX01 and VM003 is hosted on ESX02. Each VM has its disk shares set to a value of 1000. As Storage I/O Control is disabled, there is no mechanism to regulate the I/O on a datastore level. As shown at the bottom by the Storage Array Queue, in this case VM003 ends up getting more resources than VM001 and VM002, while from a shares perspective all of them were entitled to the exact same amount of resources. Please note that both Device Queue Depths are 32, which is the key to Storage I/O Control, but I will explain that after the next diagram.

As stated without SIOC there is nothing that regulates the I/O on a datastore level. The next diagram shows the same scenario but with SIOC enabled.

After SIOC has been enabled it will start monitoring the datastore. If the specified latency threshold has been reached (default: an average I/O latency of 30 ms) for the datastore, SIOC will be triggered to take action and resolve this possible imbalance. SIOC will then limit the number of I/Os a host can issue. It does this by throttling the host device queue, which is shown in the diagram and labeled as “Device Queue Depth”. As can be seen, the queue depth of ESX02 is decreased to 16. Note that SIOC will not go below a device queue depth of 4.

Before it limits a host it will of course need to know what to limit it to. The “datastore-wide disk scheduler” will sum up the disk shares for each of the VMDKs. In the case of ESX01 that is 2000, and in the case of ESX02 it is 1000. Next, the “datastore-wide disk scheduler” will calculate the I/O slot entitlement based on the host-level shares, and it will throttle the queue. Now I can hear you think: what about the VM, will it be throttled at all? Well, the VM is controlled by the host local scheduler (also sometimes referred to as SFQ), and resources on a per-VM level will be divided by the host local scheduler based on the VM-level shares. A small worked example of the host-level split follows below.
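To make the numbers from the diagrams concrete, here is a small Python sketch of the host-level split. The proportional division shown here is my own simplification of the “I/O slot entitlement” computation, not the actual datastore-wide disk scheduler code, but it reproduces the 32/16 outcome from the throttled diagram.

def host_queue_depths(host_shares, total_window):
    """Divide the datastore-wide window across hosts proportionally to shares."""
    total = sum(host_shares.values())
    # SIOC never throttles a host below a device queue depth of 4.
    return {host: max(4, round(total_window * shares / total))
            for host, shares in host_shares.items()}

# ESX01 runs VM001 + VM002 (1000 shares each), ESX02 runs VM003 (1000 shares).
# With an aggregate window of 48 slots the split is 32 vs 16:
print(host_queue_depths({"ESX01": 2000, "ESX02": 1000}, total_window=48))
# -> {'ESX01': 32, 'ESX02': 16}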

I guess to conclude, all there is left to say is: enable SIOC and benefit from its fairness mechanism… You can’t afford a single VM flooding your array. SIOC is the foundation of your (virtual) storage architecture, use it!

References:
  • PARDA whitepaper
  • Storage I/O Control whitepaper
  • VMworld Storage DRS session
  • VMworld Storage I/O Control session

