
Yellow Bricks

by Duncan Epping


Limit a VM from an IOps perspective

Duncan Epping · Apr 29, 2013

Over the last couple of weeks I heard people either asking how to limit a VM from an IOps perspective or claiming that Storage IO Control (SIOC) allows you to limit VMs. As I have pointed at least three folks to this info, I figured I would share it publicly.

There is an IOps limit setting available as an option on the virtual disk… This is what allows you to limit a virtual machine / virtual disk to a specific number of IOps. Note that this limit is enforced (in vSphere 5.1 and prior) by the local host scheduler, also known as SFQ (start-time fair queueing). One thing to realize is that when you set a limit on multiple virtual disks of a virtual machine, all of these limits are added up and the combined total becomes your threshold. In other words:

  • Disk01 – 50 IOps limit
  • Disk02 – 200 IOps limit
  • Combined total: 250 IOps limit
  • If Disk01 only uses 5 IOps then Disk02 can use 245 IOps!

There is one caveat though: the “combined total” only applies to disks that are stored on the same datastore. So if you have 4 disks spread across 4 datastores, each individual limit applies separately.
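To make this pooling behavior concrete, here is a minimal Python sketch (purely illustrative, not VMware code) that models how per-disk limits on the same datastore collapse into one shared budget:

from collections import defaultdict

def effective_iops(disks):
    """disks: list of (name, datastore, limit, demand) tuples.
    Limits of all disks on the same datastore are summed into one
    shared budget, which any disk on that datastore may draw from."""
    pools = defaultdict(int)            # datastore -> combined IOps budget
    for _, ds, limit, _ in disks:
        pools[ds] += limit
    granted = {}
    for name, ds, _, demand in disks:
        take = min(demand, pools[ds])   # draw from the shared pool
        pools[ds] -= take
        granted[name] = take
    return granted

# Disk01 only asks for 5 IOps, so Disk02 can burst past its own 200 limit:
print(effective_iops([("Disk01", "DS1", 50, 5),
                      ("Disk02", "DS1", 200, 245)]))
# -> {'Disk01': 5, 'Disk02': 245}

Put the two disks on different datastores in the input and Disk02 is capped at 200 again, matching the caveat above.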

More details can be found in this KB article: http://kb.vmware.com/kb/1038241
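For completeness: the limit can also be set through the API rather than the UI. Below is a hedged pyVmomi sketch; the vCenter address, credentials, VM name (“heavy-hitter”) and the 200 IOps value are all placeholders, and it assumes storageIOAllocation is populated on each disk (it is by default):

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- replace with your own environment.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.local", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Look up the VM by name ("heavy-hitter" is a hypothetical example).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "heavy-hitter")
view.DestroyView()

# Set a 200 IOps limit on every virtual disk (-1 would mean unlimited).
changes = []
for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualDisk):
        dev.storageIOAllocation.limit = 200
        changes.append(vim.vm.device.VirtualDeviceSpec(
            operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
            device=dev))

# Returns a Task; a real script would wait for it to complete.
vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=changes))
Disconnect(si)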


Comments

  1. Steffen says

    11 December, 2013 at 12:01

    Hey Duncan, do you have any reference documenting that the IOPS limits are added up when multiple disks are on the same datastore, besides the KB article? Couldn’t find anything about it and had never even heard about it.
    This really ***** in our situation. We have a heavy-hitter VM on an SDRS cluster and were forced to use IOPS limits on its disks. But the limits don’t seem to get applied properly. Now it makes sense why. But the individual disks get moved around by SDRS, so sometimes the VM’s disks are spread around and sometimes they are all on the same datastore. In that case the IOPS limits don’t make sense and don’t help us :/

  2. Duncan Epping says

    11 December, 2013 at 19:20

    No, I don’t, to be honest.

  3. Steffen says

    12 December, 2013 at 12:09

    Well, thanks anyway for your response. Gotta take a deeper look at it…

  4. Ariel Liguori says

    7 March, 2014 at 16:24

    Guys, has anyone got a KB or something confirming this?

    • Steffen says

      19 March, 2014 at 10:37

      Hi Ariel, only the KB mentioned by Duncan; Example 2 in there is especially useful for understanding the behaviour.

  5. Thejas K V says

    25 March, 2014 at 08:55

    I had a scenario with a VM with three vdisks and the following IOPS limits:
    disk1 (OS disk) – 16
    disk2 – 200
    disk3 – 100

    All the disks are located on the same datastore (a single LUN).

    Pumped the IO using Iometer (same workload for disk2 and disk3) inside the GOS, and I was expecting the cmds/sec for disk2 to stay under 200 and disk3 under 100. I thought the value set in the UI would be used to control the IOPS of each individual vdisk, but got to know that my understanding was wrong.

    The actual behavior is that the allocated IOPS are added up (as all vdisks are on the same datastore), and that sum is treated as the IOPS limit of the VM (not of the individual vdisks) against the datastore where they reside.

    So in the above case, 16 + 100 + 200 = 316. You will see disk2’s cmds/sec go above 200, but the total IOPS going out of the VM will not exceed 316.

  6. Thejas K V says

    25 March, 2014 at 17:12

    I am not seeing any difference when shares are set for a VM. Here are the details of my configuration.

    Have a host running two VMs. Each VM has three disks.

    – I set the IOPS limit on disk1 to 16, disk2 to 16 and disk3 to 300. Note: this is done on both VMs.
    Note: Iometer with 100% write is running in both VMs against disk3.
    – Changed the shares of all the disks in VM1 to 2000 (so the other VM, VM2, remains at the default share value of 1000).

    As per my understanding, I expected the cmds/sec (esxtop) of VM1 to be better than VM2, as VM1 has more shares than VM2.

    But I don’t see any difference in cmds/sec between the VMs. The value is 44 for both (I mean disk3, i.e. scsi0:2).

    Please help me to understand more.

    • Thejas K V says

      26 March, 2014 at 09:30

      I got it working. Confirmed that shares only play a role when there is contention for IOPS between VMs, i.e. you will see the difference when the adapter queue depth on the host is full. I confirmed the behavior by running the two VMs again, but this time specifying 64 outstanding I/Os in Iometer (which is equal to the adapter queue depth).

      The cmds/sec (in esxtop) of both VMs were ~500 without any shares set. I changed the share value of VM1 to high and saw its cmds/sec jump to 1565, while VM2 was adjusted down to 181.
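What Thejas confirms above is worth restating: shares are a proportional mechanism, so they only change anything once total demand exceeds what the device queue can service. A rough Python sketch of that idea (illustrative only, not the actual SFQ scheduler):

def divide_by_shares(queue_depth, vms):
    """vms: dict name -> (shares, demand). If combined demand fits
    within the queue depth there is no contention and everyone simply
    gets their demand; only under contention do shares set the split."""
    if sum(d for _, d in vms.values()) <= queue_depth:
        return {name: d for name, (_, d) in vms.items()}   # no contention
    total_shares = sum(s for s, _ in vms.values())
    return {name: queue_depth * s // total_shares
            for name, (s, _) in vms.items()}

# Light load: both VMs get what they ask for, shares are invisible.
print(divide_by_shares(64, {"VM1": (2000, 10), "VM2": (1000, 10)}))
# Saturated queue: VM1's higher shares now buy it twice VM2's slots.
print(divide_by_shares(64, {"VM1": (2000, 64), "VM2": (1000, 64)}))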
