Yellow Bricks

by Duncan Epping


vstorage

Disk.SchedNumReqOutstanding the story

Duncan Epping · Jun 23, 2011 ·

There has been a lot of discussion in the past around Disk.SchedNumReqOutstanding, what its value should be, and how it relates to the queue depth. Jason Boche wrote a whole article about when Disk.SchedNumReqOutstanding (DSNRO) is used and when it is not, and I would explain it as follows:

When two or more virtual machines are issuing I/Os to the same datastore, Disk.SchedNumReqOutstanding limits the number of I/Os that will be issued to the LUN.

So what does that mean? It took me a while before I fully got it, so let's try to explain it with an example. This is basically how the VMware I/O scheduler (Start-Time Fair Queueing, aka SFQ) works.

Say you have set the queue depth for your HBA to 64 and a single virtual machine is issuing I/Os to a datastore. As it is just a single VM, up to 64 I/Os will end up in the device driver immediately. In most environments, however, LUNs are shared by many virtual machines, and in most cases these virtual machines should be treated equally. When two or more virtual machines issue I/O to the same datastore, DSNRO kicks in. However, it will only throttle the queue depth when the VMkernel has detected that the threshold of a certain counter has been reached. That counter is Disk.SchedQControlVMSwitches, and by default it is set to 6, meaning that the VMkernel needs to detect 6 VM switches while handling I/O before it throttles the queue down to the value of Disk.SchedNumReqOutstanding, which defaults to 32. (A VM switch means that the selected I/O did not come from the same VM as the previous I/O; the VMkernel needs to see this 6 times.)
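As a side note, both values can be checked from the console with esxcfg-advcfg, the same tool used for the LVM options later in this series. A quick sketch, assuming the option paths match the names shown in the advanced settings UI:

esxcfg-advcfg -g /Disk/SchedNumReqOutstanding
esxcfg-advcfg -g /Disk/SchedQControlVMSwitches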

The reason for the throttling is that the VMkernel cannot control the order of the I/Os that have already been issued to the driver. Imagine VM A issuing a lot of I/Os and another, VM B, issuing just a few. VM A would end up using most of the full queue depth all the time. Every time VM B issues an I/O it will be picked up quickly by the VMkernel scheduler (which is a different topic) and sent to the driver as soon as another I/O completes there, but it will still queue up behind the 64 I/Os already in the driver, which adds significantly to its I/O latency. By limiting the number of outstanding requests we allow the VMkernel to schedule VM B's I/O sooner into the I/O stream from VM A, and thus we reduce the latency penalty for VM B.
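To put a rough, made-up number on that: if the device completes an I/O every 0.5 ms on average, landing behind 64 I/Os that are already in the driver adds roughly 64 × 0.5 = 32 ms to VM B's latency, while with the queue throttled down to 32 the added wait would be at most 32 × 0.5 = 16 ms.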

That brings us to the second part of all the statements out there: should we really set Disk.SchedNumReqOutstanding to the same value as the queue depth? Well, if you want your I/Os processed as quickly as possible without any fairness, you probably should. But if you have mixed workloads on a single datastore and wouldn't want virtual machines to incur excessive latency just because a single virtual machine issues a lot of I/Os, you probably shouldn't.
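If you do decide fairness is not a concern for a particular environment, the value can be changed with esxcfg-advcfg as well; a sketch (and, as I conclude below, don't touch this unless VMware GSS asks you to):

esxcfg-advcfg -s 64 /Disk/SchedNumReqOutstanding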

Is that it? No, not really; there are several questions that remain unanswered.

  • What about sequential I/O in the case of Disk.SchedNumReqOutstanding?
  • How does the VMkernel know when to stop using Disk.SchedNumReqOutstanding?

Let's tackle the sequential I/O question first. By default the VMkernel will allow a VM to issue up to 8 sequential commands in a row (controlled by Disk.SchedQuantum), even when it would normally seem fairer to take an I/O from another VM. This is done to avoid destroying the sequentiality of VM workloads, because I/Os to sectors near the previous I/O are handled an order of magnitude faster than I/Os to sectors far away (10x is not unusual when excluding cache effects, or when caches are small compared to the disk size). But what is considered sequential? If the next I/O is less than 2000 sectors away from the current one, it is considered sequential (controlled by Disk.SectorMaxDiff).
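Both of these can again be verified with esxcfg-advcfg, assuming the same /Disk/... path convention as above:

esxcfg-advcfg -g /Disk/SchedQuantum
esxcfg-advcfg -g /Disk/SectorMaxDiff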

Now, if for whatever reason one of the VMs becomes idle, you would more than likely prefer your active VM to be able to use the full queue depth again. This is what Disk.SchedQControlSeqReqs is for. By default it is set to 128, meaning that when a VM has been able to issue 128 commands without any VM switches, Disk.SchedQControlVMSwitches is reset to 0 and the active VM can use the full queue depth of 64 again. With our example above in mind, the idea is that if VM B is issuing I/Os very rarely (fewer than 1 in every 128), we let VM B pay the higher latency penalty, because presumably it is not disk bound anyway.
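And to complete the set, the reset threshold itself, again as a sketch:

esxcfg-advcfg -g /Disk/SchedQControlSeqReqs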

To conclude, now that the coin has finally dropped on Disk.SchedNumReqOutstanding, I strongly feel that these advanced settings should not be changed unless specifically requested by VMware GSS. Changing these values can impact fairness within your environment and could lead to unexpected behavior from a performance perspective.

I would like to thank Thor for all the help he provided.

List of VAAI capable storage arrays?

Duncan Epping · Jun 6, 2011 ·

I was browsing the VMTN community and noticed a great tip from my colleague Mostafa Khalil, and I believe it is worth sharing with you. The original question was: “Does anybody have a list of which arrays support VAAI (or a certain subset of the VAAI features)?”. Mostafa updated the post a couple of days back with the following response, which also shows the capabilities of the 2.0 version of the VMware HCL:

A new version of the Web HCL will provide search criteria specific to VAAI.

As of this date, the new interface is still in “preview” stage. You can access it by clicking the “2.0 preview” button at the top of the page which is at: http://www.vmware.com/go/hcl/

  • The criteria are grouped under Features Category, Features and Plugins.
  • Features Category: Choice of “All” or “VAAI-Block”
  • Features: Choice of “All”, “Block Zero”, “Full Copy”, “HW Assisted Locking” and more.
  • Plugins: Choice of “All” and any of the listed plugins.

[Screenshot: the HCL 2.0 preview showing the VAAI search criteria]

Unfortunately there appear to be some glitches when it comes to listing all the arrays correctly, but I am confident those will be fixed soon… Thanks Mostafa for the great tip.

Surprising results FC/NFS/iSCSI/FCoE…

Duncan Epping · May 16, 2011 ·

I received a preliminary copy of this report a couple of weeks ago, but since then nothing has changed. NetApp took the time to compare FC against FCoE, iSCSI and NFS. Like most of us, probably, I still had the VI3 mindset and expected FC to come out on top. Fact of the matter is that everything is so close that the differences are negligible, and TR-3916 shows that regardless of the type of data-access protocol used you can get the same mileage. I am glad NetApp took the time to test this, using a variety of scenarios. It is no longer about which protocol works best or drives the most performance… no, it is about what is easiest for you to manage! Are you an NFS shop? No need to switch to FC anymore. Do you like the simplicity of iSCSI? Go for it…

Thanks NetApp for this valuable report. Although it of course talks about NetApp, it is useful material for all of you to read!

Disk.UseDeviceReset do I really need to set it?

Duncan Epping · Apr 13, 2011 ·

I noticed a discussion on an internal mailing list about the advanced setting “Disk.UseDeviceReset” as it is mentioned in the FC SAN guide. The myth that you need to set this setting to “0” in order for Disk.UseLunReset to function properly has been floating around too long. Let's first discuss what this option does. In short, when an error occurs or a SCSI reservation needs to be cleared, a SCSI reset will be sent. We can do this either at the device level or at the LUN level, where device level means that the reset is sent to all disks/targets on a bus. As you can imagine this can be disruptive, so resetting the whole SCSI bus should be avoided when there is no need for it. Here is what will happen with the different combinations of settings:

  • Disk.UseDeviceReset = 0  &  Disk.UseLunReset = 1  --> LUN Reset
  • Disk.UseDeviceReset = 1  &  Disk.UseLunReset = 1  --> LUN Reset
  • Disk.UseDeviceReset = 1  &  Disk.UseLunReset = 0  --> Device Reset

I hope that this makes it clear that there is no point in changing the Disk.UseDeviceReset setting as Disk.UseLunReset overrules it.
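If you want to verify what your hosts are running with, both options can be queried from the console; a sketch, assuming the /Disk/... paths mirror the setting names:

esxcfg-advcfg -g /Disk/UseDeviceReset
esxcfg-advcfg -g /Disk/UseLunReset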

PS: I filed a documentation bug and hope that it will be reflected in the doc soon.

Per-volume management features in 4.x

Duncan Epping · Apr 7, 2011 ·

Last week I noticed that one of the articles I wrote in 2008 is still very popular. That article explains the various possible combinations of the advanced settings “EnableResignature” and “DisallowSnapshotLUN”. For those who don't know what these options do in a VI3 environment: they allow you to access a volume which is marked as “unresolved” because the VMFS metadata doesn't match the physical properties of the LUN. In other words, the LUN that you are trying to access could be a snapshot of a LUN or a copy (think replication), and vSphere is denying you access.

These advanced options were often used in DR scenarios where a fail-over of a LUN needed to occur, or when, for instance, a virtual machine needed to be restored from a snapshot of a volume. Many of our users would simply change EnableResignature to 1 or DisallowSnapshotLUN to 0 and force the LUN to be available again. Those readers who paid attention noticed that I used “LUN” instead of “LUNs”, and here lies one of the problems… these settings were global. That means that ANY LUN marked as “unresolved” would be resignatured or mounted. This unfortunately led, all too often, to problems where incorrect volumes were mounted or resignatured. Those volumes should probably not have been presented in the first place, but that is not the point; the point is that a global setting increases the chance that issues like these occur. With vSphere this problem was solved, as VMware introduced “esxcfg-volume -r”.

This command enables you to resignature on a per-volume basis… and not only that: “esxcfg-volume -m” enables you to mount volumes and “-l” is used to list them (a command-line sketch follows the steps below). Of course you can also do this through the vSphere Client:

  • Click the Configuration tab and click Storage in the Hardware panel.
  • Click Add Storage.
  • Select the Disk/LUN storage type and click Next.
  • From the list of LUNs, select the LUN that has a datastore name displayed in the VMFS Label column and click Next.
    The name present in the VMFS Label column indicates that the LUN is a copy of an existing VMFS datastore.
  • Under Mount Options, select Assign a New Signature and click Next.
  • In the Ready to Complete page, review the datastore configuration information and click Finish.
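And for completeness, the command-line equivalent mentioned above, as a sketch (the datastore label is just an example; -r also accepts the VMFS UUID reported by -l):

esxcfg-volume -l
esxcfg-volume -r datastore01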

But what if I do want to resignature all these volumes at once? What if you have a corner-case scenario where this is a requirement? Well, in that case you could actually still use the advanced options, as they haven't exactly disappeared: they have been hidden in the UI (vSphere Client), but they are still around. From the command line you can still query the status:

esxcfg-advcfg -g /LVM/EnableResignature

Or you can change the global configuration option:

esxcfg-advcfg -s 1 /LVM/EnableResignature

Please note that hiding them was only the first step; they will be deprecated in a future release. It is recommended to use “esxcfg-volume” and resignature on a per-volume basis.

