
Yellow Bricks

by Duncan Epping


performance

No one likes queues

Duncan Epping · Mar 4, 2011 ·

Well, depending on what type of queues we are talking about of course, but in general no one likes queues. We are, however, confronted with queues on a daily basis, especially in compute environments. I was having a discussion with an engineer about storage queues and he sent me the following overview, which I thought was worth sharing as it shows how traffic flows from queue to queue with the default limits on the VMware side:

From top to bottom:

  • Guest device driver queue depth (LSI=32, PVSCSI=64)
  • vHBA (Hard coded limit: LSI=128, PVSCSI=255)
  • Disk.SchedNumReqOutstanding=32 (VMkernel)
  • VMkernel Device Driver (FC=32, iSCSI=128, NFS=256, local disk=32)
  • Multiple SAN/Array Queues (Check Chad’s article for more details but it includes port buffers, port queues, disk queues etc (might be different for other storage vendors))

The following is probably worth repeating or clarifying:

The PVSCSI default queue depth is 64. You can increase it to 255 if required, but please note that it is a per-device queue depth, and keep in mind that increasing it is only truly useful when it is increased all the way down the stack and the array controller supports it. There is no point in increasing the queue depth at a single layer when the other layers are not able to handle it, as that would only push the delay down one layer. As explained in an article a few years ago, Disk.SchedNumReqOutstanding is enforced when multiple VMs issue I/Os to the same physical LUN; when it is a single VM it does not apply and it will be the device driver queue that limits it.
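To make this a bit more tangible, below is a rough sketch of where these knobs live on a classic ESX 4.x host. The option names, defaults and guest-side parameters can differ per release and per driver, so treat the values as examples rather than recommendations:

    # Watch the per-device queue depth (DQLEN) and queued I/Os live:
    esxtop                      # press 'u' for the disk device view, check DQLEN/QUED

    # Check and, if required, raise the VMkernel per-LUN scheduler limit (default 32):
    esxcfg-advcfg -g /Disk/SchedNumReqOutstanding
    esxcfg-advcfg -s 64 /Disk/SchedNumReqOutstanding

    # Example only: the PVSCSI queue depth is raised inside the guest, for instance
    # on a Linux guest by reloading the vmw_pvscsi module with larger values
    # (exact parameters depend on the guest OS and driver version):
    modprobe -r vmw_pvscsi && modprobe vmw_pvscsi cmd_per_lun=254 ring_pages=32

And again, only raise these values when every layer below, including the array, can actually handle the extra outstanding I/Os.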

I hope this provides a bit more insight into how the traffic flows. And by the way, if you are worried that a single VM will flood one of those queues, there is an answer for that: it is called Storage IO Control!

RE: VMFS 3 versions – maybe you should upgrade your vmfs?

Duncan Epping · Feb 25, 2011 ·

I was just answering some questions on the VMTN forum when someone asked the following question:

Should I upgrade our VMFS LUNs from 3.21 (some on 3.31) to 3.46? What benefits will we get?

This person was referred to an article by Frank Brix Pedersen who states the following:

Ever since ESX 3.0 we have used the VMFS3 filesystem and we are still using it on vSphere. What most people don’t know is that there actually are sub-versions of VMFS.

  • ESX 3.0 VMFS 3.21
  • ESX 3.5 VMFS 3.31 key new feature: optimistic locking
  • ESX 4.0 VMFS 3.33 key new feature: optimistic IO

The good thing about it is that you can use all features on all versions. In ESX 4 thin provisioning was introduced, but it does not need the VMFS to be 3.33; it will still work on 3.21. The changes in VMFS are primarily about the handling of SCSI reservations. SCSI reservations happen a lot: creating a new VM, growing a snapshot delta file, growing a thin provisioned disk, etc.

I want to make sure everyone realizes that this is actually not true. All the enhancements made in 3.5, 4.0 and even 4.1 are not implemented at the filesystem level, but rather at the VMFS driver level or through the addition of specific filters or even a new datamover.

Just to give an extreme example: You can leverage VAAI capabilities on a VMFS volume with VMFS filesystem version 3.21, however in order to invoke VAAI you will need the VMFS 3.46 driver. In other words, a migration to a new datastore is not required to leverage new features!
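If you want to check which sub-version a volume was actually formatted with, vmkfstools will show it; a quick sketch (the datastore path is just an example):

    # The first line of the output shows the on-disk version, e.g. "VMFS-3.46 file system":
    vmkfstools -Ph /vmfs/volumes/datastore1

But again, what matters for features like VAAI is the driver on the host, not the sub-version the volume was originally formatted with.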

Storage vMotion performance difference?

Duncan Epping · Feb 24, 2011 ·

Last week I wrote about the different datamovers being used when a Storage vMotion is initiated and the destination VMFS volume has a different blocksize than the source VMFS volume. Not only does it make a difference in terms of reclaiming zero space, but as mentioned it also makes a difference in performance. The question that always arises is: how much difference does it make? Well, this week there was a question on the VMTN community regarding a Storage vMotion from FC to FATA and its slow performance. Of course within a second FATA was blamed, but that wasn’t actually the cause of the problem. The FATA disks were formatted with a different blocksize, and that caused the legacy datamover to be used. I asked Paul, who started the thread, if he could check what the difference would be when equal blocksizes were used. Today Paul did his tests and blogged about it here; I copied the table below, which shows the performance improvement that the fs3dm datamover (please note that VAAI is not used… this is purely a different datamover) brought:

From                           | To                             | Duration in minutes
FC datastore (1MB blocksize)   | FATA datastore (4MB blocksize) | 08:01
FATA datastore (4MB blocksize) | FC datastore (1MB blocksize)   | 12:49
FC datastore (4MB blocksize)   | FATA datastore (4MB blocksize) | 02:36
FATA datastore (4MB blocksize) | FC datastore (4MB blocksize)   | 02:24

As I explained in my article about the datamover, the difference is caused by the fact that the data doesn’t travel all the way up the stack… and yes the difference is huge!
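If you want to verify up front whether the source and destination share the same blocksize, and with that whether fs3dm can be used, vmkfstools shows it as well; a quick sketch (the datastore names are just examples):

    # The "file block size" field shows the blocksize the volume was formatted with;
    # fs3dm is only used when source and destination blocksizes match.
    vmkfstools -Ph /vmfs/volumes/FC-datastore-01 | grep -i "block size"
    vmkfstools -Ph /vmfs/volumes/FATA-datastore-01 | grep -i "block size"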

Anti-virus and the impact in virtualized environments

Duncan Epping · Feb 16, 2011 ·

I was reading Richard Garsthagen’s article about anti-virus solutions yesterday and decided that this deserves a bit of extra attention, as it is an often overlooked area when it comes to architecture and impact. As Richard points out, the difference in terms of the load and overhead that these solutions generate is enormous. All of these combined will most definitely result in an increase of the consolidation ratio. Not only that, but it will also seriously lower the risk during, for instance, a VDI boot storm; also think about the impact of HA initiated restarts, which could cause an enormous amount of IOps and CPU/memory overhead and in turn impact the other virtual machines.

I guess there is no point in rehashing what is written in the whitepaper or what Richard wrote; I just want to point out the whitepaper as I believe it is a good read. As always, results may vary, but it is pretty obvious that from an architectural and operational perspective End Point Security is most definitely worth looking into, and I cannot wait for more vendors to jump on the bandwagon. Download the Tolly report here. (I personally found the disk results very interesting…)

Storage Performance

Duncan Epping · Feb 3, 2011 ·

This is just a post to make it easier to find these excellent articles/threads on VMTN about measuring storage performance:

  • Scott Drummonds – Storage System Performance Analysis with Iometer
  • VMTN Unofficial Storage Perf Thread I – http://communities.vmware.com/thread/73745
  • VMTN Unofficial Storage Perf Thread II – http://communities.vmware.com/thread/19784

All of these have one “requirement”: that Iometer is used.

Another resource I want to point out is the excellent set of scripts that Clinton Kitson created, which collect and process vscsiStats data. That by itself is cool, but what is planned for the next update is even cooler: live plotted 3D graphs. Can’t wait for that one to be released!
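For those who have not played with vscsiStats before, the basic collection flow on the ESX console looks roughly like this (the world group ID is just a placeholder, yours will differ):

    # List the running VMs and their world group IDs:
    vscsiStats -l

    # Start collecting for a specific VM, print a histogram (for instance latency), then stop:
    vscsiStats -s -w 118739
    vscsiStats -p latency -w 118739
    vscsiStats -x -w 118739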


