
Yellow Bricks

by Duncan Epping


esxi

Queue depth throttling

Duncan Epping · Apr 6, 2009 ·

Most of you have hopefully read about the new queue depth throttling feature in the release notes of ESX 3.5 Update 4, which was released last week. A couple of customers asked me whether it would be beneficial for them to set it up.

Currently queue depth throttling is only supported for 3PAR Storage Arrays.

This, of course, doesn’t mean that it will not work with other arrays. It actually does… but it probably hasn’t been tested to the full extent. Again, keep in mind that it’s currently not supported with any array other than 3PAR.

Now, what’s this queue depth throttling about? The knowledge base article actually has a good explanation of what it does:

VMware ESX 3.5 Update 4 introduces an adaptive queue depth algorithm that adjusts the LUN queue depth in the VMkernel I/O stack. This algorithm is activated when the storage array indicates I/O congestion by returning a BUSY or QUEUE FULL status. These status codes may indicate congestion at the LUN level or at the port (or ports) on the array. When congestion is detected, VMkernel throttles the LUN queue depth. VMkernel attempts to gradually restore the queue depth when congestion conditions subside.

In layman’s terms: it’s an algorithm for managing queue sizes. When the array indicates it’s busy and/or that the queue is full, ESX cuts the size of the queue in half so the array isn’t flooded with requests and can recover to a normal situation. When the array gives the green light, the size of the queue is gradually increased again until the maximum specified queue depth has been reached.
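VMware doesn’t publish the exact algorithm, but the halve-on-congestion, creep-back-on-success behavior described above can be sketched roughly as follows (class name, step sizes, and the maximum depth of 32 are my assumptions, not VMware’s values):

```python
MAX_QUEUE_DEPTH = 32   # assumed configured LUN queue depth
MIN_QUEUE_DEPTH = 1

class LunQueue:
    """Hypothetical sketch of adaptive LUN queue depth throttling."""

    def __init__(self, max_depth=MAX_QUEUE_DEPTH):
        self.max_depth = max_depth
        self.depth = max_depth

    def on_congestion(self):
        """Array returned BUSY or QUEUE FULL: cut the queue depth in half."""
        self.depth = max(MIN_QUEUE_DEPTH, self.depth // 2)

    def on_success(self):
        """Array is keeping up again: restore the depth one slot at a time."""
        self.depth = min(self.max_depth, self.depth + 1)

q = LunQueue()
q.on_congestion()           # 32 -> 16
q.on_congestion()           # 16 -> 8
for _ in range(24):
    q.on_success()          # gradually climbs back toward the maximum
print(q.depth)              # 32
```

The asymmetry is the whole point: backing off aggressively prevents flooding a struggling array, while restoring slowly avoids immediately re-triggering the congestion.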

Repairing your vmdk header files…

Duncan Epping · Apr 3, 2009 ·

Increasing the size of a disk while a snapshot exists, or deleting the wrong folder on your VMFS volume: it’s something that has probably happened to all of us.

This usually means that you will either need to edit the current vmdk header file or even recreate it. Although it’s not a difficult task, it’s still error prone because it’s done by hand. The procedure is outlined in this KB article for those interested.
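For reference, a VMFS vmdk descriptor (“header”) file is just a small text file describing the flat file that holds the actual data. A simplified illustration of what one looks like (all values here are made up for the example):

```
# Disk DescriptorFile
version=1
CID=fffffffe
parentCID=ffffffff
createType="vmfs"

# Extent description
RW 25165824 VMFS "testvm-flat.vmdk"

# The Disk Data Base
ddb.geometry.cylinders = "1566"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
ddb.adapterType = "lsilogic"
```

The extent line must match the size (in 512-byte sectors) and the name of the -flat file exactly, which is why a mismatched or missing descriptor renders an otherwise intact disk unusable.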

Eric Forgette (NetApp), also known for mbralign and mbrscan, wrote a script that automates the recreation of a vmdk header file. The script also gives you the option to verify a header and, if it’s corrupt, fix it. Eric posted his script on the NetApp community forums and it can be found here.

I especially like the “fix” option; the following is an example of its output:

[root@x3 root]# vmdkdtool /vmfs/volumes/test/testvm/fixed-template.vmdk

vmdkdtool version 1.0.090402.
Copyright (c) 2009 NetApp, Inc.
All rights reserved.

/vmfs/volumes/test/testvm/fixed-template-flat.vmdk is 12884902400 bytes (12.0000004768372 GB)

size = 25165825 (current 25125)
sectors = 63 (current value 21)
heads = 255 (current value 3)
cylinders = 1566 (current value 106)

NOTE: A backup of the file will be made if you choose yes.
Shall I fix the descriptor file? yes
Creating a backup of /vmfs/volumes/test/testvm/fixed-template.vmdk
Fixed.
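The geometry values in that output follow directly from the size of the flat file. A quick sketch of the arithmetic, assuming 512-byte sectors and the common 255-head, 63-sectors-per-track BIOS geometry translation:

```python
flat_file_bytes = 12884902400          # size of fixed-template-flat.vmdk
sector_size = 512                      # bytes per sector
heads, sectors_per_track = 255, 63     # standard BIOS geometry translation

total_sectors = flat_file_bytes // sector_size
cylinders = total_sectors // (heads * sectors_per_track)

print(total_sectors)  # 25165825, matching "size" in the tool output
print(cylinders)      # 1566, matching "cylinders" in the tool output
```

This is exactly the kind of bookkeeping that’s easy to get wrong when rebuilding a descriptor by hand, and why a tool that derives it from the flat file is so useful.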

Head over to the NetApp communities and pick it up; it’s definitely a must-have for your toolkit.

Virtual Geek Week?

Duncan Epping · Apr 3, 2009 ·

It must have been Virtual Geek week this week! I guess most of you already know Virtual Geek, and if you don’t, you’ve been missing out on the good stuff. Virtual Geek is maintained by Chad Sakac of EMC, and let’s just say there’s a reason why his blog is called “Virtual Geek”. Chad posted a series of blog articles that are a must-read for anyone interested in storage related to VMware, and in storage/VMware in general.

It started out with the “VMFS best practices and counter FUD” article, where he sets the facts straight and debunks several myths, such as the maximum number of VMs per VMFS volume and the use of extents. Besides countering this FUD, there are also some very valuable tips in the article, for instance on the advanced setting “Disk.SchedNumReqOutstanding” and the why/where/when of using it.

In his second post this week he revealed that the upcoming release of ESX/vCenter (vSphere) will include the counterpart of the EMC Storage Viewer (a vCenter plugin; a YouTube demo can be found here). For all CLARiiON/Celerra customers who are planning on upgrading to vSphere, a nice “little” extra!

The third one was the one I had personally been waiting for: the brand new version of the Celerra VSA. If you want to run a virtual “virtual environment”, this virtual storage appliance is a must-have. Especially if you want to test SRM, this VSA will come in handy. Be sure to also download the how-to guide that Chad provided in the “HOWTO 401” article.

Numbers four and five deal with multipathing and MRU behavior. I fully agree that understanding how MRU works is essential if you are using the policy. Post number five contains the script that is demoed in post four. The script load balances the LUNs on the back end of the array (across the storage processors) and of course makes sure this is reflected on ESX for optimal performance.

Let’s hope there’s more to come in the coming weeks…

Storage VMotion, exploring the next version of ESX/vCenter

Duncan Epping · Apr 2, 2009 ·

I was exploring the next version of ESX/vCenter again today and did a Storage VMotion via the vSphere client. I decided to take a couple of screenshots to get you guys acquainted with the new look and layout.

Doing a Storage VMotion via the GUI is nothing spectacular, because we’ve all used the third-party plugins. But changing a disk from thick to thin is. With vSphere it will be possible to migrate to thin provisioned disks, which can and will save disk space and might be desirable for servers that have low disk utilization and few disk changes.

Disk latency and esxtop

Duncan Epping · Apr 1, 2009 ·

We just had a very good and interesting VMTN Podcast on virtualized MS SQL performance and best practices. One of the questions was about disk performance. Hemant Gaidhani talked about esxtop and how to discover possible performance issues, and specifically mentioned latency. I had never really looked into this section of esxtop, so I did a quick search, and of course “Interpreting esxtop Statistics” answers which counters to watch and what each counter represents:

Section 4.2.2 Latency Statistics
This group of counters report latency values measured at three different points in the ESX storage stack. In the context of the figure below, the latency counters in esxtop report the Guest, ESX Kernel and Device latencies. These are under the labels GAVG, KAVG and DAVG, respectively. Note that GAVG is the sum of DAVG and KAVG counters.

I recommend reading the rest of section 4.2.2 to anyone looking for more in-depth information on esxtop and storage performance. Also read pages 14 and 15 of Hemant’s document on SQL Server performance and best practices. Another great read and tip from Hemant was the “Scalable Storage Performance” whitepaper.
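The relationship between the three counters is worth internalizing; a trivial sketch (the idea that sustained KAVG above a couple of milliseconds suggests queuing inside ESX is a common rule of thumb, not an official threshold):

```python
def guest_latency_ms(davg, kavg):
    """GAVG, the latency the guest observes, is the sum of the device
    latency (DAVG) and the VMkernel latency (KAVG), per the esxtop docs."""
    return davg + kavg

# Example: 20 ms spent at the device plus 2 ms spent in the VMkernel
gavg = guest_latency_ms(20.0, 2.0)
print(gavg)  # 22.0

# If KAVG stays well above ~2 ms for long periods, I/O is queuing inside
# ESX itself, e.g. a LUN queue depth too small for the workload.
```

In other words, when GAVG is high, comparing DAVG against KAVG tells you whether the time is being lost at the array or inside the host.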

