Storage Performance

This is just a short post to make it easier to find these excellent articles/threads on VMTN about measuring storage performance:

All of these have one “requirement”: that Iometer is used.

Another one I wanted to point out is the excellent set of scripts that Clinton Kitson created, which collect and process vscsiStats data. That by itself is cool, but what is planned for the next update is even cooler: live plotted 3D graphs. Can’t wait for that one to be released!

Enable Storage IO Control on all Datastores!

This week I received an email from one of my readers about some weird Storage IO Control behavior in their environment. On a regular basis he would receive an error stating that an “external I/O workload has been detected on shared datastore running Storage I/O Control (SIOC) for congestion management”. He did a quick scan of his complete environment and couldn’t find any hosts connecting to those volumes. After exchanging a couple of emails about the environment I managed to figure out what triggered this alert.

The cause sounds very logical, but it is probably one of the most commonly made mistakes… sharing spindles. Some storage platforms carve out a volume from a specific set of spindles, which means those spindles are solely dedicated to that particular volume. Other storage platforms, however, group spindles and layer volumes across them. Simply said, they share spindles to increase performance. NetApp’s “aggregates” and HP’s “disk groups” are good examples.

This can, and probably will, cause the alarm to be triggered, as essentially an unknown workload is impacting your datastore performance. If you are designing your environment from the ground up, make sure that SIOC is enabled on all datastores that are backed by the same set of spindles as your VMFS volumes.

In an existing environment, however, this will be difficult. Don’t worry, though, that SIOC will be overly conservative and unnecessarily throttle your virtual workload: if and when SIOC detects an external workload, it will stop throttling the virtual workload, to avoid handing the external workload more bandwidth while negatively impacting the virtual workload. From a throttling perspective that looks as follows:

32 29 28 27 25 24 22 20 (detect nonVI –> Max Qdepth )
32 31 29 28 26 25 (detect nonVI –> Max Qdepth)
32 30 29 27 25 24 (detect nonVI –> Max Qdepth)
…..

Please note that the numbers above represent the device queue depth: SIOC throttles it down until the external (non-VI) workload is detected, at which point it is reset to the maximum. The example depicts a scenario where SIOC notices that the latency threshold is still exceeded, so the cycle starts again; SIOC checks latency values every 4 seconds. The question of course remains how SIOC knows that there is an external workload accessing the datastore. SIOC uses what we call a “self-learning algorithm”: it keeps track of historically observed latency, outstanding I/Os and window sizes. Based on that information it can identify anomalies, and that is what triggers the alarm.
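
To make that behavior a bit more tangible, here is a minimal conceptual sketch in Python of the throttling loop described above. It is not the actual SIOC implementation; the 30 ms threshold matches the vSphere 4.1 default, but the step sizes and the external-workload check are assumptions purely for illustration.

```python
# Conceptual sketch of SIOC-style device queue depth throttling.
# This is NOT the real SIOC algorithm; the step sizes and the
# external-workload check are illustrative assumptions.

MAX_QDEPTH = 32               # maximum device queue depth
CONGESTION_THRESHOLD_MS = 30  # default SIOC latency threshold in vSphere 4.1

def next_qdepth(qdepth, observed_latency_ms, external_workload_detected):
    """One 4-second SIOC evaluation interval (heavily simplified)."""
    if external_workload_detected:
        # A non-VI workload is hitting the same spindles: stop throttling so
        # the external workload does not win bandwidth at the VMs' expense.
        return MAX_QDEPTH
    if observed_latency_ms > CONGESTION_THRESHOLD_MS:
        # Latency above the threshold: throttle the device queue depth down.
        return max(qdepth - 2, 4)
    # Latency is fine again: slowly grow back towards the maximum.
    return min(qdepth + 1, MAX_QDEPTH)

# Mimics the shape of the pattern above: the depth steps down while latency
# stays high, then snaps back to 32 once the non-VI workload is detected.
qdepth, trace = MAX_QDEPTH, []
for tick in range(8):
    trace.append(qdepth)
    qdepth = next_qdepth(qdepth, observed_latency_ms=40,
                         external_workload_detected=(tick == 7))
print(trace, "-> detect nonVI -> Max Qdepth")
```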

To summarize:

  • Enable SIOC on all datastores that are backed by the same set of spindles
  • If you are designing a greenfield implementation, try to avoid sharing spindles between non-VMware and VMware workloads

More details about when this event could be triggered can be found in this KB article.

Storage IO Control and Storage vMotion?

I received a very good question this week to which I did not have the answer; I had a feeling, but that is not enough. The question was whether Storage vMotion would be “throttled” by Storage IO Control. As I happened to have a couple of meetings scheduled this week with the actual engineers, I asked the question and this was their answer:

Storage IO Control can throttle Storage vMotion when the latency threshold is exceeded. The reason for this is that Storage vMotion is “billed” to the virtual machine.

This basically means that if you initiate a Storage vMotion, the “process” belongs to the VM. As such, if the host is throttled, the Storage vMotion process might be throttled as well by the local scheduler (SFQ), depending on the number of shares that were originally allocated to that virtual machine. This is definitely something to keep in mind when doing a Storage vMotion of a large virtual machine, as it could increase the amount of time it takes for the Storage vMotion to complete. Don’t get me wrong, that is not necessarily a negative thing, because at the same time it prevents that particular Storage vMotion from consuming all available bandwidth.
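
To illustrate why the shares matter here, below is a minimal sketch of proportional-share queue slot allocation, in the spirit of what the local scheduler does. The VM names, share values and queue depth are made up, and the real SFQ scheduler is considerably more sophisticated.

```python
# Conceptual sketch: splitting a throttled host device queue depth across VMs
# in proportion to their disk shares. Storage vMotion I/O is billed to the VM
# that owns the disks, so it competes using that VM's share allocation.
# Names and numbers are hypothetical.

def allocate_slots(throttled_qdepth, shares_per_vm):
    total = sum(shares_per_vm.values())
    return {vm: max(1, round(throttled_qdepth * s / total))
            for vm, s in shares_per_vm.items()}

shares = {"vm-being-svmotioned": 1000, "db-vm": 2000, "web-vm": 1000}
print(allocate_slots(20, shares))
# -> roughly {'vm-being-svmotioned': 5, 'db-vm': 10, 'web-vm': 5}
# The Storage vMotion copy has to fit within the slots of its owning VM.
```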

Introducing voiceforvirtual.com

At VMworld I met up with the guys presenting the Storage I/O Control session, Irfan Ahmad and Chethan Kumar. As many of you hopefully know, Irfan has always been active in the social media space (virtualscoop.org). Chethan, however, is “new” and just started his own blog.

Chethan is a Senior Member of the Performance Engineering team at VMware. He focuses on characterizing and troubleshooting the performance of enterprise applications (mostly databases) in virtual environments using VMware products. Chethan has also studied the performance characteristics of the VMware storage stack and was one of the people who produced this great whitepaper on Storage I/O Control. Chethan just released his first article, and I am sure many excellent articles will follow. Make sure you add him to your bookmarks/RSS reader.

Running Virtual Center Database in a Virtual Machine

I just completed an interesting project. For years, we at VMware believed that SQL Server databases run well when virtualized. We have illustrated this through several benchmark studies published as white papers. It was time for us to look at real applications. One such application that can be found in most vSphere-based virtual environments is the database component of the vCenter server (the brain behind a vSphere environment). Using the vCenter database as the application and the resource-intensive tasks of the vCenter database (implemented as stored procedures in SQL Server-based databases) as the load generator, I compared the performance of these resource-intensive tasks in a virtual machine (on a vSphere 4.1 host) to that on a native server.

vStorage APIs for Array Integration aka VAAI

It seems that a lot of vendors are starting to update their firmware to enable virtualized workloads to benefit from the vStorage APIs for Array Integration, also known as VAAI. Not only are the vendors starting to show interest, the bloggers are picking up on it as well. Hence I wanted to reiterate some of the excellent details out there and make sure everyone understands what VAAI brings. Although there are currently “only” three major improvements, they can and probably will make a huge difference:

  1. Hardware Offloaded Copy
    Up to 10x faster VM deployment, cloning, Storage vMotion, etc. VAAI offloads the copy task to the array, enabling the use of native storage-based mechanisms, which decreases deployment time and, equally important, reduces the amount of data flowing between the array and the server (a conceptual sketch follows this list). Check this post by Bob Plankers and this one by Matt Liebowitz, which clearly demonstrate the power of hardware offloaded copies (reducing cloning from 19 minutes to 6 minutes!).
  2. Write Same/Zero
    10x less I/O for common tasks. Take for instance a zero-out process: it typically sends the same SCSI command many times. With this primitive enabled, the command is repeated by the storage platform itself, reducing the utilization of the server while shortening the duration of the operation.
  3. Hardware Offloaded Locking
    SCSI reservation conflicts… how many times have I heard about those during health checks, design reviews and while troubleshooting performance-related issues. Well, VAAI addresses those as well by offloading the locking mechanism to the array, also known as Atomic Test & Set (ATS). It will more than likely reduce latency in environments where thin-provisioned disks, linked clones or even VMware-based snapshots are used. ATS removes the need to lock the full VMFS volume; instead, only a block is locked when an update needs to occur.

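As a back-of-the-envelope illustration of the first primitive, the sketch below compares the fabric traffic of a host-driven clone with that of a hardware offloaded copy. The VMDK size, per-command transfer size and extent granularity are made-up numbers purely for illustration and do not come from the posts referenced above.

```python
# Conceptual illustration of VAAI hardware offloaded copy vs. a host-driven copy.
# All sizes are hypothetical and only serve to show where the data flows.

GB = 1024 ** 3
vmdk_size = 40 * GB          # size of the virtual disk being cloned
transfer_chunk = 64 * 1024   # assumed per-command transfer size

# Without VAAI: every block is read into the host and written back out,
# so the data crosses the fabric twice.
host_driven_commands = 2 * vmdk_size // transfer_chunk
host_driven_bytes = 2 * vmdk_size

# With VAAI: the host sends compact "copy this extent" commands and the array
# moves the data internally; no payload crosses the fabric.
extent_size = 1 * GB         # assumed extent granularity
offloaded_commands = vmdk_size // extent_size
offloaded_bytes = 0

print(f"host-driven: {host_driven_commands:,} commands, "
      f"{host_driven_bytes / GB:.0f} GB over the fabric")
print(f"offloaded:   {offloaded_commands:,} commands, "
      f"{offloaded_bytes / GB:.0f} GB over the fabric")
```
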
One thing I wanted to point out here, which I haven’t seen mentioned yet, is that VAAI will actually allow you to have larger VMFS volumes. Now don’t get me wrong, I am not saying that you can go beyond 2TB-512b by enabling VAAI… My point is that by having VAAI enabled you will reduce the “load” on the array and on the servers. I placed quotes around load as it will not reduce the load from a VM perspective. What I am trying to get at is that many people have limited the number of VMs per VMFS volume because of “SCSI Reservation Conflicts”. With VAAI this will change. Now you can keep your calculations “simple” and base your VMFS size on the number of eggs you are willing to have in a single basket and the sum of all VMs’ IOPS requirements.
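
As an example of such a “simple” calculation, the sketch below places VMs on a datastore based on a failure-domain limit and the summed IOPS requirement. The per-VM figures and the datastore’s IOPS capability are assumptions for illustration, not recommendations.

```python
# Simple datastore sizing sketch along the lines described above:
# size the VMFS volume by (a) how many "eggs" you accept in one basket and
# (b) whether the backing spindles can deliver the summed IOPS requirement.
# All numbers below are hypothetical.

vms = [{"name": f"vm{i:02d}", "iops": 75, "size_gb": 40} for i in range(25)]

max_vms_per_datastore = 20        # failure-domain ("eggs in one basket") limit
datastore_iops_capability = 2000  # what the backing spindles can deliver

placed = vms[:max_vms_per_datastore]
required_iops = sum(vm["iops"] for vm in placed)
required_capacity_gb = sum(vm["size_gb"] for vm in placed)

print(f"VMs placed: {len(placed)}")
print(f"required IOPS: {required_iops} (capability {datastore_iops_capability})")
print(f"required capacity: {required_capacity_gb} GB")
if required_iops > datastore_iops_capability:
    print("-> spread the VMs over more datastores / spindles")
```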

After reading about all of this goodness, I bet many of you want to use it straight away; of course, your array will need to support it first. Tomi Hakala created a nice list of all storage platforms that are currently supported and those that will be supported soon, including a time frame. If your array is supported, this KB article explains perfectly how to enable/disable it.
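
For reference, as far as I know enabling/disabling comes down to three host advanced settings (DataMover.HardwareAcceleratedMove, DataMover.HardwareAcceleratedInit and VMFS3.HardwareAcceleratedLocking). The Python sketch below merely reads them via esxcfg-advcfg from the ESX console; treat it as a convenience wrapper, the KB remains the authoritative procedure.

```python
# Quick check of the VAAI-related advanced settings on an ESX(i) 4.1 host.
# Assumes it is run on the host (or over SSH) with esxcfg-advcfg in the PATH;
# see the KB for the supported procedure and for changing the values.
import subprocess

VAAI_SETTINGS = [
    "/DataMover/HardwareAcceleratedMove",  # hardware offloaded copy
    "/DataMover/HardwareAcceleratedInit",  # write same / zero
    "/VMFS3/HardwareAcceleratedLocking",   # hardware offloaded locking (ATS)
]

for setting in VAAI_SETTINGS:
    output = subprocess.check_output(["esxcfg-advcfg", "-g", setting])
    print(output.decode().strip())  # e.g. "Value of HardwareAcceleratedMove is 1"
```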

I started out by saying that there are currently only three major enhancements… which indeed means that there is more coming in the future. Some of it I can’t discuss, and some I can, as it was already mentioned at VMworld. (If you have access to TA7121, watch it!) I can’t say when these will be available or in which release, but I think it is great to know more enhancements are being worked on.

  • Dead Space Reclamation
    Dead space consists of previously written blocks that are no longer used by the VM. Currently, in order to reclaim that disk space (for instance after you’ve deleted a lot of files), you need to zero out these blocks with a tool such as sdelete and then Storage vMotion the VM. Dead Space Reclamation will enable the storage system to reclaim these dead blocks by providing it with block liveness information.
  • Out-of-space condition notifications
    This is very much an improvement for day-to-day operations. It will surface possible “out-of-space” conditions both in the array vendor’s tooling and within the vSphere Client!

Must reads:

Chad Sakac – What does VAAI mean to you?
Bob Plankers – If you ever needed convincing about VAAI
AndreTheGiant – VAAI
VMware KB – VAAI FAQ
VMware Support Blog – VAAI changes the way storage is handled
Matt Liebowitz – Exploring the performance benefits of VAAI
Bas Raayman – What is VAAI, and how does it add spice to my life as a VMware admin?