Using a VAAI ATS capable array and VMFS-5?

Update 21-Jan-2013: I have just been informed that this issue was fixed in vSphere 5.0 Update 1. The KB article and the 5.0 U1 release notes will be updated shortly!

If you are using a VAAI ATS capable array and VMFS-5 you might want to read this KB article. The article describes a situation where it is impossible to mount VMFS volumes that were formatted as VMFS-5 on a VAAI ATS (locking offload) capable array. These are the kind of problems that you won’t hit on a daily basis, but when you do you will be scratching your head for a while. Note that this also applies to scenarios where, for instance, SRM is used. The error to look for in your vmkernel log is:

Failed to reserve volume

So anyone with a 5.0 environment and newly formatted VMFS-5 volumes might want to test this. Although the article states that so far the issue has only been encountered with EMC Clariion, NS and VNX storage, it also notes that it might not be restricted to those arrays. Fortunately the workaround is fairly simple: just disable VAAI ATS for now.

esxcli system settings advanced set -i 0 -o /VMFS3/HardwareAcceleratedLocking
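To verify the current value (and to re-enable ATS again once you have applied the update), you can list and set that same advanced option; a quick sketch:

esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking
esxcli system settings advanced set -i 1 -o /VMFS3/HardwareAcceleratedLocking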

For more details read the KB article, and if you are hitting this issue I would also suggest subscribing to it with an RSS reader; that way you get notified when there is an update.

TechPubs YouTube videos

I just noticed these three cool TechPubs YouTube videos. The TechPubs channel has been around for a while and I have been enjoying their videos a lot. Recently a couple of new videos were released and I hadn’t gotten around to watching them yet, but these are definitely among my favorites. One is on vSphere HA by lead engineer Keith Farkas (also a reviewer of our book), and the other two are by Sachin Thakkar. Sachin is one of the leads on vSphere virtual networking features like VXLAN. I enjoyed watching these very much as they give a nice overview of what each feature is about in just a couple of minutes. I also personally feel it is nice to “get to know” the people behind this cool feature/technology…

Make sure to follow the TechPubs channel for more cool videos. Now it is back to Christmas shopping again ;-)

vSphere HA

VXLAN

Renaming virtual machine files using SvMotion back in 5.0 U2

I have been pushing for this heavily internally, together with Frank Denneman, and it pleases me to say that it is finally back… You can rename your virtual machine files again using Storage vMotion as of 5.0 U2.

vSphere 5 Storage vMotion is unable to rename virtual machine files on completing migration
In vCenter Server, when you rename a virtual machine in the vSphere Client, the vmdk disks are not renamed following a successful Storage vMotion task. When you perform a Storage vMotion of the virtual machine to have its folder and associated files renamed to match the new name, the virtual machine folder name changes, but the virtual machine file names do not change.

This issue is resolved in this release.

src: https://www.vmware.com/support/vsphere5/doc/vsp_vc50_u2_rel_notes.html#resolvedissues

For those who want to know what else is fixed, you can find the full release notes of both ESXi 5.0 U2 and vCenter 5.0 U2 here:

** do note that this fix is not part of 5.1 yet **

Using ESXTOP to check VAAI primitive stats

Yesterday a comment was made about a VAAI primitive on my article about virtual disk types and performance. In this case “write same” was mentioned, and the comment was about how it would not be used when expanding a thin disk or a lazy zero thick disk. Now the nice thing is that with ESXTOP you can actually see VAAI primitive stats. For instance “ATS” (locking) can be seen, but also… write same, or “ZERO” as ESXTOP calls it.

If you open up ESXTOP and do the following, you will see these VAAI primitive stats:

  • esxtop
  • press “u” (to switch to the disk device view)
  • press “f” (to open the field selector)
  • press “o” (to add the VAAI stats fields)
  • press “enter”

The screenshot below shows you what that should look like, nice right… In this case 732 blocks were zeroed out using the write-same / zero VAAI primitive.

VAAI primitive stats
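By the way, if you want to double check which VAAI primitives a given device claims to support in the first place, esxcli can show you that as well. A quick sketch (the exact columns in the output may vary slightly per build):

esxcli storage core device vaai status get

This lists the ATS, Clone, Zero and Delete status per device, which is a nice sanity check before you start staring at the ESXTOP counters.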

Death to false myths: The type of virtual disk used determines your performance

I was reading Eric Sloof’s article last week about how the disk type will impact your performance. First of all, let me be clear that I am not trying to bash Eric here… I think Eric has a valid point, at least in some cases, and let me explain why.

On his quest to determine whether the virtual disk type determines performance, Eric tested the following in his environment:

  • Thin
  • Lazy zero thick
  • Eager zero thick

These are the three disk types you can choose from when creating a new virtual disk. The difference between them is simple. Thick disks are fully allocated virtual disk files. Lazy zero means that the disk is not zeroed out yet; eager zero means the full disk is zeroed out during the provisioning process. Thin, well I guess you know what it means: not fully allocated and not zeroed either. This also implies that in the case of “thin” and “lazy zero thick” something needs to happen when a new “block” is written for the first time. This is what Eric showed in his test. But is that relevant?
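As a side note, you can create all three disk types from the command line with vmkfstools if you want to experiment with this yourself. A quick sketch; the 10G size and the datastore/path below are just made-up examples:

vmkfstools -c 10G -d thin /vmfs/volumes/datastore1/testvm/testvm-thin.vmdk
vmkfstools -c 10G -d zeroedthick /vmfs/volumes/datastore1/testvm/testvm-lazy.vmdk
vmkfstools -c 10G -d eagerzeroedthick /vmfs/volumes/datastore1/testvm/testvm-eager.vmdk

Note that “zeroedthick” is what the UI calls lazy zero thick and “eagerzeroedthick” is eager zero thick.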

First of all, the test was conducted with an Iomega PX6-300d. One thing to point out here is that this device isn’t VAAI capable and is limited from a performance perspective due to its CPU power and limited set of spindles. The lack of VAAI, however, impacts the outcome the most. This, in my opinion, means that the outcome cannot really be used by those who have VAAI capable arrays. The outcome would be different when one (or both) of the following two VAAI primitives is supported and used by the array (a quick way to check the related host settings follows right after this list):

  • Write Same aka “Zero Out”
  • ATS aka “Hardware Offloaded Locking”
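Whether your host will actually try to use these two primitives is controlled by two advanced settings; a quick sketch, using the standard ESXi 5.x option names (a value of 1 means enabled):

esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit
esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking

The first one covers Write Same / Zero Out, the second one covers ATS. Of course the array needs to support the primitive as well for it to actually be used.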

Cormac Hogan wrote an excellent article on this topic as well as an excellent white paper; if you want to know more about ATS and Write Same, make sure to read them.

Secondly, it is important to realize that most applications don’t have the same write pattern as the benchmarking tool used by Eric. In his situation the tool basically fills up a certain amount of blocks sequentially to determine the maximum performance. Only if you fill up a disk at once, or write to very large unwritten sections, could you potentially see a similar result. Let me emphasize that: could, potentially.

I guess this myth was once a fact. Back in 2009 a white paper was released about thin/thick disks. In this paper they demonstrate the difference between thin, lazy zero and eager zero thick… yes, they do prove there is a difference, but this was pre-VAAI. Now if you look at another example, a nice extreme example, which is a performance test done by my friends at Pure Storage, you will notice there is absolutely no difference. This is an extreme example considering it’s an all-flash VAAI based storage system, nevertheless it proves a point. But it is not just all-flash arrays that see a huge improvement; take a look at this article by Derek Seaman about 3Par’s “write same” (zeroing) implementation. I am pretty sure that in his environment he would also not see the huge discrepancy Eric witnessed.

I am not going to dispel this myth as it is a case of “it depends”. It depends on the type of array used and, for instance, how VAAI was implemented, as that could make a difference. In most cases, however, it is safe to say that the performance difference will not be big, if noticeable at all, during normal usage. I am not even discussing all the operational implications of using eager zero thick… (Frank Denneman, who will respond to this blog post soon, has a nice article about that. :-))