High latency VPLEX configuration and vMotion optimization

This week someone asked me about an advanced setting to optimize vMotion for VPLEX configurations. This person referred to the vSphere 5.5 Performance Best Practices paper, and more specifically to the following section:

Add the VMX option (extension.converttonew = “FALSE”) to virtual machine’s .vmx files. This option optimizes the opening of virtual disks during virtual machine power-on and thereby reduces switch-over time during vMotion. While this option can also be used in other situations, it is particularly helpful on VPLEX Metro deployments.

I had personally never heard of this advanced setting, so I did some searching both internally and externally and couldn’t find any reference other than the vSphere 5.5 Performance paper. Strange, as you would expect a generic recommendation like the one above to be mentioned in at least one or two other places. I reached out to one of the vMotion engineers and, after some going back and forth, figured out what the setting is for and when it should be used.

During testing with VPLEX and VMs using dozens of VMDKs in a “high latency” situation, it could take longer than expected before the switchover between hosts completed. First of all, when I say “high latency” we are talking about close to the maximum tolerated for VPLEX, which is around 10ms RTT. When “extension.converttonew” is used, the amount of IO needed during the switchover is limited, and when each IO takes 10ms you can imagine that has a direct impact on the time it takes to switch over. Of course these enhancements were also tested in scenarios where there wasn’t high latency, or where a low number of disks was used, and in those cases the benefits were negligible and the operational overhead of configuring this setting did not outweigh them.
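For completeness, here is one way you could add the option. This is just a minimal pyVmomi sketch, not an official procedure; the vCenter address, credentials and VM name (vcenter.local, vplex-vm-01) are placeholders, and you can of course also add the line to the .vmx file directly or set it through the Web Client as an advanced configuration parameter.

```python
# Minimal sketch (assumptions: pyVmomi installed, placeholder vCenter/VM names):
# add extension.converttonew = "FALSE" to a VM's advanced configuration.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only, skips cert validation
si = SmartConnect(host="vcenter.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Find the VM by name (placeholder name)
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "vplex-vm-01")

# Add the VMX option via extraConfig and reconfigure the VM
spec = vim.vm.ConfigSpec(extraConfig=[
    vim.option.OptionValue(key="extension.converttonew", value="FALSE")])
vm.ReconfigVM_Task(spec)

Disconnect(si)
```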

So to be clear, this setting should only be used in scenarios where high latency and a high number of virtual disks result in a long switchover time during migrations of VMs between hosts in a vMSC/VPLEX configuration. I hope that helps.

vSphere Metro Storage Cluster with vSphere 6.0 paper released

I’d already blogged about this on the VMware blog, but I figured I would share it here as well. The vSphere Metro Storage Cluster with vSphere 6.0 white paper has been released. I worked on this paper together with my friend Lee Dilworth; it is an updated version of the paper we did in 2012. It contains all of the new best practices for vSphere 6.0 when it comes to vSphere Metro Storage Cluster implementations, so if you are looking to implement one or upgrade an existing environment, make sure to read it!

VMware vSphere Metro Storage Cluster Recommended Practices

VMware vSphere Metro Storage Cluster (vMSC) is a specific configuration within the VMware Hardware Compatibility List (HCL). These configurations are commonly referred to as stretched storage clusters or metro storage clusters and are implemented in environments where disaster and downtime avoidance is a key requirement. This best practices document was developed to provide additional insight and information for operation of a vMSC infrastructure in conjunction with VMware vSphere. This paper explains how vSphere handles specific failure scenarios, and it discusses various design considerations and operational procedures. For detailed information about storage implementations, refer to documentation provided by the appropriate VMware storage partner.

vSphere Metro Storage Cluster with vSphere 5.5

I received a couple of questions about the exact settings for vSphere Metro Storage Clusters with vSphere 5.5. It was the third time in two weeks that I shared the same info about vMSC with vSphere 5.5, so I figured I would write a quick blog post to make the information a bit easier to find through Google. Below you can find the settings required for a vSphere Metro Storage Cluster with vSphere 5.5. Note that in-depth details around operations / testing can be found in this white paper: version 5.x // version 6.0.

  1. VMkernel.Boot.terminateVMOnPDL = True
  2. Das.maskCleanShutdownEnabled = True 
  3. Disk.AutoremoveOnPDL = 0 

I want to point out that if you migrate from 5.0 or 5.1, the host advanced setting VMkernel.Boot.terminateVMOnPDL replaces disk.terminateVMOnPDLDefault (/etc/vmware/settings). Das.maskCleanShutdownEnabled is actually configured to “true” by default as of vSphere 5.1 and later, but personally I prefer to set it anyway so that I know for sure it has been configured correctly. Then there is Disk.AutoremoveOnPDL, a setting that is new in vSphere 5.5 as discussed here. Make sure to disable it: as PDLs are likely to be temporary, there is no point in removing the devices and then having to do a rescan to have them reappear; it only slows down your recovery process. (EMC also recommends this by the way, see page 21 of this PDF on vMSC/VPLEX.)
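If you prefer to script this, below is a minimal pyVmomi sketch; the vCenter address, credentials, host name and cluster name are placeholders, and you can obviously also configure the same settings through the Web Client (host Advanced System Settings and the HA advanced options on the cluster).

```python
# Minimal sketch (placeholder names/credentials): apply the three vMSC 5.5
# settings listed above. Note that VMkernel.Boot.* changes require a host reboot.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    # Simple helper for this sketch: look up a managed object by name.
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

# 1 and 3: host advanced settings (value types must match the option type)
host = find_by_name(vim.HostSystem, "esxi01.local")
host.configManager.advancedOption.UpdateOptions(changedValue=[
    vim.option.OptionValue(key="VMkernel.Boot.terminateVMOnPDL", value=True),
    vim.option.OptionValue(key="Disk.AutoremoveOnPDL", value=0),
])

# 2: das.maskCleanShutdownEnabled is an HA advanced option on the cluster
cluster = find_by_name(vim.ClusterComputeResource, "vMSC-Cluster")
spec = vim.cluster.ConfigSpecEx(
    dasConfig=vim.cluster.DasConfigInfo(option=[
        vim.option.OptionValue(key="das.maskCleanShutdownEnabled", value="True")]))
cluster.ReconfigureComputeResource_Task(spec, modify=True)

Disconnect(si)
```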

What happens to VMs when a cluster is partitioned?

I had this question this week around what happens to VMs when a cluster is partitioned. The funny thing is that with questions like these it seems like everyone is thinking the same thing at the same time. I had the question on the same day from a customer running traditional storage who had a network failure across racks, and from a customer running Virtual SAN who just wanted to know how this situation is handled. The question boils down to this: what happens to the VM in “Partition 1” when the VM is restarted in Partition 2?

The same can be asked for a traditional environment, the only difference being that you wouldn’t see those “disk groups” at the bottom but a single datastore. In that case a VM can be restarted when a disk lock is lost… What happens to the VM in Partition 1 that has lost access to its disk? Does the isolation response kick in? Well, if you have vSphere 6.0 then potentially VMCP can help, because if you have a single datastore and you’ve lost access to it (APD), then the APD response can be triggered. But if you don’t have vSphere 6.0, don’t have VMCP configured, or have VSAN, what would happen? Well, first of all, it is a partition scenario and not an isolation scenario. On both sides of the partition HA will have a master and hosts will be able to ping each other, so there is absolutely no reason to invoke the “isolation response” as far as HA is concerned. The VM will be restarted in Partition 2 while it is still running in Partition 1; you will either need to kill it manually in Partition 1, or you will need to wait until the partition is lifted. When the partition is lifted, the kernel will realize it no longer holds the lock (as it has lost it to another host) and it will kill the impacted VMs instantly.

Conservative vs Aggressive for VMCP APD response

I had just finished writing the vMSC 6.0 Best Practices paper, which is about to be released, when a question came in. The question was around the APD scenario and whether the response to an APD should be set to aggressive or conservative. It’s a good question and my instinct immediately says: conservative… But should it be configured that way in all cases? If so, why on earth do we even have the aggressive method? That got me thinking. (By the way, make sure to read this article by Matt Meyer on VMCP on the vSphere blog, good post!) But before I spill the beans, what is aggressive / conservative in this case, and what is this feature again?

VM Component Protection (VMCP) is new in 6.0 and allows vSphere to respond to a scenario where the host has lost access to a storage device (both PDL and APD). In previous releases vSphere was already capable of responding to PDL scenarios, but the settings weren’t really exposed in the UI; that has been addressed in 6.0, and the APD response has been added at the same time. Great feature if you ask me, especially in stretched environments, as it will help during certain failure scenarios.
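To make the setting itself a bit more concrete, here is a minimal pyVmomi sketch of how a cluster-default VMCP response could be configured. This is my own illustration with placeholder names and values, not the paper’s procedure; the same can of course be done in the Web Client under the cluster’s HA settings.

```python
# Minimal sketch (placeholder names/credentials, vSphere 6.0 API assumed):
# enable VMCP on a cluster and set the APD response to conservative restarts.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "vMSC-Cluster")

# Cluster-default VMCP settings: aggressive for PDL, conservative for APD
vmcp = vim.cluster.VmComponentProtectionSettings(
    vmStorageProtectionForPDL="restartAggressive",
    vmStorageProtectionForAPD="restartConservative",
    vmTerminateDelayForAPDSec=180,      # delay after the APD timeout expires
    vmReactionOnAPDCleared="reset")     # reset the VM if APD clears after that

spec = vim.cluster.ConfigSpecEx(
    dasConfig=vim.cluster.DasConfigInfo(
        vmComponentProtecting="enabled",
        defaultVmSettings=vim.cluster.DasVmSettings(
            vmComponentProtectionSettings=vmcp)))
cluster.ReconfigureComputeResource_Task(spec, modify=True)

Disconnect(si)
```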