Yellow Bricks

by Duncan Epping

ha

Permanent Device Loss (PDL) enhancements in vSphere 5.0 Update 1 for Stretched Clusters

Duncan Epping · Mar 16, 2012 ·

In the just-released vSphere 5.0 Update 1 some welcome enhancements were added around vSphere HA and how a Permanent Device Loss (PDL) condition is handled. A PDL condition is communicated by the array to ESXi via a SCSI sense code and indicates that a device (LUN) is unavailable and more than likely permanently unavailable. This condition is particularly useful for “stretched storage cluster” configurations, where in the case of a failure in Datacenter-A the configuration in Datacenter-B can take over. An example of when such a condition would be communicated by the array is when a LUN is “detached” during a site isolation. PDL is probably most common in non-uniform stretched solutions like EMC VPLEX. With VPLEX, site affinity is defined per LUN. If your VM resides in Datacenter-A while the LUN it is stored on has affinity to Datacenter-B, that VM could lose access to the LUN in case of a failure. These enhancements ensure the VM is killed and restarted on the other side.

Please note that action will only be taken when a PDL sense code is issued. When your storage completely fails, for instance, it is impossible to reach the PDL condition, as no communication from the array to the ESXi host is possible anymore; the state will then be identified by the ESXi host as an All Paths Down (APD) condition. APD is a more common scenario in most environments. If you are testing these enhancements, please check the log files to validate which condition has been identified.
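
To give you an idea of what to look for: the PDL state shows up in the vmkernel log as SCSI sense data. One of the sense codes that ESXi treats as a PDL condition is 0x5 0x25 0x0 (ILLEGAL REQUEST, LOGICAL UNIT NOT SUPPORTED), so log entries containing a fragment like the one below point at PDL rather than APD. This is just an illustrative fragment; the full log line will differ per environment and build.

Valid sense data: 0x5 0x25 0x0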

With vSphere 5.0 and prior, HA did not respond to a PDL condition, meaning that when a virtual machine resided on a datastore which had hit a PDL condition, the virtual machine would just sit there, unable to read from or write to disk. As of vSphere 5.0 Update 1 a new mechanism has been introduced which allows vSphere HA to take action when a datastore has reached a PDL state. Two advanced settings make this possible. The first setting is configured on a host level: “disk.terminateVMOnPDLDefault”. It is configured in /etc/vmware/settings and should be set to “True”. This setting ensures that a virtual machine is killed when the datastore it resides on is in a PDL state. The virtual machine is killed as soon as it initiates disk I/O on a datastore which is in a PDL condition and all of the virtual machine’s files reside on this datastore. Note that if a virtual machine does not initiate any I/O, it will not be killed!
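
As a rough sketch, the entry in /etc/vmware/settings would look something like the line below; do verify the exact syntax and casing against the documentation for your build before applying it.

disk.terminateVMOnPDLDefault = "True"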

The second setting is a vSphere HA advanced setting called das.maskCleanShutdownEnabled. This setting is also not enabled by default and will need to be set to “True”. It allows HA to trigger a restart response for a virtual machine which has been killed automatically due to a PDL condition, by enabling HA to differentiate between a virtual machine which was killed due to the PDL state and a virtual machine which was powered off by an administrator.
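
This one is added as a key/value pair in the cluster’s vSphere HA advanced options, along these lines (again a sketch, the exact value casing may differ):

das.maskCleanShutdownEnabled = True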

As soon as “disaster strikes” and the PDL sense code is sent, entries like the following will pop up in the vmkernel.log, indicating the PDL condition and the kill of the VM:

2012-03-14T13:39:25.085Z cpu7:4499)WARNING: VSCSI: 4055: handle 8198(vscsi4:0):opened by wid 4499 (vmm0:fri-iscsi-02) has Permanent Device Loss. Killing world group leader 4491
2012-03-14T13:39:25.085Z cpu7:4499)WARNING: World: vm 4491: 3173: VMMWorld group leader = 4499, members = 1

As mentioned earlier, this is a welcome enhancement which, especially in non-uniform stretched storage environments, can help in specific failure scenarios.

Migrating VMs between clusters in vSphere 5.0 results in VMs being unprotected?

Duncan Epping · Mar 8, 2012 ·

Today on the community forums someone mentioned an issue where his VMs were not protected by vSphere HA after they had been migrated between clusters. After reading it I vaguely recalled this being a known issue. I dug up the KB, and the workaround is fairly simple:

  • Disable HA on the cluster where the unprotected VM resides
  • Enable HA on the cluster again

If you need to do a lot of migrations to a different cluster, you can also temporarily disable HA, migrate all VMs, and then enable it again. This leads to the same result as above: all VMs will be protected again. This is on the radar of our developers, and they are working on fixing it in a future release.

HA Admission Control does not disallow HA initiated restarts

Duncan Epping · Mar 6, 2012 ·

I had a question about HA Admission Control today, and as this is something that has come up multiple times I figured I would dedicate an article to it. This customer had enabled HA Admission Control and wanted to artificially control the number of virtual machines a single host could run by manually specifying the slot size. (For more details on Admission Control slot sizes and how to configure these, read the Deepdive page.) When they simulated a failure they were surprised that some hosts had more virtual machines running than should be allowed according to the configured slot size… This is, however, contrary to their beliefs, by design. Let me copy/paste a paragraph from our book which talks about admission control.

What is HA Admission Control about? Why does HA contain this concept called Admission Control? The “Availability Guide”, a.k.a. the HA bible, states the following:

“vCenter Server uses admission control to ensure that sufficient resources are available in a cluster to provide failover protection and to ensure that virtual machine resource reservations are respected.”

Please read that quote again, and especially the first two words. Indeed, it is vCenter Server that is responsible for Admission Control. Although this might seem like a trivial fact, it is important to understand that this means Admission Control will not disallow HA-initiated restarts. HA-initiated restarts are done on a host level and not through vCenter. It is Admission Control’s task to ensure sufficient resources are available for HA to restart virtual machines, which is exactly why HA does not take Admission Control into account when performing those restarts.
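
As a side note, the “manually specifying the slot size” this customer did is typically done through two vSphere HA advanced options, das.slotCpuInMHz and das.slotMemInMB; the values below are purely illustrative examples, not a recommendation:

das.slotCpuInMHz = 500
das.slotMemInMB = 1024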

I hope this clears things up. I was pretty sure I had discussed this in multiple articles, but as it comes up fairly often I figured dedicating an article to it would make it easier to find. I know it is not really clear in our documentation, and I’ve requested this to be changed to reflect the actual behavior and avoid misunderstandings like these.

I selected “failover host” and my VMs still end up on a different host after an HA event

Duncan Epping · Mar 2, 2012 ·

I received a question today about HA admission control policies, and more specifically about the “failover host” admission control policy. The question was why VMs were restarted on a different host than the one selected with the “Failover Host” admission control policy. Shouldn’t this policy guarantee that a VM is restarted on the designated host?

The answer is fairly straightforward, and I thought I had blogged about this already, but I cannot find it, so here goes. Yes, in a normal condition HA will request the designated failover host to restart the failed VMs. However, there are a couple of cases where HA will not restart a VM on the designated failover host(s):

  • When the failover host is not compatible with the virtual machine (portgroup or datastore missing)
  • When the failover host does not have sufficient resources available for the restart
  • When the virtual machine restart fails, HA retries on a different host

Keep that in mind when using this admission control policy: there is no hard guarantee that the designated failover host will restart all failed VMs.

Re: when to disable HA? /cc @hashmibilal

Duncan Epping · Jan 25, 2012 ·

Bilal Hashmi wrote a nice article about HA today and in this article he asked a couple of questions. As I think the info is useful for everyone I decided to respond through a blog article instead of by commenting.

Let me start by saying that in general HA should never be disabled. The later versions of vSphere have a neat option called “Enable Host Monitoring”, and this option should be used for scheduled network maintenance. The difference between disabling host monitoring and disabling HA is that disabling host monitoring does not cause a full reconfiguration of HA and a new election process; just the “host monitoring” functionality is disabled, which is what you want in this scenario.

Bilal asked multiple questions / made multiple statements in his article. I will respond to two of these specifically to explain the way HA handles failures and isolation:

“In this case within 30 sec of the management network outage, each host would have declared itself isolated and won’t attempt to restart any VMs like the primaries would in vSphere 5.”

So why is this? As soon as a master is isolated, it will drop “ownership” of the datastores hosting VMs that are part of its cluster. Before the other hosts trigger the isolation response for a given VM, they will validate whether the datastore on which this VM is stored is “owned” by a master. In the case of a cluster-wide isolation due to a network outage or maintenance, the ownership would be dropped, and this would result in HA not triggering the isolation response. This is a major change compared to vSphere 4.x and prior!

“Now what happens when the network outage is over and the hosts are in a position to talk to each other? I have not been able to find documentation on whether an isolated host will enter an election (vSphere 4 or 5) once the communication channel is open and bring the cluster back to life.”

Let’s focus on vSphere 5.0, as that seems most relevant. A host remains isolated until it observes HA network traffic, like for instance election messages, or until it starts getting a response from an isolation address. In other words, as long as the host is in an “isolated state”, it will continue to validate its isolation by pinging the isolation address. As soon as the isolation address responds, it will initiate an election process, or join an existing one, and the cluster will return to a normal state.
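
For those wondering which isolation address is used for that validation: by default it is the default gateway of the management network, and it can be overridden with HA advanced options along these lines (the address below is just a placeholder):

das.isolationaddress0 = 192.168.1.1
das.usedefaultisolationaddress = false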

There’s absolutely no need to manually intervene. HA takes care of all of this for you.
