
Yellow Bricks

by Duncan Epping


vSphere 5.1 Clustering Deepdive only $17.95, limited time!

Duncan Epping · Nov 21, 2012 ·

Frank and I decided to put the vSphere 5.1 Clustering Deepdive (paper copy) up for sale for only $17.95. This is a limited-time offer (until the 21st of December), so if you want to get yourself, your friend, husband, father, kids, or even grandmother a nice present, be quick.

How’s that for a Black Friday / Cyber Monday / Christmas / Sinterklaas special? Yes indeed, the paper copy is cheaper than all the e-books on vSphere 5.x out there on Amazon, and with 5 stars (11 reviews) you know you can’t go wrong.

Happy holidays,

Frank and Duncan
(PS: the Kindle copy is only $7.49, so even combined they are cheaper than most e-books out there :-))

vSphere HA compatibility list, how do I check it?

Duncan Epping · Nov 8, 2012 ·

Someone reported an issue where, in their environment, VMs could not be restarted by vSphere HA as there were no compatible hosts available. The relevant part of the error message was:

N3Vim5Fault16NoCompatibleHostE

I don’t know why it happened in this case, as the log files unfortunately don’t provide these details. This person had manually restarted all of his VMs and that actually worked okay. This could mean that somehow the “compatibility list” that vSphere HA maintains was incomplete or incorrect. So the question would be: how do you validate that if you ever end up in a scenario like this?

First of all, before I forget: create a support dump. That way VMware Global Support Services can help pinpoint your problem and provide tips on how to prevent it from occurring again.
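
You can export the logs through vCenter, but you can also generate a bundle straight on the host; a minimal sketch (vm-support writes a support bundle locally, the exact output location may vary per version):

vm-support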

On a host (you will have to SSH in to one) you can actually run a script that provides you with some nice details around this. Let’s go through the options of the script and explain what you can get out of each. The script is called “prettyPrint.sh” and can be found in “/opt/vmware/fdm/fdm/”.

./prettyPrint.sh hostlist

The hostlist option provides all relevant details about the hosts that are part of the cluster, including the “hostId”, host name, IP address, etc.

./prettyPrint.sh clusterconfig

The clusterconfig option provides all configuration info for your cluster, like admission control and the isolation response.

./prettyPrint.sh compatlist

The compatlist option provides the list of VMs and the hosts they are compatible with (vSphere 5.0 only).

./prettyPrint.sh vmmetadata

The vmmetadata option provides the list of VMs and the hosts they are compatible with (vSphere 5.1 only).

So in this case “vmmetadata” was important, as it lists which VMs are compatible with which hosts. In the output below, “<index>0</index>” refers to a VM and “<compatMask>0,1,2,3</compatMask>” refers to the hosts it is compatible with. Nice, right?!

   <compatMatrix>
      <restartCompat>
         <index>0</index>
         <compatMask>0,1,2,3</compatMask>
      </restartCompat>
      <restartCompat>
         <index>1</index>
         <compatMask>0,1,2,3</compatMask>
      </restartCompat>
      <restartCompat>
         <index>2</index>
         <compatMask>0,1,2,3</compatMask>
      </restartCompat>
   </compatMatrix>
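
To map these indexes back to actual host names you can cross-reference the outputs; a minimal sketch (assuming you run it from the script’s directory, and remember to use “compatlist” instead of “vmmetadata” on vSphere 5.0):

./prettyPrint.sh hostlist > /tmp/hostlist.xml     # hostId to host name mapping
./prettyPrint.sh vmmetadata > /tmp/compat.xml     # the compatibility matrix shown above
grep -E "<index>|<compatMask>" /tmp/compat.xml    # quick view of the VM/host pairings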

** Update: Added Portgroup Test **

On VMTN someone asked if HA also takes networking into account when restarting VMs. If a given portgroup is not available on specific hosts, will HA place VMs smartly? In my test I removed the “VM Network” portgroup from one of my hosts (the host with ID 2). When listing the compatibility list, the following shows up:

<restartCompat>
       <index>0</index>
       <compatMask>0,1,3</compatMask>
</restartCompat>

As you can see, the host with ID 2 is missing from the compatibility mask.
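
If you want to reproduce this test yourself, simply list the masks before and after removing the portgroup and compare; a sketch along these lines:

./prettyPrint.sh vmmetadata | grep "<compatMask>"   # run before and after the portgroup change and compare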

How do I configure an HA vpxd.das advanced setting?

Duncan Epping · Nov 7, 2012 ·

On the community forums someone asked a question about how to set “config.vpxd.das.electionWaitTimeSec”. I was looking at the documentation and it is indeed not really clear on what/where/how to set an HA vpxd.das advanced setting. This KB article kind of explains it, but let me summarize and simplify it.

There are various sorts of advanced settings, but for HA three types in particular:

  • das.* –> Cluster-level advanced setting.
  • fdm.* –> FDM host-level advanced setting (FDM = Fault Domain Manager = vSphere HA).
  • vpxd.* –> vCenter-level advanced setting.

How do you configure these?

  • Cluster Level
    • In the vSphere Client: Right-click your cluster object, click “Edit Settings”, click “vSphere HA” and hit the “Advanced Options” button.
    • In the Web Client: Click “Hosts and Clusters”, click your cluster object, click the “Manage” tab, click “Settings” and “vSphere HA”, and hit the “Edit” button.
  • FDM Host Level
    • Open up an SSH session to your host and edit “/etc/opt/vmware/fdm/fdm.cfg” (see the sketch after this list).
  • vCenter Level
    • In the vSphere Client: Click “Administration” and “vCenter Server Settings”, then click “Advanced Settings”.
    • In the Web Client: Click “vCenter”, click “vCenter Servers”, select the appropriate vCenter Server and click the “Manage” tab, then click “Settings” and “Advanced Settings”.
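
For the FDM host-level settings, the workflow on the host looks something like this; a minimal sketch (keep a backup first, and note that the FDM init script path on the last line is an assumption on my part):

cp /etc/opt/vmware/fdm/fdm.cfg /etc/opt/vmware/fdm/fdm.cfg.bak   # keep a backup first
vi /etc/opt/vmware/fdm/fdm.cfg                                   # add or change the fdm.* option
/etc/init.d/vmware-fdm restart                                   # assumption: restart the FDM agent so it picks up the change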

By the way, this KB also lists all HA advanced settings that are relevant; it might be worth reading as well. Hope this helps with configuring your HA vpxd.das advanced settings.

vSphere HA fail-over in action – aka reading the log files

Duncan Epping · Oct 17, 2012 ·

I had a discussion with Benjamin Ulsamer at VMworld and he had a question about the state of a host when both the management network and the storage network are isolated. My answer was that in that case the host will be reported as “dead”, as there is no “network heartbeat” and no “datastore heartbeat”. (More info about heartbeating here.) The funny thing is that when you look at the log files you do see “isolated” instead of “dead”. Why is that? Before we answer that, let’s go through the log files and paint the picture:

Two hosts (esx01 and esx02) with a management network and an iSCSI storage network. vSphere 5.0 is used and Datastore Heartbeating is configured. For whatever reason the network of esx02 is isolated (both storage and management, as it is a converged environment). So what can you see in the log files?

Let’s look at “esx02” first:

  • 16:08:07.478Z [36C19B90 info ‘Election’ opID=SWI-6aace9e6] [ClusterElection::ChangeState] Slave => Startup : Lost master
    • At 16:08:07 the network is isolated
  • 16:08:07.479Z [FFFE0B90 verbose ‘Cluster’ opID=SWI-5185dec9] [ClusterManagerImpl::CheckElectionState] Transitioned from Slave to Startup
    • The host recognizes it is isolated and drops from Slave to “Startup” so that it can elect itself as master to take action
  • 16:08:22.480Z [36C19B90 info ‘Election’ opID=SWI-6aace9e6] [ClusterElection::ChangeState] Candidate => Master : Master selected
    • The host has elected itself as master
  • 16:08:22.485Z [FFFE0B90 verbose ‘Cluster’ opID=SWI-5185dec9] [ClusterManagerImpl::CheckHostNetworkIsolation] Waited 5 seconds for isolation icmp ping reply. Isolated
    • Can I ping the isolation address?
  • 16:08:22.488Z [FFFE0B90 info ‘Policy’ opID=SWI-5185dec9] [LocalIsolationPolicy::Handle(IsolationNotification)] host isolated is true
    • No I cannot, and as such I am isolated!
  • 16:08:22.488Z [FFFE0B90 info ‘Policy’ opID=SWI-5185dec9] [LocalIsolationPolicy::Handle(IsolationNotification)] Disabling execution of isolation policy by 30 seconds.
    • Hold off for 30 seconds as “das.config.fdm.isolationPolicyDelaySec” was configured
  • 16:08:52.489Z [36B15B90 verbose ‘Policy’] [LocalIsolationPolicy::GetIsolationResponseInfo] Isolation response for VM /vmfs/volumes/a67cdaa8-9a2fcd02/VMWareDataRecovery/VMWareDataRecovery.vmx is powerOff
    • There is a VM with an Isolation Response configured to “power off”
  • 16:10:17.507Z [36B15B90 verbose ‘Policy’] [LocalIsolationPolicy::DoVmTerminate] Terminating /vmfs/volumes/a67cdaa8-9a2fcd02/VMWareDataRecovery/VMWareDataRecovery.vmx
    • Let’s kill that VM!
  • 16:10:17.508Z [36B15B90 info ‘Policy’] [LocalIsolationPolicy::HandleNetworkIsolation] Done with isolation handling
    • And it is gone, done with handling the isolation
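
If you want to pull these events out of the log yourself, a grep along these lines does the trick; a minimal sketch (assuming the default vSphere 5.x FDM log location, /var/log/fdm.log):

grep -E "ClusterElection|CheckHostNetworkIsolation|IsolationPolicy" /var/log/fdm.log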

Let’s take a closer look at “esx01”: what does this host see with regard to the management and storage network isolation of “esx02”?

  • 16:08:05.018Z [FFFA4B90 error ‘Cluster’ opID=SWI-e4e80530] [ClusterSlave::LiveCheck] Timeout for slave @ host-34
    • The host is not reporting itself any longer, the heartbeats are gone…
  • 16:08:05.018Z [FFFA4B90 verbose ‘Cluster’ opID=SWI-e4e80530] [ClusterSlave::UnreachableCheck] Beginning ICMP pings every 1000000 microseconds to host-34
    • Let’s ping the host itself; it could be that just the FDM agent is dead.
  • 16:08:05.019Z [FFFA4B90 verbose ‘Cluster’ opID=SWI-e4e80530] Reporting Slave host-34 as FDMUnreachable
  • 16:08:05.019Z [FFD5BB90 verbose ‘Cluster’] ICMP reply for non-existent pinger 3 (id=isolationAddress)
    • As it is just a two-node cluster, let’s make sure I am not isolated myself. I got a reply, so I am not isolated!
  • 16:08:10.028Z [FFFA4B90 verbose ‘Cluster’ opID=SWI-e4e80530] [ClusterSlave::UnreachableCheck] Waited 5 seconds for icmp ping reply for host host-34
  • 16:08:14.035Z [FFFA4B90 verbose ‘Cluster’ opID=SWI-e4e80530] [ClusterSlave::PartitionCheck] Waited 15 seconds for disk heartbeat for host host-34 – declaring dead
    • There is also no datastore heartbeat so the host must be dead. (Note that it cannot see the difference between a fully isolated host and a dead host when using IP based storage on the same network.)
  • 16:08:14.035Z [FFFA4B90 verbose ‘Cluster’ opID=SWI-e4e80530] Reporting Slave host-34 as Dead
    • It is officially dead!
  • 16:08:14.036Z [FFE5FB90 verbose ‘Invt’ opID=SWI-42ca799] [InventoryManagerImpl::RemoveVmLocked] marking protected vm /vmfs/volumes/a67cdaa8-9a2fcd02/VMWareDataRecovery/VMWareDataRecovery.vmx as in unknown power state
    • We don’t know what is up with this VM, power state unknown…
  • 16:08:14.037Z [FFE5FB90 info ‘Policy’ opID=SWI-27099141] [VmOperationsManager::PerformPlacements] Sending a list of 1 VMs to the placement manager for placement.
    • We will need to restart one VM, let’s provide its details to the Placement Manager
  • 16:08:14.037Z [FFE5FB90 verbose ‘Placement’ opID=SWI-27099141] [PlacementManagerImpl::IssuePlacementStartCompleteEventLocked] Issue failover start event
    • Issue a failover event to the placement manager.
  • 16:08:14.042Z [FFE5FB90 verbose ‘Placement’ opID=SWI-e430b59a] [DrmPE::GenerateFailoverRecommendation] 1 Vms are to be powered on
    • Let’s generate a recommendation on where to place the VM
  • 16:08:14.044Z [FFE5FB90 verbose ‘Execution’ opID=SWI-898d80c3] [ExecutionManagerImpl::ConstructAndDispatchCommands] Place /vmfs/volumes/a67cdaa8-9a2fcd02/VMWareDataRecovery/VMWareDataRecovery.vmx on __localhost__ (cmd ID host-28:0)
    • We know where to place it!
  • 16:08:14.687Z [FFFE5B90 verbose ‘Invt’] [HalVmMonitor::Notify] Adding new vm: vmPath=/vmfs/volumes/a67cdaa8-9a2fcd02/VMWareDataRecovery/VMWareDataRecovery.vmx, moId=12
    • Let’s register the VM so we can power it on
  • 16:08:14.714Z [FFDDDB90 verbose ‘Execution’ opID=host-28:0-0] [FailoverAction::ReconfigureCompletionCallback] Powering on vm
    • Power on the impacted VM
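
The master-side events can be pulled out of the log the same way; a sketch, again assuming /var/log/fdm.log:

grep -E "ClusterSlave|Reporting Slave|PlacementManager|FailoverAction" /var/log/fdm.log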

That is it, nice right… and this is just a short version of what is actually in the log files; they contain a massive amount of detail! Anyway, back to the question, if not already answered: the remaining host in the cluster sees the isolated host as dead because there is no:

  • network heartbeat
  • response to a ping to the host
  • datastore heartbeat

The only thing the master can do at that point is to assume the “isolated” host is dead.

 

** Disclaimer: This article contains references to the words master and/or slave. I recognize these as exclusionary words. The words are used in this article for consistency because it’s currently the words that appear in the software, in the UI, and in the log files. When the software is updated to remove the words, this article will be updated to be in alignment. **

Limit the amount of eggs in a single basket through vSphere 5.1 DRS

Duncan Epping · Oct 1, 2012 ·

A while back I had a discussion with someone and he asked me if it was possible to limit the number of eggs in a single basket, in other words limit the number of VMs per host. The reason this customer wanted to do this was to limit the impact of a host failure. They had roughly 1500 VMs in their cluster, and some hosts carried 50 VMs while others had 20 or 80. This is the nature of DRS, though, and totally expected.

If one of these hosts were to fail, and let’s say it carried 80 VMs, the impact would be substantial. To minimize the risk they wanted to limit the number of VMs per host. I had thought about this before and had already asked the HA and DRS team if they could do anything around this. The DRS team started looking into it and, to my surprise, managed to get it in quickly.

In the VMworld 2012 session “VSP2825: DRS: Advanced Concepts, Best Practices and Future Directions” by Ajay Gulati and Aashish Parikh a solution is presented. (You can watch this session for free on YouTube; highly recommended!) This solution is a new vSphere DRS advanced setting, introduced in vSphere 5.1:

 LimitVMsPerESXHost

Note that when you configure this setting it might impact the performance of your virtual machines, as it could limit the load-balancing mechanism of your cluster. If you have no requirement to limit the number of VMs per ESXi host, don’t do it. When this setting is configured, vSphere DRS will not allow migrations to a host that has reached the threshold, and will also not admit new VMs to a host that has reached it.
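
Like other DRS advanced options it is entered as an option/value pair, in this case in the cluster’s DRS “Advanced Options” dialog; for example (the value 40 is purely hypothetical, pick a threshold that fits your environment):

LimitVMsPerESXHost = 40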

