
Yellow Bricks

by Duncan Epping



Scripts for “Proactive DRS/DPM”

Duncan Epping · Jun 22, 2010 ·

To be honest, I had never noticed this set of scripts, but Anne Holler (a VMware employee) posted them about a year ago. The scripts change various DRS/DPM settings so you can proactively manage your environment and adjust DRS and DPM behaviour based on expected workload.

Proactive DRS:

  • setDRSAggressive.pl
    The script setDRSAggressive.pl sets various DRS operating parameters so that it will recommend rebalancing VMotions even when current VM demand does not make those moves appear worthwhile. As an example use case, if powerOnHosts.pl (see “Proactive DPM” posting) is used to trigger host power-ons at 8am before an expected steep increase in VM demand weekdays at 9am, setDRSAggressive.pl can also be scheduled to run at 8am to force rebalancing moves to the powered-on hosts.
  • setDRSDefault.pl
    The script setDRSDefault.pl resets DRS’ operating parameters so that it resumes its normal behaviour (i.e. the behaviour it had before setDRSAggressive.pl was used).
  • setMaxMovesPerHost.pl
    The script setMaxMovesPerHost.pl can be used to increase DRS’ limit on the number of VMotions it will recommend in each regular DRS invocation (by default every 5 minutes).

Proactive DPM:

  • powerOnHosts.pl
    The script powerOnHosts.pl changes cluster settings to generate recommendations to power on all standby hosts, and then disables DPM so that those hosts are kept on even while demand remains low.
  • enableDPM.pl
    The script enableDPM.pl re-enables DPM so that it resumes its normal reactive behaviour. As an example use case, this script can be scheduled to run each weekday at (say) 10am (after the full VM demand load is expected to be established) or at (say) 5pm (after the full VM demand load is likely to have diminished) to resume normal DPM operation.

Multiple customers have asked me whether it is possible to schedule a change of the DRS and DPM configuration. My answer used to be “yes, you can script it”, but I never managed to find a script until I coincidentally bumped into these today.
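The scripts themselves are Perl and use the vSphere SDK. As a rough illustration of the kind of change they make, here is a minimal pyVmomi (Python) sketch, not Anne’s actual code: it sets a cluster’s DRS migration threshold to its most aggressive value and disables DPM. The vCenter address, credentials and cluster name are placeholders, and the real scripts tweak additional advanced options.

```python
# Minimal pyVmomi sketch (not the original Perl scripts): set DRS to its most
# aggressive migration threshold and disable DPM on a cluster. The vCenter
# address, credentials and cluster name below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "Cluster01")
    view.Destroy()

    spec = vim.cluster.ConfigSpecEx()
    # DRS migration threshold: 1 is the most aggressive, 5 the most conservative.
    spec.drsConfig = vim.cluster.DrsConfigInfo(enabled=True, vmotionRate=1)
    # Disable DPM so hosts that were powered on stay powered on.
    spec.dpmConfig = vim.cluster.DpmConfigInfo(enabled=False)

    cluster.ReconfigureComputeResource_Task(spec, modify=True)
finally:
    Disconnect(si)
```

Scheduling something like this at 8am and reverting the settings at 10am (with cron or a vCenter scheduled task) gives you roughly the behaviour the scripts aim for.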

DRS Sub Cluster? vSphere 4.next

Duncan Epping · Jun 21, 2010 ·

On the community forums a question was asked around Campus Clusters and pinning VMs to a specific set of hosts. In vSphere 4.0 that is unfortunately not possible, and it is definitely a feature that many customers would want to use.

Banjot Chanana revealed during VMworld that it was an upcoming feature but did not go into much detail. However, on the community forums (thanks @lamw for pointing this out) Elisha just revealed the following:

Controls will be available in the upcoming vSphere 4.1 release to enable this behavior. You’ll be able to set “soft” (ie. preferential) or “hard” (ie. strict) rules associating a set of vms with a set of hosts. HA will respect the hard rules and only failover vms to the appropriate hosts.

Basically, these are DRS Host Affinity rules which VMware HA adheres to. Can’t wait for the upcoming vSphere version to be released and to figure out how all these nice “little” enhancements change our designs.
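For those wondering what this could look like from an automation perspective: based on the description above, here is a speculative pyVmomi (Python) sketch of creating a “hard” VM-to-host rule. The group and rule names are made up, `cluster`, `vms` and `hosts` are assumed to come from your own inventory lookup, and the exact API surface may of course differ once 4.1 ships.

```python
# Speculative pyVmomi sketch of a "hard" VM-to-host rule; 'cluster', 'vms' and
# 'hosts' are assumed to come from your own inventory lookup (see the earlier
# sketch for connecting). Group and rule names are placeholders.
from pyVmomi import vim

def pin_vms_to_hosts(cluster, vms, hosts, mandatory=True):
    """Create a VM group, a host group and a rule keeping the VMs on those hosts."""
    spec = vim.cluster.ConfigSpecEx()
    spec.groupSpec = [
        vim.cluster.GroupSpec(
            operation="add",
            info=vim.cluster.VmGroup(name="pinned-vms", vm=vms)),
        vim.cluster.GroupSpec(
            operation="add",
            info=vim.cluster.HostGroup(name="site-a-hosts", host=hosts)),
    ]
    rule = vim.cluster.VmHostRuleInfo(
        name="pinned-vms-on-site-a",
        enabled=True,
        mandatory=mandatory,          # "hard" rule; HA respects it during failover
        vmGroupName="pinned-vms",
        affineHostGroupName="site-a-hosts")
    spec.rulesSpec = [vim.cluster.RuleSpec(operation="add", info=rule)]
    return cluster.ReconfigureComputeResource_Task(spec, modify=True)
```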

HA: Max amount of host failures?

Duncan Epping · Jun 18, 2010 ·

A colleague had a question about the maximum number of host failures HA could take. The availability guide states the following:

The maximum Configured Failover Capacity that you can set is four. Each cluster has up to five primary hosts and if all fail simultaneously, failover of all hosts might not be successful.

However, when you select the “Percentage” admission control policy you can set it to 50% even when you have 32 hosts in a cluster. That means that the amount of failover capacity being reserved equals the capacity of 16 hosts.

Although this is fully supported, there is of course a caveat. The number of primary nodes is still limited to five. Even though you have the ability to reserve more than 5 hosts’ worth of spare capacity, that does not guarantee a restart. If, for whatever reason, half of your 32-host cluster fails and those 5 primaries happen to be part of the failed hosts, your VMs will not be restarted. (One of the primary nodes coordinates the fail-over!) Although the “percentage” option enables you to reserve additional spare capacity, there is always the chance that all primaries fail.
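To make the math a bit more concrete, here is a simplified sketch (illustrative only, not HA’s exact algorithm) of how the percentage policy works: it reserves a fraction of the cluster’s aggregate resources rather than counting hosts, and a power-on is only admitted if the unreserved fraction stays at or above the configured percentage. All numbers below are made up.

```python
# Simplified sketch of the "percentage of cluster resources" admission check.
# Illustrative only, not HA's exact algorithm. Values are in MHz.
def current_failover_capacity(total_capacity, total_reservations):
    """Fraction of cluster capacity not claimed by VM reservations."""
    return (total_capacity - total_reservations) / total_capacity

def admit(total_capacity, total_reservations, new_reservation, configured_pct=0.50):
    """Allow a power-on only if the remaining unreserved fraction stays
    at or above the configured failover percentage."""
    remaining = current_failover_capacity(total_capacity,
                                          total_reservations + new_reservation)
    return remaining >= configured_pct

# Example: a 32-host cluster with 10 GHz per host (320,000 MHz in total).
print(admit(320_000, total_reservations=150_000, new_reservation=5_000))  # True
print(admit(320_000, total_reservations=158_000, new_reservation=5_000))  # False
```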

All in all, I still believe the Percentage admission control policy provides you more flexibility than any other admission control policy.

Which host is selected for an HA initiated restart?

Duncan Epping · Jun 16, 2010 ·

I got asked the following question today and thought the answer would be valuable for everyone:

How is a host selected for VM placement when HA restarts VMs from a failed host?

It’s actually a really simple mechanism. HA keeps track of the unreserved capacity of each host in the cluster. When a fail-over needs to occur, the hosts are ordered; the host with the highest amount of unreserved capacity is the first option. To make it absolutely crystal clear: it is HA that keeps track of the unreserved capacity, not DRS. HA works completely independently of vCenter and, as we all know, DRS is part of vCenter. HA also works when DRS is disabled or unlicensed!

Now one thing to note is that HA will also verify whether the host is compatible with the VM. This means that HA will verify whether the VM’s network and datastore are available on the target host. If both are the case, a restart will be initiated on that host. To summarize:

  1. Order available hosts based on unreserved capacity
  2. Check compatibility (VM Network / Datastore)
  3. Boot up!
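As an illustration, here is a minimal Python sketch of that selection order (not the actual HA agent code, and with unreserved capacity simplified to a single number):

```python
# Simplified sketch of the placement order described above: order hosts by
# unreserved capacity and pick the first one on which the VM's network and
# datastore are both available. Names and numbers are made up.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    unreserved_capacity: int                      # simplified to one number
    networks: set = field(default_factory=set)
    datastores: set = field(default_factory=set)

def select_restart_host(hosts, vm_network, vm_datastore):
    # 1. Order available hosts based on unreserved capacity (highest first)
    for host in sorted(hosts, key=lambda h: h.unreserved_capacity, reverse=True):
        # 2. Check compatibility (VM network / datastore)
        if vm_network in host.networks and vm_datastore in host.datastores:
            # 3. Boot up! (restart the VM on this host)
            return host
    return None  # no compatible host found

hosts = [
    Host("esx01", 8000, {"VM Network"}, {"ds01"}),
    Host("esx02", 12000, {"DMZ"}, {"ds01"}),
    Host("esx03", 10000, {"VM Network"}, {"ds01", "ds02"}),
]
print(select_restart_host(hosts, "VM Network", "ds01").name)  # esx03
```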

PVSCSI and a 64bit OS

Duncan Epping · Jun 8, 2010 ·

Yesterday we had an internal discussion about the support of PVSCSI in combination with a 64bit OS. VMware’s documentation currently states the following:

Paravirtual SCSI adapters are supported on the following guest operating systems:

Windows Server 2008
Windows Server 2003
Red Hat Enterprise Linux (RHEL) 5

source

As we normally spell out every single detail, this KB article is kind of ambiguous in my opinion. To clarify: both 32bit and 64bit versions of the listed operating systems are currently supported (vSphere 4.0). One thing to note though is that there are still limitations; for instance, booting a Linux guest from a disk attached to a PVSCSI adapter is currently not supported.
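For completeness, adding a paravirtual SCSI adapter for data disks can also be scripted. Below is a minimal pyVmomi (Python) sketch, assuming `vm` comes from your own inventory lookup; the boot disk stays on the original adapter, in line with the boot limitation mentioned above.

```python
# Minimal pyVmomi sketch (illustrative): add a second, paravirtual SCSI
# controller to an existing VM for data disks. 'vm' is assumed to come from
# your own inventory lookup.
from pyVmomi import vim

def add_pvscsi_controller(vm, bus_number=1):
    controller = vim.vm.device.ParaVirtualSCSIController()
    controller.key = -101          # temporary negative key for a new device
    controller.busNumber = bus_number
    controller.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.noSharing

    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        device=controller)
    return vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[change]))
```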
