
Yellow Bricks

by Duncan Epping


drs

Scripts for “Proactive DRS/DPM”

Duncan Epping · Jun 22, 2010 ·

To be honest I had never noticed this set of scripts, but Anne Holler (VMware employee) posted them about a year ago. The scripts change various DRS/DPM settings so that you can proactively manage your environment and adjust DRS and DPM behaviour based on expected workload.

Proactive DRS:

  • setDRSAggressive.pl
    The script setDRSAggressive.pl sets various DRS operating parameters so that it will recommend rebalancing VMotions even when current VM demand does not make those moves appear worthwhile. As an example use case, if powerOnHosts.pl (see “Proactive DPM” posting) is used to trigger host power-ons at 8am before an expected steep increase in VM demand weekdays at 9am, setDRSAggressive.pl can also be scheduled to run at 8am to force rebalancing moves to the powered-on hosts.
  • setDRSDefault.pl
    The script setDRSDefault.pl resets DRS’ operating parameters so that it resumes its normal behaviour (i.e. the behaviour from before setDRSAggressive.pl was used).
  • setMaxMovesPerHost.pl
    The script setMaxMovesPerHost.pl can be used to increase DRS’ limit on the number of VMotions it will recommend in each regular DRS invocation (by default every 5 minutes).

Proactive DPM:

  • powerOnHosts.pl
    The script powerOnHosts.pl changes cluster settings to engender recommendations to power on all standby hosts, and then to disable DPM so that those hosts are kept on while demand remains low.
  • enableDPM.pl
    The script enableDPM.pl re-enables DPM so that it resumes its normal reactive behaviour. As an example use case, this script can be scheduled to run each weekday at (say) 10am (after the full VM demand load is expected to be established) or at (say) 5pm (after the full VM demand load is likely to have diminished) to resume normal DPM operation.

Multiple customers have asked me whether it is possible to schedule a change of the DRS and DPM configuration. My answer used to be “yes, you can script it”, but I never managed to find a script until I coincidentally bumped into these today.
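For those who prefer PowerCLI over Perl, below is a minimal sketch of just the DPM part of this idea (it is not Anne's script): one scheduled run disables DPM so that freshly powered-on hosts stay on, and a second run flips the flag back. The vCenter address and cluster name are placeholders, and you should verify the calls against your own environment before scheduling anything.

# Minimal PowerCLI sketch (not Anne Holler's Perl scripts): disable DPM in the
# morning so that standby hosts which were powered on stay on; re-enable it later
# by running the same code with Enabled set to $true.
# "vcenter.example.com" and "MyCluster" are placeholders.
Connect-VIServer -Server "vcenter.example.com"

$clusterView = Get-Cluster -Name "MyCluster" | Get-View

$spec = New-Object VMware.Vim.ClusterConfigSpecEx
$spec.DpmConfig = New-Object VMware.Vim.ClusterDpmConfigInfo
$spec.DpmConfig.Enabled = $false        # $true in the "enableDPM" counterpart

# Second argument $true = modify only the fields set in the spec and
# leave the rest of the cluster configuration untouched.
$clusterView.ReconfigureComputeResource($spec, $true)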

DRS Sub Cluster? vSphere 4.next

Duncan Epping · Jun 21, 2010 ·

On the community forums a question was asked about campus clusters and pinning VMs to a specific set of hosts. In vSphere 4.0 that is unfortunately not possible, and it definitely is a feature that many customers would want to use.

Banjot Chanana revealed during VMworld that it was an upcoming feature but did not go into much detail. However, on the community forums (thanks @lamw for pointing this out) Elisha just revealed the following:

Controls will be available in the upcoming vSphere 4.1 release to enable this behavior. You’ll be able to set “soft” (ie. preferential) or “hard” (ie. strict) rules associating a set of vms with a set of hosts. HA will respect the hard rules and only failover vms to the appropriate hosts.

Basically, these are DRS Host Affinity rules which VMware HA adheres to. I can’t wait for the upcoming vSphere version to be released and to figure out how all these nice “little” enhancements change our designs.

Generating load?

Duncan Epping · Apr 9, 2010 ·

Every once in a while you want to stress a VM, or multiple VMs, to test the workings of for instance VMware DRS. There are multiple tools available, but most of them only focus on CPU and are usually not multithreaded. My colleague Andrew Mitchell developed a cool tool which generates multithreaded CPU load as well as memory load.

Andrew tweeted about this tool this week and I tested it today. It looks great and it works great.

“Saw some tweets wanting ways to generate load in a VM. Here’s one I prepared earlier: http://bit.ly/9A39Xh (multithreaded CPU and mem load).”

I exchanged a couple of emails with Andrew after this tweet and asked him to write a short explanation of what the tool does:

It’s a pretty simple utility to generate CPU and/or memory load within a virtual machine (or a physical server if you are still living in the dark ages). You can specify the number of threads to generate for CPU load and the approximate load each thread generates. You can also specify how much memory you want the application to consume. There’s a timer so you can configure it to only generate the specified load for a set period of time, and there are system memory utilisation and system/per-core CPU utilisation indicators within the application.

Here’s a screenshot of the app:
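If you just need something quick and dirty and cannot run Andrew’s tool, the following rough PowerShell stand-in generates a comparable load inside a Windows VM. It is only a sketch of the same idea, not Andrew’s utility; the thread count, memory size and duration below are illustrative values.

# Rough stand-in sketch: burn CPU in a number of background jobs and hold on to a
# block of memory for a fixed period. Adjust the three values to taste.
$cpuThreads  = 4        # number of busy-loop workers
$memoryMB    = 1024     # approximate memory to consume in this session
$durationSec = 300      # how long to keep the load running

$jobs = 1..$cpuThreads | ForEach-Object {
    Start-Job -ScriptBlock {
        param($seconds)
        $end = (Get-Date).AddSeconds($seconds)
        while ((Get-Date) -lt $end) { [math]::Sqrt(12345.678) | Out-Null }  # burn CPU
    } -ArgumentList $durationSec
}

# Allocate the ballast and touch every page so the memory is actually backed.
$ballast = New-Object byte[] ($memoryMB * 1MB)
for ($i = 0; $i -lt $ballast.Length; $i += 4KB) { $ballast[$i] = 1 }

$jobs | Wait-Job | Remove-Job   # wait for the timer to expire, then clean up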

CPU/MEM Reservation Behavior

Duncan Epping · Mar 3, 2010 ·

Again an interesting discussion we had amongst some colleagues (thanks Frank, Andrew and Craig, and especially Craig, as most of the text below comes from The Resource Guru). The topic was CPU/memory reservations, and more specifically the difference in behavior between the two.

One would expect that both a CPU and Memory reservation would have the same behavior when it comes to claiming and releasing resources but unfortunately this is not the case. Or should we say fortunately?

The following is taken from the resource management guide:

CPU Reservation:
Consider a virtual machine with reservation=2GHz that is totally idle. It has 2GHz reserved, but it is not using any of its reservation. Other virtual machines cannot reserve these 2GHz. Other virtual machines can use these 2GHz, that is, idle CPU reservations are not wasted.

Memory Reservation:
If a virtual machine has a memory reservation but has not yet accessed its full reservation, the unused memory can be reallocated to other virtual machines. After a virtual machine has accessed its full reservation, ESX Server allows the virtual machine to retain this much memory, and will not reclaim it, even if the virtual machine becomes idle and stops accessing memory.

The above paragraph is a bit misleading, as it seems to imply that a VM has to access its full reservation. What it should really say is “Memory which is protected by a reservation will not be reclaimed by ballooning or host-level swapping even if it becomes idle,” and “Physical machine memory will not be allocated to the VM until the VM accesses virtual RAM needing physical RAM backing.” Once physical RAM has been allocated, that pRAM is protected by the reservation and won’t be reclaimed by ballooning or .vswp-file swapping (if there is a .vswp file at all, as no .vswp file is created when the reservation is equal to the provisioned memory).

Note, however, that even if pRAM has not been allocated to the VM to back vRAM (because the VM has not accessed the corresponding vRAM yet), the whole reservation is still reserved, yet that unallocated pRAM can still be used by others. This gets really confusing, but I think of it thus:

  1. Reservations can be defined at the VM level or the Resource Pool level.
  2. Reservations at the RP level are activated or reserved immediately.
  3. Reservations at the VM level are activated or reserved when the VM is powered on.
  4. An activated reservation is removed from the total physical Resource “Unreserved” accounting.
  5. Reserving and using a resource are distinct: memory or CPU can be reserved but not used or used but not reserved.
  6. CPU reservations are friendly.
  7. Memory reservations are greedy and hoard memory.
  8. Memory reservations are activated at startup, yet pRAM is only allocated as needed. Unallocated pRAM may be used by others.
  9. Once pRAM is protected by a memory reservation, it will never be reclaimed by ballooning or .vswp swapping, even if the corresponding vRAM is idle.

Example: A VM has 4 GB of vRAM installed and a 3 GB memory reservation defined. When the VM starts, 3 GB of pRAM are reserved. If the host had 32 GB of RAM installed and no reservations active, it now has 29 GB “unreserved”.

However, if the VM accesses only 500 MB of vRAM, only 500 MB of pRAM are allocated (or granted) to it. Other VMs could use 2500 MB of RAM that you would think is part of the reservation. They cannot reserve that 2500 MB however. As soon as the VM accesses 3 GB of vRAM and so has 3 GB of pRAM backing it, no other VMs can use that 3 GB of pRAM even if the VM never touches it again, because that pRAM is now protected by the 3 GB Reservation.  If the VM uses 4 GB, it gets the 3 GB guaranteed never ballooned or swapped, but the remaining 1 GB is subject to ballooning or swapping.
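To see this accounting for yourself, here is a small PowerCLI sketch along the lines of the example above. “MyVM” and “MyCluster” are placeholder names, and the cmdlet and property names are given from memory as an assumption, so verify them against your PowerCLI release.

# Give the VM a 3 GB memory reservation ("MyVM" is a placeholder).
Get-VM -Name "MyVM" | Get-VMResourceConfiguration |
    Set-VMResourceConfiguration -MemReservationMB 3072

# The reservation is charged against the cluster's root resource pool as soon as the
# VM is powered on, regardless of how much pRAM the VM has actually touched.
# The memory values reported below are in bytes.
$rootPool = Get-Cluster -Name "MyCluster" | Get-ResourcePool -Name "Resources"
($rootPool | Get-View).Runtime.Memory |
    Select-Object ReservationUsed, UnreservedForVm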

Simple huh 😉

Custom shares on a Resource Pool, scripted

Duncan Epping · Feb 24, 2010 ·

We’ve spoken about Resource Pools a couple of times over the last months, and specifically about shares (The Resource Pool Priority-Pie Paradox, Resource Pools and Shares). The common question I received was how we can solve this. The solution is simple: custom shares.

However, the operational overhead associated with custom shares is something most people want to avoid. Luckily, for those who have a requirement to use share-based resource pools, my colleague Andrew Mitchell shared a PowerShell script. This script defines custom shares based on a pre-defined weight and the number of VMs/vCPUs in the resource pool. I would recommend scheduling the script to run on a weekly basis to ensure the correct number of shares is set and to avoid running into one of the scenarios described in the articles above.

Please keep in mind that if you use nested resource pools you will need to run a separate script for each level in the hierarchy.

E.g. if the resource pools are set up as shown below, you will need one script run to set the shares for RP1, RP2 and RP3, and another to set the shares for RP1-Child1 and RP1-Child2.

RP1
>>RP1-Child1
>>RP1-Child2
RP2
RP3

Download the script here. Again, to emphasize: I am not the author, but we would appreciate it if you could share any modifications or enhancements to this script.
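To illustrate the core idea (this is only a minimal sketch, not Andrew’s script), the snippet below sets custom shares on RP1, RP2 and RP3 proportional to the number of vCPUs in each pool, using an arbitrary weight of 1000 shares per vCPU; the cluster name is a placeholder.

# Minimal sketch: set custom CPU/memory shares per resource pool, proportional to
# the vCPU count in each pool. "MyCluster" and the weight of 1000 are placeholders.
$sharesPerVcpu = 1000
$cluster = Get-Cluster -Name "MyCluster"

foreach ($poolName in "RP1", "RP2", "RP3") {
    $pool  = Get-ResourcePool -Location $cluster -Name $poolName
    $vcpus = (Get-VM -Location $pool | Measure-Object -Property NumCpu -Sum).Sum
    if (-not $vcpus) { continue }    # skip pools with no VMs

    $pool | Set-ResourcePool `
        -CpuSharesLevel Custom -NumCpuShares ($sharesPerVcpu * $vcpus) `
        -MemSharesLevel Custom -NumMemShares ($sharesPerVcpu * $vcpus)
}
# As noted above, nested pools (RP1-Child1, RP1-Child2) need a second pass
# with RP1 as the location.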

