
Yellow Bricks

by Duncan Epping


Scripting

Scripts for “Proactive DRS/DPM”

Duncan Epping · Jun 22, 2010 ·

I never noticed this set of scripts to be honest, but Anne Holler (VMware employee) posted them about a year ago. The scripts change various DRS/DPM settings so you can proactively manage your environment and adjust DRS and DPM behaviour based on the expected workload.

Proactive DRS:

  • setDRSAggressive.pl
    The script setDRSAggressive.pl sets various DRS operating parameters so that it will recommend rebalancing VMotions even when current VM demand does not make those moves appear worthwhile. As an example use case, if powerOnHosts.pl (see “Proactive DPM” posting) is used to trigger host power-ons at 8am before an expected steep increase in VM demand weekdays at 9am, setDRSAggressive.pl can also be scheduled to run at 8am to force rebalancing moves to the powered-on hosts.
  • setDRSDefault.pl
    The script setDRSDefault.pl resets DRS’ operating parameters so that it resumes its normal behaviour (the behaviour before setDRSAggressive.pl was used).
  • setMaxMovesPerHost.pl
    The script setMaxMovesPerHost.pl can be used to increase DRS’ limit on the number of VMotions it will recommend in each regular DRS invocation (by default every 5 minutes).

Proactive DPM:

  • powerOnHosts.pl
    The script powerOnHosts.pl changes cluster settings to engender recommendations to power on all standby hosts and then to disable DPM so that those hosts are kept on even while demand remains low.
  • enableDPM.pl
    The script enableDPM.pl re-enables DPM to run in its normal reactive behaviour. As an example use case, this script can be scheduled to run each weekday at (say) 10am (after the full VM demand load is expected to be established) or at (say) 5pm (after the full VM demand load is likely to have diminished) to resume normal DPM operation.

I have had multiple customers ask me if it was possible to schedule a change of the DRS and DPM configuration. My answer used to be “yes, you can script it”, but I never managed to find a script until I coincidentally bumped into these today.
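
Tying it all together is then just a matter of scheduling the scripts. Below is a minimal cron sketch of how that could look; the script location, the exact times and the use of /etc/crontab on a management server with the vSphere SDK for Perl installed are assumptions for illustration only, and the connection options each script expects should be checked in the scripts themselves.

# /etc/crontab entries on a management server (paths and times are examples only)
# Weekdays 08:00: power on the standby hosts and make DRS aggressive before the expected 09:00 load increase
0 8  * * 1-5  root  /opt/proactive-drs/powerOnHosts.pl
5 8  * * 1-5  root  /opt/proactive-drs/setDRSAggressive.pl
# Weekdays 10:00: return DRS and DPM to their normal behaviour
0 10 * * 1-5  root  /opt/proactive-drs/setDRSDefault.pl
5 10 * * 1-5  root  /opt/proactive-drs/enableDPM.pl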

UML diagram your VM, vdisks and snapshots by @lucd22

Duncan Epping · Apr 7, 2010 ·

Somehow I missed this excellent script/blog post about diagramming your VMDKs and associated snapshot trees for the Planet V12n Top 5 post I do weekly for the VMTN Blog.

Luc Dekens is one of the leading PowerCLI scripting gurus and created this amazing script, which draws a diagram of the relationships between VMs, VMDKs and snapshots. Now you might wonder what the use case would be when there is a one-to-one relationship like the following:

Many will understand the relationship when you have a single snapshot. But is that still the case when you have multiple snapshots running on multiple disks? Probably not; check this diagram to get an idea:

Great work Luc, and my apologies for not selecting it for the Planet V12n Top 5 as it definitely deserved a spot.

Where are my files?

Duncan Epping · Apr 1, 2010 ·

I was working on an automated build procedure for ESX hosts in a cloud environment yesterday. I stored my temporary post-configuration script in /tmp/, as I have been doing since 3.0.x. When the installation was finished the host rebooted, and I waited for the second reboot to occur, which is part of my post configuration. The weird thing is that it never happened.

So I assumed I had made a mistake and went over my script. The funny thing is, it looked just fine. For troubleshooting purposes I decided to strip my script down and only do a “touch /tmp/test” in the %post section, to see whether the file would be created or not. I also removed the “automatic reboot” after the installation. When the installation was finished I went into the console and noticed my file “test” in /tmp. So I rebooted the system and checked /tmp again… gone. HUH?

I figured it had something to do with the installer, so I installed ESX manually, including a “/tmp” partition, and booted the server. I copied a bunch of random files into /tmp and rebooted the server… again the files were deleted. Now I might be going insane, but I am pretty certain this used to work just fine in the good old ESX 3.0.x days. Apparently something changed, but what?

After some googling and emailing I discovered that this change in behaviour is a known issue (see the release notes). When ESX 4.0 boots, “/etc/init.d/vmware” cleans out /tmp (see below). Something you might want to take into account when using /tmp.

# Clear /tmp to create more space
if IsLocalFileSystem /tmp ; then
    rm -rf /tmp/*
fi
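
If your build procedure relies on a script surviving that first boot, staging it outside of /tmp is the easiest way around this. Below is a minimal kickstart %post sketch; the /usr/local/postinstall path and the rc.local hook are assumptions for illustration, and it assumes %post runs against the installed Service Console filesystem.

%post
# Stage the second-stage configuration script outside of /tmp so the
# /etc/init.d/vmware cleanup does not remove it on first boot.
# (The path and the rc.local hook are assumptions for illustration only.)
mkdir -p /usr/local/postinstall
cat > /usr/local/postinstall/stage2.sh << 'EOF'
#!/bin/sh
# second-stage configuration goes here
# remove the rc.local entry so this only runs once, then reboot
sed -i '\|/usr/local/postinstall/stage2.sh|d' /etc/rc.d/rc.local
reboot
EOF
chmod +x /usr/local/postinstall/stage2.sh
# run the staged script on first boot
echo "/usr/local/postinstall/stage2.sh" >> /etc/rc.d/rc.local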

I want to thank my colleague from VMware GSS, Fintan Comyns, for pointing this out.

NFS based automated installs of ESX 4

Duncan Epping · Mar 26, 2010 ·

Just something I noticed today while testing an automated install from NFS. The arguments I pass to the installer are:

initrd=initrd.img mem=512m ksdevice=vmnic1 ip=192.168.1.123 netmask=255.255.255.0 gateway=192.168.1.1 ks=nfs://192.168.1.10:/nfs/install/ks.cfg quiet

Let’s focus on the part that is incorrect. With ESX 3 the following bit (part of the bootstrap above) would work:

ks=nfs://192.168.1.10:/nfs/install/ks.cfg

As of ESX 4 this doesn’t work anymore. When I do an “alt-f2”, go to /var/log and check the esx-installer.log file, it shows the following error:

mount: 192.168.1.10::nfs/install failed, reason given by server: Permission denied

After checking the permissions on my NFS share four times I was pretty certain that they could not be causing this issue. After trying various combinations I noticed that the format of the “ks” string has changed: as of ESX 4 you can’t use the second colon (:) anymore. So the correct format is:

ks=nfs://192.168.1.10/nfs/install/ks.cfg
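
For reference, the matching export on the NFS server would look something like the snippet below; the export path and the client subnet are assumptions based on the addresses used above, and read-only access is sufficient for the installer.

# /etc/exports on 192.168.1.10 (path and subnet are examples)
/nfs/install  192.168.1.0/24(ro,sync,no_root_squash)
# re-export and verify that the share is visible
exportfs -ra
showmount -e 192.168.1.10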

I still receive a warning, but the installer does continue. If anyone knows why the following message is displayed, please speak up:

No COS NICs have been added by the user

Network loss after HA initiated failover

Duncan Epping · Mar 25, 2010 ·

I had a discussion with one of my readers last week and just read this post on the VMTN community, which triggered this article.

When you create a highly available environment, take into account that you will need enough vSwitch ports available when a failover occurs. By default a vSwitch is created with 56 ports, and in general this is sufficient for most environments. However, when two hosts fail in a 10-host cluster you might end up with 60 or more VMs running on a single host. If that happens, several VMs would not have a vSwitch port assigned and would lose network connectivity.

The most commonly used command when creating an automated build procedure probably is:

esxcfg-vswitch -a vSwitch1

This would result in a vSwitch named “vSwitch1” with the default number of 56 ports. It is just as easy to create it with 128 ports, for instance:

esxcfg-vswitch -a vSwitch1:128

Always design for a worst-case scenario. Also be aware of the overhead: some ports are reserved for internal use, so you might want to factor in a few additional ports. In the example above, for instance, you will have 120 ports available for your VMs and not the 128 you specified.
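
To incorporate this into an automated build procedure, something along the lines of the snippet below could be used; the vSwitch name, port count, uplink and port group are assumptions for illustration.

# create the vSwitch with 128 ports instead of the default 56
esxcfg-vswitch -a vSwitch1:128
# attach an uplink and a port group (names are examples)
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -A "VM Network" vSwitch1
# list the vSwitches afterwards to verify the configured and used ports
esxcfg-vswitch -l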

