
Yellow Bricks

by Duncan Epping


Archives for 2009

How to change the SRM change of power state time out values

Duncan Epping · May 29, 2009 ·

One of my customers recently asked if it is possible to change the time-out for a power state change. Coincidentally, the same question was asked and answered on an internal mailing list at around the same time, so I thought it would be nice to document it. An example of a power state change task is the shutdown that SRM initiates when you run a recovery plan. The default value is 120 seconds, which might not be long enough and could lead to issues when a power off is forced. You can increase or decrease this value by editing the SRM configuration file (vmware-dr.xml). Look for the following section:

<Recovery>
<powerStateChangeTimeout>120</powerStateChangeTimeout>
</Recovery>

As stated above, the time-out value is in seconds. The default is 120 and it can be changed according to your requirements. The change takes effect once the SRM service has been restarted. (If you can’t find this section in the XML file, just add it.)

Patches for ESX 3.X

Duncan Epping · May 29, 2009 ·

VMware just released a bunch of patches for ESX 3.5 and 3.0.x. Those of you who have not yet upgraded to vSphere 4 might want to look into these new patches, as they contain security and critical updates.

VMware View 3.1 and new hot blog “That’s My View”

Duncan Epping · May 27, 2009 ·

A brand new version of View has just been released. You can find the download and the release notes here: VMware View 3.1 Download, Release Notes. There are a whole bunch of enhancements which definitely make this new release worth checking out. I’m not going to list them all here; just read the release notes.

Another thing I wanted to let you guys know about is a great “new” blog called That’s My View. As the name suggests, That’s My View mainly deals with desktop virtualization. Christoph Dommermuth started the blog but has since recruited multiple co-writers. If you want to stay up to date and get the latest tips and tricks, I suggest you head over and subscribe to their RSS feed, or just bookmark it.

Partitioning your ESX host – part II

Duncan Epping · May 27, 2009 ·

A while back I published an article on partitioning your ESX host. That article was based on ESX 3.5, and of course with vSphere this has changed slightly. Let me start by quoting a section from the install and configure guide.

You cannot define the sizes of the /boot, vmkcore, and /vmfs partitions when you use the graphical or text installation modes. You can define these partition sizes when you do a scripted installation.

The ESX boot disk requires 1.25GB of free space and includes the /boot and vmkcore partitions. The /boot partition alone requires 1100MB.

The reason for this is that the service console is now a VMDK. This VMDK is stored on the local VMFS volume by default, in the following location: esxconsole-<system-uuid>/esxconsole.vmdk. By the way, “/boot” has been increased as a safety net for future upgrades to ESX(i).

So for manual installations there are three fewer partitions to worry about. I would advise using the following sizes for the rest of the partitions, and I would also recommend renaming the local VMFS partition during installation. The default name is “Storage1”; my recommendation would be “<hostname>-localstorage”.

Primary:
/     - 5120MB
Swap  - 1600MB
Extended Partition:
/var  - 4096MB
/home - 2048MB
/opt  - 2048MB
/tmp  - 2048MB

With the disk sizes these days, you should have more than enough space for the roughly 18GB that ESX needs in total.
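As a quick sanity check, the suggested sizes (in MB) plus the 1.25GB boot-disk requirement quoted earlier do indeed add up to roughly 18GB:

```python
# Suggested service console layout from above, sizes in MB.
partitions = {
    "/": 5120,
    "swap": 1600,
    "/var": 4096,
    "/home": 2048,
    "/opt": 2048,
    "/tmp": 2048,
}
boot_disk = 1250  # /boot + vmkcore, the 1.25GB requirement quoted above

total_mb = sum(partitions.values()) + boot_disk
print(f"{total_mb} MB, roughly {total_mb / 1024:.1f} GB")  # ~17.8 GB
```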

Max amount of VMs per Host?

Duncan Epping · May 25, 2009 ·

If I were to ask you what the maximum number of VMs per host is for vSphere, what would your answer be?

My bet is that your answer would be 320 VMs. This is, of course, based on the “virtual machines per host” number shown on page 5 of the Configuration Maximums document for vSphere.

But is this actually the correct answer? No, it’s not. The correct answer is: it depends. Yes, it depends on whether or not you are using HA. The following restrictions apply to an HA cluster (page 7):

  • Max 32 Hosts per HA Cluster.
  • Max 1280 VMs per Cluster.
  • Max 100 VMs per Host.
  • If the number of Hosts exceeds 8 in a cluster, the limit of VMs per host is 40.
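Combining these limits, the effective per-host maximum inside an HA cluster can be sketched as a small helper. The function itself is mine; the numbers are the ones from the Configuration Maximums document quoted above.

```python
# Effective VM-per-host limit inside an HA cluster, per the numbers above.
def max_vms_per_host(hosts_in_cluster):
    if not 1 <= hosts_in_cluster <= 32:
        raise ValueError("an HA cluster supports at most 32 hosts")
    per_host = 100 if hosts_in_cluster <= 8 else 40
    # The 1280-VMs-per-cluster cap can also be the binding limit;
    # here I assume VMs are spread evenly across the hosts.
    per_cluster = 1280 // hosts_in_cluster
    return min(per_host, per_cluster)
```

With 8 hosts you can still run 100 VMs per host; add a ninth host and the limit drops to 40 per host, which is exactly why these numbers matter for cluster sizing.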

These are serious restrictions that need to be taken into account when designing a virtual environment. They touch literally everything, from your cluster size down to the hardware you’ve selected. I know these configuration maximums get revised with every update, but this is most definitely something one needs to consider and discuss with the customer…

Just wondering what your thoughts are,



About the Author

Duncan Epping is a Chief Technologist and Distinguished Engineering Architect at Broadcom. Besides writing on Yellow-Bricks, Duncan is the co-author of the vSAN Deep Dive and the vSphere Clustering Deep Dive book series. Duncan is also the host of the Unexplored Territory Podcast.
