Yellow Bricks

by Duncan Epping


high availability

vSphere 5.0 HA: Changes in admission control

Duncan Epping · Aug 3, 2011 ·

I just wanted to point out a couple of changes for HA in vSphere 5.0 with regard to admission control. Although they might seem minor, they are important to keep in mind when redesigning your environment. Let's discuss each of the admission control policies and list the changes underneath.

  • Host failures cluster tolerates
    Still uses the slot algorithm (sketched after this list). The major change here is that you can select a value larger than 4 hosts. The 4-host limit was imposed by the Primary/Secondary node concept; as this constraint has been lifted, it is now possible to select a value up to 31. So in the case of a 16-host cluster you can set the value to 15. (Yes, you could even set it to 31 as the UI doesn't limit you, but that wouldn't make sense, would it…) Another change is the default slot size for CPU, which has been decreased from 256MHz to 32MHz.
  • Percentage as cluster resources reserved
    This admission control policy has been overhauled, and it is now possible to select a percentage for CPU and memory separately. In other words, you can set CPU to 30% and memory to 25%. The algorithm hasn't changed, and this is still my preferred admission control policy!
  • Specify Failover host
    Allows you to select multiple hosts instead of just one. For instance, in an 8-host cluster you can specify two hosts as designated failover hosts. Keep in mind that these hosts will not be used during normal operations!
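
To make the slot algorithm a bit more concrete, below is a minimal sketch in Python. This is purely illustrative and not VMware code; the reservations, overhead values, and host capacities are made-up numbers, and real HA deals with details (powered-on VMs only, per-VM overhead) that are simplified here.

    # Illustrative sketch of the HA slot algorithm, not VMware code.
    # Slot size: the largest CPU reservation (32MHz minimum in 5.0) and
    # the largest memory reservation plus overhead across powered-on VMs.

    vms = [  # (cpu_reservation_mhz, mem_reservation_mb, mem_overhead_mb), made up
        (0, 0, 90),
        (500, 1024, 120),
        (1000, 2048, 150),
    ]
    hosts = [(9000, 32768)] * 4  # (cpu_capacity_mhz, mem_capacity_mb), made up

    cpu_slot = max(max(cpu for cpu, _, _ in vms), 32)  # 32MHz default minimum
    mem_slot = max(mem + ovh for _, mem, ovh in vms)

    # Slots per host, rounded down; sorted so the largest hosts are
    # removed first (HA assumes the worst case when counting failures).
    slots_per_host = sorted(
        (min(cpu // cpu_slot, mem // mem_slot) for cpu, mem in hosts),
        reverse=True,
    )
    used_slots = len(vms)  # one slot per powered-on VM

    tolerated = 0
    for failures in range(1, len(hosts)):
        if sum(slots_per_host[failures:]) < used_slots:
            break
        tolerated = failures

    print(f"CPU slot: {cpu_slot}MHz, memory slot: {mem_slot}MB, "
          f"host failures tolerated: {tolerated}")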

For more details on admission control I would like to refer you to the HA deepdive (not updated for 5.0 yet) or my book on vSphere 5.0 Clustering, which contains many examples of, for instance, how to correctly set the percentage.
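
For comparison, here is a minimal sketch of the percentage-based admission check with the separate CPU and memory values mentioned above. Again this is illustrative Python, not VMware code; the totals and reservations are made-up numbers, and the real computation also accounts for per-VM overhead.

    def has_enough_failover_capacity(cpu_reservations_mhz, mem_reservations_mb,
                                     total_cpu_mhz, total_mem_mb,
                                     cpu_reserved_pct, mem_reserved_pct):
        # Free capacity as a percentage of the cluster total; both CPU
        # and memory must stay at or above the configured percentages.
        free_cpu_pct = 100 * (total_cpu_mhz - sum(cpu_reservations_mhz)) / total_cpu_mhz
        free_mem_pct = 100 * (total_mem_mb - sum(mem_reservations_mb)) / total_mem_mb
        return free_cpu_pct >= cpu_reserved_pct and free_mem_pct >= mem_reserved_pct

    # CPU reserved at 30%, memory at 25%, as in the example above.
    print(has_enough_failover_capacity(
        [500, 1000, 2000], [1024, 2048, 4096],
        total_cpu_mhz=36000, total_mem_mb=131072,
        cpu_reserved_pct=30, mem_reserved_pct=25))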

das.slotCpuInMHz and das.slotMemInMB

Duncan Epping · Nov 18, 2010 ·

I was reading some threads on the VMTN forum and noticed a question on an advanced HA setting called “das.slotMemInMB”. The setting is briefly mentioned in my deep-dive, but after re-reading the section I think I could have been clearer in describing what it does, how it works, and when to use it. Of course, anything that goes for das.slotMemInMB also goes for das.slotCpuInMHz.

This is what I added to the deep-dive, but I also wanted to share it through a regular blog post to give it a bit more attention:

The advanced settings das.slotCpuInMHz and das.slotMemInMB allow you to specify an upper boundary for your slot size. When one of your VMs has an 8GB reservation, this setting can be used to define an upper boundary of, for instance, 1GB to avoid resource wastage and an overly conservative slot size. However, when das.slotMemInMB is configured to 2048MB and the highest reservation is 500MB, the slot size for memory will be 500MB + memory overhead. If a lower boundary needs to be specified, the advanced settings “das.vmMemoryMinMB” or “das.vmCpuMinMHz” can be used.
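
A minimal sketch of that boundary logic (illustrative Python, not VMware's actual implementation; the reservation and overhead numbers are made up):

    def memory_slot_size_mb(reservations_mb, overhead_mb,
                            slot_mem_in_mb=None, vm_memory_min_mb=0):
        # Baseline slot size: the highest reservation plus overhead.
        slot = max(r + overhead_mb for r in reservations_mb)
        if slot_mem_in_mb is not None:
            slot = min(slot, slot_mem_in_mb)  # das.slotMemInMB: upper boundary
        return max(slot, vm_memory_min_mb)    # das.vmMemoryMinMB: lower boundary

    # An 8GB reservation would dominate the slot size; cap it at 1GB.
    print(memory_slot_size_mb([8192, 512, 0], overhead_mb=100,
                              slot_mem_in_mb=1024))   # -> 1024

    # Cap set to 2048MB, but the highest reservation is only 500MB:
    # the slot size stays at 500MB + overhead, below the cap.
    print(memory_slot_size_mb([500, 0], overhead_mb=100,
                              slot_mem_in_mb=2048))   # -> 600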

HA, the missing link…

Duncan Epping · Oct 20, 2010 ·

One of the things that has always been missing from VMware's High Availability solution stack is application awareness. As I explained in one of my earlier posts, this is something that VMware is actively working on. Instead of creating a full application-clustering solution, VMware decided to extend “VM Monitoring” and created an API to enable app-level resiliency.

At VMworld I briefly sat down with Tom Stephens, who is part of the Technical Marketing team as an expert on HA and, of course, the recently introduced App Monitoring. Tom explained to me what App Monitoring enables our partners to do, using Symantec as the example. Symantec monitors the application and all its associated services and ensures appropriate action is taken depending on the type of failure. Now keep in mind, it is still a single node, so in the case of OS maintenance there will be a short downtime. However, I personally feel that this does bridge a gap; it could add that extra 9 and that extra level of assurance your customer needs for their tier-1 app.

Not only will it react to a failover, but it also ensures, for instance, that all services are stopped and started in the correct order if and when needed. Now think about that for a second: you are doing maintenance during the weekend and need to reboot some of the application servers, which are owned by someone else. This feature would enable you to reboot the machine and guarantee that the app will be started correctly, as it knows the dependencies!

Tom recently published a great article about this new HA functionality and its key benefits; make sure you read it on the VMware Uptime blog!

vSphere 4.1 HA feature, totally unsupported but too cool

Duncan Epping · Jul 16, 2010 ·

Early 2009 I wrote an article on the impact of primary nodes and secondary nodes on your design. It was primarily focused on blade environments and discussed how to avoid having all your primary nodes in a single chassis. If that single chassis failed, no VMs would be restarted, as one of the primary nodes is the “failover coordinator” and without a primary node to assign this role to, a failover can't be initiated.

With vSphere 4.1 a new advanced setting has been introduced. This setting is not even experimental; it is unsupported. I don't recommend anyone use this in a production environment; if you do want to play around with it, use your test environment. Here it is:

das.preferredPrimaries = hostname1 hostname2 hostname3
or
das.preferredPrimaries = 192.168.1.1,192.168.1.2,192.168.1.3

The list of hosts that are preferred as primary can be either space- or comma-separated. You don't need to specify 5 hosts; you can specify any number. If you specify 5 and all 5 are available, they will be the primary nodes in your cluster. If you specify more than 5, the first 5 in your list will become primary.
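
Purely for illustration, this is roughly how such a value could be parsed and trimmed down to the five primaries (a Python sketch with made-up host names; the actual election happens inside the AAM agent):

    def preferred_primaries(value, max_primaries=5):
        # Accept both space- and comma-separated lists, as described
        # above, and keep at most the first five entries.
        hosts = value.replace(",", " ").split()
        return hosts[:max_primaries]

    print(preferred_primaries("hostname1 hostname2 hostname3"))
    # -> ['hostname1', 'hostname2', 'hostname3']
    print(preferred_primaries("192.168.1.1,192.168.1.2,192.168.1.3,"
                              "192.168.1.4,192.168.1.5,192.168.1.6"))
    # -> first five only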

Please note that I haven’t personally tried it and I can’t guarantee it will work.

VMware View without HA?

Duncan Epping · Jul 15, 2010 ·

I was discussing something with one of my former colleagues a couple of days ago. He asked me what the impact was of running VMware View in an environment without HA.

To be honest, I am not a View SME, but I do know a thing or two about HA and vSphere in general. So the first thing I mentioned was that it wasn't a good idea. Although VDI in general is all about density, not running HA in these environments could lead to serious issues when a host fails.

Now, just imagine you have roughly 8 hosts in a DRS-only cluster on NFS-based storage, each running 80 desktop VMs. One of those hosts becomes isolated from the network… what happens?

  1. User connection is dropped
  2. VMDK Lock times out
  3. User tries to reconnect
  4. Broker powers on the VM on a new host

Now that sounds great, doesn't it? Well, in a way it does, but what happens when the host is no longer isolated?

Indeed, the VMs were still running, so basically you have a split-brain scenario. In the past, the only way to avoid this was to make sure you had HA enabled and had set HA to power off the VM.

But with vSphere 4 Update 2 a new mechanism has been introduced. I want to stress this, as some people have already assumed that it is part of AAM/HA. It actually isn't… The question whether to power off the VM to recover from the split-brain scenario is generated by “hostd” and answered by “vpxa”. In other words, with or without HA enabled, ESX(i) will recover from the split brain.

Again, I am most definitely not a Desktop/View guy, so I am wondering how the View experts out there feel about disabling HA on your View compute cluster. (Note that on the management layer HA should be enabled.)
