A while back I wrote down all the HA advanced options. With ESX 3.5 Update 2 VMware added a couple of extra advanced options; this is the complete list:
- das.failuredetectiontime – Timeout, in milliseconds, before the isolation response action is triggered (default: 15000 milliseconds).
- das.isolationaddress[x] – IP address the ESX host uses for its heartbeat, where [x] = 0‐9. By default the default gateway is used.
- das.usedefaultisolationaddress – Value can be true or false; set this to false when the default gateway, which is the default isolation address, shouldn’t be used for this purpose.
- das.poweroffonisolation – Values are false or true; this sets the isolation response. By default a VM will be powered off.
- das.vmMemoryMinMB – Minimum amount of memory (in MB) used in the admission control calculation for VMs without a memory reservation; higher values reserve more capacity for failovers.
- das.vmCpuMinMHz – Minimum amount of CPU (in MHz) used in the admission control calculation for VMs without a CPU reservation; higher values reserve more capacity for failovers.
- das.defaultfailoverhost – Value is a hostname; this host will be the primary failover host.
The new ones:
- das.failuredetectioninterval – Changes the heartbeat interval among HA hosts. By default, this occurs every second (1000 milliseconds).
- das.allowVmotionNetworks – Allows a NIC that is used for VMotion networks to be considered for VMware HA usage. This permits a host to have only one NIC configured for management and VMotion combined.
- das.allowNetwork[x] – Enables the use of port group names to control the networks used for VMware HA, where [x] = 0 – ?. You can set the value to "Service Console 2" or "Management Network" to use (only) the networks associated with those port group names in the networking configuration.
- das.isolationShutdownTimeout – Shutdown timeout for the isolation response “Shutdown VM”; the default is 300 seconds. In other words, if a VM hasn’t shut down cleanly within 300 seconds after the isolation response is triggered, it is powered off.
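For reference, these options are entered as name/value pairs in VirtualCenter, in the cluster’s VMware HA > Advanced Options dialog. A minimal sketch of what a set of entries could look like (the values and address below are illustrative assumptions, not recommendations):

```
das.failuredetectiontime        60000
das.usedefaultisolationaddress  false
das.isolationaddress0           192.168.1.254
das.poweroffonisolation         true
```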
Joop van Helvoort says
Congratulations on the first Google hit for the term das.allowNetwork.
(It’s weird when you get an alert containing a term and can’t find any info about it, which was the case until today.)
I was wondering what the [x] is for in das.allowNetwork[x]. Would you mind shedding some light on this?
Duncan Epping says
Thanks Joop,
You can set several networks; in other words:
das.allownetwork0 = service console
das.allownetwork1 = service console 2
Joop van Helvoort says
Hi,
Perfect, HA works again on our cluster. We had one ESX server with a hardware iSCSI card. Since that one server didn’t have an iSCSI network on the console, HA kept failing on that machine.
Thank you for your explanation of this option.
Scott says
One question… In my setup we have a dead-end switch connecting our 4 ESX servers, and it is used for VMotion only. Can I use that network even though the VMotion gateway address I entered is not really there (because traffic never leaves the switch)?
thanks.
Duncan Epping says
Well, das.allowvmotionnetworks is specifically for ESXi. You can’t actually use a dead gateway for this purpose, but you can specify not to use the default gateway and specify a separate isolation address.
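The combination Duncan describes could look like this as advanced options (the address is an illustrative assumption — it should be something that actually responds to ping on the isolated segment, e.g. another host’s VMotion interface):

```
das.usedefaultisolationaddress  false
das.isolationaddress0           10.0.0.11
```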
Simon Wilson says
Hi,
Do you know where the parameters are stored?
I’ve typed a duff second network into das.allownetworks, and if I try to delete it in VC, it gives the old “object reference not set to an instance of an object” message.
Duncan says
That’s a great question. I’ll have to look into that.
Dennes says
I read in the VMware forums that supposedly there’s a problem with HA in 3.5 U2 when used in a two-physical-server environment, and I was advised to stick with U1. Are you aware of any issues with 3.5 U2?
Daniel Whittaker says
Dennes: As far as I can tell, I am experiencing the following issues with U2:
– HA seems to work fine on a 2-ESX-server cluster in the normal sense: if one ESX host is isolated, the already-powered-on VMs on that host will automatically power up on the other host. However, I cannot manually power on any VMs on the one remaining host while the other host is down; I receive an “Insufficient resources” error pop-up.
– I can’t remember specifically, but I think VMware increased the resource requirements and the slot size per VM in U2. This makes resource reservations on your VMs particularly hard to manage if you’re running a decent number of VMs per ESX host.
– Finally, although I’m not sure this is strictly an HA issue, when I put an ESX host into Maintenance Mode it will not progress past 2%, because it will not automatically VMotion live VMs over to my remaining host. I have to manually initiate a migration, which completes successfully. Once I’ve manually live-migrated the powered-on VMs, the Entering Maintenance Mode task continues from where it paused and completes successfully.
If anyone has any insights on these issues, please do respond. They’re driving me a little nuts.
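Daniel’s “Insufficient resources” symptom is consistent with HA’s slot-based admission control: the slot size is driven by the largest reservations in the cluster, and capacity is computed assuming the configured number of host failures can still be tolerated. A rough Python sketch of that logic — simplified, not VMware’s actual algorithm (it ignores memory overhead and assumes identical hosts; all numbers are illustrative):

```python
# Rough sketch of HA (ESX 3.5-era) slot-based admission control.
# One VM with a big reservation shrinks the slot count for everyone.

def slot_size(vms, min_cpu_mhz=256, min_mem_mb=0):
    """Slot = largest reservation across all VMs, floored by the das.* minimums."""
    cpu = max([vm["cpu_res_mhz"] for vm in vms] + [min_cpu_mhz])
    mem = max([vm["mem_res_mb"] for vm in vms] + [min_mem_mb])
    return cpu, mem

def powered_on_capacity(hosts, vms, failover_hosts=1):
    """Slots available for powering on VMs while still tolerating
    `failover_hosts` host failures (conservative: reserves the largest hosts)."""
    cpu_slot, mem_slot = slot_size(vms)
    slots_per_host = [min(h["cpu_mhz"] // cpu_slot, h["mem_mb"] // mem_slot)
                      for h in hosts]
    usable = sorted(slots_per_host)[:len(hosts) - failover_hosts]
    return sum(usable)

hosts = [{"cpu_mhz": 8000, "mem_mb": 16384}] * 2   # two identical hosts
vms = [{"cpu_res_mhz": 0, "mem_res_mb": 2048}] * 6  # 2 GB memory reservations

print(powered_on_capacity(hosts, vms))       # slots left with both hosts up
print(powered_on_capacity(hosts[:1], vms))   # one host down: 0 -> power-on denied
```

With both hosts up there is spare capacity, but with one host down and admission control still reserving capacity for one failure, no slots remain, so every manual power-on is refused — matching the behaviour Daniel describes. Disabling strict admission control (or lowering reservations) is the usual way around it.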