
Yellow Bricks

by Duncan Epping



Compare your hosts…

Duncan Epping · Jan 19, 2009 ·

In my opinion, one of the most promising features of the upcoming version of ESX is definitely “Host Profiles”. With Host Profiles you can ensure that every single ESX host has been installed the same way. But this feature isn’t available yet, and in the meantime you would probably like to know whether all hosts in a cluster at least share the same LUNs and/or port groups.

It’s probably no surprise that Hugo Peeters created a script that does exactly that:

This PowerShell script generates an overview of any items that are not available to every ESX server in a VMware cluster. These items might prevent your VMs from being VMotioned by DRS or restarted by HA. Pretty serious business, I’d say!

The items involved are:
1. datastores
2. LUNs (important when using Raw Device Mappings)
3. port groups

Hugo exports the output to a nice HTML file, so there’s no need to import the results into Excel anymore.
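The core of such a comparison is simple set logic: collect the items each host sees, then report anything that isn’t visible to every host. Here is a minimal Python sketch of that logic; the inventory data below is hypothetical sample data (Hugo’s actual script uses PowerShell and the VI Toolkit to collect it from VirtualCenter):

```python
# Report items (datastores, LUNs, port groups) that are not visible
# to every host in a cluster. The inventory dict is hypothetical
# sample data; in practice it would be collected from VirtualCenter.
inventory = {
    "esx01": {"datastore1", "datastore2", "VM Network", "vMotion"},
    "esx02": {"datastore1", "datastore2", "VM Network"},
    "esx03": {"datastore1", "VM Network", "vMotion"},
}

# Items seen by at least one host, and items seen by all hosts.
all_items = set().union(*inventory.values())
common = set.intersection(*inventory.values())

# For each non-universal item, list the hosts that are missing it.
for item in sorted(all_items - common):
    missing = sorted(h for h, items in inventory.items() if item not in items)
    print(f"{item}: missing on {', '.join(missing)}")
```

Anything printed by this sketch is a potential VMotion/HA problem: a VM using that item can only run on the hosts that see it.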

Hop over to Hugo and pick up the script. The link is at the bottom of the article!

VMware Technical papers

Duncan Epping · Dec 16, 2008 ·

VMware recently published a whole bunch of must-read technical papers:

Storage Design Options for VMware Virtual Desktop Infrastructure

Companies planning to deploy VDI face decisions about the use of both local and shared storage, and in the case of shared storage solutions, choosing between the differing technologies available in today’s market. Selecting the appropriate storage model is important for both performance and cost reasons. Certain solutions require less overhead than others, as do different implementations of the same technology. Costs can vary greatly depending on which storage options are chosen. Fortunately, organizations can leverage a myriad of best practices to help drive these costs down while improving performance. This paper provides information on technical concepts related to storage implementations in a VMware® Virtual Desktop Infrastructure (VDI) environment.

VMware View Reference Architecture Kit

This reference architecture kit comprises four distinct papers written by VMware and our supporting partners to serve as a guide in the early phases of planning, design, and deployment of VMware View based solutions. The building block approach uses common components to minimize support costs and deployment risks during the planning of VMware View based deployments.

SQL Server Workload Consolidation

Database workloads are very diverse. While most database servers are lightly loaded, larger database workloads can be resource-intensive, exhibiting high I/O rates or consuming large amounts of memory. With improvements in virtualization technology and hardware, even servers running large database workloads run well in virtual machines. Servers running Microsoft’s SQL Server, among the top database server platforms in the industry today, are no exception.

Using IP Multicast with VMware

IP multicast is a popular protocol implemented in many applications for simultaneously and efficiently delivering information to multiple destinations. Multicast sources send single copies of information over the network and let the network take responsibility for replicating and forwarding the information to multiple recipients.

Scripted installs and nic teaming

Duncan Epping · Nov 7, 2008 ·

Before ESX 3.5 U3, it was impossible to add an additional NIC to a team as active during a scripted installation without resorting to editing the esx.conf file:

# Change maxActive from 1 to 2 for the active/standby team
mv /etc/vmware/esx.conf /tmp/esx.conf.bak
/bin/sed -e 's|net/vswitch/child\[0001\]/teamPolicy/maxActive = "1"|net/vswitch/child\[0001\]/teamPolicy/maxActive = "2"|g' /tmp/esx.conf.bak > /etc/vmware/esx.conf

So as you can see, a sed command changes maxActive from 1 to 2. But I’d rather not use these kinds of solutions, i.e. editing esx.conf directly. As of ESX 3.5 U3 that’s no longer necessary; VMware fixed this issue:

Network adapters lose bonding during scripted installation
The esxcfg-vswitch -L command now works as expected and with the same functionality as in 3.0.x.

During a scripted installation, the following two commands did not result in a bonded pair of active network adapters on virtual switch VS_VM1. Instead, vmnic3 became the active adapter and vmnic4 became the standby adapter.
esxcfg-vswitch -L vmnic3 VS_VM1
esxcfg-vswitch -L vmnic4 VS_VM1

So just use esxcfg-vswitch again and don’t edit the esx.conf anymore!

Release notes VC U3

Duncan Epping · Oct 7, 2008 ·

There seems to be an incorrectly documented advanced option in the VC U3 release notes.

HA network compliance check
During the configuration of HA in VirtualCenter 2.5 Update 2, the Task & Events tabs might display the following error message and recommendation:
HA agent on <esxhostname> in cluster <clustername> in <datacenter> has an error Incompatible HA Network:
Consider using the Advanced Cluster Settings das.allowNetwork to control network usage.

Starting with VirtualCenter 2.5 Update 2, HA has an enhanced network compliance check to increase cluster reliability. This enhanced network compliance check helps to ensure correct cluster-wide heartbeat network paths. VirtualCenter 2.5 Update 3 allows you to bypass this check to prevent HA configuration problems. To bypass the check, add das.bypassNetworkVerification=yes to the HA advanced settings.

The described option should actually be “das.bypassNetCompatCheck” with the values “true” or “false”. So keep this in mind!

Update: HA Advanced Options

Duncan Epping · Oct 6, 2008 ·

A while back I wrote down all the HA advanced options. With VirtualCenter 2.5 Update 3 (and the ESX patch that came with it) VMware added some extra advanced options. This is the complete list:

  • das.failuredetectiontime – Number of milliseconds before the isolation response action is triggered (default: 15000 milliseconds).
  • das.isolationaddress[x] – IP address the ESX host uses for its heartbeat, where [x] = 0-9. It will use the default gateway by default.
  • das.usedefaultisolationaddress – Value can be true or false; set it when the default gateway, which is the default isolation address, shouldn’t be used for this purpose.
  • das.poweroffonisolation – Values are true or false; this sets the isolation response. By default a VM will be powered off.
  • das.vmMemoryMinMB – Higher values will reserve more space for failovers.
  • das.vmCpuMinMHz – Higher values will reserve more space for failovers.
  • das.defaultfailoverhost – Value is a hostname; this host will be the primary failover host.
  • das.failuredetectioninterval – Changes the heartbeat interval among HA hosts. By default, this occurs every second (1000 milliseconds).
  • das.allowVmotionNetworks – Allows a NIC that is used for VMotion networks to be considered for VMware HA usage. This permits a host to have only one NIC configured for management and VMotion combined.
  • das.allowNetwork[x] – Enables the use of port group names to control the networks used for VMware HA, where [x] = 0 – ?. You can set the value to “Service Console 2” or “Management Network” to use (only) the networks associated with those port group names in the networking configuration.
  • das.isolationShutdownTimeout – Shutdown timeout for the isolation response “Shutdown VM”, default is 300 seconds. In other words, if a VM hasn’t shut down cleanly when the isolation response occurs, it is powered off after 300 seconds.
  • das.bypassNetCompatCheck – Disables the “compatible network” check for HA that was introduced with Update 2. Default value is “false”; setting it to “true” disables the check.

Virtual Machine Monitoring HA advanced options:
  • das.failureInterval – The polling interval for failures. Default value is 30 seconds.
  • das.maxFailureWindows – Minimum number of seconds between failures. Default value is 3600 seconds; if a VM fails again within 3600 seconds, VM HA doesn’t restart the machine.
  • das.maxFailures – Maximum number of VM failures; once this number is reached, VM HA doesn’t restart the machine automatically. Default value is 3.
  • das.minUptime = The minimum uptime in seconds before VM HA starts polling. The default value is 120 seconds.
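To make the VM monitoring options above concrete, here is a small Python sketch of how those thresholds might interact. The exact algorithm inside HA is not public, so this is purely an illustration of the documented behavior, not VMware’s implementation; the function name and sample inputs are hypothetical:

```python
# Illustrative model of the VM monitoring thresholds described above.
# This is NOT VMware's implementation; it only shows how
# das.maxFailures, das.maxFailureWindows and das.minUptime combine.

def should_restart(failure_times, now, uptime,
                   max_failures=3,        # das.maxFailures
                   failure_window=3600,   # das.maxFailureWindows (seconds)
                   min_uptime=120):       # das.minUptime (seconds)
    """Return True if VM HA would restart the VM after a failure at `now`."""
    if uptime < min_uptime:
        return False  # polling hasn't started yet
    # Count failures that fall inside the failure window.
    recent = [t for t in failure_times if now - t <= failure_window]
    # Stop restarting once the failure count inside the window is reached.
    return len(recent) < max_failures

# A VM that already failed three times within an hour is not restarted again,
# while a VM with a single failure in the window still is.
print(should_restart([100, 900, 1800], now=2000, uptime=500))
print(should_restart([100], now=2000, uptime=500))
```

In other words, raising das.maxFailures or shrinking das.maxFailureWindows makes VM HA more forgiving, while das.minUptime gives a freshly restarted VM time to boot before monitoring kicks in.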

