Yellow Bricks

by Duncan Epping


VMworld 2010: Labs are the place to be!

Duncan Epping · Aug 29, 2010

As some of you might know, I am not only doing a session at VMworld 2010, but I am also a Lab Captain. We have been working really hard over the last couple of months to get the labs up and running for you guys.

Over the last three days it has been chaos here at VMworld: setting up, testing and stress testing labs, and of course making some last-minute changes to make sure all of you guys have a great experience.

I must say, looking at the lab environment it has been worth it. We are not talking about a couple of labs here. No, we are talking about 480 seats and roughly 30 different labs, ranging from “VMware ESXi Remote Management Utilities” to “Intro to Zimbra Collaboration Suite”, and even products which will be formally announced tomorrow.

I took a couple of pictures of the labs this morning, just to get you guys as excited as we are:

We all hope you will enjoy the Labs at VMworld 2010; looking at the content and the set-up, I am confident you will! Enjoy,

Soon in a book store near you! HA and DRS Deepdive

Duncan Epping · Aug 25, 2010

Over the last couple of months Frank Denneman and I have been working really hard on a secret project. Although we have spoken about it a couple of times on Twitter, the topic was never revealed.

Months ago I was thinking about what a good topic would be for my next book. As I had already written a lot of articles on HA, it made sense to combine these and do a full deepdive on HA. However, a VMware cluster is not just HA; when you configure a cluster there is something else that is usually enabled, and that is DRS. As Frank is the Subject Matter Expert on Resource Management / DRS, it made sense to ask him if he was up for it or not… Needless to say, Frank was excited about the opportunity, and that was when our new project was born: VMware vSphere 4.1 – HA and DRS deepdive.

As both Frank and I are VMware employees, we contacted our management to see what the options were for releasing this information to the market. We are very excited that we have been given the opportunity to be the first official publication as part of a brand-new VMware initiative, codenamed Rome. The idea behind Rome, along with pertinent details, will be announced later this year.

Our book is currently going through the final review/editing stages. For those wondering what to expect, a sample chapter can be found here. The primary audience for the book is anyone interested in high availability and clustering. There is no prerequisite knowledge needed to read the book; however, it will consist of roughly 220 pages with all the detail you want on HA and DRS. It will not be a “how to” guide; instead, it will explain the concepts and mechanisms behind HA and DRS, such as Primary Nodes, Admission Control Policies, Host Affinity Rules and Resource Pools. On top of that, we will include basic design principles to support the decisions that will need to be made when configuring HA and DRS or when designing a vSphere infrastructure.

I guess it is unnecessary to say that both Frank and I are very excited about the book. We hope that you will enjoy reading it as much as we did writing it. Stay tuned for more info, the official book title, and the URL to order the book. We hope to be able to give you an update soon.

Frank and Duncan

Two new HA Advanced Settings

Duncan Epping · Aug 23, 2010

Just noticed a couple of new advanced settings in the vCenter Performance Best Practices whitepaper.
  • das.perHostConcurrentFailoversLimit
    When multiple VMs are restarted on one host, up to 32 VMs will be powered on concurrently by default. This is to avoid resource contention on the host. This limit can be changed through the HA advanced option: das.perHostConcurrentFailoversLimit. Setting a larger value will allow more VMs to be restarted concurrently and might reduce the overall VM recovery time, but the average latency to recover individual VMs might increase. We recommend using the default value.
  • das.sensorPollingFreq
    The das.sensorPollingFreq option controls the HA polling interval. HA polls the system periodically to update the cluster state with such information as how many VMs are powered on, and so on. The polling interval was 1 second in vSphere 4.0. A smaller value leads to faster VM power on, and a larger value leads to better scalability if a lot of concurrent power operations need to be performed in a large cluster. The default is 10 seconds in vSphere 4.1, and it can be set to a value between 1 and 30 seconds.

I want to note that I would not recommend changing these; there is a very good reason the defaults have been selected. Changing them can lead to instability, although they might come in handy when troubleshooting.
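That said, for anyone who wants to play with them in a lab, both are regular HA advanced options on the cluster. Below is a minimal sketch using pyVmomi (the Python bindings for the vSphere API); the vCenter address, credentials and cluster name are placeholders, and the values shown are simply the defaults described above.

```python
# Hedged sketch: set HA advanced options on a cluster through the vSphere API.
# The vCenter address, credentials and cluster name below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_cluster(content, name):
    """Return the ClusterComputeResource with the given name, or None."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next((c for c in view.view if c.name == name), None)
    view.DestroyView()
    return cluster

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)
try:
    cluster = find_cluster(si.RetrieveContent(), "Cluster01")

    spec = vim.cluster.ConfigSpecEx()
    spec.dasConfig = vim.cluster.DasConfigInfo(option=[
        vim.option.OptionValue(key="das.perHostConcurrentFailoversLimit", value="32"),
        vim.option.OptionValue(key="das.sensorPollingFreq", value="10"),
    ])
    # modify=True merges these options into the existing cluster configuration.
    cluster.ReconfigureComputeResource_Task(spec, modify=True)
finally:
    Disconnect(si)
```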

HA and a Metrocluster

Duncan Epping · Aug 20, 2010

I was reading an excellent article on NetApp MetroClusters and VM-Host affinity rules by Larry Touchette the other day. That article is based on the tech report TR-3788, which covers the full solution but does not include the 4.1 enhancements.

The main focus of the article is on VM-Host Affinity Rules. Great stuff, and it will “ensure” you keep your IO local. As explained, when a Fabric MetroCluster is used, the increased latency of going across, for instance, 80 km of fibre will be substantial. By using VM-Host Affinity Rules, where a group of VMs is linked to a group of hosts, this “overhead” can be avoided.

Now, the question of course is: what about HA? The example NetApp provided shows 4 hosts. With only four hosts we all know, hopefully at least, that all of these hosts will be primary nodes. So even if a set of hosts fails, one of the remaining hosts will be able to take over the failover coordinator role and restart the VMs. If you have up to an 8-host cluster that is still very much true, as with a maximum of 5 primaries and 4 hosts on each side at least a single primary will exist in each site.

But what about more than 8 hosts? What will happen when the link between the sites fails? How do I ensure each site has primaries left to restart VMs if needed?

Take a look at the following diagram I created to visualize all of this:

We have two datacenters here, Datacenter A and B. Both have their own FAS with two shelves and their own set of VMs which run on that FAS. Although storage will be mirrored, there is still only one truly active copy of each datastore. In this case VM-Host Affinity Rules have been created to keep the VMs local, in order to avoid IO going across the wire. This is very similar to what NetApp described.

However, in my case there are 5 hosts in total that are a darker shade of green. These hosts were specified as the preferred primary nodes. This means that each site will have at least 2 primary nodes.
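To get a feel for the odds, here is a small back-of-the-envelope sketch in Python. It assumes, purely for illustration, that the 5 primaries end up on a random set of hosts; in reality HA picks primaries based on the order in which hosts join the cluster and on re-elections, so treat the numbers as a rough indication only.

```python
# Illustrative model only: chance that all 5 HA primaries land in a single site
# if the primary hosts were chosen uniformly at random across two equal sites.
from math import comb

def p_all_primaries_one_site(hosts_per_site, primaries=5):
    total_hosts = 2 * hosts_per_site
    if hosts_per_site < primaries:
        return 0.0  # a single site cannot hold all primaries
    one_site = comb(hosts_per_site, primaries) / comb(total_hosts, primaries)
    return 2 * one_site  # either site could end up with all of them

for per_site in (4, 5, 8, 16):
    print(f"{2 * per_site:2d} hosts ({per_site} per site): "
          f"{p_all_primaries_one_site(per_site):.1%}")
```

With 4 hosts per site the chance is zero, which matches the statement above; from 5 hosts per site onwards the chance is small but real, and specifying preferred primaries removes it entirely.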

Let's assume the link between Datacenter A and B dies. Some might assume that this will trigger an HA Isolation Response, but it actually will not.

The reason for this is that an HA primary node still exists in each site. The Isolation Response is only triggered when no heartbeats are received. As a primary node sends heartbeats to both the primary and secondary nodes, a heartbeat will always be received. Again, I can't emphasize this enough: an Isolation Response will not be triggered.

However, if the link between these datacenters dies, it will appear to Datacenter A as if Datacenter B is unreachable, and one of the primaries in Datacenter A will initiate restart tasks for the allegedly impacted VMs, and vice versa. But as the Isolation Response has not been triggered, a lock on the VMDKs will still exist and it will be impossible to restart the VMs.

These VMs will remain running within their own site. Although it might appear on both ends that the other datacenter has died, HA is “smart” enough to detect that it hasn't, and it will be up to you to decide whether you want to fail over those VMs or not.

I am just so excited about these developments that I can't get enough of it. Although the “das.preferredprimaries” setting is not supported as of writing, I thought this was cool enough to share with you guys. I also want to point out that the diagram shows 2 isolation addresses; this is of course only needed when the specified gateway is not accessible at both ends when the network connection between the sites is dead. If the gateway is accessible at both sites even in case of a network failure, only 1 isolation address, which can be the default gateway, is required.
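For completeness, the two isolation addresses shown in the diagram are also just HA advanced options on the cluster. The sketch below reuses the same pyVmomi approach as the advanced-settings example earlier; the `cluster` argument is a ClusterComputeResource obtained the same way, and the addresses are obviously placeholders, one pingable address per site.

```python
# Hedged sketch: configure two site-local isolation addresses as HA advanced
# options. 'cluster' is a vim.ClusterComputeResource, obtained as shown earlier.
from pyVmomi import vim

def set_isolation_addresses(cluster, site_a_addr, site_b_addr):
    spec = vim.cluster.ConfigSpecEx()
    spec.dasConfig = vim.cluster.DasConfigInfo(option=[
        # Do not rely on the default gateway as the only isolation address.
        vim.option.OptionValue(key="das.usedefaultisolationaddress", value="false"),
        vim.option.OptionValue(key="das.isolationaddress1", value=site_a_addr),
        vim.option.OptionValue(key="das.isolationaddress2", value=site_b_addr),
    ])
    return cluster.ReconfigureComputeResource_Task(spec, modify=True)

# Example usage, with placeholder addresses, one in each datacenter:
# set_isolation_addresses(cluster, "10.1.0.1", "10.2.0.1")
```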

Layer 2 Adjacency for vMotion (vmkernel)

Duncan Epping · Aug 19, 2010

Recently I had a discussion around Layer 2 adjacency for the vMotion (vmkernel interface) network, meaning that all vMotion interfaces, aka vmkernel interfaces, would be required to be on the same subnet, as otherwise vMotion would not function correctly.

Now, I remember when this used to be part of the VMware documentation, but that requirement is nowhere to be found anymore. I even recall documentation for previous versions stating that it was “recommended” to have layer-2 adjacency, but even that is nowhere to be found. The only reference I could find was an article by Scott Lowe in which Paul Pindell from F5 chips in and debunks the myth, but as Paul is not a VMware spokesperson it is not definitive in my opinion. Scott also just published a rectification of his article after we discussed this myth a couple of times over the last week.

So what are the current Networking Requirements around vMotion according to VMware’s documentation?

  • On each host, configure a VMkernel port group for vMotion
  • Ensure that virtual machines have access to the same subnets on source and destination hosts
  • Ensure that the network labels used for virtual machine port groups are consistent across hosts

Now that got me thinking: why would it even be a requirement? As far as I know vMotion is all layer three today, and besides that, the vmkernel interface even has the option to specify a gateway. On top of that, vMotion does not check whether the source vmkernel interface is on the same subnet as the destination interface, so why would we care?

Now that makes me wonder where this myth is coming from… Have we all assumed L2 adjacency was a requirement? Have the requirements changed over time? Has the best practice changed?

Well, one of those is easy to answer: no, the best practice hasn't changed. Minimizing the number of hops to reduce latency is, and always will be, a best practice. Will vMotion work when your vmkernel interfaces are in two different subnets? Yes, it will. Is it supported? No, it is not, as it has not explicitly gone through VMware's QA process. However, I have had several discussions with engineering and they promised me a more conclusive statement will be added to our documentation and the KB in order to avoid any misunderstanding.
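If you are curious where your vMotion vmkernel interfaces actually sit, the sketch below lists the IP and subnet of every vMotion-enabled vmkernel interface per host. It is a hedged example built on pyVmomi and Python's ipaddress module; `content` is the ServiceInstance content from a connection like the one in the earlier sketch, and the code only reports, it does not change anything.

```python
# Hedged sketch: report each host's vMotion-enabled vmkernel interfaces and
# the subnet they live in. 'content' comes from si.RetrieveContent().
import ipaddress
from pyVmomi import vim

def report_vmotion_vmknics(content):
    host_view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in host_view.view:
        net_cfg = host.configManager.virtualNicManager.QueryNetConfig("vmotion")
        if not net_cfg:
            continue
        selected = set(net_cfg.selectedVnic or [])
        for vnic in net_cfg.candidateVnic or []:
            if vnic.key not in selected:
                continue  # skip vmkernel interfaces not enabled for vMotion
            ip = vnic.spec.ip.ipAddress
            mask = vnic.spec.ip.subnetMask
            subnet = ipaddress.ip_network(f"{ip}/{mask}", strict=False)
            print(f"{host.name}: {vnic.device} {ip} in {subnet}")
    host_view.DestroyView()
```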

Hopefully this will debunk, once and for all, the myth that has been floating around for long enough. As stated, it will work; it just hasn't gone through QA and as such cannot be supported by VMware at this point in time. I am confident, though, that over time this statement will change to increase flexibility.

References:

  • Integrating VMs into the Cisco Datacenter Architecture (ESX 2.5)
  • Deploying BIG-IP to enable LD vMotion
  • vMotion Practicality
  • VMware KB article 1002662 (http://kb.vmware.com/kb/1002662)