
Yellow Bricks

by Duncan Epping



Management Cluster / vShield Resiliency?

Duncan Epping · Feb 14, 2011 ·

I was reading Scott’s article about using dedicated clusters for management applications, which was quickly followed by a bunch of quotes turned into an article by Beth P. from TechTarget. Scott mentions that he had originally asked on Twitter whether people were doing dedicated management clusters and, if so, why.

As he mentioned, only a few responded, and the reason for that is simple: hardly anyone is doing dedicated management clusters these days. The few environments I have seen doing it were large enterprises or service providers where this was part of an internal policy. Basically, in those cases a policy would state that “management applications cannot be hosted on the platform they are managing”, and some even went a step further and did not allow these management applications to be hosted in the same physical datacenter. Scott’s article was quickly turned into an “availability concerns” article by TechTarget, to which I want to respond. I am by no means a vShield expert, but I do know a thing or two about the product and the platform it is hosted on.

I’ll use vShield Edge and vShield Manager as an example, since Scott’s article mentions vCloud Director, which leverages vShield Edge. This means that vShield Manager needs to be deployed in order to manage the edge devices. I was part of the team responsible for the vCloud Reference Architecture and also part of the team that designed and deployed the first vCloud environment in EMEA. Our customer had their worries as well about the resiliency of vShield Manager and vShield Edge, but as they are virtual machines they can easily be “protected” by leveraging vSphere features. One thing I want to point out though: if vShield Manager is down, vShield Edge will continue to function, so no need to worry there. I created the following table to show how vShield Manager and vShield Edge can be “protected”.

Product          | vMotion | VMware HA | VM Monitoring | VMware FT
vShield Manager  | Yes (*) | Yes       | Yes           | Yes
vShield Edge     | Yes (*) | Yes       | Yes           | Yes
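
These per-VM HA protections can also be set programmatically. Below is a minimal pyVmomi sketch of my own (not from the original post) that gives the vShield Manager VM a high HA restart priority and enables per-VM “VM Monitoring”; the vCenter address, credentials, cluster name and VM name are placeholders.

import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first managed object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(o for o in view.view if o.name == name)
    finally:
        view.Destroy()

cluster = find_by_name(vim.ClusterComputeResource, "Mgmt-Cluster")   # placeholder name
vsm = find_by_name(vim.VirtualMachine, "vShield-Manager")            # placeholder name

# Per-VM HA override: restart this VM first and monitor its VMware Tools heartbeats.
das_settings = vim.cluster.DasVmSettings(
    restartPriority="high",
    vmToolsMonitoringSettings=vim.cluster.VmToolsMonitoringSettings(
        enabled=True,
        vmMonitoring="vmMonitoringOnly"))

spec = vim.cluster.ConfigSpecEx(dasVmConfigSpec=[
    vim.cluster.DasVmConfigSpec(
        operation="add",
        info=vim.cluster.DasVmConfigInfo(key=vsm, dasSettings=das_settings))])

cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)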

Not only can you leverage these standard vSphere technologies, there is more that can be used:

  • Scheduled live clone of vShield Manager through vCenter
  • Scheduled configuration backup of vShield Manager (*)

Please don’t get me wrong here, there are always ways to get locked out, but as Edward Haletky stated in Beth P.’s article: “In fact, the way vShield Manager locks down the infrastructure upon failure is in keeping with longstanding security best practices”. I also would not want my door to open automatically when there is something wrong with my lock. The trick, though, is to prevent a “broken lock” situation from occurring and to utilize vSphere capabilities in such a way that the last known state can be safely recovered if it does.

As always, an architect/consultant will need to work with all the requirements and constraints and, based on the capabilities of the product, come up with a solution that offers maximum resiliency. With the options mentioned above, you can’t tell me that VMware doesn’t provide them.

Binding a vCloud Director Provider vDC to an ESX Host?

Duncan Epping · Dec 27, 2010 ·

One of our partners was playing around with vCloud Director and noticed that they could create a Provider vDC and link it directly to an ESX host. vCloud Director did not complain about it, so they figured it would be okay. However, DRS is a requirement for vCloud Director. One of the reasons for this is that vCloud Director leverages resource pools to ensure tenants receive what they are entitled to.

But back to the issue: they created the Provider vDC and went on to create an Org vDC, and even that worked fine… Next stop was the Organization Network. In order to create one you need to select a network pool at some point, and for some weird reason that didn’t work. After some initial emailing back and forth I noticed they hadn’t selected a cluster or resource pool but an ESX host. After creating a new Provider vDC based on a vSphere resource pool, all of a sudden everything started working. Although I cannot really say why it is exactly this part that causes an issue, I can tell you that DRS is a hard requirement and not just a suggestion!
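
As a side note, it is easy to check up front whether what you are about to back a Provider vDC with is a DRS-enabled cluster. Here is a minimal pyVmomi sketch of my own (vCenter address and credentials are placeholders):

import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Walk all clusters and report whether DRS is enabled; only DRS-enabled clusters
# (or resource pools within them) are suitable backing for a vCD Provider vDC.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
for cluster in view.view:
    drs_enabled = bool(cluster.configuration.drsConfig.enabled)
    note = "" if drs_enabled else "  <- not suitable for a Provider vDC"
    print(f"{cluster.name}: DRS enabled = {drs_enabled}{note}")
view.Destroy()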

vCloud Director Demo, creation of an Organization and its resources

Duncan Epping · Dec 10, 2010 ·

At the Dutch VMUG I presented two sessions. One was about HA/DRS and the other was about vCD. The vCD session contained a live demo, and as a backup I decided to record the demo in case, for instance, the internet connection went down. The video shows the creation of an Organization, Org vDC, Org Network and of course a vApp. I didn’t want the video to go to waste, so I decided to share it with all of you. I hope you will enjoy it.

RE: Maximum Hosts Per Cluster (Scott Drummonds)

Duncan Epping · Nov 29, 2010 ·

I love blogging because of the discussions you sometimes get into. One of the bloggers I highly respect and closely follow is EMC’s vSpecialist Scott Drummonds (former VMware performance guru). Scott posted a question on his blog about what the size of a cluster should be. Scott discussed this with Dave Korsunsky and Dan Anderson, both VMware employees, and more or less came to the conclusion that 10 is probably a good number.

So, have I given a recommendation?  I am not sure.  If anything I feel that Dave, Dan and I believe that a minimum cluster size needs should be set to guarantee that the CPU utilization target, and not the HA failover capacity, is the defining the number of wasted resources.  This means a minimum cluster of something like four or five hosts.  While neither of us claims a specific problem that will occur with very large clusters, we cannot imagine the value of a 32-host cluster.  So, we think the right cluster size is somewhere shy of 10.

And of course they have a whole bunch of arguments for both large (12+) and small (8-) clusters… which I have summarized below for your convenience:

  • Pro Large: DRS efficiency.  This was my primary claim in favor of 32-host clusters.  My reasoning is simple: with more hosts in the cluster there are more CPU and memory resource holes into which DRS can place running virtual machines to optimize the cluster’s performance.  The more hosts, the more options to the scheduler.
  • Pro Small: DRS does not make scheduling decisions based on the performance characteristics of the server so a new, powerful server in a cluster is just as likely to receive a mission-critical virtual machine as older, slower host.  This would be unfortunate if a cluster contained servers with radically different–although EVC compatible–CPUs like the Intel Xeon 5400 and Xeon 5500 series.
  • Pro Small: By putting your mission-critical applications in a cluster of their own your “server huggers” will sleep better at night.  They will be able to keep one eye on the iron that can make or break their job.
  • Pro Small: Cumbersome nature of their change control.  Clusters have to be managed to a consistent state and the complexity of this process is dependent on the number of items being managed.  A very large cluster will present unique challenges when managing change.
  • Pro Small: To size a 4+1 cluster to 80% utilization after host failure, you will want to restrict CPU usage in the five hosts to 64%.  Going to a 5+1 cluster results in a pre-failure CPU utilization target of 66%.  The increases slowly approach 80% as the clusters get larger and larger.  But, you can see that the incremental resource utilization improvement is never more than 2%.  So, growing a cluster slightly provides very little value in terms of resource utilization.
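
To make the arithmetic in that last bullet concrete, here is a quick sketch of my own of the N+1 sizing math, using the 80% post-failure utilization target from the quote:

# To run at 80% CPU utilization after losing one host, the N surviving hosts must
# carry the load of all N+1 hosts, so the pre-failure per-host target is 0.80 * N / (N + 1).
post_failure_target = 0.80
previous = None
for n in range(4, 17):                       # clusters from 4+1 up to 16+1
    pre_failure = post_failure_target * n / (n + 1)
    delta = "" if previous is None else f" (+{(pre_failure - previous):.1%})"
    print(f"{n}+1 hosts: pre-failure CPU target = {pre_failure:.1%}{delta}")
    previous = pre_failure
# 4+1 -> 64.0%, 5+1 -> 66.7%, ... each extra host buys a smaller and smaller improvement.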

It is probably an endless debate, and the arguments for both “Pro Large” and “Pro Small” are all very valid, although I seriously disagree with their conclusion of not seeing the value of a 32-host cluster. As always, it fully depends. On what, you might say; why would you ever want a 32-host cluster? Well, for instance when you are deploying vCloud Director. Clusters are currently the boundary for your Provider vDC, and who wants to give a customer 6 vDCs instead of just 1 because you limited your cluster size to 6 hosts instead of leaving the option open to go to the max? This might just be an exception and nowhere near reality for some of you, but I wanted to use it as an example to show that you need to take many factors into account. Now I am not saying you should go to 32 hosts, but at least leave the option open.

One of the arguments I do want to debate is the change control argument. Again, this used to be valid in a lot of enterprise environments where ESX was used. I am deliberately using “ESX” and “enterprise” here, as the reality is that many companies don’t even have a change control process in place. (I worked for a few large insurance companies that didn’t!) On top of that, there is a large discrepancy in the amount of work associated with patching ESX versus ESXi. I have spent many weekends upgrading ESX, but today I literally spent minutes upgrading ESXi. The impact and risks associated with patching have most certainly decreased with ESXi in combination with VUM and its staging options. On top of that, many organizations treat ESXi as an appliance, and with stateless ESXi and the Auto Deploy appliance around the corner I guess that notion will only grow to become a best practice.

A couple of arguments that I have often seen being used to restrict the size of a cluster are the following:

  • HA limits (a different maximum number of VMs per host when a cluster has more than 8 hosts)
  • SCSI Reservation Conflicts
  • HA Primary nodes

Let me start by saying that for every new design you create, you should challenge your design considerations and best practices… are they still valid?

The first one is obvious, as most of you know by now that there is no such thing anymore as an 8-host boundary with HA. The second one needs some explanation. Around the VI3 time frame, cluster sizes were often limited because of possible storage performance issues. These alleged issues were mainly blamed on SCSI reservation conflicts, which were caused by having many VMs on a single LUN in a large cluster. Whenever a metadata update was required, the LUN would be locked by a host, which could increase overall latency. To avoid this, people would keep the number of VMs per VMFS volume low (10 to 15) and keep the number of VMFS volumes per cluster low… also resulting in a fairly low consolidation ratio, but hey, 10:1 beats physical.

Those arguments used to be valid; however, things have changed. vSphere 4.1 brought us VAAI, which is a serious game changer in terms of SCSI reservations. I understand that for many storage platforms VAAI is currently not supported… However, the original mechanism used for SCSI reservations has also improved significantly over time (optimistic locking), which in my opinion reduced the need for many small LUNs, an approach that would eventually limit you from a maximum-LUNs-per-host perspective. So with VAAI or optimistic locking, and of course NFS, the argument for small clusters is not really valid anymore. (Yes, there are exceptions.)

The one design consideration that is missing, and which is crucial in my opinion, is HA primary node placement. Many have limited their cluster sizes because of hardware and HA primary node constraints. As is hopefully known (if not, be ashamed), HA has a maximum of 5 primary nodes in a cluster, and a primary is required for restarts to take place. In large clusters the chance of losing all primaries also increases if the placement of the hosts is not taken into account. The general consensus usually is: keep your cluster limited to 8 hosts and spread them across two racks or chassis, so that each rack always has at least a single primary node to restart VMs. But why would you limit yourself to 8? Why, if you just bought 48 new blades, would you create 6 clusters of 8 hosts instead of 3 clusters of 16 hosts? By simply layering your design you can mitigate all risks associated with primary node placement while benefiting from additional DRS placement options. (Do note that if you “only” have two chassis, your options are limited.)

Which brings us to another thing I wanted to discuss… Scott’s argument against increased DRS placement options was that hundreds of VMs in an 8-host cluster already provide many placement options. Indeed, you will have many load-balancing options in an 8-host cluster, but is it enough? In the field I also see a lot of DRS rules. DRS rules restrict the DRS load-balancing algorithm when it is looking for suitable options, so more opportunities will more than likely result in a better-balanced cluster. Heck, I have even seen cluster imbalances that could not be resolved due to DRS rules in a five-host cluster with 70 VMs.
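
If you are curious how rule-bound a cluster already is, a quick inventory is easy to pull. Another minimal pyVmomi sketch of my own, with placeholder connection details:

import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# List the affinity/anti-affinity rules per cluster; every rule narrows the set of
# moves the DRS load-balancing algorithm is allowed to consider.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
for cluster in view.view:
    rules = cluster.configurationEx.rule or []
    print(f"{cluster.name}: {len(rules)} DRS rule(s)")
    for rule in rules:
        print(f"  {type(rule).__name__}: {rule.name} (enabled={rule.enabled})")
view.Destroy()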

Don’t get me wrong, I am not advocating going big… but neither am I advocating a limited cluster size for reasons that might not even apply to your environment. Write down the requirements of your customer or your environment and don’t limit yourself to design considerations around compute alone. Think about storage, networking, update management, configuration maximums, DRS and DPM, HA, and resource and operational overhead.

HA, the missing link…

Duncan Epping · Oct 20, 2010 ·

One of the things that has always been missing from VMware’s High Availability solution stack is application awareness. As I explained in one of my earlier posts, this is something VMware is actively working on. Instead of creating a full application clustering solution, VMware decided to extend “VM Monitoring” and created an API to enable application-level resiliency.
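
To give an idea of what such an agent does, here is a rough in-guest heartbeat loop. This is purely my own sketch, not Symantec’s implementation; it assumes the vmware-appmonitor utility from the vSphere Guest SDK is available inside the guest and that VM and Application Monitoring are enabled on the cluster.

import subprocess
import time

APPMON = "vmware-appmonitor"   # assumed Guest SDK utility; adjust the path for your install

def app_is_healthy() -> bool:
    # Placeholder health check: replace with a real probe of your application and its services.
    return True

# Tell HA to start expecting application heartbeats from this VM.
subprocess.run([APPMON, "enable"], check=True)

while app_is_healthy():
    subprocess.run([APPMON, "markActive"], check=True)   # send a heartbeat
    time.sleep(15)                                       # stay well inside the heartbeat window

# Once heartbeats stop, HA's Application Monitoring treats the app as failed and resets
# the VM; for planned maintenance you would call "vmware-appmonitor disable" instead.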

At VMworld I briefly sat down with Tom Stephens, who is part of the Technical Marketing team as an expert on HA and, of course, the recently introduced App Monitoring. Tom explained to me what App Monitoring enables our partners to do, using Symantec as the example. Symantec monitors the application and all its associated services and ensures appropriate action is taken depending on the type of failure. Now keep in mind, it is still a single node, so in case of OS maintenance there will be a short downtime. However, I personally feel that this does bridge a gap; it could add that extra 9 and that extra level of assurance your customer needs for a tier-1 app.

Not only will it react to a failover, but it also ensures, for instance, that all services are stopped and started in the correct order if and when needed. Now think about that for a second: you are doing maintenance during the weekend and need to reboot some of the application servers, which are owned by someone else. This feature would enable you to reboot the machine and guarantee that the application is started correctly, as it knows the dependencies!

Tom recently published a great article about this new HA functionality and its key benefits; make sure you read it on the VMware Uptime blog!
