
Yellow Bricks

by Duncan Epping


RE: Maximum Hosts Per Cluster (Scott Drummonds)

Duncan Epping · Nov 29, 2010 ·

I love blogging because of the discussions you sometimes get into. One of the bloggers I highly respect and closely follow is EMC’s vSpecialist Scott Drummonds (former VMware performance guru). Scott posted a question on his blog about what the size of a cluster should be. He discussed this with Dave Korsunsky and Dan Anderson, both VMware employees, and more or less came to the conclusion that 10 is probably a good number.

So, have I given a recommendation?  I am not sure.  If anything, I feel that Dave, Dan and I believe that a minimum cluster size should be set to guarantee that the CPU utilization target, and not the HA failover capacity, defines the number of wasted resources.  This means a minimum cluster of something like four or five hosts.  While none of us claims a specific problem will occur with very large clusters, we cannot imagine the value of a 32-host cluster.  So, we think the right cluster size is somewhere shy of 10.

And of course they have a whole bunch of arguments for both large (12+) and small (8 or fewer) clusters, which I have summarized below for your convenience:

  • Pro Large: DRS efficiency.  This was my primary claim in favor of 32-host clusters.  My reasoning is simple: with more hosts in the cluster there are more CPU and memory resource holes into which DRS can place running virtual machines to optimize the cluster’s performance.  The more hosts, the more options to the scheduler.
  • Pro Small: DRS does not make scheduling decisions based on the performance characteristics of the server, so a new, powerful server in a cluster is just as likely to receive a mission-critical virtual machine as an older, slower host.  This would be unfortunate if a cluster contained servers with radically different (although EVC-compatible) CPUs, like the Intel Xeon 5400 and Xeon 5500 series.
  • Pro Small: By putting your mission-critical applications in a cluster of their own your “server huggers” will sleep better at night.  They will be able to keep one eye on the iron that can make or break their job.
  • Pro Small: Cumbersome nature of their change control.  Clusters have to be managed to a consistent state and the complexity of this process is dependent on the number of items being managed.  A very large cluster will present unique challenges when managing change.
  • Pro Small: To size a 4+1 cluster to 80% utilization after host failure, you will want to restrict CPU usage in the five hosts to 64%.  Going to a 5+1 cluster results in a pre-failure CPU utilization target of 66%.  The increases slowly approach 80% as the clusters get larger and larger.  But, you can see that the incremental resource utilization improvement is never more than 2%.  So, growing a cluster slightly provides very little value in terms of resource utilization.
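The utilization math in that last bullet generalizes neatly: to survive one host failure while staying at a post-failure utilization ceiling of 80%, an N+1 cluster must cap pre-failure CPU utilization at 80% × N/(N+1). A quick sketch (the 80% target and the cluster sizes are just the examples from the bullet above, not a recommendation):

```python
# Pre-failure CPU utilization target for an "N+1" cluster, given a
# desired post-failure utilization ceiling (80% in Scott's example).
def pre_failure_target(n_hosts: int, post_failure_target: float = 0.80) -> float:
    """n_hosts is the total host count, including the +1 of spare capacity."""
    # After one host fails, the load of n_hosts hosts must fit on n_hosts - 1.
    return post_failure_target * (n_hosts - 1) / n_hosts

for n in (5, 6, 8, 16, 32):  # 4+1, 5+1, 7+1, 15+1, 31+1
    print(f"{n - 1}+1 cluster: cap utilization at {pre_failure_target(n):.1%}")
```

Running this shows the diminishing returns Scott describes: 64.0% for 4+1, 66.7% for 5+1, and only slowly creeping toward 80% as the cluster grows.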

It is probably an endless debate, and the arguments for both “Pro Large” and “Pro Small” are all very valid, although I seriously disagree with their conclusion of not seeing the value of a 32-host cluster. As always, it fully depends. On what, you might say; why would you ever want a 32-host cluster? Well, for instance, when you are deploying vCloud Director. Clusters are currently the boundary for your vDC, and who wants to give a customer 6 vDCs instead of just 1 because the cluster size was limited to 6 hosts instead of leaving the option open to go to the max? This might just be an exception and nowhere near reality for some of you, but I wanted to use it as an example to show that you will need to take many factors into account.
Now I am not saying you should go big, but at least leave the option open.

One of the arguments I do want to debate is the change control argument. Again, this used to be valid in a lot of Enterprise environments where ESX was used. I am deliberately using “ESX” and “Enterprise” here, as the reality is that many companies don’t even have a change control process in place. (I worked for a few large insurance companies which didn’t!) On top of that, there is a large discrepancy in the amount of work associated with patching ESX versus ESXi. I have spent many weekends upgrading ESX, but today literally spent minutes upgrading ESXi. The impact and risks associated with patching have most certainly decreased with ESXi in combination with VUM and its staging options. Many organizations already treat ESXi as an appliance, and with stateless ESXi and Auto Deploy around the corner, I expect that notion will only grow to become a best practice.

A couple of arguments that I have often seen being used to restrict the size of a cluster are the following:

  • HA limits (a different maximum number of VMs per host when clusters are larger than 8 hosts)
  • SCSI Reservation Conflicts
  • HA Primary nodes

Let me start by saying that for every new design you create, challenge your design considerations and best practices: are they still valid?

The first one is obvious, as most of you know by now that there is no such thing anymore as an 8-host boundary with HA. The second one needs some explanation. Around the VI3 time frame, cluster sizes were often limited because of possible storage performance issues. These alleged issues were mainly blamed on SCSI reservation conflicts, caused by having many VMs on a single LUN in a large cluster. Whenever a metadata update was required, the LUN would be locked by a host, and this could increase overall latency. To avoid this, people would keep the number of VMs per VMFS volume low (10 to 15) and keep the number of VMFS volumes per cluster low, resulting in a fairly low consolidation factor. But hey, 10:1 beats physical.

Those arguments used to be valid; however, things have changed. vSphere 4.1 brought us VAAI, which is a serious game changer in terms of SCSI reservations. I understand that for many storage platforms VAAI is currently not supported… However, the original SCSI reservation mechanism has also improved substantially over time (optimistic locking), which in my opinion reduced the need to have many small LUNs, a design that would eventually run into the maximum number of LUNs per host. So with VAAI or optimistic locking, and of course NFS, the argument for small clusters is not really valid anymore. (Yes, there are exceptions.)

The one design consideration that is missing, and which is crucial in my opinion, is HA node placement. Many have limited their cluster sizes because of hardware and HA primary node constraints. As hopefully known (if not, be ashamed), HA has a maximum of 5 primary nodes in a cluster, and a primary is required for restarts to take place. In large clusters, the chances of losing all primaries increase if the placement of the hosts is not taken into account. The general consensus is usually: keep your cluster limited to 8 hosts and spread them across two racks or chassis, so that each rack always has at least a single primary node to restart VMs.

But why would you limit yourself to 8? Why, if you just bought 48 new blades, would you create 6 clusters of 8 hosts instead of 3 clusters of 16 hosts? By simply layering your design you can mitigate the risks associated with primary node placement while benefiting from additional DRS placement options. (Do note that if you “only” have two chassis, your options are limited.)

Which brings us to another thing I wanted to discuss. Scott’s argument against increased DRS placement options was that hundreds of VMs in an 8-host cluster already lead to many placement options. Indeed you will have many load-balancing options in an 8-host cluster, but is it enough? In the field I also see a lot of DRS rules. DRS rules restrict the DRS load-balancing algorithm when it looks for suitable options, so more opportunities will more than likely result in a better balanced cluster. Heck, I have even seen cluster imbalances which could not be resolved due to DRS rules in a five-host cluster with 70 VMs.
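To put the primary-placement risk in numbers: if the 5 primary nodes were picked without regard to physical placement, the chance that all of them land in a single rack or chassis can be estimated with basic combinatorics. A rough sketch; the host and rack counts are hypothetical examples, and real HA primary election is not a uniform random draw, so treat this purely as an illustration:

```python
from math import comb

def p_all_primaries_in_one_rack(racks: int, hosts_per_rack: int,
                                primaries: int = 5) -> float:
    """Probability that all HA primaries end up in a single rack, assuming
    primaries are chosen uniformly at random across all hosts."""
    total_hosts = racks * hosts_per_rack
    # Ways to pick all primaries inside one particular rack, times the
    # number of racks, divided by all possible primary selections.
    return racks * comb(hosts_per_rack, primaries) / comb(total_hosts, primaries)

# 16 hosts in 2 racks vs. 32 hosts in 4 racks
print(f"2 racks x 8 hosts: {p_all_primaries_in_one_rack(2, 8):.2%}")
print(f"4 racks x 8 hosts: {p_all_primaries_in_one_rack(4, 8):.2%}")
```

Under these toy assumptions, the chance drops from roughly 2.6% with two racks of 8 to roughly 0.1% with four racks of 8, which is exactly why layering a larger cluster across more failure domains can be safer than capping it at 8 hosts.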

Don’t get me wrong,  I am not advocating to go big…. but neither am I advocating to have a limited cluster size for reasons that might not even apply to your environment. Write down the requirements of your customer or your environment and don’t limit yourself to design considerations around Compute alone. Think about storage, networking, update management, max config limits, DRS&DPM, HA, resource and operational overhead.

You don’t need any brains to listen to music pt III

Duncan Epping · Nov 28, 2010 ·

It is one of those days again… It is time to blog about something different than VMware, virtualization, cloud or storage… Music. Over the last couple of months there are a couple of songs which I just can’t stop listening to.

The first song is Parade Of the Dead by the almighty Black Label Society. I realize that many of you might not appreciate Black Label Society but this is one of those tracks that can get you pumped up for everything. Feeling down? Going for a 10K run? Need to lose some of that frustration? This song literally solves everything, well at least in my case it does. The vocals combined with bonecrushing riffs and those drums just keep on marching on… you will know what I mean when you listen to the track.

Again, I fully understand that some might appreciate music with less dB violence. The following track, Munich, has literally been one of my favorite tracks over the past years, and it is by one of my favorite bands as well. Editors made a huge impression on me when I saw them live, and still every album and every song reminds me of that concert. I don’t really know why I like this song so much; I guess the fact that it is uptempo helps… plus it used to be one of my son’s favorite songs as well. A three-year-old (back then; he is now 8) Dutch kid singing a song in English and knowing every word does make an impression.

Configuring HA: Error while running health check script

Duncan Epping · Nov 24, 2010 ·

I was just cleaning up our Cloud Lab and noticed HA wasn’t enabled. I enabled it and immediately it threw the following error at me:

Error Message: Configuration Issues – HA agent on esx4.mgm.local in cluster ams-hadrs-01 in Lab2 has an error : Error while running health check script.

When experiencing HA configuration issues, there are a couple of steps I usually take to resolve them:

  • Click “reconfigure for VMware HA” and see if the issue is still there, if so:
    • Is DNS configured and does it actually work? If not, fix and reconfigure for HA.
    • Is the gateway reachable? If not, fix and reconfigure for HA.

This usually solves 75% of the issues. If that hasn’t fixed it, the next step I usually take is unloading the agent and restarting the management services. Although this is pretty rigorous, it is the fastest way of fixing HA issues.  In my case I am using ESXi, and this is what I needed to do to clean up the host:

  • Disable HA on the cluster
  • /opt/vmware/aam/VMware-aam-ha-uninstall.sh
  • /sbin/services.sh restart
  • Enable HA on the cluster

This solved the issue I had with HA.

Introducing voiceforvirtual.com

Duncan Epping · Nov 24, 2010 ·

At VMworld I met up with the guys presenting the Storage I/O Control session, Irfan Ahmad and Chethan Kumar. As many of you hopefully know, Irfan has always been active in the social media space (virtualscoop.org). Chethan, however, is “new” and just started his own blog.

Chethan is a senior member of the Performance Engineering team at VMware. He focuses on characterizing and troubleshooting the performance of enterprise applications (mostly databases) in virtual environments using VMware products. Chethan has also studied the performance characteristics of the VMware storage stack and was one of the people who produced this great whitepaper on Storage I/O Control. Chethan just released his first article, and I am sure many excellent articles will follow. Make sure you add him to your bookmarks/RSS reader.

Running Virtual Center Database in a Virtual Machine

I just completed an interesting project. For years, we at VMware believed that SQL Server databases run well when virtualized. We have illustrated this through several benchmark studies published as white papers. It was time for us to look at real applications. One such application that can be found in most vSphere-based virtual environments is the database component of the vCenter server (the brain behind a vSphere environment). Using the vCenter database as the application and the resource-intensive tasks of the vCenter database (implemented as stored procedures in SQL Server-based databases) as the load generator, I compared the performance of these resource-intensive tasks in a virtual machine (on a vSphere 4.1 host) to that on a native server.

vStorage APIs for Array Integration aka VAAI

Duncan Epping · Nov 23, 2010 ·

It seems that a lot of vendors are starting to update their firmware to support the vStorage APIs for Array Integration, also known as VAAI, which benefits virtualized workloads. Not only the vendors are showing interest; the bloggers are picking up on it as well. Hence I wanted to reiterate some of the excellent details out there and make sure everyone understands what VAAI brings. Although there are currently “only” three major improvements, they can and probably will make a huge difference:

  1. Hardware Offloaded Copy
    Up to 10x faster VM deployment, cloning, Storage vMotion, etc. VAAI offloads the copy task to the array, enabling the use of native storage-based mechanisms, which decreases deployment time and, equally important, reduces the amount of data flowing between the array and the server. Check this post by Bob Plankers and this one by Matt Liebowitz, which clearly demonstrate the power of hardware-offloaded copies (reducing cloning from 19 minutes to 6 minutes!).
  2. Write Same/Zero
    10x less I/O for common tasks. Take for instance a zero-out process: it typically sends the same SCSI command several times. With this option enabled, the command is sent once and repeated by the storage platform itself, reducing the utilization of the server while decreasing the time span of the action.
  3. Hardware Offloaded Locking
    SCSI reservation conflicts… How many times have I heard that during health checks, design reviews, and while troubleshooting performance-related issues. Well, VAAI solves those issues as well by offloading the locking mechanism to the array, also known as Atomic Test & Set (ATS). It will more than likely reduce latency in environments where thin-provisioned disks, linked clones, or even VMware-based snapshots are used. ATS removes the need to lock the full VMFS volume and instead locks a single block when an update needs to occur.
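The difference between the two locking approaches can be sketched in a few lines of code. This is purely a conceptual model, not a real VMFS implementation (the class and method names are made up for illustration): a SCSI reservation serializes every metadata update behind one volume-wide lock, while ATS locks only the block being updated.

```python
import threading

class VolumeWideLocking:
    """Old model: every metadata update reserves the whole LUN."""
    def __init__(self, num_blocks: int):
        self.blocks = [0] * num_blocks
        self.lun_lock = threading.Lock()  # one lock for the entire volume

    def update_block(self, i: int, value: int) -> None:
        with self.lun_lock:               # all hosts contend on this single lock
            self.blocks[i] = value

class AtomicTestAndSet:
    """ATS model: per-block locking, akin to compare-and-swap on a lock word."""
    def __init__(self, num_blocks: int):
        self.blocks = [0] * num_blocks
        self.block_locks = [threading.Lock() for _ in range(num_blocks)]

    def update_block(self, i: int, value: int) -> None:
        with self.block_locks[i]:         # only updates to the *same* block contend
            self.blocks[i] = value
```

Two hosts updating different blocks never contend in the ATS model, whereas in the volume-wide model every update queues behind the single LUN lock; that queueing is the latency the bullet above describes.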

One thing I wanted to point out here, which I haven’t seen mentioned yet, is that VAAI will actually allow you to have larger VMFS volumes. Now don’t get me wrong, I am not saying that you can go beyond 2TB-512b by enabling VAAI… My point is that with VAAI enabled you will reduce the “load” on the array and on the servers. I placed quotes around load as it will not reduce the load from a VM perspective. What I am trying to get at is that many people have limited the number of VMs per VMFS volume because of “SCSI reservation conflicts”. With VAAI this will change. Now you can keep your calculations simple and base your VMFS size on the number of eggs you are willing to have in a single basket and the sum of all VMs’ IOPS requirements.
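That sizing rule can be expressed as a small calculation. All the numbers here (per-VM IOPS, the datastore's IOPS budget, the failure-domain limit) are hypothetical placeholders; plug in your own array's figures:

```python
def vms_per_datastore(vm_iops: list[int], datastore_iops_budget: int,
                      max_eggs_in_basket: int) -> int:
    """How many of the given VMs fit on one datastore, taking whichever
    constraint bites first: the IOPS budget or the eggs-in-one-basket limit."""
    total, count = 0, 0
    for iops in vm_iops:
        if count >= max_eggs_in_basket or total + iops > datastore_iops_budget:
            break
        total += iops
        count += 1
    return count

# 30 VMs at 150 IOPS each, a 3000-IOPS datastore, at most 25 VMs per datastore:
print(vms_per_datastore([150] * 30, 3000, 25))  # IOPS budget bites first: 20 VMs
```

The point of the paragraph above is that without SCSI reservation conflicts in the picture, these two constraints are the only ones left in the calculation.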

After reading about all this goodness, I bet many of you want to use it straight away. Of course, your array will need to support it first. Tomi Hakala created a nice list of all storage platforms that are currently supported and those that will be supported soon, including a time frame. If your array is supported, this KB explains perfectly how to enable/disable it.

I started out by saying that there are currently only three major enhancements… which indeed means there is more coming in the future. Some of it I can’t discuss, and some I can, as it was already mentioned at VMworld. (If you have access to TA7121, watch it!) I can’t say when these will be available or in which release, but I think it is great to know more enhancements are being worked on.

  • Dead Space Reclamation
    Dead space is previously written blocks that are no longer used by the VM. Currently, in order to reclaim disk space (for instance when you’ve deleted a lot of files), you need to zero these blocks out with, for instance, sdelete and then Storage vMotion the VM. Dead Space Reclamation will enable the storage system to reclaim these dead blocks based on block liveness information.
  • Out-of-space conditions notifications
    This is very much an improvement for day-to-day operations. It will enable notification of possible “out-of-space” conditions both in the array vendor’s tools and within the vSphere client!

Must reads:

Chad Sakac – What does VAAI mean to you?
Bob Plankers – If you ever needed convincing about VAAI
AndreTheGiant – VAAI
VMware KB – VAAI FAQ
VMware Support Blog – VAAI changes the way storage is handled
Matt Liebowitz – Exploring the performance benefits of VAAI
Bas Raayman – What is VAAI, and how does it add spice to my life as a VMware admin?

