
Yellow Bricks

by Duncan Epping



Cluster Sizes – vSphere 5 style!?

Duncan Epping · Apr 10, 2012 ·

At the end of 2010 I wrote an article about cluster sizes… ever since, it has been a popular article, and I figured it was time to update it. vSphere 5 changed the game when it comes to sizing/scaling your clusters, and this is an excellent opportunity to emphasize that. The key take-away of my 2010 article was the following:

I am not advocating to go big…. but neither am I advocating to have a limited cluster size for reasons that might not even apply to your environment. Write down the requirements of your customer or your environment and don’t limit yourself to design considerations around Compute alone. Think about storage, networking, update management, max config limits, DRS & DPM, HA, resource and operational overhead.

We all know that HA used to be a constraint for your cluster size… However, those times are long gone. I still occasionally see people referring to old “max config limits” around the number of VMs per cluster when exceeding 8 hosts… This is not a concern anymore. I also still see people referring to the old max of 5 primary nodes… Again, not a concern anymore. Applying the 2010 article to vSphere 5, I guess we can come to the following conclusions:

  • HA does not limit the number of hosts in a cluster anymore! Using more hosts in a cluster results in less overhead. (N+1 for 8 hosts vs N+1 for 32 hosts)
  • DRS loves big clusters! More hosts equals more scheduling opportunities.
  • SCSI Locking? Hopefully all of you are using VAAI capable arrays by now… This should not be a concern. Even if you are not using VAAI, optimistic locking should have relieved this for almost all environments!
  • Max number of hosts accessing a file = 8! This is a constraint in environments using linked clones, like View.
  • Max config values in general still apply (256 LUNs, 1024 paths, 512 VMs per host, 3000 VMs per cluster).
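To put numbers on the HA overhead point above, here is a quick back-of-the-envelope sketch. It is my own arithmetic, not from the article, and it assumes identical hosts with capacity reserved for N+1 failover; the function name is just illustrative:

```python
# Rough illustration: with identical hosts and capacity reserved for
# N+1 failover, the reserved slice shrinks as the cluster grows.

def ha_reserved_fraction(hosts: int, tolerated_failures: int = 1) -> float:
    """Fraction of total cluster capacity set aside for HA failover."""
    return tolerated_failures / hosts

print(f"8 hosts,  N+1: {ha_reserved_fraction(8):.1%} reserved")   # 12.5%
print(f"32 hosts, N+1: {ha_reserved_fraction(32):.1%} reserved")  # 3.1%
```

In other words, N+1 on an 8-host cluster reserves 12.5% of your capacity, while N+1 on a 32-host cluster reserves only about 3.1%… which is exactly why bigger clusters mean less overhead.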

Once again, I am not advocating to scale-up or scale-out. I am merely showing that there are hardly any limiting factors left at this point in time. One of the few constraints that is still valid is the max of 8 hosts in a cluster when using linked clones. Or better said, a max of 8 hosts accessing a file concurrently. (Yes, we are working on fixing this…)

I would like to hear from you what cluster sizes you are using, and if you are constrained somehow… what those constraints are… chip in!

Fling: vBenchmark 1.0.1 just released

Duncan Epping · Apr 4, 2012 ·

An update to the recently released fling vBenchmark was just posted. This update includes some fixes and an often-requested feature… Here is what’s new/fixed in 1.0.1:

  • Added a checkbox to include or exclude vCenter license keys when submitting the data to the community repository
  • The application now listens on port 443 (https), requests to port 80 will be automatically redirected to 443
  • The appliance will now prompt you to change the root password at first logon
  • Fixed bugs that prevented some customers from proceeding to the dashboard when they have ESX 3.x hosts in their cluster or are using vCenter credentials that do not have access to the full inventory
  • vBenchmark application log is now written to the VM serial port. If you are using the VMX package, the serial port output will be redirected to a file named vBenchmark.log in the virtual machine folder. If you are importing an OVA or OVF, you need to manually add a serial port device and specify a filename.

Make sure to download the latest version of vBenchmark and try it out! If you don’t have a clue what it does, check out my introduction post here…

Update: VMware vCloud Director DR paper available in Kindle / iBooks format!

Duncan Epping · Mar 29, 2012 ·

I just received a note that the DR paper for vCloud Director is finally available in both epub and mobi format. So if you have an e-reader, make sure to download one of these formats, as they will render a lot better than a generic PDF!

Description: vCloud Director disaster recovery can be achieved through various scenarios and configurations. This case study focuses on a single scenario as a simple explanation of the concept, which can then easily be adapted and applied to other scenarios. In this case study it is shown how vSphere 5.0, vCloud Director 1.5 and Site Recovery Manager 5.0 can be implemented to enable recoverability after a disaster.

Download:
http://www.vmware.com/files/pdf/techpaper/vcloud-director-infrastructure-resiliency.pdf
http://www.vmware.com/files/pdf/techpaper/vcloud-director-infrastructure-resiliency.epub
http://www.vmware.com/files/pdf/techpaper/vcloud-director-infrastructure-resiliency.mobi

Slight change in “restart” behavior for HA with vSphere 5.0 Update 1

Duncan Epping · Mar 27, 2012 ·

Although this is a corner-case scenario, I did want to discuss it to make sure people are aware of this change. Prior to vSphere 5.0 Update 1, a virtual machine would be restarted by HA when the master detected that the state of the virtual machine had changed compared to the “protectedList” file. In other words, a master would filter the VMs it believed had failed before trying to restart any: it used the protection state it read from the protectedList, and if the master did not know the on-disk protection state for a VM, it would not try to restart it. Keep in mind that only one master can open the protectedList file in exclusive mode.

In Update 1 this logic has changed slightly. HA can now retrieve the state information either from the protectedList stored on the datastore or from vCenter Server. So now multiple masters could try to restart a VM. If one of those restarts were to fail, for instance because a “partition” does not have sufficient resources, the master in the other partition might be able to restart it. Although these scenarios are highly unlikely, this behavior change was introduced as a safety net!
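The decision change can be sketched roughly as follows. Note this is purely illustrative pseudologic: names like protected_list and vcenter_state are made up for this example and do not come from the actual HA (FDM) implementation.

```python
# Illustrative sketch of the pre-U1 vs U1 restart decision; all
# names are hypothetical, not from the real HA agent.
from dataclasses import dataclass, field

@dataclass
class Master:
    protected_list: dict = field(default_factory=dict)  # on-disk protectedList state
    vcenter_state: dict = field(default_factory=dict)   # state retrieved via vCenter Server

def should_restart_pre_u1(master: Master, vm: str) -> bool:
    # Pre-Update 1: only the on-disk protectedList counts, and only one
    # master can hold it open in exclusive mode. Unknown on-disk
    # protection state -> no restart attempt.
    return master.protected_list.get(vm) == "protected"

def should_restart_u1(master: Master, vm: str) -> bool:
    # Update 1: the protection state may also come from vCenter Server,
    # so a master in another partition can attempt the restart too.
    state = master.protected_list.get(vm) or master.vcenter_state.get(vm)
    return state == "protected"

# A master without access to the on-disk protectedList can still act in U1:
m = Master(vcenter_state={"vm1": "protected"})
print(should_restart_pre_u1(m, "vm1"))  # False
print(should_restart_u1(m, "vm1"))      # True
```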

 

** Disclaimer: This article contains references to the words master and/or slave. I recognize these as exclusionary words. The words are used in this article for consistency because it’s currently the words that appear in the software, in the UI, and in the log files. When the software is updated to remove the words, this article will be updated to be in alignment. **

Playing around with WSX

Duncan Epping · Mar 20, 2012 ·

I wanted to test WSX, which is part of the Tech Preview of VMware Workstation for Linux. WSX allows you to see your virtual machine’s desktop in a browser window. I installed Workstation for Linux on my Ubuntu 12.04 desktop; the process is fairly straightforward. This is what I had to do to get WSX running:

  • Download Workstation bundle
  • Install Workstation
    sudo chmod 755 VMware-Workstation-Full-e.x.p-646643.x86_64.bundle
    sudo ./VMware-Workstation-Full-e.x.p-646643.x86_64.bundle
  • Open a terminal and install Python 2.6:
    sudo apt-get install python2.6
  • When Python is installed, you can start the WSX server:
    /etc/init.d/vmware-wsx-server start
  • Now you can open a browser session to “localhost:8888” or “<ip-address-of-VM>:8888”
  • Login using your username/password
  • Click on “Home” and then on “Configuration”
  • Click “Add Server”
  • I added my vCenter Server 5.0 Update 1
  • Click the newly added server in the left pane
  • Enter your vCenter Server credentials and click login
  • Now you will see a list of VMs which you can access… (see screenshot below, this is what you will see in your browser window when you select a VM)

My next step was digging into a lean install for WSX, but I should have known better… William Lam posted it around the time I started looking into it. Thanks William :-). Again, I would recommend reading this article by the WSX developer. If you run into any issues, you can always check /var/log/vmware/vmware-wsx-server-<pid>.log.

