
Yellow Bricks

by Duncan Epping



LUN sizes in the new Storage World

Duncan Epping · Jul 11, 2012 ·

I am on holiday and catching up on some articles I had saved that I still wanted to read. I stumbled on an article by Ravi Venkat (Pure Storage) about sizing VMFS volumes on flash-based arrays. I must say that Ravi makes a couple of excellent arguments around the operational and architectural simplicity of these new types of arrays, and I do strongly believe that they indeed make the world a lot easier.

IOPS requirements? Indeed, forget about them when you have thousands at your disposal… And your RAID penalty doesn't really matter anymore either, especially as many of these new storage arrays also introduce new types of RAID levels. Great, right?

Yes, in most cases this is great news! One thing to watch out for, though, is the failure domain. If you create a large 32TB volume hosting hundreds of virtual machines, the impact would be huge if that volume for whatever reason blows up. Not only the impact of the failure itself, but also the RTO ("Recovery Time Objective") would be substantially longer. Yes, the array might be lightning fast, but you will be, and probably already are, limited by your backup solution. How long will it take to restore those 32TB? Have you ever done the math?

It isn't too complicated to do the math, but I would strongly suggest testing it! When I was an admin we had a clearly defined RTO and RPO. We tested these every once in a while, and even though we were already using tapeless backups, it still took a long time to restore 2TB.
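To get a feel for the numbers, here is a minimal back-of-the-envelope calculation in Python. The 400 MB/s sustained restore rate is purely an assumption for illustration; plug in whatever your own backup solution actually delivers, and verify with a real test restore:

```python
# Back-of-the-envelope restore-time estimate for a large VMFS datastore.
# The throughput figure used below is an assumption, not a benchmark.

def restore_hours(datastore_tb: float, restore_mb_per_sec: float) -> float:
    """Hours needed to restore `datastore_tb` at a sustained restore rate."""
    total_mb = datastore_tb * 1024 * 1024  # TB -> MB
    return total_mb / restore_mb_per_sec / 3600  # MB / (MB/s) = seconds -> hours

# A 32TB volume at an assumed 400 MB/s sustained restore rate:
print(f"{restore_hours(32, 400):.1f} hours")  # roughly a full day
```

Even with an optimistic sustained rate, a full restore of a 32TB failure domain is measured in days rather than minutes, which is exactly why the math deserves testing.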

Nevertheless, I do feel that Ravi points out the "hidden value" of these types of storage architectures. Definitely something to take into account when you are looking for new storage… I am wondering how many of you are already using flash-based solutions, and how you do your sizing.

Maximum amount of FT virtual machines per host?

Duncan Epping · Jun 29, 2012 ·

There was a discussion yesterday on our Socialcast system. The question was what the maximum number of FT virtual machines is and what dictates it. Of course there are many things that constrain FT (memory reservations, bandwidth, etc.), but the one thing that stands out, and that not many realize, is that the number of FT virtual machines per host is limited to 4 by default.

This is currently controlled by a vSphere HA advanced setting called "das.maxftvmsperhost". By default this setting is configured to 4. This HA advanced setting (used in combination with vSphere DRS) defines the maximum number of FT virtual machines, primaries, secondaries, or a combination of both, that can run on a single host. So if for whatever reason you want a maximum of 6, you will need to add this advanced setting with a value of 6.

I do not recommend changing this, however; FT is a fairly heavy process and in most environments 4 is the recommended value.
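As a quick back-of-the-envelope sketch (not an official formula), the cluster-wide ceiling follows from the per-host limit: every FT-protected virtual machine occupies a slot on two different hosts (primary plus secondary), and both count toward "das.maxftvmsperhost":

```python
# Rough cluster-wide FT capacity given the per-host limit.
# Assumption: each FT VM consumes one slot on two hosts (primary + secondary),
# so protected VMs <= (hosts * per-host limit) / 2.

def max_ft_vms(num_hosts: int, max_ft_per_host: int = 4) -> int:
    """Upper bound on FT-protected VMs in a cluster; default limit is 4."""
    return (num_hosts * max_ft_per_host) // 2

print(max_ft_vms(4))  # a 4-host cluster tops out at 8 FT-protected VMs
```

This is an upper bound only; in practice memory reservations, FT logging bandwidth, and placement constraints will usually bite first.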

VMworld 2012 here I come

Duncan Epping · Jun 27, 2012 ·

I just got the news that two of my VMworld sessions have been accepted. I wanted to share with you which ones so you can keep track (if you want):

  • BCO1159 – Architecting and Operating a vSphere Metro Storage Cluster by Lee Dilworth and Duncan Epping
    In this session Lee Dilworth and Duncan Epping will discuss the design and operational considerations for vSphere Metro Storage Cluster environments, also commonly referred to as stretched cluster environments. Best practices around implementation and design will be shared. Various failure scenarios which can occur in a stretched storage environment are discussed in depth, including how vSphere 5.x responds to these failures. We will cover the implications for your vSphere HA, DRS and Storage DRS configuration and provide recommendations on how to increase availability and simplify operations!
  • VSP1504 – Ask the Expert vBloggers with Rick Scherer, Frank Denneman, Chad Sakac, Scott Lowe and Duncan Epping
    Back by popular demand: the Ask the Expert vBloggers panel session. Show up and ask any question you like to a panel consisting of well-known community members! This was one of the highest-rated sessions last year, and with people like Frank, Rick, Scott and Chad sitting next to me I know it is going to be awesome again. Let's just hope Rick brings his buzzer again so he can buzz Chad when he starts preaching again 🙂

In a couple of weeks, when all sessions are listed, I will also create a nice "Top 20 VMworld Sessions" article again, but for now I want to thank everyone who voted, and I hope to see all of you at VMworld.

Enabling Aero-Glass on Windows 2008 R2 for RDP

Duncan Epping · Jun 26, 2012 ·

I had to enable Aero-Glass this week while recording a demo, and as I found the various procedures on the internet too complex, I created my own. This is what I had to do to enable Aero-Glass for RDP in Windows 2008 R2:

  • Open the “Server Manager”
  • Click on “Add roles”
    • Enable “Remote Desktop Session Host”
    • Enable “Desktop Composition” in “Client Experience”
  • Reboot the server as required
  • RDP back in to the server
  • Right click the desktop and click “Personalize”
  • Select the “Windows 7 Aero theme”

That is it. Yes, I know, just a couple of steps, but as it is something I don't do daily I figured I would document it. All credit for this info goes to MSDN.

HA Admission Control the basics – Part 2/2

Duncan Epping · Jun 20, 2012 ·

In part one I described what HA Admission Control is; in this second part I will explain what your options are when admission control is enabled. Currently there are three admission control policies:

  1. Host failures cluster tolerates
  2. Percentage of cluster resources reserved as failover spare capacity
  3. Specify a failover host

Each of these works in a slightly different way. Let's start with "Specify a failover host" as it is the simplest one to explain. This admission control policy allows you to set aside one host that will only be used in case a failover needs to occur. This means that even if your cluster is overloaded, DRS will not use it. In my opinion there aren't many use cases for it, and unless you have very specific requirements I would avoid using it.

The most difficult one to explain is "Host failures cluster tolerates", but I am going to try to keep it simple. This admission control policy takes the worst-case scenario into account, and only the worst-case scenario, and it does this by using "slots". A slot is comprised of two components:

  1. Memory
  2. CPU

For memory it will take the largest reservation on any powered-on virtual machine in your cluster, plus the memory overhead for that virtual machine. So if you have one virtual machine with 24GB of memory provisioned, of which 10GB is reserved, then the slot size for memory is ~10GB (reservation + memory overhead).

For CPU it will take the largest reservation on any powered-on virtual machine in your cluster, or it will use a default of 32MHz (in vSphere 5.0; pre-5.0 it was 256MHz) for the CPU slot size. If you have a virtual machine with 8 vCPUs assigned and a 2GHz reservation, then the slot size will be 2GHz for CPU.

HA admission control will look at the total amount of resources and see how many "memory slots" there are by dividing the total amount of memory by the "memory slot size". It will do the same for CPU, and it will calculate this per host. From the total number of available memory and CPU slots it will again take the worst case, so if you have 80 memory slots and 120 CPU slots, then you can power on 80 VMs… well, almost, because the number of slots of the largest host is also subtracted. Meaning that if you have 5 hosts, each with 10 slots for memory and CPU, instead of having 50 slots available in total you will end up with 40.

Simple, right? So remember: reservations –> slot size –> worst case. Yes, a single large reservation could severely skew this algorithm!
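The slot math above can be sketched in a few lines of Python. This is a simplified model for illustration only (the host and VM numbers are made up, and real HA also honors the configured number of host failures to tolerate), not the actual HA implementation:

```python
# Sketch of the "Host failures cluster tolerates" slot calculation:
# slot size = largest reservation across powered-on VMs (+ overhead for
# memory); CPU falls back to a 32MHz default when nothing larger is reserved.

def slot_counts(hosts, vms, cpu_default_mhz=32):
    """hosts: list of (cpu_mhz, mem_mb) capacities;
    vms: list of (cpu_res_mhz, mem_res_mb, mem_overhead_mb) per powered-on VM."""
    cpu_slot = max([cpu for cpu, _, _ in vms] + [cpu_default_mhz])
    mem_slot = max(mem + ovh for _, mem, ovh in vms)
    # Per host, the usable slot count is the worst of the two dimensions.
    per_host = [min(cpu // cpu_slot, mem // mem_slot) for cpu, mem in hosts]
    # Worst case: subtract the largest host's slots to cover one host failure.
    return sum(per_host) - max(per_host)

# Five identical hosts (20GHz, 64GB); one VM with a 2GHz / 10GB reservation
# and ~200MB overhead dominates the slot size.
hosts = [(20_000, 65_536)] * 5
vms = [(2_000, 10_240, 200), (0, 0, 100)]
print(slot_counts(hosts, vms))  # 24 slots left for powering on VMs
```

Note how the single 2GHz/10GB reservation sets the slot size for every VM in the cluster, which is exactly the skewing risk mentioned above.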

So now what? Well, this is where the third admission control policy comes into play: "Percentage of cluster resources reserved as failover spare capacity". This is not a difficult one to explain, but again it is misunderstood by many. First of all, HA will add up all available resources to see how much it has in total. It will then subtract the percentage specified, for both memory and CPU. Next, HA will calculate how many resources are currently reserved by powered-on virtual machines, again for both memory and CPU. For CPU, a default of 32MHz will be used for those virtual machines that do not have a reservation larger than 32MHz. For memory, a default of 0MB + memory overhead will be used if there is no reservation set; if a memory reservation is set, it will use the reservation + memory overhead.

That is it. The percentage-based policy looks at powered-on virtual machines and their reservations, or uses the defaults mentioned above. Nothing more than that. No, it does not look at resource usage, consumption, active memory, etc. It looks at reserved resources. Remember that!
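The percentage-based check can be sketched the same way. Again, this is a simplified illustration with made-up numbers, not the actual HA code:

```python
# Sketch of the percentage-based admission control check: compare reserved
# (not consumed!) resources of powered-on VMs against cluster capacity
# minus the configured failover percentage.

def admission_allowed(total_cpu_mhz, total_mem_mb, reserved_pct,
                      vms, cpu_default_mhz=32):
    """vms: list of (cpu_res_mhz, mem_res_mb, mem_overhead_mb) per powered-on VM."""
    avail_cpu = total_cpu_mhz * (1 - reserved_pct / 100)
    avail_mem = total_mem_mb * (1 - reserved_pct / 100)
    used_cpu = sum(max(cpu, cpu_default_mhz) for cpu, _, _ in vms)
    used_mem = sum(mem + ovh for _, mem, ovh in vms)  # 0MB + overhead if unreserved
    return used_cpu <= avail_cpu and used_mem <= avail_mem

# A 50GHz / 256GB cluster with 25% reserved for failover and a few VMs,
# one without any reservation at all:
vms = [(2_000, 4_096, 150), (0, 0, 100), (500, 1_024, 120)]
print(admission_allowed(50_000, 262_144, 25, vms))  # True
```

Because only reservations (plus overhead) count, clusters full of unreserved VMs pass this check easily, which is both the flexibility and the caveat of this policy.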

What do I recommend? I always recommend the percentage-based admission control policy, as it is the most flexible. It does admission control on a per-virtual-machine reservation basis, without the risk of skewing the numbers.

If you have any questions around this, please don't hesitate to ask.


About the Author

Duncan Epping is a Chief Technologist and Distinguished Engineering Architect at Broadcom. Besides writing on Yellow-Bricks, Duncan is the co-author of the vSAN Deep Dive and the vSphere Clustering Deep Dive book series. Duncan is also the host of the Unexplored Territory Podcast.


Copyright Yellow-Bricks.com © 2026