
Yellow Bricks

by Duncan Epping



Memory states

Duncan Epping · Aug 17, 2010 ·

I was just browsing the VSI nodes/proc nodes and noticed the following:

Free memory state thresholds {
soft:64 pct
hard:32 pct
low:16 pct
}

As explained in Frank’s excellent article on memory reservations, ESX/ESXi uses memory states to determine which memory reclamation technique to use. The available techniques are TPS (transparent page sharing), ballooning, and swapping. Of course you will always want to avoid ballooning and swapping, but that is not the point here. The point is that, as far as I am aware, the thresholds for those states have always been:

  • High – 6%
  • Soft – 4%
  • Hard – 2%
  • Low – 1%

This is also what our documentation states. Now if you do the math you will notice that the vsish values are expressed relative to the 6% high state: 64% of 6% is roughly 4% (3.84% to be exact), 32% is roughly 2%, and 16% is roughly 1%. Although it doesn’t seem substantial, it is something I wanted to document, just for completeness’ sake.
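
For completeness, here is a minimal sketch in Python that reproduces the math, with the 6% high-state value taken from the documented list above:

HIGH_STATE_PCT = 6.0  # the documented high state threshold (% of memory free)
vsish_pct = {"soft": 64, "hard": 32, "low": 16}  # values from the output above

for state, pct in vsish_pct.items():
    derived = HIGH_STATE_PCT * pct / 100.0
    print(f"{state}: {pct}% of {HIGH_STATE_PCT:g}% = {derived:.2f}% (docs: ~{round(derived)}%)")

# soft: 64% of 6% = 3.84% (docs: ~4%)
# hard: 32% of 6% = 1.92% (docs: ~2%)
# low: 16% of 6% = 0.96% (docs: ~1%)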

vCenter Resiliency?

Duncan Epping · Aug 12, 2010 ·

After the whole discussion around MSCS’ed vCenter support, VMware Technical Marketing reached out to me. Let’s be clear: the intention of this article is not to change support. The intention is to get an idea of how many of you would be interested in seeing a whitepaper on vCenter resiliency with MSCS/VCS, which could be supported on a best-effort basis by GSS.

I was asked to figure out what would interest you most. I appreciate any comments around this, but would specifically love answers to the following:

  • Based on which technology would you prefer to see a whitepaper? MSCS or Veritas Clustering?
  • Would you be looking for a total solution, including VMware Update Manager and Orchestrator, or purely vCenter Server? If the total package, why?
  • Are there any other components that would need to be included in a whitepaper?

Again, any help / answer / comment is very much appreciated.

HA Cli

Duncan Epping · Aug 3, 2010 ·

I was just playing around with the HA Cli and noticed that when you issue an “ln” (listNodes) command, the failover coordinator (aka master primary) is also listed. I had never noticed this before, but I don’t have a pre-vSphere 4.1 environment to check whether this behavior existed before 4.1. If you want to test it in your own environment, simply run “/opt/vmware/aam/bin/Cli” and issue the “ln” command as shown in the screenshot below:

I also tested demoting a node, just for fun. In this case I demoted the node “esxi1” from primary to secondary:

And of course I promoted it again to primary:

** Disclaimer: This article contains references to the words master and/or slave. I recognize these as exclusionary words. The words are used in this article for consistency because it’s currently the words that appear in the software, in the UI, and in the log files. When the software is updated to remove the words, this article will be updated to be in alignment. **

Storage Migrations?

Duncan Epping · Jul 28, 2010 ·

On an internal mailing list we had a very useful discussion around storage migrations, for when a SAN is replaced or a migration needs to take place to a different set of disks. Many customers face this at some point. The question usually is: what is the best approach, SAN replication or Storage vMotion? Both have their pros and cons, I guess.

SAN Replication:

  • Can utilize array-based copy mechanisms for fast replication (+)
  • Per-LUN migration, high level of concurrency (+)
  • Old volumes still available (+)
  • Need to resignature or mount the volume again (-)
    • A resignature also means you will need to re-register the VMs! (-)
  • Downtime for the VMs during the cutover (-)

Storage vMotion:

  • No downtime for your VMs (+)
  • Fast Storage vMotion when your array supports VAAI (+)
    • If your array doesn’t support VAAI, migrations can be slow (-)
    • Induced load on hosts and network if VAAI isn’t supported (-)
    • VAAI offload only works intra-array, not across arrays (-)
  • No resignaturing or re-registering needed (+)
  • Per-VM migration (-)
    • Limited concurrency (2 per host, 8 per VMFS volume) (-)

As you can see, both approaches have their pros and cons, and it boils down to the following questions:

How much downtime can you afford?
How much time do you have for the migration?
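
To get a feel for the second question, here is a minimal back-of-the-envelope sketch. Every input number (VM count, sizes, throughput, host and volume counts) is an assumption for illustration; only the concurrency limits of 2 per host and 8 per VMFS volume come from the list above.

import math

# Rough estimate of wall-clock time for a Storage vMotion based migration.
# All inputs are illustrative assumptions, not measurements.
num_vms = 200            # VMs to migrate (assumed)
avg_vm_size_gb = 100     # average disk footprint per VM (assumed)
throughput_mb_s = 200    # assumed copy rate per concurrent migration
hosts = 8                # hosts that can drive migrations (assumed)
volumes = 4              # source VMFS volumes involved (assumed)
per_host_limit = 2       # concurrent Storage vMotions per host
per_volume_limit = 8     # concurrent migrations per VMFS volume

# Effective concurrency is capped by both the host and the volume limits.
concurrency = min(hosts * per_host_limit, volumes * per_volume_limit)

per_vm_hours = (avg_vm_size_gb * 1024) / throughput_mb_s / 3600
waves = math.ceil(num_vms / concurrency)
print(f"effective concurrency: {concurrency}")
print(f"estimated wall-clock time: {waves * per_vm_hours:.1f} hours")

With these assumptions the migration finishes in under two hours, but halve the throughput or the concurrency and the window doubles, which is exactly why the time question matters.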

HA/DRS and Flattened Shares

Duncan Epping · Jul 22, 2010 ·

A week ago I already touched on this topic, but I wanted to get a better understanding for myself of what could go wrong in these situations and how vSphere 4.1 solves this issue.

Pre-vSphere 4.1, an issue could arise when custom shares had been set on a virtual machine. When HA fails over a virtual machine, it powers the virtual machine on in the Root Resource Pool. However, the virtual machine’s shares were scaled for its place in the resource pool hierarchy, not for the Root Resource Pool. This could cause the virtual machine to receive either too many or too few resources relative to its entitlement.

A scenario where this can occur would be the following:

VM1 has 1000 shares and Resource Pool A has 2000 shares. Resource Pool A contains two VMs, VM2 and VM3, each with a custom shares value of 10000; within the pool each therefore receives 50% of the pool’s 2000 shares.

When the host fails, both VM2 and VM3 end up on the same level as VM1. However, because a custom shares value of 10000 was specified on both VM2 and VM3, they will completely blow away VM1 in times of contention. This is depicted in the following diagram:

This situation would persist until the next invocation of DRS re-parented the virtual machine to its original Resource Pool. To address this issue, as of vSphere 4.1 DRS flattens the virtual machine’s shares and limits before failover. This flattening process ensures that the VM will receive the resources it would have received if it had been failed over into the correct Resource Pool. This scenario is depicted in the following diagram; note that both VM2 and VM3 are placed under the Root Resource Pool with a shares value of 1000.

Of course when DRS is invoked, both VM2 and VM3 will be re-parented under Resource Pool A and will again receive the shares they were originally assigned. I hope this makes it a bit clearer what this “flattened shares” mechanism actually does.
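
To make the effect concrete, here is a minimal sketch that computes each VM’s proportional entitlement at the root level during contention, using the share values from the scenario above (the flattened value of 1000 per VM comes from the diagram description). This illustrates proportional shares only; it is not DRS’s actual algorithm.

def entitlements(shares):
    # Each VM's share of the contended resource, as a percentage.
    total = sum(shares.values())
    return {vm: round(100 * s / total, 1) for vm, s in shares.items()}

# Pre-vSphere 4.1: HA powers VM2/VM3 on at the root with their raw custom shares.
print(entitlements({"VM1": 1000, "VM2": 10000, "VM3": 10000}))
# -> {'VM1': 4.8, 'VM2': 47.6, 'VM3': 47.6}  VM1 is blown away

# vSphere 4.1: shares are flattened before failover (1000 each, per the diagram).
print(entitlements({"VM1": 1000, "VM2": 1000, "VM3": 1000}))
# -> {'VM1': 33.3, 'VM2': 33.3, 'VM3': 33.3}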

