
Yellow Bricks

by Duncan Epping



vSphere HA setting Performance degradation VMs tolerate

Duncan Epping · Apr 8, 2026 · Leave a Comment

There was a question this week internally and I really had to start digging, as I have not looked at this in a loooong time. What does “Performance degradation VMs tolerate” do? And does this feature require admission control to be enabled or not?


I had to test this, as I barely ever play around with the HA settings these days. But, let’s first describe what this feature is for. I think the UI explains it fairly decently, but here’s my explanation from the vSphere Clustering Deep Dive:

This feature allows you to specify the performance degradation you are willing to incur if a failure happens. It is set to 100% by default, but it is our recommendation to consider changing the value. You can, for instance, change this to 25% or 50%.

Now, the requirement for this feature to work is to have DRS enabled, but Admission Control does not need to be enabled! A lot of people are under the impression that it requires Admission Control in order to take “an X number of failures” into account, but it does not; it does not use what is specified for Admission Control at all. The feature takes a single failure into account, and then uses DRS to calculate whether powered-on VMs will get the same amount of resources allocated after a failure. If the answer is no, or the performance degradation is higher than the percentage specified, a warning is triggered. You will still be able to power on new VMs, but the warning will not go away unless resource usage changes or you add more resources to the cluster.
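To make the behavior described above concrete, here is a minimal arithmetic sketch. This is my own simplified interpretation with made-up numbers, not the actual HA/DRS algorithm: with a single host failure taken into account, how much less capacity would the powered-on VMs get, and does that shortfall exceed the configured tolerance?

```shell
# Simplified sketch of the check described above; illustrative numbers,
# not the real HA/DRS calculation.
cluster_capacity_ghz=100   # e.g. 4 hosts of 25 GHz each
host_capacity_ghz=25       # capacity lost when a single host fails
consumed_ghz=90            # resources consumed by powered-on VMs
tolerance_pct=25           # "Performance degradation VMs tolerate"

capacity_after_failure=$(( cluster_capacity_ghz - host_capacity_ghz ))  # 75 GHz
shortfall=$(( consumed_ghz - capacity_after_failure ))                  # 15 GHz
degradation_pct=$(( shortfall * 100 / consumed_ghz ))                   # 16%

if [ "$degradation_pct" -gt "$tolerance_pct" ]; then
  echo "warning: ${degradation_pct}% degradation exceeds ${tolerance_pct}% tolerance"
else
  echo "ok: ${degradation_pct}% degradation is within ${tolerance_pct}% tolerance"
fi
```

Note that with the default of 100% the warning would effectively never trigger, which is exactly why lowering the value to something like 25% or 50% makes the setting meaningful.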

Playing around with Memory Tiering, are my memory pages tiered?

Duncan Epping · Dec 18, 2025 · 2 Comments

There was a question on VMTN about Memory Tiering performance, and how you can check if pages were tiered. I haven’t played around with Memory Tiering too much, so I noted down for myself what I needed to do on every host in order to enable it. Note, if the command contains a path and you want to do this in your own environment you need to change the path and device name accordingly. The question was if memory pages were tiered or not, so I dug up the command that allows you to check this on a per host level. It is at the bottom of this article for those who just want to skip to that part.

Now, before I forget, this is probably worth mentioning, as it is something many people don’t seem to understand: memory tiering only tiers cold memory pages. Active pages are not moved to NVMe; on top of that, memory is only tiered when there’s memory pressure! So if you don’t see any tiering, it could simply be that you are not under any memory capacity pressure. (Why move pages to a lower tier when there’s no need?)

List all storage devices via the CLI:

esxcli storage core device list

Create memory tiering partition on an NVMe device:

esxcli system tierdevice create -d=/vmfs/devices/disks/eui.1ea506b32a7f4454000c296a4884dc68

Enable Memory Tiering on a host level, note this requires a reboot:

esxcli system settings kernel set -s MemoryTiering -v TRUE
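After the reboot you can check whether the kernel option stuck. A quick sketch, assuming the standard esxcli kernel settings output (Name, Configured, Runtime, Default columns):

```shell
# Run on the ESXi host after the reboot; both the Configured and the
# Runtime value should show TRUE once Memory Tiering is active.
esxcli system settings kernel list -o MemoryTiering
```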

How is Memory Tiering configured in terms of DRAM to NVMe ratio? A 4:1 DRAM to NVMe ratio would be 25%, 1:1 would be 100%. So if you have it set at 4:1, with 512GB of DRAM you would only use 128GB of the NVMe at most, regardless of the size of the device.

esxcli system settings advanced list -o /Mem/TierNvmePct
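To make the ratio arithmetic concrete, here is a quick sketch (illustrative numbers only; this runs in any POSIX shell, not on the host) using the 4:1 example from above. If you want to change the ratio itself, the same advanced option can presumably be set with esxcli system settings advanced set -o /Mem/TierNvmePct and an integer value, but do verify that against your own environment first.

```shell
# Illustrative arithmetic only: /Mem/TierNvmePct caps the NVMe tier size
# relative to the amount of installed DRAM. 25 means a 4:1 DRAM:NVMe ratio.
dram_gb=512
tier_nvme_pct=25

# Maximum NVMe capacity used for tiering, regardless of device size.
nvme_tier_gb=$(( dram_gb * tier_nvme_pct / 100 ))
echo "At most ${nvme_tier_gb}GB of the NVMe device will be used"
```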

Is memory tiered or not? Find out all about it via memstats!

memstats -r vmtier-stats -u mb

Want to show a select number of metrics?

memstats -r vmtier-stats -u mb -s name:memSize:active:tier1Target:tier1Consumed:tier1ConsumedPeak:consumed

So what would the outcome look like when there is memory tiering happening? I removed a bunch of the metrics, just to keep it readable, “tier1” is the NVMe device, and as you can see each VM has several MBs worth of memory pages on NVMe right now.

 VIRTUAL MACHINE MEMORY TIER STATS: Wed Dec 17 15:29:43 2025
 -----------------------------------------------
   Start Group ID   : 0
   No. of levels    : 12
   Unit             : MB
   Selected columns : name:memSize:tier1Consumed

----------------------------------------
           name    memSize tier1Consumed
----------------------------------------
      vm.533611       4096            12
      vm.533612       4096            34
      vm.533613       4096            24
      vm.533614       4096            11
      vm.533615       4096            25
----------------------------------------
          Total      20480           106
----------------------------------------

Unexplored Territory Episode 087 – Microsoft on VMware VCF featuring Deji Akomolafe

Duncan Epping · Dec 16, 2024 · Leave a Comment

For the last episode of 2024, I invited Deji Akomolafe to discuss running Microsoft workloads on top of VCF. I’ve known Deji for a long time, and if anyone is passionate about VMware and Microsoft technology, it is him. Deji went over the many caveats and best practices when it comes to running, for instance, SQL Server on top of VMware VCF (or vSphere for that matter). NUMA, CPU scheduling, latency-sensitive settings, power settings, virtual disk controllers: just some of the things you can expect in this episode. You can listen to the episode on Spotify, Apple, or via the embedded player below.

Unexplored Territory Episode 086 – VCF 9 and vSAN 9 storage and data protection vision with Pete Koehler

Duncan Epping · Dec 2, 2024 · Leave a Comment

I just rebooted the Unexplored Territory Podcast after a break of 2 months. In this episode I discuss the VCF 9 and vSAN 9 storage and data protection vision with my colleague and good friend Pete Koehler. I hope you enjoy the show!

VCF-9 announcements at Explore Barcelona – vSAN Site Takeover and vSAN Site Maintenance

Duncan Epping · Nov 15, 2024 · Leave a Comment

At Explore in Barcelona we had several announcements and showed several roadmap items which we did not reveal in Las Vegas. As the sessions were not recorded in Barcelona, I wanted to share with you the features I spoke about at Explore which are currently planned for VCF 9. Please note, I don’t know when these features will be generally available, and there’s always a chance they are not released at all.

I created a video of the features we discussed, as I also wanted to share the demos with you. For those who don’t watch videos, the functionality that we are working on for VCF 9 is the following. I am just going to give a brief description, as we have not made a full public announcement about this, and I don’t want to get into trouble.

vSAN Site Maintenance

In a vSAN stretched cluster environment, when you want to do site maintenance, today you need to place every host into maintenance mode one by one. Not only is this an administrative/operational burden, it also increases the chances of placing the wrong hosts into maintenance mode. On top of that, as you need to do this sequentially, it could also be that the data stored on host-1 in site A differs from the data stored on host-2 in site A, meaning that there’s an inconsistent set of data in a site. Normally this is not a problem, as the environment will resync when it comes back online, but if the other data site fails, that existing (inconsistent) data set cannot be used to recover. With Site Maintenance we not only make it easier to place a full site into maintenance mode, we also remove that risk of data inconsistency, as vSAN coordinates the maintenance and ensures that the data set is consistent within the site. Fantastic right?!

vSAN Site Takeover

One of the features I felt we were lacking for the longest time was the ability to promote a site when 2 out of the 3 sites have failed simultaneously. This is where Site Takeover comes into play. If you end up in a situation where both the Witness Site and a data site go down at the same time, you want to be able to still recover, especially as it is very likely that you will have healthy objects for each VM in the remaining site. This is what vSAN Site Takeover will help you with. It allows you to manually (through the UI or a script) inform vSAN that, even though quorum is lost, it should make the local RAID set for each of the impacted VMs accessible again. After which, of course, vSphere HA would instruct the hosts to power on those VMs.

If you have any feedback on the demos, and the planned functionality, feel free to leave a comment!



About the Author

Duncan Epping is a Chief Technologist and Distinguished Engineering Architect at Broadcom. Besides writing on Yellow-Bricks, Duncan is the co-author of the vSAN Deep Dive and the vSphere Clustering Deep Dive book series. Duncan is also the host of the Unexplored Territory Podcast.
