
Yellow Bricks

by Duncan Epping


memory tiering

Playing around with Memory Tiering, are my memory pages tiered?

Duncan Epping · Dec 18, 2025 · 1 Comment

There was a question on VMTN about Memory Tiering performance and how you can check whether pages were tiered. I haven’t played around with Memory Tiering too much, so I noted down for myself what I needed to do on every host to enable it. Note: if a command contains a path and you want to do this in your own environment, you need to change the path and device name accordingly. The question was whether memory pages were tiered or not, so I dug up the command that allows you to check this on a per-host level. It is at the bottom of this article for those who just want to skip to that part.

Now, before I forget, it is probably worth mentioning something many people don’t seem to understand: Memory Tiering only tiers cold memory pages. Active pages are not moved to NVMe. On top of that, pages are only tiered when there is memory pressure! So if you don’t see any tiering, it could simply be that the host is not under any memory capacity pressure. (Why move pages to a lower tier when there’s no need?)
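
If you are not sure whether a host is under memory pressure, esxtop can give a quick indication: open the memory view (type m) and check the “state” field at the top of the screen; anything other than “high” suggests the host is experiencing some level of memory pressure.

esxtop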

List all storage devices via the CLI:

esxcli storage core device list
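
If the host has a long list of devices, you can of course filter the output. For example, assuming the display name of your device contains “NVMe”:

esxcli storage core device list | grep -i nvme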

Create memory tiering partition on an NVMe device:

esxcli system tierdevice create -d=/vmfs/devices/disks/eui.1ea506b32a7f4454000c296a4884dc68
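
To double-check that the partition was created, you should be able to list all configured tier devices via the same namespace:

esxcli system tierdevice list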

Enable Memory Tiering at the host level; note that this requires a reboot:

esxcli system settings kernel set -s MemoryTiering -v TRUE
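
You can verify whether the setting took effect (after the reboot) as follows:

esxcli system settings kernel list -o MemoryTiering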

How is Memory Tiering configured in terms of the DRAM to NVMe ratio? The ratio is expressed as a percentage: a 4:1 DRAM to NVMe ratio would be 25%, and 1:1 would be 100%. So if you have it set at 4:1 with 512GB of DRAM, you would only use 128GB of the NVMe at most, regardless of the size of the device. You can check the current value as follows:

esxcli system settings advanced list -o /Mem/TierNvmePct
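
If you want to change the ratio, the same advanced option can be set. For example, 25 configures the 4:1 ratio mentioned above, and 100 would be 1:1:

esxcli system settings advanced set -o /Mem/TierNvmePct -i 25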

Is memory tiered or not? Find out all about it via memstats!

memstats -r vmtier-stats -u mb

Want to show a select number of metrics?

memstats -r vmtier-stats -u mb -s name:memSize:active:tier1Target:tier1Consumed:tier1ConsumedPeak:consumed

So what would the outcome look like when memory tiering is happening? I removed a bunch of the metrics just to keep it readable. “tier1” is the NVMe device, and as you can see, each VM currently has several MBs’ worth of memory pages on NVMe.

 VIRTUAL MACHINE MEMORY TIER STATS: Wed Dec 17 15:29:43 2025
 -----------------------------------------------
   Start Group ID   : 0
   No. of levels    : 12
   Unit             : MB
   Selected columns : name:memSize:tier1Consumed

----------------------------------------
           name    memSize tier1Consumed
----------------------------------------
      vm.533611       4096            12
      vm.533612       4096            34
      vm.533613       4096            24
      vm.533614       4096            11
      vm.533615       4096            25
----------------------------------------
          Total      20480           106
----------------------------------------

#103 – The performance impact of Memory Tiering featuring Qasim Ali and Todd Muirhead

Duncan Epping · Sep 22, 2025 · Leave a Comment

Over the last few months, I’ve had many discussions about Memory Tiering. When I saw a brand-new performance white paper being released, I knew it was time to invite two of the authors to the podcast. Qasim Ali and Todd Muirhead go over the ins and outs of Memory Tiering: they discuss the basics, but also explain in depth what the potential performance impact is when enabling this feature in your environment. You can listen on Apple Podcasts, Spotify, the embedded player below, or any podcast app of your choice!

If you’d like to know more, visit the following links!

  • Performance blog
  • Performance white paper
  • Explore Session Arvind and David
  • Explore Session Dave
  • Extreme Performance Series videos
  • How to enable Memory Tiering blog
  • VMware Performance Blog

Memory Tiering… Say what?!

Duncan Epping · Jun 14, 2024

Recently I presented a keynote at the Belgium VMUG. The topic was Innovation at VMware by Broadcom, though I guess I should say Innovation at Broadcom to be more accurate. During the keynote, I briefly went over the innovation process, the various types of innovation, and what these can lead to. I discussed three projects: vSAN ESA, the Distributed Services Engine, and something being worked on called Memory Tiering.

Memory Tiering is a very interesting concept that was first publicly discussed a few years ago at Explore (or VMworld, as I guess it was still called) as a potential future feature. You may ask yourself why anyone would want to tier memory, as the performance impact can be significant. There are various reasons to do so, one of them being the cost of memory. Another problem the industry is facing is that memory capacity (and performance) has not grown at the same rate as CPU capacity, which has resulted in many environments being memory-bound; put differently, the imbalance between CPU and memory has increased substantially. That’s why VMware started Project Capitola.

When Project Capitola was discussed, most of the focus was on Intel Optane, and most of us know what happened to that. I guess some assumed that this would also result in Project Capitola, or memory tiering and memory pooling technology in general, being scrapped. That is most definitely not the case: VMware has gone full steam ahead and has been discussing the progress in public, although you need to know where to look. If you listen to that session, it is clear that there are various efforts that would allow customers to tier memory in various ways, one of them of course being the various CXL-based solutions that are coming to market now/soon.

One of these is memory tiering via a CXL accelerator card: basically an FPGA whose sole purpose is to increase memory capacity, offload memory tiering, and accelerate certain functionality where memory is crucial, like for instance vMotion. As mentioned in the SNIA session, using an accelerator card can lead to a 30% reduction in migration times. An accelerator card like this also opens up other opportunities, like pooling memory, which is something customers have been asking for since we created the concept of a cluster: being able to share compute resources across hosts. Just imagine, your VM could use memory capacity available on another host without having to move the VM. Yes, before anyone comments on this, I do realize that this could potentially have a significant performance impact.

That is of course where the VMware logic comes into play. At VMworld in 2021, when Project Capitola was presented, the team also shared the results of recent performance tests, which showed a degradation of around 10% when 50% DRAM and 50% Optane memory was used. I was watching the SNIA session, and the demo shows the true power of VMware vSphere, memory tiering, and acceleration (Project Peaberry, as it is called): on average the performance degradation was around 10%, yet roughly 40% of virtual memory was accessed via the Peaberry accelerator. Do note that the tiering is completely transparent to the application, and it works for all different types of workloads out there. The crucial part to understand is that because the hypervisor is already responsible for memory management, it knows which pages are hot and which are cold, which also means it can determine which pages it can move to a different tier while maintaining performance.

Anyway, I cannot reveal too much about what may, or may not, be coming in the future. What I can promise is that I will write a blog as soon as I am allowed to talk about more details publicly, and I will probably also record a podcast with the product manager(s) when the time comes, so stay tuned!
