
Yellow Bricks

by Duncan Epping


Software Defined

Memory Tiering… Say what?!

Duncan Epping · Jun 14, 2024 ·

Recently I presented a keynote at the Belgium VMUG. The topic was Innovation at VMware by Broadcom, although I guess I should say Innovation at Broadcom to be more accurate. During the keynote I briefly went over the process and the various types of innovation and what they can lead to, and I discussed three projects: vSAN ESA, the Distributed Services Engine, and something that is still being worked on called Memory Tiering.

Memory Tiering is a very interesting concept that was first publicly discussed at Explore (or VMworld, as it was still called back then) a few years ago as a potential future feature. You may ask yourself why anyone would want to tier memory, as the performance impact can be significant. There are various reasons to do so, one of them being the cost of memory. Another problem the industry is facing is that memory capacity (and performance) has not grown at the same rate as CPU capacity, which has resulted in many environments being memory-bound; in other words, the imbalance between CPU and memory has increased substantially. That is why VMware started Project Capitola.

When Project Capitola was discussed, most of the focus was on Intel Optane, and most of us know what happened to that. I guess some assumed this would also result in Project Capitola, or memory tiering and memory pooling technology, being scrapped. That is most definitely not the case: VMware has gone full steam ahead and has been discussing the progress in public, although you need to know where to look. If you listen to that session, it is clear that there are various efforts that would allow customers to tier memory in various ways, one of them of course being the various CXL-based solutions that are coming to market now/soon.

One of these is memory tiering via a CXL accelerator card, basically an FPGA whose sole purpose is to increase memory capacity, offload memory tiering, and accelerate certain functionality where memory is crucial, like vMotion for instance. As mentioned in the SNIA session, using an accelerator card can lead to a 30% reduction in migration times. An accelerator card like this will also open up other opportunities, like pooling memory, which is something customers have been asking for since we created the concept of a cluster: being able to share compute resources across hosts. Just imagine, your VM could use memory capacity available on another host without having to move the VM. Yes, before anyone comments on this, I do realize that this could potentially have a significant performance impact.

That is of course where the VMware logic comes into play. At VMworld 2021, when Project Capitola was presented, the team also shared the performance results of recent tests, which showed that the performance degradation was around 10% when memory was split 50% DRAM and 50% Optane. I was watching the SNIA session, and the demo shows the true power of VMware vSphere, memory tiering, and acceleration (Project Peaberry, as it is called). On average the performance degradation was around 10%, yet roughly 40% of virtual memory was accessed via the Peaberry accelerator. Do note that the tiering is completely transparent to the application, so this works for all different types of workloads out there. The crucial part to understand here is that because the hypervisor is already responsible for memory management, it knows which pages are hot and which pages are cold, which also means it can determine which pages it can move to a different tier while maintaining performance.
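To make that hot/cold idea a bit more concrete, here is a minimal Python sketch of the kind of bookkeeping a memory manager could do: record when each page was last touched and demote pages that have been idle for a while to the slower tier. This is purely illustrative and is not how the hypervisor actually implements it; the tier names and the threshold below are made up.

    import time

    # Hypothetical tiers: fast DRAM and a slower CXL/accelerator-backed tier.
    FAST_TIER, SLOW_TIER = "dram", "cxl"
    COLD_AFTER_SECONDS = 30  # made-up threshold for calling a page "cold"

    class PageTracker:
        """Tracks last-access times per page and suggests a tier placement."""

        def __init__(self):
            self.last_access = {}  # page number -> timestamp of last access
            self.placement = {}    # page number -> current tier

        def touch(self, page: int) -> None:
            """Record an access; a freshly touched page belongs in the fast tier."""
            self.last_access[page] = time.time()
            self.placement[page] = FAST_TIER

        def retier(self) -> None:
            """Demote pages that have been idle too long, keep hot pages in DRAM."""
            now = time.time()
            for page, last in self.last_access.items():
                cold = (now - last) > COLD_AFTER_SECONDS
                self.placement[page] = SLOW_TIER if cold else FAST_TIER

    tracker = PageTracker()
    tracker.touch(0)          # page 0 was just accessed, so it is hot
    tracker.retier()
    print(tracker.placement)  # {0: 'dram'}

The point of the sketch is simply that the entity doing memory management already sees every access, so it can make the placement decision without the application being aware of it.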

Anyway, I cannot reveal too much about what may, or may not, be coming in the future. What I can promise though is that I will make sure to write a blog as soon as I am allowed to talk about more details publicly, and I will probably also record a podcast with the product manager(s) when the time comes, so stay tuned!

Doing network/ISL maintenance in a vSAN stretched cluster configuration!

Duncan Epping · Nov 21, 2023 ·

I got a question earlier about the maintenance of an ISL in a vSAN Stretched Cluster configuration, which had me thinking for a while. The question was what to do with your workloads during the maintenance. I guess the easiest option, of course, is to power off all VMs and simply shut down the cluster, for which vSAN has a UI option, and there's a KB you can follow. Now, of course, there could also be a situation where the VMs need to remain running. But how does this work when you end up losing the connection between all three locations? Normally this would lead to a situation where all VMs become "inaccessible" as you end up losing quorum.

As said, this had me thinking: you could take advantage of the "vSAN Witness Resiliency" mechanism, which was introduced in vSAN 7.0 U3. How would this work?

Well, it is actually pretty straightforward: if all hosts of one site are in maintenance mode, failed, or powered off, the votes of the witness object for each VM/object will be recalculated within 3 minutes. When this recalculation has completed, the witness can go down without having any impact on the VMs. We introduced this capability to increase resiliency in a double-failure scenario, but we can also (ab)use this functionality during maintenance. Of course I had to test this, so the first step I took was placing all hosts in one location into maintenance mode (no data evacuation). This resulted in all my VMs being vMotioned to the other site.
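If you would rather script that first step than click through the UI, a rough pyVmomi sketch could look like the example below. Treat it as a sketch only: the vCenter address, credentials, and host naming convention are placeholders, and the "noAction" decommission mode is the API equivalent of the "No data evacuation" option.

    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim
    import ssl

    # Placeholder connection details; adjust for your own environment.
    si = SmartConnect(host="vcenter.local", user="administrator@vsphere.local",
                      pwd="password", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    # vSAN decommission mode "noAction" corresponds to "No data evacuation" in the UI.
    spec = vim.host.MaintenanceSpec(
        vsanMode=vim.vsan.host.DecommissionMode(objectAction="noAction"))

    # Collect all hosts and filter on a (made-up) naming convention for one site.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    site_a_hosts = [h for h in view.view if h.name.startswith("esxi-siteA")]

    for host in site_a_hosts:
        # With DRS in fully automated mode, the running VMs will be vMotioned
        # to the other site as each host enters maintenance mode.
        host.EnterMaintenanceMode_Task(timeout=0, maintenanceSpec=spec)
        print(f"Entering maintenance mode (no data evacuation) on {host.name}")

    Disconnect(si)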

Next I checked with RVC whether my votes had been recalculated or not. As stated, depending on the number of VMs this can take around 3 minutes in total, but it will usually be quicker. After the recalculation had completed I powered off the witness, and the result was that all VMs were still running.

Of course, I had to double-check on the command line using RVC (you can use the command "vsan.vm_object_info" to check a particular object, for instance) to ensure that the components of those VMs were indeed still "ACTIVE" instead of "ABSENT", and there you go!

Now, when maintenance has been completed, you simply do the reverse: you power on the witness, and then you power on the hosts in the other location. After the resync has completed, the VMs will be rebalanced again by DRS. Note that DRS rebalancing (or the application of "should" rules) will only happen once the resync of the VM has completed.

What does Datastore Sharing/HCI Mesh/vSAN Max support when stretched?

Duncan Epping · Oct 31, 2023 ·

This question has come up a few times now: what does Datastore Sharing/HCI Mesh/vSAN Max support when stretched? It is a question that keeps coming up somehow, and I personally had some challenges finding the statements in our documentation as well. I just found the statement and wanted to first of all point people to it, and then also clarify it so there is no question. If I am using Datastore Sharing / HCI Mesh, or will be using vSAN Max, and my vSAN Datastore is stretched, what does VMware support (and not support)?

We have multiple potential combinations; let me list them and add whether each is supported or not. Note that this is at the time of writing, with the currently available version (vSAN 8.0 U2).

  • vSAN Stretched Cluster datastore shared with vSAN Stretched Cluster –> Supported
  • vSAN Stretched Cluster datastore shared with vSAN Cluster (not stretched) –> Supported
  • vSAN Stretched Cluster datastore shared with Compute Only Cluster (not stretched) –> Supported
  • vSAN Stretched Cluster datastore shared with Compute Only Cluster (stretched, symmetric) –> Supported
  • vSAN Stretched Cluster datastore shared with Compute Only Cluster (stretched, asymmetric) –> Not Supported

So what is the difference between symmetric and asymmetric? The image below, which comes from the vSAN stretched cluster documentation, explains it best. I think asymmetric is the most likely configuration in this case, so if you are running a stretched vSAN cluster and a stretched compute-only cluster, it most likely is not supported.

This also applies to vSAN Max, by the way. I hope that helps. Oh, and before anyone asks: if the "server side" is not stretched, it can be connected to a stretched environment and is supported.

 

Unexplored Territory episode 59: Introducing vSAN Max!

Duncan Epping · Oct 23, 2023 ·

Two months ago VMware introduced vSAN Max at VMware Explore, and I wrote about it in this blog. Last week I had a conversation about vSAN Max with Kalyan Krishnaswamy, who is the Product Manager for it. I figured I would share the episode via my blog as well for those who are not subscribed to the Unexplored Territory podcast just yet. Note, you can either listen to it below, or just listen via Spotify, Apple, or anywhere else you get your podcasts.

Witness resiliency feature with a 2-node cluster

Duncan Epping · Oct 9, 2023 ·

A few weeks ago I had a conversation with a customer about a large vSAN ESA 2-node deployment they were planning. One of the questions they had was whether, with a 2-node configuration with nested fault domains, they would be able to tolerate a witness failure after one of the nodes had gone down. I had tested this for a stretched cluster, but I hadn't tested it with a 2-node configuration. Will we actually see the votes be recalculated after a host failure, and will the VM remain up and running when the witness fails after the votes have been recalculated?

Let’s just test it and use RVC to look at what happens in each case. Let’s look at the healthy output first, then at a host failure, followed by the witness failure:

Healthy

    DOM Object: 71c32365-667e-0195-1521-0200ab157625 
      RAID_1
        Concatenation
          Component: 71c32365-b063-df99-2b04-0200ab157625 
            votes: 2, usage: 0.0 GB, proxy component: true
          RAID_0
            Component: 71c32365-f49e-e599-06aa-0200ab157625 
              votes: 1, usage: 0.0 GB, proxy component: true
            Component: 71c32365-681e-e799-168d-0200ab157625 
              votes: 1, usage: 0.0 GB, proxy component: true
            Component: 71c32365-06d3-e899-b3b2-0200ab157625 
              votes: 1, usage: 0.0 GB, proxy component: true
        Concatenation
          Component: 71c32365-e0cb-ea99-9c44-0200ab157625 
            votes: 1, usage: 0.0 GB, proxy component: false
          RAID_0
            Component: 71c32365-6ac2-ee99-1f6d-0200ab157625 
               votes: 1, usage: 0.0 GB, proxy component: false
            Component: 71c32365-e03f-f099-eb12-0200ab157625 
               votes: 1, usage: 0.0 GB, proxy component: false
            Component: 71c32365-6ad0-f199-a021-0200ab157625 
               votes: 1, usage: 0.0 GB, proxy component: false
      Witness: 71c32365-8c61-f399-48c9-0200ab157625 
        votes: 4, usage: 0.0 GB, proxy component: false

With 1 host down, as you can see, the votes for the witness changed, and of course the state also changed from “active” to “absent”.

    DOM Object: 71c32365-667e-0195-1521-0200ab157625 
      RAID_1
        Concatenation (state: ABSENT (6))
          Component: 71c32365-b063-df99-2b04-0200ab157625 
            votes: 1, proxy component: false
          RAID_0
            Component: 71c32365-f49e-e599-06aa-0200ab157625 
              votes: 1, proxy component: false
            Component: 71c32365-681e-e799-168d-0200ab157625 
              votes: 1, proxy component: false
            Component: 71c32365-06d3-e899-b3b2-0200ab157625 
              votes: 1, proxy component: false
        Concatenation
          Component: 71c32365-e0cb-ea99-9c44-0200ab157625 
             votes: 2, usage: 0.0 GB, proxy component: false
          RAID_0
            Component: 71c32365-6ac2-ee99-1f6d-0200ab157625 
              votes: 1, usage: 0.0 GB, proxy component: false
            Component: 71c32365-e03f-f099-eb12-0200ab157625
              votes: 1, usage: 0.0 GB, proxy component: false
            Component: 71c32365-6ad0-f199-a021-0200ab157625 
              votes: 1, usage: 0.0 GB, proxy component: false
      Witness: 71c32365-8c61-f399-48c9-0200ab157625 
        votes: 1, usage: 0.0 GB, proxy component: false

And after I failed the witness, of course we had to check whether the VM was still running and didn’t show up as inaccessible in the UI, and indeed it did not. vSAN and the Witness Resilience feature worked as I expected. (Yes, I double-checked it through RVC as well, and the VM was “active”.)


