
Yellow Bricks

by Duncan Epping


Express Storage Architecture

What can I change about a vSAN ESA Ready Node?

Duncan Epping · Jan 23, 2023 ·

I’ve had half a dozen people asking about this over the past weeks; it really seems more and more people are at the point of adopting vSAN ESA (Express Storage Architecture). When they look at the various vSAN ESA Ready Node configurations, what stands out is that the current list is limited in terms of server models and configurations. (https://vmwa.re/vsanesahcl)

The list is updated every week; last week, for instance, Supermicro popped up as a server vendor. Of course, Dell, HPE, and Lenovo have been on the list since day 1. When you select the vendor, the ready node type, and the model, you then have the option to select a number of things, but in most cases you seem to be limited to “Storage Device” and “Number of Storage Devices”. This does not mean, however, that you cannot change anything. A knowledge base article has been released which describes what you can, and cannot, change when it comes to these configurations! The KB article is listed on the vSAN ESA VMware Compatibility Guide list, but somehow people don’t always notice the link. (Yes, I have asked the team to make the link more obvious somehow.)

Now when you look at the KB, it lists what you can change and what the rules are when making changes. For instance, you can change the CPU, but only for the same or higher core count and the same or higher base clock speed. For memory, you can increase the amount, and the same applies to storage capacity. For storage it is even a bit more specific: you need to use the same make/model, so if the ReadyNode configuration lists a 1.6TB P5600, you can swap it for a 3.2TB P5600. We recently (May 20th, 2023) had a change in support, and we now support changing the device make/model, as long as you follow the other guidelines mentioned in the KB. For instance, you can swap an Intel device for a Samsung, but that Samsung would need to be supported by the OEM vendor and needs to be of the same (or a higher) performance and endurance class. And of course, the device needs to be certified for vSAN ESA: http://vmwa.re/vsanesahclc. Anyway, if you are configuring a Ready Node for ESA, make sure to check the KB so that you make supported changes!
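For those who like to sanity-check a proposed configuration, here is a minimal Python sketch of the KB rules as I read them. The field names and the class encodings are my own illustration, not a VMware schema, and the KB itself remains the authoritative source.

```python
# Illustrative only: encodes the ReadyNode modification rules described in
# the KB as simple comparisons. Field names are hypothetical, not a VMware
# schema; performance/endurance classes are modeled as "higher is better".
from dataclasses import dataclass

@dataclass
class NodeConfig:
    cpu_cores: int
    cpu_base_ghz: float
    memory_gb: int
    device_capacity_tb: float
    device_perf_class: int
    device_endurance_class: int
    device_on_esa_hcl: bool
    device_supported_by_oem: bool

def change_violations(ready_node: NodeConfig, proposed: NodeConfig) -> list[str]:
    """Return a list of rule violations; an empty list means the change looks OK."""
    violations = []
    # CPU: same or higher core count AND same or higher base clock speed.
    if proposed.cpu_cores < ready_node.cpu_cores:
        violations.append("CPU core count lower than the ReadyNode config")
    if proposed.cpu_base_ghz < ready_node.cpu_base_ghz:
        violations.append("CPU base clock lower than the ReadyNode config")
    # Memory: may only stay the same or grow.
    if proposed.memory_gb < ready_node.memory_gb:
        violations.append("memory reduced below the ReadyNode config")
    # Storage: same or higher capacity, performance and endurance class,
    # certified for vSAN ESA, and supported by the OEM vendor.
    if proposed.device_capacity_tb < ready_node.device_capacity_tb:
        violations.append("storage device capacity reduced")
    if proposed.device_perf_class < ready_node.device_perf_class:
        violations.append("storage device performance class lower")
    if proposed.device_endurance_class < ready_node.device_endurance_class:
        violations.append("storage device endurance class lower")
    if not proposed.device_on_esa_hcl:
        violations.append("storage device not certified for vSAN ESA")
    if not proposed.device_supported_by_oem:
        violations.append("storage device not supported by the OEM vendor")
    return violations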

Should I always use vSAN ESA, or can I go for vSAN OSA?

Duncan Epping · Dec 22, 2022 ·

Starting to get this question more often: should I always use vSAN ESA, or are there reasons to go for vSAN OSA? The answer is simple; whether you should use ESA or OSA can only be answered by the most commonly used phrase in consultancy: it depends. What does it depend on? Well, your requirements and your constraints.

One thing which comes up frequently is the 25Gbps requirement from a networking perspective. I’ve seen multiple people saying that they want to use ESA, but their environment is not even close to saturating 10Gbps today, so can they use ESA with 10Gbps? Yes, you can. We do support 10Gbps with the new ReadyNode profiles, but we typically recommend 25Gbps for ESA. Why? Well, with ESA there’s also a certain performance expectation, which is why there’s a bandwidth requirement. The bandwidth requirement is put in place to ensure that you can use the NVMe devices to their full potential.
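To make that concrete, here is a quick back-of-envelope calculation in Python. The NVMe throughput figure is purely illustrative (check your device’s data sheet), but the point stands: even a single modern NVMe device can outrun a 10Gbps link.

```python
# Back-of-envelope check of network vs. NVMe throughput. The device figure
# below is illustrative, not a spec for any particular drive.
def network_gbytes_per_sec(link_gbits: float) -> float:
    # 1 Gbps = 0.125 GB/s; protocol overhead ignored for simplicity.
    return link_gbits / 8

nvme_read_gbytes_per_sec = 5.0   # hypothetical modern PCIe Gen4 NVMe device
devices_per_host = 4

for link in (10, 25):
    net = network_gbytes_per_sec(link)
    local = nvme_read_gbytes_per_sec * devices_per_host
    print(f"{link}GbE = {net:.2f} GB/s network vs ~{local:.0f} GB/s local NVMe")
# Even one device can saturate 10GbE, which is why 25GbE (or more) is
# recommended to let ESA use the devices to their full potential.
```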

With both ESA and OSA you can produce impressive performance results. The big difference is that ESA does this with a single type of NVMe device across the cluster, whereas OSA uses caching devices and capacity devices. ESA has also been optimized for high performance and is better at leveraging the existing host resources to achieve those higher numbers, for example through multi-threading. What I appreciate most about ESA is that the stack is also optimized in terms of resource usage. By moving data services to the top of the stack, data processing (compression or encryption, for example) happens at the source instead of at the bottom/destination. In other words, blocks are compressed by one host, not by two or more (depending on the selected RAID level and type). Also, data is transferred over the network compressed, which saves bandwidth.
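A toy model of that difference is sketched below, with zlib standing in for vSAN’s compression (vSAN uses its own implementation; this is just to show the counting). Compressing once at the source means one compression operation and fewer bytes on the wire, versus shipping full-size blocks and compressing at every destination.

```python
# Toy comparison only: compress-at-source (ESA-style) vs. compress-at-
# destination (OSA-style). zlib is a stand-in, not vSAN's algorithm.
import zlib

def compress_at_destination(data: bytes, replicas: int) -> tuple[int, int]:
    """Full-size bytes cross the network; each replica host compresses."""
    bytes_on_wire = len(data) * replicas
    compression_ops = replicas
    return bytes_on_wire, compression_ops

def compress_at_source(data: bytes, replicas: int) -> tuple[int, int]:
    """Compress once at the source, then ship the compressed block."""
    compressed = zlib.compress(data)
    bytes_on_wire = len(compressed) * replicas
    compression_ops = 1
    return bytes_on_wire, compression_ops

block = b"example data " * 1000          # 13000 bytes of compressible data
print(compress_at_destination(block, 2))  # (26000, 2)
print(compress_at_source(block, 2))       # far fewer bytes, one compression
```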

Back to OSA: why would you go for the Original Storage Architecture instead of ESA? Well, like I said, if you don’t have the performance requirements that dictate the use of ESA. If you want to use vSAN File Services (not supported with ESA in 8.0) or HCI Mesh. If you want to run a hybrid configuration. If you want to use the vSAN Standard license. There are plenty of reasons to still use OSA, so please don’t assume ESA is the only option. Use what you need to achieve your desired outcome, what fits your budget, and what will work with your constraints and requirements.

vSAN Express Storage Architecture cluster sizes supported?

Duncan Epping · Dec 20, 2022 ·

On VMTN a question was asked about the cluster sizes supported for vSAN Express Storage Architecture (ESA). There appears to be some misinformation out there on various blogs. Let me first state that you should rely on official documentation when it comes to support statements, not on third-party blogs. VMware has the official documentation website, and of course there’s core.vmware.com with material produced by the various tech marketing teams. This is what I would rely on for official statements and/or insights into how things work; beyond that, there are articles on personal blogs by VMware folks. Anyway, back to the question: which cluster sizes are supported?

For vSAN ESA, VMware supports exactly the same cluster sizes as it does for OSA. In other words, as small as a 2-node configuration (with a witness), as large as a 64-node configuration, and anything in between!

Now when it comes to sizing your cluster, the same applies to ESA as to OSA: if you want VMs to automatically rebuild after a host failure or a long-term maintenance mode action, you will need to make sure you have capacity available in your cluster. That capacity comes in the form of storage capacity (flash) as well as host capacity. Basically, that means you need additional hosts available where the components can be created, and the capacity to resync the data of the impacted objects.
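The reasoning can be sketched in a few lines of Python. This is my own simplification (the slack fraction in particular is an arbitrary placeholder); for real designs, use the official vSAN sizing tools.

```python
# Simplified self-healing check: can the cluster rebuild after one host
# failure? A rough sketch of the reasoning above, not an official sizer.
def can_self_heal(hosts: int, hosts_needed_for_policy: int,
                  used_tb: float, raw_tb_per_host: float,
                  slack_fraction: float = 0.30) -> bool:
    """hosts_needed_for_policy: minimum hosts the storage policy requires,
    e.g. 3 for RAID-1 FTT=1, 5 for RAID-5 4+1, 6 for RAID-6 4+2."""
    # Need at least one spare host beyond what the policy requires, so the
    # failed host's components have somewhere to be recreated...
    if hosts < hosts_needed_for_policy + 1:
        return False
    # ...and enough free capacity on the surviving hosts to resync the
    # impacted objects while keeping some operational slack (placeholder 30%).
    surviving_raw_tb = (hosts - 1) * raw_tb_per_host
    return used_tb <= surviving_raw_tb * (1 - slack_fraction)

# Matches the diagram below: 6 components in the capacity leg, 7 hosts.
print(can_self_heal(hosts=7, hosts_needed_for_policy=6,
                    used_tb=40, raw_tb_per_host=12))  # True
```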

If you look at the diagram below, you see 6 components in the capacity leg and 7 hosts, which means that if a host fails you still have a host available to recreate that component. Again, on this host you also still need to have capacity available to resync the data, so that the object is compliant again when it comes to data availability.

I hope that explains, first of all, what is supported from a cluster size perspective, and secondly why you may want to consider adding additional hosts. This, of course, will depend on your requirements and your budget.

vSAN 8.0 ESA – Dude, where’s my vSAN disk group?

Duncan Epping · Nov 29, 2022 ·

Last week I was talking to a customer, and he mentioned that he had deployed vSAN 8.0 in his lab and was shocked to notice, when he wanted to define disk groups, that they don’t exist anymore. Well, not in vSAN 8.0 ESA (Express Storage Architecture), that is. They do still exist in the Original Storage Architecture! The big change with vSAN 8.0 ESA is that the “bottleneck” of the previous architecture has been removed. No longer do you select a single device for caching for a particular disk group, and no longer do you designate devices purely for capacity.

With vSAN 8.0 ESA, all your devices are part of a single storage pool, and all of those devices contribute to both storage capacity and storage performance! The added benefit, of course, is that writes and reads are distributed across all devices, removing a potential choking point as well as a single point of failure. Why? Well, with vSAN OSA, when the caching device fails, the whole disk group becomes unavailable. With ESA that is no longer the case, as there’s no caching device!

So how does vSAN ESA provide both optimal capacity efficiency and optimal performance? Well, it does this by introducing additional layers. The idea is that vSAN provides write performance at the level of RAID-1 but space efficiency at the level of RAID-5 or RAID-6: the best of both worlds. It needs to do this, however, while taking into consideration that we are dealing with different types of flash devices than you normally would with vSAN OSA. In other words, writes also need to be optimized for the types of devices used (TLC), and the design needs to be future-proof for devices that may be supported later on (QLC).

One of the key elements of this new architecture is the introduction of the “log-structured filesystem” and the “durable log”. Let’s look at the diagram below first.

What vSAN ESA does is write all data to the log-structured file system first, into the durable log. This ensures that data is persistently stored. This is what the “performance leg” provides: the performance leg literally stores the writes first. That could be 4KB blocks, or 32KB blocks, or whatever. It stores the data first, collects a full stripe write (512KB), and then writes the data to the capacity leg. Why these two layers? Well, the performance leg is a RAID-1 configuration, so it is optimal for write performance, while in general the capacity leg will be RAID-5 or RAID-6, which is optimal for space efficiency. By creating this small performance leg component that holds the durable log, vSAN is capable of immediately acknowledging the writes, as the data is persisted in the log, and then, when there’s a full stripe, writing it out efficiently as RAID-5 or RAID-6.
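To illustrate the mechanics, here is a toy model of that two-layer write path in Python. It is not vSAN’s actual implementation, just the accumulate-then-flush idea from the paragraph above: acknowledge as soon as data lands in the durable log, and destage to the capacity leg only in full 512KB stripes.

```python
# Toy model of the ESA write path described above. Not vSAN's code: it just
# shows acknowledging from a mirrored durable log and destaging full stripes.
FULL_STRIPE = 512 * 1024  # 512KB full stripe write, as described in the text

class TwoLayerWriter:
    def __init__(self):
        self.durable_log = bytearray()   # performance leg: mirrored (RAID-1) log
        self.capacity_leg = []           # capacity leg: full RAID-5/6 stripes

    def write(self, block: bytes) -> str:
        # Persist to the durable log and acknowledge the write immediately.
        self.durable_log.extend(block)
        # Once a full stripe has accumulated, destage it as one efficient
        # full-stripe write (no read-modify-write of parity needed).
        while len(self.durable_log) >= FULL_STRIPE:
            stripe = bytes(self.durable_log[:FULL_STRIPE])
            self.durable_log = self.durable_log[FULL_STRIPE:]
            self.capacity_leg.append(stripe)
        return "ack"

w = TwoLayerWriter()
for _ in range(20):
    w.write(b"\0" * 32 * 1024)           # twenty 32KB writes = 640KB total
print(len(w.capacity_leg), len(w.durable_log))  # 1 full stripe, 128KB pending
```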

Now of course, in the UI you will be able to see those new performance leg components and the capacity leg components. They are not marked as “performance” or “capacity” but they are easily recognizable. I created a quick demo that talks you through the above. If you are interested, check it out!

vSAN 8.0 ESA – Introducing Adaptive RAID-5

Duncan Epping · Nov 15, 2022 ·

Starting with vSAN 8.0 ESA (Express Storage Architecture) VMware has introduced an adaptive RAID-5 mechanism. What does this mean? Essentially, vSAN deploys a particular RAID-5 configuration depending on the size of the cluster! There are two options, let’s list them out and discuss them individually.

  • RAID-5, 2+1, 3-5 hosts
  • RAID-5, 4+1, 6 hosts or more

As mentioned in the list above, the RAID-5 configuration you get depends on the cluster size. Clusters of up to 5 hosts will see a 2+1 configuration when RAID-5 is selected. For those wondering, the diagram below shows what this looks like. A 2+1 configuration has a capacity footprint of 150%, meaning that when you store 100GB of data, it will consume 150GB of capacity.

Now, when you have a larger cluster, meaning 6 hosts or more, vSAN will deploy a 4+1 configuration. The big benefit is that the capacity footprint goes down from 150% to 125%; in other words, 100GB of data will consume 125GB of capacity.
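The math is simple enough to capture in a few lines of Python. This just mirrors the numbers above; it is not an official sizing tool.

```python
# The adaptive RAID-5 math from the text: ESA picks the stripe layout from
# the cluster size, which directly determines the capacity footprint.
def esa_raid5_layout(hosts: int) -> tuple[int, int]:
    """Return (data, parity) for the RAID-5 layout ESA uses at this size."""
    if hosts < 3:
        raise ValueError("RAID-5 needs at least 3 hosts")
    return (2, 1) if hosts <= 5 else (4, 1)

def consumed_capacity_gb(data_gb: float, hosts: int) -> float:
    data, parity = esa_raid5_layout(hosts)
    return data_gb * (data + parity) / data

print(consumed_capacity_gb(100, hosts=4))  # 150.0 -> 2+1, 150% footprint
print(consumed_capacity_gb(100, hosts=6))  # 125.0 -> 4+1, 125% footprint
```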

What is great about this solution is that vSAN monitors the cluster size. If you have 6 hosts and a host fails, or a host is placed into maintenance mode, etc., vSAN will automatically scale the RAID-5 configuration down from 4+1 to 2+1 after a period of 24 hours. I of course had to make sure that it actually works, so I created a quick demo that shows vSAN changing the RAID-5 configuration from 4+1 to 2+1, and then back to 4+1 when we reintroduce a host into the cluster.

One more thing I need to point out: the adaptive RAID-5 functionality also works in a stretched cluster. So if you have a 3+3+1 stretched cluster, you will see a 2+1 RAID-5 set. If you have a 6+6+1 cluster (or more hosts in each location), then you will see a 4+1 set. Also, if you place a few hosts into maintenance mode, or hosts have failed, you will see the configuration change from 4+1 to 2+1, and the other way around when hosts return to duty!

For more details, watch the demo, or read this excellent post by Pete Koehler on the VMware website.


Copyright Yellow-Bricks.com © 2025