Yellow Bricks

by Duncan Epping


What can I change about a vSAN ESA Ready Node?

Duncan Epping · Jan 23, 2023 ·

I’ve had half a dozen people asking about this over the past few weeks; it seems more and more people are at the point of adopting vSAN ESA (Express Storage Architecture). When they look at the various vSAN ESA Ready Node configurations, what stands out is that the current list is limited in terms of server models and configurations. (https://vmwa.re/vsanesahcl)

The list is updated every week; last week, for instance, Supermicro popped up as a server vendor. Of course, Dell, HPE, and Lenovo have been on the list since day 1. When you select the vendor, the Ready Node type, and the model, you will then have the option to select a number of things, but in most cases you seem to be limited to “Storage Device” and “Number of Storage Devices”. This, however, does not mean you cannot change anything. A knowledge base article has been released that describes what you can and cannot change when it comes to these configurations! The KB article is listed on the vSAN ESA VMware Compatibility Guide list, but somehow people don’t always notice the link. (Yes, I have asked the team to make the link more obvious somehow.)

Now, when you look at the KB, it lists what you can change and what the rules are when making changes. For instance, you can change the CPU, but only for the same or higher core count and the same or higher base clock speed. For memory, you can increase the amount, and the same applies to storage capacity. For storage it is even a bit more specific: you need to use the same make/model, so if the Ready Node configuration lists a 1.6TB P5600, you can swap it for a 3.2TB P5600. We recently (May 20th, 2023) had a change in support: we now also support changing the device make/model, as long as you follow the other guidelines mentioned in the KB. For instance, you can swap an Intel device for a Samsung, but that Samsung needs to be supported by the OEM vendor and needs to be in the same (or a higher) performance and endurance class. And of course, the device needs to be certified for vSAN ESA: http://vmwa.re/vsanesahclc. Anyway, if you are configuring a Ready Node for ESA, make sure to check the KB so that you only make supported changes!
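The “same or higher” rules from the KB can be summarized as a simple check. The sketch below is purely illustrative (the field names and the numeric performance/endurance class ordering are my assumptions, not an official tool); always verify a proposed change against the KB itself.

```python
# Illustrative sketch of the Ready Node substitution rules described in the KB.
# Field names and the performance/endurance "class" ordering are assumptions
# for illustration; the KB article remains the authoritative source.
from dataclasses import dataclass

@dataclass
class NodeConfig:
    cpu_cores: int
    cpu_base_ghz: float
    memory_gb: int
    device_capacity_tb: float
    device_perf_class: int       # higher = better performance class
    device_endurance_class: int  # higher = better endurance class

def is_supported_change(ready_node: NodeConfig, proposed: NodeConfig) -> bool:
    """Same-or-higher rule for CPU cores, base clock, memory, and storage."""
    return (
        proposed.cpu_cores >= ready_node.cpu_cores
        and proposed.cpu_base_ghz >= ready_node.cpu_base_ghz
        and proposed.memory_gb >= ready_node.memory_gb
        and proposed.device_capacity_tb >= ready_node.device_capacity_tb
        and proposed.device_perf_class >= ready_node.device_perf_class
        and proposed.device_endurance_class >= ready_node.device_endurance_class
    )

base = NodeConfig(32, 2.4, 512, 1.6, 2, 2)
bigger_disk = NodeConfig(32, 2.4, 512, 3.2, 2, 2)  # e.g. P5600 1.6TB -> 3.2TB
slower_cpu = NodeConfig(32, 2.0, 512, 1.6, 2, 2)   # lower base clock: not allowed

print(is_supported_change(base, bigger_disk))  # True
print(is_supported_change(base, slower_cpu))   # False
```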

Cross connecting vSAN Datastores with HCI Mesh in vSAN 8 OSA

Duncan Epping · Jan 4, 2023 ·

Yesterday I had an internal discussion on Slack about a configuration for a customer. The customer had multiple locations and had the option to create various clusters within locations, or across locations. Now, as most of you know, I have been talking a lot about stretched clusters for the past decade. However, stretched clusters are not the answer to every problem. Especially not in situations where you end up with unbalanced configurations, or where you lack the ability to place a witness in a third location. Customers tend to gravitate towards stretched clusters as they provide resiliency and pooling of all resources, even across locations.

With vSAN, when you have multiple clusters, you also have multiple vSAN Datastores. Having that separation of resources is typically appreciated. However, in some cases, customers prefer the flexibility of movement with limited overhead. Sure, if you have multiple clusters you can simply Storage vMotion VMs from the source cluster to the destination cluster, but it does mean you need to move ALL the data with the VM, whereas in some cases you may not care where the data resides.

This is where HCI Mesh comes into play. With HCI Mesh you have the ability to mount a vSAN Datastore remotely. Meaning, you have a “client” and a “server” cluster, and the client mounts the datastore of the server. Within our documentation on core.vmware.com this is demonstrated as follows:

If you look at this diagram, the top two clusters are “client” clusters and the bottom one is the “server” cluster. This basically means that the “client” clusters consume datastore capacity from the “server” cluster. The above diagram, however, resulted in a bit of confusion, as it does not show a situation where your client cluster can simultaneously be a server cluster. This is something I want to point out: you can create a true “mesh” with HCI Mesh! If you have two clusters, let’s say Cluster A and Cluster B, then you can mount the vSAN Datastore from Cluster A to Cluster B and the datastore from Cluster B to Cluster A. This is fully supported, and works great, as demonstrated in the below two screenshots. I tested this with vSphere/vSAN 8.0 OSA, but it is also supported with vSAN 7.0 U1 and up. Do note, vSAN ESA today does not support HCI Mesh just yet; hopefully it will in the near future!
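The cross-connected relationship can be modeled as a tiny data structure: each cluster is a “client” of every remote datastore it mounts, and a true mesh simply means both clusters mount each other. This is an illustrative model only (the cluster and datastore names are made up, and this is not a VMware API):

```python
# Minimal sketch of HCI Mesh mount relationships (illustrative data model,
# not a VMware API): each cluster maps to the set of remote datastores it mounts.
mounts = {
    "Cluster-A": {"vsanDatastore-B"},  # A mounts B's datastore (A is a client of B)
    "Cluster-B": {"vsanDatastore-A"},  # B mounts A's datastore (B is a client of A)
}

def is_cross_connected(a: str, b: str, mounts: dict) -> bool:
    """True when both clusters mount each other's datastore (a true mesh)."""
    ds_of = lambda cluster: "vsanDatastore-" + cluster.split("-")[-1]
    return ds_of(b) in mounts.get(a, set()) and ds_of(a) in mounts.get(b, set())

print(is_cross_connected("Cluster-A", "Cluster-B", mounts))  # True
```

In this model, each cluster is both a client and a server at the same time, which is exactly the supported cross-connect scenario described above.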


So before you decide how to configure vSAN, please look at all the capabilities provided, write down your requirements, and see what helps solve the challenges you are facing while meeting those requirements (in a supported way)!

The following VIBs on the host are missing from the image and will be removed from the host during remediation: vmware-fdm

Duncan Epping · Jan 2, 2023 ·

I’ve seen a few people confused about a message shown when upgrading ESXi. The message is: “The following VIBs on the host are missing from the image and will be removed from the host during remediation: vmware-fdm (version number + build number)”. This happens when you use vLCM (vSphere Lifecycle Manager) to upgrade from one version of ESXi to the next. The reason is simple: the vSphere HA VIB (vmware-fdm) is never included in the image.


If it is not included, how do the hosts get the VIB? vCenter Server pushes the VIB to the hosts when required (when you enable HA on a cluster, for instance). This is also the case after an upgrade: after the VIB is removed, it will simply be replaced by the latest version by vCenter Server. So no need to worry, HA will work perfectly fine after the upgrade!

Should I always use vSAN ESA, or can I go for vSAN OSA?

Duncan Epping · Dec 22, 2022 ·

I am starting to get this question more often: should I always use vSAN ESA, or are there reasons to go for vSAN OSA? The answer is simple; whether you should use ESA or OSA can only be answered by the most commonly used phrase in consultancy: it depends. What does it depend on? Your requirements and your constraints.

One thing which comes up frequently is the 25Gbps networking requirement. I’ve seen multiple people say that they want to use ESA but their environment is not even close to saturating 10Gbps today, so can they use ESA with 10Gbps? Yes, you can; we do support 10Gbps with the new ReadyNode profiles, but for ESA we typically recommend 25Gbps. Why? With ESA there’s also a certain performance expectation, which is why there’s a bandwidth requirement: it is in place to ensure that you can use the NVMe devices to their full potential.

With both ESA and OSA you can produce impressive performance results. The big difference is that ESA does this with a single type of NVMe device across the cluster, whereas OSA uses caching devices and capacity devices. ESA has also been optimized for high performance and is better at leveraging the existing host resources to achieve those higher numbers, for instance through multi-threading. What I appreciate most about ESA is that the stack is also optimized in terms of resource usage. By moving data services to the top of the stack, data processing (compression or encryption, for example) happens at the source instead of at the bottom of the stack at the destination. In other words, blocks are compressed by one host, not by two or more (depending on the selected RAID level and type). Also, data is transferred over the network compressed, which saves bandwidth.
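A quick back-of-the-envelope calculation shows why source-side compression saves network bandwidth. The sketch below is illustrative only: the 2:1 compression ratio, the 64 KiB write size, and the simple two-copy (RAID-1, FTT=1) layout are assumptions, not measured vSAN behavior.

```python
# Illustrative arithmetic: network bytes for one write with two data copies
# (RAID-1, FTT=1). The 2:1 ratio and write size are assumptions for the example.
def network_bytes(write_bytes: int, copies: int, compress_at_source: bool,
                  ratio: float = 0.5) -> int:
    """Bytes sent over the vSAN network for one write.

    When compressed at the source (ESA-style), each copy travels compressed;
    when compression happens at the destination, each copy travels full-size.
    """
    per_copy = int(write_bytes * ratio) if compress_at_source else write_bytes
    return per_copy * copies

write = 64 * 1024  # a 64 KiB write
dest_side = network_bytes(write, copies=2, compress_at_source=False)  # 131072
source_side = network_bytes(write, copies=2, compress_at_source=True)  # 65536
print(dest_side, source_side)  # source-side compression halves network traffic
```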

Back to OSA: why would you go for the Original Storage Architecture instead of ESA? Well, like I said, if you don’t have performance requirements that dictate the use of ESA. If you want to use vSAN File Services (not supported with ESA in 8.0), HCI Mesh, etc. If you want to run a hybrid configuration. If you want to use the vSAN Standard license. There are plenty of reasons to still use OSA, so please don’t assume ESA is the only option. Use what you need to achieve your desired outcome, use what fits your budget, and use what will work with your constraints and requirements.

vSAN Express Storage Architecture cluster sizes supported?

Duncan Epping · Dec 20, 2022 ·

On VMTN a question was asked about the cluster sizes supported for vSAN Express Storage Architecture (ESA). There appears to be some misinformation out there on various blogs. Let me first state that you should rely on official documentation when it comes to support statements, and not on third-party blogs. VMware has the official documentation website, and of course there’s core.vmware.com with material produced by the various tech marketing teams. This is what I would rely on for official statements and insights on how things work, and then of course there are articles on personal blogs by VMware folks. Anyway, back to the question: which cluster sizes are supported?

For vSAN ESA, VMware supports the exact same configuration when it comes to the cluster size as it supports for OSA. In other words, as small as a 2-node configuration (with a witness), as large as a 64-node configuration, and anything in between!

Now, when it comes to sizing your cluster, the same applies to ESA as it does to OSA: if you want VMs to automatically rebuild after a host failure or a long-term maintenance mode action, you will need to make sure you have capacity available in your cluster. That capacity comes in the form of storage capacity (flash) as well as host capacity. Basically, that means you need to have additional hosts available where the components can be created, and the capacity to resync the data of the impacted objects.

If you look at the diagram below, you see 6 components in the capacity leg and 7 hosts, which means that if a host fails, you still have a host available to recreate that component. Again, on this host you also still need to have capacity available to resync the data, so that the object is compliant again when it comes to data availability.
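The “one spare host plus enough free capacity” reasoning above can be sketched as a simple check. This is a deliberately simplified illustration (the host count, per-host capacity, and the 6-component policy are example numbers, and real sizing should use the official vSAN sizing tools):

```python
# Simplified N+1 sizing sketch (illustrative, not an official sizing tool):
# to rebuild after one host failure you need at least one spare host beyond
# what the storage policy requires, plus enough free capacity on the
# survivors to resync the failed host's share of the data.
def can_rebuild_after_failure(hosts: int, capacity_per_host_tb: float,
                              used_tb: float, min_hosts_for_policy: int) -> bool:
    surviving = hosts - 1
    # Still enough hosts to place all components required by the policy?
    if surviving < min_hosts_for_policy:
        return False
    # Enough total capacity on the survivors to hold the resynced data?
    return used_tb <= surviving * capacity_per_host_tb

# 7 hosts with a policy needing 6 hosts (e.g. a capacity leg of 6 components):
print(can_rebuild_after_failure(7, 10.0, 50.0, 6))  # True: one spare host
print(can_rebuild_after_failure(6, 10.0, 50.0, 6))  # False: no spare host
```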

I hope that explains first of all what is supported from a cluster size perspective, and secondly why you may want to consider adding additional hosts. This of course will depend on the requirements you have and the budget you have.


About the Author

Duncan Epping is a Chief Technologist and Distinguished Engineering Architect at Broadcom. Besides writing on Yellow-Bricks, Duncan is the co-author of the vSAN Deep Dive and the vSphere Clustering Deep Dive book series. Duncan is also the host of the Unexplored Territory Podcast.

Copyright Yellow-Bricks.com © 2025