Yellow Bricks

by Duncan Epping



vSAN 8.0 U1 – Disaggregated Storage Enhancements!

Duncan Epping · Mar 16, 2023 · 5 Comments

With vSAN 8.0 U1 a lot of new features and enhancements are introduced. There are many blog posts out there describing the long list of enhancements, but in this post, I want to focus on HCI Mesh, or Disaggregated vSAN, specifically. (Also read this post by Cato!) For this feature, which in the UI is referred to as “Datastore Sharing”, there are 3 key enhancements introduced in vSAN 8.0 U1, covering both the Original Storage Architecture (OSA) and the Express Storage Architecture (ESA).

With vSAN 8.0 the initial version of ESA was launched, and it did not support the use of Datastore Sharing. Starting with vSAN 8.0 U1 though, vSAN ESA is now also capable of sharing its storage with other clusters in the environment. To be more precise, a vSAN ESA cluster can now mount the datastore of another vSAN ESA cluster. What we also support is a “compute only” cluster mounting the vSAN ESA datastore remotely. So for those planning on implementing vSAN ESA, I think that is a very welcome enhancement!
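
If you want to quickly check which of your clusters are providing a vSAN datastore and which are simply consuming one remotely, a small inventory script can help. Below is a rough pyVmomi sketch, nothing more: the vCenter hostname and credentials are placeholders, and the “vSAN enabled or not” check is just a simple heuristic I picked for illustration, not an official procedure. The idea is that a cluster which sees a vSAN datastore without having vSAN enabled itself is acting as a compute-only client, and a vSAN cluster which sees more than one vSAN datastore has a remote datastore mounted.

```python
# Rough inventory sketch using pyVmomi: list the vSAN datastores each cluster can
# see. Hostname/credentials are placeholders. Heuristic: a cluster without vSAN
# enabled that still sees a vSAN datastore is a compute-only client; a vSAN cluster
# that sees more than one vSAN datastore has a remote datastore mounted.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only, validate certificates in production
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    for cluster in view.view:
        vsan_ds = [ds.name for ds in cluster.datastore if ds.summary.type == "vsan"]
        vsan_cfg = cluster.configurationEx.vsanConfigInfo
        vsan_enabled = bool(vsan_cfg.enabled) if vsan_cfg else False
        if not vsan_enabled and vsan_ds:
            role = "compute-only client"
        elif vsan_enabled and len(vsan_ds) > 1:
            role = "vSAN cluster with remote datastore(s) mounted"
        else:
            role = "vSAN cluster (local datastore only)" if vsan_enabled else "no vSAN"
        print(f"{cluster.name}: {role}, vSAN datastores visible: {vsan_ds or 'none'}")
    view.Destroy()
finally:
    Disconnect(si)
```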

For OSA there are also two enhancements for Datastore Sharing. The first I want to discuss is cross-vCenter Server datastore sharing. This feature is especially useful for customers who have a larger estate and are managing multiple clusters via different vCenter Server instances. You now have the option to connect the vCenter Server instances from a storage point of view, and then simply select a remote datastore in a cluster managed by a different vCenter Server instance. Let me show you how this actually works in the next demo.

The second enhancement, for OSA specifically, is support for Stretched Cluster configurations. Starting with vSAN 8.0 U1 it is now possible to mount a vSAN Datastore which is stretched across locations. Your “client” cluster can be “stretched”, “standard”, or even compute-only; we support all of those combinations. On top of that, the interface enables you to specify which location should be paired with which location, or fault domain. In other words, if you look at the diagram below, I can ensure that the hosts in Site A connect via the “local” network to the part of the remote datastore in Site A. This avoids I/O traversing the intersite link, which can make a big difference in terms of latency and the bandwidth available for other I/O.
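
To make that pairing a bit more concrete before you watch the demo, here is a tiny Python sketch. The cluster, site, and host names are made up purely for illustration, and vSAN takes care of all of this for you once the pairing is configured in the UI; the point is simply that each client fault domain maps to the server-side fault domain in the same physical site, so I/O never needs to hop across the intersite link.

```python
# Illustrative only: pairing client fault domains with the server cluster's fault
# domains in the same physical site, so I/O stays local to that site. All names
# are made up; vSAN handles this once the pairing is configured.
SITE_PAIRING = {
    "ClientCluster-SiteA": "ServerCluster-SiteA",   # preferred site
    "ClientCluster-SiteB": "ServerCluster-SiteB",   # secondary site
}

CLIENT_HOSTS = {
    "esxi-client-01": "ClientCluster-SiteA",
    "esxi-client-02": "ClientCluster-SiteA",
    "esxi-client-03": "ClientCluster-SiteB",
    "esxi-client-04": "ClientCluster-SiteB",
}

def preferred_server_fault_domain(host: str) -> str:
    """Return the server-side fault domain a client host should talk to first."""
    client_fd = CLIENT_HOSTS[host]
    return SITE_PAIRING[client_fd]

for host in CLIENT_HOSTS:
    print(f"{host} -> {preferred_server_fault_domain(host)} (same site, no intersite hop)")
```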

I can imagine that the concepts are difficult to grasp without seeing the vSphere Client, so I spent some time in the lab to create a demo for you that walks you through the steps of how to configure this. In the lab I created a vSAN Stretched Cluster and a standard cluster, and I am going to mount the stretched vSAN Datastore to the hosts in the standard cluster. Enjoy!

Should I always use vSAN ESA, or can I go for vSAN OSA?

Duncan Epping · Dec 22, 2022 · 6 Comments

I am starting to get this question more often: should I always use vSAN ESA, or are there reasons to go for vSAN OSA? The answer is simple, whether you should use ESA or OSA can only be answered by the most commonly used phrase in consultancy: it depends. What does it depend on? Well, your requirements and your constraints.

One thing which comes up frequently is the 25Gbps requirement from a networking perspective. I’ve seen multiple people saying that they want to use ESA but their environment is not even close to saturating 10Gbps today, so can they use ESA with 10Gbps? No, you cannot. ESA requires 25Gbps at a minimum, and the exact bandwidth requirement depends on the ESA ReadyNode configuration you select. Why? Because with ESA comes a certain performance expectation, and the bandwidth requirement is put in place to ensure that you can use the NVMe devices to their full potential.
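
To put some rough numbers on that: the link speeds below are simple line-rate math, and the NVMe throughput is just an illustrative ballpark for a modern device, not an official vSAN figure.

```python
# Back-of-the-envelope math: theoretical line rate vs. what a single modern NVMe
# device can push. The 6.5 GB/s figure is an illustrative ballpark for a PCIe 4.0
# NVMe device, not an official vSAN number.
def gbps_to_gbytes_per_sec(gbps: float) -> float:
    return gbps / 8  # 8 bits per byte, ignoring protocol overhead

for link in (10, 25, 100):
    print(f"{link} Gbps ≈ {gbps_to_gbytes_per_sec(link):.2f} GB/s line rate")

nvme_read_gbs = 6.5  # illustrative sequential read throughput of one NVMe device
print(f"One NVMe device at ~{nvme_read_gbs} GB/s already exceeds "
      f"{gbps_to_gbytes_per_sec(10):.2f} GB/s (10 Gbps) on its own.")
```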

With both ESA and OSA you can produce impressive performance results. The big difference is that ESA does this with a single type of NVMe device across the cluster, whereas OSA uses caching devices and capacity devices. ESA has also been optimized for high performance and is better at leveraging the existing host resources to achieve those higher numbers, for instance through multi-threading. What I appreciate about ESA the most is that the stack is also optimized in terms of resource usage. By moving data services to the top of the stack, data processing (compression or encryption, for example) happens at the source instead of at the bottom of the stack on the destination. In other words, blocks are compressed by one host, and not by two or more (depending on the selected RAID level and type). Also, data is transferred over the network compressed, which saves bandwidth.
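
Here is a toy example of what that difference means in practice. This is Python with zlib, and the block contents, replica count, and compression ratio are made up; it is purely meant to illustrate the principle, not how vSAN implements it internally. Compressing once at the source means one compression operation and compressed bytes on the wire, while compressing at the bottom of the stack means every replica host does the work and the full-size block travels over the network.

```python
# Toy illustration of "compress at the source" vs "compress at each destination".
# Data, RAID layout, and compression ratio are made up; the point is simply that
# compressing once before replication means fewer compression operations and
# fewer bytes on the wire.
import zlib

block = b"some reasonably compressible guest OS block " * 100   # ~4.4 KB of sample data
replicas = 2                                                     # e.g. RAID-1 style mirroring

compressed = zlib.compress(block)

# ESA-style: compress once at the source, ship the compressed block to each replica.
source_side_ops = 1
source_side_wire_bytes = len(compressed) * replicas

# Bottom-of-stack style: ship the raw block, each replica host compresses it itself.
dest_side_ops = replicas
dest_side_wire_bytes = len(block) * replicas

print(f"compress-at-source : {source_side_ops} compression op(s), "
      f"{source_side_wire_bytes} bytes on the wire")
print(f"compress-at-dest   : {dest_side_ops} compression op(s), "
      f"{dest_side_wire_bytes} bytes on the wire")
```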

Back to OSA, why would you go for the Original Storage Architecture instead of ESA? Well, like I said, if you don’t have the performance requirements that dictate the use of ESA. If you want to use vSAN File Services (not supported with ESA in 8.0), HCI Mesh, etc. If you want to run a hybrid configuration. If you want to use the vSAN Standard license. Plenty of reasons to still use OSA, so please don’t assume ESA is the only option. Use what you need to achieve your desired outcome, use what fits your budget, and use what will work with your constraints and requirements.
