
Yellow Bricks

by Duncan Epping



vSAN ESA is using more CPU cycles than vSAN OSA?

Duncan Epping · Feb 1, 2023 · 1 Comment

Over the last couple of weeks, I’ve had conversations with customers and partners who have been running performance benchmarks against both vSAN ESA and vSAN OSA. As you can imagine, people want to compare version 8 of OSA against version 8 of ESA, and that is completely fair. What I noticed, though, is that some of those customers came back with comments about the CPU usage of vSAN ESA compared to vSAN OSA. The general comment we get is that vSAN ESA is using more CPU cycles than vSAN OSA.

When looking at it purely from a total-consumption point of view, i.e. CPU cycles consumed, it is very likely you will see vSAN ESA using more cycles than vSAN OSA. The question then typically arises why that is the case, as VMware (the vSAN team) has been claiming that vSAN ESA is much more efficient than vSAN OSA. To be fair, it is much more efficient. For instance, data services like checksumming, encryption, and compression have moved to the top of the stack, which means we no longer have to compress/encrypt data 3/4/5/6 times; it can be done once at the source, and the result is then sent over the network to the destination.

Still, that leaves the question: why is more CPU capacity used? The answer is simple: you are pushing much more IO. We’ve seen customers easily reach 4x the number of IOPS with ESA compared to OSA. Even though ESA is more efficient, if you are pushing 4x (or more) the amount of IO, remember that those additional IOs also come at a cost, and that cost is the CPU cycles needed to process them. So when you make a comparison, please compare apples to apples, and not apples to oranges.
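To make that apples-to-apples comparison concrete, the fair metric is CPU cost per IO rather than total CPU consumed. Here is a minimal sketch (plain Python with made-up example numbers, not actual benchmark data):

```python
# Normalize CPU usage by throughput: CPU cost per 1,000 IOPS.
# The figures below are invented examples, not real benchmark results.

def cpu_cost_per_kiops(cpu_util_pct: float, iops: float) -> float:
    """Return the CPU utilization consumed per 1,000 IOPS."""
    return cpu_util_pct / (iops / 1000)

# Hypothetical readings for the same workload on identical hosts.
osa_cpu, osa_iops = 30.0, 100_000   # vSAN OSA: 30% CPU at 100k IOPS
esa_cpu, esa_iops = 45.0, 400_000   # vSAN ESA: 45% CPU at 400k IOPS

print(f"OSA: {cpu_cost_per_kiops(osa_cpu, osa_iops):.3f}% CPU per 1k IOPS")
print(f"ESA: {cpu_cost_per_kiops(esa_cpu, esa_iops):.3f}% CPU per 1k IOPS")
# ESA burns more CPU in absolute terms (45% vs 30%), yet per IO it is
# considerably cheaper here (0.113% vs 0.300% per 1k IOPS).
```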

The last thing I want to add, and hopefully I can share some data in the future, is that the use of RDMA with vSAN 8 ESA seems to have a significant impact on CPU usage, as in lowering the amount of CPU required to produce the same (or better) results. So RDMA is definitely worth considering when adopting vSAN 8 ESA!

Cross connecting vSAN Datastores with HCI Mesh in vSAN 8 OSA

Duncan Epping · Jan 4, 2023 · 4 Comments

Yesterday I had a discussion internally on Slack about a configuration for a customer. The customer had multiple locations and had the potential to create various clusters within locations, or across locations. Now, as most of you know, I have been talking a lot about stretched clusters for the past decade. However, stretched clusters are not the answer to every problem. Especially not in situations where you end up with unbalanced configurations, or where you lack the ability to place a witness in a third location. Customers tend to gravitate towards stretched clusters as they provide resiliency and the pooling of all resources, even when those resources are spread across locations.

With vSAN, when you have multiple clusters, you also have multiple vSAN Datastores. Having that separation of resources is typically appreciated. However, in some cases, customers prefer the flexibility of movement with limited overhead. Sure, if you have multiple clusters you can simply Storage vMotion VMs from the source cluster to the destination cluster, but it does mean you need to move ALL the data with the VM, whereas in some cases you may not care where the data resides.

This is where HCI Mesh comes into play. With HCI Mesh you have the ability to mount a remote vSAN Datastore. Meaning, you have a “client” and a “server” cluster, and the client mounts the server’s datastore. Within our documentation on core.vmware.com this is demonstrated as follows:

[Diagram: two “client” clusters mounting the vSAN Datastore of a “server” cluster]
If you look at this diagram, the top two clusters are “client” clusters and the bottom one is the “server” cluster. This basically means that the “client” clusters consume datastore capacity from the “server” cluster. The above diagram, however, resulted in a bit of confusion, as it does not show a situation where your client cluster can simultaneously be a server cluster. This is something I want to point out: you can create a true “mesh” with HCI Mesh! If you have two clusters, let’s say Cluster A and Cluster B, then you can mount the vSAN Datastore from Cluster A to Cluster B and the datastore from Cluster B to Cluster A. This is fully supported, and works great, as demonstrated in the below two screenshots. I tested this with vSphere/vSAN 8.0 OSA, but it is also supported with vSAN 7.0 U1 and up. Do note, vSAN ESA today does not support HCI Mesh just yet; hopefully it will in the near future!

[Screenshots: the vSAN Datastore of Cluster A mounted on Cluster B, and the vSAN Datastore of Cluster B mounted on Cluster A]

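As a conceptual illustration of that topology (my own sketch in Python, not a vSAN API), each mount is simply a client-to-server relationship, and nothing stops two clusters from mounting each other:

```python
# Conceptual model of HCI Mesh mounts, NOT a vSAN API. Each entry is a
# (client, server) pair: "the client mounts the server's vSAN datastore".
mounts = {
    ("Cluster-A", "Cluster-B"),  # Cluster A mounts Cluster B's datastore
    ("Cluster-B", "Cluster-A"),  # ...and Cluster B mounts Cluster A's
}

def roles(cluster: str) -> str:
    """A cluster is a client for datastores it mounts, and a server for
    datastores that other clusters mount from it; it can be both at once."""
    is_client = any(c == cluster for c, _ in mounts)
    is_server = any(s == cluster for _, s in mounts)
    return f"{cluster}: client={is_client}, server={is_server}"

for name in ("Cluster-A", "Cluster-B"):
    print(roles(name))  # both clusters end up as client AND server
```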
So before you decide how to configure vSAN, please look at all the capabilities provided, write down your requirements, and see what helps solve the challenges you are facing while meeting those requirements (in a supported way)!

vSAN 8.0 ESA – Introducing Adaptive RAID-5

Duncan Epping · Nov 15, 2022 · 2 Comments

Starting with vSAN 8.0 ESA (Express Storage Architecture), VMware has introduced an adaptive RAID-5 mechanism. What does this mean? Essentially, vSAN deploys a particular RAID-5 configuration depending on the size of the cluster! There are two options; let’s list them out and discuss them individually.

  • RAID-5, 2+1, 3-5 hosts
  • RAID-5, 4+1, 6 hosts or more

As mentioned in the above list, depending on the cluster size you will see a particular RAID-5 configuration. Clusters of up to 5 hosts will see a 2+1 configuration when RAID-5 is selected. For those wondering, the below diagram shows what this looks like. A 2+1 configuration has a capacity overhead of 1.5x, meaning that when you store 100GB of data, it will consume 150GB of capacity.

[Diagram: RAID-5 2+1 component placement, two data components plus one parity component across three hosts]

Now, when you have a larger cluster, meaning 6 hosts or more, vSAN will deploy a 4+1 configuration. The big benefit of this is that the capacity overhead goes down from 1.5x to 1.25x; in other words, 100GB of data will consume 125GB of capacity.
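The capacity math is easy to verify: an N+1 RAID-5 stripe stores N data components plus one parity component, so consumed capacity equals data × (N+1)/N. A minimal sketch (my own illustration, not vSAN code) that also picks the layout based on cluster size:

```python
# Capacity math for vSAN ESA's adaptive RAID-5 (illustration, not vSAN code).
# An N+1 stripe stores N data components plus 1 parity component, so
# consumed capacity = data * (N + 1) / N.

def raid5_layout(hosts: int) -> tuple[int, int]:
    """Layout vSAN ESA selects by cluster size: 2+1 for 3-5 hosts,
    4+1 for 6 hosts or more."""
    if hosts < 3:
        raise ValueError("RAID-5 requires at least 3 hosts")
    return (2, 1) if hosts <= 5 else (4, 1)

def consumed_gb(data_gb: float, layout: tuple[int, int]) -> float:
    data, parity = layout
    return data_gb * (data + parity) / data

for hosts in (4, 6):
    data, parity = raid5_layout(hosts)
    print(f"{hosts} hosts -> {data}+{parity}: 100GB of data consumes "
          f"{consumed_gb(100, (data, parity)):.0f}GB")
# 4 hosts -> 2+1: 100GB of data consumes 150GB (1.5x)
# 6 hosts -> 4+1: 100GB of data consumes 125GB (1.25x)
```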

What is great about this solution is that vSAN monitors the cluster size. If you have 6 hosts and a host fails, or a host is placed into maintenance mode, etc., vSAN will automatically scale down the RAID-5 configuration from 4+1 to 2+1 after a period of 24 hours. I of course had to make sure that it actually works, so I created a quick demo that shows vSAN changing the RAID-5 configuration from 4+1 to 2+1, and then back again to 4+1 when we reintroduce a host into the cluster.

One more thing I need to point out: the adaptive RAID-5 functionality also works in a stretched cluster. So if you have a 3+3+1 stretched cluster, you will see a 2+1 RAID-5 set. If you have a 6+6+1 cluster (or more hosts in each location), then you will see a 4+1 set. Also, if you place a few hosts into maintenance mode, or hosts have failed, you will see the configuration change from 4+1 to 2+1, and the other way around when the hosts return to duty!

For more details, watch the demo, or read this excellent post by Pete Koehler on the VMware website.

How to convert a standard cluster to a stretched cluster while expanding it!

Duncan Epping · Sep 27, 2022 · Leave a Comment

On VMTN a question was asked about how you could convert a 5-node standard cluster to a stretched cluster. It is not covered in our regular documentation, probably because the process is pretty straightforward, so I figured I would write it down. When you create a stretched cluster you will need a Witness Appliance in a third location, so I would recommend deploying that Witness Appliance before doing anything else.

After you have deployed the Witness Appliance, add the additional hosts to vCenter Server. Do NOT add them to the cluster yet though! First, configure each host separately. After you have configured each host, place it into maintenance mode. Once the host is in maintenance mode, move it into the cluster, and do not take it out of maintenance mode!

Now, when all hosts are part of the cluster, you can create the stretched cluster. This process is simple: you pick the hosts that belong to each location, and then you select the witness. After the cluster has been created, you simply take the hosts out of maintenance mode and you should be good! Note, you take the hosts out of maintenance mode after the stretched cluster has been created to ensure that no rebalancing happens while you are creating the stretched cluster. This simply prevents unneeded resyncs from occurring.

Do note, all VMs will still have the same storage policy assigned, so you will need to change that policy to ensure that the vSAN objects are placed and replicated according to your requirements (RAID-1 across locations and RAID-1/5/6 within a location, for instance)!
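For those who want to script or checklist this, the ordering is the important part. Below is a minimal sketch in Python; the cluster and host names are made up, and the helper only logs the steps rather than calling any real vCenter or vSAN API:

```python
# Hypothetical sketch of the conversion order described above. The helper
# only logs each step; nothing here is a real vCenter/vSAN API call. What
# matters is the ordering, which is what prevents unneeded resyncs.

def log(msg: str) -> None:
    print(f"- {msg}")

def convert_to_stretched(cluster: str, new_hosts: list[str]) -> None:
    # 1. Deploy the Witness Appliance in the third location first.
    log("deploy Witness Appliance in the third location")

    # 2. Add each new host to vCenter (NOT to the cluster yet), configure
    #    it, put it into maintenance mode, and only then move it into the
    #    cluster. It stays in maintenance mode!
    for host in new_hosts:
        log(f"add {host} to vCenter Server (not to the cluster yet)")
        log(f"configure {host} (networking, vSAN, etc.)")
        log(f"place {host} into maintenance mode")
        log(f"move {host} into {cluster} (still in maintenance mode)")

    # 3. With all hosts in the cluster, create the stretched cluster:
    #    pick the hosts that belong to each location, select the witness.
    log(f"create stretched cluster on {cluster}: assign hosts to each "
        "site, select the witness")

    # 4. Only now take the hosts out of maintenance mode, so no
    #    rebalancing/resyncs run while the stretched cluster is formed.
    for host in new_hosts:
        log(f"take {host} out of maintenance mode")

    # 5. Update VM storage policies for the desired protection, e.g.
    #    RAID-1 across locations and RAID-1/5/6 within a location.
    log("reassign VM storage policies")

convert_to_stretched("Cluster-01", ["esxi-06", "esxi-07", "esxi-08"])
```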

Podcast episodes: vSphere 8, vSAN 8, and VMware Explore wrap-up…

Duncan Epping · Sep 5, 2022 · Leave a Comment

It was a busy week at VMware Explore last week, but we still managed to record new content discussing what was happening at the event. We spoke with folks like Kit Colbert, Chris Wolf, Dave Morera, Sazzala Reddy, and many others. We also recorded episodes to cover the vSAN 8.0 and vSphere 8.0 releases. For vSAN 8.0 we asked Pete Koehler to go over all the changes in the vSAN Express Storage Architecture. vSphere 8.0 was covered by Feidhlim O’Leary, who went into every aspect of the release, and it is a lot.

