I created a vSAN Data Protection demo for VMware Explore in Las Vegas and Barcelona, and as I got some questions from people about vSAN Data Protection, I figured I would share the demo on YouTube and post it on my blog. I have written a few articles on vSAN Data Protection already, and I am noticing that customers are beginning to test it and even use it in production where possible. As mentioned before, vSAN Data Protection is only available with vSAN ESA, as it leverages the new snapshotting capabilities. The brand-new UI is introduced through the Snap Manager Appliance, so make sure to download, deploy, and configure that first.
What happened to the option “none – stretched cluster” in storage policies?
Starting with 8.0 Update 2, support for the option “None – Stretched Cluster” in your storage policy for a vSAN Stretched Cluster configuration has been removed. The reason is that it led to a lot of situations where customers had mistakenly used this option and only realized during a failure that some VMs no longer worked. The VMs stopped working because, with this policy option, all components of an object were placed within a single location, but there was no guarantee that all objects of a VM would reside in the same location. You could, for instance, end up in the situation shown below, where the VM runs in Site B with one data object stored in Site A, another data object stored in Site B, and on top of that the witness in Site B. For some customers, this unfortunately resulted in strange behavior when there was an issue with the network between Site A and Site B.
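To make the risk concrete, here is a minimal back-of-the-envelope simulation, not vSAN code, which assumes each object independently lands in one of the two sites with equal probability (a simplification purely for illustration):

```python
import random

# Minimal illustration (not vSAN code): with "None - Stretched Cluster",
# each object picked a site on its own, so nothing kept all of a VM's
# objects together. Assume each object lands in Site A or Site B with
# equal probability -- a simplification purely for illustration.
TRIALS = 100_000
OBJECTS_PER_VM = 3  # e.g. VM home namespace, a VMDK, swap

split = sum(
    1 for _ in range(TRIALS)
    if len({random.choice("AB") for _ in range(OBJECTS_PER_VM)}) > 1
)
print(f"VMs with objects split across sites: {split / TRIALS:.1%}")
# Prints roughly 75%, which shows why an inter-site network issue
# so often affected VMs that used this policy option.
```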
Hopefully that explains why this option is no longer available.
vSAN IO Trip Analyzer in 8.0 Update 3 enhanced!
Starting with vSAN 8.0 Update 3, the vSAN IO Trip Analyzer has been enhanced: it can now be enabled on multiple VMs at the same time, which is especially great when you are troubleshooting the performance of a service that consists of multiple VMs. This enhancement allows you to monitor multiple VMs simultaneously and inspect at which layer of each of the selected VMs latency occurs. Note, this enhancement works for both vSAN ESA and vSAN OSA!
I created a short demo that shows this capability:
New vCLS architecture with vSphere 8.0 Update 3
Some of you may have seen this, others may have not, but as I had a question today around vCLS retreat mode with 8.0U3, I figured I would quickly write something on the topic. Starting with vSphere 8.0 Update 3 we introduced a new architecture for vCLS, aka vSphere Cluster Services. Pre-vSphere 8.0 Update 3, the vCLS architecture was based on virtual machines running Photon OS. These VMs were primarily there to assist in enabling and disabling DRS. If something was wrong with these VMs, DRS would also be unable to function normally. In the past many of you have probably experienced situations where you had to kill and delete the vCLS VMs to restore DRS functionality; for that, VMware introduced a feature called “retreat mode”, which basically killed and deleted the VMs for you. There were some other challenges with the vCLS VMs as well, and as a result the team decided to create a new design for vCLS.
Starting with vSphere 8.0 Update 3, vCLS is now implemented as what I would call a container runtime, sometimes referred to as a Pod VM or PodCRX. In other words, when you upgrade to vSphere 8.0 Update 3 you will see your current vCLS VMs being deleted, and these new shiny vCLS VMs popping up. How do you know these VMs are created using a different mechanism? You can simply see that in the UI, as demonstrated below. See the “CRX” mention in the UI?
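If you prefer to check from a script rather than the UI, a quick pyVmomi sketch like the one below can enumerate the vCLS instances and show which host runs them and which extension manages them. The vCenter hostname and credentials are placeholders; the UI remains the easiest place to spot the CRX mention itself.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- replace with your own vCenter.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

# Walk the inventory and list the vCLS instances.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], recursive=True)
for vm in view.view:
    if vm.name.startswith("vCLS"):
        managed_by = (vm.config.managedBy.extensionKey
                      if vm.config and vm.config.managedBy else "n/a")
        host = vm.runtime.host.name if vm.runtime.host else "n/a"
        print(vm.name, host, managed_by)
view.Destroy()
Disconnect(si)
```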
So you may ask yourself, why should I even care? Honestly, you shouldn't need to. The new vCLS architecture uses fewer resources per VM, fewer vCLS VMs are deployed to begin with (two instead of three), and they are more resilient. Also, when a host with a vCLS VM running on it is placed into maintenance mode, that vCLS instance is deleted and recreated elsewhere. Considering the VMs are stateless and tiny, that is much more efficient than trying to vMotion it. Note, vMotion and Storage vMotion of these new (Embedded, as they call them) vCLS VMs aren't even supported to begin with.
Normally, vCLS retreat mode shouldn't be needed anymore, but if you do end up in a situation where you need to clean up these instances, Retreat Mode is still fully supported with 8.0 U3. You can find the Retreat Mode option in the same place as before, on your cluster object under “Configure –> vSphere Cluster Services –> General –> Edit vCLS Mode”. Simply select “Retreat Mode” and the cleanup should happen automatically. When you want the VMs to be recreated, simply go back to the same UI and select “System managed”. This should then lead to the vCLS VMs being recreated.
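The UI route above is the simple, supported path. If you want to script it, the classic way to trigger Retreat Mode has been a vCenter advanced setting keyed on the cluster's domain ID. A minimal pyVmomi sketch, assuming a cluster named Cluster-01 and placeholder vCenter credentials, could look like this:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

CLUSTER_NAME = "Cluster-01"  # placeholder -- use your own cluster name

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

# Find the cluster's managed object ID (e.g. "domain-c8").
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], recursive=True)
cluster = next(c for c in view.view if c.name == CLUSTER_NAME)
view.Destroy()

# Setting this vCenter advanced option to "false" triggers Retreat Mode;
# setting it back to "true" corresponds to "System managed".
key = f"config.vcls.clusters.{cluster._moId}.enabled"
content.setting.UpdateOptions(
    changedValue=[vim.option.OptionValue(key=key, value="false")])
Disconnect(si)
```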
I hope this helps,
Why vSAN Max aka disaggregated storage?
At VMware Explore 2024 in Las Vegas I had many customer meetings. Last year my calendar was also swamped, and one of the things we spent a lot of time on was explaining to customers where vSAN Max would fit into the picture. vSAN Max was originally positioned as a “storage only vSAN platform for petabyte scale use cases”. I guess this still somewhat applies, but a lot has changed since then.
First, the vSAN Max ReadyNode configurations have changed substantially, and you can start at a much smaller capacity scale than when the platform originally launched. We start at 20TB for an XS ReadyNode configuration, which means that with a 4-node minimum you have 80TB. That is something completely different than the petabytes we originally discussed. The other big difference is the networking requirements: depending on the capacity needs, those have also come down substantially.
Now, as said, originally the platform was positioned as a solution for customers running at petabyte scale. The reason I wanted to write a quick blog is that this is no longer the main argument customers have today for adopting vSAN Max or considering it for their environment. The actual reason is something I personally did not expect to hear: it is all about operations, and sometimes politics.
In a traditional environment, of course, depending on the size, you typically see a separation of duties. You have virtualization admins, networking admins, and storage admins. We have worked hard over the past decade to try to create these full-stack engineers, but the reality is that many companies still have these silos, and they will likely still exist 20 years from now.
This is where vSAN Max can help. The HCI model typically means that the virtualization administrator takes on the storage responsibilities when they implement vSAN, but with vSAN Max this doesn't necessarily need to be the case. As various customers mentioned last week, with vSAN Max you could have a fully separated environment that is managed by a different team. Funny how often this was brought up as a great use case for vSAN. Especially with the amount of vSAN capacity included in VCF, this makes more and more sense!
You could even have a different authentication service connected to the vCenter Server that manages your vSAN Max clusters! You could have other types of hosts, cluster sizes, best practices, naming schemes, etc. This will all be up to the team managing that particular, euuh, silo. I know, sometimes a silo is seen as something negative, but for a lot of organizations this is how they operate, and how they prefer to operate for the foreseeable future. If so, vSAN Max can cater to that use case as well!