Yellow Bricks

by Duncan Epping

vSAN File Services and Stretched Clusters!

Duncan Epping · Mar 29, 2021 · Leave a Comment

As most of you probably know, vSAN File Services is not supported on a stretched cluster with vSAN 7.0 or 7.0 U1. However, starting with vSAN 7.0 U2, we now fully support the use of vSAN File Services on a stretched cluster configuration! What changed to make this possible?

In 7.0 U2, you now have the ability to specify, during the configuration of vSAN File Services, to which site certain IP addresses belong. In other words, you can specify the “site affinity” of your file servers. This is shown in the screenshot below. Now, I do want to note that this is a soft affinity rule, meaning that if the hosts or VMs on which these file services containers are running fail, the container could be restarted in the opposite location. Again, a soft rule, not a hard rule!

Of course, that is not the end of the story. You also need to be able to specify, for each share, with which location it has affinity. Again, you can do this during configuration (or edit it afterward if desired), and this sets the affinity of the file share to a location. Or said differently, it ensures that when you connect to the file share, one of the file servers in the specified site will be used. Again, this is a soft rule, meaning that if none of the file servers are available in that site, you will still be able to use vSAN File Services, just not with the optimized data path you defined.
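To make the soft-affinity behavior a bit more concrete, here is a minimal sketch that models it. This is purely illustrative: the FileServer class and pick_file_server function are hypothetical names invented for this example, not the vSAN File Services implementation or API.

```python
# Illustrative model of the soft site-affinity rule described above.
# NOTE: FileServer and pick_file_server are hypothetical names for this sketch,
# not part of any vSAN API.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class FileServer:
    name: str
    site: str        # "Preferred" or "Secondary"
    available: bool  # False if the host/VM backing this file server is down

def pick_file_server(servers: list[FileServer], share_affinity: str) -> FileServer | None:
    """Soft rule: prefer a file server in the share's affined site,
    but fall back to any available file server if that site is down."""
    candidates = [s for s in servers if s.available]
    preferred = [s for s in candidates if s.site == share_affinity]
    if preferred:
        return preferred[0]                       # optimized data path
    return candidates[0] if candidates else None  # soft rule: still serve the share

servers = [
    FileServer("fs-01", "Preferred", available=False),  # preferred site is down
    FileServer("fs-02", "Secondary", available=True),
]
print(pick_file_server(servers, share_affinity="Preferred"))  # falls back to fs-02
```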

Hopefully, that gives a quick overview of how you can use vSAN File Services in combination with a vSAN stretched cluster. I created a video to demonstrate these new capabilities, which you can watch below.

vSAN 7.0 U2 now integrates with vSphere DRS

Duncan Epping · Mar 24, 2021 · 1 Comment

One of the features our team requested a while back was integration between DRS and vSAN. The key use case we had was for stretched clusters. Especially in scenarios where a failure has occurred, it would be useful if DRS would understand what vSAN is doing. What do I mean by that?

Today, when customers create a stretched cluster, they have two locations. Using vSAN terminology, these locations are referred to as the Preferred Fault Domain and the Secondary Fault Domain. Typically, when VMs are deployed, customers create VM-to-Host Affinity Rules which state that VMs should reside in a particular location. When these rules are created, DRS will do its best to ensure that the defined rules are adhered to. What is the problem?

Well, if you are running a stretched cluster and, let’s say, one of the sites goes down, then what happens when the failed location returns for duty is the following:

  • vSAN detects the missing components are available again
  • vSAN will start the resynchronization of the components
  • DRS runs every minute, rebalances the cluster, and moves VMs based on the DRS rules

This means that the VMs for which rules are defined will move back to their respective location, even though vSAN is potentially still resynchronizing the data. First of all, the migration will interfere with the replication traffic. Secondly, for as long as the resync has not completed, I/O will travel across the network between the two locations; this will not only interfere with resync traffic, it will also increase latency for those workloads. So, how does vSAN 7.0 U2 solve this?

Starting with vSAN 7.0 U2 and vSphere 7.0 U2, we now have DRS and vSAN communicating. DRS will verify with vSAN what the state of the environment is, and it will not migrate the VMs back until the VMs are healthy again. When the VMs are healthy and the resync has completed, you will see the rules being applied and the VMs automatically migrating back (when DRS is configured to Fully Automated, that is).
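To illustrate the decision being described, here is a minimal sketch of the “wait for resync before moving VMs back” logic. It is a conceptual model only; the VsanObjectState class and the helper functions are invented for this example and do not reflect the actual DRS/vSAN code path.

```python
# Conceptual model of DRS deferring rule-driven migrations until vSAN reports
# the VM's objects healthy again. Hypothetical names, not the real implementation.
from dataclasses import dataclass

@dataclass
class VsanObjectState:
    resync_bytes_left: int
    all_replicas_available: bool

def vm_objects_healthy(states: list[VsanObjectState]) -> bool:
    """Healthy means every object has all replicas back and no resync outstanding."""
    return all(s.all_replicas_available and s.resync_bytes_left == 0 for s in states)

def should_apply_affinity_rule(states: list[VsanObjectState], drs_fully_automated: bool) -> bool:
    """Only migrate the VM back to its affined site once resync has completed."""
    return drs_fully_automated and vm_objects_healthy(states)

resyncing = [VsanObjectState(resync_bytes_left=512 * 2**30, all_replicas_available=False)]
healthy   = [VsanObjectState(resync_bytes_left=0, all_replicas_available=True)]
print(should_apply_affinity_rule(resyncing, drs_fully_automated=True))  # False: wait
print(should_apply_affinity_rule(healthy,   drs_fully_automated=True))  # True: move back
```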

I can’t really show it with a screenshot or anything, as this is a change in the vSAN/DRS architecture, but to make sure it worked I recorded a quick demo which I published on YouTube. Make sure to watch the video!

vSAN 7.0 U2 Durability Components?

Duncan Epping · Mar 22, 2021 · Leave a Comment

Last week I published a new demo on my YouTube channel (at the bottom of this post) that discussed an enhanced feature called durability components. Some may know these as “delta components” as well. Durability components were introduced in vSAN 7.0 Update 1 and provide a mechanism to maintain the required availability for VMs while doing maintenance. Meaning that when you place a host into maintenance mode, new “durability components” are created for the components stored on that host. This then allows all new VM I/O to be committed to the existing component, as well as to the durability component.

Now, starting with vSAN 7.0 Update 2, vSAN also uses these durability components in situations where a host failure has occurred. So if a host has failed, durability components will be created to ensure we still maintain the availability level specified within the policy, as shown in the diagram above. The great thing is that if a second host fails in an FTT=1 scenario and you are able to recover the first failed host, we can still merge the data on the first failed host with the durability component! So not only are these durability components great for improving resync times, they also provide a higher level of availability to vSAN! To summarize (a small sketch modeling this flow follows the list):

  1. Host fails
  2. Durability components are created for all impacted objects
  3. New writes are committed to existing components and the new durability components
  4. Host recovers
  5. Durability components are merged with the previously failed components
  6. Durability components are deleted when resync has completed
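Here is a small, purely conceptual sketch that models the six steps above: new writes land on both the surviving replica and the durability component, and on recovery only that delta is merged back into the stale replica. The Component class and helper functions are made up for illustration; this is not how vSAN stores data internally.

```python
# Toy model of the durability-component flow listed above. Hypothetical names only.
from __future__ import annotations

class Component:
    def __init__(self, name: str):
        self.name = name
        self.blocks: dict[int, bytes] = {}
        self.online = True

def write(lba: int, data: bytes, surviving: Component, durability: Component | None):
    surviving.blocks[lba] = data
    if durability is not None:          # step 3: new writes also go to the durability component
        durability.blocks[lba] = data

def host_recovers(stale: Component, durability: Component):
    stale.online = True
    stale.blocks.update(durability.blocks)  # step 5: merge only the delta written while it was down
    durability.blocks.clear()               # step 6: durability component removed after resync

replica_a, replica_b = Component("replica-a"), Component("replica-b")
replica_b.online = False                     # step 1: host holding replica-b fails
delta = Component("durability-for-b")        # step 2: durability component created
write(0, b"new-data", replica_a, delta)      # step 3: write lands on replica-a and the delta
host_recovers(replica_b, delta)              # steps 4-6: recover, merge, clean up
print(replica_b.blocks)                      # {0: b'new-data'}
```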

I hope that helps provide a better understanding of how these durability components help improve availability and resiliency in your environment with vSAN 7.0 Update 2.

I can understand that some of you may not want to test durability components in your own environment, which is why I recorded a quick demo and published it on my YouTube channel. Check out the video below, as it also shows you how durability components are represented in the UI.

vSphere 7.0 U2 Suspend VMs to Memory for maintenance!

Duncan Epping · Mar 17, 2021 · Leave a Comment

In vSphere 7.0 U2, a new feature popped up for Lifecycle Manager. It provides the ability to specify what should happen to your workloads when you are applying updates or upgrades to your infrastructure. The new feature is only available for those environments which can use Quick Boot. Quick Boot is a different method of restarting a host: it skips the BIOS part of the reboot, which makes a big difference in the overall time it takes to complete a reboot.

When you have LCM configured, you can enable Quick Boot by editing the “Remediation Settings”. You then simply tick the “Quick Boot” tickbox, which then provides you with a few other options:

  • Do not change power state (aka vMotion the VMs)
  • Suspend to disk
  • Suspend to memory
  • Power off

I think all of these speak for themselves, and Suspend to Memory is the new option introduced in 7.0 U2. When you select this option and do maintenance via LCM, the VMs running on the host which needs to be rebooted will be suspended to memory before the reboot. Of course, they will be resumed when the hypervisor returns for duty again. This should shorten the overall maintenance window, while also avoiding the cost of migrating VMs. Having said that, I do believe that the majority of customers will want to migrate their VMs. When would you use this? Well, if you can afford a small VM/app downtime and have large memory configurations for hosts as well as workloads, as the migration of large-memory VMs, especially when they are very memory active, could take a significant amount of time.
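To give a feel for why migrating a very large, memory-active VM can take so long, here is a rough back-of-envelope sketch. The iterative pre-copy model and all of the numbers are simplifying assumptions for illustration only, not VMware's actual vMotion algorithm or measured figures.

```python
# Rough, assumption-laden estimate of how long migrating a large, busy VM could take.
# This is NOT VMware's vMotion implementation; it is a simple convergence model.

def estimate_vmotion_seconds(mem_gb: float, dirty_rate_gbps: float,
                             link_gbps: float, max_rounds: int = 10) -> float:
    """Iterative pre-copy model: each round copies what was dirtied during the previous one."""
    to_copy_gb = mem_gb
    total_s = 0.0
    for _ in range(max_rounds):
        round_s = to_copy_gb * 8 / link_gbps        # GB -> Gb, divided by link speed
        total_s += round_s
        to_copy_gb = dirty_rate_gbps / 8 * round_s  # memory dirtied during that round (GB)
        if to_copy_gb < 0.25:                       # small enough for the final switchover
            break
    return total_s

# Assumed values: a 1 TB VM dirtying 4 Gbit/s of memory over a 10 Gbit/s vMotion network
print(f"~{estimate_vmotion_seconds(1024, 4, 10) / 60:.0f} minutes")  # roughly 23 minutes
```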

I hope that helps. If you want to know where to find the config option in the UI, or if you would like to see it demonstrated, simply watch the video below!

Compute only HCI Mesh with vSAN 7.0 U2

Duncan Epping · Mar 16, 2021 · 13 Comments

I guess the title explains it all: starting with vSAN 7.0 U2, we now also support connecting a compute-only cluster to a vSAN cluster. Meaning that if you have a vSphere cluster that does not have vSAN enabled, you can now mount a remote vSAN datastore to it and leverage all the capabilities it provides!

I am sure this new capability will make many of you happy, as many customers asked for it when we launched HCI Mesh with vSAN 7.0 U1. The great thing is that there’s no need for a vSAN license on the compute-only cluster, even though we load the vSAN client on the client cluster. No, we are not using NFS; we are using the proprietary vSAN protocol for this. Another thing that may be useful to know is that we doubled the number of hosts that can be connected to a single datastore: it has gone from 64 to 128!

Last but not least, we have also extended the policy-based management framework to allow customers to specify which data services should be enabled at the datastore level. So when you select a policy, the vSAN datastores that are presented should not only be able to provide the specified RAID configuration, but should also have the data services you require enabled. Those data services are: Deduplication and Compression, Compression, and/or Encryption, as shown in the screenshot of the new policy capability below.
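As a simple way to picture that policy matching, here is a minimal sketch: a policy that requires certain data services only matches datastores that have all of them enabled. The VsanDatastore class and compatible_datastores function are hypothetical names for this example, not the actual SPBM implementation or API.

```python
# Conceptual model of matching a policy's required data services to datastores.
# Hypothetical names only; not the real policy-based management framework.
from dataclasses import dataclass, field

@dataclass
class VsanDatastore:
    name: str
    services: set[str] = field(default_factory=set)  # e.g. {"compression", "encryption"}

def compatible_datastores(datastores: list[VsanDatastore], required: set[str]) -> list[str]:
    """A datastore is compatible only if every required data service is enabled on it."""
    return [ds.name for ds in datastores if required <= ds.services]

datastores = [
    VsanDatastore("vsan-ds-01", {"dedup+compression"}),
    VsanDatastore("vsan-ds-02", {"compression", "encryption"}),
]
print(compatible_datastores(datastores, required={"encryption"}))  # ['vsan-ds-02']
```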

As mentioned, the feature itself is pretty straightforward and very easy to use. There are some things to take into consideration, of course; I wrote those down here. If you want to see it in action, make sure to check out the demo below.
