For episode 93 I invited Mark A to discuss what low-latency workloads are all about, and what they require! Mark explains the ins and outs of why vSphere, and VCF, is the perfect platform for latency-sensitive workloads. Listen on Spotify (https://bit.ly/4bT0Lod), Apple (https://bit.ly/4kSbxiC), or via the embedded player below!
vSAN Component vote recalculation with Witness Resilience, the follow up!
I wrote about the Witness Resilience feature a few years ago and received a question on this topic today. I ran some tests and then realized I already had an article describing how it works, but as I also tested a different scenario, I figured I would write a follow-up. In this case we are specifically talking about a 2-node configuration, but this also applies to stretched clusters.
In a stretched cluster or 2-node configuration, when a data site goes down (or is placed into maintenance mode), a vote recalculation is automatically done on each object/component. This ensures that if the witness subsequently fails, the objects/VMs remain accessible. I've explained how that works here, and demonstrated it for a 2-node cluster here.
But what if the Witness fails first? That one is easy to explain: the votes will not be recalculated in this scenario, which means that if a data host then fails as well, the VMs will become inaccessible. Of course, I tested this, and the screenshots below demonstrate it.
This screenshot shows the witness as Absent, and both “data” components holding 1 vote each. This means that if we fail one of those hosts, the component will become inaccessible. Let’s do that next and then check the UI for more details.
As you can see below, the VM is now inaccessible. This is because there is no longer a quorum: 2 out of 3 votes are gone.
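The quorum rule at play here can be illustrated with a small sketch (purely illustrative, not vSAN code; component names are made up): an object remains accessible only while the components that are still alive hold a strict majority of all votes.

```python
# Illustrative quorum check: does the set of surviving components
# hold a strict majority of the total votes?
def is_accessible(votes, alive):
    """votes: dict of component name -> vote count; alive: set of live components."""
    total = sum(votes.values())
    live = sum(v for c, v in votes.items() if c in alive)
    return live * 2 > total  # strict majority required for quorum

# 2-node setup, no vote recalculation: each data component and the witness hold 1 vote.
votes = {"data-a": 1, "data-b": 1, "witness": 1}
print(is_accessible(votes, {"data-a", "data-b"}))  # witness down only: True
print(is_accessible(votes, {"data-a"}))            # witness AND one host down: False
```

With the witness absent and votes not recalculated, losing either remaining host drops the object to 1 out of 3 votes, which is exactly the inaccessible state shown in the screenshot.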
I hope that explains how this works.
vSphere HA restart times, how long does it actually take?
I had a question today based on material I wrote years ago for the Clustering Deepdive. (Read it here.) The material describes the sequence HA goes through when a failure has occurred. If you look at the sequence for the scenario where a “secondary” host has failed, for instance, it looks as follows:
- T0 – Secondary host failure.
- T3s – Primary host begins monitoring datastore heartbeats for 15 seconds.
- T10s – The secondary host is declared unreachable and the primary will ping the management network of the failed secondary host. This is a continuous ping for 5 seconds.
- T15s – If no heartbeat datastores are configured, the secondary host will be declared dead if there is no reply to the ping.
- T18s – If heartbeat datastores are configured, the secondary host will be declared dead if there’s no reply to the ping and the heartbeat file has not been updated or the lock was lost.
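The detection timeline above can be sketched as a tiny helper (illustrative only; the timings come from the list above, not from HA internals):

```python
# Sketch of the failure-declaration timeline for a failed secondary host.
# T3s: primary begins monitoring datastore heartbeats (15-second window).
# T10s: host declared unreachable; a continuous 5-second ping begins.
def declared_dead_at(heartbeat_datastores_configured):
    """Return the second (after T0) at which the secondary host is declared dead."""
    if heartbeat_datastores_configured:
        # Ping must fail AND the heartbeat file must be stale / lock lost: T18s.
        return 18
    # Without heartbeat datastores, no ping reply means dead at T15s.
    return 15

print(declared_dead_at(False))  # 15
print(declared_dead_at(True))   # 18
```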
So, depending on whether you have heartbeat datastores configured, this sequence takes either 15 or 18 seconds. Does that mean the VMs are then instantly restarted? No, because at the end of this sequence the failed secondary host has only been declared dead. The primary then needs to verify which of the potentially impacted VMs have actually failed, build a list of “to be restarted” VMs, and issue a placement request.
The placement request will either go to DRS or be handled by HA itself, depending on whether DRS is enabled and vCenter Server is available. After placement has been determined, the primary host requests the individual hosts to restart the VMs. Each host restarts its assigned VMs in batches of 32, with restart priority/order applied. This whole process can easily take 10-15 seconds (if not longer), which means that in a perfect world the restart of a VM is initiated after about 30 seconds. Note that this is when the restart is initiated; it does not mean the VM, or the services it hosts, will be available after 30 seconds. The power-on sequence can take anywhere from seconds to minutes, depending on the size of the VM and the services that need to be started during power-on.
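The batching behavior can be illustrated with a short sketch (the VM tuples and priority names are hypothetical; only the batch size of 32 comes from the text): VMs are ordered by restart priority and then handed off in batches of up to 32.

```python
# Illustrative sketch of per-host restart batching: sort by restart
# priority, then split into batches of up to 32 VMs.
def restart_batches(vms, batch_size=32):
    """vms: list of (name, priority) tuples; returns a list of batches."""
    order = {"highest": 0, "high": 1, "medium": 2, "low": 3, "lowest": 4}
    ordered = sorted(vms, key=lambda vm: order[vm[1]])  # stable sort keeps input order within a priority
    return [ordered[i:i + batch_size] for i in range(0, len(ordered), batch_size)]

# 70 VMs on one host: 40 high priority, 30 medium.
vms = [(f"vm-{i}", "high" if i < 40 else "medium") for i in range(70)]
batches = restart_batches(vms)
print(len(batches))      # 3 batches: 32, 32, and 6 VMs
print(len(batches[0]))   # 32
```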
So, although it only takes 15 to 18 seconds for vSphere HA to detect and declare a failure, there is much more to it. Hopefully this post provides a better understanding of all that is involved.
Unexplored Territory #092 – Introducing DSM 2.2 featuring Cormac Hogan!
Recently Data Services Manager 2.2 was released, so it was time to ask my friend Cormac Hogan back on the show to share what was introduced. Although it is just a “minor” release, there were some major announcements, of which the S3 Object Storage capabilities are probably the most exciting! Make sure to listen to the episode via the player below or on your favorite podcast app (Spotify, Apple, etc.).
Unexplored Territory #091 – Discussing performance with Ravi Soundararajan!
This is probably my favorite episode in a long time. Ravi is just such an enthusiastic and charismatic person to talk to, and on top of that he has a deep understanding of everything vSphere/vCenter and performance. If you want to hear more about tagging, vCenter limits, or bandwidth for vCenter, then this is the episode to listen to! What a show!