
Yellow Bricks

by Duncan Epping



vSphere Metro Storage Cluster solutions, what is supported and what not?

Duncan Epping · Oct 7, 2011 ·

I started digging into this yesterday after I received a comment on my Metro Cluster article. I found it very challenging to get through the vSphere Metro Storage Cluster HCL details, so I decided to write an article about it which might help you as well when designing or implementing a solution like this.

First things first, here are the basic rules for a supported environment:
(Note that the below is taken from the “important support information”, which you see in the “screenshot, call out 3”.)

  • Only array-based synchronous replication is supported and asynchronous replication is not supported.
  • Storage Array types FC, iSCSI, SVD, and FCoE are supported.
  • NAS devices are not supported with vMSC configurations at the time of writing.
  • The maximum supported latency between the sites for the ESXi management networks is 10 milliseconds RTT.
    • Note that 10ms of latency for vMotion is only supported with Enterprise Plus licenses (Metro vMotion).
  • The maximum supported latency for synchronous storage replication is 5 milliseconds RTT (or higher depending on the type of storage used, please read more here.)
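
To make these limits a little more tangible, here is a minimal sketch (plain Python, nothing vSphere-specific) that checks measured round-trip times against the thresholds listed above. The threshold values come straight from the list; the function name and the measured values are hypothetical.

    # Illustrative only: the thresholds come from the vMSC support rules above,
    # the measured RTT values are hypothetical placeholders.

    VMOTION_MAX_RTT_MS = 10.0          # with Enterprise Plus (Metro vMotion)
    VMOTION_MAX_RTT_MS_STANDARD = 5.0  # without Metro vMotion
    SYNC_REPLICATION_MAX_RTT_MS = 5.0  # may be higher depending on the storage used

    def check_vmsc_limits(network_rtt_ms, storage_rtt_ms, enterprise_plus=True):
        """Return a warning for every vMSC latency limit that is exceeded."""
        warnings = []
        vmotion_limit = VMOTION_MAX_RTT_MS if enterprise_plus else VMOTION_MAX_RTT_MS_STANDARD
        if network_rtt_ms > vmotion_limit:
            warnings.append("inter-site network RTT of %.1f ms exceeds the %.1f ms vMotion limit"
                            % (network_rtt_ms, vmotion_limit))
        if storage_rtt_ms > SYNC_REPLICATION_MAX_RTT_MS:
            warnings.append("replication RTT of %.1f ms exceeds the %.1f ms synchronous replication limit"
                            % (storage_rtt_ms, SYNC_REPLICATION_MAX_RTT_MS))
        return warnings

    print(check_vmsc_limits(network_rtt_ms=7.2, storage_rtt_ms=4.1))                         # [] -> within limits
    print(check_vmsc_limits(network_rtt_ms=7.2, storage_rtt_ms=4.1, enterprise_plus=False))  # vMotion warning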

How do I know if the array / solution I am looking at is supported, and what are the constraints / limitations, you might ask yourself? This is the path you should walk to find out:

  • Go to : http://www.vmware.com/resources/compatibility/search.php?deviceCategory=san (See screenshot, call out 1)
  • In the “Array Test Configuration” section select the appropriate configuration type like for instance “FC Metro Cluster Storage” (See screenshot, call out 2)
    (note that there’s no other category at the time of writing)
  • Hit the “Update and View Results” button
  • This will result in a list of supported configurations for FC based metro cluster solutions, currently only EMC VPLEX is supported
  • Click the name of the Model (in this case VPLEX) and note all the details listed
  • Unfold the “FC Metro Cluster Storage” solution for the footnotes as they will provide additional information on what is supported and what is not.
  • In the case of our example, VPLEX, it says "Only Non-uniform host access configuration is supported", but what does this mean?
    • Go back to the Search Results and click the “Click here to Read Important Support Information” link (See screenshot, call out 3)
    • Halfway down it will provide details for "vSphere Metro Cluster Storage (vMSC) in vSphere 5.0"
    • It states that "Non-uniform" means ESXi hosts are only connected to the storage node(s) in the same site; paths presented to ESXi hosts from storage nodes are limited to the local site.
  • Note that in this case not only is "non-uniform" a requirement, but you will also need to adhere to the latency and replication type requirements as listed above.
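
If you want to keep a record of what you find on the HCL, something as simple as the sketch below (plain Python, purely illustrative) works; it captures the VPLEX example from the steps above, with the constraints taken from the footnotes and the support information page.

    # A simple record of the HCL findings from the walk-through above
    # (EMC VPLEX under "FC Metro Cluster Storage"); not an official data format.

    vmsc_hcl_findings = {
        "EMC VPLEX": {
            "test_configuration": "FC Metro Cluster Storage",
            "host_access": "non-uniform only",   # hosts only see storage nodes in their own site
            "replication": "array-based synchronous",
            "max_network_rtt_ms": 10,            # requires Enterprise Plus (Metro vMotion)
            "max_replication_rtt_ms": 5,         # or higher, depending on the storage used
        },
    }

    for array, constraints in vmsc_hcl_findings.items():
        print(array)
        for key, value in constraints.items():
            print("  %s: %s" % (key, value))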

Yes, I realize this is not a perfect way of navigating through the HCL, and I have already reached out to the people responsible for it.

vSphere 5.0 HA and metro / stretched cluster solutions

Duncan Epping · Oct 5, 2011 ·

I had a discussion via email about metro clusters and HA last week and it made me realize that HA’s new architecture (as part of vSphere 5.0) might be confusing to some. I started re-reading the article which I wrote a while back about HA and metro / stretched cluster configurations and most of it actually still applies. Before you read this article I suggest reading the HA Deepdive page, as I am going to assume you understand some of the basics. I also want to point you to this section in our HCL which lists the certified configurations for vMSC (vSphere Metro Storage Cluster). You can select the type of storage, like for instance “FC Metro Cluster Storage”, in the “Array Test Configuration” section.

In this article I will take a single scenario and explain the different types of failures and how HA handles them under the hood. I guess the most important part in these scenarios is why HA did or did not respond to a failure.

Before I explain the scenario I want to briefly explain the concept of a metro / stretched cluster, which can be carved up into two different types of solutions. The first solution is where a synchronous copy of your datastore is available on the other site; this mirror copy will be read-only. In other words, there is a read-write copy in Datacenter-A and a read-only copy in Datacenter-B. This means that your VMs in Datacenter-B located on this datastore will do I/O on Datacenter-A, since the read-write copy of the datastore is in Datacenter-A. The second solution is what EMC calls “write anywhere”. In this case VMs always write locally. The key point here is that each of the LUNs / datastores has a “preferred site” defined, which is also sometimes referred to as “site bias”. In other words, if anything happens to the link in between, then the storage system on the preferred site for a given datastore will be the only one left that has read-write access to it. This is of course to avoid any data corruption in the case of a failure scenario.
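
To make the “preferred site” / “site bias” idea a bit more concrete, below is a minimal conceptual sketch in plain Python. It is not any vendor’s actual logic, and the datastore names are made up; it just shows which site keeps read-write access to a datastore when the inter-site link drops.

    # Conceptual model of "site bias" / preferred site behaviour.
    # Not any storage vendor's actual logic; datastore names are made up.

    datastores = {
        "datastore-01": {"preferred_site": "Datacenter-A"},
        "datastore-02": {"preferred_site": "Datacenter-B"},
    }

    def writable_sites(datastore, link_up):
        """Return the site(s) where the datastore can be written to."""
        if link_up:
            # With the link up, writes are possible from both sites: either directly
            # ("write anywhere") or forwarded to the site holding the read-write copy.
            return {"Datacenter-A", "Datacenter-B"}
        # With the link down, only the preferred site keeps read-write access,
        # which prevents both sides from writing independently (data corruption).
        return {datastores[datastore]["preferred_site"]}

    print(writable_sites("datastore-01", link_up=False))  # {'Datacenter-A'}
    print(writable_sites("datastore-02", link_up=False))  # {'Datacenter-B'}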

In this article we will use the following scenario:

  • 2 sites
  • 18 hosts
  • 50 km in between sites
  • Preferred (aka “Should”) VM-Host affinity rule to create “Datacenter Affinity – I/O Locality”
  • Designated heartbeat datastores
    (read this article for more details on heartbeat datastores)
  • Synchronous mirrored datastores
    (Please note that I have not depicted the “mirror” copy just to simplify the diagram)

This is what it will look like:

What do you see in this diagram? Each site will have 9 hosts. The HA master is located in Datacenter-A. Each host will use the designated “heartbeat datastore” in each of the datacenters; note that I only drew the line for the lower left ESXi host just to simplify the diagram.

There are many failures which can occur, but HA will be unaware of many of them. I will not discuss those, as they are explained in depth in the storage vendor’s documentation. I will however discuss the following “common” failures:

  • Host failure in Datacenter-A
  • Storage Failure in Datacenter-A
  • Loss of Datacenter-A
  • Datacenter Partition
  • Storage Partition

Host failure in Datacenter-A

When a host fails in Datacenter-A, this is detected by the HA master node as network heartbeats from it are no longer received. When the master has detected that network heartbeats are missing, it will start monitoring for datastore heartbeats. As the host has failed, there will be no datastore heartbeats issued. During this time a third liveness check will be done, which is pinging the management address of the failed host. If all of these liveness checks are unsuccessful, the master will declare the host dead and will attempt to restart all the protected virtual machines that were running on the host before the master lost contact with it. The rules defined on a cluster level are “preferred rules” and as such the virtual machine can be restarted on the other site. If DRS is enabled, then it will attempt to correct any sub-optimal placements HA made during the restart of your virtual machines. This same scenario also applies to a situation where all hosts fail in one site without storage being affected.
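
The decision sequence described above can be summarized in a small sketch. This is an illustrative model of the order of the liveness checks, not the actual HA (FDM) agent implementation, and the function is a hypothetical stand-in.

    # Conceptual model of the master's liveness checks for a slave host.
    # The check order follows the description above; this is not real FDM code.

    def host_declared_dead(network_heartbeat_ok, datastore_heartbeat_ok, ping_ok):
        """A host is declared dead only when all three liveness checks fail."""
        if network_heartbeat_ok:
            return False   # host is healthy, nothing to do
        if datastore_heartbeat_ok:
            return False   # host is isolated or partitioned, not dead
        if ping_ok:
            return False   # management address still responds
        return True        # declare dead and restart its protected VMs elsewhere

    # A failed host: no network heartbeat, no datastore heartbeat, no ping response
    if host_declared_dead(False, False, False):
        print("Master restarts the protected VMs that were running on this host")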

Storage Failure in Datacenter-A

In this scenario only the storage system in Datacenter-A fails. This failure does not result in downtime for your VMs. What will happen? Simply said, the mirror (read-only) copy of your datastore will become read-write and be presented with the same identifier, and as such the hosts will be able to write to these volumes without the need to resignature. (This sounds very simple of course, but do note that I am describing it at an extremely high level, and in most solutions manual intervention is required to indicate a failure has occurred.) In most cases however, from a VM’s perspective this happens seamlessly. It should be noted that all I/O will now go across your link to the other site. Note that HA is not aware of this failure. Although the storage heartbeat might be lost for a second, HA will not take action, as an HA master agent only checks for the storage heartbeat when the network heartbeat has not been received for three seconds.

Loss of Datacenter-A

This is basically a combination of the first and the second failure we’ve described. In this scenario, the hosts in Datacenter-B will lose contact with the master and elect a new one. The new master will access the per-datastore files HA uses to record the set of protected VMs, and so determine the set of HA protected VMs. The master will then attempt to restart any VMs that are not already running on its host and the other hosts in Datacenter-B. At the same time, the master will do the liveness checks noted above, and after 50 seconds, report the hosts of Datacenter-A as dead. From a VM’s perspective the storage fail-over could occur seamlessly. The mirror copy of the datastore is promoted to read-write, and the hosts in Datacenter-B will be able to access the datastores which were local to Datacenter-A.

Datacenter Partition

This is where most people feel things will become tricky. What happens if your link between the two datacenters fails? Yes, I realize that the chances of this happening are slim as you would typically have redundancy on this layer, but I do think it is an interesting one to explain. The main thing to realize here is that with these types of failures VM-Host affinity, or should I call it VM-Datacenter affinity, suddenly becomes very important, but we will come back to that in a second. Let’s explain the scenario first.

In this case a distinction could be made between the two types of metro cluster briefly explained before. There is a solution, which EMC likes to call “write anywhere”, which basically presents a virtual datastore across datacenters and allows writes on both sites. On the other hand there is the traditional stretched cluster solution where there is only one site actively handling I/O and a “passive” site which will be used in case a fail-over needs to occur. In the case of “write anywhere”, a so-called site bias or preference is defined per datastore. Both scenarios however are similar in that, in the case of a failure, a given datastore will only be accessible on one site.

If the link between the sites should fail, the datastore would become active on just one of the sites. What would happen to a VM that is running in Datacenter-B but has its files stored on a datastore which was configured with site bias for Datacenter-A?

In the case where the VM is running in Datacenter-B, the VM would have its storage “yanked” out from underneath it. The VM would more than likely keep running and keep retrying the I/O. However, as the link between the sites has been broken, HA in Datacenter-A will try to restart the workload. Why is that?

  • The network heartbeat is missing because the link dropped
  • The datastore heartbeat is missing because the link dropped and the datastore becomes inaccessible from Datacenter-B
  • A ping to the management address of the host fails because the link is missing
  • The master for Datacenter-A knows the VM was powered on before the failure, and since it can’t communicate with the VM’s host in Datacenter-B after the failure, it will attempt to restart the VM.

What happens in Datacenter-B? In Datacenter-B a master is elected. This master will determine the VMs that need to be protected. Next it will attempt to restart all those that it knows are not already running. Any VMs biased to this site that are not already running will be powered on. However, any VMs biased to the other site won’t be, as the datastore is inaccessible. HA will report a restart failure for the latter, since it does not know that the VMs are (still) running in the other site.
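
To summarize which VMs each partition’s master will try to restart, here is an illustrative model in plain Python. The VM names, site names and datastore bias are made up, and the logic is a simplification of what was described above, not actual HA code.

    # Illustrative model of which VMs each partition's master will try to restart.
    # VM/site names are made up; this is a simplification, not actual HA logic.

    vms = [
        {"name": "vm-01", "running_in": "Datacenter-B", "datastore_bias": "Datacenter-A"},
        {"name": "vm-02", "running_in": "Datacenter-B", "datastore_bias": "Datacenter-B"},
    ]

    def restart_attempts(site, vms):
        """Expected outcome per VM for the master elected in the given site."""
        outcome = {}
        for vm in vms:
            still_visible = vm["running_in"] == site       # master can see it running locally
            datastore_here = vm["datastore_bias"] == site  # datastore is read-write in this site
            if still_visible:
                outcome[vm["name"]] = "already running, no action"
            elif datastore_here:
                outcome[vm["name"]] = "restart succeeds (potential split brain)"
            else:
                outcome[vm["name"]] = "restart attempted, fails: datastore inaccessible"
        return outcome

    print("Datacenter-A:", restart_attempts("Datacenter-A", vms))
    print("Datacenter-B:", restart_attempts("Datacenter-B", vms))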

Now you might wonder what will happen if the link returns. This is the classic “VM split brain” scenario. For a short period of time you will have two active copies of the VM on your network, both with the same MAC address. However, only one copy will have access to the VM’s files, and HA will recognize this. As soon as this is detected, the VM copy that has no access to the VMDK will be powered off.

I hope all of you understand why it is important to know what the preferential site is for your datastores, as it can and probably will impact your up-time. Also note that although we defined VM-Host affinity rules, these are preferential / “should” rules and can be violated by both DRS and HA.

Storage Partition

This is the final scenario. In this scenario only the storage connection between Datacenter-A and Datacenter-B fails. What happens in this case? This scenario is very straightforward: as the HA master is still receiving network heartbeats, it will unfortunately currently not take any action. This is very important to realize. Also, HA is not aware of the VM-Host affinity rules; HA will restart virtual machines wherever it feels it should. DRS however should move the virtual machines to the correct location based on these rules. That is, if and when they are correctly defined of course!

Summarizing

Stretched Clusters are in my opinion great solutions to increase resiliency in your environment. There is however always a lot of confusion around failure scenarios and the different types of responses from both the vSphere layer and the storage layer. In this article I have tried to explain how vSphere HA responds to certain failures in a stretched / metro cluster environment. I hope this will help everyone get a better understanding of vSphere HA. I also fully realize that things can be improved by a tighter integration between HA and your storage systems; for now all I can say is that this is being worked on. I want to finish with a quick summary of some vSphere 5.0 HA design considerations for stretched cluster environments:

  • das.isolationaddress per Datacenter!
    Having multiple isolation addresses will help your hosts understand whether they are isolated or whether the master has been isolated in the case of a network failure (see the settings sketch after this list).
  • Designated heartbeat datastores per Datacenter!
    Each site will need a designated heartbeat datastore to ensure each site can, at a minimum, update the heartbeat region on its site-local storage.

    • If there are multiple storage systems on each site it is recommended to increase the number of heartbeat datastores to four, two for each site.
  • Define VM-Host affinity rules, as they can lower the impact during a failure and can help keep I/O local
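
For reference, the first two recommendations roughly translate into HA advanced settings along the lines of the sketch below. The IP addresses are placeholders, and you should verify the exact option names and values for your environment and build.

    # Hedged example: HA advanced settings that roughly map to the recommendations above.
    # The IP addresses are placeholders; verify the exact option names/values for your build.

    ha_advanced_settings = {
        # One isolation address per datacenter so a host can tell a local network
        # failure apart from a full site failure.
        "das.isolationaddress0": "192.168.1.1",  # address reachable in Datacenter-A
        "das.isolationaddress1": "192.168.2.1",  # address reachable in Datacenter-B
        # Raise the number of heartbeat datastores from the default of 2 to 4
        # (two per site) when each site has multiple storage systems.
        "das.heartbeatDsPerHost": "4",
    }

    for option, value in ha_advanced_settings.items():
        print("%s = %s" % (option, value))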

Thanks for taking the time to read this far, and don’t hesitate to leave a comment if you have any questions or feedback / remarks.

<edit> completely coincidentally Chad Sakac posted an article about Stretched Clusters and the new HCL Category. Read it!</edit>

 

** Disclaimer: This article contains references to the words master and/or slave. I recognize these as exclusionary words. The words are used in this article for consistency because it’s currently the words that appear in the software, in the UI, and in the log files. When the software is updated to remove the words, this article will be updated to be in alignment. **

vSphere 5 – Metro vMotion

Duncan Epping · Aug 3, 2011 ·

I received a question last week about higher latency thresholds for vMotion… A rumor was floating around that vMotion would support RTT latency of up to 10 milliseconds instead of 5 (RTT = Round Trip Time). Well, this is partially true. With vSphere 5.0 Enterprise Plus this is true; with any of the versions below Enterprise Plus the supported limit is 5 milliseconds RTT. Is there a technical reason for this?

There’s a new component that is part of vMotion which is only enabled with Enterprise Plus, and that component is what we call ‘Metro vMotion’. This feature enables you to safely vMotion a virtual machine across a link with up to 10 milliseconds RTT. The technique used is common practice in networking and is described in a bit more depth here.

In the case of vMotion the standard socket buffer size is around 0.5MB. Assuming a 1GbE network (or 125MBps), the bandwidth-delay product dictates that we could support roughly 5ms of RTT delay without a noticeable bandwidth impact. With the “Metro vMotion” feature, we’ll dynamically resize the socket buffers based on the observed RTT over the vMotion network. So, if you have 10ms delay, the socket buffers will be resized to 1.25MB, allowing full 125MBps throughput. Without “Metro vMotion”, over the same 10ms link, you would get around 50MBps throughput.
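
The arithmetic behind this is just the bandwidth-delay product; the short sketch below redoes the numbers from the paragraph above.

    # Bandwidth-delay product math from the paragraph above (1GbE ~ 125 MB/s).

    LINK_BANDWIDTH_MB_S = 125.0  # 1 Gbps expressed in megabytes per second

    def required_buffer_mb(rtt_ms):
        """Socket buffer size needed to keep the link full at a given RTT."""
        return LINK_BANDWIDTH_MB_S * (rtt_ms / 1000.0)

    def achievable_throughput_mb_s(buffer_mb, rtt_ms):
        """Throughput limit imposed by a fixed socket buffer at a given RTT."""
        return min(LINK_BANDWIDTH_MB_S, buffer_mb / (rtt_ms / 1000.0))

    print(required_buffer_mb(5))                # ~0.625 MB, close to the 0.5 MB default buffer
    print(required_buffer_mb(10))               # 1.25 MB, what Metro vMotion resizes to
    print(achievable_throughput_mb_s(0.5, 10))  # ~50 MB/s without Metro vMotion
    print(achievable_throughput_mb_s(1.25, 10)) # 125 MB/s with Metro vMotion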

Is that cool or what?
