
Yellow Bricks

by Duncan Epping


srm

Demo time – vCloud Director 5.1 disaster recovery demo

Duncan Epping · Aug 30, 2012 ·

When I was playing with the new vCloud Director 5.1 and Site Recovery Manager 5.1, I figured I would record a demo of the DR solution that Chris Colotti and I developed. The demo is fairly straightforward and hopefully helps you in the process of building a resilient cloud infrastructure. In this demo I have included:

  • vSphere 5.1
    • vSphere Replication
  • vCloud Director 5.1
  • Site Recovery Manager 5.1

Site Recovery Manager survey… please help us out!

Duncan Epping · Jul 27, 2012 ·

I just received an email from the Site Recovery Manager Product Management team. They created a new survey, and I was hoping each of you who is using, or will soon be purchasing, SRM could take the time to complete it. These types of surveys are very useful for Product Management when it comes to setting priorities for new features, identifying gaps, etc. Thanks!

We are conducting a survey about VMware vCenter Site Recovery Manager (SRM) to learn more about how people use our products. The survey will help us identify where we can improve the product to meet your needs and we would really appreciate getting your feedback.

The link to the survey is below; it typically takes less than 10 minutes to complete. http://www.surveymethods.com/EndUser.aspx?ECC8A4BDEDA6B9BAE7

Thanks!

Forced recovery option grayed out with Site Recovery Manager 5.0.1

Duncan Epping · Jun 22, 2012 ·

I was playing with Site Recovery Manager (SRM) 5.0.1 today and I wanted to trigger a fail-over. As I just wanted a quick test, I figured I would use the “forced recovery” option. This option allows you to fail over without SRM trying to sync the storage layer. In a normal situation I would probably try to sync my storage, but as I knew the other site was dead and I just wanted to test it quickly, I figured I would just tick it and get the recovery plan going. Unfortunately the option was grayed out.

You can enable this fairly easily though:

  1. Right-click your site in the left pane
  2. Click “advanced settings”
  3. Click “Recovery”
  4. Select the “recovery.forcedFailover” setting

Now when you run your recovery plan it will not try to power off/shut down VMs or sync the storage. Nice, right?

Another option that I spotted, and which many of you might need, is “storageProvider.hostRescanRepeatCnt”. In the past I often had to rescan my storage system at least twice before LUNs would appear; that is where this setting comes in handy, as it will do that for you. There are some more nice new SRM 5.0.1 features to be found in this article by Ken Werneburg, so make sure to read it.
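For reference, the rescan that this setting repeats for you can also be done by hand from the ESXi shell on the recovery hosts. A minimal sketch of that manual workaround (not SRM’s own mechanism) would be:

  # Rescan all storage adapters (HBAs) for newly presented devices/LUNs
  esxcli storage core adapter rescan --all
  # Scan the discovered devices for new VMFS volumes
  esxcli storage filesystem rescan

The “storageProvider.hostRescanRepeatCnt” setting simply has SRM repeat its host rescan that many times during a recovery, which saves you from running the above again when LUNs don’t show up on the first pass.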

Stretched Clusters and Site Recovery Manager

Duncan Epping · Mar 23, 2012 ·

My colleague Ken Werneburg, also known as “@vmKen“, just published a new white paper. (Follow him if you aren’t yet!) This white paper covers both SRM and stretched cluster solutions and explains the advantages and disadvantages of each. In my opinion it provides a great overview of when a stretched cluster should be implemented and when SRM makes more sense. Various goals and concepts are discussed, and I think this is a must-read for everyone exploring a stretched cluster implementation or SRM.

http://www.vmware.com/resources/techresources/10262

This paper is intended to clarify concepts involved with choosing solutions for vSphere site availability, and to help understand the use cases for availability solutions for the virtualized infrastructure. Specific guidance is given around the intended use of DR solutions like VMware vCenter Site Recovery Manager and contrasted with the intended use of geographically stretched clusters spanning multiple datacenters. While both solutions excel at their primary use case, their strengths lie in different areas which are explored within.

DR of View persistent linked clone desktops…

Duncan Epping · Mar 15, 2012 ·

I know some of you have been waiting for this, so I wanted to share some early results. I was in the UK last week and we managed to get an environment configured using persistent linked-clone virtual desktops with View. We also managed to fail over and fail back desktops between two datacenters. The concept is really similar to the vCloud Director DR concept.

In this scenario Site Recovery Manager will be leveraged to fail over all View management components. Each site requires a management vCenter Server and an SRM server, which aligns with standard SRM design concepts. Since it is difficult to use SRM for View persistent desktops, there is no requirement to have an SRM environment connecting to the View desktop cluster’s vCenter Server. In order to facilitate a fail-over of the View desktops, a simple mount of the volume is done. This could be done using ‘esxcfg-volume -m’ for VMFS, or by using a DNS CNAME and mounting the NFS share after pointing the alias to the secondary NAS server.

What would the architecture look like? This is an oversimplified architecture, of course … but I just want to get the message across:

What would the steps be?

  1. Fail-over View management environment using SRM
  2. Validate all View management virtual machines are powered on
  3. Using your storage management utility break replication for the datastores connected to the View Desktop Cluster and make the datastores read/write (if required by storage platform)
  4. Mask the datastores to the recovery site (if required by storage platform)
  5. Using ESXi command-line tools, mount the volumes of the View Desktop Cluster on each host of the cluster (see the sketch after this list)
    • esxcfg-volume -m <volume ID>
      or
    • point the DNS CNAME to the secondary NAS server and mount the NFS datastores
  6. Validate all volumes are available and visible in vCenter, if not rescan/refresh the storage
  7. Take the hosts out of maintenance mode for the View Desktop Cluster (or add the hosts to your cluster, depending on the chosen strategy)
  8. In our tests the virtual desktops were automatically powered on by vSphere HA. vSphere HA is aware of the situation before the fail-over and will power on the virtual machines according to the last known state
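For step 5, this is roughly what the mount looks like from the ESXi shell. A minimal sketch, covering both the VMFS and the NFS approach; the CNAME, share path and datastore name are placeholders for illustration, not taken from our environment:

  # List VMFS volumes that are detected as copies of a replicated volume
  esxcfg-volume -l
  # Mount the copy without resignaturing, keeping the original datastore identity
  # (use -M instead of -m if the mount should persist across reboots)
  esxcfg-volume -m <volume ID>

  # NFS alternative: after pointing the DNS CNAME to the secondary NAS server,
  # mount the share under the same datastore name (placeholder names below)
  esxcli storage nfs add --host view-nas.example.local --share /vol/view_desktops --volume-name view_desktops

Run this on every host in the View Desktop Cluster, then rescan/refresh storage in vCenter as described in step 6.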

These steps have been validated this week and we managed to successfully fail over our desktops and fail them back. Keep in mind that we only ran these tests two or three times, so don’t consider this article to be a support statement. We used persistent linked clones as that was the request we had at that point, but we are confident this will work for other scenarios as well. We will extend our testing to include them.

Cool right!?
