
Yellow Bricks

by Duncan Epping


BC-DR

data copy management / converged data management / secondary storage

Duncan Epping · Dec 3, 2015 ·

At the Italian VMUG I was part of the “Expert panel” at the end of the event. One of the questions was about innovation in the world of IT: what should be next? I knew immediately what I was going to answer: backup/recovery >> data copy management. My key reason is that we haven’t seen much innovation in this space.

And yes, before some of my community friends go nuts and point at Veeam and some of the great stuff they have introduced over the last 10 years: I am talking more broadly here. Many of my customers are still using the same backup solution they used 10-15 years ago. Yes, it is probably a different version, but all the same concepts apply. Maybe tapes have been replaced by virtual tape libraries stored on a disk system somewhere, but that is about it. The world of backup/recovery hasn’t really evolved.

Over the last few years though we’ve been seeing a shift in the industry. This shift started with companies like Veeam, continued with companies like Actifio, and is now accelerated by companies like Cohesity and Rubrik. What is different about what these guys offer versus the more traditional backup solution? Well, all of these are more than backup solutions; they don’t focus on a single use case. They “simply” took a step back and looked at what kinds of solutions are using your data today, who is using it, how, and of course what for. On top of that, where the data is stored is also a critical part of the puzzle.

In my mind Rubrik and Cohesity are leading the pack when it comes to this new wave of solutions; they’ve developed a solution which is a convergence of different products (backup / scale-out storage / analytics / etc.). I used “convergence” on purpose, as this is what it is to me: “converged data (copy) management”. Although not all use cases may have reached their full potential yet, the vision is pretty clear, and multiple layers have already converged, even if we would just consider backup and scale-out storage. I say “pretty clear” as the various startups have taken different messaging approaches. This became obvious during the last Storage Field Day where Cohesity presented: a different story than, for instance, Rubrik had during Virtualization Field Day. Just as an example, Rubrik typically leads with data protection and management, whereas Cohesity’s messaging appears to be more around being a “secondary storage platform”. In the case of Cohesity this led to discussions (during SFD) around what secondary storage is, how you get data on the platform, and finally what you can do with it.

To me (and the folks at these startups may have completely different ideas around this) there are a couple of use cases which stand out for a converged data management platform, use cases which I would expect to be the first targets, and I will explain why in a second.

  1. Backup and Recovery (long retention capabilities)
  2. Disaster Recovery using point in time snapshots/replication (relatively short retention capabilities and low RPO)

Why are these the two use cases to go after first? Well, they are the easiest way to get data into your system and make your system sticky. It is also the market where innovation is needed. On top of that, you need to have the data in your system before any of the other use cases start to make sense, like data analytics, creating clones for test/dev purposes, or spinning up DR instances, whether that is in your remote site or somewhere in the cloud.

The first use case (backup and recovery) is something which all of them are targeting; the second one not so much at this point. In my opinion that is a shame, as it could definitely be very compelling for customers to have these two data availability concepts combined. Especially when some form of integration with an orchestration layer can be included (think Site Recovery Manager here) and protection of workloads is enabled through policy. Policy in this case allows you to specify an SLA for data recovery in the form of recovery point, recovery time and retention. And then, when needed, you as a customer have the choice of how you want to make your data available again: VM fail-over, VM recovery, live/instant recovery, file-granular or application/database object-level recovery, and so on. Not just that, from that point on you should be capable of using your data for the other use cases I mentioned earlier, like analytics and test/dev copies.
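To make the policy idea a bit more concrete, here is a toy sketch of what such an SLA-driven protection policy could look like. All names and numbers are hypothetical, invented for illustration; no product exposes this exact model.

```python
from dataclasses import dataclass

@dataclass
class ProtectionPolicy:
    # Illustrative model only, not an actual product API.
    rpo_minutes: int      # recovery point: max acceptable data loss window
    rto_minutes: int      # recovery time: max acceptable time to recover
    retention_days: int   # how long recovery points are kept around

    def satisfied_by(self, snapshot_interval_minutes: int,
                     estimated_restart_minutes: int) -> bool:
        """Check whether a given snapshot schedule and recovery
        mechanism can honor this policy's RPO and RTO."""
        return (snapshot_interval_minutes <= self.rpo_minutes
                and estimated_restart_minutes <= self.rto_minutes)

# A hypothetical "gold" tier: lose at most 15 minutes of data,
# be back online within 30 minutes, keep points for 30 days.
gold = ProtectionPolicy(rpo_minutes=15, rto_minutes=30, retention_days=30)
print(gold.satisfied_by(snapshot_interval_minutes=10,
                        estimated_restart_minutes=20))  # True
print(gold.satisfied_by(snapshot_interval_minutes=60,
                        estimated_restart_minutes=20))  # False: RPO missed
```

The point of expressing it this way is that the platform, not the admin, gets to pick the mechanism (replication, snapshot, instant recovery) that satisfies the stated SLA.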

We aren’t there yet, or better said, we are far from there, but I do feel this is where we are heading, and some are closing in faster than others. I can’t wait for all of this to materialize so we can start making those next steps and see what kind of new use cases become possible on converged data management platforms.

Stretched Clusters: Disable failover of specific VMs during full site failure

Duncan Epping · Oct 21, 2015 ·

Last week at VMworld when presenting on Virtual SAN Stretched Clusters someone asked me if it was possible to “disable the fail-over of VMs during a full site failure while allowing a restart during a host failure”. I thought about it and said “no, that is not possible today”. Yes you can “disable HA restarts” on a per VM basis, but you can’t do that for a particular type of failure.

The last statement is correct: HA does not allow you to disable restarts for a site failure, though you can fully disable HA for a particular VM. But back at my hotel I started thinking about the question and realized that there is a workaround to achieve this. I didn’t note down the name of the customer who asked the question, so hopefully you will read this.

When it comes to a stretched cluster configuration typically you will use VM/Host rules. These rules will “dictate” where VMs will run, and typically you use the “should” rule as you want to make sure VMs can run anywhere when there is a failure. However, you can also create “must” rules, and yes this means that the rules will not be violated and that those VMs can only run within that site. If a host fails within a site then the impacted VMs will be restarted within the site. If the site fails then the “must rule” will prevent the VMs from being restarted on the hosts in the other location. The must rules are pushed down to the “compatibility list” that HA maintains, which will never be violated by HA.
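The effect of a “must” rule on HA’s compatibility list can be illustrated with a small toy model. The host and site names below are made up; this is a sketch of the placement logic described above, not VMware code.

```python
# Toy model: four hosts spread over two sites of a stretched cluster.
hosts = {"esx-a1": "site-A", "esx-a2": "site-A",
         "esx-b1": "site-B", "esx-b2": "site-B"}

# A "must" VM/Host rule pins the VM's compatibility list to site-A hosts.
must_rule_site = "site-A"

def restart_candidates(failed_hosts):
    """Hosts HA may restart the VM on: alive AND on the compatibility list.
    A "must" rule is never violated, so other-site hosts never qualify."""
    return [h for h, site in hosts.items()
            if h not in failed_hosts and site == must_rule_site]

# Single host failure within site-A: restart on the surviving site-A host.
print(restart_candidates({"esx-a1"}))            # ['esx-a2']

# Full site-A failure: no compatible hosts remain, HA leaves the VM down.
print(restart_candidates({"esx-a1", "esx-a2"}))  # []
```

This is exactly the behavior the workaround exploits: within-site host failures still trigger a restart, while a full site failure yields an empty candidate list and therefore no cross-site restart.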

Simple work-around to prevent VMs from being restarted in another site.

SMP-FT support for Virtual SAN ROBO configurations

Duncan Epping · Oct 12, 2015 ·

When we announced Virtual SAN 2-node ROBO configurations at VMworld we received a lot of great feedback and responses. A lot of people asked if SMP-FT was supported in that configuration. Apparently many of the customers using ROBO still have legacy applications which could use some form of extra protection against a host failure. The Virtual SAN team had not anticipated this and unfortunately had not tested this specific scenario, so our response had to be: not supported today.

We took the feedback to the engineering and QA team, and they managed to do full end-to-end tests for SMP-FT on 2-node Virtual SAN ROBO configurations. I am proud to announce that as of today this is fully supported with Virtual SAN 6.1! I do want to point out that all SMP-FT requirements still apply, which means 10GbE for SMP-FT! Nevertheless, if you have the need to provide that extra level of availability for certain workloads, now you can!

HA/DRS configuration with Virtual SAN Stretched Cluster environment

Duncan Epping · Sep 9, 2015 ·

This question is going to come sooner or later: how do I configure HA/DRS when I am running a Virtual SAN Stretched Cluster configuration? I described some of the basics of Virtual SAN stretched clustering in the What’s New for 6.1 post already; if you haven’t read it, I urge you to do so first. There are a couple of key things to know: first of all, the latency that can be tolerated between data sites is 5ms, and to the witness location ~100ms.

If you look at the picture below you can imagine that when a VM sits in Fault Domain A and is reading from Fault Domain B, it could incur a latency of 5ms for each read IO. From a performance perspective we would like to avoid this 5ms latency, so for stretched clusters we introduce the concept of read locality. We don’t have this in a non-stretched environment, as there the latency is microseconds rather than milliseconds. This “read locality” is something we need to take into consideration when we configure HA and DRS.
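A quick back-of-the-envelope calculation shows why that 5ms matters. The only figure taken from the post is the 5ms inter-site latency; the read rate is an arbitrary example.

```python
# Back-of-the-envelope cost of cross-site reads in a stretched cluster.
inter_site_latency_ms = 5     # tolerated data-site latency from the post
reads_per_second = 1000       # arbitrary example workload

# Without read locality, every read may traverse the inter-site link,
# adding the full round-trip latency to each IO.
total_added_ms_per_second = reads_per_second * inter_site_latency_ms
print(total_added_ms_per_second)  # 5000 ms of cumulative added latency/sec

# With read locality, reads are served from the local fault domain,
# where latency is measured in microseconds, so this penalty disappears.
```

Even at a modest IO rate the accumulated cross-site penalty dwarfs local access times, which is why keeping reads in the VM’s own fault domain is worth the extra HA/DRS configuration care.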

[Read more…] about HA/DRS configuration with Virtual SAN Stretched Cluster environment

VMworld 2015: Site Recovery Manager 6.1 announced

Duncan Epping · Sep 1, 2015 ·

This week Site Recovery Manager 6.1 was announced. There are many enhancements in SRM 6.1, like the integration with NSX and policy-driven protection, but personally I feel that support for stretched storage is huge. When I say stretched storage I am referring to solutions like EMC VPLEX, Hitachi Virtual Storage Platform, IBM SAN Volume Controller, etc. In the past (and you still can today), when you had these solutions deployed you would have a single vCenter Server with a single cluster, and you moved VMs around manually when needed, or let HA take care of restarts in failure scenarios.

As of SRM 6.1 running these types of stretched configurations is now also supported. So how does that work, what does it allow you to do, and what does it look like? Well, in contrast to a vSphere Metro Storage Cluster solution, with SRM 6.1 you will be using two vCenter Server instances. Each of these vCenter Server instances will have an SRM server attached to it, which uses a storage replication adapter to communicate with the array.

But why would you want this? Why not just stretch the compute cluster as well? Many have deployed these stretched configurations for disaster avoidance purposes. The problem however is that there is no form of orchestration whatsoever, which means that all workloads will typically come up in a random order. In some cases the application knows how to recover from situations like that; in most cases it does not, leaving you with a lot of work, as after a failure you will now need to restart services, or VMs, in the right order. This is where SRM comes in; this is the strength of SRM: orchestration.

Besides orchestrating a full failover, what SRM can also do in the 6.1 release is evacuate a datacenter using vMotion in an orchestrated/automated way. If there is a disaster about to happen, you can now use the SRM interface to move virtual machines from one datacenter to another with just a couple of clicks; this is called a planned migration.

Personally I think this is a great step forward for stretched storage and SRM, very excited about this release!


About the Author

Duncan Epping is a Chief Technologist and Distinguished Engineering Architect at Broadcom. Besides writing on Yellow-Bricks, Duncan is the co-author of the vSAN Deep Dive and the vSphere Clustering Deep Dive book series. Duncan is also the host of the Unexplored Territory Podcast.

Copyright Yellow-Bricks.com © 2025