
Yellow Bricks

by Duncan Epping


BC-DR

Rubrik update >> 3.1

Duncan Epping · Feb 8, 2017 ·

It has been a while since I wrote about Rubrik. This week I was briefed by Chris Wahl on what is coming in their next release, which is called Cloud Data Management 3.1. As Chris mentioned during the briefing, backup solutions grab data. In most cases this data is then never used again, or in some cases it is used for restores, but that is it. A bit of a waste if you consider there are various other use cases for this data.

First of all, it should be possible from a backup and recovery perspective to set a policy, secure it, validate compliance and search the data. On top of that, the data set should be fully indexed and accessible through APIs, which allow you to automate and orchestrate various types of workflows, like for instance providing it to developers for test/dev purposes.
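To make that a bit more concrete, here is a minimal sketch of what driving such a platform through its API could look like. The endpoint paths, field names and SLA names below are purely hypothetical placeholders for illustration, not the actual Rubrik API.

import requests

# Hypothetical REST endpoints and payloads, purely for illustration --
# not the actual Rubrik (or any other vendor's) API.
BASE_URL = "https://backup.example.com/api/v1"
HEADERS = {"Authorization": "Bearer REPLACE_ME"}

def assign_sla_policy(vm_id: str, sla_name: str) -> None:
    """Attach a protection policy (RPO/retention) to a VM."""
    resp = requests.patch(f"{BASE_URL}/vm/{vm_id}",
                          json={"slaDomain": sla_name},
                          headers=HEADERS, timeout=30)
    resp.raise_for_status()

def clone_latest_snapshot_for_dev(vm_id: str, target_host: str) -> str:
    """Expose the most recent snapshot of a VM as a test/dev copy."""
    snaps = requests.get(f"{BASE_URL}/vm/{vm_id}/snapshot",
                         headers=HEADERS, timeout=30).json()["data"]
    latest = max(snaps, key=lambda s: s["date"])
    job = requests.post(f"{BASE_URL}/snapshot/{latest['id']}/mount",
                        json={"targetHost": target_host},
                        headers=HEADERS, timeout=30)
    job.raise_for_status()
    return job.json()["jobId"]

if __name__ == "__main__":
    assign_sla_policy("vm-101", "Gold-4h-RPO")
    print(clone_latest_snapshot_for_dev("vm-101", "esxi-dev-01"))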

Anyway, what was introduced in Cloud Data Management 3.1? From a source perspective Rubrik today supports vSphere, SQL Server, Linux and NAS, and with 3.1 “physical” Windows (or native, whatever you want to call it) is supported as well (Windows 2008 R2, 2012 and 2012 R2), fully policy based in a similar way to how it was implemented for vSphere. Support for SQL Server Failover Clustering (WSFC) was also added. Note that the Rubrik connector must be installed on both nodes; Rubrik will automatically recognize that the hosts are part of a cluster and provide additional restore options etc.

There are a couple of user experience improvements as well. Instead of being “virtual machine” centric, the UI now revolves around “hosts”. This means the focus is on the “OS”: Rubrik will for instance show all file systems which are protected, and a calendar with, per day, the set of snapshots taken of the host. One of the areas where Rubrik still had some gaps was reporting and analytics; with 3.1 Rubrik Envision is introduced.

Rubrik Envision lets you build your own fully customisable reports, and of course provides different charts and filtering / query options. These can be viewed, downloaded and emailed in HTML5 format, and this can also be done in a scheduled fashion: create a report and schedule it to be sent out. Four standard reports are included to get you started, and of course you can also tweak those if needed.


(blatantly stole this image from Mr Wahl)

Cloud Data Management 3.1 also adds software-based encryption (AES-256) at rest, where in the past self-encrypting drives were used. The great thing is that this will be supported for all R300 series, and it is a single click to enable it, nice! When thinking about this later I asked Chris a question about multi-tenancy and he mentioned something I had not realized:

For multi tenant environments, we’re encrypting data transfers in and out of the appliance using SSL certificates between the clusters (such as hosting provider cluster to customer cluster), which are logically divided by SLA Domains. Customers don’t have any visibility into other replication customers and can supply their own keys for archive encryption (Azure, AWS, Object, etc.)

That was a nice surprise to me. Especially in multi-tenant environments, or large enterprise organizations with a clear separation between business units, that is a nice plus.

Chris mentioned some “minor” changes as well. In the past Rubrik would help with every upgrade, but this didn’t scale well, plus there are customers who have Rubrik gear installed in a “dark site” (meaning no remote connection, for security purposes). With the 3.1 release there is the option for customers to do the upgrade themselves: download the binary, upload it to the box, type upgrade and things happen. Also, restores directly to ESXi are now possible, which is useful; in the past you needed vCenter in place first. There are some other enhancements around restoring, but too many little things to go into. Overall a good solid update if you ask me.

Last but not least, from a company/business point of view: 250 people work at Rubrik right now, and they have seen 6x growth in terms of customer acquisition, which is great to hear (no statement around customer count though). I am sure we will hear more from these guys in the future. They have a good story, a good product, and they are solving a real pain point in most datacenters today: backup/recovery and the explosion of data sets and data growth. Plenty of opportunities if you ask me.

vSphere Replication 6.5, 5 minute RPO for ALL!

Duncan Epping · Nov 16, 2016 ·

I just noticed the following in the vSphere Replication 6.5 release notes which I felt was worth sharing:

5-minute Recovery Point Objective (RPO) support for additional data store types – This version of vSphere Replication extends support for the 5 minute RPO setting to the following new data stores: VMFS 5, VMFS 6, NFS 4.1, NFS 3, VVOL and VSAN 6.5. This allows customers to replicate virtual machine workloads with an RPO setting as low as 5-minutes between these various data store options.

We have had this specifically for vSAN for a while now, but I hadn’t realized that we were enabling this for all sorts of datastores in this release. Definitely a great reason to move up to vSphere 6.5, re-evaluate which VMs could do with a 5-minute RPO, and use this great replication mechanism that simply ships with vSphere for free! More info can be found in the release notes here.
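As a trivial illustration of what the release notes describe, a pre-check along the following lines could validate a requested RPO against the datastore type before configuring replication. This is a minimal sketch: the 5-minute floor encodes the list quoted above, while the 15-minute fallback for anything else and the 24-hour upper bound are assumptions for illustration.

# Minimal sketch: validate a requested RPO (in minutes) against the datastore
# types that vSphere Replication 6.5 supports at a 5-minute RPO, per the
# release notes quoted above. The fallback floor and upper bound are assumptions.
FIVE_MIN_RPO_DATASTORES = {"VMFS 5", "VMFS 6", "NFS 4.1", "NFS 3", "VVOL", "VSAN 6.5"}
MAX_RPO_MINUTES = 24 * 60          # assumed upper bound of the RPO setting

def validate_rpo(datastore_type: str, rpo_minutes: int) -> None:
    floor = 5 if datastore_type in FIVE_MIN_RPO_DATASTORES else 15
    if not floor <= rpo_minutes <= MAX_RPO_MINUTES:
        raise ValueError(f"{datastore_type}: RPO must be between "
                         f"{floor} and {MAX_RPO_MINUTES} minutes")

validate_rpo("NFS 3", 5)           # fine with vSphere Replication 6.5
try:
    validate_rpo("VMFS 5", 3)      # below the supported floor
except ValueError as err:
    print(err)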

If you would like to know more about the 6.5 release, visit this page with links to all docs/downloads by William Lam.

vSphere 6.5 what’s new – VVols

Duncan Epping · Oct 20, 2016 ·

Well, I guess I can keep this one short. What is new for VVols? Replication. Yes, that is right… finally, if you ask me. This is something I know many of my customers have been waiting for. I’ve seen various customers deploy VVols in production, but many were holding off because of the lack of support for replication, and with vSphere 6.5 that has just been introduced. Note that alongside the new VVol capabilities we have also introduced VASA 3.0. VASA 3.0 provides Policy Components in the SPBM UI, which allow you to combine for instance a VVol policy with a VAIO Filter based solution like VMCrypt / Encryption, or with replication or caching from a third-party vendor.

When it comes to replication I think it is good to know that there will be Day 0 support from both Nimble and HPE 3PAR, and more vendors can be expected soon. Not only is replication per object supported, but also replication groups. Replication groups can be viewed as consistency groups, but also as a unit of granularity for failover. By default each VM will be in its own replication group, but if you need some form of consistency, or would like a group of VMs to always fail over at the same time, then they can be lumped together through the replication group option.

There is a full set of APIs available by the way, and I would expect most storage vendors to provide some tooling around their specific implementation. Note that through the API you will for instance be able to “failover” or do a “test failover”, and even reverse replication if and when desired. This release will also come with a set of new PowerCLI cmdlets which allow you to fail over and reverse replication. I can’t remember having seen a test failover cmdlet, but as it is also possible through the API, that should not be rocket science for those who need this functionality. Soon I will have some more stuff to share with regards to scripting DR scenarios…
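The flow those APIs expose boils down to: optionally test a failover, fail over the whole replication group as one unit, then reverse the replication direction. Below is a rough sketch of that idea; the simple in-memory objects just simulate state and are not VMware’s actual API or cmdlets.

# Rough sketch of the VVol replication failover flow described above. The
# in-memory objects below only simulate state; a real implementation would
# call the vendor's tooling or the PowerCLI cmdlets mentioned in the post.
class ReplicationGroup:
    def __init__(self, name: str, vms: list[str]):
        self.name = name
        self.vms = vms                       # these VMs always fail over together
        self.direction = "site-a -> site-b"  # current replication direction

def test_failover(group: ReplicationGroup) -> list[str]:
    # Bring up disposable copies at the target site for validation only.
    return [f"{vm} (test copy @ site-b)" for vm in group.vms]

def failover_and_reverse(group: ReplicationGroup) -> list[str]:
    # Promote the replicas of the whole group, then reverse replication so the
    # recovered workloads are protected again in the other direction.
    recovered = [f"{vm} @ site-b" for vm in group.vms]
    group.direction = "site-b -> site-a"
    return recovered

crm = ReplicationGroup("crm-stack", ["crm-db", "crm-app", "crm-web"])
print(test_failover(crm))
print(failover_and_reverse(crm), "| replication now:", crm.direction)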

data copy management / converged data management / secondary storage

Duncan Epping · Dec 3, 2015 ·

At the Italian VMUG I was part of the “expert panel” at the end of the event. One of the questions was around innovation in the world of IT and what should be next. I knew immediately what I was going to answer: backup/recovery >> data copy management. My key reason being that we haven’t seen much innovation in this space.

And yes, before some of my community friends go nuts and point at Veeam and some of the great stuff they have introduced over the last 10 years: I am talking more broadly here. Many of my customers are still using the same backup solution they used 10-15 years ago. Yes, it is probably a different version, but all the same concepts apply. Well, maybe tapes have been replaced by virtual tape libraries stored on a disk system somewhere, but that is about it. The world of backup/recovery hasn’t really evolved.

Over the last few years, though, we’ve been seeing a shift in the industry. This shift started with companies like Veeam, continued with companies like Actifio, and is now accelerated by companies like Cohesity and Rubrik. What is different about what these guys offer versus the more traditional backup solutions? Well, the difference is that all of these are more than backup solutions; they don’t focus on a single use case. They “simply” took a step back and looked at what kind of solutions are using your data today, who is using it, how, and of course what for. On top of that, where the data is stored is also a critical part of the puzzle.

In my mind Rubrik and Cohesity are leading the pack when it comes to this new wave: they’ve developed a solution which is a convergence of different products (backup / scale-out storage / analytics / etc.). I used “convergence” on purpose, as that is what it is to me: “converged data (copy) management”. Although not all use cases may have reached their full potential yet, the vision is pretty clear, and multiple layers have already converged, even if we would just consider backup and scale-out storage. I say “pretty clear” as the various startups have taken different messaging approaches. This is something that became obvious during the last Storage Field Day, where Cohesity presented a different story than, for instance, Rubrik did during Virtualization Field Day. Just as an example, Rubrik typically leads with data protection and management, where Cohesity’s messaging appears to be more around being a “secondary storage platform”. In the case of Cohesity this led to discussions (during SFD) around what secondary storage is, how you get data onto the platform and, finally, what you can do with it.

To me (and the folks at these startups may have completely different ideas around this) there are a couple of use cases which stand out for a converged data management platform, use cases which I would expect to be the first target, and I will explain why in a second.

  1. Backup and Recovery (long retention capabilities)
  2. Disaster Recovery using point in time snapshots/replication (relatively short retention capabilities and low RPO)

Why are these the two use cases to go after first? Well, it is the easiest way to suck data into your system and make your system sticky. It is also the market where innovation is needed, and on top of that you need to have the data in your system first before you can do anything with it, before some of the other use cases start to make sense, like “data analytics”, creating clones for “test / dev” purposes, or spinning up DR instances, whether that is in your remote site or somewhere in the cloud.

The first use case (backup and recovery) is something which all of them are targeting; the second one not so much at this point. In my opinion that is a shame, as it could definitely be very compelling for customers to have these two data availability concepts combined. Especially when some form of integration with an orchestration layer can be included (think Site Recovery Manager here) and protection of workloads is enabled through policy, with the policy allowing you to specify the SLA for data recovery in the form of recovery point, recovery time and retention. And then, when needed, you as a customer have the choice of how you want to make your data available again: VM failover, VM recovery, live/instant recovery, file-granular or application/database object level recovery, and so on and so forth. Not just that, from that point on you should be capable of using your data for other use cases, the use cases I mentioned earlier like analytics, test/dev copies, etc.
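Such a policy is easy to picture as a small data structure: recovery point, recovery time, retention, plus the recovery methods the platform offers. The sketch below is purely illustrative; the field names are not any vendor’s schema.

# Illustrative sketch of a protection policy as described above (RPO, RTO,
# retention, recovery methods). Names and values are made up for the example.
from dataclasses import dataclass, field
from enum import Enum

class RecoveryMethod(Enum):
    VM_FAILOVER = "VM fail-over"
    VM_RECOVERY = "VM recovery"
    INSTANT_RECOVERY = "live/instant recovery"
    FILE_LEVEL = "file granular recovery"
    APP_OBJECT = "application/database object recovery"

@dataclass
class ProtectionPolicy:
    name: str
    rpo_minutes: int        # recovery point objective
    rto_minutes: int        # recovery time objective
    retention_days: int
    methods: list[RecoveryMethod] = field(
        default_factory=lambda: list(RecoveryMethod))  # all methods by default

gold = ProtectionPolicy("gold", rpo_minutes=15, rto_minutes=60, retention_days=365)
dev = ProtectionPolicy("test-dev", rpo_minutes=1440, rto_minutes=480, retention_days=30,
                       methods=[RecoveryMethod.INSTANT_RECOVERY, RecoveryMethod.FILE_LEVEL])
print(gold)
print(dev)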

We aren’t there yet, or better said, we are far from there, but I do feel this is where we are headed… and some are closing in faster than others. I can’t wait for all of this to materialize, for us to start making those next steps, and to see what kind of new use cases can be made possible on converged data management platforms.

VMworld 2015: Site Recovery Manager 6.1 announced

Duncan Epping · Sep 1, 2015 ·

This week Site Recovery Manager 6.1 was announced. There are many enhancements in SRM 6.1, like the integration with NSX and policy driven protection, but personally I feel that support for stretched storage is huge. When I say stretched storage I am referring to solutions like EMC VPLEX, Hitachi Virtual Storage Platform, IBM SAN Volume Controller, etc. In the past (and you still can today), when you had these solutions deployed you would have a single vCenter Server with a single cluster, and you moved VMs around manually when needed or let HA take care of restarts in failure scenarios.

As of SRM 6.1, running these types of stretched configurations is also supported. So how does that work, what does it allow you to do, and what does it look like? Well, in contrast to a vSphere Metro Storage Cluster solution, with SRM 6.1 you will be using two vCenter Server instances. Each of these vCenter Server instances will have an SRM server attached to it, which uses a storage replication adapter to communicate with the array.

But why would you want this? Why not just stretch the compute cluster as well? Many have deployed these stretched configurations for disaster avoidance purposes. The problem, however, is that there is no form of orchestration whatsoever. This means that all workloads will typically come up in a random order. In some cases the application knows how to recover from situations like that; in most cases it does not… leaving you with a lot of work, as after a failure you will need to restart services, or VMs, in the right order. This is where SRM comes in; this is the strength of SRM: orchestration.
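At its core that orchestration is a recovery plan that restarts VMs in priority tiers, waiting for each tier before starting the next. The sketch below is a generic illustration of that idea, not SRM’s actual implementation; power_on is a stand-in for whatever would really start a VM and wait for its services.

# Generic illustration of tiered, ordered restarts (the problem SRM solves);
# not SRM's actual implementation. power_on() stands in for whatever API would
# really power on a VM and wait for its services to come up.
from typing import Callable

RECOVERY_PLAN: list[list[str]] = [
    ["db-01", "db-02"],             # priority 1: databases first
    ["app-01", "app-02"],           # priority 2: application servers
    ["web-01", "web-02", "lb-01"],  # priority 3: web tier and load balancer
]

def run_recovery_plan(plan: list[list[str]], power_on: Callable[[str], None]) -> None:
    for tier_number, tier in enumerate(plan, start=1):
        print(f"Starting priority group {tier_number}: {tier}")
        for vm in tier:
            power_on(vm)            # a real plan would also wait for heartbeats
        print(f"Priority group {tier_number} is up, moving on")

run_recovery_plan(RECOVERY_PLAN, power_on=lambda vm: print(f"  powering on {vm}"))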

Besides orchestrating a full failover, what SRM can also do in the 6.1 release is evacuate a datacenter using vMotion in an orchestrated / automated way. If a disaster is about to happen, you can now use the SRM interface to move virtual machines from one datacenter to another with just a couple of clicks; this is what is called a planned migration.

Personally I think this is a great step forward for stretched storage and SRM, very excited about this release!
