Rubrik follow up, GA and funding announcement

Two months ago I published an introduction post on Rubrik. Yesterday Rubrik announced that their platform went GA, along with a $41 million Series B funding round led by Greylock. I want to congratulate Rubrik on this new milestone, a major achievement, and I am sure we will hear much more from them in the months to come. For those who don’t recall, here is what Rubrik is all about:

Rubrik is building a hyperconverged backup solution that scales from 3 to 1000s of nodes. Note that this solution will be up and running in 15 minutes and includes the option to age out data to the public cloud. What impressed me most is that Rubrik can discover your datacenter without any agents, scales out in a fully automated fashion, and can deduplicate and compress data while also offering the ability to mount data instantly. All of this through a slick UI, or you can leverage the REST APIs; it is fully programmable end-to-end.

When I published the article, some people commented that you can do the above with various other solutions and asked why I was so excited about theirs. Well, first of all because you can do all of that from a single platform: you don’t need a backup solution plus a storage solution, with multiple pieces to manage and no scale-out capabilities. I like the model, the combination of what is being offered, and the fact that it is a single package designed for this purpose and not glued together… But of course there is more, I just couldn’t talk about it yet. I am not going to go into an extreme amount of detail, as Cormac wrote an excellent piece here and there is this great blog from Chris, a user of the product, which explains the value of the solution. (It is always nice, by the way, to see people read your article and share their experience in return…)

I do want to touch on a couple of things which I feel set Rubrik apart. (There may be others who do this / offer this, but I haven’t been briefed by them.)

  • Global search across all data
    • “Google-like” search, which means you start typing the name of a file from any VM in the UI, and while you type the UI already presents a list of potential files you are looking for. When it shows the right file, you click it and it presents a list of options. A file with this name could of course exist on one or many VMs; you can pick which one you want and select from which point in time. When I was an admin I was often challenged with the problem “I deleted a file, I know the name… but no clue where I stored it, can you recover it?”. That is no problem any longer with global search: just type the name and restore it.
  • True Scale Out
    • I’d already highlighted this, but I agree with Scott Lowe that there is “scale-out” and there is “Scale-Out”. In the case of Rubrik we are talking scale-out with a capital S and a capital O. Not just from a capacity stance, but also when it comes to (as Scott points out) task management and the ability to run any task anywhere in the cluster. So with each node you add you aren’t just scaling capacity, but also performance on all fronts. No single choke point with Rubrik as far as I can tell.
  • Miscellaneous, stuff that people take for granted… but does matter
    • API-Driven – Not something you would expect me to get excited about, and it seems such an obvious thing, but Rubrik’s solution can be configured and managed through the API they expose. Note that every single thing you see in the UI can be done through the API; the UI is simply an API client.
    • Well-performing instant mounts through the use of flash, serving the cluster up as a scale-out NFS solution to any vSphere host in your environment. Want to access a VM that was backed up? Mount it!
    • Cloud archiving… Yes others offer this functionality I know. I still feel it is valuable enough to mention that Rubrik does offer the option to archive data to S3 for instance.
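To make the API-driven point above concrete, here is a minimal sketch of what driving such a platform entirely through REST could look like. The host name, endpoint paths and parameters below are hypothetical, invented for illustration only; they are not Rubrik’s actual API, which is documented by the vendor.

```python
# Hypothetical sketch: everything the UI does maps to a REST call.
# Endpoint paths and parameters are illustrative assumptions, NOT Rubrik's
# real API -- consult the vendor documentation for the actual calls.

class BackupApiClient:
    """Minimal client that builds REST requests against a backup platform."""

    def __init__(self, host):
        self.base_url = f"https://{host}/api/v1"

    def snapshot_request(self, vm_id):
        # Trigger an on-demand backup of a single VM.
        return ("POST", f"{self.base_url}/vm/{vm_id}/snapshot")

    def search_request(self, filename):
        # Global search: one query across every protected VM.
        return ("GET", f"{self.base_url}/search?file={filename}")

client = BackupApiClient("rubrik.example.com")
method, url = client.search_request("budget.xlsx")
print(method, url)  # -> GET https://rubrik.example.com/api/v1/search?file=budget.xlsx
```

The point of the sketch is the shape, not the endpoints: when the UI is just another API client, anything you can click can also be scripted.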

Of course there is more to Rubrik than what I just listed; read the articles by Scott, Cormac and Chris to get a good overview… Or just contact Rubrik and ask for a demo.

Requirements Driven Data Center

I’ve been thinking about the term Software Defined Data Center for a while now. “Software defined” is a great term, but it seems that many agree things have been defined by software for a long time now. When talking about SDDC with customers, it is typically described as the ability to abstract, pool and automate all aspects of an infrastructure. These are very important factors, but not the most important, at least not for me, as they don’t necessarily speak to the agility and flexibility a solution like this should bring. So what is an even more important aspect?

I’ve had some time to think about this lately, and to me what is truly important is the ability to define requirements for a service and have the infrastructure cater to those needs. I know this sounds really fluffy, but ultimately the service doesn’t care what is running underneath, and typically the business owner and the application owners don’t either, as long as all requirements can be met. Key is delivering a service with consistency and predictability. Even more importantly, consistency and repeatability increase availability and predictability, and nothing is more important for the user experience.

When it comes to providing a positive user experience, it is of course key to figure out first what you want and what you need. Typically this information comes from your business partner and/or application owner. Once you know what those requirements are, they can be translated to technical specifications and ultimately drive where the workloads end up. A good example of how this works is VMware Virtual Volumes. VVols is essentially requirements-driven placement of workloads. Not just placement, but of course also all other aspects that determine user experience, like QoS, availability, recoverability and whatever more is desired for your workload.

With Virtual Volumes, placement of a VM (or VMDK) is based on how the policy is constructed and what is defined in it. The Storage Policy Based Management engine gives you the flexibility to define policies any way you like. Of course it is limited to what your storage system is capable of delivering, but from the vSphere platform point of view you can do what you like and create many different variations. If you specify that the object needs to be thin provisioned, or has a specific IO profile, or needs to be deduplicated, or… then those requirements are passed down to the storage system, which makes its placement decisions based on that and ensures the demands can be met. As stated earlier, requirements like QoS and availability are passed down as well. This could be things like latency, IOPS and how many copies of an object are needed (number of 9s resiliency). On top of that, when requirements change, or when for whatever reason an SLA is breached, a requirements-driven environment will assess and remediate to ensure requirements are met.
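The matching idea above can be modeled with a short sketch: a policy is a set of required capabilities, and the engine places the object only on storage that advertises all of them. This is an illustrative model loosely inspired by how SPBM matches policies against capabilities; the capability names are assumptions for the example, not the actual SPBM schema.

```python
# Illustrative model of requirements-driven placement: a policy lists required
# capabilities, and only datastores advertising ALL of them are compliant.
# Capability names are made up for the example, not the real SPBM schema.

def compliant_datastores(policy, datastores):
    """Return the names of datastores whose capabilities satisfy the policy."""
    matches = []
    for ds in datastores:
        caps = ds["capabilities"]
        # Every requirement in the policy must be met exactly.
        if all(caps.get(req) == value for req, value in policy.items()):
            matches.append(ds["name"])
    return matches

policy = {"thin_provisioned": True, "replicas": 2}

datastores = [
    {"name": "gold", "capabilities": {"thin_provisioned": True, "replicas": 2}},
    {"name": "bronze", "capabilities": {"thin_provisioned": False, "replicas": 1}},
]

print(compliant_datastores(policy, datastores))  # -> ['gold']
```

A real engine would also re-evaluate compliance continuously, which is what enables the remediation behavior described above when requirements change or an SLA is breached.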

That is what a requirements driven solution should provide: agility, availability, consistency and predictability. Ultimately your full data center should be controlled through policies and defined by requirements. If you look at what VMware offers today, then it is fair to say that we are closing in on reaching this ideal fast.

vCenter Server Appliance watchdog

I was reviewing a paper on vCenter availability for 6.0 and it listed a watchdog service which monitors “VPXD” (the vCenter Server service) on the vCenter Server Appliance. I had seen the service before but never really looked into it. With 5.5 the watchdog service (/usr/bin/vmware-watchdog) was only used to monitor vpxd and tomcat, but in 6.0 it seems to monitor several more services. I did a “grep” for vmware-watchdog within the 6.0 appliance and the output below shows the services which are being watched:

ps -ef | grep vmware-watchdog
 root 7398 1 0 Mar27 ? 00:00:00 /bin/sh /usr/bin/vmware-watchdog -s rhttpproxy -u 30 -q 5 /usr/sbin/rhttpproxy -r /etc/vmware-rhttpproxy/config.xml -d /etc/vmware-rhttpproxy
 root 11187 1 0 Mar27 ? 00:00:00 /bin/sh /usr/bin/vmware-watchdog -s vws -u 30 -q 5 /usr/lib/vmware-vws/bin/vws.sh
 root 12041 1 0 Mar27 ? 00:09:58 /bin/sh /usr/bin/vmware-watchdog -s syslog -u 30 -q 5 -b /var/run/rsyslogd.pid /sbin/rsyslogd -c 5 -f /etc/vmware-rsyslog.conf
 root 12520 1 0 Mar27 ? 00:09:56 /bin/sh /usr/bin/vmware-watchdog -b /storage/db/vpostgres/postmaster.pid -u 300 -q 2 -s vmware-vpostgres su -s /bin/bash vpostgres
 root 29201 1 0 Mar27 ? 00:00:00 /bin/sh /usr/bin/vmware-watchdog -a -s vpxd -u 3600 -q 2 /usr/sbin/vpxd

As you can see, vmware-watchdog is run with a couple of parameters, which differ for some of the services. As it is the most important service, let’s have a look at VPXD. It shows the following parameters:

-a
-s vpxd
-u 3600
-q 2

What the above parameters result in is the following: the service, named vpxd (-s vpxd), is monitored for failures and will be restarted twice (-q 2) at most. If it fails for a third time within 3600 seconds/one hour (-u 3600) the guest OS will be restarted (-a).
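The restart policy those parameters describe can be sketched as follows. This is an illustrative model of the behavior under the interpretation above (restart up to `-q` times within the `-u` window, then escalate via `-a`); it is not the actual vmware-watchdog implementation.

```python
# Sketch of the watchdog restart policy for vpxd (-s vpxd -u 3600 -q 2 -a):
# restart the service for failures within the window, but escalate to a guest
# OS reboot once the quick-failure budget is exhausted. Illustrative model
# only, NOT the real vmware-watchdog code.

def watchdog_action(failure_times, window=3600, max_restarts=2):
    """Decide the action after the latest failure, given failure timestamps."""
    latest = failure_times[-1]
    # Count failures that fall inside the sliding window ending now.
    recent = [t for t in failure_times if latest - t < window]
    if len(recent) <= max_restarts:
        return "restart-service"
    return "reboot-guest"   # the -a flag: escalate when restarts are exhausted

# Three vpxd failures within the hour: the third one triggers a guest reboot.
print(watchdog_action([0, 100, 200]))    # -> reboot-guest
# A failure long after the earlier ones only restarts the service.
print(watchdog_action([0, 100, 5000]))   # -> restart-service
```

Note how the window matters: the same third failure outside the 3600-second window is treated as a fresh incident rather than an escalation.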

Note that the guest OS will only be restarted when vpxd has failed multiple times; with other services this is not the case, as the “grep” above shows. There are some more watchdog-related processes, but I am not going to discuss those at this point, as the white paper being worked on by Technical Marketing will discuss them in a bit more depth and should be the authoritative resource.

** Please do not make changes to ANY of the above parameters as this is totally unsupported. I am merely showing the details for educational purposes and to provide better insight into vCenter availability when it comes to the VCSA. **

Implementing a Hybrid Cloud Strategy white paper

Last week I posted this on the VMware Office of CTO blog, and I figured I would share it with my regular readers here as well. A couple of months ago I stumbled across a great diagram developed by Hany Michael (Consulting Architect, VMware PSO), who is part of the VMware CTO Ambassador program. The CTO Ambassadors are a small group of our most experienced and talented customer-facing, individual-contributor technologists. The diagram explained an interesting architecture: hybrid cloud. After a brief discussion with Hany I decided to reach out to David Hill (Senior Technical Marketing Architect, vCloud Air) and asked if he was interested in getting this work published. Needless to say, David was very interested. Together we worked on expanding the great content that Hany had already developed. Today, the result is published.

The architecture described in this white paper is based on a successful real-world customer implementation. Besides explaining the required steps, it also explains the use case for this particular customer. We hope that you find the paper useful and that it will help you implement or position a hybrid cloud strategy.

Implementing a Hybrid Cloud Strategy

IT has long debated the merits of public and private cloud. Public clouds allow organizations to gain capacity and scale services on-demand, while private clouds allow companies to maintain control and visibility of business-critical applications. But there is one cloud model that stands apart: hybrid cloud. Hybrid clouds provide the best of both worlds: secure, on-demand access to IT resources with the flexibility to move workloads onsite or offsite to meet specific needs. It’s the security you need in your private cloud with the scalability and reach of your public cloud. Hybrid cloud implementations should be versatile, easy to use, and interoperable with your onsite VMware vSphere® environment. Interoperability allows the same people to manage both onsite and offsite resources while leveraging existing processes and tools and lowering the operational expenditure and complexity…

Cloud native inhabitants

Whenever I hear the term “cloud native” I think about my kids. It may sound a bit strange, as many of you will probably think about “apps” first when “cloud native” is dropped. Cloud native to me is not about an application, but about a problem which has been solved and a solution which is offered in a specific way. A week or so ago someone made a comment on Twitter about how “Generation X” will adopt cloud faster than the current generation of IT admins…

Some even say that “Generation X” is more tech savvy: just look at how a 3-year-old handles an iPad, they are growing up with technology. To be blunt… that has nothing to do with the technical skills of the 3-year-old, but is more about the intuitive user interface that took years to develop. It comes naturally to them, as that is what they are exposed to from day one. They see their mom or dad swiping a screen daily, and mimicking them doesn’t require a deep technical understanding of how an iPad works; they move their finger from right to left… but I digress.

My kids don’t know what a video tape is, and even a CD to play music is so 2008, which for them is a lifetime; my kids are cloud native inhabitants. They use Netflix to watch TV, Spotify to listen to music, Facebook to communicate with friends, and YouTube, Gmail and many other services running somewhere in the cloud. They are native inhabitants of the cloud. They won’t adopt cloud technology faster; for them it is a natural choice, as it is what they are exposed to day in, day out.