
Yellow Bricks

by Duncan Epping


Cohesity announces 4.0 and Round C funding

Duncan Epping · Apr 4, 2017 ·

Earlier this week I was on the phone with Rawlinson Rivera, my former VMware/vSAN colleague, and he told me all about the new stuff Cohesity just announced. First of all, congrats on the Series C funding. As we’ve all seen, it has been mayhem in the storage world lately, and landing a $90 million round is big. The round was co-led by GV (formerly Google Ventures) and Sequoia Capital, with Cisco Investments and Hewlett Packard Enterprise (HPE) also participating as strategic investors. I am not an analyst, and I am not going to pretend to be one either, so let’s talk tech.

Besides the funding round, Cohesity also announced the 4.0 release of their hyper-converged secondary storage platform. Now, let it be clear, I am not a fan of the “hyper-converged” term used here. Why? Well, I think this is a converged solution: they combined multiple secondary storage use cases into a single appliance. Hyper-converged stands for something in the industry, and usually it means the combination of a hypervisor, storage software and hardware. The hypervisor is missing here. (No, I am not saying the “hyper” in “hyper-converged” stands for hypervisor.) Anyway, let’s continue.

In 4.0 some really big functionality is introduced; let’s list it and then discuss each in turn:

  • S3 Compatible Object Storage
  • Quotas for File Services
  • NAS Data Protection
  • RBAC for Data Protection
  • Folder and Tag based protection
  • Erasure Coding

As of 4.0 you can create S3 buckets on the Cohesity platform itself: besides replicating to an S3 bucket, you can now also present one! The interface is fully S3 compatible, and buckets can be created through their simple UI. On top of exposing the platform as S3, you can apply all of their data protection logic to it, so you can have cloud archival / tiering / replication, but also enable encryption, set data retention and create snapshots.

Cohesity already offered file services (NFS and SMB), and in this release they are expanding that functionality. The big request from customers was quotas, and those are introduced in 4.0, along with what they call Write-Once-Read-Many (WORM) capabilities, which in this case refers to data retention (write once, keep forever).
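To make the quota and WORM behaviour concrete, here is a tiny Python sketch of how a file share could enforce a hard quota and write-once semantics. The `Share` class and its fields are purely illustrative assumptions, not Cohesity’s implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Share:
    """Hypothetical model of a file share with a hard quota and WORM retention."""
    quota_bytes: int
    worm: bool = False          # write-once-read-many: existing files are immutable
    files: dict = field(default_factory=dict)

    def write(self, name: str, data: bytes) -> None:
        # WORM: a file that already exists may never be overwritten
        if self.worm and name in self.files:
            raise PermissionError(f"{name} is WORM-protected and cannot be overwritten")
        # Quota: reject the write if it would push usage past the limit
        used = sum(len(d) for f, d in self.files.items() if f != name)
        if used + len(data) > self.quota_bytes:
            raise OSError("quota exceeded")
        self.files[name] = data

share = Share(quota_bytes=1024**3)  # 1 GiB quota
share.write("report.pdf", b"example contents")
```

A real platform enforces this at the protocol layer of course; the sketch just shows the two checks the feature implies.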

For the Data Protection platform they now offer NAS Data Protection. Basically they can connect to a NAS device and protect everything stored on that device by snapping the data and storing it on their platform. So if you have a NetApp filer, for instance, you can now protect it by offloading the data to the Cohesity platform. For the Data Protection solution they also introduce Role-Based Access Control. I think this was one of the big ticket items missing, and with 4.0 they now provide that as well. Last but not least there is “vCenter Integration”, which means that they can now auto-protect groups of VMs based on the folder they are in or the tag they carry. Just imagine you have 5000 VMs: you don’t want to associate a backup scheme with each of these, you would much rather do that for a number of VMs with a similar SLA at a time. Give them a tag, and associate the tag with the protection scheme (see screenshot). Same for folders, easy.
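The tag/folder idea can be sketched in a few lines of Python. The policy names, tags and the `resolve_policy` helper below are all made up for illustration, but they show how one policy attached to a tag covers any number of VMs:

```python
def resolve_policy(vm, tag_policies, folder_policies, default=None):
    """Return the protection policy for a VM; tags win over folder placement."""
    for tag in vm.get("tags", []):
        if tag in tag_policies:
            return tag_policies[tag]
    return folder_policies.get(vm.get("folder"), default)

# Illustrative policies attached to a tag or a folder, not to individual VMs
tag_policies = {"gold-sla": "protect-hourly", "silver-sla": "protect-daily"}
folder_policies = {"/dc1/prod": "protect-daily"}

vms = [
    {"name": "db01",   "tags": ["gold-sla"], "folder": "/dc1/prod"},
    {"name": "web01",  "tags": [],           "folder": "/dc1/prod"},
    {"name": "test01", "tags": [],           "folder": "/dc1/test"},
]

assignments = {vm["name"]: resolve_policy(vm, tag_policies, folder_policies) for vm in vms}
# db01 inherits the tag policy, web01 the folder policy, test01 stays unprotected
```

Add a 5001st VM with the right tag and it is protected automatically; that is the whole point of the feature.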

Last but not least: Erasure Coding. This is not a “front-end” feature, but it is very useful to have. Especially in larger configurations it can save a lot of precious disk space. Today they have a “RAID-1”-like mechanism, more or less, where each block is replicated / mirrored to another host in the cluster. This results in 100% overhead; in other words, for every 100GB stored you need 200GB of capacity. With a 3+1 erasure coding scheme that overhead drops to 33%, which translates to 50% more usable capacity, and with a 5+2 scheme (double protection) you get 43% more. Big savings, a lot of extra usable capacity.
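The capacity math is easy to verify. The sketch below is plain Python doing generic erasure-coding arithmetic (nothing Cohesity-specific) and reproduces the numbers from the paragraph above:

```python
def usable_capacity(raw_gb, data_stripes, parity_stripes):
    """Usable capacity for a data+parity erasure-coding scheme on raw_gb of disk."""
    return raw_gb * data_stripes / (data_stripes + parity_stripes)

raw = 1200  # GB of raw cluster capacity, picked to keep the numbers round

mirror = usable_capacity(raw, 1, 1)  # mirroring: 100% overhead -> 600 GB usable
ec31   = usable_capacity(raw, 3, 1)  # 3+1: 33% overhead -> 900 GB, 50% more than mirroring
ec52   = usable_capacity(raw, 5, 2)  # 5+2: 40% overhead -> ~857 GB, ~43% more than mirroring
```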

Oh, and before I forget: besides getting Cisco and HPE as investors, you can now also install Cohesity on Cisco kit (there’s a list of approved configurations). HPE even took it one step further: they can sell you a configuration with Cohesity included and pre-installed. Smart move.

All in all, some great new functionality and some great enhancements to the current offering. Good work Cohesity, looking forward to seeing what is next for you guys.

Startup intro: Runecast

Duncan Epping · Mar 7, 2017 ·

I met with Runecast a couple of years ago at VMworld. Actually, I am not sure they had a name yet back then; I should probably say I met with the guys who ended up founding Runecast at VMworld. One of them, Stan, is a VCDX, and back then he pitched this idea to me about an appliance that would analyze your environment based on a set of KBs. His idea was primarily based on his experience managing and building datacenters. (Not just Stan’s experience; most of the team are actually former IBM employees.) Interesting concept; it sounded kind of similar to CloudPhysics to me, although the focus was more on correlation of KB articles than on capacity management etc.

Fast forward to 2017 and I just finished a call with the Runecast team. I had a short conversation with them at VMworld 2016 and was under the impression that they had sold the company or quit. Neither is true. Runecast managed to land €1.6m in funding (in the Czech Republic) and is going full steam ahead. With around 10 people, most based in the Czech Republic, they are ready to release the next version of Runecast Analyzer, which will be 1.5. So what does this provide?

Well, just imagine you manage a bunch of hosts and vCenter (not unlikely when you visit my blog), maybe with some shared storage along with it. There are many KB articles, frequent updates to these, and many newly published KBs every week. Then there’s also a whole bunch of best practices and of course the vSphere Hardening Guide. As an administrator, do you have time to read everything that is published every day? And when you have read it, do you have time to check whether the issue or best practice described applies to your infrastructure? Of course you don’t, and this is where Runecast Analyzer comes into play.

You download the appliance and provision it into your environment; next you simply hook vCenter Server into it and off you go. (As of 1.5 it also supports connecting several vCenter Server instances, by the way.) Click “Analyze Now” and check the issues called out in the HTML5 dashboard. As the screenshot below shows, this particular environment has issues identified in the log files that are described in a KB article. There are various other KB articles that may apply; just as an example, a combination of a certain virtual NIC with a specific OS may not be recommended. Also, various potential security issues and best practices are raised if they exist/apply.
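Conceptually, what the analyzer does can be sketched as rule matching: each knowledge-base entry carries an applicability check, and every entry is evaluated against the collected inventory. The KB IDs, fields and rules below are invented purely for illustration:

```python
# Each "KB entry" pairs an ID with a predicate over inventory objects.
kb_rules = [
    {"id": "KB-0001",
     "applies": lambda obj: obj["type"] == "host" and obj["build"] < 4564106,
     "summary": "host build affected by a known issue (illustrative)"},
    {"id": "KB-0002",
     "applies": lambda obj: obj["type"] == "vm" and obj["nic"] == "e1000" and obj["os"] == "windows2012",
     "summary": "vNIC/OS combination not recommended (illustrative)"},
]

# Inventory as collected from vCenter (again, made-up objects).
inventory = [
    {"name": "esx01", "type": "host", "build": 4600944},
    {"name": "esx02", "type": "host", "build": 3620759},
    {"name": "app01", "type": "vm", "nic": "e1000", "os": "windows2012"},
]

# Evaluate every rule against every object; matches become dashboard findings.
findings = [(r["id"], obj["name"]) for r in kb_rules for obj in inventory if r["applies"](obj)]
```

The real product obviously does far more (log analysis, hardening-guide checks, data-set updates), but the core loop is this kind of rules-against-inventory evaluation.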

When you click one of these areas you can drill down into what the issue is and figure out how to mitigate it. In the screenshot below you see the list of KBs that apply to this particular environment; you can open a particular entry (second screenshot below) and then find out what it applies to (objects: VMs, hosts, vCenter etc.). If you feel it doesn’t apply to you, or you accept the risk, you can of course “ignore” the issue. When you click ignore, a filter is created which rules out this issue from being called out on the dashboard. The filtering mechanism is pretty smart, and you can easily create your own filters at any level of the virtual infra hierarchy. Yes, it is also possible to delete the filter(s) again when you feel an issue does apply to your environment.
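The ignore mechanism can be sketched the same way: a filter suppresses a finding at some level of the hierarchy, and deleting the filter brings the finding back. The field names below are illustrative, not Runecast’s actual schema:

```python
findings = [
    {"kb": "KB-0001", "cluster": "cl1", "host": "esx02", "vm": None},
    {"kb": "KB-0002", "cluster": "cl1", "host": "esx02", "vm": "app01"},
]

# One filter: ignore KB-0001, scoped to a single host in the hierarchy.
filters = [{"kb": "KB-0001", "scope": ("host", "esx02")}]

def visible(finding, filters):
    """A finding is shown unless some filter matches its KB at the filter's scope."""
    for f in filters:
        level, value = f["scope"]
        if f["kb"] == finding["kb"] and finding.get(level) == value:
            return False
    return True

dashboard = [f["kb"] for f in findings if visible(f, filters)]
# Only KB-0002 remains; drop the filter and KB-0001 reappears.
```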

Besides checking the environment, as mentioned, Runecast can also analyze the logs for you. I was happy to see this got added, as it makes the product unique compared to other solutions out there. Depending on what you are looking for you have quick filtering options, and of course there are search strings and you can select a time period in which you would like to search for a particular string.

As I said, all of this comes as a virtual appliance, which does not require a direct connection to the internet. However, in order to keep the solution relevant you will need to update regularly; they mentioned they release a new data set roughly once every two weeks. It can be updated over the internet (through a proxy if needed), or you can download an ISO and update Runecast Analyzer through that, which could be very useful in secure locations. The appliance works against vSphere 5.x and 6.x (yes, including 6.5) and there is a 30-day free trial. (Annual subscription, per-socket pricing.) If you would like to give it a try, click the banner on the right side, or go to their website: https://www.runecast.biz/. Pretty neat solution, and I am looking forward to seeing what these guys can achieve with the funding they just received.

Rubrik update >> 3.1

Duncan Epping · Feb 8, 2017 ·

It has been a while since I wrote about Rubrik. This week I was briefed by Chris Wahl on what is coming in their next release, which is called Cloud Data Management 3.1. As Chris mentioned during the briefing, backup solutions grab data. In most cases this data is then never used, or in some cases used for restores, but that is it. A bit of a waste if you imagine there are various other use cases for this data.

First of all, it should be possible from a backup and recovery perspective to set a policy, secure the data, validate compliance and search the data. On top of that, the data set should be fully indexed and accessible through APIs, which allows you to automate and orchestrate various types of workflows, for instance providing it to developers for test/dev purposes.

Anyway, what was introduced in Cloud Data Management 3.1? From a source perspective Rubrik today supports vSphere, SQL Server, Linux and NAS, and with 3.1 “physical” Windows (or native, whatever you want to call it) is supported as well (Windows 2008 R2, 2012 and 2012 R2). It is fully policy based, in a similar way to how they implemented it for vSphere. Also, support for SQL Server Failover Clustering (WSFC) was added. Note that the Rubrik connector must be installed on both nodes. Rubrik will automatically recognize that the hosts are part of a cluster and provide additional restore options.

There are a couple of user experience improvements as well. Instead of being “virtual machine” centric, the UI now revolves around “hosts”. Meaning that the focus is on the “OS”: they will, for instance, show all file systems which are protected, and a calendar with, per day, the set of snapshots of the host. One of the areas where Rubrik still had some gaps was reporting and analytics; with 3.1, Rubrik Envision is introduced.

Rubrik Envision lets you build your own fully customisable reports, and of course provides different charts and filtering/query options. These can be viewed, downloaded and emailed in HTML5 format. This can also be done in a scheduled fashion: create a report and schedule it to be sent out. Four standard reports are included to get you started, and of course you can also tweak those if needed.


(blatantly stole this image from Mr Wahl)

Cloud Data Management 3.1 also adds software-based encryption (AES-256) at rest, where in the past self-encrypting devices were used. The great thing is that this will be supported across the whole R300 series. A single click to enable it, nice! When thinking about this later I asked Chris a question about multi-tenancy, and he mentioned something I had not realized:

For multi tenant environments, we’re encrypting data transfers in and out of the appliance using SSL certificates between the clusters (such as hosting provider cluster to customer cluster), which are logically divided by SLA Domains. Customers don’t have any visibility into other replication customers and can supply their own keys for archive encryption (Azure, AWS, Object, etc.)

That was a nice surprise to me. Especially in multi-tenant environments, or large enterprise organizations with clear separation between business units, that is a nice plus.

Chris mentioned some “minor” changes as well. In the past Rubrik would help with every upgrade, but this didn’t scale well, plus there are customers who have Rubrik gear installed in a “dark site” (meaning no remote connection, for security purposes). With the 3.1 release there is the option for customers to do the upgrade themselves: download the binary, upload it to the box, type upgrade and things happen. Also, restores directly to ESXi are now possible; in the past you needed vCenter in place first. There are some other enhancements around restoring, but too many little things to go into. Overall a good, solid update if you ask me.

Last but not least, from a company/business point of view: 250 people work at Rubrik right now, and they report 6x growth in customer acquisition, which is great to hear. (No statement around customer count though.) I am sure we will hear more from them in the future. They have a good story, a good product, and are solving a real pain point in most datacenters today: backup/recovery and the explosion of data sets and data growth. Plenty of opportunities if you ask me.

Startup intro: Reduxio

Duncan Epping · Sep 23, 2016 ·

About a year ago my attention was drawn to a storage startup called Reduxio, not because of what they were selling (they weren’t sharing much at that point anyway) but because two friends joined them: Fred Nix and Wade O’Harrow (of EMC / vSpecialist fame). I tried to set up a meeting back then, but it didn’t happen for whatever reason and it slipped my mind completely. Before VMworld Fred asked me if I was interested in meeting up, and we ended up having an hour-long conversation at VMworld with Reduxio’s CTO Nir Peleg and Jacob Cherian, their VP of Product. This week we followed up that conversation with a demo; we had an hour scheduled but the demo was done in 20 minutes… not because it wasn’t interesting, but because it was that simple and intuitive. So who is Reduxio and what do they have to offer?

Reduxio is a storage company founded in 2012 and backed by Seagate Technology, Intel Capital, JVP and Carmel Ventures. I probably shouldn’t say storage company, as they position themselves more as a data management company, which makes sense if you know their roadmap. For those who care, Reduxio has a head office in San Francisco and an R&D site in Israel. Today Reduxio offers a hybrid storage system called the HX550: a dual controller (active/standby) solution which comes in a 2U form factor with 8 SSDs and 16 HDDs, connected over 10GbE of course, with dual power supplies and a cache protection unit for power failures. Everything you would expect from a storage system, I guess.

But the hardware specs are not what interested me. The features offered by the platform, or Reduxio’s TIME OS as they call it, are what sets them apart from others. First of all, not surprisingly, the architecture revolves around flash. It is a tiering-based architecture which provides in-memory deduplication and compression; this means that dedupe and compression happen before data is stored on SSD or HDD. What I found interesting as well is that Reduxio expects IO to be random, and all IO will go to SSD; however, if it detects sequential streams then the SSD is bypassed and the IO stream goes directly to HDD. This goes for both reads and writes, by the way. Also, they take proximity of the data into account when IO moves between SSD and HDD, very smart as that ensures data moves efficiently. All of this, by the way, is shown in the UI of course, including dedupe/compression results.
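The IO-placement rule can be illustrated with a toy sketch: random IO lands on SSD, but a detected sequential stream bypasses flash and goes straight to HDD. The detection heuristic below (a run of consecutive offsets counts as sequential) is my own assumption for illustration, not Reduxio’s actual algorithm:

```python
def place_io(stream, seq_threshold=3):
    """Return (offset, tier) placements for a stream of block offsets."""
    placements, run = [], 1
    for i, off in enumerate(stream):
        # Grow the run counter while offsets are consecutive, reset otherwise.
        if i > 0 and off == stream[i - 1] + 1:
            run += 1
        else:
            run = 1
        # Once a sequential run is detected, bypass SSD and go straight to HDD.
        tier = "hdd" if run >= seq_threshold else "ssd"
        placements.append((off, tier))
    return placements

result = place_io([10, 11, 12, 13, 500, 7])
# The sequential run 10..13 ends up on HDD once detected; random offsets hit SSD.
```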

Now the interesting part is the “BackDating” feature Reduxio offers. Basically, in their UI you can specify the retention time of data, and all volumes with the created policy automatically adhere to those retention times. You could compare it to snapshots, but Reduxio solved it differently. They first asked themselves what outcome a customer expected and then looked at how they could solve the problem, without taking existing implementations like snapshots into account. In this case they added time as an attribute to every stored block. The screenshot below, by the way, shows how you can create BackDating policies and what you can set in terms of granularity. So per-second points are kept for 6 hours in this example, hourly points for 7 days, and so on.
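Evaluating such a tiered retention policy can be sketched as picking the finest granularity whose window still covers the age of the data. The tiers mirror the example above (per-second for 6 hours, hourly for 7 days); the third tier and the function itself are illustrative assumptions, not Reduxio’s implementation:

```python
HOUR, DAY = 3600, 86400

# (window in seconds, recovery granularity in seconds)
tiers = [
    (6 * HOUR, 1),      # per-second recovery for the last 6 hours
    (7 * DAY, HOUR),    # hourly recovery points for the last 7 days
    (30 * DAY, DAY),    # daily points for the last 30 days (assumed extra tier)
]

def granularity(age_seconds):
    """Return the recovery granularity (seconds) for data of a given age."""
    for window, step in tiers:
        if age_seconds <= window:
            return step
    return None  # older than every retention window

print(granularity(120))      # data from 2 minutes ago: recoverable to the second
print(granularity(2 * DAY))  # data from 2 days ago: hourly points
```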

The big benefit is that as a result you can go to a volume, pick a point in time, and simply revert the volume to that point in time or create a clone of the volume for that point in time. This is also how the volume will be presented back to vSphere, by the way, so you will have to resignature it before you can access it. The screenshot below shows what the UI looks like, very straightforward: select a date/time or just use the slider if you need to go back seconds/minutes/hours.

What struck me when they demoed this, by the way, was how fast these volume clones were created. Jacob, who was driving the demo, explained that you need to look at their system as a database. They are not creating an actual volume; the cloned volume seen by the host is more the result of a query over a data set consisting of volume, offset, reference and time. Just a virtual construct that points to data.
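Jacob’s database analogy is easy to sketch: store blocks as (volume, offset, time) records, and resolve a read from a point-in-time clone as a query for the newest write at or before time t. This is purely conceptual, not Reduxio’s data structures:

```python
blocks = {}  # (volume, offset, time) -> data

def write(volume, offset, time, data):
    """Every write is a new record; time is just another attribute of the block."""
    blocks[(volume, offset, time)] = data

def clone_read(volume, offset, t):
    """Read from a virtual clone of `volume` at time t: newest write with time <= t."""
    candidates = [(time, data) for (vol, off, time), data in blocks.items()
                  if vol == volume and off == offset and time <= t]
    return max(candidates)[1] if candidates else None

write("vol1", 0, 100, b"old")
write("vol1", 0, 250, b"new")
print(clone_read("vol1", 0, 200))  # b"old": the clone at t=200 predates the overwrite
```

No data is copied when a clone is created, which is why the clones in the demo appeared instantly.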

Oh, and before I forget: just to keep things simple, the UI also allows you to set a bookmark for a certain point in time, so that it is easier to go back to that point using your own naming scheme. Talking about the UI, I think this is the thing that impressed me most. It is a simple concept, but allowing you to drag and drop widgets onto your front-page dashboard is something I appreciate a lot. I may want to see different info on the front page than someone else; having the ability to change this is very welcome. The other thing about their UI: it doesn’t feel crammed. With enterprise systems we seem to have the habit of cramming as much as we can on a single page, which usually results in users not knowing where to start. Reduxio took a clean slate approach: what do we need and what don’t we need?

One other thing I liked was a feature they call StorSense. This is basically a SaaS-based support infrastructure where analytics and an event database can help you prevent issues from occurring. When there is an error, for instance, the UI will inform you about the issue and also tell you how to mitigate it. I felt this was very useful, as you don’t need to search an external KB system to figure out what is going on. Of course they also still offer traditional logging for those who prefer that.

That sounds cool, right? So what’s the catch, you may ask? Well, there is one thing I feel is missing right now, and that is replication, or rather the ability to sync data to different locations. Whether that comes as traditional sync replication, async replication, or something in a different shape or form remains to be seen. I am hoping they take a different approach again, as that is what Reduxio seems to be good at: coming up with interesting alternative ways of solving the same problem.

All in all they impressed me with what they have so far, and I didn’t even mention it yet, but they also have a vSphere plugin which allows for VM-level recovery. Hopefully we can expect support for VVols soon, and some form of replication; just imagine how powerful that combination could be. Great work guys, and looking forward to hearing more in the future!

If you want to know more about them, I encourage you to fill out their contact form so they can get back to you and give you a demo, as I am sure you will appreciate it. (Or simply hit up someone like Fred Nix on Twitter.) Thanks Fred, Jacob and Nir for taking the time to have a chat!

Rubrik landed new funding round and announced version 3.0

Duncan Epping · Aug 24, 2016 ·

After having gone through all my holiday email it is now time to go over some of the briefings. The Rubrik briefing caught my eye as it had some big news in there. First of all, they landed a Series C round; big congrats, especially considering the size, $61m is pretty substantial I must say! Now, I am not a financial analyst, so I am not going to spend too much time talking about it, as the introduction of a new version of their solution is more interesting to most of you. So what did Rubrik announce with version 3, aka Firefly?

First of all, the “Converged Data Management” term seems to be gone and “Cloud Data Management” was introduced; to be honest, I prefer “Cloud Data Management”. Mainly because data management is not just about data in your datacenter, but about data in many different locations, which typically is the case for archival or backup data. So that is the marketing part; what was announced in terms of functionality?

Version 3.0 of Rubrik supports:

  • Physical Linux workloads
  • Physical SQL
  • Edge virtual appliance (for ROBO for instance)
  • Erasure Coding

When it comes to physical SQL and Linux support it probably goes without saying, but you will be able to back up those systems using the same policy-driven / SLA concepts Rubrik already provides in their UI. For those who didn’t read my other articles on Rubrik: policy-based backup/data management (or SLA Domains as they call it) is their big thing. No longer do you create a backup schedule; you create an SLA and assign that SLA to a workload, or even a group of workloads. And now this concept applies to SQL and physical Linux as well, which is great if you still have physical workloads in your datacenter! Connecting to SQL is straightforward: there is a connector service, a simple MSI that needs to be installed.
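The SLA Domain concept can be sketched as one declarative policy (frequency plus retention) assigned to many workloads, instead of a schedule per machine. The domain names, fields and helper below are illustrative assumptions, not Rubrik’s actual API:

```python
from datetime import datetime, timedelta

# One policy definition covers any number of assigned workloads.
sla_domains = {
    "gold":   {"frequency": timedelta(hours=4),  "retention": timedelta(days=30)},
    "bronze": {"frequency": timedelta(hours=24), "retention": timedelta(days=7)},
}

# Workloads reference a domain; they carry no schedule of their own.
assignments = {"sql-prod-01": "gold", "linux-batch-07": "bronze"}

def next_backup(workload, last_backup):
    """When is the next snapshot of this workload due under its SLA domain?"""
    sla = sla_domains[assignments[workload]]
    return last_backup + sla["frequency"]

last = datetime(2016, 8, 24, 8, 0)
print(next_backup("sql-prod-01", last))    # 2016-08-24 12:00:00
print(next_backup("linux-batch-07", last)) # 2016-08-25 08:00:00
```

Changing the “gold” frequency in one place would change the behaviour of every workload assigned to it, which is exactly the appeal of the model.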

Now all that data can be stored in AWS S3, or for instance Microsoft Azure in the public cloud, or maybe a privately deployed Scality solution. The great thing about the different tiers of storage is that you qualify the tiers in their solution and data flows between them as defined in your workload SLA. This also goes for the announced Edge virtual appliance. This is basically a virtualized version of the Rubrik appliance, which allows you to deploy the solution in ROBO offices. Through the SLA you bring data to your main datacenter, but you can also keep “locally cached” copies so that restores are fast.

Finally, Rubrik used mirroring in previous versions to safely store data. Very similar to VMware Virtual SAN, they now introduce Erasure Coding, which means that they will be able to store data more efficiently, and according to Chris Wahl at no performance cost.

Overall an interesting 3.0 release of their platform. If you are looking for a new backup/data management solution, definitely one to keep your eye on.

