
Yellow Bricks

by Duncan Epping



Startup intro: Runecast

Duncan Epping · Mar 7, 2017 ·

I met with Runecast a couple of years ago at VMworld. Actually, I am not sure they even had a name back then, so I should probably say I met with the guys who ended up founding Runecast. One of them, Stan, is a VCDX, and back then he pitched an idea to me for an appliance that would analyze your environment against a set of KB articles. His idea grew out of his experience building and managing datacenters. (Not just Stan's experience; most of the team are former IBM employees.) An interesting concept, and it sounded somewhat similar to CloudPhysics to me, although the focus was more on KB correlation than on capacity management.

Fast forward to 2017, and I just finished a call with the Runecast team. I had a short conversation with them at VMworld 2016 and was under the impression that they had sold the company or quit. Neither is true. Runecast secured €1.6m in funding (in the Czech Republic) and is going full steam ahead. With around 10 people, most of them based in the Czech Republic, they are ready to release the next version of Runecast Analyzer, which will be 1.5. So what does it provide?

Well, just imagine you manage a bunch of hosts and vCenter (not unlikely if you visit my blog), maybe with some shared storage along with it. There are many KB articles, frequent updates to them, and many newly published KBs every week. Then there's also a whole bunch of best practices and of course the vSphere Hardening Guide. As an administrator, do you have time to read everything that is published every day? And when you have read it, do you have time to check whether the issue or best practice described applies to your infrastructure? Of course you don't, and this is where Runecast Analyzer comes into play.

You download the appliance and provision it into your environment, then you simply hook vCenter Server into it and off you go. (As of 1.5 it also supports connecting several vCenter Server instances, by the way.) Click "analyze now" and check the issues called out in the HTML5 dashboard. As the screenshot below shows, this particular environment has issues identified in the log files that are described in a KB article. Various other KB articles may apply as well; as an example, the combination of a certain virtual NIC with a specific guest OS may not be recommended. Potential security issues and best practices are also raised where they apply.

When you click one of these areas you can drill down into what the issue is and figure out how to mitigate it. In the screenshot below you see the list of KBs that apply to this particular environment; you can open a particular entry (second screenshot below) and find out what it applies to (objects: VMs, hosts, vCenter, etc.). If you feel it doesn't apply to you, or you accept the risk, you can of course "ignore" the issue. When you click ignore, a filter is created which excludes the issue from the dashboard. The filtering mechanism is pretty smart, and you can easily create your own filters at any level of the virtual infrastructure hierarchy. Yes, it is also possible to delete the filter(s) again when you decide the issue does apply to your environment.
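Conceptually, this kind of rule engine matches KB-derived conditions against your inventory and lets ignore filters suppress matches for part of the hierarchy. Here is a toy sketch of that idea; the rule format, field names, and KB id are all made up for illustration and have nothing to do with Runecast's actual implementation:

```python
# Hypothetical sketch of KB-rule matching with hierarchy-scoped ignore filters.
# All identifiers (rule ids, object fields) are invented for illustration.

def find_issues(rules, objects, ignore_filters):
    """Return (rule_id, object_name) pairs where a rule matches an object
    and no ignore filter suppresses the match."""
    issues = []
    for rule in rules:
        for obj in objects:
            if not rule["applies"](obj):
                continue
            # A filter suppresses a rule anywhere below its scope in the tree.
            suppressed = any(
                f["rule_id"] == rule["id"] and obj["path"].startswith(f["scope"])
                for f in ignore_filters
            )
            if not suppressed:
                issues.append((rule["id"], obj["name"]))
    return issues

rules = [
    # e.g. "this virtual NIC with this guest OS is not recommended"
    {"id": "KB-1234", "applies": lambda o: o.get("nic") == "vmxnet3" and o.get("os") == "legacy"},
]
objects = [
    {"name": "vm-01", "path": "/dc1/cluster1/vm-01", "nic": "vmxnet3", "os": "legacy"},
    {"name": "vm-02", "path": "/dc1/cluster2/vm-02", "nic": "vmxnet3", "os": "legacy"},
]
# Ignore the issue for cluster2 only: filters can live at any hierarchy level.
filters = [{"rule_id": "KB-1234", "scope": "/dc1/cluster2"}]
print(find_issues(rules, objects, filters))  # [('KB-1234', 'vm-01')]
```

Deleting a filter simply removes its entry, after which the suppressed issues show up on the dashboard again.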

Besides checking the environment, as mentioned, Runecast can also analyze the logs for you. I was happy to see this added, as it makes Runecast unique compared to other solutions out there. Depending on what you are looking for there are quick filtering options, and of course you can enter search strings and select the time period in which to search for a particular string.
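The core of that kind of log search (a search string restricted to a time window) can be sketched in a few lines. The log entries below are invented examples, not actual vSphere log output:

```python
from datetime import datetime

# Minimal sketch of time-windowed log search, assuming entries are already
# parsed into (timestamp, message) pairs. Sample messages are made up.

def search_logs(entries, needle, start, end):
    """Return messages within [start, end] containing needle (case-insensitive)."""
    return [msg for ts, msg in entries
            if start <= ts <= end and needle.lower() in msg.lower()]

logs = [
    (datetime(2017, 3, 1, 10, 0), "SCSI device reset on naa.600"),
    (datetime(2017, 3, 1, 11, 30), "vmk warning: lost access to volume"),
    (datetime(2017, 3, 2, 9, 0), "SCSI device reset on naa.601"),
]
hits = search_logs(logs, "scsi",
                   datetime(2017, 3, 1, 0, 0), datetime(2017, 3, 1, 23, 59))
print(hits)  # ['SCSI device reset on naa.600']
```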

As I said, all of this comes as a virtual appliance which does not require a direct connection to the internet. However, in order to keep the solution relevant you will need to update it regularly; they mentioned they release a new data set roughly once every two weeks. It can be updated over the internet (through a proxy if needed), or you can download an ISO and update Runecast Analyzer that way, which could be very useful in secure locations. The appliance works against vSphere 5.x and 6.x (yes, including 6.5) and there is a 30-day free trial. (Annual subscription, per-socket pricing.) If you would like to give it a try, click the banner on the right side or go to their website: https://www.runecast.biz/. Pretty neat solution, and I am looking forward to seeing what these guys can achieve with the funding they just received.

Startup intro: Reduxio

Duncan Epping · Sep 23, 2016 ·

About a year ago my attention was drawn to a storage startup called Reduxio, not because of what they were selling (they weren't sharing much at that point anyway) but because two friends joined them: Fred Nix and Wade O'Harrow (of EMC / vSpecialist fame). I tried to set up a meeting back then, but it didn't happen for whatever reason and it slipped my mind completely. Before VMworld, Fred asked me if I was interested in meeting up, and we ended up having an hour-long conversation at VMworld with Reduxio's CTO Nir Peleg and Jacob Cherian, their VP of Product. This week we followed up that conversation with a demo; we had an hour scheduled, but the demo was done in 20 minutes... not because it wasn't interesting, but because it was that simple and intuitive. So who is Reduxio and what do they have to offer?

Reduxio is a storage company founded in 2012 and backed by Seagate Technology, Intel Capital, JVP and Carmel Ventures. I probably shouldn't say storage company, as they position themselves more as a data management company, which makes sense if you know their roadmap. For those who care, Reduxio has its head office in San Francisco and an R&D site in Israel. Today Reduxio offers a hybrid storage system called the HX550: a dual-controller (active/standby) solution in a 2U form factor with 8 SSDs and 16 HDDs, connected over 10GbE of course, with dual power supplies and a cache protection unit for power failures. Everything you would expect from a storage system, I guess.

But the hardware specs are not what interested me. The features offered by the platform, or Reduxio's TIME OS as they call it, are what set them apart. First of all, not surprisingly, the architecture revolves around flash. It is a tiering-based architecture which provides in-memory deduplication and compression, meaning dedupe and compression happen before data is stored on SSD or HDD. What I also found interesting is that Reduxio expects IO to be random, and all IO goes to SSD; however, if it detects sequential streams, the SSD is bypassed and the IO stream goes directly to HDD. This goes for both reads and writes, by the way. They also take proximity of the data into account when IO moves between SSD and HDD, which is very smart as it ensures data moves efficiently. All of this, by the way, is of course shown in the UI, including dedupe/compression results and so on.

Now the interesting part is the "BackDating" feature Reduxio offers. Basically, in their UI you specify retention times for data, and all volumes with that policy automatically adhere to those retention times. You could compare it to snapshots, but Reduxio solved it differently. They first asked themselves what outcome a customer expected, and then looked at how they could solve the problem without taking existing implementations like snapshots into account. In this case they added time as an attribute of a stored block. The screenshot below shows how you can create BackDating policies and what you can set in terms of granularity. So in this example, per-second history needs to be kept for 6 hours, hourly for 7 days, and so on.
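A tiered retention policy like that can be modeled as a list of (granularity, window) pairs, evaluated from finest to coarsest. The tier structure below mirrors the example in the text (seconds for 6 hours, hourly for 7 days), but the third tier and the overall shape are assumptions, not Reduxio's actual defaults:

```python
# Sketch of evaluating a BackDating-style retention policy. Tier values are
# illustrative; the "day" tier is an invented example.

POLICY = [
    ("second", 6 * 3600),        # any second recoverable within 6 hours
    ("hour",   7 * 24 * 3600),   # hourly points recoverable within 7 days
    ("day",    90 * 24 * 3600),  # daily points recoverable within 90 days
]

def recoverable_granularity(age_seconds):
    """Finest granularity still recoverable for data this old, or None if
    it falls outside every retention window."""
    for granularity, window in POLICY:
        if age_seconds <= window:
            return granularity
    return None

print(recoverable_granularity(120))             # 'second' (2 minutes old)
print(recoverable_granularity(2 * 24 * 3600))   # 'hour'   (2 days old)
print(recoverable_granularity(365 * 24 * 3600)) # None     (aged out)
```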

The big benefit is that as a result you can go to a volume, go back to a point in time, and simply revert the volume to that point or create a clone of the volume for that point in time. This is also how the volume is presented back to vSphere, by the way, so you will have to resignature it before you can access it. The screenshot below shows what the UI looks like: very straightforward, select a date/time or just use the slider if you need to go back seconds/minutes/hours.

What struck me when they demoed this, by the way, was how fast these volume clones were created. Jacob, who was driving the demo, explained that you need to look at their system as a database. They are not creating an actual volume; the cloned volume seen by the host is more the result of a query, where the data set consists of volume, offset, reference and time. Just a virtual construct that points to data.
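The "clone is a query" idea can be sketched with a toy time-indexed block store: writes carry a timestamp, and reading a clone "as of time T" is simply a lookup of the newest write at or before T for each offset, so no data is copied at clone time. This is purely illustrative; Reduxio's actual data layout is not public:

```python
# Toy time-indexed block store: a clone at time T is a query, not a copy.

class TimeIndexedStore:
    def __init__(self):
        self.writes = []  # tuples of (volume, offset, time, data)

    def write(self, volume, offset, time, data):
        self.writes.append((volume, offset, time, data))

    def read_at(self, volume, offset, time):
        """Newest block for (volume, offset) written at or before `time`."""
        candidates = [(t, d) for v, o, t, d in self.writes
                      if v == volume and o == offset and t <= time]
        return max(candidates)[1] if candidates else None

store = TimeIndexedStore()
store.write("vol1", 0, time=10, data=b"old")
store.write("vol1", 0, time=20, data=b"new")
# A "clone" of vol1 as of time 15 needs no data movement at all:
print(store.read_at("vol1", 0, time=15))  # b'old'
print(store.read_at("vol1", 0, time=25))  # b'new'
```

A real system would index this rather than scan, but the point stands: the clone is just a different predicate over the same stored blocks.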

Oh, and before I forget: to keep things simple the UI also allows you to set a bookmark for a certain point in time, so that it is easier to go back to that point using your own naming scheme. Talking about the UI, I think this is the thing that impressed me most. It is a simple concept, but allowing you to drag and drop widgets onto your front-page dashboard is something I appreciate a lot. I may want to see different info on the front page than someone else; having the ability to change this is very welcome. The other thing about their UI: it doesn't feel crammed. With most enterprise systems we seem to have the habit of cramming as much as we can onto a single page, which usually results in users not knowing where to start. Reduxio took a clean-slate approach: what do we need and what don't we need?

One other thing I liked was a feature they call StorSense. This is basically a SaaS-based support infrastructure where analytics and an event database help you prevent issues from occurring. When there is an error, for instance, the UI informs you about the issue and also tells you how to mitigate it. I felt this was very useful, as you don't need to search an external KB system to figure out what is going on. Of course they also still offer traditional logging and so on for those who prefer that.

That sounds cool, right? So what's the catch, you may ask? Well, there is one thing I feel is missing right now, and that is replication. Or rather, the ability to sync data to different locations. Whether that ends up being traditional sync replication, async replication, or something in a different shape or form remains to be seen. I am hoping they take a different approach again, as that is what Reduxio seems to be good at: coming up with interesting alternative ways of solving the same problem.

All in all they impressed me with what they have so far, and I didn't even mention it yet, but they also have a vSphere plugin which allows for VM-level recovery. Hopefully we can expect support for VVols soon and some form of replication; just imagine how powerful that combination could be. Great work guys, and I am looking forward to hearing more in the future!

If you want to know more about them, I encourage you to fill out their contact form so they can get back to you and give you a demo, as I am sure you will appreciate it. (Or simply hit up someone like Fred Nix on Twitter.) Thanks Fred, Jacob and Nir for taking the time to have a chat!

Startup intro: ZeroStack

Duncan Epping · Aug 26, 2015 ·

A couple of months back, one of the people I used to work with a lot on the DRS team reached out to me. He told me that he had started a company with some other people I knew, and we spoke about the state of the industry and some of the challenges customers faced. Fast forward to today: ZeroStack just came out of stealth and announced to the world what they are building, along with Series A funding of roughly $5.6m.

At the head of the company as CEO we have Ajay Gulati, a former VMware employee best known for Storage IO Control, Storage DRS and DRS. Kiran Bondapalati is the CTO, and some may recognize that name as he was a lead architect at Bromium. The DNA of the company is a mix of VMware, Nutanix, Bromium, Cisco, Google and more. Not a bad list, I must say.

So what are they selling? ZeroStack has developed a private cloud solution which is delivered in two parts:

  1. A physical 2U/4-node appliance, named ZS1000, which comes with KVM preinstalled
  2. A management/monitoring solution delivered in a SaaS model

ZeroStack showed me a demo, and getting their appliance up and running took about 15 minutes; the configuration wizard was not unlike EVO:RAIL and looked very easy to run through. The magic, however, if you ask me, isn't in the configuration section: it is the SaaS-based management solution. I stole a diagram from their website which immediately shows the potential.


The SaaS management layer provides you with a single pane of glass for all deployed appliances. These can be in a single site or in multiple sites. You can imagine that this is very useful especially for ROBO deployments, but also in larger environments. And it doesn't just show you the physical aspect; it also shows you all the logical constructs that have been created, such as "projects".

At this part of the demo, by the way, I was reminded of vCloud Director a bunch of times, and of AWS for that matter. ZeroStack allows you to create "tenants" and assign resources to them in the form of projects. These can even have lease times, which is similar to what vCloud Director offers.

Looking at the networking aspects of ZeroStack's solution, it also has the familiar constructs like private networks and public networks. On top of that, networking services like routing and firewalling are also implemented in a distributed fashion. And before I forget, everything you see in the UI can also be automated through the APIs, which are fully OpenStack compatible.

Last but not least, we had a discussion about patching and updating. With most systems this is usually the most complicated part. ZeroStack took a very customer-friendly approach. The SaaS layer is updated by them, and this can happen as frequently as once every ten days. The team said they are very receptive to feedback and have a short turnaround time for implementing new functionality, as their goal is to provide most functionality through the SaaS layer. The appliance will be on a different patch/update schedule, probably once every 3 or 6 months, depending of course on the problems fixed and features introduced. The updates are done in a rolling fashion and are non-disruptive to your workloads, as expected.

That sounds pretty cool, right? Well, as always with a 1.0 version there is still some functionality missing, for instance a "high availability" feature for your workloads: if a host fails, then you as an admin will need to restart those VMs. Also, when it comes to load balancing, there is no DRS-like functionality today. Considering the background of the team, though, I can imagine both showing up at some point in the near future. It does mean that for some workloads the 1.0 version may not be the right solution for now. Nevertheless, test/dev and things like cloud-native apps could land on it.

All in all, a nice set of announcements and some cool functionality coming. These guys are going to be at VMworld, so make sure to stop by their booth if you want to see what they are working on.

Startup intro: Rubrik. Backup and recovery redefined

Duncan Epping · Mar 24, 2015 ·

Some of you may have seen the article by The Register last week about this new startup called Rubrik. Rubrik just announced what they are working on and their funding at the same time:

Rubrik, Inc. today announced that it has received $10 million in Series A funding and launched its Early Access Program for the Rubrik Converged Data Management platform. Rubrik offers live data access for recovery and application development by fusing enterprise data management with web-scale IT, and eliminating backup software. This marks the end of a decade-long innovation drought in backup and recovery, the backbone of IT. Within minutes, businesses can manage the explosion of data across private and public clouds.

The Register made a comment which I want to briefly touch on. They mentioned it was odd that a venture capitalist is now the CEO of a startup, when normally it is the person with the technical vision who heads up the company. I couldn't agree more with The Register. For those who don't know Rubrik and their CEO, the choice of Bipul Sinha may come as a surprise and seem a bit odd. There are some who may say it is a logical choice considering they are funded by Lightspeed... The truth of the matter is that Bipul Sinha is the person with the technical vision. I had the pleasure of seeing his vision evolve from a couple of scribbles on a whiteboard to what Rubrik is right now.

I still recall having a conversation with Bipul about the state of the "backup industry", and I recall we agreed that the different components of a datacenter had evolved over time, but that the backup industry was still very much stuck in the old world. (We agreed backup and recovery solutions suck in most cases...) When we had that discussion there was nothing yet: no team, no name, just a vision. Knowing what is coming in the near future and knowing their vision, I think this quote from the press release best captures what Rubrik is working on and what it will do:

Today we are excited to announce the first act in our product journey. We have built a powerful time machine that delivers live data and seamless scale in a hybrid cloud environment. Businesses can now break the shackles of legacy and modernize their data infrastructure, unleashing significant cost savings and management efficiencies.

Of course Rubrik would not be possible without a very strong team of founding members. Arvind Jain, Arvind Nithrakashyap and Soham Mazumdar are probably the strongest co-founders one could wish for. The engineering team has deep experience in building distributed systems, such as Google File System, Google Search, YouTube, Facebook Data Infrastructure, Amazon Infrastructure, and Data Domain File System. Expectations just went up a couple of notches, right?!

I agree that even the statement above is still a bit fluffy, so let's add some more details: what are they working on? Rubrik is working on a solution which combines backup software and a backup storage appliance into a single solution, and will initially target VMware environments. They are building (and I hate using this word) a hyper-converged backup solution, and it will scale from 3 to 1000s of nodes. Note that this solution will be up and running in 15 minutes and includes the option to age out data to the public cloud. What impressed me most is that Rubrik can discover your datacenter without any agents, it scales out in a fully automated fashion, and it is capable of deduplicating/compressing data while also offering the ability to mount data instantly. All of this through a slick UI, or you can leverage the REST APIs; it is fully programmable end-to-end.

I just touched on "instant mount" quickly, but I want to point out that this is not just for restoring VMs. Considering the REST APIs, you can imagine this would also be a perfect solution for enabling test/dev environments or running tier 2/3 workloads. How valuable is it to have instant copies of your production data available, and to test your new code against production data without any interruption to your current environment? To throw a buzzword in there: perfectly fit for a DevOps world and continuous development.
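One way to think about instant mount is as a copy-on-write overlay on top of an immutable backup image: reads fall through to the snapshot, writes land in a private overlay, and nothing is copied up front, which is why the mount is instant. This is a toy mental model, not Rubrik's implementation (which is not public):

```python
# Toy copy-on-write overlay illustrating the instant-mount idea.

class InstantMount:
    def __init__(self, snapshot):
        self.snapshot = snapshot  # immutable backup image: offset -> block
        self.overlay = {}         # writes made after the mount

    def read(self, offset):
        # Overlay wins if the block was rewritten; otherwise fall through.
        return self.overlay.get(offset, self.snapshot.get(offset))

    def write(self, offset, data):
        self.overlay[offset] = data  # the snapshot stays untouched

snap = {0: b"prod-data"}
mount = InstantMount(snap)      # "mounted" instantly, zero bytes copied
mount.write(0, b"test-change")  # test/dev writes go to the overlay
print(mount.read(0))            # b'test-change'
print(snap[0])                  # b'prod-data' (production copy intact)
```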

That is about all I can say for now, unfortunately... For those who agree that backup/recovery has not evolved and are interested in a backup solution for tomorrow, there is an early access program, and I urge you to sign up to learn more, but also to help shape the product! The solution targets environments of 200 VMs and upwards, so make sure you meet that requirement. Read more here and/or follow them on Twitter (or Bipul).

Good luck Rubrik, I am sure this is going to be a great journey!

Startup Intro: Eco4Cloud

Duncan Epping · Dec 3, 2014 ·

This week I had the pleasure of being briefed by Eco4Cloud on what they bring to the world of IT. The first thing that stood out instantly is that this startup is based out of Italy. Yes, indeed: Europe, and not Silicon Valley... a nice change if you ask me! And they are different from most startups today not just from a geographical perspective, but also in terms of the solution they are building. Eco4Cloud is all about datacenter optimization and efficiency. What does this mean?

Most of you have probably heard of vSphere DRS and DPM. If you look at DPM from a conceptual perspective, you could say it is all about lowering cost by consolidating more virtual machines on fewer physical hosts and powering off the unneeded hosts. Eco4Cloud aims to do something similar, but doesn't stop there. Let's look at what they can do today.

Workload Consolidation is the name of their core piece of technology (in my opinion). Workload Consolidation analyzes your hosts and virtual machines and tries to increase consolidation to allow hosts to be powered off without impacting virtual machine SLAs. In other words, if your VM is using 1024MB and 2GHz, it should have this available after the consolidation as well. (vMotion is used to move VMs around.) It does this in a smart way, of course, by ensuring that resources are properly balanced from both a CPU and a memory point of view. E4C has done many proofs of concept by now, and they have shown that they can reduce power consumption by 30-60%; as you can imagine, this is huge for larger datacenters. And it is not just the decrease in power consumption, but also the reduction in carbon footprint and so on.
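At its simplest, this kind of consolidation is a bin-packing problem: fit VM demands onto as few hosts as possible without exceeding capacity, so the rest can be powered off. Below is a toy first-fit-decreasing sketch of that idea; E4C's actual placement logic is proprietary and certainly more sophisticated (it also has to respect SLAs and migration cost), and all names and numbers here are illustrative:

```python
# Toy first-fit-decreasing consolidation: pack VMs tightly, find idle hosts.

def consolidate(vms, hosts):
    """Place VMs (cpu_mhz, mem_mb demands) onto hosts; return the placement
    map and the list of hosts left idle (candidates to power off)."""
    free = {h["name"]: [h["cpu_mhz"], h["mem_mb"]] for h in hosts}
    placements = {h["name"]: [] for h in hosts}
    # Biggest VMs first tends to pack tighter (first-fit decreasing).
    for vm in sorted(vms, key=lambda v: (v["cpu_mhz"], v["mem_mb"]), reverse=True):
        for h in hosts:  # first host with room wins
            if free[h["name"]][0] >= vm["cpu_mhz"] and free[h["name"]][1] >= vm["mem_mb"]:
                free[h["name"]][0] -= vm["cpu_mhz"]
                free[h["name"]][1] -= vm["mem_mb"]
                placements[h["name"]].append(vm["name"])
                break
    idle = [name for name, placed in placements.items() if not placed]
    return placements, idle

hosts = [{"name": "host1", "cpu_mhz": 10000, "mem_mb": 32768},
         {"name": "host2", "cpu_mhz": 10000, "mem_mb": 32768}]
vms = [{"name": "vm-a", "cpu_mhz": 2000, "mem_mb": 1024},
      {"name": "vm-b", "cpu_mhz": 2000, "mem_mb": 2048},
      {"name": "vm-c", "cpu_mhz": 1000, "mem_mb": 1024}]
placements, idle = consolidate(vms, hosts)
print(idle)  # ['host2'] (a candidate to power off or hibernate)
```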

Besides consolidating your workloads, E4C also has a number of features that can help with optimizing the workloads themselves. For instance, Smart Ballooning will preemptively, and in a smart way, reclaim unused memory from specific virtual machines so that other virtual machines can use the memory when needed. More importantly, it frees up claimed resources which are not actually used, to avoid reaching a state of (false) overcommitment.

Of course it is best to right-size your virtual machines in the first place, but as we all know this is fairly difficult, and with the ever-growing demands of application owners it is not going to get any easier. E4C can help with that part too: they can provide the data needed to show which VMs are oversized and help assign them the correct resources, via the Capacity Decision Support Manager. It doesn't just allow you to analyze the current situation, but also offers "what if" scenarios. These are very useful when you expect growth: CDSM can tell you how many hosts you will need to add, and can also help identify which type of hosts.
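At its core, a "what if" growth scenario is capacity arithmetic: project demand forward and divide by per-host capacity, keeping some headroom. The flat growth model, the single memory dimension, and all numbers below are my own illustration, not how CDSM actually models this:

```python
from math import ceil

# Back-of-the-envelope "what if" sizing, memory only, flat growth assumed.

def hosts_needed(current_mem_gb, growth_pct, host_mem_gb, headroom=0.8):
    """Hosts required after growth, filling each host to `headroom` (80%)."""
    projected = current_mem_gb * (1 + growth_pct / 100)
    return ceil(projected / (host_mem_gb * headroom))

# 2 TB of active memory demand today, 25% expected growth, 256 GB hosts:
print(hosts_needed(2048, 25, 256))  # 13
```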

Last but not least there is E4C Troubleshooter, a monitoring solution that helps identify configuration problems on hosts and virtual machines. It can help you identify problems in different areas, but for now the focus seems to be SLA compliance, VM mobility and resource accessibility.

So who is using this? E4C showed me a case study they did with Telecom Italia: out of the 500 hosts Telecom Italia had, they were able to place 100 hosts in hibernation mode, leading to a 440MWh decrease (roughly 20% on average). What I like about the solution, by the way, is that you can run it in analysis mode without having it apply the recommendations. That way you can first see what the potential savings are.

So how does this thing work? Well, it is fairly straightforward, as far as I understand. It is a simple appliance, and installing it is no rocket science... Of course you will need to ask yourself how you would benefit from this solution; if you have 2 hosts then it probably will not make sense, but in large(r) environments I can definitely see how costs can be dramatically lowered by leveraging their datacenter optimization solution.

** Disclaimer: I was briefed by E4C; I have no direct experience with their products. E4C is actively looking for enterprise customers who are willing to test their solution in their own datacenter. If you work for an enterprise and are wondering whether you could benefit from this, please leave a comment and I can put you in touch with them directly! **



About the author

Duncan Epping is a Chief Technologist in the Office of CTO of the Cloud Platform BU at VMware. He is a VCDX (# 007) and the author of the "vSAN Deep Dive" and the “vSphere Clustering Technical Deep Dive” series.


Copyright Yellow-Bricks.com © 2021