vOpenData – feed up!

A couple of weeks ago I asked a question on Twitter: what is the average disk size of a virtual machine these days? Within a couple of minutes Ben Thomas replied and said we might be able to create a survey script, and he copied William Lam in. Now, for those who have never worked with William, if you ask him a question like that you can expect him to knock something out… William and Ben decided not to just knock out a survey script, but rather an open community project called vOpenData.

vOpenData

This open community project consists of a script that collects the data (and they collect a significant amount; you can see exactly what they collect here) and aims to provide various trending statistics and data for virtualized environments. The data is fed back into the vOpenData database, and the vOpenData website has a great dashboard which presents all these cool stats. For instance, at the moment 77 infrastructures have provided data to the collection. The question I asked, what is the average disk size, currently says “61.51GB”. That average is based on those 77 infrastructures, with over 27,000 VMs combined. Nice, right?!

I have already emailed William a bunch of suggestions, and as I will be in Palo Alto this week I am sure some more will bubble up during conversations. I am hoping that everyone sees the power of a solution like this and helps feed data into the vOpenData platform.

Go here to download the bits and feed up!

** I have had some people ask me how vOpenData compares to CloudPhysics, and I have seen others make that comparison as well… To be honest, you can’t really compare them: vOpenData is about averages and statistics, while CloudPhysics is more about analytics and simulation models. **

Guaranteeing availability through admission control, chip in!

I have been having discussions with our engineering teams for the last year around guaranteed restarts of virtual machines in a cluster. In its current shape/form, we use Admission Control to guarantee virtual machines are restarted. Today Admission Control is all about guaranteeing virtual machine restarts by keeping track of memory and CPU resource reservations, but you can imagine that in the Software Defined Datacenter this could be expanded with, for instance, storage or networking reservations.

Now why am I having these discussions; what is the problem with Admission Control today? Well, first of all there is the perception that many appear to have of Admission Control: many believe the Admission Control algorithm uses “used” resources. In reality, Admission Control is not that flexible; it uses resource reservations, and as you know those are static. So what is the result of using reservations?

By using reservations for “admission control”, vSphere HA has a simple way of guaranteeing that a restart is possible at all times: it checks whether sufficient “unreserved resources” are available and, if so, it allows the virtual machine to be powered on. If not, it denies the power-on, ensuring that all virtual machines can be restarted in case of a failure. But what is the problem? Although we guarantee a restart, we do not guarantee any type of performance after the restart! Unless, of course, you set your reservations equal to what you provisioned… but I don’t know anyone who does this, as it eliminates any form of overcommitment and results in an increase in cost and a decrease in flexibility.
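
To make the current mechanism concrete, here is a minimal sketch of a reservation-based admission check, assuming a simplified single-resource (memory) model. The function and variable names are mine, and the real HA admission control policies (slot-based, percentage-based, dedicated failover hosts) are of course more involved:

```python
# Minimal sketch of a reservation-based admission check (a simplified,
# single-resource illustration; not the actual vSphere HA code).

def can_power_on(cluster_capacity_mb, reserved_mb,
                 new_vm_reservation_mb, failover_capacity_mb):
    """Allow the power-on only if, after accounting for the new VM's
    reservation, enough unreserved capacity remains to restart all VMs
    after a host failure."""
    unreserved = cluster_capacity_mb - reserved_mb - new_vm_reservation_mb
    return unreserved >= failover_capacity_mb

# 64GB cluster, 40GB already reserved, one 16GB host held back for failover:
print(can_power_on(65536, 40960, 4096, 16384))   # True  - enough slack left
print(can_power_on(65536, 40960, 12288, 16384))  # False - would eat into failover capacity
```

The thing to notice is that only the reservation figures into the decision; actual usage never does, which is exactly why a guaranteed restart says nothing about performance after the restart.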

So that is the problem. Question is – what should we do about it? We (the engineering teams and I) would like to hear from YOU.

  • What would you like admission control to be?
  • What guarantees do you want HA to provide?
  • After a failure, what criteria should HA apply in deciding which VMs to restart?

One idea we have been discussing is to have Admission Control use something like “used” resources, or for instance an “average of resources used” per virtual machine. What if you could say: I want to ensure that my virtual machines always get at least 80% of what they use on average? If so, what should HA do when there are not enough resources to meet the 80% demand of all VMs? Power on some of the VMs? Power on all of them with reduced share values?
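
Purely as a thought experiment, a demand-based check could look something like the sketch below, which implements the “power on some of the VMs” option by greedily admitting VMs until 80% of their average usage no longer fits. The names and the policy itself are hypothetical, not an actual HA design:

```python
# Illustrative sketch of a demand-based restart decision: can the
# surviving capacity give every VM at least 80% of its average usage?
# Purely hypothetical; not how vSphere HA admission control works today.

def restartable_vms(surviving_capacity_mb, avg_usage_mb_per_vm, factor=0.8):
    """Greedily admit VMs (smallest demand first) until the guaranteed
    share of their average usage no longer fits."""
    admitted = []
    remaining = surviving_capacity_mb
    for vm, avg_usage in sorted(avg_usage_mb_per_vm.items(),
                                key=lambda kv: kv[1]):
        demand = avg_usage * factor
        if demand <= remaining:
            admitted.append(vm)
            remaining -= demand
    return admitted

usage = {"db01": 8192, "web01": 2048, "web02": 2048, "app01": 4096}
print(restartable_vms(12288, usage))  # ['web01', 'web02', 'app01'] - db01 left out
```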

Also, something we have discussed is having vCenter show how many resources are used on average, taking your high availability N-X setup into account, which would at least provide insight into how your VMs (and applications) will perform after a fail-over. Is that something you see value in?

What do you think? Be open and honest, tell us what you think… don’t be scared, we won’t bite, and we are open to all suggestions.

Tintri releases version 2.0 – Replication added!

I have never made it a secret that I am a fan of Tintri. I just love their view on storage systems and the way they decided to solve specific problems. When I was in Palo Alto last month I had the opportunity to talk to the folks of Tintri again about what they were working on. Of course we had a discussion about the Software Defined Datacenter, and more specifically Software Defined Storage and what Tintri would bring to the SDS era. As all of that was under strict NDA I can’t share it, but what I can share are some cool details of what Tintri has just announced: version 2.0 of their storage system.

For those who have never even looked into Tintri, I suggest you catch up by reading the following two articles:

  1. Tintri – virtual machine aware storage
  2. Tintri follow-up

When I was briefed initially about Tintri back in 2011, one of the biggest areas of improvement I saw was availability. Two things were on my list to be solved: the first was the “single controller” approach they took, which was solved back in 2011. The other feature I missed was replication, and replication is the main feature announced today as part of the 2.0 release of their software. What I have always loved about Tintri is that all the data services they offer operate at the virtual machine level, and of course the same applies to the replication announced today.

Tintri offers asynchronous replication, which can go down to a recovery point objective (RPO) of 15 minutes. Of course I asked if there were plans to bring this down, and indeed there are, but I can’t say when. What I liked about this replication solution is that, as data is deduplicated and compressed, the amount of replication traffic is kept to a minimum. Let me rephrase that: globally deduplicated, meaning that if a block already exists at the DR site it will not be replicated to that site. This will definitely have a positive impact on your bandwidth consumption; Tintri has seen up to a 95% reduction in WAN bandwidth consumption. The diagram below shows how this works.
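
To illustrate the principle of global deduplication (my sketch of the idea, not Tintri’s actual implementation), replication boils down to shipping a block only when its fingerprint is unknown at the DR site:

```python
import hashlib

# Conceptual sketch of dedup-aware replication: only blocks whose
# fingerprints are not yet known at the DR site cross the WAN.
# An illustration of the principle, not Tintri's implementation.

def replicate(snapshot_blocks, dr_known_hashes, send_block):
    """snapshot_blocks: iterable of bytes; dr_known_hashes: set of digests
    already present at the DR site; send_block: function that ships data."""
    sent = 0
    for block in snapshot_blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in dr_known_hashes:
            send_block(digest, block)   # ship fingerprint plus data
            dr_known_hashes.add(digest)
            sent += 1
        else:
            send_block(digest, None)    # fingerprint only: reference existing block
    return sent

blocks = [b"A" * 4096, b"B" * 4096, b"A" * 4096]  # third block is a duplicate
shipped = replicate(blocks, set(), lambda digest, data: None)
print(shipped)  # 2 - the duplicate block never crosses the WAN
```

The more blocks the DR site already holds, the less data crosses the WAN, which is where savings like the 95% figure come from.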

The nice thing about the replication technique Tintri offers is that it is well integrated with VMware vSphere and thus offers “VM consistent” snapshots by leveraging VMware’s quiescing technology. My next obvious question was: what about Site Recovery Manager? As failover is on a per-VM basis, orchestrating/automating it would be a welcome option. Tintri is still working on this and hopes to add support for Site Recovery Manager soon. Another thing I would like to see added is the grouping of virtual machines for replication consistency; again, this is on the roadmap and hopefully will be added soon.

One of the other cool features added with this release is remote cloning. Remote cloning basically allows you to clone a virtual machine or template to a different array. Those who have multiple vCenter Server instances in their environment know what a pain this can be, hence I feel this is one of those neat little features you will appreciate once you have used it. It would be great if this functionality could be integrated into the vSphere Web Client as a “right click”. Judging by the comments made by the Tintri team, I would expect that they are already working on deeper/tighter integration with the Web Client, and I can only hope a vSphere Web Client plugin will be released soon so that all granular VM-level data services can be managed from a single console.

All in all, a great new release by Tintri; if you ask me, this release is three huge steps forward!

Software Defined Storage; just some random thoughts

I have been reading many articles over the last few weeks on Software Defined Storage, and I wrote an article on this topic a couple of weeks ago. While reading up, one thing that stood out was that every single storage/flash vendor out there has jumped on the bandwagon and (ab)uses this term wherever and whenever possible. In most of those cases, however, the term isn’t backed by SDS-enabling technology or even a strategy, but let’s not get into a finger-pointing contest, as I think my friends who work for storage vendors are more effective at that.

The article which triggered me to write this one was released a week and a half ago by CRN. The article was a good read, so don’t expect me to tear it down; it just had me thinking about various things, and what better way to clear your head than to write an article about it. Let’s start with the following quote:

While startups and smaller software-focused vendors are quick to define software-defined storage as a way to replace legacy storage hardware with commodity servers, disk drives and flash storage, large storage vendors are not giving ground in terms of the value their hardware offers as storage functionality moves toward the software layer.

Let me also pull out this comment by Keith Norbie in the same article, as I think Keith hit the nail on the head:

Norbie said to think of the software-defined data center as a Logitech Harmony remote which, when used with a home theater system, controls everything with the press of a button.

If you take a look at how Keith’s quote relates to Software Defined Storage, it would mean that you should be able to define EVERYTHING via software. Just like you can simply program the Logitech Harmony remote to work with all your devices, you should be able to configure your platform in such a way that spinning up new storage objects can be done at the touch of a button! Now, getting back to the first quote: whether functionality moves out of a storage system to a management tool or even to the platform is irrelevant if you ask me. If your storage system has an API that allows you to do everything programmatically, you are halfway there.

I understand that many of the startups like to make potential customers believe differently, but the opportunity is there for everyone if you ask me. Yes, that includes old-timers like EMC / NetApp / IBM (etc.) and their “legacy” arrays, as some of the startups like to label them. Again, don’t get me wrong: playing in the SDS space will require significant changes to most storage platforms, as most were never architected for this use case. Most are currently not capable of creating thousands of new objects programmatically; many don’t even have a public API.

However, what is missing today is not just a public API on most storage systems; it is also the platform, which doesn’t yet allow you to efficiently manage these storage systems through those APIs. When I say platform I refer to vSphere, but I guess the same applies to Hyper-V, KVM, Xen, etc. Although various sub-components are already there, like the vSphere APIs for Array Integration (VAAI) and the vSphere APIs for Storage Awareness (VASA), a lot of capabilities are still missing. Good examples would be defining and setting specific data services at virtual-disk granularity, end-to-end Quality of Service for virtual disks or virtual machines, or the automatic instantiation of storage objects during virtual machine provisioning without any manual action from your storage admin. And of course, all of this from a single management console…
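
To make that concrete, policy-driven provisioning from a single console could look something like the sketch below. Every class and method name here is hypothetical; nothing like this API exists in vSphere today, it simply illustrates the kind of workflow meant above:

```python
# Hypothetical sketch of policy-driven storage provisioning during VM
# creation. None of these classes or methods exist in vSphere or any
# array today; they only illustrate the workflow discussed above.

from dataclasses import dataclass, field

@dataclass
class DiskPolicy:
    rpo_minutes: int   # desired replication RPO for this disk
    iops_limit: int    # end-to-end QoS ceiling
    encrypted: bool    # example of a per-disk data service

@dataclass
class Array:
    """Stand-in for a storage array with a programmable public API."""
    objects: list = field(default_factory=list)

    def create_object(self, size_gb, policy):
        obj = {"size_gb": size_gb, "policy": policy}
        self.objects.append(obj)   # one storage object per virtual disk
        return obj

def provision_vm(array, name, disks):
    """Instantiate per-disk storage objects matching each policy,
    with no manual action required from the storage admin."""
    return {"name": name,
            "disks": [array.create_object(size, pol) for size, pol in disks]}

array = Array()
vm = provision_vm(array, "app01", [
    (40, DiskPolicy(rpo_minutes=15, iops_limit=2000, encrypted=False)),
    (200, DiskPolicy(rpo_minutes=60, iops_limit=500, encrypted=True)),
])
print(len(array.objects))  # 2 - one storage object per virtual disk
```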

If you look at VMware vSphere and what is being worked on for the future, you know those capabilities will come at some point; in this case I am referring to what was previewed at VMworld as “virtual volumes” (sometimes also referred to as VVOLs), but this will take time… Yes, I know some storage vendors (primarily the startups out there) already offer some of this granularity, but can you define and set it from your favorite virtual infrastructure management solution during the provisioning of a new workload? Or do you need various tools to get the job done? If you can define QoS on a per-VM basis, is it end-to-end? What about availability and disaster recovery; do they offer a full solution for that? If so, is it possible to simply integrate it with other solutions like, for instance, Site Recovery Manager?

I think exciting times are ahead of us, but let’s all be realistic… they are ahead of us. There is no “Logitech Harmony” experience yet, but I am sure we will get there in the (near) future.

vCenter Federation Survey

One of our product managers asked me if I could share this survey with the world. The topic is vCenter Federation and APIs. It literally takes a couple of minutes to fill out. Your help and input are greatly appreciated, so if you have those two minutes to spare at the end of the day, please take the time:

http://tinyurl.com/VMwareFederator