Serve, Learn and Inspire – support the cause and donate/contribute!

Today I am flying out to Vietnam. No, not for a holiday, as many seem to think. I am flying out to Vietnam to work with several great VMware colleagues who have been asked by the VMware Foundation to go on this journey. I am very grateful and honored to have been selected for this project, and it is not just a random project… We will be helping Team4Tech and Orphan Impact by working on improving the delivery of computer classes to various orphanages in Vietnam.

Some of you will probably have the same question my daughter had when I explained why I was going away to Vietnam for almost two weeks: “You are going to an orphanage to do what… improve the delivery of computer classes? Why would those kids need that?” Watch this video and you will understand why it is important for these kids to get computer classes; preventing social isolation is key here.

More details about Orphan Impact here:
More details about Team4Tech here:

If your company is interested in contributing / giving back, make sure to contact Team4Tech and Orphan Impact. They work with many great technology companies like Intel and VMware on various projects, and every bit of help is welcome.

I know many of my fellow technology lovers have a big heart. I would like to ask each and every one of you who has enjoyed reading my articles to donate something to either Team4Tech or Orphan Impact. (Of course contributions in other ways, like I describe above, are also encouraged!) Believe me, this is a great cause and they can use all the help they can get. You (or the company you work for) can donate any amount, but with only 10 dollars you can, for instance, give one of these kids a headset for their computer classes. (Before anyone asks: yes, I just donated.) VMware folks, if you donate, don’t forget to request a donation match through the VMware Foundation!

Support the cause!

Don’t create a Frankencluster just because you can…

In the last couple of weeks I have had various discussions around creating imbalanced clusters: imbalanced from a CPU, memory, or even a storage point of view. This typically comes up when someone wants to bring larger scale to their cluster and adds hosts with more of any of these resources, or when licensing costs need to be limited and people want to restrict certain VMs to run on a specific set of hosts. The latter comes up often when people start looking at virtualizing Oracle. (Andrew Mitchell published an excellent article on the topic of Oracle licensing and soft vs hard partitioning which is worth reading!)

Why am I not a fan of imbalanced clusters when it comes to compute or storage resources? Why am I not a fan of purposely crippling your environment to ensure your VMs will only run on a subset of vSphere hosts? The reason is simple: the problems I have seen and experienced, and the inefficiency in certain scenarios. Let’s look at some examples:

Let’s assume I have 4 hosts, each with 128GB of memory. I need more memory in my cluster and I add a host with 256GB. You just went from 512GB to 768GB, which is a huge increase. However, this is only true when you don’t do any form of admission control or resource management. When you do proper resource management or admission control, you need to make sure that all of your virtual machines can run in the case of a failure, and preferably run with equal performance before and after the failure has occurred. If that 256GB of memory is being used and the host containing it goes down, your virtual machines could be impacted: they might not restart, and if they do restart they may not get the same amount of resources they received before the failure. This scenario also applies to CPU, if you create an imbalance there.
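The math behind this can be sketched in a few lines. A minimal illustration, assuming a simple worst-case "survive the largest host failing" rule as a stand-in for whatever admission control policy you actually use (the function name and host sizes are mine, taken from the example above):

```python
def failover_capacity_gb(hosts_gb):
    """Memory still available for restarts if the largest host fails,
    i.e. total capacity minus the worst-case single-host loss."""
    return sum(hosts_gb) - max(hosts_gb)

four_hosts = [128, 128, 128, 128]

print(failover_capacity_gb(four_hosts))          # 384 GB protected
print(failover_capacity_gb(four_hosts + [128]))  # 512 GB protected
print(failover_capacity_gb(four_hosts + [256]))  # still 512 GB protected
```

Note the last two lines: under worst-case admission control, adding a 256GB host protects no more capacity than adding another 128GB host would, because the big host itself is now the failure you must plan for.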

Another one I encountered recently was a LUN presented to a limited set of hosts; in this case the LUN was presented to only 2 of the 20 hosts in that cluster… Guess what, when those two hosts die, so do your VMs. Not optimal when they are running an Oracle database, for instance. On top of that, I have seen people pitching a VSAN cluster of 16 nodes with only 3 hosts contributing storage. Yes, you can do that, but again… when things go bad, they will go horribly bad. Just imagine 1 host fails: how will you rebuild the components that were impacted? What is the performance impact? It is very difficult to predict how this will impact your workload, so keep it simple. Sure, there is a cost overhead associated with separating workloads and creating dedicated clusters, but it will be easier to manage and more predictable in failure scenarios.

I guess in summary: If you want predictability in terms of availability and recoverability of your virtual machines go for a balanced environment, don’t create a Frankencluster!

What is Virtual SAN really about?

When talking about Virtual SAN you hear a lot of people talking about the benefits, about what Virtual SAN is essentially about. You see the same with various other so-called Software Defined Storage solutions. When talking about these solutions, people typically mention things like “enabling it within 2 clicks”… Or how easy it is to scale out, or scale up for that matter. How much performance you get because of the way they use flash drives. Or some of the advanced data services they offer.

While all of these are important, when it comes to Virtual SAN I don’t think they are its true strength. Sure, it is great to be able to provide a well-performing, easy-to-install, scale-out storage solution… but the true strength in my opinion is: policy-based management and integration. After having worked with VSAN for months, that is what stood out the most… policy-based management.

What do this deep integration and these policies allow you to do?

  • It provides the ability to specify both Performance and Availability characteristics using the UI (Web Client) or through the API.
    • Number of replicas
    • Stripe width
    • Cache reservations
    • Space reservations
  • It allows you to apply policies to your workloads in an easy way through the UI (or API).
  • It provides the ability to do this in a granular way: per VMDK, not per datastore.
  • It allows you to apply policies to a group of VMs, or even all VMs, in a programmatic way when needed.
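To make the capacity side of these policy settings concrete, here is a back-of-the-envelope sketch. The helper name is mine and the model is deliberately simplified (mirroring only, witness components ignored), but the underlying relationship is how VSAN protects objects: the number of replicas equals the number of failures to tolerate plus one, and the object space reservation is a percentage of the VMDK size that each replica reserves:

```python
def raw_reservation_gb(vmdk_gb, failures_to_tolerate=1, space_reservation_pct=0):
    """Raw datastore space guaranteed to a VMDK: each of the
    (failures_to_tolerate + 1) mirror replicas reserves the given
    percentage of the VMDK size. Witness overhead is ignored here."""
    replicas = failures_to_tolerate + 1
    return replicas * vmdk_gb * space_reservation_pct / 100

# A 100GB VMDK, tolerating 1 failure, fully reserved (thick-like):
print(raw_reservation_gb(100, failures_to_tolerate=1,
                         space_reservation_pct=100))  # 200.0
```

This per-VMDK granularity is exactly the point: one disk of a VM can tolerate two failures while another tolerates one, on the same datastore.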

Over the last couple of months I have played extensively with this feature of VSAN and vCenter, and in my opinion it is by far the biggest benefit of a hypervisor-converged storage solution: deep integration with the platform, exposed in a simple, VM-centric way through the Web Client and/or the vSphere APIs.

vSphere Availability Survey, please help out!

Just received the below from the vSphere Availability team. It takes a couple of minutes to fill out, and it helps the vSphere Availability team set priorities correctly for upcoming releases, yes, indeed, based on your answers!

— copy / paste —

The Availability team (which brings you products such as vSphere HA, FT, etc.) would like to get your input on how you use our products today and on your projected needs. The survey has mainly multiple-choice questions and will take 10-15 minutes to complete. Your feedback is invaluable in helping us tailor our development efforts towards valuable enhancements. So, thank you!

Here’s the link to the survey:

Startup News Flash part 13

Edition 13 of the Startup News Flash already. This week is VMware Partner Exchange 2014, so I expected some announcements to be made. There were a couple of announcements in the last week(s) which I felt were worth highlighting. One is not really about a startup, but I figured it should at least be included in the article: Scale.IO and SuperMicro / LSI / Mellanox / VMware showed an appliance at PEX that was optimized for View deployments. I found it an interesting move and an appealing solution. Chris Mellor wrote an article about it here for the Register.

DataGravity announced their Partner Early Access Program this week. They haven’t revealed what they are building, but judging by the quotes in the announcement they are aiming to bring a simple, cost-effective solution that enables analysis of unstructured data. Definitely interesting, and something I will look into more closely at some point.

Atlantis ILIO USX was announced this week. I already mentioned it in my VSAN update. Atlantis ILIO USX is an in-memory storage solution. They added the ability to pool and optimize any class of storage including SAN, NAS, RAM or any type of DAS (SSD, flash, SAS, SATA) to create a hybrid solution. A change of direction for Atlantis, as their primary focus so far was caching, but it makes a lot of sense to me, especially as they already have many of the data services from their caching platform.

PernixData announced their beta program for FVP 1.5. They added support for vSphere 5.5 and the vSphere Web Client, and this version also allows you to use a VMkernel interface other than the vMotion interface, which their product uses by default. If you want to know more, Chris Wahl wrote a nice article on his experience with FVP 1.5.

Tintri announced it has closed a $75 million Series E funding round led by Insight Venture Partners, with participation from existing investors Lightspeed Venture, Menlo Ventures and NEA. Good to see Tintri getting another boost, and will be interesting to see how they move forward. I have been following them from the very start and have always been impressed with the ease of the solution they have built.

Virtual SAN (related) PEX Updates

I am at VMware Partner Exchange this week and figured I would share some of the Virtual SAN related updates.

  • On the 6th of March there is an online Virtual SAN event with Pat Gelsinger, Ben Fathi and John Gilmartin… Make sure to register for it!
  • Ben Fathi (VMware CTO) stated that VSAN will be GA in Q1, more news in the upcoming weeks
  • Maximum cluster size has been increased from 8 (beta) to 16 according to Ben Fathi, VMware VSAN engineering team is ahead of schedule!
  • VSAN has linear scalability: close to a million IOPS with 16 hosts in a cluster (100% read, 4K blocks), and mixed IOPS close to half a million. All of this with less than 10% CPU/memory overhead. That is impressive if you ask me. Yeah, yeah, I know, numbers like these are just part of the overall story… still, it is nice to see that this kind of performance can be achieved with VSAN.
  • I noticed a tweet by Chetan Venkatesh, and it looks like Atlantis ILIO USX (the in-memory storage solution) has been tested on top of VSAN; they were capable of hitting 120K IOPS using 3 hosts, wow. There is a white paper on this topic to be found here, an interesting read.
  • It was also reiterated that customers who sign up and download the beta will get a 20% discount on their first purchase of 10 VSAN licenses or more!
  • Several hardware vendors announced support for VSAN; a nice short summary by Alberto can be found here.

Operational simplicity through Flash

A couple of weeks back I had the honor to be one of the panel members at the opening of the Pure Storage office in the Benelux. The topic of course was flash, and the discussion primarily revolved around its benefits. The next day I tweeted a quote of one of the answers I gave during the session, which was picked up by Frank Denneman in one of his articles; this is the quote:

David Owen responded to my tweet saying that many performance acceleration platforms introduce an additional layer of complexity, and Frank followed up on that in his article. However, this is not what my quote was referring to. First of all, I don’t agree with David that many performance acceleration solutions increase operational complexity. That said, I do agree that they don’t always make life a whole lot easier either.

I guess it is fair to say that performance acceleration solutions (hypervisor-based SSD caching) are not designed to replace your storage architecture or to simplify it. They are designed to enhance it, to boost performance. During the Pure Storage panel session I talked about how flash changed the world of storage, or better said, is changing the world of storage. When you purchased a storage array over the last two decades, it would come with days’ worth of consultancy, two days typically being the minimum and in some cases a week or even more (depending on the size, the different functionality used, etc.). And that was just the install/configure part. It also required the administrators to be trained, in some cases (not uncommonly) with multiple five-day courses. This says something about the complexity of these systems.

The complexity, however, was not introduced by storage vendors just because they wanted to sell extra consultancy hours. It was simply the result of how the systems were architected, which in itself was the result of one major constraint: magnetic disks. But the world is changing, primarily because a new type of storage was introduced: flash!

Flash allowed storage companies to rethink their architecture, and it is probably fair to state that this was kickstarted by the startups out there who took flash and saw it as their opportunity to innovate. Innovating by removing complexity. Removing (front-end) complexity by flattening their architecture.

Complex constructs to improve performance are no longer required, as (depending on which type you use) a single flash disk delivers more IOPS than 1,000 magnetic disks typically do. Even when it comes to resiliency, most new storage systems introduced different types of solutions to mitigate (disk) failures. No longer is a five-day training course required to manage your storage systems. No longer do you need weeks of consultancy just to install/configure your storage environment. In essence, flash removed a lot of the burden that was placed on customers. That is the huge benefit of flash, and that is what I was referring to with my tweet.
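A rough back-of-the-envelope illustrates why those complex constructs disappear. The per-device IOPS figures below are ballpark assumptions, not measurements from any specific array (a 15K RPM spindle is often quoted around 150-200 IOPS, a flash device in the tens of thousands or more):

```python
import math

def devices_needed(target_iops, iops_per_device):
    """How many devices you would need to aggregate (ignoring
    RAID and controller overhead) to reach a target IOPS number."""
    return math.ceil(target_iops / iops_per_device)

print(devices_needed(100_000, 150))      # 667 magnetic spindles
print(devices_needed(100_000, 100_000))  # 1 flash device
```

When one device replaces hundreds of spindles, the wide striping, short-stroking and cache-tuning constructs built to hide spindle latency simply lose their reason to exist.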

One thing left to say: Go Flash!