VMworld attendees: Give Back!

Always wanted to give back to the community but can’t find the time? How about letting VMware help you with that? If you are attending VMworld Europe, you can do this simply by going to the hangspace and throwing a paper airplane as far as you can. The VMware Foundation did the same thing in the US and it was a big success; I am hoping we can repeat that! The amount that will be donated depends on where your airplane lands. It could be as little as $15, but just as easily $1,000. I just went to the hangspace and threw a paper airplane with my friend Jessica from the VMware Foundation and my former colleague Paul Manning, now at EMC.

Take the time, go to the hangspace, and give back… it literally only takes two minutes!

EVO:RAIL now also available through HP and HDS!

This is going to be a really short blog post, but as many folks have asked about this I figured it was worth a quick article. In the VMworld Europe 2014 keynote today, Pat Gelsinger announced that both HP and Hitachi Data Systems have joined the VMware EVO:RAIL program. If you ask me, this is great news for customers all around the world, as it provides more options for procuring an EVO:RAIL hyper-converged infrastructure appliance through your preferred server vendor!

The family is growing: Dell, Fujitsu, Hitachi Data Systems, HP, Inspur, NetOne Systems and SuperMicro… who would you like to see next?

Underlying Infrastructure for your pets and cattle

Last week on Twitter, Maish asked a question that got me thinking. Actually, I have been thinking about this for a while now. The question deals with how you design your infrastructure for the various types of workloads: whether your workload falls in the “pet” category or the “cattle” category. (If you are not familiar with the pets/cattle terminology, read this article by Massimo.)

I asked Maish what it actually means for your infrastructure, and I gave it some more thought over the last week. Cattle is the type of application architecture that handles failures by providing a distributed solution; it typically scales out instead of up, and the VMs are disposable as they usually won’t hold state. With pets this is different: they typically scale up, resiliency is often provided by either a third-party clustering mechanism or the infrastructure underneath, in many cases they contain state, and recoverability is key. As you can imagine, both types of workloads have different requirements of the infrastructure. Going back to Maish’s question, I guess the real question is whether you can afford what this means for the underlying infrastructure. What do I mean by that?
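Before getting to that, a toy Python sketch may help make the contrast between the two failure models concrete. Everything in it is hypothetical and purely illustrative; it is not tied to any real platform or API.

```python
# Toy illustration of the two failure models described above.
# All names are hypothetical; this is not tied to any real platform or API.
import random


def restore_from_backup(name):
    """Placeholder for a real restore mechanism (backup, replication, ...)."""
    return {"restored": name}


class CattleService:
    """Scale-out and stateless: any instance can serve any request,
    so a failed instance is disposed of and replaced, not repaired."""

    def __init__(self, count=4):
        self.instances = [f"web-{i}" for i in range(count)]

    def handle_failure(self, failed):
        self.instances.remove(failed)                      # throw the broken VM away
        self.instances.append(f"web-{random.randint(100, 999)}")  # spawn a fresh one

    def serve(self):
        return random.choice(self.instances)               # no state, any instance will do


class PetService:
    """Scale-up and stateful: this specific instance matters, so failure
    means recovery (restore state, restart in place), not replacement."""

    def __init__(self, name="db-01"):
        self.name = name
        self.state = {}                                    # state that must survive failures

    def handle_failure(self):
        # Recoverability is key: a clustering mechanism or the infrastructure
        # underneath must bring *this* instance back with its state intact.
        self.state = restore_from_backup(self.name)


cattle = CattleService()
cattle.handle_failure("web-0")   # replace, don't repair
pet = PetService()
pet.handle_failure()             # repair, don't replace
```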

If you look at the requirements of both architectures, you could say that “pets” will typically demand more from the underlying infrastructure when it comes to resiliency / recoverability. Cattle will demand less from that perspective, but flexibility / agility is more important. You can imagine implementing two different infrastructure architectures for these specific workloads, but does this make sense? If you are Netflix, Google, YouTube etc. then it may make sense, due to the scale they operate at and the fact that IT is their core business. In those cases “cattle” is what drives the business, with “pets” limited to a handful of back-end systems. The reality is, though, that for the majority this is not the case. Your environment will be a hybrid, and more than likely “pets” will have the upper hand, as that is simply the state of the world today.

That does not mean they cannot co-exist. That is what I believe is the true strength of virtualization: it allows you to run many different types of workloads on the same infrastructure. Whether it is your Exchange environment or your in-house developed scale-out web application serving hundreds of thousands of customers makes no difference to your virtualization platform. From an operational perspective the big benefit is that you will not have to maintain different run books to manage your workloads. From an ops perspective they will look the same on the outside, although they may differ on the inside. What may change is the set of services required for those systems, but with the rich ecosystem available for virtualization platforms these days that should not be a problem. Need extra security / micro-segmentation? VMware NSX can provide the security isolation needed to run these applications smoothly. Sub-millisecond latency requirements? There are plenty of storage / caching solutions out there that can deliver this!
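As a minimal sketch of that single-run-book idea, consider one policy table that attaches different services to each workload class on the same platform. All names and values below are hypothetical, chosen purely to illustrate the point:

```python
# Minimal sketch of a single run book / policy table for both workload types
# on the same platform. All names and values are hypothetical.

SERVICE_POLICIES = {
    "pet": {                            # e.g. Exchange, databases
        "restart_priority": "high",     # resiliency comes from the infrastructure
        "backup": "nightly",            # state must be recoverable
        "micro_segmentation": True,     # e.g. isolation via something like NSX
        "low_latency_storage": True,    # e.g. a flash / caching tier
    },
    "cattle": {                         # e.g. a scale-out web tier
        "restart_priority": "low",      # the application layer handles failures
        "backup": "none",               # disposable, no state to protect
        "micro_segmentation": True,
        "low_latency_storage": False,
    },
}


def provision(vm_name, workload_type):
    """One provisioning path for every VM; only the attached services differ."""
    policy = SERVICE_POLICIES[workload_type]
    print(f"provisioning {vm_name} with {policy}")


provision("exchange-01", "pet")
provision("web-42", "cattle")
```

The point of the sketch: the platform and the provisioning path stay identical, and only the services attached to each workload class vary, which is exactly why one run book suffices.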

Will the application architecture shift that is happening right now impact your underlying infrastructure? We have made huge steps in operational efficiency in the last five years, and with the SDDC we are about to take the next big one. Although I do believe the application architecture shift will result in infrastructure changes, let’s not make the same mistakes we made in the past by creating infrastructure silos per workload. I strongly believe that repeatability, consistency, reliability and predictability are key, and this starts with a solid, scalable and trusted foundation (the infrastructure).

VMware / ecosystem / industry news flash… part 3

It has been a couple of weeks since the last VMware / ecosystem / industry news flash, but we have a couple of items that I felt were worth sharing. As with the previous two parts, I will share each link along with my thoughts around it, and I hope that you will leave a comment with your thoughts on a specific announcement. If you work for a vendor, I would like to ask you to add a disclaimer mentioning this, so that all the cards are on the table.

  • PernixData FVP 2.0 available! New features and also new pricing / packaging!
    Frank Denneman has a whole slew of articles describing the new functionality of FVP 2.0 in depth. If you ask me, the resilient memory caching in particular is a cool feature, but failure domains are also something I can very much appreciate, as they will allow you to build smarter clusters! The change in pricing/packaging kind of surprised me: an “Enterprise” edition was announced and the current version was renamed “Standard”. The SMB package was renamed “Essentials Plus”, which from a naming point of view now aligns more with the VMware naming scheme and makes life easier for customers, I guess. I have not seen details around the pricing itself yet, so I don’t know what the impact actually is. PernixData has upped the game again, and it keeps amazing me how fast they keep growing and at what pace they are releasing new functionality. It makes you wonder what is next for these guys?!
  • Nutanix Unveils Industry’s First All-Flash Hyper-Converged Platform and Only Stretch Clustering Capability!
    I guess the “all-flash” part was just a matter of time considering the price point flash devices have reached. I have looked at these configurations many times, and if you consider that SAS drives are now as expensive as decent SSDs, it only makes sense. It should be noted that “all-flash” also means a new model, the NX-9000, which comes in a 2U / 2-node form factor. List price is $110,000 per node… that is $220K per block, and with a three-node minimum $330K, which feels like a steep price, but then again we all know that the street price will be very different. The NX-9000 comes with six 800 GB or 1.6 TB flash devices for capacity, and I am guessing that the other models will get “all-flash” options as well in the future… it only makes sense. What about that stretched clustering? Well, this is what excited me most about yesterday’s announcement. In version 4.1, Nutanix will allow for up to 400 km of distance between sites in a stretched cluster. Considering their platform is “VM aware”, it should be very easy to select which VMs you want to protect (and which you do not). On top of that, they provide the ability to have a different hardware platform in each of the two sites. In other words, you can run a top-of-the-line block in your primary site while having a lower-end block in your recovery site. From a TCO/ROI point of view this can be very beneficial if you have no requirement for a uniform environment. Judging by the answers on Twitter, the platform has not gone through VMware vSphere Metro Storage Cluster certification yet, but this is likely to happen soon. SRM integration is also being looked at. All in all, nice announcements if you ask me!
  • SolidFire announces two new models and new round of funding ($82 million!)
    What is there to say about the funding that hasn’t been said yet? $82 million in a Series D round says enough, if you ask me. SolidFire is one of those startups that has impressed me from the very beginning. They have a strong scale-out storage system offering excellent quality-of-service functionality, a system that is primarily aimed at the service provider market. That seems to be slowly changing with the introduction of these new models, though, as their smallest model now brings a $100K entry point. Note that the smallest configuration with SolidFire is four nodes; spec details can be found here. As stated, what excites me most about SolidFire is the services that the system brings: QoS, data reduction, and replication / SRM integration.

Thanks, and again feel free to drop a comment / leave your thoughts!

vSphere 5.1 Clustering Deep Dive promotion & major milestone

This week, while looking at the sales numbers of the vSphere Clustering Deep Dive series, Frank and I noticed that we hit a major milestone! In September 2014 we passed 45,000 distributed copies of the vSphere Clustering Deep Dive. Frank and I never expected this, or even dared to dream of hitting this milestone.

When we first started writing the 4.1 book we had discussions about what to expect from a sales point of view. I recall a discussion with Frank about the sales numbers: Frank said he would be happy with 100 copies, and I said, well, 400 would be nice. Needless to say, we have reset our expectations many times since then… We didn’t really follow it closely in the last 12-18 months, but as we were discussing a potential update of the book today, we figured it was time to look at the numbers again just to get an idea. 45,000 copies distributed (ebook + printed) is just remarkable, and we are very humbled, baffled and honoured!

We’ve noticed that the ebook is still very popular, so we decided to do a promo. As of Monday the 13th of October, the 5.1 ebook (Kindle) will be available for only $0.99 for 72 hours; after those 72 hours the price will go up to $3.99, and after another 72 hours it will be back to the normal price. Make sure to get it while it is low priced!

You can pick it up here on Amazon.com! The only other Kindle store we could open the promotion up for was Amazon.co.uk, so that is also an option.