Lately I have been looking more and more into converged compute and storage solutions, or "datacenter in a box" solutions as some like to call them. I am a big believer in this concept, as some of you may have noticed. For those who have never heard of these solutions, examples would be Nutanix or Simplivity. I have written about both Nutanix and Simplivity in the past, and for a quick primer on those respective solutions I suggest reading those articles. In short, these solutions run a hypervisor with a software-based storage layer that creates a shared storage platform from local disks. In other words, no SAN/NAS required, or as stated… a full datacenter experience in just a couple of U's.
One thing that stood out to me in the last 6 months is that Nutanix, for instance, is often tied to VDI/View solutions. In a way I can understand why, as VDI has been part of their core message / go-to-market strategy for a long time. In my opinion, though, there is no limit to where these solutions can grow and go. Managing storage, or better said your full virtualization infrastructure, should be as simple as creating or editing a virtual machine. That was one of the core principles mentioned during the vCloud Distributed Storage talk at VMworld (vCloud Distributed Storage, by the way, is a VMware software-defined storage initiative).
Hopefully people are starting to realize that these so-called Software Defined Storage solutions will fit in most, if not all, scenarios out there today. I have had several discussions with people about these solutions and wanted to give some examples of how they could fit into your strategy.
Just a week ago I was having a discussion with a customer about disaster recovery. They wanted to add a secondary site and replicate their virtual machines to that site, but the cost associated with a second storage array was holding them back. After an introduction to converged storage and compute solutions they realized they could step into the world of disaster recovery slowly: these solutions allow them to protect their Tier-1 applications first and expand their DR-protected estate when required. By using a converged storage and compute solution they avoid the high upfront cost and can scale out when needed (or when they are ready).
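To make that reasoning a bit more tangible, here is a minimal back-of-the-envelope sketch in Python. All figures (array cost, node cost, VMs per node) are purely hypothetical placeholders and not vendor numbers; the point is only the shape of the comparison between a large day-one array purchase and buying converged nodes as the protected estate grows.

```python
# Hypothetical back-of-the-envelope model: second storage array bought upfront
# versus converged nodes added as the DR-protected estate grows.
# All numbers below are made up for illustration only.

ARRAY_UPFRONT_COST = 250_000   # assumed day-one cost of a second storage array
NODE_COST = 40_000             # assumed cost of one converged compute+storage node
VMS_PER_NODE = 50              # assumed number of protected VMs one node can handle


def converged_cost(protected_vms: int) -> int:
    """Cost of the converged approach: buy nodes only as protection needs grow."""
    nodes_needed = -(-protected_vms // VMS_PER_NODE)  # ceiling division
    return nodes_needed * NODE_COST


# Start by protecting only the Tier-1 applications, then expand the DR estate.
for vms in (50, 100, 200, 400):
    print(f"{vms:4d} protected VMs: converged ≈ ${converged_cost(vms):,} "
          f"vs. array upfront ≈ ${ARRAY_UPFRONT_COST:,}")
```

With different assumptions the crossover point moves around, but the takeaway for this customer was that the initial investment can be limited to the Tier-1 workloads they actually want to protect today.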
One of the service providers I talk to on a regular basis is planning on creating a new cloud service. Their current environment is reaching its limits, and predicting how this new environment will grow in the upcoming 12 months is difficult due to the agile and dynamic nature of the service they are developing. The great thing about a converged storage and compute solution, though, is that they can scale out whenever needed, without a lot of hassle. Typically the only requirement is the availability of 10Gbps ports in your network. For the provider, though, the biggest benefit is probably that services are defined by software; they can up-level or expand their offerings when they please or when there is demand.
These are just two simple examples of how a converged infrastructure solution could fit into your software-defined datacenter strategy. The mentioned vendors, Nutanix and Simplivity, are also just two of the various companies offering these solutions. I know of multiple start-ups who are working on similar products, and of course there are the likes of Pivot3 who already offer turnkey converged solutions. As stated earlier, I personally am a big believer in these architectures, and if you are looking to renew your datacenter or are on the verge of a green-field deployment… I highly recommend researching these solutions.
Go Software Defined – Go Converged!
Rob Bergin says
Don't forget about Scale Computing (http://www.scalecomputing.com/) – I think they were in this space before Nutanix and Simplivity, and I am not tooting their horn – just saying that if you are evaluating the "datacenter in a box" model, I would add them to your short list.
All of these platforms are trying to reach the holy grail of 100% virtualization.
We heard at the VMworld keynote that a lot of large virtualization shops are reaching 50-60% virtualized.
But I think that is the enterprise customer; the smaller SMB or mid-market shops are reaching 80-90% because their workloads are all Intel-based operating systems which run inside virtual machines.
And the hard-to-virtualize Tier 1 / line-of-business (LOB) applications are still elusive prey on the virtualization safari for the enterprise customer, whereas the SMB/mid-market has its LOB applications running in a VM already.
So Nutanix/Simplivity/Scale may find a home in the SMB/mid-market before the enterprise, or they may just surprise folks and land some enterprise deals.
I see this as the hierarchy for building a multi-tenant or multipurpose x86 mainframe:
1) BYOS – Build Your Own Stack – pick a server, a storage platform and add a hypervisor
2) Bundle 1.0 – VCE – a single SKU, but made from multiple vendors (VMware, Cisco, EMC)
3) Bundle 1.1 – IBM’s Pure, HP CloudSystem Matrix (rough name) or Dell Active System (formerly Virtual Integrated System ) – a single SKU from one vendor
4) Reference Architectures – Flexpod, VSPEX – more flexibility in picking the components.
5) Bundle 2.0 – Nutanix, Simplivity, Scale – a single piece of gear that combines compute, memory, and storage, and you add another piece of gear each time you grow (reminds me a little of EqualLogic's model).
6) Bundle 3.0 aka Software Defined Data Center (SDDC) – now you take the compute/storage/hypervisor and add all the physical network devices – load balancers, DNS appliances, IPAM, firewalls, SSL accelerators, WAN caching, etc. – running them as virtual appliances or virtual machines on any of the #1 – #5 options. (Shameless plug – if you are going to PEX next month, check out SPO2307 – Storage Designed for the Software Defined Data Center – it's going to be a very cool session from NetApp on the SDDC.)
7) Hybrid – running #1 – #5 with some SSO/ADFS to SaaS vendor(s) like Salesforce for CRM, and having the ability to move workloads between the four walls of your private data center and an external service provider or providers.
8) Bundle 4.0 – Data Center as a Service – Amazon – reverse-engineer the Instagram platform: it was 100% software-defined and used a public cloud vendor. This space is getting busier and busier – IBM, HP, Rackspace, Virtustream, ProfitBricks, Peak Colo, etc. are all starting to have "Data Center in a Box" offerings that are 100% hosted offsite. And there could be another shameless plug here for NetApp Private Storage for Amazon Web Services.
Lastly – be on the lookout for tools that help IT departments move from 1.0 to 4.0, etc. – the folks at AppZero or Zerto are interesting for moving workloads between paradigms.
Sorry for the long-winded post – I may have had too much coffee.
Tom Ridges says
Good article.
I believe they also pave the way for "cloud" adoption for organisations who are not quite there yet, but have an immediate requirement for resources. The traditional way people consume hardware means that often there is a lot of capacity left in a system (think SAN controllers which can take more disk shelves, empty switches, etc.), which may delay a move to a "cloud" solution while they use up this capacity (and, from a service provider point of view, may delay or increase the cost of moving customers to their "cloud").
By aligning the available resources to the required resources more closely, you reduce the "consume the space you have paid for" syndrome and give the business more flexibility to implement the correct solution, not the solution you paid for last year.
Michael Baranski says
I've had the converged discussion with customers, and their first impression always seems to be that they don't trust software to do the traditional SAN-based activities like replication/DR. I know it sounds strange, but if VMware or their SAN doesn't do it, then they just don't know if it's reliable enough for them. I then remind them of how they also had to learn to trust VMware for vMotion and DRS. 🙂
In the past, when all we had was DAS/SCSI-attached arrays, the issues with those left a bad taste in some people's mouths, so they went the SAN/NAS route. Now they are going back to the old DAS model, and that takes some readjustment, especially in the datacenter versus a remote office.
Duncan Epping says
It makes sense for them not to trust it. For many years they have been told that they needed a SAN to guarantee reliability and performance. However, the reason for that was the fact that server hardware and networking just weren't able to cope with those amounts of data in a proper way. Now that we can use SSDs for performance, server hardware is packed with GHz and GBs, and we have 10Gbps networks, those arguments are no longer relevant. On top of that, there are availability services moving up the stack… the world has changed!
Bob Henderson says
Thanks for this post, Duncan – I watched the Nutanix presentation at London's VMUG last week, and reading your notes here it's convincing enough to merit consideration where it didn't even appear on our list previously. We're in exactly the situation your customer is in – a new private cloud with unquantifiable demand and of course budget constraints. How do we scale it?
I need to be convinced that the compression and thin provisioning offer me the same efficiencies as my current SAN vendor (very impressive numbers to date), but I also have some networking security issues to address. VLANs and NIOC over shared NICs won't cut it in my world much of the time, and I can play with multi-tenancy technologies on my SAN that satisfy some of the isolation requirements put in front of us. Time to sort out an evaluation, I believe…
zagpoint says
Thanks for the article! Now I see where we are going with VMware VSAN…