I was just reading up and noticed an article about Nutanix. Nutanix is a “new” company which just came out of stealth mode and offers a datacenter-in-a-box type of solution, meaning they provide shared storage and compute resources in a single 2U chassis. This 2U chassis can hold up to 4 compute nodes, and each of these nodes can have 2 CPUs, up to 192GB of memory, 320GB of PCIe SSD, 300GB of SATA SSD and 5TB of SATA HDDs. Now the cool thing about it is that each node’s “local” storage can be served up as shared storage to all of the nodes, enabling you to use HA/DRS etc. I guess you could indeed describe Nutanix’s solution as the “Complete Cluster” solution, and as Nutanix says it is unique; many analysts and bloggers have been really enthusiastic about this… but is it really that special?
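To make that concept a bit more concrete, here is a minimal sketch of what pooling each node’s local disks into a single shared datastore could look like. To be clear: this is purely my illustration, all the names are made up, and Nutanix hasn’t published how they actually implement this.

```python
# Purely illustrative: aggregating each node's local disks into one
# shared capacity view, so any node in the block can place a VM
# anywhere (which is what makes HA/DRS possible on local storage).
# Hypothetical names throughout; not Nutanix code.

class Node:
    def __init__(self, name, local_capacity_gb):
        self.name = name
        self.local_capacity_gb = local_capacity_gb

class SharedPool:
    """Presents per-node local storage as a single logical datastore."""
    def __init__(self, nodes):
        self.nodes = nodes

    def total_capacity_gb(self):
        return sum(n.local_capacity_gb for n in self.nodes)

# A full 2U block: 4 nodes, each with roughly 5TB of SATA HDD
block = SharedPool([Node(f"node-{i}", 5000) for i in range(1, 5)])
print(block.total_capacity_gb())  # ~20TB of raw shared capacity
```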
What Nutanix actually uses for their building block is an HPC form factor case like the one I discussed in May of this year. I wouldn’t call that revolutionary, as Dell, Super Micro, HP (and others) sell these as well but market them differently (in my opinion a missed opportunity). What does make Nutanix somewhat unique is that they package it as a complete solution, including a Virtual Storage Appliance they’ve created. It is not just a VSA; it appears to be a smart appliance which is capable of taking advantage of the SSD drives available, using them as a shared cache distributed across the hosts, and it uses multiple tiers of storage: SSD and SATA. It kind of reminds me of what Tintri does, only this is a virtual appliance capable of spanning multiple nodes. (I guess HP could offer something similar in a heartbeat if they bundled their VSA with the DL170e.) Still, I strongly believe this is a promising concept and hope these guys are at VMworld so I can take a peek and discuss the technology behind this a bit more in-depth, as I have a few questions from a design perspective…
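For what it’s worth, this is roughly how I picture a tiered read path working: check the SSD cache tier first and fall back to the SATA tier on a miss, promoting hot blocks into the cache along the way. Again, a minimal sketch based on my own assumptions (a simple promote-on-read policy, made-up names), not Nutanix’s actual implementation.

```python
# Hypothetical sketch of a two-tier read path: SSD as a read cache in
# front of a SATA capacity tier. My interpretation only, not Nutanix code.

class TieredStore:
    def __init__(self, cache_capacity):
        self.ssd_cache = {}              # hot blocks (SSD tier)
        self.sata_tier = {}              # cold blocks (capacity tier)
        self.cache_capacity = cache_capacity

    def write(self, block_id, data):
        self.sata_tier[block_id] = data  # land writes on the capacity tier

    def read(self, block_id):
        if block_id in self.ssd_cache:   # cache hit: fast path
            return self.ssd_cache[block_id]
        data = self.sata_tier[block_id]  # cache miss: slow path
        if len(self.ssd_cache) < self.cache_capacity:
            self.ssd_cache[block_id] = data  # promote the hot block
        return data

store = TieredStore(cache_capacity=1024)
store.write("vm1-block-42", b"...")
store.read("vm1-block-42")  # miss: served from SATA, promoted to SSD
store.read("vm1-block-42")  # hit: served from the SSD cache tier
```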
- No 10GbE redundancy? (according to the datasheet there is just a single port)
- Only 2 NICs for VM traffic, vMotion and management? (Why not just 2 10GbE ports?)
- What about when the VMware cluster boundaries are reached? (a cluster currently maxes out at 32 nodes)
- Out-of-band management ports? (could be useful to have console access)
- What about campus cluster scenarios, are there any constraints?
Let’s see if I can get these answered over the next couple of days or at VMworld.