
Yellow Bricks

by Duncan Epping


Nutanix Complete Cluster

Duncan Epping · Aug 18, 2011 ·

I was just reading up and noticed an article about Nutanix. Nutanix is a “new” company which just came out of stealth mode and offers a datacenter-in-a-box type of solution, meaning they provide shared storage and compute resources in a single 2U chassis. This 2U chassis can hold up to 4 compute nodes, and each of these nodes can have 2 CPUs, up to 192GB of memory, 320GB of PCIe SSD, 300GB of SATA SSD, and 5TB of SATA HDDs. Now the cool thing about it is that each node’s “local” storage can be served up as shared storage to all of the nodes, enabling you to use HA/DRS etc. I guess you could indeed describe Nutanix’s solution as the “Complete Cluster” solution, and as Nutanix says it is unique, many analysts and bloggers have been really enthusiastic about this… but is it really that special?

What Nutanix actually uses for their building block is an HPC form-factor case like the one I discussed in May of this year. I wouldn’t call that revolutionary, as Dell, Super Micro, HP (and others) sell these as well but market them differently (in my opinion a missed opportunity). What does make Nutanix somewhat unique is that they package it as a complete solution, including a Virtual Storage Appliance they’ve created. It is not just a VSA; it appears to be a smart device which is capable of taking advantage of the available SSD drives, using them as a shared cache distributed amongst the hosts, and it uses multiple tiers of storage: SSD and SATA. It kind of reminds me of what Tintri does, only this is a virtual appliance that is capable of leveraging multiple nodes. (I guess HP could offer something similar in a heartbeat if they bundled their VSA with the DL170e.) Still, I strongly believe that this is a promising concept and hope these guys are at VMworld so I can take a peek and discuss the technology behind this a bit more in-depth, as I have a few questions from a design perspective…

  • No 10GbE redundancy? (according to the datasheet, just a single port)
  • Only 2 NICs for VM traffic, vMotion and Management? (Why not just 2 10GbE NIC ports?)
  • What about when the VMware cluster boundaries are reached? (currently 32 nodes)
  • Out-of-band management ports? (could be useful to have console access)
  • How about campus cluster scenarios, any constraints?
  • …

Let’s see if I can get these answered over the next couple of days or at VMworld.




Comments

  1. Ron Davis says

    18 August, 2011 at 01:24

    You could always build one of these yourself.
    Use LSI MegaRAID controllers; they have a feature that lets you use SSD as a read/write cache for any LUNs you provision on them. Install ESXi, then install the HP P4000 VSA. Now repeat and use HP’s Network RAID.
    You have made something very similar, but you have control over the pieces. You pick the SSD, the number of SSDs, SATA or SAS for the drives, NICs, etc.
    I bet the price comes out 40% lower for an apples to apples build.

    • Duncan Epping says

      18 August, 2011 at 09:34

      Yes you can. Now that I’ve started to look around, it seems that it is the Super Micro or the Dell chassis that is used with their blades… just white-labelled with a nice bezel. But that is not what is most interesting about this development; it is the distributed storage system Nutanix created which is most interesting!

  2. Tiffany To says

    18 August, 2011 at 02:07

    Thanks for the write-up, Duncan!

    Yes, we’ll be at VMworld and would love to meet you there in person and show you a demo. Please send a note via http://www.nutanix.com/VMWorld.html and we’ll get it set up.

    Regarding your questions:

    No 10GbE redundancy? (according to the datasheet, just a single port)
    > The box currently fails over to one of the 1GbE NICs.

    Only 2 NICs for VM traffic, vMotion and Management? (Why not just 2 10GbE NIC ports?)
    > Traffic separation for vMotion, management and storage traffic is configurable across the 3 NICs.

    What about when the VMware cluster boundaries are reached? (currently 32 nodes)
    > Our storage is not bound by any VMware cluster limits. We’re getting around the 255-LUN limit with an approach similar to LUN masking to create separate zones of storage.

    Out-of-band management ports? (could be useful to have console access)
    > We have IPMI support.

    How about campus cluster scenarios, any constraints?
    > We don’t support stretched clustering today, but are considering it for a future release.

    As for Ron’s comment about building one yourself: building scale-out distributed systems is not a simple feat, which is why our R&D team includes key Google GFS and Exadata architects. HP’s VSA was not built to scale out, so it’s interesting for SMBs, but we’re targeting the mid-market & enterprise with the Nutanix Complete Cluster.

    • Duncan Epping says

      18 August, 2011 at 07:53

      Thanks for your reply, much appreciated. I will most definitely drop by your booth at VMworld.

      And I agree that from a HW perspective you can build one yourself; building a true scale-out distributed system is indeed not easy, and most systems don’t scale above 10 nodes for a reason.

  3. Jason Langdon says

    18 August, 2011 at 12:49

    Great idea, but how many people are going to be turned off by the number of CPUs and the corresponding number of VMware CPU licenses they’ll have to buy in order to cover the 192GB of vRAM?

    • Duncan Epping says

      18 August, 2011 at 12:55

      I think that when you invest in this type of solution, Enterprise Plus is the way to go anyway, which would give you 192GB per host, so I don’t see the problem…
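
      Duncan’s point can be sanity-checked with a bit of arithmetic. A minimal sketch, assuming vSphere 5’s revised vRAM model where each Enterprise Plus CPU license carried a 96GB vRAM entitlement; the function name and numbers are illustrative, not VMware’s actual pricing tooling:

      ```python
      # Back-of-the-envelope vSphere 5 licensing check for one Nutanix node.
      # Assumption: 96 GB vRAM entitlement per Enterprise Plus CPU license
      # (the revised August 2011 licensing model).
      def cpu_licenses_needed(sockets, vram_gb, entitlement_gb=96):
          """One license per socket, but at least enough to cover the vRAM pool."""
          by_socket = sockets
          by_vram = -(-vram_gb // entitlement_gb)  # ceiling division
          return max(by_socket, by_vram)

      # A fully loaded node: 2 sockets, 192 GB of memory.
      print(cpu_licenses_needed(2, 192))  # -> 2: the two socket licenses already cover 192 GB
      ```

      In other words, under that assumption the mandatory two per-socket licenses already carry the full 192GB of vRAM, which is why the memory ceiling is not the extra cost it might first appear to be.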

      • Mike says

        18 August, 2011 at 20:43

        As far as I know, the models start at $75k or so, so yeah – VMware licenses are the least of your worries 🙂

        They mention Fusion-io – it would be interesting to know which model, as we built servers with the big ones ($10k a piece), so I am sure the $75k won’t reach far for that either 😉


  4. Michael Duke says

    19 August, 2011 at 02:21

    The Register has an in-depth view of the solution, including component-level info for the storage bits.

    http://www.theregister.co.uk/2011/08/18/nutanix_storage/

    I like the look of this solution a lot.

  5. Mike says

    19 August, 2011 at 09:02

    40-60% CapEx savings seems a lot. Very likely it is based on list prices. We use bespoke systems and get MASSIVE discounts from Dell EqualLogic, for example (no one is paying retail for SANs, tbh). A full solution is just a fraction of the googled $75k these things cost. I would love to get my hands on one of them though, to make a proper price/performance comparison, as most of it is based on guesses.

  6. Tiffany To says

    20 August, 2011 at 17:03

    Thanks to Michael Duke for pointing folks to The Register article for component details.

    @Mike

    The 40-60% savings is based on street price, not list price, and we compared to low (Dell + Compellent), mid (Dell + Netapp) and high (vBlock) end server + SAN configs.

    Obviously, the savings are less at the low-end, but we are delivering more than just a basic compute + storage building block. Enterprise features like HA, DR, heat-optimized tiering, VM-based policies, capacity optimization, converged backups, etc. are also included.

    $75K is the US list price for the Nutanix Starter Kit (one 2U block with 3 nodes populated), but it will obviously sell for much lower through our solution reseller partners.

    We can drill into this more and show you a demo if you want to join us for a webinar next week (http://www2.nutanix.com/l/8112/2011-08-12/5AQZ) or meet up at VMworld (http://www.nutanix.com/VMWorld.html).

  7. MigrationKing says

    21 August, 2011 at 00:28

    This company has a new idea for a new era of computing. It’s funny how they are getting hammered when they only came out of stealth a few days ago. Indicative of things to come. SAN infrastructures as they exist today are highly costly for the mid-level to enterprise market. Explaining to a CIO that you can drop in a new commoditized server (or servers) for $75,000 is much easier than trying to justify a new EMC SAN array + service and support + licensing + anything else that needs to go with the SAN.

    This idea is one that will be VERY disruptive if it can ever really get out there, and one thing that they need to do is get on the VMware HCL.

  8. Captain Canuck says

    23 August, 2011 at 21:09

    It’s very easy to hammer a new player, particularly when they have taken a new approach to handling a very common issue we’ve all built kludges to handle.

    It’s odd to me to hear people moaning about the price of this thing; obviously that’s coming from the techie and not the management side.

    A $75K 3-node cluster, complete with high-performance (flash) storage as well as ample bulk storage, all rolled into one package that is simple to scale, manage, and support, has me thrilled.

    But we have a real business to run, and it isn’t building bespoke IT solutions just to keep our IT staff challenged.

  9. Pete Thiern says

    24 August, 2011 at 04:59

    I have been in one of their web demos recently. The guy who says he can build it cheaper doesn’t have a clue what he is talking about. It’s like saying you don’t need to buy NetApp because you can do it yourself with ZFS and Nexenta.

    What these guys at Nutanix are doing is interesting because of the file system, tiering, inline dedupe, and things like converged backup and snapshots. We use a lot of NetApp here, not always because it’s what my users want, but because of some of these features. If I can get them without paying for NetApp and still have a one-vendor solution with decent random performance, then it is indeed interesting to me and, I reckon, to a lot of others like me.

  10. Sudhish Ahuja says

    27 August, 2011 at 21:05

    For $75k with all its features it is a great solution. Some time back I worked on a similar solution with blades, SSD, RHEL KVM and HDFS, plus SAN storage for archival.

    Does the Nutanix virtual storage controller run on each node of the cluster as a VSA utilizing 4 vCPUs?
    Is the SOCS filesystem based on ZFS, with a node-level distributed architecture where the redundancy spans all 4 nodes in a Nutanix Starter Kit?
    Do you have a solution which uses SSDs instead of Fusion-io cards? It would help lower the cost and free up a PCIe slot.

  11. Phil says

    29 September, 2014 at 14:08

    We have four of these. Three completely failed and we had to call Nutanix to get them working again. It brought our client down for hours! It had to do with the Fusion-io card. We moved them to our dev environment and went back to 1U servers with fibre-attached storage.

    • duncan says

      29 September, 2014 at 15:50

      I am sorry to hear that you had a bad experience. To be honest, it is not something I have heard from anyone so far. Most feedback I have heard on Nutanix has been positive, and so has my own experience been.

    • @vcdxnz001 says

      2 October, 2014 at 12:46

      Hi Phil,

      If there is anything I can do to ensure that the situation is resolved to your satisfaction please let me know. This is the first I’ve heard of this type of situation and it’s certainly not up to our usual high standards that everyone expects and generally receives. I work for Nutanix in the Solutions and Performance Engineering team and would be happy to help any way I can.
