VMware EVO:RAIL FAQ

Duncan Epping · Sep 2, 2014 ·

Over the last couple of days the same VMware EVO:RAIL questions have kept popping up over and over again. I figured I would do a quick VMware EVO:RAIL Q&A post so that I can point people to it instead of constantly answering them on Twitter.

  • Can you explain what EVO:RAIL is?
    • EVO:RAIL is the next evolution of infrastructure building blocks for the SDDC. It delivers compute, storage and networking in a 2U / 4-node package with an intuitive interface that allows for full configuration within 15 minutes. The appliance bundles hardware + software + support/maintenance to simplify both procurement and support in a true “appliance” fashion. EVO:RAIL provides the density of blades with the flexibility of rack servers. Each appliance comes with 100GHz of compute power, 768GB of memory capacity and 14.4TB of raw storage capacity (plus 1.6TB of flash for IO acceleration purposes); a quick arithmetic sketch of how these figures add up from the per-host specs follows the FAQ below. For full details, read my intro post.
  • Where can I find the datasheet?
    • http://www.vmware.com/files/pdf/products/evo-rail/vmware-evo-rail-datasheet.pdf
  • What is the minimum number of EVO:RAIL hosts?
    • Minimum number is 4 hosts. Each appliance comes with 4 independent hosts, which means that 1 appliance is the start. It scales per appliance!
  • What is included with an EVO:RAIL appliance?
    • 4 independent hosts each with the following resources
      • 2 x E5-2620 6 core
      • 192GB Memory
      • 3 x 1.2TB 10K RPM Drive for VSAN
      • 1 x 400GB eMLC SSD for VSAN
      • 1 x ESXi boot device
      • 2 x 10GbE NIC ports (SFP+ / RJ45 can be selected)
      • 1 x IPMI port
    • vSphere Enterprise Plus
    • vCenter Server
    • Virtual SAN
    • Log Insight
    • Support and Maintenance for 3 years
  • What is the total available storage capacity?
    • After the VSAN datastore is formed and vCenter Server is installed and configured, about 13.1TB of capacity is left.
  • How many VMs can I run on one appliance?
    • That will very much depend on the size of the virtual machines and the workload. We have been able to comfortably run 250 desktops on one appliance. With server VMs we ended up with around 100. However, again, this very much depends on things like workload, capacity, etc.
  • How many EVO:RAIL appliances can I scale to?
    • With the current release EVO:RAIL scales to 4 appliances (i.e. 16 hosts).
  • If licensing / maintenance / support is for 3 years, what happens after?
    • After 3 years support/maintenance expires. It can be extended, or the appliance can be replaced when desired.
  • How is support handled?
    • All support is handled through the OEM from which the EVO:RAIL HCIA (hyper-converged infrastructure appliance) was procured. This ensures that “end-to-end” support is provided through a single channel.
  • Who are the EVO:RAIL qualified partners?
    • The following partners were announced at VMworld: Dell, EMC, Fujitsu, Inspur, Net One Systems, Supermicro, Hitachi Data Systems, HP, NetApp
  • How much does an EVO:RAIL appliance cost?
    • Pricing will be set by qualified partners
  • I was told Support and Maintenance is for 3 years, what happens after 3 years?
    • You can renew your support and maintenance by 2 years at most (as far as I know).
    • If not renewed then the EVO:RAIL appliance will remain functioning, but entitlement to support is gone.
  • What if I buy a new appliance after 3 years, can I re-use the licenses that came with my EVO:RAIL appliance?
    • No, the licenses are directly tied to the appliance and cannot be transferred to any other appliance or hardware.
  • Will NSX work with EVO:RAIL?
    • EVO:RAIL uses vSphere 5.5 and Virtual SAN. Anything that works with that will work with EVO:RAIL. NSX has not been explicitly tested but I expect that this should be no problem.
  • Does it use VMware Update Manager (VUM) for updating/patching?
    • No, EVO:RAIL does not use VUM for updating and patching. It uses a new mechanism that was built from scratch and comes as part of the EVO:RAIL engine. This provides a simple updating and patching mechanism while avoiding the need for a Windows VM (VUM requires Windows). A sketch of the kind of vSphere API inventory call such a mechanism builds on is shown after this FAQ.
  • What kind of NIC card is included?
    • A dual-port 10GbE NIC per host. The majority of vendors will offer both SFP+ and RJ45. This means 8 x 10GbE switch ports are required per EVO:RAIL appliance!
  • Is there a physical switch included?
    • A physical switch is not part of the “recipe” VMware provided to qualified partners, but some may package one (or multiple) with it to simplify green field deployments.
  • What is MARVIN or Mystic ?
    • MARVIN (Modular Automated Rackable Virtual Infrastructure Node) was the codename used internally by VMware for EVO:RAIL. Mystic was the codename used by EMC. Both refer to EVO:RAIL.
  • Where does EVO:RAIL run?
    • EVO:RAIL runs on vCenter Server. vCenter Server is powered on automatically when the appliance is started, and the EVO:RAIL engine can then be used to configure the appliance.
  • Which version of vCenter Server do you use, the Windows version or the Appliance?
    • In order to simplify deployment EVO:RAIL uses the vCenter Server Appliance.
  • Can I use the vCenter Web Client to manage my VMs or do I need to use the EVO:RAIL engine?
    • You can use whatever you like to manage your VMs. Web Client is fully supported and configured for you!
  • Are there networking requirements?
    • IPv6 is required for configuration of the appliance and auto-discovery. Multicast traffic on L2 is required for Virtual SAN
  • …
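
For anyone reconciling the headline figures with the per-host resources listed above, here is a quick back-of-the-envelope sketch in Python. It is purely illustrative: the 2.1GHz base clock of the E5-2620 v2 is my assumption (the spec list above only says “E5-2620 6 core”), while the FTT=1 halving of usable capacity and the 2 vCPU / 4GB “general-purpose VM profile” come from the comments below.

    # Back-of-the-envelope check of the per-appliance figures quoted in the FAQ.
    hosts = 4
    sockets_per_host = 2
    cores_per_socket = 6
    base_clock_ghz = 2.1   # assumption: E5-2620 v2 base clock

    compute_ghz = hosts * sockets_per_host * cores_per_socket * base_clock_ghz
    memory_gb = hosts * 192
    raw_storage_tb = hosts * 3 * 1.2   # 3 x 1.2TB 10K RPM drives per host
    flash_tb = hosts * 0.4             # 1 x 400GB eMLC SSD per host
    switch_ports = hosts * 2           # 2 x 10GbE ports per host

    print(f"Compute:       ~{compute_ghz:.0f} GHz")   # ~101 GHz, quoted as 100GHz
    print(f"Memory:        {memory_gb} GB")           # 768 GB
    print(f"Raw storage:   {raw_storage_tb:.1f} TB")  # 14.4 TB
    print(f"Flash:         {flash_tb:.1f} TB")        # 1.6 TB
    print(f"10GbE ports:   {switch_ports}")           # 8 switch ports per appliance

    # ~13.1 TB remains once the VSAN datastore is formed and vCenter is deployed;
    # with FTT=1 every object is mirrored, so roughly half of that is usable.
    print(f"Usable @FTT=1: ~{13.1 / 2:.1f} TB")       # ~6.5 TB

    # Sanity check on the "around 100 server VMs" figure, using the 2 vCPU / 4 GB
    # general-purpose VM profile quoted from VMware's FAQ in the comments.
    vms = 100
    ratio = vms * 2 / (hosts * sockets_per_host * cores_per_socket)
    print(f"vCPU:pCore at {vms} VMs: ~{ratio:.1f}:1") # ~4.2:1 overcommit
    print(f"Memory per VM:           {memory_gb / vms:.2f} GB")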
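
EVO:RAIL’s updating engine itself is not publicly available, but as noted above it is a custom mechanism built on the vSphere APIs rather than on VUM. Purely as an illustration of the kind of inventory call such an engine starts from, here is a minimal pyVmomi sketch that lists the ESXi version and build of every host managed by a vCenter Server; the hostname and credentials are placeholders, and this is in no way the actual EVO:RAIL code.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder connection details; on an EVO:RAIL appliance this would be the
    # bundled vCenter Server Appliance.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcsa.example.local",
                      user="administrator@vsphere.local",
                      pwd="VMware1!",
                      sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            about = host.config.product  # vim.AboutInfo for this ESXi host
            print(f"{host.name}: {about.fullName} (build {about.build})")
    finally:
        Disconnect(si)

An updating engine would then compare those build numbers against a target image and remediate host by host; that orchestration is the part EVO:RAIL keeps inside its own engine.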

Some great EVO:RAIL links:

  • Introducing EVO:RAIL
  • EVO:RAIL configuration and management Demo
  • VMTN Community – EVO:RAIL
  • Linkedin Group – EVO:RAIL
  • VMware blog: VMware Horizon and EVO: RAIL – Value Add For Customers
  • Chad Sakac – VMworld 2014 – EVO:RAIL and EMC’s approach
  • Julian Wood – VMware Marvin comes alive as EVO:Rail, a Hyper-Converged Infrastructure Appliance
  • Chris Wahl – VMware announces software defined infrastructure with EVO:RAIL
  • Ivan Pepelnjak – VMware EVO:RAIL – One stop shop for your private cloud
  • Podcast on EVO:RAIL with Mike Laverick
  • EVO:RAIL engineering interview with Dave Shanley
  • EVO:RAIL vs VSAN Ready Node vs Component based
  • …

If you have any questions, feel free to drop them in comments section and I will do my best to answer them.


Comments

  1. Mike says

    2 September, 2014 at 17:50

    When will an OVA be released so that we home-lab users can play with the interface?

    • Duncan Epping says

      3 September, 2014 at 09:38

      I don’t expect this to happen. EVO:RAIL is very much an OEM program. The OEM is provided with a build and a recipe.

  2. Andrew Dauncey says

    2 September, 2014 at 19:53

    Will the OEM be able to customise the number of cores/memory/disk sizes?

    • Duncan Epping says

      3 September, 2014 at 10:10

      To be determined…

  3. Totie Bash says

    2 September, 2014 at 20:27

    “No, EVO:RAIL does not use VUM for updating and patching. It uses a new mechanism that was built from scratch and comes as part of the EVO:RAIL engine”… Is this a glimpse of the death of VUM on a C# vSphere Client/Windows box? I assume I am not alone and that a lot of people are waiting for the day when VUM is no longer tied to Windows and I can have multiple vSphere suites attach to just one VUM…

    • Duncan Epping says

      2 September, 2014 at 21:10

      Being worked on…

  4. sketch says

    2 September, 2014 at 20:29

    So, could this be considered a HARDWARE appliance? I’m guessing even if it was, we still couldn’t load Oracle on it(?). Also, is this just a one-stop shop for support? We could do the same thing with HP or IBM servers and customize it to our corp. requirements…

    • Duncan Epping says

      2 September, 2014 at 21:11

      No, if you ask me it is a hyperconverged infrastructure solution.

  5. Nate says

    2 September, 2014 at 20:48

    Do we refer to the configuration/simplified management software as EVO:RAIL also?

    • Duncan Epping says

      2 September, 2014 at 21:11

      Yes, EVO:RAIL engine

  6. Ralf says

    2 September, 2014 at 21:02

    2 x E5-2620 with 6 cores each per host is not that much compute power. Is this the entry-level configuration, and will others follow?

    • Duncan Epping says

      2 September, 2014 at 21:12

      It is about 100GHz of compute per appliance…

      • Ralf says

        2 September, 2014 at 21:40

        Yeah, maybe we are an exception. But we have a lot of VMs with 8-10 vCPUs. So we use pCPUs with 10+ Cores.

        • R says

          3 September, 2014 at 01:34

          But the question is: do you actually consume them or just overprovision?

          • Ralf says

            3 September, 2014 at 08:42

            We have systems that need the vCPUs. They are running typically end-of-the-month tasks.

      • Alexey says

        3 September, 2014 at 14:41

        4x E5-2695/7 v2 per appliance would provide more compute for half the sockets (and licenses). Then, 192 GB RAM per two sockets seems awfully small. Why not 256/384/512 GB with more powerful CPUs, not unlike Nutanix’s 3050/3060?

  7. Mark Gabryjelski says

    2 September, 2014 at 22:25

    So where does the EVO:RAIL engine actually run?
    4 x nodes get configured via EVO:RAIL interface, as well as vCenter.

    • Duncan Epping says

      2 September, 2014 at 23:45

      It runs within vCenter

      • Mark Gabryjelski says

        3 September, 2014 at 02:14

        ….so…..which came first? The chicken or the egg?
        Perhaps a post on how this actually works for those of us who have done this for the past 10 years?

        • Duncan Epping says

          3 September, 2014 at 09:48

          Not sure exactly what you are looking for; I can’t, however, share any factory build recipes, as these are only provided to qualified partners under NDA.

  8. Lewis says

    2 September, 2014 at 22:41

    I’m starting an EVO:RAIL forum to share some of the load here:

    http://www.itsupportforum.net/forum/virtualization/vmware/evorail/

    That way people can discuss each of these things to get a better understanding.

    • Duncan Epping says

      2 September, 2014 at 23:44

      There already is a forum… the VMTN Community one.

  9. Peter says

    3 September, 2014 at 00:01

    Will the VUM replacement be available separately as well, maybe for ESXi 6? It’s one thing holding back using vCSA, together with no supported backup/restore methods.

    • Duncan Epping says

      3 September, 2014 at 09:38

      Can’t comment on roadmap for vSphere 6.0

  10. Venkat says

    3 September, 2014 at 08:06

    What is the mechanism used in EVO Rail for the replacement of VUM?

    • Duncan Epping says

      3 September, 2014 at 09:38

      A custom-built mechanism using the vSphere APIs.

  11. Brian Suhr says

    3 September, 2014 at 13:39

    Hey Duncan,

    Can you confirm EVO:RAIL uses the vCSA for its vCenter?

    Also sounds like the EVO management layer & automation is a service running on the vCSA?

    Just trying to get the whole picture.

    Thanks,

    • Duncan Epping says

      3 September, 2014 at 19:56

      VCSA indeed, EVO:RAIL engine runs within the VCSA.

  12. Walbert Broeders says

    3 September, 2014 at 17:06

    Why should we go on with blade hosts and a SAN? Why shouldn’t we use EVO?

  13. Chris says

    3 September, 2014 at 19:02

    VC servers are already a resource hog in our environment; how many more resources will we need to add to the VC server to support EVO?

  14. Chris says

    3 September, 2014 at 19:05

    P.S. Is there a limit on how many appliances you can have per cluster? I.e., can you have 8 appliances configured for a 32-node cluster? Or is the max # of appliances below 8?

    • Duncan Epping says

      3 September, 2014 at 19:59

      4 appliances for now (16 hosts)

  15. Andrew Dauncey says

    3 September, 2014 at 19:54

    Interesting to see LogInsight bundled with it, but not vSOM (vSphere with Ops Manager).

    Why is LogInsight bundled, and why isn’t vSOM?

    • Duncan Epping says

      3 September, 2014 at 20:00

      Log Insight helps both customers and partners when it comes to troubleshooting etc. vSOM may be added in the future if there is customer demand for it.

  16. kcarlile says

    4 September, 2014 at 12:43

    2 10GbE ports per 4 hosts seems very, very low. I’ll grant that if you’re only using one brick, you don’t need a vmotion network, but I’d much rather see these with 40GbE. Is there upgrade potential in the boxes? If so, that makes it a very appealing solution in some ways.

    • Duncan Epping says

      4 September, 2014 at 14:37

      Actually there are 2 x 10GbE ports per individual host. With 4 hosts that means 80GbE per appliance.

  17. kcarlile says

    4 September, 2014 at 14:39

    Considering that I currently have 8 10GbE ports per node in my cluster and am planning a minimum of 2×40 in my next… still a bit low. But better than I thought.

    • Duncan Epping says

      4 September, 2014 at 22:43

      I wonder what you are running that drives that much traffic. I have not heard a single customer yet seeing this as a constraint.

  18. Raul Coria says

    4 September, 2014 at 16:50

    Is it possible to access and manage hosts and virtual machines from the vSphere Web Client, connecting directly to the vCSA or ESXi hosts? Or only from the EVO:RAIL webpage?

    • Duncan Epping says

      4 September, 2014 at 22:42

      Sure, you can use the EVO engine or vCenter client… your choice.

  19. Michael Munk Larsen says

    5 September, 2014 at 00:53

    Hmm, so you’re saying that the vCenter is hosted within the same cluster which it is managing?

    Is it possible to have the vCenter run on a management cluster which is not part of EVO?

    I like the whole concept of EVO, just not sure I want to run my vCenter within the cluster and on VSAN..

    • Duncan Epping says

      5 September, 2014 at 08:20

      That is the architecture for 1.0 indeed. This may, or may not, change in the future.

  20. Brad Ramsey says

    7 September, 2014 at 16:12

    13.1TB before taking FTT into consideration, right? So with FTT=1 we’re at about 6.5TB usable?

    • Duncan Epping says

      8 September, 2014 at 16:29

      Yes

  21. Rawl says

    8 September, 2014 at 13:29

    Is it possible to do centralized management of a traditional vSphere platform and an EVO:RAIL cluster together? Or must I discard the traditional vCenter and add ESXi hosts to the EVO:RAIL vCSA?

    If the vCSA doesn’t support Linked Mode, I suppose I can’t link EVO:RAIL to production vSphere. In that case, all remote and branch office (ROBO) deployments will be isolated clusters.

    • rawlcoria says

      16 September, 2014 at 10:27

      Any update? Thanks!!

      • Duncan Epping says

        16 September, 2014 at 11:26

        Each EVO:RAIL cluster has its own vCenter Server. That is the model with the current version. This means, with each 16 hosts a new vCenter Appliance is instantiated.

        You cannot use linked mode indeed, but they can all be part of the same SSO domain, and as such end up in the same Web Client if you want. That is standard vSphere functionality.

        • rawlcoria says

          16 September, 2014 at 11:39

          But can you add other ESXi hosts (non-EVO:RAIL) to this vCenter (EVO:RAIL) without any limitations?

          • Duncan says

            16 September, 2014 at 12:21

            In the current version we do not support this, although technically it should work

  22. Raph says

    11 September, 2014 at 10:59

    What happens if a customer does not have IPV6 on their network? “Are there networking requirements?
    IPv6 is required for configuration of the appliance and auto-discovery. Multicast traffic on L2 is required for Virtual SAN”

    • Duncan Epping says

      11 September, 2014 at 11:51

      With the current version that will mean that “auto-scale-out” will not work and that configuration will be more complex.

  23. jonesy777 says

    11 September, 2014 at 16:27

    Duncan, when will vendors begin to offer this solution?

    • Duncan Epping says

      12 September, 2014 at 12:29

      First should arrive next week I was told

      • jonesy777 says

        25 September, 2014 at 13:50

        Still waiting to hear what pricing will be on this. From the specs it looks like it will be in the neighborhood of $75-100K, but that is a pure guess. I have a situation that would be perfect for Evo Rail, but it is hard to talk to management about something without a price attached.

        • Kent says

          15 October, 2014 at 16:26

          There’s pricing for the Super Micro version: $160K at ShopBLT. Quite a bit more than I expected.

          • jonesy777 says

            15 October, 2014 at 16:41

            Wow, I’m going to have to agree; that is more than expected.

  24. John Wright says

    17 September, 2014 at 21:56

    Hi Duncan,

    VMware has come up with EVO RAIL in order to compete against hyperconvergence upstarts. However, its price will be around $1 million.

    http://www.enterprisetech.com/2014/08/25/vmware-takes-hyperconvergence-upstarts-evorail/#comment-287544

    Do you think there is a large market for EVO RAIL given its high price and also because rival companies can sell their products at a far more attractive price?

    SIncerely,
    John

    • Duncan Epping says

      18 September, 2014 at 20:41

      Not sure where the 1 million price comes from but I am pretty sure it is inaccurate.

      • Alexey says

        18 September, 2014 at 22:10

        Just a quick and dirty estimate: a single node (1/4 of appliance) should be about $30-40K in Dell prices. Throw in VMware and VSAN licenses and you get about $50-60K per 1/4 appliance, so about $200-240K per appliance. A full 16-node cluster would then indeed be about $0.8-1M – which is likely what was meant.
        Still, a customized VSAN config could give something similar for 1/2 the price, just saying…

        • Duncan Epping says

          19 September, 2014 at 08:20

          If that would be the case, then the big difference here is that the appliance includes 3 years of Support and Maintenance upfront. I think if you take all of the various parts into account the comparison will look different 🙂

  25. Eric says

    15 October, 2014 at 19:54

    Would it be accurate to say that the EVO RAIL engine can only manage a single vSphere cluster (4-16 nodes)? In other words, can you manage multiple 16 node EVO RAIL clusters from a single management interface?

    Also, are stretched (metro) clusters supported? My assumption is no. Thanks for the great post.

  26. Vikas says

    30 October, 2014 at 08:01

    You may want to add new qualified partners announced at VMworld, Barcelona.

  27. Trenton says

    4 December, 2014 at 19:53

    How many VMs can I run on one appliance?

    Would you mind going through the math on how 100 VMs were achieved, using the appliance specs as stated by VMware here http://www.vmware.com/files/pdf/products/evorail/vmware-evorail-faq.pdf .

    Based on this PDF each appliance would consist of the following:

    4-nodes * 2-sockets = 8 physical processors
    8 processors * 6-core each = 48 physical cores
    Enable hyperthreading = 96 virtual cores

    4-nodes * 192GB memory ea. = 768GB memory total

    Each VMware publicized virtual machine requires:
    “General-purpose VM profile: 2 vCPU, 4GB vMEM, 60GB of vDisk, with redundancy”

    If you have 96 virtual cores and each general purpose VM needs 2 vCPU how do you achieve 100 VMs?

    Thank you for the help on clearing this up.

    • Mike W (@IT_Muscle) says

      6 December, 2014 at 19:51

      You should be able to overprovision CPUs and memory easily enough. I have seen it done at up to 25 vCPUs per pCPU, but more commonly a 4:1 ratio is accepted (from what I have seen). You can find more about it here, courtesy of Scott Lowe: https://communities.vmware.com/servlet/JiveServlet/previewBody/21181-102-1-28328/vsphere-oversubscription-best-practices%5B1%5D.pdf

      • Duncan Epping says

        8 December, 2014 at 12:26

        exactly, 4:1 is even fairly conservative these days with the powerful processors Intel and AMD have.

  28. Oliver Wilks says

    5 December, 2014 at 00:55

    My question… I have just received some training on this at work for the Mystic platform, and so afterwards I went over to the VMware HOL to play with their EVO:RAIL vApp lab for a bit. I built the appliance, and it appears to have built it on the vCSA, although I am unable to determine where it is nested, since I do not see it in vCenter as an appliance on itself. My question is: where does the vCenter appliance get nested on the real appliance? Is it a hidden virtual machine not visible in inventory? Or does the lab I used have it nested somewhere else (like up one level)? If it is invisible to vCenter, then how do you administer that VM when there are problems?

    • Duncan Epping says

      8 December, 2014 at 12:25

      You were running it nested, and it indeed runs outside of that environment at that point for performance reasons (otherwise it would be running on top of nested ESXi). Normally it would run on the ESXi hosts that are part of the EVO box.

  29. FredK says

    10 December, 2014 at 17:50

    Hi,

    I got a question about this:

    Q. How is network traffic prioritized?
    A. To ensure vSphere vMotion traffic does not consume all available bandwidth on the 10GbE port, EVO:RAIL limits vMotion traffic to 4Gbps.

    How is that done? How do they limit the traffic to 4Gbps for vMotion?

    Thanks

    • Duncan Epping says

      17 December, 2014 at 17:59

      It is an option on the portgroup; there is a limit setting where you can define the max throughput.

  30. Jim says

    15 January, 2015 at 20:57

    In EVO:RAIL, for IPv6, is full IPv6 required or link-local IPv6?
