Over the last couple of days the same VMware EVO:RAIL questions keep popping up. I figured I would do a quick VMware EVO:RAIL Q&A post so that I can point people to it instead of answering the same questions on Twitter over and over again.
- Can you explain what EVO:RAIL is?
- EVO:RAIL is the next evolution of infrastructure building blocks for the SDDC. It delivers compute, storage and networking in a 2U / 4-node package with an intuitive interface that allows for full configuration within 15 minutes. The appliance bundles hardware + software + support/maintenance to simplify both procurement and support in a true “appliance” fashion. EVO:RAIL provides the density of blades with the flexibility of rack servers. Each appliance comes with roughly 100GHz of compute power, 768GB of memory capacity and 14.4TB of raw storage capacity (plus 1.6TB of flash for IO acceleration purposes). For full details, read my intro post.
- Where can I find the datasheet?
- What is the minimum number of EVO:RAIL hosts?
- Minimum number is 4 hosts. Each appliance comes with 4 independent hosts, which means that 1 appliance is the start. It scales per appliance!
- What is included with an EVO:RAIL appliance?
- 4 independent hosts each with the following resources
- 2 x E5-2620 6 core
- 192GB Memory
- 3 x 1.2TB 10K RPM Drive for VSAN
- 1 x 400GB eMLC SSD for VSAN
- 1 x ESXi boot device
- 2 x 10GbE NIC port (SFP / RJ45 can be selected)
- 1 x IPMI port
- vSphere Enterprise Plus
- vCenter Server
- Virtual SAN
- Log Insight
- Support and Maintenance for 3 years
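As a quick sanity check of the appliance-level numbers, here is a small back-of-the-envelope calculation based on the per-host specs above. The 2.0GHz base clock for the E5-2620 is an assumption on my part; 4 hosts x 2 sockets x 6 cores x 2.0GHz comes out to roughly the 100GHz of compute quoted earlier.

```python
# Back-of-the-envelope EVO:RAIL appliance capacity, derived from the per-host
# specs listed above. The 2.0GHz base clock for the E5-2620 is an assumption.
HOSTS_PER_APPLIANCE = 4
SOCKETS_PER_HOST = 2
CORES_PER_SOCKET = 6
GHZ_PER_CORE = 2.0          # assumed E5-2620 base clock
MEM_GB_PER_HOST = 192
HDD_TB_PER_HOST = 3 * 1.2   # 3 x 1.2TB 10K RPM drives for VSAN
SSD_TB_PER_HOST = 0.4       # 1 x 400GB eMLC SSD for VSAN caching

compute_ghz = HOSTS_PER_APPLIANCE * SOCKETS_PER_HOST * CORES_PER_SOCKET * GHZ_PER_CORE
memory_gb = HOSTS_PER_APPLIANCE * MEM_GB_PER_HOST
raw_hdd_tb = HOSTS_PER_APPLIANCE * HDD_TB_PER_HOST
flash_tb = HOSTS_PER_APPLIANCE * SSD_TB_PER_HOST

print(f"Compute : ~{compute_ghz:.0f} GHz")   # ~96 GHz, marketed as "100GHz"
print(f"Memory  : {memory_gb} GB")           # 768 GB
print(f"Raw HDD : {raw_hdd_tb:.1f} TB")      # 14.4 TB (VSAN capacity tier)
print(f"Flash   : {flash_tb:.1f} TB")        # 1.6 TB (VSAN caching tier)
```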
- What is the total available storage capacity?
- After the VSAN datastore is formed and vCenter Server is installed and configured, there is about 13.1TB of raw capacity left (before any Virtual SAN redundancy is applied).
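For context, here is a rough sketch of how that capacity works out once Virtual SAN redundancy is applied. With the default policy of FTT=1 each object is mirrored, which effectively halves usable capacity, as also confirmed in the comments below.

```python
# Rough usable-capacity math for one EVO:RAIL appliance.
raw_tb = 14.4             # 4 hosts x 3 x 1.2TB capacity drives
after_overhead_tb = 13.1  # left after the VSAN datastore is formed and vCenter is deployed
ftt = 1                   # default "failures to tolerate" policy, i.e. one mirror copy

usable_tb = after_overhead_tb / (ftt + 1)
print(f"Usable with FTT={ftt}: ~{usable_tb:.1f} TB")  # ~6.5 TB
```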
- How many VMs can I run on one appliance?
- That very much depends on the size of the virtual machines and the workload. We have been able to comfortably run 250 desktops on one appliance. With server VMs we ended up at around 100. Again, the actual number depends heavily on workload, capacity requirements, etc.
- How many EVO:RAIL appliance can I scale to?
- With the current release EVO:RAIL scales to 4 appliances (i.e. 16 hosts).
- If licensing / maintenance / support is for 3 years, what happens after?
- After 3 years support/maintenance expires. It can be extended, or the appliance can be replaced when desired.
- How is support handled?
- All support is handled through the OEM the EVO:RAIL HCIA has been procured through. This ensures that “end to end” support will be provided through the same channel.
- Who are the EVO:RAIL qualified partners?
- The following partners were announced at VMworld: Dell, EMC, Fujitsu, Inspur, Net One Systems, Supermicro, Hitachi Data Systems, HP, NetApp
- How much does an EVO:RAIL appliance cost?
- Pricing will be set by qualified partners
- I was told Support and Maintenance is for 3 years, what happens after 3 years?
- You can renew your support and maintenance for at most 2 additional years (as far as I know).
- If not renewed then the EVO:RAIL appliance will remain functioning, but entitlement to support is gone.
- What if I buy a new appliance after 3 years, can I re-use the licenses that came with my EVO:RAIL appliance?
- No, the licenses are directly tied to the appliance and cannot be transferred to any other appliance or hardware.
- Will NSX work with EVO:RAIL?
- EVO:RAIL uses vSphere 5.5 and Virtual SAN. Anything that works with those will work with EVO:RAIL. NSX has not been explicitly tested, but I expect it should be no problem.
- Does it use VMware Update Manager (VUM) for updating/patching?
- No, EVO:RAIL does not use VUM for updating and patching. It uses a new mechanism that was built from scratch and comes as part of the EVO:RAIL engine. This provides a simple updating and patching mechanism while avoiding the need for a Windows VM (VUM requires Windows).
- What kind of NIC card is included?
- A dual-port 10GbE NIC per host. The majority of vendors will offer both SFP+ and RJ45. This means 8 x 10GbE switch ports are required per EVO:RAIL appliance!
- Is there a physical switch included?
- A physical switch is not part of the “recipe” VMware provides to qualified partners, but some may package one (or multiple) with it to simplify greenfield deployments.
- What is MARVIN or Mystic ?
- MARVIN (Modular Automated Rackable Virtual Infrastructure Node) was the codename used internally by VMware for EVO:RAIL. Mystic was the codename used by EMC. Both refer to EVO:RAIL.
- Where does EVO:RAIL run?
- EVO:RAIL runs on vCenter Server. vCenter Server is powered on automatically when the appliance is started, and the EVO:RAIL engine can then be used to configure the appliance.
- Which version of vCenter Server do you use, the Windows version or the Appliance?
- In order to simplify deployment EVO:RAIL uses the vCenter Server Appliance.
- Can I use the vCenter Web Client to manage my VMs or do I need to use the EVO:RAIL engine?
- You can use whatever you like to manage your VMs. Web Client is fully supported and configured for you!
- Are there networking requirements?
- IPv6 is required for configuration of the appliance and auto-discovery. Multicast traffic on L2 is required for Virtual SAN
- …
Some great EVO:RAIL links:
- Introducing EVO:RAIL
- EVO:RAIL configuration and management Demo
- VMTN Community – EVO:RAIL
- Linkedin Group – EVO:RAIL
- VMware blog: VMware Horizon and EVO: RAIL – Value Add For Customers
- Chad Sakac – VMworld 2014 – EVO:RAIL and EMC’s approach
- Julian Wood – VMware Marvin comes alive as EVO:Rail, a Hyper-Converged Infrastructure Appliance
- Chris Wahl – VMware announces software defined infrastructure with EVO:RAIL
- Ivan Pepelnjak – VMware EVO:RAIL – One stop shop for your private cloud
- Podcast on EVO:RAIL with Mike Laverick
- EVO:RAIL engineering interview with Dave Shanley
- EVO:RAIL vs VSAN Ready Node vs Component based
- …
If you have any questions, feel free to drop them in comments section and I will do my best to answer them.
Mike says
When will an OVA be released so that home-lab users like us can play with the interface?
Duncan Epping says
I don’t expect this to happen. EVO:RAIL is very much an OEM program. The OEM is provided with a build and a recipe.
Andrew Dauncey says
Will the OEM be able to customise the number of cores/memory/disk sizes?
Duncan Epping says
To be determined…
Totie Bash says
“No, EVO:RAIL does not use VUM for updating and patching. It uses a new mechanism that was built from scratch and comes as part of the EVO:RAIL engine”… Is this a glimpse of the death of VUM on a C# vSphere Client / Windows box? I assume I am not alone and that a lot of people are waiting for the day when VUM is no longer tied to Windows and I can have multiple vSphere suites attach to just one VUM…
Duncan Epping says
Being worked on…
sketch says
So, could this be considered a HARDWARE appliance? I’m guessing even if it was, we still couldn’t load Oracle on it(?) Also, is this just a one-stop shop for support? We could do the same thing with HP or IBM servers and customize it to our corporate requirements…
Duncan Epping says
No, if you ask me it is a: hyperconverged infrastructure solution
Nate says
Do we refer to the configuration/simplified management software as EVO:RAIL also?
Duncan Epping says
Yes, EVO:RAIL engine
Ralf says
2 x E5-2620 with 6 cores per host is not that much compute power. Is this the entry-level configuration and will others follow?
Duncan Epping says
It is about 100GHz of compute per appliance…
Ralf says
Yeah, maybe we are an exception. But we have a lot of VMs with 8-10 vCPUs. So we use pCPUs with 10+ Cores.
R says
But the question is: do you actually consume them or just overprovision?
Ralf says
We have systems that need the vCPUs. They are running typically end-of-the-month tasks.
Alexey says
4 x E5-2695/7 v2 per appliance would provide more compute for half the sockets (and licenses). Also, 192GB RAM per two sockets seems awfully small. Why not 256/384/512GB with more powerful CPUs, not unlike Nutanix’s 3050/3060?
Mark Gabryjelski says
So where does the EVO:RAIL engine actually run?
The 4 nodes get configured via the EVO:RAIL interface, as well as vCenter.
Duncan Epping says
It runs within vCenter
Mark Gabryjelski says
….so…..which came first? The chicken or the egg?
Perhaps a post on how this actually works for those of us who have done this for the past 10 years?
Duncan Epping says
Not sure what exactly you are looking for, I can’t however share any factory build recipes as these are only provided to qualified partners under NDA.
Lewis says
I’m starting an EVO:RAIL forum to share some of the load here:
http://www.itsupportforum.net/forum/virtualization/vmware/evorail/
That way people can discuss each of these things to get a better understanding.
Duncan Epping says
There already is a forum… the VMTN Community one.
Peter says
Will the VUM replacement be available separately as well, maybe for ESXi 6? It’s one thing holding back using vCSA, together with no supported backup/restore methods.
Duncan Epping says
Can’t comment on roadmap for vSphere 6.0
Venkat says
What is the mechanism used in EVO Rail for the replacement of VUM?
Duncan Epping says
A custom-built mechanism using the vSphere APIs.
Brian Suhr says
Hey Duncan,
Can you confirm EVO:RAIL uses the vCSA for its vCenter?
Also sounds like the EVO management layer & automation is a service running on the vCSA?
Just trying to get the whole picture.
Thanks,
Duncan Epping says
VCSA indeed, EVO:RAIL engine runs within the VCSA.
Walbert Broeders says
Why should we go on with blade hosts and a SAN? Why shouldn’t we use EVO?
Chris says
vCenter Servers are already a resource hog in our environment; how many more resources will we need to add to the vCenter Server to support EVO?
Chris says
P.S. Is there a limit on how many appliances you can have per cluster? I.e., can you have 8 appliances configured as a 32-node cluster? Or is the max number of appliances below 8?
Duncan Epping says
4 appliances for now (16 hosts)
Andrew Dauncey says
Interesting to see LogInsight bundled with it, but not vSOM (vSphere with Ops Manager).
Why is LogInsight bundled, and why isn’t vSOM?
Duncan Epping says
Log Insight helps both customers and partners when it comes to troubleshooting. vSOM may be added in the future if there is customer demand for it.
kcarlile says
2 10GbE ports per 4 hosts seems very, very low. I’ll grant that if you’re only using one brick, you don’t need a vmotion network, but I’d much rather see these with 40GbE. Is there upgrade potential in the boxes? If so, that makes it a very appealing solution in some ways.
Duncan Epping says
Actually, there are 2 x 10GbE ports per individual host. With 4 hosts that means 80GbE per appliance.
kcarlile says
Considering that I currently have 8 10GbE ports per node in my cluster and am planning a minimum of 2×40 in my next… still a bit low. But better than I thought.
Duncan Epping says
I wonder what you are running that drives that much traffic. I have not heard a single customer yet seeing this as a constraint.
Raul Coria says
Is it possible to access and manage hosts and virtual machines from the vSphere Web Client, connecting directly to the vCSA or ESXi hosts? Or only from the EVO:RAIL web page?
Duncan Epping says
Sure, you can use the EVO engine or vCenter client… your choice.
Michael Munk Larsen says
Hmm, so you’re saying that the vCenter is hosted within the same cluster it is managing?
Is it possible to have the vCenter run on a management cluster that is not part of EVO?
I like the whole concept of EVO, just not sure I want to run my vCenter within the cluster and on VSAN..
Duncan Epping says
That is the architecture for 1.0 indeed. This may, or may not, change in the future.
Brad Ramsey says
13.1TB before taking FTT into consideration, right? So with FTT=1 we’re at about 6.5TB usable?
Duncan Epping says
Yes
Rawl says
Is it possible to do centralized management of a traditional vSphere platform and an EVO:RAIL cluster together? Or must I discard the traditional vCenter and add the ESXi hosts to the EVO:RAIL vCSA?
If the vCSA doesn’t support Linked Mode, I suppose I can’t link EVO:RAIL to the production vSphere environment. In that case, all remote and branch office (ROBO) sites would be isolated clusters.
rawlcoria says
Any update? Thanks!!
Duncan Epping says
Each EVO:RAIL cluster has its own vCenter Server. That is the model with the current version. This means, with each 16 hosts a new vCenter Appliance is instantiated.
You cannot use linked mode indeed, but they can all be part of the same SSO domain, and as such end up in the same Web Client if you want. That is standard vSphere functionality.
rawlcoria says
But can you add other (non-EVO:RAIL) ESXi hosts to this EVO:RAIL vCenter without any limitations?
Duncan says
In the current version we do not support this, although technically it should work
Raph says
What happens if a customer does not have IPv6 on their network? “Are there networking requirements?
IPv6 is required for configuration of the appliance and auto-discovery. Multicast traffic on L2 is required for Virtual SAN”
Duncan Epping says
With the current version that will mean that “auto-scale-out” will not work and that configuration will be more complex.
jonesy777 says
Duncan, when will vendors begin to offer this solution?
Duncan Epping says
The first should arrive next week, I was told.
jonesy777 says
Still waiting to hear what the pricing will be on this. From the specs it looks like it’s in the neighborhood of $75-100K, but that is a pure guess. I have a situation that would be perfect for EVO:RAIL, but it is hard to talk to management about something without a price attached.
Kent says
There’s pricing for the Super Micro version: $160K at ShopBLT. Quite a bit more than I expected.
jonesy777 says
Wow, I’m going to have to agree; that is more than expected.
John Wright says
Hi Duncan,
VMware has come up with EVO:RAIL in order to compete against hyperconvergence upstarts. However, its price will be around $1 million.
http://www.enterprisetech.com/2014/08/25/vmware-takes-hyperconvergence-upstarts-evorail/#comment-287544
Do you think there is a large market for EVO:RAIL given its high price, and also because rival companies can sell their products at a far more attractive price?
SIncerely,
John
Duncan Epping says
Not sure where the 1 million price comes from but I am pretty sure it is inaccurate.
Alexey says
Just a quick and dirty estimate: a single node (1/4 of appliance) should be about $30-40K in Dell prices. Throw in VMware and VSAN licenses and you get about $50-60K per 1/4 appliance, so about $200-240K per appliance. A full 16-node cluster would then indeed be about $0.8-1M – which is likely what was meant.
Still, a customized VSAN config could give something similar for 1/2 the price, just saying…
Duncan Epping says
If that were the case, then the big difference here is that the appliance includes 3 years of Support and Maintenance up front. I think if you take all of the various parts into account the comparison will look different 🙂
Eric says
Would it be accurate to say that the EVO RAIL engine can only manage a single vSphere cluster (4-16 nodes)? In other words, can you manage multiple 16 node EVO RAIL clusters from a single management interface?
Also, are stretched (metro) clusters supported? My assumption is no. Thanks for the great post.
Vikas says
You may want to add new qualified partners announced at VMworld, Barcelona.
Trenton says
How many VMs can I run on one appliance?
Would you mind going through the math on how 100 VMs were achieved using the appliance specs as stated by VMware here: http://www.vmware.com/files/pdf/products/evorail/vmware-evorail-faq.pdf
Based on this PDF each appliance would consist of the following:
4-nodes * 2-sockets = 8 physical processors
8 processors * 6-core each = 48 physical cores
Enable hyperthreading = 96 virtual cores
4-nodes * 192GB memory ea. = 768GB memory total
Each VMware publicized virtual machine requires:
“General-purpose VM profile: 2 vCPU, 4GB vMEM, 60GB of vDisk, with redundancy”
If you have 96 virtual cores and each general purpose VM needs 2 vCPU how do you achieve 100 VMs?
Thank you for the help on clearing this up.
Mike W (@IT_Muscle) says
You should be able to overprovision CPUs and memory easily enough. I have seen it done at up to 25 vCPUs per pCPU, but more commonly a 4:1 ratio is accepted (from what I have seen). You can find more about it here, courtesy of Scott Lowe: https://communities.vmware.com/servlet/JiveServlet/previewBody/21181-102-1-28328/vsphere-oversubscription-best-practices%5B1%5D.pdf
Duncan Epping says
Exactly, 4:1 is even fairly conservative these days with the powerful processors Intel and AMD have.
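To make the overcommit math concrete, here is a small back-of-the-envelope calculation based on the appliance specs Trenton listed and the general-purpose VM profile (2 vCPU, 4GB vMEM). It shows that 100 such VMs work out to roughly a 4:1 vCPU-to-physical-core ratio, which is exactly the kind of consolidation ratio discussed above.

```python
# vCPU and memory overcommit math for one EVO:RAIL appliance,
# using the "general-purpose VM profile" (2 vCPU, 4GB vMEM) quoted above.
physical_cores = 4 * 2 * 6             # 4 hosts x 2 sockets x 6 cores = 48
logical_threads = physical_cores * 2   # 96 with hyperthreading enabled
memory_gb = 4 * 192                    # 768 GB per appliance

vms = 100
vcpus = vms * 2
vmem_gb = vms * 4

print(f"vCPU : core ratio   = {vcpus / physical_cores:.1f} : 1")   # ~4.2 : 1
print(f"vCPU : thread ratio = {vcpus / logical_threads:.1f} : 1")  # ~2.1 : 1
print(f"Memory committed    = {vmem_gb} / {memory_gb} GB")         # 400 / 768 GB
```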
Oliver Wilks says
My question: I have just received some training on this at work for the Mystic platform, and afterwards I went over to the VMware HOL to play with their EVO:RAIL vApp lab for a bit. So I built the appliance, and it appears to have been built on the vCSA, although I am unable to determine where it is nested since I do not see it in vCenter as an appliance itself. My question is, where does the vCenter appliance get nested on the real appliance? Is it a hidden virtual machine not visible in inventory? Or does the lab I used have it nested somewhere else (like one level up)? If it is invisible to vCenter, then how do you administer that VM when there are problems?
Duncan Epping says
You were running it nested, and in that case it indeed runs outside of that environment for performance reasons (otherwise it would be running on top of nested ESXi). Normally it would run on the ESXi hosts that are part of the EVO box.
FredK says
Hi,
i got a question about this :
Q. How is network traffic prioritized?
A. To ensure vSphere vMotion traffic does not consume all available bandwidth on the 10GbE port, EVO:RAIL limits vMotion traffic to 4Gbps.
How is it done? How do they limit the traffic to 4Gbps for vMotion?
Thanks
Duncan Epping says
It is an option on the portgroup; it has a limit setting where you can define the maximum throughput.
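EVO:RAIL configures this limit for you, but for readers curious what such a portgroup-level limit can look like, below is a hedged pyVmomi sketch that enables traffic shaping on a standard vSwitch portgroup and caps it at roughly 4Gbps. The vCenter address, credentials, portgroup name ("vMotion") and vSwitch name are illustrative assumptions, and this is not claimed to be the exact mechanism the EVO:RAIL engine uses; note also that standard vSwitch traffic shaping applies to outbound traffic only.

```python
# Illustrative only: cap traffic on a standard vSwitch portgroup at ~4Gbps using
# pyVmomi traffic shaping. Names, credentials and VLAN ID below are assumptions;
# the EVO:RAIL engine applies its own equivalent limit automatically.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = view.view[0]  # first ESXi host; select the correct one in a real environment

shaping = vim.host.NetworkPolicy.TrafficShapingPolicy(
    enabled=True,
    averageBandwidth=4 * 1000 * 1000 * 1000,  # 4 Gbps; the API expects bits per second
    peakBandwidth=4 * 1000 * 1000 * 1000,
    burstSize=100 * 1024 * 1024)              # 100 MB burst size, arbitrary example value

spec = vim.host.PortGroup.Specification(
    name="vMotion", vlanId=0, vswitchName="vSwitch0",
    policy=vim.host.NetworkPolicy(shapingPolicy=shaping))

host.configManager.networkSystem.UpdatePortGroup("vMotion", spec)
Disconnect(si)
```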
Jim says
In EVO:RAIL, for IPv6, is full IPv6 required or just link-local IPv6?