
Yellow Bricks

by Duncan Epping


vSAN

VMware EVO:RAIL FAQ

Duncan Epping · Sep 2, 2014 ·

Over the last couple of days the same VMware EVO:RAIL questions have kept popping up over and over again. I figured I would do a quick VMware EVO:RAIL Q&A post so that I can point people to it instead of constantly answering them on Twitter.

  • Can you explain what EVO:RAIL is?
    • EVO:RAIL is the next evolution of infrastructure building blocks for the SDDC. It delivers compute, storage and networking in a 2U / 4-node package with an intuitive interface that allows for full configuration within 15 minutes. The appliance bundles hardware + software + support/maintenance to simplify both procurement and support in a true “appliance” fashion. EVO:RAIL provides the density of blades with the flexibility of rack servers. Each appliance comes with 100GHz of compute power, 768GB of memory capacity and 14.4TB of raw storage capacity (plus 1.6TB of flash for IO acceleration purposes). For full details, read my intro post.
  • Where can I find the datasheet?
    • http://www.vmware.com/files/pdf/products/evo-rail/vmware-evo-rail-datasheet.pdf
  • What is the minimum number of EVO:RAIL hosts?
    • Minimum number is 4 hosts. Each appliance comes with 4 independent hosts, which means that 1 appliance is the start. It scales per appliance!
  • What is included with an EVO:RAIL appliance?
    • 4 independent hosts, each with the following resources:
      • 2 x E5-2620 6 core
      • 192GB Memory
      • 3 x 1.2TB 10K RPM Drive for VSAN
      • 1 x 400GB eMLC SSD for VSAN
      • 1 x ESXi boot device
      • 2 x 10GbE NIC ports (SFP+ / RJ45 can be selected)
      • 1 x IPMI port
    • vSphere Enterprise Plus
    • vCenter Server
    • Virtual SAN
    • Log Insight
    • Support and Maintenance for 3 years
  • What is the total available storage capacity?
    • After the VSAN datastore is formed and vCenter Server is installed / configured, there is about 13.1TB left.
  • How many VMs can I run on one appliance?
    • That will very much depend on the size of the virtual machines and the workload. We have been able to comfortably run 250 desktops on one appliance. With server VMs we ended up with around 100. However, again, this very much depends on things like workload, capacity, etc.
  • How many EVO:RAIL appliances can I scale to?
    • With the current release EVO:RAIL scales to 4 appliances (aka 16 hosts)
  • If licensing / maintenance / support is 3 years, what happens after?
    • After 3 years support/maintenance expires. It can be extended, or the appliance can be replaced when desired.
  • How is support handled?
    • All support is handled through the OEM the EVO:RAIL HCIA has been procured through. This ensures that “end to end” support will be provided through the same channel.
  • Who are the EVO:RAIL qualified partners?
    • The following partners were announced at VMworld: Dell, EMC, Fujitsu, Inspur, Net One Systems, Supermicro, Hitachi Data Systems, HP, NetApp
  • How much does an EVO:RAIL appliance cost?
    • Pricing will be set by qualified partners
  • I was told Support and Maintenance is for 3 years, what happens after 3 years?
    • You can renew your support and maintenance by at most 2 years (as far as I know).
    • If not renewed then the EVO:RAIL appliance will remain functioning, but entitlement to support is gone.
  • What if I buy a new appliance after 3 years, can I re-use the licenses that came with my old EVO:RAIL appliance?
    • No, the licenses are directly tied to the appliance and cannot be transferred to any other appliance or hardware.
  • Will NSX work with EVO:RAIL?
    • EVO:RAIL uses vSphere 5.5 and Virtual SAN. Anything that works with that will work with EVO:RAIL. NSX has not been explicitly tested but I expect that this should be no problem.
  • Does it use VMware Update Manager (VUM) for updating/patching?
    • No, EVO:RAIL does not use VUM for updating and patching. It uses a new mechanism which was built from scratch and comes as part of the EVO:RAIL engine. This is to provide a simple updating and patching mechanism, while avoiding the need for a Windows VM (VUM requires Windows).
  • What kind of NIC card is included?
    • A dual-port 10GbE NIC per host. The majority of vendors will offer both SFP+ and RJ45. This means 8 x 10GbE switch ports are required per EVO:RAIL appliance!
  • Is there a physical switch included?
    • A physical switch is not part of the “recipe” VMware provided to qualified partners, but some may package one (or multiple) with it to simplify green field deployments.
  • What is MARVIN or Mystic ?
    • MARVIN (Modular Automated Rackable Virtual Infrastructure Node) was the codename used by VMware internally for EVO:RAIL. Mystic was the codename used by EMC. Both of them refer to EVO:RAIL.
  • Where does EVO:RAIL run?
    • EVO:RAIL runs on vCenter Server. vCenter Server is powered on automatically when the appliance is started, and the EVO:RAIL engine can then be used to configure the appliance.
  • Which version of vCenter Server do you use, the Windows version or the Appliance?
    • In order to simplify deployment EVO:RAIL uses the vCenter Server Appliance.
  • Can I use the vCenter Web Client to manage my VMs or do I need to use the EVO:RAIL engine?
    • You can use whatever you like to manage your VMs. Web Client is fully supported and configured for you!
  • Are there networking requirements?
    • IPv6 is required for configuration of the appliance and auto-discovery. Multicast traffic on L2 is required for Virtual SAN
  • …

Some great EVO:RAIL links:

  • Introducing EVO:RAIL
  • EVO:RAIL configuration and management Demo
  • VMTN Community – EVO:RAIL
  • Linkedin Group – EVO:RAIL
  • VMware blog: VMware Horizon and EVO: RAIL – Value Add For Customers
  • Chad Sakac – VMworld 2014 – EVO:RAIL and EMC’s approach
  • Julian Wood – VMware Marvin comes alive as EVO:Rail, a Hyper-Converged Infrastructure Appliance
  • Chris Wahl – VMware announces software defined infrastructure with EVO:RAIL
  • Ivan Pepelnjak – VMware EVO:RAIL – One stop shop for your private cloud
  • Podcast on EVO:RAIL with Mike Laverick
  • EVO:RAIL engineering interview with Dave Shanley
  • EVO:RAIL vs VSAN Ready Node vs Component based
  • …

If you have any questions, feel free to drop them in comments section and I will do my best to answer them.

VMware / ecosystem / industry news flash… part 2

Duncan Epping · Sep 1, 2014 ·

There we go, part two of the VMware / ecosystem / industry news flash. I expected a lot of news around VMworld, as is traditionally the case. I hope the below is a good summary; these are the articles / announcements I read and found interesting. It is the Monday after VMworld and I figured I would get this out there, as I will be out for most of this week to recover.

  • Maginatics: A Virtual Filer for VMware’s Virtual SAN
    Last week I mentioned the Nexenta solution for VSAN… this week Maginatics is up. They also announced it last week, but somehow it fell through the cracks, so I figured I would list it this week. MSCP offers a distributed file system with global deduplication, multiple caching layers and Content Distribution Network logic built in.
  • VMware EVO:RAIL was of course all over the news, with these being my favorite posts: Chris Wahl, Julian Wood, Dell, and Chad Sakac.
    Do I really need to comment on this one? I am hoping everyone read my blog… Also, make sure to watch the demo!
  • Infinio announced version 2.0 of their acceleration platform
    A whole bunch of announcements around the 2.0 version of Infinio Accelerator. Support for Fibre Channel, iSCSI and FCoE is probably the biggest piece of functionality added. On top of that, the extended monitoring / reporting section is very handy: those who want to tune based on latency / IO information will be able to do so. There are some more features announced; make sure to read the announcement for the full details.
  • VMware joins Open Compute Project
    I was surprised about this announcement, did not know it was coming… but I am very excited. The OCP solution is interesting as it is highly optimized around efficiency / power consumption / rack units etc. I have looked at some of the configurations for Virtual SAN but the problem I saw was hardware compatibility / support. Hopefully with this announcement these constraints will be lifted soon! Definitely one I will be following with a lot of interest!
  • Nutanix announced a new round of funding: $140 million
    What more can I say than: congratulations! Hyper-converged infrastructure is hot, and Nutanix has a compelling solution for sure. $140 million (Series E) is significant, and I guess they are on their way to an IPO (rumours have been floating around for months now).

That was it for now.

Introducing VMware EVO:RAIL a new hyper-converged offering!

Duncan Epping · Aug 25, 2014 ·

About 18 months ago I was asked to be part of a very small team to build a prototype. Back then it was one developer (Dave Shanley) who did all the development, including the user experience aspect. I worked on architectural aspects, and we had an executive sponsor (Mornay van der Walt). After a couple of months we had something to show internally. In March of 2013, after showing the first prototype, we received the green light and the team expanded quickly. A small team within the SDDC Division’s Emerging Solutions Group was tasked with building something completely new, to enter a market where VMware had never gone before, to do something that would surprise many. The team was given the freedom to operate somewhat like a startup within VMware: run fast and hard, prototype, iterate, pivot when needed, with the goal of delivering a game-changing product by VMworld 2014. Today I have the pleasure to introduce this project to the world: VMware EVO:RAIL™.

EVO:RAIL – What’s in a name?
EVO represents a new family of ‘Evolutionary’ Hyper-Converged Infrastructure offerings from VMware. RAIL represents the first product within the EVO family that will ship during the second half of 2014. More on the meaning of RAIL towards the end of the post.

The Speculation is finally over!

Over the past 6-plus months there was a lot of speculation over Project Mystic and Project MARVIN. I’ve been wanting to write about this for so long now, but unfortunately couldn’t talk about it. The speculation is finally over with the announcement of EVO:RAIL, and you can expect multiple articles on this topic here in the upcoming weeks! So just to be clear: MARVIN = Mystic = EVO:RAIL

What is EVO:RAIL?

Simply put, EVO:RAIL is a Hyper-Converged Infrastructure Appliance (HCIA) offered by VMware through qualified EVO:RAIL partners, which include Dell, EMC, Fujitsu, Inspur, Net One Systems and Supermicro. This impressive list of partners will ensure EVO:RAIL has a global market reach from day one, as well as the assurance of the world-class customer support and services these partners are capable of providing. For those who are not familiar with hyper-converged infrastructure offerings: it combines compute, network and storage resources into a single unit of deployment. In the case of EVO:RAIL this is a 2U unit which contains 4 independent physical nodes.

evo:rail logo

But why a different type of hardware platform? What will EVO:RAIL bring to you as a customer? In my opinion EVO:RAIL has several major advantages over traditional infrastructure:

  • Software-Defined
  • Simplicity
  • Highly Resilient
  • Customer Choice

Software-Defined

EVO:RAIL is a scalable Software-Defined Data Center (SDDC) building block that delivers compute, networking, storage, and management to empower private/hybrid clouds, end-user computing, test/dev, remote and branch office environments, and small virtual private clouds. Building on the proven technology of VMware vSphere®, vCenter Server™, and VMware Virtual SAN™, EVO:RAIL delivers the first hyper-converged infrastructure appliance 100% powered by VMware software.

Simplicity Transformed

EVO:RAIL enables time to first VM in minutes once the appliance is racked, cabled, and powered on. VM creation is radically simplified via the EVO:RAIL management user interface, which offers easy VM deployment, one-click non-disruptive patches and upgrades, simplified management, and scale-out.

Highly Resilient by Design

Resilient appliance design starting with four independent hosts and a distributed Virtual SAN datastore ensures zero VM downtime during planned maintenance or during disk, network, or host failures.

Customer Choice

EVO:RAIL is delivered as a complete appliance solution with hardware, software, and support through qualified EVO:RAIL partners; customers choose an EVO:RAIL appliance from their preferred EVO:RAIL partner. This means a single point of contact to buy new equipment (single SKU includes all components), and a single point of contact for support.

So what will each appliance provide you with in terms of hardware resources? Each EVO:RAIL appliance has four independent nodes with dedicated compute, network, and storage resources and dual, redundant power supplies.

Each of the four EVO:RAIL nodes has (at a minimum):

  • Two Intel E5-2620 v2 six-core CPUs
  • 192GB of memory
  • One SLC SATADOM or SAS HDD as the ESXi™ boot device
  • Three SAS 10K RPM 1.2TB HDD for the VMware Virtual SAN™ datastore
  • One 400GB MLC enterprise-grade SSD for read/write cache
  • One Virtual SAN-certified pass-through disk controller
  • Two 10GbE NIC ports (configured for either 10GBase-T or SFP+ connections)
  • One 1GbE IPMI port for remote (out-of-band) management

All of this leads to a combined total of at least 100GHz of CPU resources, 768GB of memory resources, 14.4TB of storage capacity and 1.6TB of flash capacity used by Virtual SAN for storage acceleration services. Never seen one of these boxes before? Well, this is what they tend to look like; in this example you see a SuperMicro Twin configuration. As you can see from the rear view, there are 4 individual nodes with 2 power supplies, and in the front you see all the disks, which are connected in groups of 6 to each of the nodes!
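Those headline numbers follow directly from the per-node list above. As a quick sanity check, here is a sketch of the arithmetic; the one assumption I am adding is the E5-2620 v2's 2.1 GHz base clock, which is not stated in the post:

```python
# Per-appliance totals derived from the per-node specs above.
# Assumption: E5-2620 v2 base clock of 2.1 GHz (not stated in the post).

NODES = 4  # independent hosts per 2U appliance

cpu_ghz   = NODES * 2 * 6 * 2.1   # 2 CPUs x 6 cores x 2.1 GHz per node
memory_gb = NODES * 192           # 192 GB per node
hdd_tb    = NODES * 3 * 1.2       # 3 x 1.2 TB 10K RPM HDDs per node
ssd_tb    = NODES * 0.4           # 1 x 400 GB SSD per node

print(f"{cpu_ghz:.1f} GHz compute, {memory_gb} GB memory, "
      f"{hdd_tb:.1f} TB raw capacity, {ssd_tb:.1f} TB flash")
```

That works out to 100.8 GHz, 768 GB, 14.4 TB and 1.6 TB, which lines up with the figures quoted above.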

For those of you who read this far and are still wondering whether RAIL is an acronym: the short answer is no, it is not. The RAIL in EVO:RAIL simply represents the ‘rail mount’ attached to the 2U/4-node server platform that allows it to slide easily into a datacenter rack. One RAIL for one EVO:RAIL HCIA, which represents the smallest unit of measure with respect to compute, network, storage and management within the EVO product family.

By now you are probably all anxious to know what EVO:RAIL looks like. Before I show you, one more thing to know about EVO:RAIL… the user interface uses HTML5! So it works on any device, nice, right?

If you prefer a video over screenshots, make sure to visit the EVO:RAIL product page on vmware.com!

What do we do to get it up and running? First of all, rack the appliance, cable it up and power it on! Next, hit the management interface at https://<ip-address>:7443

evo:rail intro

Next you start entering the details of your environment; look at the following screenshot to get an idea of how easy it is! You can even define your own naming scheme and it will automatically apply that to joining hosts (both the current set, and any additional appliance added in the future).

evo:rail configure hostnames

Besides a naming scheme, EVO:RAIL allows you to configure the following:

  • IP addresses for Management, vMotion, Virtual SAN (by specifying a pool per traffic type, see screenshot below)
  • vCenter Server and ESXi passwords
  • Globals like: Time Zone, NTP Servers, DNS Servers, Centralized Logging (or configure Log Insight), Proxy
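To make concrete what "a pool per traffic type" plus a naming scheme amounts to, here is a minimal sketch of the idea in Python. This is purely illustrative: the pool ranges, the scheme string and the helper names are my own inventions, not the actual EVO:RAIL engine.

```python
import ipaddress

def expand_pool(start, end):
    """Expand an inclusive start-end range into individual addresses."""
    a, b = ipaddress.ip_address(start), ipaddress.ip_address(end)
    return [ipaddress.ip_address(i) for i in range(int(a), int(b) + 1)]

def configure_hosts(scheme, count, pools):
    """Apply a naming scheme (e.g. 'esxi-host{:02d}') to each joining
    host and hand out one address per traffic type from each pool."""
    expanded = {traffic: expand_pool(s, e) for traffic, (s, e) in pools.items()}
    return [
        {"hostname": scheme.format(i + 1),
         **{traffic: str(ips[i]) for traffic, ips in expanded.items()}}
        for i in range(count)
    ]

# Hypothetical pools -- one range per traffic type.
pools = {
    "management": ("192.168.10.1", "192.168.10.8"),
    "vmotion":    ("192.168.20.1", "192.168.20.8"),
    "vsan":       ("192.168.30.1", "192.168.30.8"),
}

for host in configure_hosts("esxi-host{:02d}", 4, pools):
    print(host)
```

A second appliance joining later would simply consume the next four entries from each pool, which is essentially what the "any additional appliance added in the future" behaviour described above boils down to.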

Believe me when I say that it does not get easier than this. Specify your IP ranges and globals once and never think about them anymore.

evo:rail configure networking

When you are done, EVO:RAIL will validate the configuration for you and then, when you are ready, apply it. Along the way it will indicate the current stage and provide an indication of how far along the configuration is.

evo:rail configuring

When it is done it will point you to the management interface, and from there you can start deploying workloads. Just to be clear, the EVO:RAIL interface is a simplified interface. If for any reason at all you feel the interface does not bring you the functionality required, you can switch to the vSphere Web Client, as that is fully supported!

evo:rail vm management

The interface will allow you to manage virtual machines in an easy way. It has pre-defined virtual machine sizes (small / medium / large) and even security profiles that can be applied to the virtual machine configuration!
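The post does not document what the small / medium / large presets translate to, but the mechanism is easy to picture. The numbers and names below are made up purely for illustration:

```python
# Hypothetical size presets -- the real EVO:RAIL values are not
# documented in this post; these numbers are illustrative only.
VM_PROFILES = {
    "small":  {"vcpus": 1, "memory_gb": 2, "disk_gb": 20},
    "medium": {"vcpus": 2, "memory_gb": 4, "disk_gb": 40},
    "large":  {"vcpus": 4, "memory_gb": 8, "disk_gb": 80},
}

def new_vm(name, size, security_profile=None):
    """Build a VM spec from a named preset, optionally tagging it
    with a security profile as the interface allows."""
    spec = {"name": name, **VM_PROFILES[size]}
    if security_profile:
        spec["security_profile"] = security_profile
    return spec

print(new_vm("web01", "medium", security_profile="hardened"))
```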

evo:rail vm creation

Of course, EVO:RAIL provides you monitoring capabilities, in the same easy fashion as everything else. Simple overview of what is there, what the usage is and what the state is.

evo:rail health

With that I think it is time to conclude this already lengthy blog post. I will however follow up on this shortly with a series that looks a bit more in-depth at some of the details around EVO:RAIL with a couple of core team members. I think it is fair to say that EVO:RAIL is an exciting development in the space of datacenter infrastructure, and more specifically in the world of hyper-convergence! If you are at VMworld and want to know more, visit the EVO:RAIL booth, the EVO:RAIL pavilion or one of the following sessions:

  • Software-Defined Data Center through Hyper-Converged Infrastructure (SDDC4245) Monday 25th August, 2:00 PM – 3:00 PM – Moscone South, Gateway 103 with Mornay van der Walt and Chris Wolf
  • VMware and Hyper-Converged Infrastructure (SDDC2095) Monday 25th of August, 4:00 PM – 5:00 PM – Moscone West, Room 3001 with Bryan Evans
  • VMware EVO:RAIL Technical Deepdive (SDDC1337) Tuesday 26th of August – 11:00 AM – Marriott, Yerba Buena Level, Salon 1 with Dave Shanley and Duncan Epping
  • VMware Customers Share Experiences and Requirements for Hyper-Converged (SDDC1818) – Tuesday 26th of August, 12:30 PM – 1:30 PM – Moscone West, Room 3014 with Bryan Evans and Michael McDonough
  • VMware EVO:RAIL Technical Deepdive (SDDC1337) Wednesday 27th of August – 11:30 AM – Marriott, Yerba Buena Level, Salon 7 with Dave Shanley and Duncan Epping

Paper copy of Essential Virtual SAN available as of today!

Duncan Epping · Jul 31, 2014 ·

3 weeks ago I announced the availability of the ebook version of “Essential Virtual SAN”. Today I have the pleasure to inform you that the paper copy has also hit the streets and is being shipped by Amazon as of today. So for those who were waiting to order until the paper version was available… Go here, order it today, and have it in house by tomorrow! The book covers the architecture of Virtual SAN, operational and architectural gotchas, sizing guidance, design examples and much more. Just pick it up!

Good Read: Virtual SAN data locality white paper

Duncan Epping · Jul 19, 2014 ·

I was reading the Virtual SAN Data Locality white paper. I think it is a well-written paper, and I really enjoyed it. I figured I would share the link with all of you and provide a short summary. (http://blogs.vmware.com/vsphere/files/2014/07/Understanding-Data-Locality-in-VMware-Virtual-SAN-Ver1.0.pdf)

The paper starts with an explanation of what data locality is (also referred to as “locality of reference”), and explains the different types of latency experienced in Server SAN solutions (network, SSD). It then explains how Virtual SAN caching works, how locality of reference is implemented within VSAN, and why VSAN does not move data around: the cost is high compared to the benefit. It also demonstrates how VSAN delivers consistent performance, even without a local read cache. The key word here is consistent performance, something that is not the case for all Server SAN solutions. In some cases, a significant performance degradation is experienced for minutes after a workload has been migrated. As hopefully all of you know, vSphere DRS runs every 5 minutes by default, which means that migrations can and will happen various times a day in most environments. (I have seen environments where 30 migrations a day were not uncommon.) The paper then explains where and when data locality can be beneficial, primarily when RAM is used and with specific use cases (like View), and then explains how CBRC aka View Accelerator (an in-RAM deduplicated read cache) could be used for this purpose. (It does not explain in depth how other Server SAN solutions leverage RAM for local read caching, but I am sure those vendors will have more detailed posts on that, which are worth reading!)

There are a couple of real gems in this paper, which I will probably read a few more times in the upcoming days!



About the Author

Duncan Epping is a Chief Technologist and Distinguished Engineering Architect at Broadcom. Besides writing on Yellow-Bricks, Duncan is the co-author of the vSAN Deep Dive and the vSphere Clustering Deep Dive book series. Duncan is also the host of the Unexplored Territory Podcast.
