
Yellow Bricks

by Duncan Epping



VMware EVO:RAIL use case: ROBO

Duncan Epping · Sep 8, 2014 ·

Something that came up a couple of days back was a question around how VMware EVO:RAIL fits the ROBO (remote office / branch office) use case. If you have watched the demo you will have seen that it is very simple to configure: it takes about 15 minutes to get up and running, and all you need to provide are details like IP ranges, subnet mask, gateway, and a couple of other globals.

This by itself makes EVO:RAIL a perfect fit for ROBO deployments… but there is more. When it comes to ROBO deployments, there are two more options for simplifying the rollout:

  1. Provide configuration details during the procurement process
  2. Specify configuration details in a file and inject it into the appliance before shipment to the remote office

[Screenshot: the EVO:RAIL configuration UI]

I won’t discuss option 1 in depth, as this will very much depend on how each of the EVO:RAIL Qualified Partners handles it on their website / during the procurement process. Basically, you provide your preferred server vendor with configuration details, they put these into a file called “default-config-static.json”, and that file is injected into the vCenter Server Appliance, which also runs the EVO:RAIL engine. For the hackers who want to play around with EVO:RAIL: note that the location and format of the JSON file may change with newer versions, so make sure to always use the latest and greatest. If these details have been filled out for you, all that is left to do is click through the configuration.

Option 2 is also very interesting if you ask me. If you look at EVO:RAIL as it stands today, you have the option to upload a JSON file when you hit the configuration screen (as shown in the screenshot above). This JSON file should contain all of your configuration details, and it allows you to configure EVO:RAIL with the click of a button. In other words: you ship the appliance to your remote office, you email them the JSON file (in a secure manner, hopefully) and ask them to click “upload configuration file”. They upload the file, run “Validate”, and probably fill out the password, as you don’t want to send that in clear text. That is it… Nice, right? :) Of course, if you want, you could even go as far as injecting the .json file into the vCenter Server Appliance yourself, but I am not sure whether that would be supported.
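As a rough illustration of option 2, the configuration file is plain JSON. The actual schema of “default-config-static.json” is not documented here and may change between releases, so every field name below is an assumption for illustration only. Note that the password is deliberately left out, matching the workflow above where the remote office enters it during the “Validate” step:

```python
import json

# Hypothetical EVO:RAIL-style configuration. Field names are illustrative
# assumptions, NOT the actual default-config-static.json schema.
config = {
    "global": {
        "subnet_mask": "255.255.255.0",
        "gateway": "192.168.10.1",
        "dns": ["192.168.10.2"],
    },
    "esxi_hosts": {
        # One IP per host; an appliance contains 4 independent hosts.
        "ip_range": {"start": "192.168.10.11", "end": "192.168.10.14"},
    },
    "vcenter": {"ip": "192.168.10.10"},
    # Passwords intentionally omitted: they are entered at the remote
    # office during "Validate" rather than shipped in clear text.
}

# Write the file that would be uploaded (or injected) into the appliance.
with open("default-config-static.json", "w") as f:
    json.dump(config, f, indent=2)
```

The point is simply that everything except the secrets travels ahead of time, so the on-site steps reduce to upload, validate, and type a password.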

As you can imagine, this greatly simplifies the deployment of EVO:RAIL as all it takes is just one click to configure, which is ideal for a ROBO scenario. Anyone can do it!

PuppetConf 2014 Ticket Giveaway, win a free ticket!

Duncan Epping · Sep 7, 2014 ·

Last week I received an email saying I could give away one free ticket to PuppetConf! I wish I could go myself, but unfortunately I have other commitments that week. PuppetConf 2014, the 4th annual IT automation event, takes place in San Francisco September 20-24.

Join the Puppet Labs community and over 2,000 IT pros for 150 track sessions and special events focused on DevOps, cloud automation and application delivery. The keynote speaker lineup includes tech professionals from DreamWorks Animation, Sony Computer Entertainment America, Getty Images and more.

If you’re interested in going to PuppetConf this year, I will be giving away one free ticket to a lucky winner who will get the chance to participate in educational sessions and hands-on labs, network with industry experts and explore San Francisco. Note that these tickets only cover the cost of the conference (a $970+ value), but you’ll need to cover your own travel and other expenses (discounted rates available). You can learn more about the conference at: 2014.puppetconf.com

Want a free ticket? It is really easy: just leave a comment here with your real(!) email address and I will pick a random winner on Friday the 12th of September. Easy, right?

Re: Re: The Rack Endgame: A New Storage Architecture For the Data Center

Duncan Epping · Sep 5, 2014 ·

I was reading Frank Denneman’s article about new datacenter architectures, which in its turn was a response to Stephen Foskett’s article on how the physical architecture of datacenter hardware should change. I recommend reading both articles for more background; they are excellent reads by themselves. (Gotta love these blogging debates.) Let’s start with an excerpt from each article that summarizes the post for those who don’t want to read it in full.

Stephen:
Top-of-rack flash and bottom-of-rack disk makes a ton of sense in a world of virtualized, distributed storage. It fits with enterprise paradigms yet delivers real architectural change that could “move the needle” in a way that no centralized shared storage system ever will. SAN and NAS aren’t going away immediately, but this new storage architecture will be an attractive next-generation direction!

If you look at what Stephen describes, I think it is more or less in line with what Intel is working towards. The Intel Rack Scale Architecture aims to disaggregate traditional server components and then aggregate them by type of resource, backed by a high-performance, optimized rack fabric enabled by the new photonics architecture Intel is currently working on. This is not the long-term future: Intel showcased this last year and said it would be available in 2015 / 2016.

Frank:
The hypervisor is rich with information, including a collection of tightly knit resource schedulers. It is the perfect place to introduce policy-based management engines. The hypervisor becomes a single control plane that manages both the resource as well as the demand. A single construct to automate instructions in a single language providing a correct Quality of Service model at application granularity levels. You can control resource demand and distribution from one single pane of management. No need to wait on the completion of the development cycles from each vendor.

There is a bit in Frank’s article as well where he talks about Virtual Volumes and VAAI: how long it took for all storage vendors to adopt VAAI, and how he believes the same may apply to Virtual Volumes. Frank aims more towards the hypervisor being the aggregator, instead of achieving this through changes in the physical space.

So what about Frank’s arguments? Well, Frank has a point with regards to VAAI adoption and the fact that some vendors took a long time to implement it. The reality, though, is that Virtual Volumes is going full steam ahead. With many storage vendors demoing it at VMworld in San Francisco last week, I have the distinct feeling that things will be different this time. Maybe timing is part of it, as it seems that many customers are at a crossroads and want to optimize their datacenter operations / architecture by adopting the SDDC, of which policy-based storage management happens to be a big chunk.

I agree with Frank that the hypervisor is perfectly positioned to be that control plane. However, in order to be the control plane of the future, there needs to be a way to connect “things” to it that allows for far better scale and more flexibility. VMware, if you ask me, has done that for many parts of the datacenter, but one aspect that still needs to be overhauled for sure is storage. VAAI was a great start, but VMFS simply has too many constraints and does not cater for granular control.

I feel that the datacenter will need to change on both ends in order to take the next step in the evolution towards the SDDC. Intel Rack Scale Architecture will allow for far greater scale and efficiency than ever seen before. But it will only be successful when the layer that sits on top has the ability to take all of these disaggregated resources, turn them into large shared pools, and assign resources in a policy-driven (and programmable) manner. Not just assign resources, but also allow you to specify what the level of availability (HA, DR, but also QoS) should be for whatever consumes those resources. Granularity is important here, and of course it shouldn’t stop with availability but should apply to any other (data) service one may require.

So where does what fit in? If you look at some of the initiatives that were revealed at VMworld, like Virtual Volumes, Virtual SAN and the vSphere APIs for IO Filters, you can see where the world is quickly moving: vSphere is truly becoming that control plane for all resources and will be able to provide you with end-to-end policy-driven management. In order to make all of this a reality, the current platform will need to change: changes that allow for more granularity / flexibility and higher scalability, and that is where all these (new) initiatives come into play. Some partners may take longer to adopt than others, especially those that require fundamental changes to the architecture of the underlying platforms (storage systems, for instance), but just like with VAAI I am certain that over time this will happen, as customers will drive this change by making decisions based on the availability of functionality.

Exciting times ahead if you ask me.

VMware EVO:RAIL demo

Duncan Epping · Sep 4, 2014 ·

I just wanted to share the VMware EVO:RAIL demo with my readers. I shared it on Twitter / LinkedIn but figured it made sense to have it here as well. The demo shows both the configuration and the management interface. Note that it normally takes less than 15 minutes to complete the configuration, but of course the video has been edited to keep it short and sweet… no point in watching a percentage-completed counter go up.

VMware EVO:RAIL FAQ

Duncan Epping · Sep 2, 2014 ·

Over the last couple of days the same VMware EVO:RAIL questions keep popping up. I figured I would do a quick VMware EVO:RAIL Q&A post so that I can point people to it instead of constantly answering them on Twitter.

  • Can you explain what EVO:RAIL is?
    • EVO:RAIL is the next evolution of infrastructure building blocks for the SDDC. It delivers compute, storage and networking in a 2U / 4-node package with an intuitive interface that allows for full configuration within 15 minutes. The appliance bundles hardware + software + support/maintenance to simplify both procurement and support in a true “appliance” fashion. EVO:RAIL provides the density of blades with the flexibility of rack servers. Each appliance comes with 100GHz of compute power, 768GB of memory capacity and 14.4TB of raw storage capacity (plus 1.6TB of flash for IO acceleration purposes). For full details, read my intro post.
  • Where can I find the datasheet?
    • http://www.vmware.com/files/pdf/products/evo-rail/vmware-evo-rail-datasheet.pdf
  • What is the minimum number of EVO:RAIL hosts?
    • The minimum is 4 hosts. Each appliance contains 4 independent hosts, so one appliance is the starting point. It scales per appliance!
  • What is included with an EVO:RAIL appliance?
    • 4 independent hosts each with the following resources
      • 2 x E5-2620 6 core
      • 192GB Memory
      • 3 x 1.2TB 10K RPM Drive for VSAN
      • 1 x 400GB eMLC SSD for VSAN
      • 1 x ESXi boot device
      • 2 x 10GbE NIC port (SFP / RJ45 can be selected)
      • 1 x IPMI port
    • vSphere Enterprise Plus
    • vCenter Server
    • Virtual SAN
    • Log Insight
    • Support and Maintenance for 3 years
  • What is the total available storage capacity?
    • After the VSAN datastore is formed and vCenter Server is installed / configured, about 13.1TB is left
  • How many VMs can I run on one appliance?
    • That very much depends on the size of the virtual machines and the workload. We have been able to comfortably run 250 desktops on one appliance; with server VMs we ended up at around 100. Again, this very much depends on workload, capacity requirements, etc.
  • How many EVO:RAIL appliances can I scale to?
    • With the current release, EVO:RAIL scales to 4 appliances (i.e. 16 hosts)
  • If licensing / maintenance / support is for 3 years, what happens after?
    • After 3 years, support/maintenance expires. It can be extended, or the appliance can be replaced if desired.
  • How is support handled?
    • All support is handled through the OEM from which the EVO:RAIL HCIA has been procured. This ensures that “end to end” support is provided through a single channel.
  • Who are the EVO:RAIL qualified partners?
    • The following partners were announced at VMworld: Dell, EMC, Fujitsu, Inspur, Net One Systems, Supermicro, Hitachi Data Systems, HP, NetApp
  • How much does an EVO:RAIL appliance cost?
    • Pricing will be set by qualified partners
  • I was told Support and Maintenance is for 3 years, what happens after 3 years?
    • You can renew your support and maintenance by at most 2 years (as far as I know).
    • If not renewed then the EVO:RAIL appliance will remain functioning, but entitlement to support is gone.
  • What if I buy a new appliance after 3 years, can I re-use my licenses that come with the EVO:RAIL appliance??
    • No, the licenses are directly tied to the appliance and cannot be transferred to any other appliance or hardware.
  • Will NSX work with EVO:RAIL?
    • EVO:RAIL uses vSphere 5.5 and Virtual SAN; anything that works with those will work with EVO:RAIL. NSX has not been explicitly tested, but I expect it should be no problem.
  • Does it use VMware Update Manager (VUM) for updating/patching?
    • No, EVO:RAIL does not use VUM for updating and patching. It uses a new mechanism, built from scratch, which comes as part of the EVO:RAIL engine. This provides a simple updating and patching mechanism while avoiding the need for a Windows VM (VUM requires Windows).
  • What kind of NIC card is included?
    • A dual-port 10GbE NIC per host; the majority of vendors will offer both SFP+ and RJ45. This means 8 x 10GbE switch ports are required per EVO:RAIL appliance!
  • Is there a physical switch included?
    • A physical switch is not part of the “recipe” VMware provided to qualified partners, but some may package one (or multiple) with it to simplify green field deployments.
  • What is MARVIN or Mystic ?
    • MARVIN (Modular Automated Rackable Virtual Infrastructure Node) was the codename used internally by VMware for EVO:RAIL. Mystic was the codename used by EMC. Both refer to EVO:RAIL.
  • Where does EVO:RAIL run?
    • EVO:RAIL runs on vCenter Server. vCenter Server is powered on automatically when the appliance is started, and the EVO:RAIL engine can then be used to configure the appliance.
  • Which version of vCenter Server do you use, the Windows version or the Appliance?
    • In order to simplify deployment EVO:RAIL uses the vCenter Server Appliance.
  • Can I use the vCenter Web Client to manage my VMs or do I need to use the EVO:RAIL engine?
    • You can use whatever you like to manage your VMs. Web Client is fully supported and configured for you!
  • Are there networking requirements?
    • IPv6 is required for configuration of the appliance and auto-discovery. Multicast traffic on L2 is required for Virtual SAN
  • …
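The appliance-level figures in the FAQ follow directly from the per-host spec multiplied by the four hosts. A quick sanity check (the 2.0 GHz base clock for the E5-2620 is my assumption; the quoted “100GHz” is evidently a rounded marketing figure for the ~96GHz this yields):

```python
# Per-appliance aggregates from the per-host spec listed in the FAQ.
hosts = 4
cores_per_host = 2 * 6      # 2 x E5-2620, 6 cores each
base_clock_ghz = 2.0        # E5-2620 base clock (assumed, not in the FAQ)
mem_per_host_gb = 192
hdd_per_host_tb = 3 * 1.2   # 3 x 1.2TB 10K RPM drives for VSAN
ssd_per_host_tb = 0.4       # 1 x 400GB eMLC SSD for VSAN

compute_ghz = hosts * cores_per_host * base_clock_ghz  # ~96, marketed as 100
memory_gb = hosts * mem_per_host_gb                    # 768
raw_hdd_tb = hosts * hdd_per_host_tb                   # 14.4 raw for VSAN
flash_tb = hosts * ssd_per_host_tb                     # 1.6 for IO acceleration

print(compute_ghz, memory_gb, round(raw_hdd_tb, 1), round(flash_tb, 1))
```

The gap between the 14.4TB raw figure and the ~13.1TB usable figure is what the VSAN datastore formation plus the vCenter Server installation consume.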

Some great EVO:RAIL links:

  • Introducing EVO:RAIL
  • EVO:RAIL configuration and management Demo
  • VMTN Community – EVO:RAIL
  • Linkedin Group – EVO:RAIL
  • VMware blog: VMware Horizon and EVO: RAIL – Value Add For Customers
  • Chad Sakac – VMworld 2014 – EVO:RAIL and EMC’s approach
  • Julian Wood – VMware Marvin comes alive as EVO:Rail, a Hyper-Converged Infrastructure Appliance
  • Chris Wahl – VMware announces software defined infrastructure with EVO:RAIL
  • Ivan Pepelnjak – VMware EVO:RAIL – One stop shop for your private cloud
  • Podcast on EVO:RAIL with Mike Laverick
  • EVO:RAIL engineering interview with Dave Shanley
  • EVO:RAIL vs VSAN Ready Node vs Component based
  • …

If you have any questions, feel free to drop them in comments section and I will do my best to answer them.
