
Yellow Bricks

by Duncan Epping


hyperconverged

Liking the VMware EVO:RAIL look? How about a desktop / phone wallpaper?

Duncan Epping · Aug 31, 2014 ·

Dave Shanley (lead engineer for VMware EVO:RAIL) dropped me an email with an awesome looking wallpaper for desktops and smartphones. I asked him if I could share it with the world, and it is needless to say what the answer was. Grab ’em below while they are still hot :). Thanks Dave! Note that each pic below links to Flickr (so click it), where various resolutions are available!

Desktop wallpaper:

evo:rail desktop wallpaper

Smart phone (optimized for iPhone 5s):

evo:rail smartphone wallpaper

Introducing VMware EVO:RAIL a new hyper-converged offering!

Duncan Epping · Aug 25, 2014 ·

About 18 months ago I was asked to be part of a very small team to build a prototype. Back then it was one developer (Dave Shanley) who did all the development, including the user experience aspect. I worked on the architectural aspects, and we had an executive sponsor (Mornay van der Walt). After a couple of months we had something to show internally. In March of 2013, after showing the first prototype, we received the green light and the team expanded quickly. A small team within the SDDC Division’s Emerging Solutions Group was tasked with building something completely new, to enter a market where VMware had never gone before, to do something that would surprise many. The team was given the freedom to operate somewhat like a startup within VMware; run fast and hard, prototype, iterate, pivot when needed, with the goal of delivering a game-changing product by VMworld 2014. Today I have the pleasure to introduce this project to the world: VMware EVO:RAIL™.

EVO:RAIL – What’s in a name?
EVO represents a new family of ‘Evolutionary’ Hyper-Converged Infrastructure offerings from VMware. RAIL represents the first product within the EVO family that will ship during the second half of 2014. More on the meaning of RAIL towards the end of the post.

The Speculation is finally over!

Over the past 6-plus months there was a lot of speculation over Project Mystic and Project MARVIN. I’ve been wanting to write about this for so long now, but unfortunately couldn’t talk about it. The speculation is finally over with the announcement of EVO:RAIL, and you can expect multiple articles on this topic here in the upcoming weeks! So just to be clear: MARVIN = Mystic = EVO:RAIL

What is EVO:RAIL?

Simply put, EVO:RAIL is a Hyper-Converged Infrastructure Appliance (HCIA) offering by VMware, delivered through qualified EVO:RAIL partners: Dell, EMC, Fujitsu, Inspur, Net One Systems and SuperMicro. This impressive list of partners ensures EVO:RAIL has a global market reach from day one, as well as the assurance of the world-class customer support and services these partners provide. For those who are not familiar with hyper-converged infrastructure offerings: it combines Compute, Network and Storage resources into a single unit of deployment. In the case of EVO:RAIL this is a 2U unit which contains 4 independent physical nodes.

evo:rail logo

But why a different type of hardware platform? What will EVO:RAIL bring to you as a customer? In my opinion EVO:RAIL has several major advantages over traditional infrastructure:

  • Software-Defined
  • Simplicity
  • Highly Resilient
  • Customer Choice

Software-Defined

EVO:RAIL is a scalable Software-Defined Data Center (SDDC) building block that delivers compute, networking, storage, and management to empower private/hybrid-cloud, end-user computing, test/dev, remote and branch office environments, and small virtual private clouds. Building on the proven technology of VMware vSphere®, vCenter Server™, and VMware Virtual SAN™, EVO:RAIL delivers the first hyper-converged infrastructure appliance 100% powered by VMware software.

Simplicity Transformed

EVO:RAIL brings time to value down to minutes: the first VM can be running shortly after the appliance is racked, cabled and powered on. VM creation is radically simplified via the EVO:RAIL management user interface, which also provides easy VM deployment, one-click non-disruptive patches and upgrades, and simplified management and scale-out.

Highly Resilient by Design

The resilient appliance design, starting with four independent hosts and a distributed Virtual SAN datastore, ensures zero VM downtime during planned maintenance or during disk, network, or host failures.

Customer Choice

EVO:RAIL is delivered as a complete appliance solution with hardware, software, and support through qualified EVO:RAIL partners; customers choose an EVO:RAIL appliance from their preferred EVO:RAIL partner. This means a single point of contact to buy new equipment (single SKU includes all components), and a single point of contact for support.

So what will each appliance provide you with in terms of hardware resources? Each EVO:RAIL appliance has four independent nodes with dedicated compute, network, and storage resources and dual, redundant power supplies.

Each of the four EVO:RAIL nodes has (at a minimum):

  • Two Intel E5-2620 v2 six-core CPUs
  • 192GB of memory
  • One SLC SATADOM or SAS HDD as the ESXi™ boot device
  • Three SAS 10K RPM 1.2TB HDDs for the VMware Virtual SAN™ datastore
  • One 400GB MLC enterprise-grade SSD for read/write cache
  • One Virtual SAN-certified pass-through disk controller
  • Two 10GbE NIC ports (configured for either 10GBase-T or SFP+ connections)
  • One 1GbE IPMI port for remote (out-of-band) management

All of this adds up to a combined total of at least 100GHz of CPU resources, 768GB of memory, 14.4TB of storage capacity and 1.6TB of flash capacity used by Virtual SAN for storage acceleration services. Never seen one of these boxes before? Well, this is what they tend to look like; in this example you see a SuperMicro Twin configuration. As you can see from the rear view, there are 4 individual nodes with 2 power supplies, and in the front you see all the disks, which are connected in groups of 6 to each of the nodes!
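
For those wondering how these per-appliance totals roll up from the per-node minimums listed above, here is a quick back-of-the-envelope sketch in Python (my own calculation; the 2.1GHz clock speed of the E5-2620 v2 is taken from the part 3 post further down this page):

    # Back-of-the-envelope totals for one EVO:RAIL appliance (4 nodes),
    # based on the per-node minimums listed above.
    nodes = 4

    cpu_ghz   = nodes * 2 * 6 * 2.1   # 2 CPUs x 6 cores x 2.1GHz (E5-2620 v2) = 100.8GHz
    memory_gb = nodes * 192           # 768GB of memory
    hdd_tb    = nodes * 3 * 1.2       # 14.4TB of Virtual SAN capacity (HDD tier)
    ssd_tb    = nodes * 0.4           # 1.6TB of flash for read/write caching

    print(f"CPU: {cpu_ghz:.1f}GHz, Memory: {memory_gb}GB, "
          f"Capacity: {hdd_tb:.1f}TB, Flash: {ssd_tb:.1f}TB")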

For those of you who read this far and are still wondering whether RAIL is an acronym: the short answer is “No, it is not an acronym”. The RAIL in EVO:RAIL simply represents the ‘rail mount’ attached to the 2U/4-node server platform that allows it to slide easily into a datacenter rack. One RAIL for one EVO:RAIL HCIA, which represents the smallest unit of measure with respect to compute, network, storage and management within the EVO product family.

By now you are probably all anxious to know what EVO:RAIL looks like. Before I show you, one more thing to know about EVO:RAIL… the user interface uses HTML5, so it works on any device. Nice, right?!

If you prefer a video over screenshots, make sure to visit the EVO:RAIL product page on vmware.com!

What do we do to get it up and running? First of all rack the appliance, cable it up and power it on! Next, hit up the management interface on https://<ip-address>:7443
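
If you want to verify the configuration interface is reachable before opening a browser, a quick check along these lines will do. Note this is just an illustrative sketch: the port number comes from the post above, but the use of a self-signed certificate (and therefore the disabled certificate verification) is my assumption, and the IP address is obviously a placeholder.

    # Quick reachability check for the EVO:RAIL configuration interface on port 7443.
    # Assumption: the appliance presents a self-signed certificate, so verification is skipped.
    import requests

    appliance_ip = "192.168.10.200"  # placeholder; use your appliance's management IP

    response = requests.get(f"https://{appliance_ip}:7443", verify=False, timeout=10)
    print(response.status_code)      # a 200 response means the configuration UI is up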

evo:rail intro

Next you start entering the details of your environment; look at the following screenshot to get an idea of how easy it is! You can even define your own naming scheme and EVO:RAIL will automatically apply it to joining hosts (both the current set, and any additional appliance added in the future).

evo:rail configure hostnames

Besides a naming scheme, EVO:RAIL allows you to configure the following:

  • IP addresses for Management, vMotion, Virtual SAN (by specifying a pool per traffic type, see screenshot below)
  • vCenter Server and ESXi passwords
  • Globals like: Time Zone, NTP Servers, DNS Servers, Centralized Logging (or configure Log Insight), Proxy

Believe me when I say that it does not get easier than this. Specify your IP ranges and globals once and never think about it again.
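
To make the naming scheme and IP pool idea a bit more concrete, here is a small illustrative sketch of what such an expansion boils down to. The pattern syntax and pool format below are purely hypothetical and not the actual EVO:RAIL input format; the real appliance collects this through the web form shown in the screenshot.

    # Illustrative only: expand a hypothetical hostname pattern and a management IP pool
    # for the four nodes in a single appliance.
    import ipaddress

    naming_pattern  = "esxi-host{:02d}.mydomain.local"        # hypothetical naming scheme
    mgmt_pool_start = ipaddress.ip_address("192.168.10.101")  # hypothetical management pool

    for node in range(1, 5):  # one EVO:RAIL appliance contains 4 nodes
        hostname = naming_pattern.format(node)
        mgmt_ip  = mgmt_pool_start + (node - 1)
        print(f"{hostname} -> management IP {mgmt_ip}")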

evo:rail configure networking

When you are done, EVO:RAIL will validate the configuration for you and, when you are ready, apply it. Along the way it will indicate which stage it is in and how far along the configuration is.

evo:rail configuring

When it is done it will point you to the management interface, and from there you can start deploying workloads. Just to be clear, the EVO:RAIL interface is a simplified interface. If for any reason you feel the interface does not provide the functionality you require, you can switch to the vSphere Web Client, as that is fully supported!

evo:rail vm management

The interface allows you to manage virtual machines in an easy way. It has pre-defined virtual machine sizes (small / medium / large) and even security profiles that can be applied to the virtual machine configuration!

evo:rail vm creation

Of course, EVO:RAIL also provides monitoring capabilities, in the same easy fashion as everything else: a simple overview of what is there, what the usage is, and what the state is.

evo:rail health

With that I think it is time to conclude this already lengthy blog post. I will, however, follow up on this shortly with a series that looks a bit more in depth at some of the details around EVO:RAIL together with a couple of core team members. I think it is fair to say that EVO:RAIL is an exciting development in the space of datacenter infrastructure, and more specifically in the world of hyper-convergence! If you are at VMworld and want to know more, visit the EVO:RAIL booth, the EVO:RAIL pavilion or one of the following sessions:

  • Software-Defined Data Center through Hyper-Converged Infrastructure (SDDC4245) Monday 25th August, 2:00 PM – 3:00 PM – Moscone South, Gateway 103 with Mornay van der Walt and Chris Wolf
  • VMware and Hyper-Converged Infrastructure (SDDC2095) Monday 25th of August, 4:00 PM – 5:00 PM – Moscone West, Room 3001 with Bryan Evans
  • VMware EVO:RAIL Technical Deepdive (SDDC1337) Tuesday 26th of August – 11:00 AM – Marriott, Yerba Buena Level, Salon 1 with Dave Shanley and Duncan Epping
  • VMware Customers Share Experiences and Requirements for Hyper-Converged (SDDC1818) – Tuesday 26th of August, 12:30 PM – 1:30 PM – Moscone West, Room 3014 with Bryan Evans and Michael McDonough
  • VMware EVO:RAIL Technical Deepdive (SDDC1337) Wednesday 27th of August – 11:30 AM – Marriott, Yerba Buena Level, Salon 7 with Dave Shanley and Duncan Epping

Building a hyper-converged platform using VMware technology part 3

Duncan Epping · Mar 12, 2014 ·

Considering some of the pricing details have been announced, I figured I would write part 3 of my “Building a hyper-converged platform using VMware technology” series (part 1 and part 2). Before everyone starts jumping in on the pricing details, I want to make sure people understand that I have absolutely no responsibilities whatsoever related to this subject; I am just the messenger in this case. In order to run through this exercise I figured I would take a popular SuperMicro configuration and ensure that the components used are certified by VMware.

I used the Thinkmate website to get pricing details on the SuperMicro kit. Let's list the hardware first:

    • 4 hosts each with:
      -> Dual Six-Core Intel Xeon® CPU E5-2620 v2 2.10GHz 15MB Cache (80W)
      -> 128 GB (16GB PC3-14900 1866MHz DDR3 ECC Registered DIMM)
      -> 800GB Intel DC S3700 Series 2.5″ SATA 6.0Gb/s SSD (MLC)
      -> 5 x 1.0TB SAS 2.0 6.0Gb/s 7200RPM – 2.5″ – Seagate Constellation.2
      -> Intel 10-Gigabit Ethernet CNA X540-T2 (2x RJ-45)

The hardware comes to around $ 30081,-; this is without any discount, just the online store price. Now the question is: what about Virtual SAN? You would need to license 8 sockets with Virtual SAN in this scenario. Again, this is the online store price without any discount:

  • $ 2495,- per socket = $ 19960,-

This makes the cost of the SuperMicro hardware, including the Virtual SAN licenses for four nodes in this configuration, roughly $ 50041,-. (There is also the option to license Virtual SAN for View per user, which is $ 50,- per user.) That is around $ 12600 per host including the VSAN licenses.

If you do not own vSphere licenses yet, you will need to license vSphere itself as well. I would recommend Enterprise ($ 2875,- per socket), as with VSAN you will automatically get Storage Policy Based Management and the Distributed Switch. Potentially, depending on your deployment type, you will also need vCenter Server; a Standard license for vCenter Server is $ 4995,-. If you include all VMware licenses, the total combined would be $ 78036,-. That is around $ 19600 per host including the VSAN and vSphere licenses. Not bad if you ask me.
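
For those who want to redo the math with their own quotes, the totals above roll up as follows. This is just a sketch using the list prices quoted in this post; plug in your own numbers and discounts.

    # Rough cost roll-up for the 4-node SuperMicro configuration, using the list prices above.
    hosts   = 4
    sockets = hosts * 2                  # dual-socket servers -> 8 sockets to license

    hardware    = 30081                  # Thinkmate online store price for the 4 nodes
    vsan        = sockets * 2495         # Virtual SAN: $2495 per socket -> $19,960
    vsphere_ent = sockets * 2875         # vSphere Enterprise: $2875 per socket -> $23,000
    vcenter_std = 4995                   # one vCenter Server Standard license

    hw_plus_vsan = hardware + vsan                               # ~$50,041
    all_in       = hardware + vsan + vsphere_ent + vcenter_std   # ~$78,036

    print(f"Hardware + VSAN: ${hw_plus_vsan:,} (~${hw_plus_vsan / hosts:,.0f} per host)")
    print(f"All VMware licenses included: ${all_in:,} (~${all_in / hosts:,.0f} per host)")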

I want to point out that I did not include Support and Maintenance costs. As this will depend on which type of support you require and what type of vSphere licenses you have, I felt there were too many variables to make a comparison. It should also be noted that many storage solutions come with very limited first-year support… Before you do a comparison, make sure to look at what is included and what will need to be bought separately for proper support.

** Disclaimer: Please run through these numbers yourself, and validate the HCL before purchasing any equipment. I cannot be held responsible for any pricing / quoting errors; hardware prices can vary from day to day, and this exercise was for educational purposes only! **

Building a hyper-converged platform using VMware technology part 2

Duncan Epping · Jan 23, 2014 ·

In part 1 of “Building a hyper-converged platform using VMware technology” I went through the sizing and scaling exercise. In short, to recap: in order to run 100 VMs we would need the following resources (a quick worked-out version follows the list):

  • 100 x 1.5 vCPUs = ~30 cores
  • 100 x 5 GB = 500GB of memory
  • 100 x 50 GB (plus FTT etc) = 11.8 TB of disk space
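
Here is a quick worked-out version of those numbers. The vCPU-per-core ratio and the FTT=1 mirroring factor are my assumptions to reproduce the figures; part 1 has the exact reasoning behind the 11.8 TB.

    # Rough sizing roll-up for 100 VMs, following the per-VM profile used in this series.
    vms = 100

    vcpus       = vms * 1.5        # 150 vCPUs
    cores       = vcpus / 5        # ~30 cores at an assumed ~5 vCPUs per core
    memory_gb   = vms * 5          # 500 GB of memory
    raw_tb      = vms * 50 / 1000  # 5 TB of VM disk capacity
    mirrored_tb = raw_tb * 2       # assuming FTT=1 (mirrored copies); part 1 adds further
                                   # overhead (swap, slack) to arrive at the 11.8 TB above

    print(f"{vcpus:.0f} vCPUs (~{cores:.0f} cores), {memory_gb} GB memory, "
          f"{mirrored_tb:.0f}+ TB of disk capacity")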

From a storage perspective 11.8 TB is not a huge amount, 500 GB of memory can easily fit in a single host today, and 30 cores… well, maybe not easily in a single host, but it is no huge requirement either. What are our options? Let's give an example of some server models that fall into the category we are discussing:

  • SuperMicro Twin Pro – 2U chassis with 4 nodes. Per node: Capable of handling 6 * 2.5″ drives and on-board 10GbE. Supports the Intel E5-2600 family and up to 1TB of memory
    • SuperMicro is often used by startups, especially in the hyperconverged space but also hybrid storage vendors like Tintri use their hardware. Hey SuperMicro Marketing Team, this is something to be proud of… SuperMicro powers more infrastructure startups than anyone else probably!
    • Note you can select 3 different disk controller types: the LSI 3108, the LSI 3008 and the Intel C600. I highly recommend the LSI controllers!
  • HP SL2500t – 2U chassis with 4 nodes. Per node: Capable of handling 6 * 2.5″ or 3 * 3.5″ drives, and FlexibleLOM 10GbE can be included. Supports the Intel E5-2600 family and up to 512GB of memory
    • You can select from the various disk controllers HP offers; do note that today only a limited number of controllers are certified.
    • Many probably don’t care, but the HP kit just looks awesome 🙂
  • Dell C6000 series – 2U chassis with 4 nodes. Per node: Capable of handling 6 * 2.5″ or 3 * 3.5″ drives. Supports the Intel E5-2600 family and up to 512GB of memory
    • Note there is no on-board 10GbE or “LOM” type of solution, you will need to add a 10GbE PCIe card.
    • Dell offers 3 different disk controllers including the LSI 2008 series. Make sure to check the HCL.

The first thing to note here is that all of the configurations above by default come with 4 nodes; yes, you can order them with fewer, but personally I wouldn't recommend that. The strange thing is that in order to get configuration details for the Dell and HP you need to phone them up, so let's take a look at the SuperMicro Twin Pro as there are details to be found online. What are our configuration options? Plenty, I can tell you that. CPUs ranging from low-end quad-core 1.8GHz up to twelve-core 2.7GHz Intel CPUs. Memory configurations ranging from 2GB DIMMs to 32GB DIMMs, including the various speeds. Physical disks ranging from 250GB 7200 RPM SATA Seagate to 1.2TB 10K RPM SAS Hitachi drives. Unlimited possibilities, and that is probably where it tends to get more complicated.

