
Yellow Bricks

by Duncan Epping


Project nanoEDGE aka tiny vSphere/vSAN supported configurations!

Duncan Epping · Oct 10, 2019 ·

A few weeks ago VMware announced Project nanoEDGE on their Virtual Blocks blog. In the days that followed I received a whole bunch of questions from customers and partners interested in understanding what it is and what it does. I personally prefer to call Project nanoEDGE “a recipe”: it states which configurations are supported for both vSAN and vSphere. Let’s be clear, this is not a tiny version of VxRail or VMware Cloud Foundation; this is a hardware recipe that should help customers deploy tiny supported configurations to thousands of locations around the world.

Project nanoEDGE is a project by VMware principal systems engineer Simon Richardson. The funny thing is that right around the time Simon started discussing this with customers to gauge interest, I was having similar discussions within the vSAN organization. When Simon mentioned he was going to work on this project with support from the VMware OCTO organization, I was thrilled. I personally believe there’s a huge market for this. I have had dozens of conversations over the years with customers who have thousands of locations and are currently running single-node solutions. Many of those customers need to deliver new IT services to these locations, and the availability requirements for those services have changed as well, which makes it a perfect play for vSAN and vSphere (with HA).

So first of all, what would nanoEDGE look like?

These are tiny, desktop-like boxes: the Supermicro E300-9D, which comes in various flavors. The recipe currently describes the solution as two full vSAN servers plus one host that runs the vSAN Witness for the 2-node configuration. Of course, you could also run the witness remotely, or even throw in a switch and go with a 3-node configuration. The important part here is that all components used are on both the vSphere and the vSAN compatibility guides! The benefit of the 2-node approach is that you can use crossover cables between the vSAN hosts and avoid the cost of a 10GbE switch as a result! So what is in the box? The bill of materials is currently as follows (a minimal sketch of the 2-node network setup follows the list):

  • 3x Supermicro E300-9D-8CN8TP
    • The box comes with 4x 1GbE NIC ports and 2x 10GbE NIC ports
    • 10GbE can be used for direct connect
    • It has an Intel® Xeon® D-2146NT processor – 8 cores
  • 6x 64GB RAM
  • 3x PCIe Riser Card (RSC-RR1U-E8)
  • 3x PCIe M.2 NVMe Add-on Card (AOC-SLG3-2M2)
  • 3x Capacity Tier – Intel M.2 NVMe P4511 1TB
  • 3x Cache Tier – Intel M.2 NVMe P4801 375GB
  • 3x Supermicro SATADOM 64GB
  • 1x Managed 1GbE Switch
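
For those wondering what the 2-node direct connect setup looks like from a vSAN networking point of view, here is a minimal sketch of the esxcli commands involved, run on each of the two data hosts. The VMkernel interface names are assumptions for illustration only (vmk1 on the 10GbE direct connect link, vmk0 on the routed 1GbE management network); your environment will differ.

    # Tag the VMkernel adapter on the 10GbE direct connect link for vSAN traffic
    esxcli vsan network ip add -i vmk1

    # Tag the management VMkernel adapter for witness traffic, so communication
    # with the (remote) witness flows over the routed network instead
    esxcli vsan network ip add -i vmk0 -T=witness

With witness traffic separated this way, the two 10GbE ports can be connected directly between the hosts with crossover cables, no switch required.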

From a software point of view, the paper lists that they tested with 6.7 U2, but of course, if the hardware is on the VCG for 6.7 U3 then it is also supported to run that configuration. Of course, the team also did some performance tests, and they showed some pretty compelling numbers (40,000+ read IOPS and close to 20,000 write IOPS), especially when you consider that these types of configurations would usually run 15-20 VMs in total. One thing I do want to add: the bill of materials lists M.2 form factor flash devices, which allows nanoEDGE to avoid using the internal, unsupported AHCI disk controller. This is key in the hardware configuration! Do note that in order to fit two M.2 devices in this tiny box, you will also need to order the listed PCIe riser card and the M.2 NVMe add-on card; William Lam has a nice article on this subject, by the way.

There are many other options on the vSAN HCL for both caching as well as capacity, so if you prefer to use a different device, make sure it is listed here.

I would recommend reading the paper, and if you have an interest in this solution please reach out to your local VMware representative for more detail/help.

Can all VSAN Witness VMs for ROBO be in the same VLAN?

Duncan Epping · Jun 9, 2016 ·

Yesterday I received the question whether all VSAN Witness VMs for ROBO can be in the same VLAN. The excellent VSAN 2-node and Stretched Cluster guide describes an example of how things can be implemented with a different VLAN for each VSAN Witness.

Now the question came in from a customer implementing ROBO at scale whether it was possible to have all VSAN Witness VMs for ROBO in the same VLAN. The short answer is: yes. Keep in mind that the ROBO locations access the Witness VM over L3 and that no multicast is needed between the ROBO locations and the witness. The only thing you need to do is set up routes from each ROBO location to the main site hosting the central Witness VMs. All of them can be in the same VLAN, fully supported!
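
As a sketch of what that routing configuration could look like on an ESXi host at a ROBO location, the esxcli command below adds a static route to the witness network. The addresses are made-up values for illustration only: 192.168.110.0/24 as the shared witness VLAN at the main site and 10.1.1.1 as the local gateway.

    # Add a static route from the ROBO host to the witness network at the main site
    esxcli network ip route ipv4 add -n 192.168.110.0/24 -g 10.1.1.1

The equivalent route would be added at every ROBO location; since all Witness VMs sit in the same VLAN, the target network is the same everywhere, only the local gateway differs.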

VMware EVO:RAIL use case: ROBO

Duncan Epping · Sep 8, 2014 ·

Something that came up a couple of days back was a question around how VMware EVO:RAIL fits the ROBO (remote office/branch office) use case. If you watched the demo you will have seen that it is very simple and easy to configure. It takes about 15 minutes to get up and running, and all you need to do is provide details like “IP ranges”, “Subnet mask”, “Gateway” and a couple of other globals.

This by itself makes EVO:RAIL a perfect solution for ROBO deployments… but there is more. When it comes to ROBO deployments and simplifying the rollout, there are two more options:

  1. Provide configuration details during the procurement process
  2. Specify configuration details in a file and insert it into the appliance before shipment to the remote office

[Screenshot: EVO:RAIL configuration UI]

I won’t discuss option 1 in depth, as this will very much depend on how each of the EVO:RAIL Qualified Partners handles it on their website / during the procurement process. Basically what happens is that you provide your preferred server vendor with configuration details, they put these into a file called “default-config-static.json”, and this file is injected into the vCenter Server Appliance, which also runs the EVO:RAIL engine. For the hackers who want to play around with EVO:RAIL: note that the location and format of the JSON file may change with newer versions, so make sure to always use the latest and greatest. If you have filled out these details up front, all that is left on-site is to click through the final steps.

Option 2 is also very interesting if you ask me. If you look at EVO:RAIL as it stands today, you have the option to upload a JSON file when you hit the configuration screen (as shown in the screenshot above). This JSON file should contain all of your configuration details and will then allow you to configure EVO:RAIL with the click of a button. In other words: you ship the appliance to your remote office, you email them the JSON file (in a secure manner, hopefully) and ask them to click “upload configuration file”. They upload the file, run “Validate”, and probably fill out the password, as you don’t want to send that in clear text. That is it… Nice right :). Of course, if you want… you could even go as far as injecting the .json file into the vCenter Server Appliance yourself, but I am not sure if that would be supported.
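
To give a feel for what such a configuration file might contain, here is a purely illustrative JSON sketch. The actual schema of default-config-static.json is version-specific (as noted above) and not reproduced here; every field name below is an assumption based on the globals mentioned earlier (IP ranges, subnet mask, gateway), and passwords are deliberately omitted since you would not want those in clear text.

    {
      "_note": "Illustrative example only, not the real EVO:RAIL schema",
      "esxi": {
        "ipRangeStart": "192.168.10.1",
        "ipRangeEnd": "192.168.10.4",
        "subnetMask": "255.255.255.0",
        "gateway": "192.168.10.254"
      },
      "vcenter": {
        "ip": "192.168.10.200"
      }
    }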

As you can imagine, this greatly simplifies the deployment of EVO:RAIL as all it takes is just one click to configure, which is ideal for a ROBO scenario. Anyone can do it!

Performance of vCenter 5.0 in Remote Offices and Branch Offices (ROBO) white paper

Duncan Epping · Jun 19, 2012 ·

I just finished reading the “Performance of VMware vCenter 5.0 in Remote Offices and Branch Offices (ROBO)” white paper. I thought it was an excellent read and recommend it to anyone who has a ROBO environment. It is also interesting to learn what kind of traffic hosts and VMs drive to vCenter in general. The details around the statistics level are especially worth reading for those deploying larger environments, as they give a sense of the amount of data vCenter is processing.

Nice work Fei Chen! You can find the paper here:

Performance of VMware vCenter 5.0 in Remote Offices and Branch Offices (ROBO)
This document details the performance of typical vCenter 5.0 operations in a use case where vCenter manages ESXi hosts over a network with limited bandwidth and high latency, which is also known as a remote office, branch office (ROBO) environment.

(Although the date stamp on this entry says 2010, it is a June 2012 paper; I will try to get this fixed!)

Solutions for Remote and Branch Offices

Duncan Epping · Nov 20, 2009 ·

VMware just released two new editions of vSphere which are specifically targeted at retail store / branch office environments. A PDF describing these packages can be found here, and the solution page on VMware.com can be found here.

  • VMware vSphere Essentials for Retail and Branch Offices offers a packaged solution for each of a customer’s branch offices to scale agility, security, and efficiency across the organization.
  • VMware vSphere Essentials Plus for Retail and Branch Offices provides a turnkey solution for complete business agility and continuity at all remote sites by adding high availability and data protection.
