
Yellow Bricks

by Duncan Epping


Project nanoEDGE aka tiny vSphere/vSAN supported configurations!

Duncan Epping · Oct 10, 2019 ·

A few weeks ago VMware announced Project nanoEDGE on their Virtual Blocks blog. In the days that followed I had a whole bunch of questions from customers and partners interested in understanding what it is and what it does. I personally prefer to call Project nanoEDGE “a recipe”. The recipe states which configurations are supported for both vSAN and vSphere. Let’s be clear, this is not a tiny version of VxRail or VMware Cloud Foundation; this is a hardware recipe that should help customers deploy tiny supported configurations to thousands of locations around the world.

Project nanoEDGE is a project by VMware principal systems engineer Simon Richardson. The funny thing is that right around the time Simon started discussing this with customers to gauge interest, I was having similar discussions within the vSAN organization. When Simon mentioned he was going to work on this project with support from the VMware OCTO organization, I was thrilled. I personally believe there’s a huge market for this. Over the years I have had dozens of conversations with customers who have thousands of locations and are currently running single-node solutions. Many of those customers need to deliver new IT services to these locations, and the availability requirements for those services have changed as well, which makes it a perfect play for vSAN and vSphere (with HA).

So first of all, what would nanoEDGE look like?

These are tiny, desktop-like boxes: Supermicro E300-9D systems, which come in various flavors. The recipe currently describes the solution as 2 full vSAN servers plus 1 host used for the vSAN Witness in a 2-node configuration. Of course, you could also run the witness remotely, or even throw in a switch and go with a 3-node configuration. The important part here is that all components used are on both the vSphere and the vSAN compatibility guides! The benefit of the 2-node approach is that you can use crossover cables between the vSAN hosts and avoid the cost of a 10GbE switch as a result! So what is in the box? The bill of materials is currently as follows:

  • 3x Supermicro E300-9D-8CN8TP
    • The box comes with 4x 1GbE NIC ports and 2x 10GbE NIC ports
    • 10GbE can be used for direct connect
    • It has an Intel® Xeon® D-2146NT processor – 8 cores
  • 6x 64GB RAM
  • 3x PCIe Riser Card (RSC-RR1U-E8)
  • 3x PCIe M.2 NVMe Add-on Card (AOC-SLG3-2M2)
  • 3x Capacity Tier – Intel M.2 NVMe P4511 1TB
  • 3x Cache Tier – Intel M.2 NVMe P4801 375GB
  • 3x Supermicro SATADOM 64GB
  • 1x Managed 1GbE Switch
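For the 2-node direct-connect option, vSAN data traffic runs over the crossover-connected 10GbE ports while witness traffic is routed out over the 1GbE management network. As a rough sketch of the witness traffic separation setup on each data node (assuming vmk0 is the management vmkernel and vmk1 sits on the direct-connect 10GbE ports; adapt the interface names to your own setup):

```shell
# Run on each of the two vSAN data nodes (not on the witness host).

# Tag the direct-connect vmkernel interface for vSAN data traffic
# (vmk1 is an assumption; use the vmkernel bound to the 10GbE direct-connect ports)
esxcli vsan network ip add -i vmk1

# Tag the management vmkernel for witness traffic, so witness communication
# is routed over the 1GbE network instead of the direct-connect links
esxcli vsan network ip add -i vmk0 -T=witness

# Verify which vmkernel interfaces carry which traffic type
esxcli vsan network list
```

This is host-level configuration, so it has to be applied (or scripted) per host; the same result can also be achieved through PowerCLI or host profiles.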

From a software point of view the paper lists that they tested with 6.7 U2, but of course, if the hardware is on the VCG for 6.7 U3 then it will also be supported to run that configuration. The team also did some performance tests, and they showed some pretty compelling numbers (40,000+ read IOPS and close to 20,000 write IOPS), especially when you consider that these types of configurations would usually run 15-20 VMs in total. One thing I do want to add: the bill of materials lists M.2 form factor flash devices, which allows nanoEDGE to avoid using the internal, unsupported AHCI disk controller. This is key in the hardware configuration! Do note that in order to fit two M.2 devices in this tiny box, you will also need to order the listed PCIe riser card and the M.2 NVMe add-on card; William Lam has a nice article on this subject, by the way.
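As a rough back-of-envelope check (my own arithmetic, not from the paper), dividing the measured IOPS across the 15-20 VMs such a site would typically host shows how much headroom each VM gets:

```python
# Rough per-VM IOPS budget for a nanoEDGE site. Illustrative arithmetic only:
# the 40,000 read / 20,000 write IOPS figures come from the performance tests
# mentioned above, the VM counts are the typical range for these sites.
read_iops, write_iops = 40_000, 20_000

for vm_count in (15, 20):
    per_vm_read = read_iops / vm_count
    per_vm_write = write_iops / vm_count
    print(f"{vm_count} VMs: ~{per_vm_read:.0f} read / ~{per_vm_write:.0f} write IOPS per VM")
```

Even at 20 VMs that still leaves about 2,000 read and 1,000 write IOPS per VM, which is plenty for typical edge workloads.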

There are many other options on the vSAN HCL for both caching as well as capacity, so if you prefer to use a different device, make sure it is listed here.

I would recommend reading the paper, and if you have an interest in this solution please reach out to your local VMware representative for more detail/help.

Comments

  1. Totie Bash says

    11 October, 2019 at 01:09

    What HBA controller would the Intel SSD use? Would the built-in SATA AHCI controller work?

    • Duncan Epping says

      11 October, 2019 at 07:17

      Yeah, I probably should have clarified this: the SSDs are M.2 NVMe devices, which means they don’t need to use the AHCI controller on the motherboard.

    • Duncan Epping says

      11 October, 2019 at 07:19

      https://www.vmware.com/resources/compatibility/search.php?deviceCategory=ssd&details=1&vsan_type=vsanssd&ssd_partner=46&ssd_formfactor=7&keyword=4510&vsanrncomp=true&page=1&display_interval=10&sortColumn=Partner&sortOrder=Asc

  2. Eugene says

    11 October, 2019 at 10:59

    So, VMware is commercializing our good old lab setups 😉 Without kidding, great to see that (more) affordable hardware is now getting in reach for not only edge deployments but also lab setups!

  3. Max Abelardo says

    11 October, 2019 at 13:09

    Thanks for publishing this article Duncan! Do you see the vSAN VCG in the future coming up with a category for certified nanoEDGE devices? Although this specification by Supermicro is quite performant, the needs of the edge do vary quite a bit from data center use cases for vSAN.

    • Duncan Epping says

      14 October, 2019 at 11:02

      Yeah, who knows, this may happen at some point, especially when this becomes a bigger focus for OEMs.

  4. Mark Bye says

    26 October, 2019 at 21:23

    Hey Duncan! I noticed the bill of materials seems to spec all 3 servers the same, although it is talked about as 2 set up for HCI and one lower-spec witness server. How should the witness server be configured from a hardware perspective?

    • Duncan Epping says

      27 October, 2019 at 09:56

      Personally I would go for 3 hosts and either do:
      – 2 hosts + physical witness
      – 3 vsan hosts

      I am not a big fan of the witness appliance in this example, as the witness appliance would be sitting in the same area, meaning that I now would have: 2 hosts for vSAN + ESXi host + Witness appliance = 4 hosts to manage. With a physical appliance I would have 3 hosts to manage.

    • Duncan Epping says

      27 October, 2019 at 09:57

      If you go with a physical witness, you could decrease the size of its capacity tier, although in reality I don’t think it will make a huge difference for the price point.

      • Manuel says

        28 October, 2019 at 21:46

        You could also reduce the RAM to 2×16 or 1×32 as you do not need performance for the witness node (1×32 will be cheaper), or let the 3rd node be a replica node with the witness, the backup VM and all the replicas of the infrastructure (in order to save a vSAN license).

  5. Derek says

    9 May, 2020 at 08:44

    Hey Duncan, I have 3 of these exact same machines with the same 64GB RAM. I set them up with both 10Gb SFP+ ports going into a 10GbE switch with 2 VLANs, one for VMs/management and one for vSAN. I ran into some performance issues, but it could be due to my cache drive. I’m using a 1TB Samsung 970 PRO. I would like to change it out for an Optane like the 905P, but I’m using the AOC slot for a 6.4TB Intel DC P4600, connected to the U.2 interface.

    I was going to connect the 10GbE RJ45 ports in a mesh between the 3 hosts but couldn’t figure out how to set that up properly in the dvSwitch / vSAN setup.

    I wanted to see how these are running?

    Thanks

    • Duncan Epping says

      11 May, 2020 at 08:48

      A direct connect mesh configuration isn’t supported. Not even sure it would work, to be honest, but I never tested it.

    • Duncan Epping says

      11 May, 2020 at 08:49

      Keep in mind, these configurations are not designed for high performance. Faster cache devices may improve performance indeed.

About the author

Duncan Epping is a Chief Technologist in the Office of CTO of the Cloud Platform BU at VMware. He is a VCDX (# 007), the author of the "vSAN Deep Dive", the “vSphere Clustering Technical Deep Dive” series, and the host of the "Unexplored Territory" podcast.
