VSAN for ROBO?

I noticed a new SuperMicro VSAN Ready Node that was published last week. The configuration is potentially a nice solution for ROBO deployments, primarily because of the cost of the system.

When I did the math it came in at around $3,800. This is the configuration:

  • SuperMicro SuperServer 1018D-73MTF
  • 1 x Intel Xeon E3-1270 v3, 3.5GHz quad-core
  • 32GB Memory
  • 5 x 1TB 7200 RPM NL-SAS HDD
  • 1 x 200GB Intel S3700 SSD
  • LSI 2308 Disk controller
  • 4 x 1GbE NIC ports

It is a nice configuration that will allow for roughly fifteen 1 vCPU virtual machines with 3GB of memory and 60GB of disk capacity per host. Personally I would probably use a different CPU and some more memory, as that gives you a bit more headroom, especially during maintenance. From a software point of view the cost is socket based, so you can increase memory and change the type of CPU with relatively low cost impact. The SuperMicro server listed, however, is limited to the E3 CPU family and to 32GB, but there are alternatives out there (for instance the Dell R320, or maybe even the R210).
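To put some rough numbers behind that estimate, here is a back-of-the-napkin sketch in Python. The FTT=1 mirror factor reflects the default VSAN policy of keeping two copies of each object; the memory overcommit ratio is purely my own assumption, so tune it to your comfort level.

```python
# Back-of-the-napkin VSAN ROBO sizing sketch.
# All inputs are illustrative assumptions, not official sizing guidance.

HOSTS = 3
HOST_MEMORY_GB = 32
HDDS_PER_HOST = 5
HDD_CAPACITY_GB = 1000
VM_MEMORY_GB = 3
VM_DISK_GB = 60
FTT = 1                 # failures to tolerate; FTT=1 mirrors every object
MEM_OVERCOMMIT = 1.5    # assumed memory overcommit ratio (my assumption)

# Raw cluster capacity and the replica overhead for FTT=1 (two copies per object)
raw_capacity_gb = HOSTS * HDDS_PER_HOST * HDD_CAPACITY_GB
consumed_per_vm_gb = VM_DISK_GB * (FTT + 1)

vms_by_disk = raw_capacity_gb // consumed_per_vm_gb
vms_by_memory = int(HOSTS * HOST_MEMORY_GB * MEM_OVERCOMMIT // VM_MEMORY_GB)

print(f"Disk-bound VM count:     {vms_by_disk}")
print(f"Memory-bound VM count:   {vms_by_memory}")
print(f"Per host (memory-bound): ~{vms_by_memory // HOSTS}")
```

With these assumptions memory is clearly the limiting factor, and you land at roughly fifteen small VMs per host, which is where my estimate above comes from.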

From a software point of view the cost of this configuration is limited to 3 x VSAN licenses and 3 x vSphere. As VSAN even works with Essentials Plus and Standard, you could leverage that to keep the cost down, but keep in mind that you won’t have DRS if you drop down to Standard or lower. It still sounds like a nice ROBO package to me; especially when you have many sites, this could be a great way to create a standardized, packaged solution.

30K for a VSAN host @theregister? I can configure one for 2250 USD!

I’ve been following the posts from the Register on VSAN and was surprised when they posted the cost of the hosts they configured: 30K each. With three hosts at a minimum, they concluded that for 90K you could buy yourself a nice legacy storage system. I don’t disagree with that, to be honest… for 90K you can buy a nice legacy storage system. I guess you need to ask yourself first, though, what you will do with that 90K storage system by itself? Not much indeed, as you would need compute resources sitting next to it in order to do anything. So if you want to make a comparison, do not compare a full VSAN environment (or any other hyper-converged solution out there) to just a storage system, as it just doesn’t make sense.

"Now that still doesn’t make these hosts cheap," I can hear you think, and again I agree with that. I have absolutely no clue where the 30K came from, and judging by the tweets this morning most people don’t know either and feel it probably was overkill. Call me crazy, but I can configure a fully supported VSAN host for about 2,250 USD (hardware only) on the Dell website:

  • Dell T320
  • Intel Xeon E5-2420 1.90GHz 6 Core
  • Perc H310 Disk Controller
  • 32GB Memory
  • 1 x 7200RPM 1TB NL-SAS
  • 1 x 100GB Intel S3700 SSD (or Dell equivalent drive)
  • 5 x 1GbE NIC ports

I would like to conclude that VSAN is a lot cheaper than those legacy solutions; less than 7,500 USD for three hosts is peanuts, right?!? Yes I know, the above configuration wouldn’t fit many use cases (except maybe a ROBO deployment where only a couple of VMs are needed), and that was the whole point of the exercise: showing how pointless these exercises can be. You can twist these numbers any way you like, and you can configure your VSAN hosts any way you like, as long as the components (HDD/SSD/controller) are on the VSAN HCL and the system is on the vSphere HCL.

PS: Dear Register, next time you run through the exercise, you may want to post the configuration you selected… it makes things a bit clearer.

Book: Networking for VMware Administrators

Fellow blogger Chris Wahl just announced the availability of an awesome book titled Networking for VMware Administrators, which he authored with Steve Pantol. The book is published via VMware Press and is a must-read if you ask me. I am going to order it for sure, as it is an area I can definitely brush up on. The book is 368 pages and covers everything from networking models to switching, but of course it heavily focuses on the virtual side and dives into the standard vSwitch, the distributed switch, and the Cisco Nexus 1000v!

Knowing Chris, this book is going to be worth it; his blog material has always been excellent and I expect nothing less. Congrats Chris and Steve, awesome work, and I am looking forward to reading it.

You can pick the book up here: paper | kindle

VSAN HCL: more than just VSAN Ready Nodes

Over the last couple of weeks, basically since VSAN was launched, I noticed something and I figured I would blog about it. Many people seem to be under the impression that the VSAN Ready Nodes are your only option if you want to buy new servers to run VSAN on. This is definitely NOT the case. VSAN Ready Nodes are a great solution for people who do not want to bother going through the exercise of selecting components themselves from the VSAN HCL. However, the process is not as complicated as it sounds.

There are a couple of “critical aspects” when it comes to configuring a VSAN host, and those are:

  • A server which is on the vSphere HCL (pick any)
  • An SSD, disk controller, and HDD which are on the VSAN HCL: vmwa.re/vsanhcl

Yes that is it! So if you look at the current list of Ready Nodes for instance, it contains a short list of Dell Servers (T620 and R720). However the vSphere HCL has a long list of Dell Servers, and you can use ANY of those. You just need to make sure your VSAN (critical) components are certified, and you can simply do that using the VSAN HCL. For instance, even the low end PowerEdge R320 can be configured with components that are supported by VSAN today as it supports the H710 and the H310 disk controller which are also on the VSAN HCL.

So let me recap that: you can select ANY host from the vSphere HCL; as long as you ensure the SSD, disk controller, and HDD are on the VSAN HCL, you should be good.
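To make that decision logic explicit, here is a toy sketch in Python. The server and component names in the sets below are hypothetical examples, not the real lists; the actual source of truth remains the online vSphere HCL and the VSAN HCL (vmwa.re/vsanhcl).

```python
# Illustrative only: a toy "HCL check" that encodes the rule described above.
# The sets are made-up examples; always verify against the real online HCLs.

VSPHERE_HCL_SERVERS = {"Dell PowerEdge R320", "Dell PowerEdge T320", "Dell PowerEdge R720"}
VSAN_HCL_CONTROLLERS = {"PERC H310", "PERC H710", "LSI 2308"}
VSAN_HCL_SSDS = {"Intel S3700 100GB", "Intel S3700 200GB"}

def vsan_supported(server: str, controller: str, ssd: str) -> bool:
    """A host is a valid VSAN building block when the server is on the vSphere HCL
    and the critical components (disk controller, SSD, and HDD) are on the VSAN HCL."""
    return (server in VSPHERE_HCL_SERVERS
            and controller in VSAN_HCL_CONTROLLERS
            and ssd in VSAN_HCL_SSDS)

print(vsan_supported("Dell PowerEdge R320", "PERC H310", "Intel S3700 100GB"))   # True
print(vsan_supported("Dell PowerEdge R320", "On-board AHCI", "Intel S3700 100GB"))  # False
```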

VSAN and the AHCI controller (hint: not supported!)

I have seen multiple people reporting this already, so I figured I would write a quick blog post. Several folks are going from the beta to the GA release of VSAN, and so far people have been very successful, except for those using disk controllers which are not on the HCL, like the on-board AHCI controller. For whatever reason it appeared on the HCL for a short time during the beta, but it is not supported (and not listed) today. I have had similar issues in my lab, and as far as I am aware there is no workaround at the moment. The errors that appear in the various log files contain keywords like “APD”, “PDL”, “Path lost” or “NMP device <xyz> is blocked”.
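If you want to scan your logs for these symptoms quickly, a small sketch like the one below does the trick. It is illustrative only: the keywords are the ones listed above, and the log file name is an assumption, so point it at wherever you copied your vmkernel log to.

```python
# Quick-and-dirty sketch: scan a copied log file for the symptoms mentioned above.
# The keywords come straight from the post; the log path is an example.

KEYWORDS = ("APD", "PDL", "Path lost", "is blocked")

def find_suspect_lines(logfile="vmkernel.log"):
    with open(logfile, errors="ignore") as log:
        for line in log:
            if any(keyword in line for keyword in KEYWORDS):
                print(line.rstrip())

find_suspect_lines()
```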

Before you install / configure Virtual SAN, I highly recommend validating your hardware against the HCL: http://vmwa.re/vsanhcl (I figured I will need this URL a couple of times in the future, so I created this nice short URL.)

Rebuilding your Virtual SAN Lab? Wipe the disks first!

Are you ready to start rebuilding your Virtual SAN lab, going from the beta builds to the GA code, vSphere 5.5 U1? One thing I noticed is that the installer is extremely slow when there are Virtual SAN partitions on disk. It sits at “VSAN: successfully initialized” for a long time, and when you get to the “scanning disks” part it takes equally long. Eventually I succeeded, but it just took a long time. It could be because I am running with an uncertified disk controller, of course; either way, if you are stuck at that point there is a simple solution.

Just wipe ALL disks before doing the installation. I used the GParted live ISO to wipe all my disks clean: just delete all partitions and select “Apply”. It takes a couple of minutes, but it saved me at least 30 minutes of waiting during the installation.
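If you prefer to script it instead of clicking through GParted, the idea boils down to clearing the partition metadata at the start and end of each device. Below is a minimal, deliberately destructive sketch of that idea, assuming a Linux live environment; the device name is purely an example, so triple-check it before running anything like this.

```python
# DESTRUCTIVE illustration of "wipe the partition tables first" -- do not point
# this at disks you care about. GParted (as used above) is the friendlier route;
# this just zeroes the start and end of the device so MBR/GPT metadata
# (including the GPT backup header at the end of the disk) is gone.

import sys

def wipe_partition_table(device: str, chunk_mb: int = 2) -> None:
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    with open(device, "r+b") as disk:
        disk.write(chunk)              # wipe MBR + primary GPT at the start
        disk.seek(-len(chunk), 2)      # jump to the end of the device
        disk.write(chunk)              # wipe the GPT backup header

if __name__ == "__main__":
    # Example (hypothetical device name): python wipe.py /dev/sdb
    wipe_partition_table(sys.argv[1])
```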

VSAN Design Consideration: Booting ESXi from USB with VSAN?

One thing most people probably won’t realize is that there is a design consideration with VSAN when it comes to installing ESXi. Many of you have probably been installing ESXi on USB or SD, and this is still supported with VSAN. There is one caveat, however, and it is around the total amount of memory in a host. The design consideration is fairly straightforward and also documented in the VSAN Design and Sizing Guide. Just to make it a bit easier to find, I copied/pasted it here for your convenience:

  • Use SD, USB, or hard disk devices as the installation media whenever ESXi hosts are configured with up to 512GB of memory. The minimum size for the USB/SD card is 4GB; 8GB is recommended.
  • Use a separate magnetic disk or solid-state disk as the installation device whenever ESXi hosts are configured with more than 512GB of memory.

You may wonder what the reason is: VSAN will use the core dump partition to store VSAN traces that can be used by VMware Global Support Services and the VMware engineering team for root cause analysis when needed. So make sure to keep this in mind when configuring hosts with more than 512GB of memory.
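Just to make the rule explicit, here is a tiny Python illustration of the decision. The 512GB threshold comes from the Design and Sizing Guide as quoted above; the helper itself is simply my own way of writing it down.

```python
# Encodes the boot-device rule above: with more than 512GB of memory, do not boot
# from USB/SD; use a local magnetic disk or SSD instead, so the core dump partition
# has room for the VSAN traces. Threshold per the VSAN Design and Sizing Guide.

def recommended_boot_device(host_memory_gb: int) -> str:
    if host_memory_gb > 512:
        return "local magnetic disk or SSD"
    return "USB/SD (minimum 4GB, 8GB recommended) or local disk"

for memory in (128, 512, 768):
    print(f"{memory}GB host -> {recommended_boot_device(memory)}")
```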

Please note that this is what has been tested by VMware and will be supported, so this is not just any recommendation; it can have an impact on support!