Essential Virtual SAN book, rough cut online now!

A couple of weeks back Eric Sloof broke the news about the book Cormac Hogan and I are working on: Essential Virtual SAN. As of this weekend the rough cut edition is back online, and you can see some of the progress Cormac and I have been making over the last couple of months. As we speak we are working on the final chapters… so hopefully the rough cut will be complete before the end of the month!

Note that this is a rough cut, which means the book will still go through tech review (by VSAN Architect Christos Karamanolis and VMware Integration Engineer Paudie O’Riordan), then editing, and a final review by Cormac and me before it is published. So expect some changes throughout that whole cycle. Nevertheless, I think it is worth reading for those who have a Safari Online account:

http://my.safaribooksonline.com/book/operating-systems-and-server-administration/virtualization/9780133855036

Building a hyper-converged platform using VMware technology part 3

Considering some of the pricing details have been announced, I figured I would write part 3 of my “Building a hyper-converged platform using VMware technology” series (part 1 and part 2). Before everyone starts jumping in on the pricing details, I want to make sure people understand that I have absolutely no responsibilities whatsoever related to this subject; I am just the messenger in this case. To run through this exercise, I figured I would take a popular SuperMicro configuration and ensure that the components used are certified by VMware.

I used the Thinkmate website to get pricing details on the SuperMicro kit. Let’s list the hardware first:

    • 4 hosts each with:
      -> Dual Six-Core Intel Xeon® CPU E5-2620 v2 2.10GHz 15MB Cache (80W)
      -> 128 GB (16GB PC3-14900 1866MHz DDR3 ECC Registered DIMM)
      -> 800GB Intel DC S3700 Series 2.5″ SATA 6.0Gb/s SSD (MLC)
      -> 5 x 1.0TB SAS 2.0 6.0Gb/s 7200RPM – 2.5″ – Seagate Constellation.2
      -> Intel 10-Gigabit Ethernet CNA X540-T2 (2x RJ-45)

The hardware comes to around $30,081; this is the online store price, without any discount. Now the question is, what about Virtual SAN? You would need to license 8 sockets with Virtual SAN in this scenario. Again, this is the online store price without any discount:

  • $2,495 per socket x 8 sockets = $19,960

This makes the cost of the SuperMicro hardware including the Virtual SAN licenses for four nodes in this configuration roughly $50,041. (There is also the option to license Virtual SAN for View per user, at $50 per user.) That is roughly $12,500 per host including the VSAN licenses.

If you do not own vSphere licenses yet, you will need to license vSphere itself as well. I would recommend Enterprise ($2,875 per socket), as with VSAN you automatically get Storage Policy Based Management and the Distributed Switch. Depending on your deployment type, you may also need vCenter Server; a Standard license for vCenter Server is $4,995. Including all VMware licenses, the combined total would be $78,036. That is roughly $19,500 per host including the VSAN and vSphere licenses. Not bad if you ask me.
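To make the math easy to rerun with your own quotes, here is a quick Python sketch of the arithmetic above. The prices are just the snapshot used in this post (online store prices, no discount), so treat it as an illustration rather than a quoting tool:

    # Rough cost model for the 4-node SuperMicro / Virtual SAN configuration above.
    hosts = 4
    sockets_per_host = 2

    hardware_total = 30081             # Thinkmate online price for all 4 hosts
    vsan_per_socket = 2495             # Virtual SAN license
    vsphere_ent_per_socket = 2875      # vSphere Enterprise license
    vcenter_standard = 4995            # vCenter Server Standard license

    sockets = hosts * sockets_per_host                            # 8 sockets to license
    hw_plus_vsan = hardware_total + sockets * vsan_per_socket     # 50,041
    grand_total = (hw_plus_vsan
                   + sockets * vsphere_ent_per_socket
                   + vcenter_standard)                            # 78,036

    print(f"Hardware + VSAN: ${hw_plus_vsan:,} (${hw_plus_vsan / hosts:,.0f} per host)")
    print(f"All VMware licenses included: ${grand_total:,} (${grand_total / hosts:,.0f} per host)")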

I want to point out that I did not include Support and Maintenance costs. As these depend on which type of support you require and what type of vSphere licenses you have, I felt there were too many variables to make a fair comparison. It should also be noted that many storage solutions come with very limited first-year support… Before you do a comparison, make sure to look at what is included and what will need to be bought separately for proper support.

** Disclaimer: please run through these numbers yourself, and validate the HCL before purchasing any equipment. I cannot be held responsible for any pricing/quoting errors; hardware prices can vary from day to day, and this exercise was for educational purposes only! **

Rebuilding your Virtual SAN Lab? Wipe the disks first!

Are you ready to rebuild your Virtual SAN lab, going from the beta builds to GA code, vSphere 5.5 U1? One thing I noticed is that the installer is extremely slow when there are Virtual SAN partitions on disk. It sits at “VSAN: successfully initialized” for a long time, and the “scanning disks” part takes equally long. Eventually I succeeded, but it just took a long time. It could be because I am running with an uncertified disk controller, of course; either way, if the installer appears stuck at that point, there is a simple solution.

Just wipe ALL disks before doing the installation. I used the GParted live ISO to wipe all my disks clean: delete all partitions and select “apply”. It takes a couple of minutes, but it saved me at least 30 minutes of waiting during the installation.
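If you prefer scripting it over clicking through GParted, a minimal Python sketch of the same idea, run as root from a Linux live environment, could look like this. It simply zeroes the partition tables at the start and end of each disk (GPT keeps a backup copy at the end of the disk). The device names are placeholders for my lab disks, so triple-check yours before running anything this destructive:

    #!/usr/bin/env python3
    # DESTRUCTIVE: zeroes the primary and backup partition tables of the
    # listed disks so the ESXi installer no longer finds stale VSAN partitions.
    import os

    DISKS = ["/dev/sdb", "/dev/sdc"]   # example devices -- verify with lsblk first!
    MiB = 1024 * 1024

    for disk in DISKS:
        with open(disk, "r+b") as dev:
            size = dev.seek(0, os.SEEK_END)   # disk size in bytes
            dev.seek(0)
            dev.write(b"\x00" * MiB)          # primary GPT/MBR at the start
            dev.seek(size - MiB)
            dev.write(b"\x00" * MiB)          # backup GPT at the end
        print(f"wiped partition tables on {disk}")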

Virtual SAN GA aka vSphere 5.5 Update 1

Just a quick note for those who had not noticed yet: Virtual SAN went GA today, so get those download engines running and pull in the vSphere 5.5 Update 1 builds from the VMware download site.

It looks like the HCL is being updated as we speak. The Dell T620 was just added as the first Virtual SAN Ready Node, and I expect many more to follow in the days to come. (I just published a white paper with multiple configurations.) The list of supported disk controllers has probably quadrupled as well.

There are a couple of KB articles I want to explicitly call out. The first is around upgrades from Beta to GA:

Upgrade of Virtual SAN cluster from Virtual SAN Beta to Virtual SAN 5.5 is not supported.
Disable Virtual SAN Beta, and perform fresh installation of Virtual SAN 5.5 for ESXi 5.5 Update 1 hosts. If you were testing Beta versions of Virtual SAN, VMware recommends that you recreate data that you want to preserve from those setups on vSphere 5.5 Update 1. For more information, see Retaining virtual machines of Virtual SAN Beta cluster when upgrading to vSphere 5.5 Update 1 (KB 2074147).

The second is around Virtual SAN support when using unsupported hardware:

Using uncertified hardware may lead to performance issues and/or data loss. The reason for this is that the behavior of uncertified hardware cannot be predicted. VMware cannot provide support for environments running on uncertified hardware.

Last but not least, a link to the documentation, and of course a link to my VSAN page (vmwa.re/vsan), which holds a lot of links to great articles.

VSAN Design Consideration: Booting ESXi from USB with VSAN?

One thing most of you probably won’t realize is that there is a design consideration with VSAN when it comes to installing ESXi. Many of you have probably been installing ESXi on USB or SD, and this is still supported with VSAN. There is one caveat, however, and it is around the total amount of memory in a host. The design consideration is fairly straightforward and is documented in the VSAN Design and Sizing Guide; to make it a bit easier to find, I have copied/pasted it here for your convenience:

  • Use SD, USB, or hard disk devices as the installation media whenever ESXi hosts are configured with as much as 512GB memory. Minimum size for USB/SD card is 4GB, recommended 8GB.
  • Use a separate magnetic disk or solid-state disk as the installation device whenever ESXi hosts are configured with more than 512GB memory.

You may wonder what the reason is: VSAN uses the core dump partition to store VSAN traces, which can be used by VMware Global Support Services and the VMware Engineering team for root cause analysis when needed. So keep this in mind when configuring hosts with more than 512GB of memory.
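Purely as an illustration of the guideline (the threshold and device list simply restate the two bullets above; this is not an official VMware tool), a quick sketch:

    # Illustrative check of the boot-device guideline above: hosts with more
    # than 512GB of memory need a magnetic or solid-state boot disk so there
    # is room for the core dump partition that stores the VSAN traces.

    def supported_boot_devices(memory_gb: float) -> list[str]:
        if memory_gb <= 512:
            return ["SD card (4GB minimum, 8GB recommended)",
                    "USB (4GB minimum, 8GB recommended)",
                    "magnetic disk", "solid-state disk"]
        return ["magnetic disk", "solid-state disk"]

    for mem in (128, 512, 768):
        print(f"{mem}GB host -> boot from: {', '.join(supported_boot_devices(mem))}")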

Please note that this is what has been tested by VMware and what will be supported; it is not just any recommendation, it could have an impact on support!