Introduction to VMware vSphere Virtual SAN

Many of you have seen the announcements by now, and I am guessing you are as excited as I am about the public beta of Virtual SAN with vSphere 5.5. So what is Virtual SAN, formerly known as "VSAN" or "vCloud Distributed Storage", all about?

Virtual SAN (VSAN from now on in this article) is a software-based distributed storage solution that is built directly into the hypervisor. No, this is not a virtual appliance like many of the other solutions out there; it sits right inside your ESXi layer. VSAN is about simplicity, and when I say simple I do mean simple. Want to play around with VSAN? Create a VMkernel NIC for VSAN traffic and enable VSAN at the cluster level. Yes, that is it!
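
For those who like to script things, below is a rough pyvmomi (vSphere Python SDK) sketch of that cluster-level enablement. Treat it purely as an illustration: the vCenter address, credentials and cluster name are placeholders, the auto-claiming of local disks is my assumption, and the type names follow how pyvmomi exposes the vSphere 5.5 VsanClusterConfigInfo object, so double-check them against the SDK version you have installed.

  from pyVim.connect import SmartConnect
  from pyVmomi import vim

  # Connect to vCenter Server (placeholder details)
  si = SmartConnect(host="vcenter.local", user="administrator", pwd="secret")
  content = si.RetrieveContent()

  def find_cluster(content, name):
      """Walk the inventory and return the cluster with the given name."""
      view = content.viewManager.CreateContainerView(
          content.rootFolder, [vim.ClusterComputeResource], True)
      try:
          for cluster in view.view:
              if cluster.name == name:
                  return cluster
      finally:
          view.DestroyView()
      return None

  cluster = find_cluster(content, "VSAN-Cluster")

  # Enable VSAN on the cluster; autoClaimStorage lets VSAN claim eligible local disks
  spec = vim.cluster.ConfigSpecEx()
  spec.vsanConfig = vim.vsan.cluster.ConfigInfo(
      enabled=True,
      defaultConfig=vim.vsan.cluster.ConfigInfo.HostDefaultInfo(autoClaimStorage=True))
  cluster.ReconfigureComputeResource_Task(spec, True)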

Before we get a bit more into the weeds: what are the benefits of a solution like VSAN? What are the key selling points?

  • Software-defined – Use industry-standard hardware; as long as it is on the HCL you are good to go!
  • Flexible – Scale as needed and when needed. Just add more disks or more hosts; yes, both scale-up and scale-out are possible.
  • Simple – Ridiculously easy to manage! Ever tried implementing or managing some of the storage solutions out there? If you have, you know what I am getting at!
  • Automated – Per-virtual-machine policy-based management. Yes, virtual machine level granularity. No more policies defined at the LUN/datastore level, but at the level where you want them to be!
  • Converged – It allows you to create dense / building-block style solutions!

Okay, that all sounds great, but where does it fit in? What are the use cases for VSAN when it is released?

  • Virtual desktops
    • Scale-out model; using predictable, repeatable infrastructure blocks (performance, capacity, etc.) lowers costs and simplifies operations
  • Test & Dev
    • Avoids the acquisition of expensive storage (lowering TCO) and offers fast time to provision
  • Big Data
    • Scale-out model with high-bandwidth capabilities
  • Disaster recovery target
    • Cheap DR solution, enabled through a feature like vSphere Replication that allows you to replicate to any storage platform

So let's get a bit more technical; just a bit, as this is an introduction, right…

When VSAN is enabled, a single shared datastore is presented to all hosts that are part of the VSAN-enabled cluster. Typically all hosts will contribute performance (SSD) and capacity (magnetic disks) to this shared datastore, which means that as your cluster grows, your datastore grows with it. (This is not a requirement; there can be hosts in the cluster that just consume the datastore!) Note that there are some requirements for hosts that want to contribute storage: each of those hosts needs at least one SSD and one magnetic disk. Also good to know is that in this beta release a VSAN-enabled cluster is limited to 8 hosts (a total cluster size of 8 hosts, including hosts that do not contribute storage to your VSAN datastore).
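
Because all contributing hosts feed the same pool, the easiest way to see the effect of adding disks or hosts is to look at the aggregate capacity of the VSAN datastore itself. Here is a small pyvmomi sketch, reusing the connection from the example above; the check on the datastore type string is my assumption of how Virtual SAN datastores are reported, so verify it in your own environment.

  # Reuses 'content' and 'vim' from the earlier sketch
  view = content.viewManager.CreateContainerView(
      content.rootFolder, [vim.Datastore], True)
  GB = 1024 ** 3
  for ds in view.view:
      if ds.summary.type.lower() == "vsan":   # assumed type string for Virtual SAN datastores
          print("%s: %d GB capacity, %d GB free"
                % (ds.name, ds.summary.capacity // GB, ds.summary.freeSpace // GB))
  view.DestroyView()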

As expected, VSAN relies heavily on SSDs for performance. Every write I/O goes to SSD first and is eventually destaged to the magnetic disks (SATA). As mentioned, you can set policies at the virtual machine level, and these policies dictate, for instance, what percentage of your read I/O you can expect to come from SSD. On top of that, you can use these policies to define the availability of your virtual machines. Yes, you read that right: you can have different availability policies for virtual machines sitting on the same datastore. For resiliency, "objects" are replicated across multiple hosts; how many hosts/disks are involved thus depends on the policy.

VSAN does not require a local RAID set, just a bunch of local disks. Whether you define 1 host failure to tolerate or, for instance, 3 host failures to tolerate, VSAN will ensure enough replicas of your objects are created. Is this awesome or what? Let's take a simple example to illustrate that. We configure 1 host failure to tolerate and create a new virtual disk. This means that VSAN will create 2 identical objects and a witness. The witness is there just in case something happens to your cluster, to help decide who takes control in case of a failure; the witness is not a copy of your object, let that be clear! Note that the number of hosts in your cluster can limit the number of "host failures to tolerate". In other words, in a 3-node cluster you cannot create an object that is configured with 2 "host failures to tolerate". Difficult to visualize? At a high level, a virtual disk that tolerates 1 host failure ends up as two data copies on two different hosts plus a witness on a third host.
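
To make the component math explicit, here is a tiny back-of-the-envelope helper. The replica count (n+1) and the minimum host count (2n+1) follow directly from the example above; the witness count shown is a minimum, as the actual number of witness components can vary with the layout.

  def vsan_layout(failures_to_tolerate):
      """Rough component math for a single VSAN object (a sketch, not an official formula)."""
      replicas = failures_to_tolerate + 1        # full copies of the object
      witnesses = failures_to_tolerate           # at least this many witness components
      min_hosts = 2 * failures_to_tolerate + 1   # hosts needed to place copies plus witnesses
      return replicas, witnesses, min_hosts

  print(vsan_layout(1))   # -> (2, 1, 3): 2 identical copies, 1 witness, 3 hosts minimum
  print(vsan_layout(2))   # -> (3, 2, 5): which is why a 3-node cluster cannot do 2 failures to tolerate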

With all this replication going on, are there requirements for networking? At a minimum VSAN requires a dedicated 1Gbps NIC port. Needless to say, 10Gbps is preferred with solutions like these, and you should always have an additional NIC port available for resiliency purposes. There are no requirements from a virtual switch perspective; you can use either the Distributed Switch or the plain old vSwitch, both work fine.
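
Tagging a VMkernel NIC for VSAN traffic can be scripted as well. Another hedged pyvmomi sketch: it assumes the cluster object from the first example, picks its first host, and uses "vsan" as the nic type string, which is how I understand the vSphere 5.5 API labels this traffic type; verify both against your environment before relying on it.

  # 'cluster' comes from the earlier sketch; take its first host as an example
  host = cluster.host[0]
  vnic_mgr = host.configManager.virtualNicManager

  # Tag an existing VMkernel interface (vmk1 here is a placeholder) for VSAN traffic
  vnic_mgr.SelectVnicForNicType("vsan", "vmk1")

  # List which interfaces currently carry VSAN traffic
  print(vnic_mgr.QueryNetConfig("vsan").selectedVnic)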

To conclude, vSphere Virtual SAN aka VSAN is a brand new hypervisor-based distributed storage platform that enables the convergence of compute and storage resources. It provides virtual machine level granularity through policy-based management. It allows you to control availability and performance in a way I have never seen before: simple and efficient. I am hoping everyone will be pounding away on the public beta, so sign up today: http://www.vmware.com/vsan-beta-register!

Startup News Flash part 3

Who knew that so quickly after part 1 and part 2 there would be a part 3? I guess it is not that strange considering VMworld is coming up soon and there was a Flash Memory Summit last week. It seems there is a battle going on in the land of the AFAs (all-flash arrays), and it isn't about features / data services as one would expect. No, they are battling over capacity density, aka how many TBs can I cram into a single U. I am not sure how relevant this is going to be over time; yes, it is nice to have dense configurations, and yes, it is awesome to have a billion IOps in 1U, but most of all I am worried about the availability and integrity of my data. So instead of going all out on density, how about going all out on data services? Not that I am saying density isn't useful, it is just… Anyway, I digress…

One of the companies that presented at Flash Memory Summit was Skyera. Skyera announced an interesting new product called skyEagle. Another all-flash array, I can hear many of you thinking, and yes I thought exactly the same… but skyEagle is special compared to the others. This 1U box manages to provide 500TB of flash capacity, and that is 500TB of raw capacity. Just imagine what that could end up being after Skyera's hardware-accelerated data compression and data deduplication have done their magic. Pricing-wise, Skyera has set a list price for the read-optimized half-petabyte (500TB) skyEagle storage system of $1.99 per GB, or $0.49 per GB with data reduction technologies. More specs can be found here. Also, I enjoyed reading this article on The Register, which broke the news…

David Flynn (former Fusion-io CEO) and Rick White (Fusion-io founder) have started a new company called Primary Data. The Wall Street Journal reported on this and more or less revealed what they will be working on: "that essentially connects all those pools of data together, offering what Flynn calls a 'unified file directory namespace' visible to all servers in company computer rooms, as well as those 'in the cloud' that might be operated by external service companies." This reminds me of AetherStore, or at least the description aligns with what AetherStore is doing. Definitely a company worth tracking if you ask me.

One of the companies I did an introduction post on is SimpliVity. I liked their approach to converged infrastructure, as it not only combines compute and storage but also includes backup, replication, snapshots, dedupe and cloud integration. This week they announced an update to their OmniCube CN-3000 platform and introduced two new platforms, the OmniCube CN-2000 and the OmniCube CN-5000. So what are these two new OmniCubes? Basically, the CN-5000 is the big brother of the CN-3000 and the CN-2000 is its kid brother. I can understand why they introduced these, as it will help expand the target audience; "one size fits all" doesn't work when the cost for "all" is the same, because the TCO/ROI then changes based on your actual requirements, and in a negative way. One of the features that made SimpliVity unique, the OmniStack Accelerator, has also had a major update. This is a custom-designed PCIe card that does inline dedupe and compression; basically an offload mechanism for dedupe and compression where others leverage the server CPU. Another nice thing SimpliVity added is support for VAAI. If you are interested in getting to know more, two interesting white papers were released: a deep dive by Hans de Leenheer and Stephen Foskett, and one with a focus on "data management" by Howard Marks.

A slightly older announcement, but as I spoke with these folks this week and they demoed their GA product, I figured I would add them to the list. Ravello Systems developed a cloud hypervisor that abstracts your virtualization layer and allows you to move virtual machines / vApps between clouds (private and public) without the need to rebuild your virtual machines or guest OSes. In other words, they can move your vApps from vSphere to AWS to Rackspace without painful conversions every time. Pretty neat, right? On top of that, Ravello is your single point of contact, meaning that they also act as a cloud broker: you pay Ravello and they take care of AWS / Rackspace etc. Of course they allow you to do things like snapshotting, cloning and creating complex network configurations if needed. They managed to impress me during the short call we had, and if you want to know more I recommend reading this excellent article by William Lam or visiting their booth during VMworld!

That is it for part 3. I bet I will have another part next week, during or right after VMworld, as press releases are coming in every hour at this point. Thanks for reading,

With a single Datastore can I still use HA’s Datastore heartbeating?

I had a question last week about HA's datastore heartbeating: does datastore heartbeating still work if you only have 1 datastore in your environment? I can understand where the question comes from, as HA throws an error saying that you need a minimum of 2 datastores for HA datastore heartbeating to function correctly. I want to point out that even though HA says 2 datastores is the minimum, even when only one datastore is available it will be used for heartbeat purposes. Yes, the error will remain on your cluster, and yes, you can suppress it using "das.ignoreInsufficientHbDatastore". I figured others might be hitting the same error and have the same question, so why not document it?!
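
For those who prefer to set that advanced option through the API rather than the client, here is a minimal pyvmomi sketch. It assumes you already have a cluster object (looked up the same way as in the VSAN example earlier on); the option key is the one from the error described above, and setting it to "true" is what suppresses the warning.

  from pyVmomi import vim

  # Suppress the "insufficient heartbeat datastores" configuration issue on a cluster
  spec = vim.cluster.ConfigSpecEx()
  spec.dasConfig = vim.cluster.DasConfigInfo()
  spec.dasConfig.option = [
      vim.option.OptionValue(key="das.ignoreInsufficientHbDatastore", value="true")
  ]
  cluster.ReconfigureComputeResource_Task(spec, True)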

Minimum bandwidth requirements per concurrent vMotion?

I have been digging for a long time to figure out what the minimum bandwidth requirement is per concurrent vMotion, and I finally managed to get a statement. In the past it was stated that 622Mbps was the minimum required bandwidth for vMotion; it appears this is incorrect for vSphere 5.0 and higher. vSphere 5.0 introduced a feature called Stun During Page Send (SDPS), which decreased the bandwidth requirement from 622Mbps down to 250Mbps per concurrent vMotion.
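
A quick way to put that number into perspective is to multiply it by the number of vMotions you expect to run concurrently on a host. The helper below is nothing more than that multiplication; the 1GbE/10GbE comments are just the usual concurrency limits as I remember them, so treat them as an illustration.

  def min_vmotion_bandwidth_mbps(concurrent_vmotions, per_vmotion_mbps=250):
      """Minimum aggregate bandwidth for a number of concurrent vMotions (vSphere 5.0+)."""
      return concurrent_vmotions * per_vmotion_mbps

  print(min_vmotion_bandwidth_mbps(4))   # 1000 Mbps, i.e. a fully used 1GbE link
  print(min_vmotion_bandwidth_mbps(8))   # 2000 Mbps, comfortably within a 10GbE link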

Always nice to know, right?!

ESXi “Management traffic” tickbox, what does it do?

I have seen this popping up various times over the last few years: that little tickbox on your VMkernel NIC that says "Management traffic" (aka management network), what is it for? What if I untick it, will SSH to that VMkernel interface still work? Will the HA heartbeat still work? Can I still ping the VMkernel NIC? Those are all questions I have had in the past, and I can understand why… I would say that the term "Management traffic" is really, really poorly chosen. But why?

The feature described as "Management traffic" does nothing more than enable that VMkernel NIC for HA heartbeat traffic. Yes, that is it. Even if you disable this feature, you can still use the VMkernel NIC's associated IP address for adding the host to vCenter Server. You can still SSH to that IP address if you have SSH enabled. So keep that in mind.

Yes, I fully agree, it is very confusing, but there you have it: the "Management traffic" tickbox enables the HA heartbeat network, nothing more and nothing less.
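
If you want to check or change this setting from a script, the same VirtualNicManager API used for the VSAN traffic type earlier applies; "management" is simply another nic type. A small pyvmomi sketch, assuming 'host' is a vim.HostSystem object you have already retrieved:

  vnic_mgr = host.configManager.virtualNicManager

  # Which VMkernel interfaces are ticked for "Management traffic" (i.e. HA heartbeats)?
  print("Selected for management:", vnic_mgr.QueryNetConfig("management").selectedVnic)

  # Unticking the box is the equivalent of the call below; SSH and ping to the
  # interface's IP address keep working regardless.
  # vnic_mgr.DeselectVnicForNicType("management", "vmk0")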