vSphere Storage Appliance – Why I think it is cool

While doing workshops and presentations for some of our partners and customers, one of the comments I usually hear when discussing the vSphere Storage Appliance is “Why not just buy a cheap NAS device?” Well, there are a couple of arguments:

  • Support: many lower-end, cheap devices are not on the HCL
  • Management: most storage devices require specific knowledge and can be difficult to set up
  • Resiliency: yes, resiliency…

Resiliency is what I want to expand on. I like the vSphere Storage Appliance because of the resilience it offers. Many lower-end storage devices have a single storage processor, and some even a single power supply, but that is different for the VSA. Let's assume you have a 3-node cluster, with each of these three nodes serving up its local storage. What will that look like?

I hope the image is clear, but what we see above is a three-node cluster. Each node holds two volumes: one “active” volume and a replica volume. The replica volume is where the resiliency comes into play. If one of the nodes fails, one of the other nodes, depending on which holds the replica, picks up! Yes indeed, the VSA volumes are RAID-1, and the failure is literally detected in seconds. Note that this is a synchronous technique, so an acknowledgement is required from both the active and the replica of the datastore.
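The synchronous write path described above can be sketched in a few lines of Python. This is a hypothetical illustration rather than actual VSA code; the class and method names are made up for the example. The key point is that a write is only acknowledged once both copies have persisted it:

```python
# Hypothetical sketch (not VSA code) of a synchronous RAID-1 mirror:
# a write is acknowledged to the guest only after BOTH the active
# volume and its replica on another node have persisted it.

class Volume:
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def write(self, block_id, data):
        self.blocks[block_id] = data
        return True  # block persisted on this copy


class MirroredDatastore:
    def __init__(self, active, replica):
        self.active = active
        self.replica = replica

    def write(self, block_id, data):
        # Synchronous: require an ack from both sides before
        # acknowledging the write back to the virtual machine.
        ok_active = self.active.write(block_id, data)
        ok_replica = self.replica.write(block_id, data)
        if not (ok_active and ok_replica):
            raise IOError("write not acknowledged by both copies")
        return "ack"


ds = MirroredDatastore(Volume("ESXi-1:active"), Volume("ESXi-2:replica"))
print(ds.write(42, b"hello"))  # -> ack, both copies now hold block 42
```

Because the acknowledgement waits for both copies, the replica is always an exact mirror of the active volume, which is what makes the near-instant takeover possible.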

In my example above, when ESXi-1 (on the left) fails, ESXi-2 (in the middle) picks up, as it is holding the replica. Note that this is a seamless fail-over if the VM is running on a node other than ESXi-1. The fail-over literally takes seconds, and the replica will be available through the same IP address. If the VM happened to be running on ESXi-1, then vSphere HA would restart that virtual machine, just as in any other scenario.
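To make the fail-over behaviour concrete, here is a minimal sketch, again hypothetical rather than actual VSA code: the surviving node that holds the replica re-exports the datastore under the original IP address, so clients keep the same target and the switch is transparent to them.

```python
# Hypothetical sketch (not VSA code) of replica takeover on node failure.
# The datastore stays reachable at the SAME IP because the surviving
# node holding the replica starts exporting it under that address.

class Node:
    def __init__(self, name, alive=True):
        self.name = name
        self.alive = alive
        self.exports = {}  # datastore IP -> what this node serves there


def fail_over(datastore_ip, active_node, replica_node):
    """If the active node is down, the replica node re-exports the
    datastore under the original IP, so clients keep the same target."""
    if active_node.alive:
        return active_node
    replica_node.exports[datastore_ip] = "replica promoted to active"
    return replica_node


esxi1 = Node("ESXi-1", alive=False)   # the failed node from the example
esxi2 = Node("ESXi-2")                # holds the replica
owner = fail_over("10.0.0.10", esxi1, esxi2)
print(owner.name)      # -> ESXi-2 now serves the datastore
print(esxi2.exports)   # -> {'10.0.0.10': 'replica promoted to active'}
```

The IP address here is of course made up; the point is only that the export address does not change, which is why the fail-over is seamless for VMs running on the other nodes.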

This video demos what it looks like when a host fails:

For more details on the VSA I would like to recommend the following articles by Cormac Hogan:



    1. Kelly O says

      I am a little intrigued by this, but the two things that scare me are performance and that I can’t run vCenter as a VM on it. I think this is the biggest thing VMware guessed wrong on. This is obviously for the SMB market, but who wants to have an extra server just for vCenter?

    2. Andrew Fidel says

      My only problem is that this technique doesn’t scale. Once your storage requirements exceed what can be hosted internally with the built-in controller, it’s cheaper to buy an array than it is to expand the storage of the nodes. An HP P2000 SAS with dual controllers (or an equivalent low-end array from your vendor of choice) can be had for less than the licensing cost for two nodes of either the HP P4000 VSA or the VMware VSA, and you don’t end up needing 2x or 3x the number of disks. I really liked the idea of a brick of storage, RAM, and compute resources, but ultimately the economics fell down when our needs exceeded a fairly small setup. As a techie I’m glad I got to play with an interesting technology, but as a manager responsible for my budget I’m definitely not as favorable on the technology as it is priced today.

      • says

        Which usually also requires specific knowledge / management etc.

        I love the VSA for its simplicity and resiliency. Simplicity has a lot of value for small shops with a single admin doing everything.

    3. Paul says

      You can get a fully redundant iSCSI storage device on the HCL for about $3,000 US, 2k less than the VSA costs. Further, if you have several branches, you could use one of these in each. In a scenario where you need more than one VSA, you are going to have to buy a vCenter license per branch and run them all in Linked Mode. And you can’t put vCenter on the hosts that it manages. The product is pretty cool as is, but it has a very limited use case at the moment.

        • Christian van Barneveld says

          QNAP for example (as mentioned here before):
          Add some disks and you are ready for less than 3k.

          Frequently used for SMB and small business environments. We use it for our test environment. VMware certified and scalable.

          I agree with the simplicity of the VSA, but the price is not very competitive. A simple iSCSI device (like a QNAP) is just as easy (or hard) to maintain as the VSA, in my opinion.

          • says

            Considering the price and the size of the box, I guess it is fairly safe to assume it does not have a dual storage processor; heck, it even has a single power cord. From a resiliency perspective you cannot compare it to the VSA.

            • Christian van Barneveld says

              True, but it fits the requirements for a test environment. And guess what: you save the cost of a hardware RAID controller and disks, three times over, for your servers!
              Do you require a dual power supply? Take another model (http://www.wifimedia.eu/catalog/qnapturbostationts809urp28ghz2gb-p-702.html for example), and take another one for your second business location, and with replication you have DR in place. How cool is that! That’s also part of resiliency 😉
              VSA is cool and very simple, but it’s not flexible, and it again requires RAID cards and disks in your servers. Don’t forget to calculate that into the TCO and compare it with other solutions that fit your requirements.

            • says

              Well, that is “resiliency”, but what would you use to replicate, and how seamless would the failover be? I know the answer to the last one… not :-).

          • says

            I have one of these QNAPs; they are a Linux file server, and I have to get my Chinese-speaking coworker to get support. Good for a Veeam backup target, or even a lab, but for an SMB with predictable small growth and high uptime requirements, it’s in another league.

            We looked at a lot of solutions before we settled on using the VSA for our bundle.

    4. Frank says

      Or you can look at the P4500 starter kit, which includes 10 VSA licenses for free. So you can cover all of your remote sites and replicate that data back to your central site.

    5. Ruben Renders says

      It’s indeed a nice thing to have.
      But why should we use the VMware VSA instead of the HP Lefthand VSA?

      • says

        Cost, speed of deployment, and the ability to simplify support. We put the VSA on Cisco hardware (C210s) with Cisco switches (2960Ss), and we have a 100% Cisco hardware solution with 100% VMware software. Having seen vendor finger-pointing over what’s wrong happen with a lot of vendors, it’s nice to know that VMware will have to take ownership for its software talking to itself.

        As for the cost, it’s only 3k above the cost of Essentials Plus, so it’s effectively 1k a node. True, it has a lot of limits, but if you want a simple, cost-effective, no-single-point-of-failure environment, it can be done for under 40k including hardware and setup labor, and be kept within 6U of space for servers and storage.

    6. G says

      Still, if those customers can afford the performance recommendation of an 8-disk RAID 10 deployment with 3 hosts, I’d question why they wouldn’t be buying a NAS. Even the cheaper QNAPs etc. don’t require specific knowledge; any monkey can create NFS exports in a web interface, and they are supported.

      I’m not sure the dual power cord would be a major consideration for them either. If it really was, they are in a different category and should be looking at a different solution. You can’t spend money on vSphere licenses and ignore storage and networking, IMO; that model never works.

      • says

        While it’s true that the QNAP is great for cost control (cheap, and uses off-the-shelf drives), I can’t get 4-hour response support on it, and I can’t take a failure of certain parts like the motherboard or memory without a major outage. It can’t do an online update of its firmware, and it does not have a plugin to integrate management or deployment with vCenter.

        Remember, just because a customer is small doesn’t mean they don’t have high uptime requirements.

    7. Udubplate says

      Note that code upgrades to the VSA are disruptive in v1.0 according to the product manager. This is a big limitation in my mind. When this is fixed in a future release it will be a compelling solution for many use cases.

      I’d like to see VMware combine the use of the mirrored I/O used as part of Storage vMotion to enable writing to parallel datastores (i.e. FT for storage). I’m sure it’s already in the works, as I can’t be the first one to think of that. This could be good enough, or even better, compared to a VSA, depending on your use case and drivers.

      • John says

        From our conversations with VMware this is a top development priority. The other big thing is working with the Linux vCenter (it runs Tomcat, I think, for management, so there is no excuse not to port to Linux).

    8. says

      I’ve heard that your vCenter must also reside on the same subnet as the VSA. That’s probably not an issue for small companies, but what about big shops that have several small affiliates (with their own budget)? I’m not going to have a vCenter in these remote locations. Not to mention it sounds like you can only have one VSA per vCenter. I’m sure this isn’t the market for the VSA, but it is one that VMware should consider in my opinion.

      As far as cheap SANs, what about the Iomega PX4-300r? Retails for $2800 for 8TB of storage (a bit extra for the redundant PSU). Seems reasonable for small sites. I think it only has one storage processor, but it seems like it would be very competitive with the VSA to me and is on the HCL.


    9. Attila Bognár says

      I am not sure a low-end iSCSI storage array (EMC AX4, Fujitsu DX60; there may be others) costs more and is more complicated than buying three servers with lots of disks and RAID controllers (plus the RAM, CPU, and GbE ports it needs to operate). Once you set up a storage array you usually don’t have to care much about it. That seems not to be the case with the VSA (think about the upgrades). Maybe even the power consumed is less.

      This technology is cool but should be free. Or StorMagic is cheaper.