How do you know where an object is located with Virtual SAN?

You must have been wondering the same thing after reading the introduction to Virtual SAN. Last week at VMworld I received many questions on this topic, so I figured it was time for a quick blog post on the matter. How do you know where a storage object resides with Virtual SAN when you are striping across multiple disks and have multiple hosts for availability purposes? Yes, I know this is difficult to grasp; even with just multiple hosts for resiliency, where are things placed? The diagram gives an idea, but that is just from an availability perspective (in this example “failures to tolerate” is set to 1). If you have a stripe width of 2 disks configured, then imagine what happens to that picture. (Before I published this article, I spotted this excellent primer by Cormac on this exact topic…)

Luckily you can use the vSphere Web Client to figure out where objects are placed:

  • Go to your cluster object in the Web Client
  • Click “Monitor” and then “Virtual SAN”
  • Click “Virtual Disks”
  • Click your VM and select the object

The below screenshot depicts what you could potentially see. In this case the policy was configured with “1 host failure to tolerate” and “disk striping set to 2”. I think the screenshot explains it pretty well, but let’s go over it.

The “Type” column shows what it is: a “witness” (no data) or a “component” (data). The “Component state” column shows whether it is currently available (active) or not. The “Host” column shows on which host it resides, and the “SSD Disk Name” column shows which SSD is used for read caching and write buffering. If you scroll to the right you can also see on which magnetic disk the data is stored, in the column called “Non-SSD Disk Name”.

Now in our example below you can see that “Hard disk 2” is configured as RAID 1, immediately followed by RAID 0. The “RAID 1” refers to “availability” in this case, aka “component failures”, and the “RAID 0” is all about disk striping. As we configured “component failures” to 1 you see two copies of the data, and as we said we would like to stripe across two disks for performance you see a “RAID 0” underneath each copy. Note that this is just an example to illustrate the concept; it is not a best practice or recommendation, as that should be based on your requirements! Last but not least we see the “witness”, which is used in case of a failure of a host. If host 10.20.177.19 were to fail or be isolated from the network somehow, then the witness would be used by host 10.20.177.17 to claim ownership. Makes sense right?
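To make the layout a bit more tangible, here is a minimal Python sketch that models the component tree for this example. It is purely illustrative; the single witness is a simplification, as the actual witness count depends on placement.

```python
# Minimal model of a Virtual SAN object layout, for illustration only.
# Assumes the simple case from the screenshot: FTT=1, stripe width=2, one witness.

def vsan_layout(failures_to_tolerate: int, stripe_width: int) -> dict:
    """Return a rough component breakdown for a single virtual disk object."""
    replicas = failures_to_tolerate + 1          # RAID-1 mirror copies
    data_components = replicas * stripe_width    # RAID-0 stripes per mirror copy
    witnesses = 1                                # simplified: at least one tie-breaker
    return {
        "replicas": replicas,
        "data_components": data_components,
        "witnesses": witnesses,
        "total_components": data_components + witnesses,
    }

print(vsan_layout(failures_to_tolerate=1, stripe_width=2))
# {'replicas': 2, 'data_components': 4, 'witnesses': 1, 'total_components': 5}
```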

Virtual SAN object location

Hope this helps you understand Virtual SAN object location a bit better… When I have the time available, I will try to dive a bit more into the details of Storage Policy Based Management.

Introduction to VMware vSphere Virtual SAN

Many of you have seen the announcements by now and I am guessing that you are as excited as I am about the announcement of the public beta of Virtual SAN with vSphere 5.5. What is Virtual SAN, formerly known as “VSAN” or “vCloud Distributed Storage” all about?

Virtual SAN (VSAN from now on in this article) is a software based distributed storage solution which is built directly into the hypervisor. No, this is not a virtual appliance like many of the other solutions out there; it sits right inside your ESXi layer. VSAN is about simplicity, and when I say simple I do mean simple. Want to play around with VSAN? Create a VMkernel NIC for VSAN and enable VSAN at the cluster level. Yes, that is it!

vSphere Virtual SAN

Before we get a bit more into the weeds, what are the benefits of a solution like VSAN? What are the key selling points?

  • Software defined – Use industry standard hardware, as long as it is on the HCL you are good to go!
  • Flexible – Scale as needed and when needed. Just add more disks or add more hosts, yes both scale-up and scale-out are possible.
  • Simple – Ridiculously easy to manage! Ever tried implementing or managing some of the storage solutions out there? If you did, you know what I am getting at!
  • Automated – Per virtual machine policy based management. Yes, virtual machine level granularity. No more policies defined on a per LUN/Datastore level, but at the level where you want it to be!
  • Converged – It allows you to create dense / building block style solutions!

Okay, that sounds great, right? But where does it fit in? What are the use cases for VSAN when it is released?

  • Virtual desktops
    • Scale-out model; using predictable (performance etc.) repeatable infrastructure blocks lowers costs and simplifies operations
  • Test & Dev
    • Avoids acquisition of expensive storage (lowers TCO), fast time to provision
  • Big Data
    • Scale out model with high bandwidth capabilities
  • Disaster recovery target
    • Cheap DR solution, enabled through a feature like vSphere Replication that allows you to replicate to any storage platform

So let’s get a bit more technical, but just a bit, as this is an introduction, right…

When VSAN is enabled, a single shared datastore is presented to all hosts which are part of the VSAN enabled cluster. Typically all hosts will contribute performance (SSD) and capacity (magnetic disks) to this shared datastore. This means that when your cluster grows, your datastore will grow with it. (This is not a requirement; there can be hosts in the cluster which just consume the datastore!) Note that there are some requirements for hosts which want to contribute storage. Each host will require at least one SSD and one magnetic disk. Also good to know is that with this beta release the limit on a VSAN enabled cluster is 8 hosts. (Total cluster size of 8 hosts, including hosts not contributing storage to your VSAN datastore.)
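As a back-of-the-napkin illustration (hypothetical host and disk numbers, counting only the magnetic disks since the SSDs act as cache, and ignoring replication overhead), here is how the shared datastore grows as contributing hosts are added:

```python
# Hypothetical example of raw VSAN datastore capacity from contributing hosts.
# SSDs serve as cache rather than capacity, and replication overhead
# (failures to tolerate) is ignored to keep the illustration simple.

hosts = [
    {"name": "esx-01", "magnetic_disks_gb": [1000, 1000]},
    {"name": "esx-02", "magnetic_disks_gb": [1000, 1000]},
    {"name": "esx-03", "magnetic_disks_gb": [1000]},   # contributes less
    {"name": "esx-04", "magnetic_disks_gb": []},       # consumes only
]

raw_capacity_gb = sum(sum(h["magnetic_disks_gb"]) for h in hosts)
print(f"Raw VSAN datastore capacity: {raw_capacity_gb / 1024:.1f} TB")
# Raw VSAN datastore capacity: 4.9 TB
```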

As expected, VSAN relies heavily on SSD for performance. Every write I/O will go to SSD first, and eventually it will be destaged to magnetic disks (SATA). As mentioned, you can set policies at a per virtual machine level. This will also dictate, for instance, what percentage of your read I/O you can expect to come from SSD. On top of that you can use these policies to define availability of your virtual machines. Yes, you read that right: you can have different availability policies for virtual machines sitting on the same datastore. For resiliency, “objects” will be replicated across multiple hosts; how many hosts/disks are involved thus depends on the profile.
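Just to illustrate what such a per virtual machine policy could look like conceptually, here is a hedged sketch; the attribute names below are illustrative examples, not the exact capability names exposed in the beta:

```python
# Illustrative sketch of a per-VM storage policy; the attribute names are
# examples only and do not map one-to-one to the product's capability names.
from dataclasses import dataclass

@dataclass
class VmStoragePolicy:
    host_failures_to_tolerate: int = 1   # number of host/disk failures to survive
    disk_stripes_per_object: int = 1     # stripe width across magnetic disks
    flash_read_cache_pct: float = 0.0    # share of the object reserved in SSD read cache

gold = VmStoragePolicy(host_failures_to_tolerate=2,
                       disk_stripes_per_object=2,
                       flash_read_cache_pct=5.0)
test_dev = VmStoragePolicy()             # defaults: tolerate 1 failure, no reservations

print(gold)
print(test_dev)
```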

VSAN does not require a local RAID set, just a bunch of local disks. Whether you define 1 host failure to tolerate or, for instance, 3 host failures to tolerate, VSAN will ensure enough replicas of your objects are created. Is this awesome or what? Let’s take a simple example to illustrate that. We have configured 1 host failure to tolerate and create a new virtual disk. This means that VSAN will create 2 identical objects and a witness. The witness is there just in case something happens to your cluster, to help decide who will take control in case of a failure; the witness is not a copy of your object, let that be clear! Note that the number of hosts in your cluster could potentially limit the number of “host failures to tolerate”. In other words, in a 3 node cluster you cannot create an object that is configured with 2 “host failures to tolerate”. Difficult to visualize? Well, the diagram below shows what it would look like at a high level for a virtual disk which tolerates 1 host failure.
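The 3 node limitation mentioned above follows from a simple rule of thumb: to tolerate n host failures you need n + 1 copies of the data plus n witnesses, each placed on a different host. A quick sketch of that arithmetic:

```python
# Rule-of-thumb host count for "host failures to tolerate"; a simplification,
# but it shows why a 3 node cluster cannot tolerate 2 host failures.

def min_hosts_required(failures_to_tolerate: int) -> int:
    replicas = failures_to_tolerate + 1   # copies of the data
    witnesses = failures_to_tolerate      # tie-breakers
    return replicas + witnesses

for ftt in range(1, 4):
    print(f"failures to tolerate = {ftt} -> at least {min_hosts_required(ftt)} hosts")
# failures to tolerate = 1 -> at least 3 hosts
# failures to tolerate = 2 -> at least 5 hosts
# failures to tolerate = 3 -> at least 7 hosts
```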

With all this replication going on, are there requirements for networking? At a minimum VSAN will require a dedicated 1Gbps NIC port. Needless to say, 10Gbps would be preferred with solutions like these, and you should always have an additional NIC port available for resiliency purposes. There is no requirement from a virtual switch perspective; you can use either the Distributed Switch or the plain old vSwitch, both will work fine.

To conclude, vSphere Virtual SAN aka VSAN is a brand new hypervisor based distributed storage platform that enables convergence of compute and storage resources. It provides virtual machine level granularity through policy based management. It allows you to control availability and performance in a way I have never seen before: simple and efficient. I am hoping that everyone will be pounding away on the public beta, sign up today: http://www.vmware.com/vsan-beta-register!

Startup intro: SolidFire

This seems to be becoming a true series, introducing startups… Now, in the case of SolidFire I am not really sure if I should use the word startup, as they have been around since 2010. But then again, it is not a consumer solution that they’ve created, and enterprise storage platforms typically take a lot longer to develop and mature. SolidFire was founded in 2010 by Dave Wright, who discovered a gap in the storage market while he was working for Rackspace. The opportunity Dave saw was in the Quality of Service area. Not many storage solutions out there could provide predictable performance in almost every scenario, were designed for multi-tenancy, and offered a rich API. Back then the term Software Defined Storage hadn’t been coined yet, but I guess it is fair to say that is how we would describe it today. This is actually how I got in touch with SolidFire. I wrote various articles on the topic of Software Defined Storage, and tweeted about the topic many times, and SolidFire was one of the companies who consistently joined the conversation. So what is SolidFire about?

SolidFire is a storage company; they sell storage systems and today they offer two models, namely the SF3010 and the SF6010. What is the difference between these two? Cache and capacity! With the SF3010 you get 72GB of cache per node and it uses 300GB SSDs, where the SF6010 gives you 144GB of cache per node and uses 600GB SSDs. Interesting? Well, only to a certain point I would say; SolidFire isn’t really about the hardware if you ask me. It is about what is inside the box, or boxes I should say, as the starting point is always 5 nodes. So what is inside?

Architecture

SolidFire’s architecture is based on a scale-out model and of course flash, in the form of SSD. You start out with 5 nodes and you can go up to 100 nodes, all connected to your hosts via iSCSI. Those 100 nodes would be able to provide you with 5 million IOPS and about 2.1 petabytes of capacity. Each node that is added scales performance linearly and of course adds capacity. Of course SolidFire offers deduplication, compression and thin provisioning. Considering it is a scale-out model it is probably not needed to point this out, but dedupe and compression are cluster wide. Now, the nice thing about the SolidFire architecture is that they don’t use traditional RAID, which means that the long rebuild times when a disk or a node fails do not apply to SolidFire. Rather, SolidFire evenly distributes data across all disks and nodes, so when a single disk fails or even a node fails, rebuild time is not constrained by a limited amount of resources; many components can help in parallel to get back to a normal state. What I liked most about their architecture is that it already closely aligns with VMware’s Virtual Volume (VVOL) concept; SolidFire is prepared for VVOLs when that is released.
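Taking those headline numbers at face value, the linear scaling is easy to reason about. A rough sketch (simple arithmetic on the figures above, not vendor-validated numbers):

```python
# Back-of-the-napkin scaling math based on the headline figures above
# (100 nodes ~ 5 million IOPS and ~2.1 PB); treat the results as rough estimates.

MAX_NODES = 100
MAX_IOPS = 5_000_000
MAX_CAPACITY_TB = 2100

iops_per_node = MAX_IOPS / MAX_NODES                  # ~50,000 IOPS per node
capacity_per_node_tb = MAX_CAPACITY_TB / MAX_NODES    # ~21 TB per node

for nodes in (5, 20, 50, 100):                        # 5 nodes is the starting point
    print(f"{nodes:3d} nodes -> ~{nodes * iops_per_node:,.0f} IOPS, "
          f"~{nodes * capacity_per_node_tb:,.0f} TB")
```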

Quality of Service

I already briefly mentioned this, but Quality of Service (QoS) is one of the key drivers of the SolidFire solution. It revolves around having the ability to provide a given amount of capacity with a given amount of performance (IOPS). What does this mean? SolidFire allows you to specify a minimum and maximum number of IOPS for a volume, and also a burst space. Let’s quote the SolidFire website as I think they explain it in a clear way:

  • Min IOPS - The minimum number of I/O operations per-second that are always available to the volume, ensuring a guaranteed level of performance even in failure conditions.
  • Max IOPS - The maximum number of sustained I/O operations per-second that a volume can process over an extended period of time.
  • Burst IOPS - The maximum number of I/O operations per-second that a volume will be allowed to process during a spike in demand, particularly effective for data migration, large file transfers, database checkpoints, and other uneven latency sensitive workloads.

Now I do want to point out here that SolidFire storage systems have no form of admission control when it comes to QoS. Although it is mentioned that there is a guaranteed level of performance, this is up to the administrator: you as the admin will need to do the math and not overprovision from a performance point of view if you truly want to guarantee a specific performance level. If you do want that guarantee, you will also need to take failure scenarios into account!
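To illustrate what that math could look like, here is a hedged sketch: it simply checks whether the sum of the minimum IOPS you have promised still fits within the cluster’s performance budget when one node has failed. The per-node IOPS figure is a made-up assumption, not a SolidFire specification:

```python
# Hypothetical admission-control math an admin might do by hand; the
# per-node IOPS figure below is an assumption for illustration, not a spec.

IOPS_PER_NODE = 50_000

def min_iops_fit(volume_min_iops, nodes, node_failures=1):
    """Do the guaranteed (min) IOPS still fit with `node_failures` nodes down?"""
    degraded_capacity = (nodes - node_failures) * IOPS_PER_NODE
    committed = sum(volume_min_iops)
    print(f"committed {committed:,} min IOPS vs {degraded_capacity:,} degraded capacity")
    return committed <= degraded_capacity

# 5 node cluster with a handful of volumes and their min IOPS guarantees:
print(min_iops_fit([20_000, 50_000, 80_000, 40_000], nodes=5))  # True  (190k <= 200k)
print(min_iops_fit([20_000, 50_000, 80_000, 90_000], nodes=5))  # False (240k >  200k)
```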

One thing that my automation friends William Lam and Alan Renouf will like is that you can manage all these settings using their REST-based API.

(VMware) Integration

Of course, during the conversation integration came up. SolidFire is all about enabling their customers to automate as much as they possibly can, and they have implemented a REST-based API. They are heavily investing in integration with, for instance, OpenStack but also with VMware. They offer full support for the vSphere Storage APIs – Storage Awareness (VASA) and are also working towards full support for the vSphere Storage APIs – Array Integration (VAAI). Currently not all VAAI primitives are supported, but they promised me that this is a matter of time. (They support: Block Zeroing, Space Reclamation, Thin Provisioning. See the HCL for more details.) On top of that they are also looking at the future and going full steam ahead when it comes to Virtual Volumes. Obvious question from my side: what about replication / SRM? This is being worked on, hopefully more news about this soon!

Now, with all this integration, did they forget about what is sitting in between their storage system and the compute resources? In other words, what are they doing with the network?

Software Defined Networking?

I can be short: no, they did not forget about the network. SolidFire is partnering with Plexxi and Arista to provide a great end-to-end experience when it comes to building a storage environment. Where the focus with Arista is currently more on monitoring the different layers, Plexxi seems to focus more on the configuration and performance optimization aspect. No end-to-end QoS yet, but a great step forward if you ask me! I can see this being expanded in the future.

Wrapping up

I had already briefly looked at SolidFire after the various tweets we exchanged, but this proper introduction has really opened my eyes. I am impressed by what SolidFire has achieved in a relatively short amount of time. Their solution is all about the customer experience, whether that is performance related or the ability to automate the full storage provisioning process… their architecture / concept caters to this. I have definitely added them to my list of storage vendors to visit at VMworld, and I am hoping that those who are looking into Software Defined Storage solutions will do the same, as SolidFire belongs on that list.

Software Defined Storage – What are our fabric friends doing?

I have been discussing Software Defined Storage for a couple of months now and I have noticed that many of you are passionate about this topic as well. One thing that stood out to me during these discussions is that the focus is on the “storage system” itself. What about the network in between your storage system and your hosts? Where does that come into play? Is there something like a Software Defined Storage Network? Do we need it, or is that just part of Software Defined Networking?

When thinking about it, I could see some clear advantages of a Software Defined Storage Network; I think the answer to all of the questions below is: YES.

  • Wouldn’t it be nice to have end-to-end QoS? Yes from the VM up to the array and including the network sitting in between your host and your storage system!
  • Wouldn’t it be nice to have Storage DRS and DRS be aware of the storage latency, so that the placement engine can factor that in? It is nice to have improved CPU/memory performance, but what good is that when your slowest component (storage + network) is the bottleneck?
  • Wouldn’t it be nice to have a flexible/agile but also secure zoning solution which is aware of your virtual infrastructure? I am talking VM mobility here from a storage perspective!
  • Wouldn’t it be nice to have a flexible/agile but also secure masking solution which is VM-aware?

I can imagine that some of you are iSCSI or NFS users and are less concerned with things like zoning, for instance, but end-to-end QoS could be very useful, right? For everyone, a tighter integration between the three different layers (compute->network<-storage) from a VM mobility perspective would be useful, not just from a performance perspective but also from an operational complexity perspective. Which datastore is connected to which cluster? Where does VM-A reside? If there is something wrong with a specific zone, which workloads does it impact? There are so many different use cases for a tighter integration; I am guessing that most of you can see the common one: a storage administrator making zoning/masking changes leading to a permanent device loss and ultimately your VMs hopelessly crashing. Yes, that could be prevented when all three layers are aware of each other and the integration would warn both sides about the impact of changes. (You could also try communicating with each other of course, but I can understand you want to keep that to a bare minimum ;-))
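To make that impact analysis question a bit more concrete, here is a small hypothetical sketch of the kind of lookup such an integration layer could answer; all zone, datastore and VM names are made up:

```python
# Hypothetical impact analysis: which VMs are affected if a zone is changed?
# All zone, datastore and VM names are made up for illustration.

zone_to_datastores = {
    "zone-a": ["datastore-01", "datastore-02"],
    "zone-b": ["datastore-03"],
}
datastore_to_vms = {
    "datastore-01": ["vm-web-01", "vm-web-02"],
    "datastore-02": ["vm-db-01"],
    "datastore-03": ["vm-test-01"],
}

def impacted_vms(zone):
    """Walk zone -> datastores -> VMs to see what a zoning change would touch."""
    return [vm
            for ds in zone_to_datastores.get(zone, [])
            for vm in datastore_to_vms.get(ds, [])]

print(impacted_vms("zone-a"))   # ['vm-web-01', 'vm-web-02', 'vm-db-01']
```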

I don’t hear too many vendors talking about this yet, to be honest. Recently I saw Jeda Networks making an announcement around Software Defined Storage Networks, or at least a bunch of statements and a high level white paper. Brocade is working with EMC to provide some more insight/integration and automation through ViPR… and maybe others are working on something similar, but so far I haven’t seen too much.

I am wondering what you would be looking for and what you would expect. Please chip in!

Is flash the saviour of Software Defined Storage?

I have this search column open on twitter with the term “software defined storage”. One thing that kept popping up in the last couple of days was a tweet from various IBM people around how SDS will change flash. Or let me quote the tweet:

What does software-defined storage mean for the future of #flash?

It is part of a twitter chat scheduled for today, initiated by IBM. It might just be me misreading the tweets, or the IBM folks look at SDS and flash in a completely different way than I do. Yes, SDS is a nice buzzword these days. I guess with the billion dollar investment in flash IBM has announced, they are going all-in with regard to marketing. If you ask me they should have flipped it, and the tweet should have stated: “What does flash mean for the future of Software Defined Storage?” Or, to make it sound even more like marketing: is flash the saviour of Software Defined Storage?

Flash is a disruptive technology, and it is changing the way we architect our datacenters. Not only did it already allow many storage vendors to introduce additional tiers of storage, it also allowed them to add an additional layer of caching in their storage devices. Some vendors even created all-flash storage systems offering thousands of IOPS (some will claim millions); performance issues are a thing of the past with those devices. On top of that, host local flash is the enabler of scale-out virtual storage appliances. Without flash those types of solutions would not be possible, well at least not with decent performance.

Over the last couple of years host-side flash has also become more common, especially since several companies jumped into the huge gap there was and started offering caching solutions for virtualized infrastructures. These solutions allow companies who cannot move to hybrid or all-flash solutions to increase the performance of their virtual infrastructure without changing their storage platform. Basically what these solutions do is make a distinction between “data at rest” and “data in motion”. Data in motion should reside in cache, if configured properly, and data at rest should reside on your array. These solutions once again will change the way we architect our datacenters. They provide a significant performance increase, removing many of the performance constraints linked to traditional storage systems; your storage system can once again focus on what it is good at… storing data / capacity / resiliency.
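Conceptually, such a host-side cache keeps the hot “data in motion” on local flash and falls back to the array for everything else. A deliberately simplified sketch of that idea (not how any particular product implements it):

```python
# Deliberately simplified host-side read cache: hot blocks ("data in motion")
# are served from local flash, cold blocks ("data at rest") come from the array.
from collections import OrderedDict

class HostSideReadCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.flash = OrderedDict()            # block id -> data, LRU ordered

    def read(self, block, read_from_array):
        if block in self.flash:               # cache hit: served from flash
            self.flash.move_to_end(block)
            return self.flash[block]
        data = read_from_array(block)         # cache miss: fetch from the array
        self.flash[block] = data
        if len(self.flash) > self.capacity:   # evict the coldest block
            self.flash.popitem(last=False)
        return data

cache = HostSideReadCache(capacity_blocks=2)
cache.read(1, lambda b: b"a")
cache.read(2, lambda b: b"b")
cache.read(1, lambda b: b"a")                 # hit, block 1 becomes hottest
cache.read(3, lambda b: b"c")                 # evicts block 2, the least recently used
```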

I think I have answered the questions, but for those who have difficulties reading between the lines, how does flash change the future of software defined storage? Flash is the enabler of many new storage devices and solutions. Be it a virtual storage appliance in a converged stack, an all-flash array, or host-side IO accelerators. Through flash new opportunities arise, new options for virtualizing existing (I/O intensive) workloads. With it many new storage solutions were developed from the ground up. Storage solutions that run on standard x86 hardware, storage solutions with tight integration with the various platforms, solutions which offer things like end-to-end QoS capabilities and a multitude of data services. These solutions can change your datacenter strategy; be a part of your software defined storage strategy to take that next step forward in optimizing your operational efficiency.

Although flash is not a must for a software defined storage strategy, I would say that it is here to stay and that it is a driving force behind many software defined storage solutions!

Install and configure Virsto

Last week I was in Palo Alto and I had a discussion about Virsto. Based on that discussion I figured it was time to set up Virsto in my lab. I am not going to go into much detail around what Virsto is or does, as I already did that in this article around the time VMware acquired Virsto. On top of that there is an excellent article by Cormac which provides an awesome primer. But let’s use two quotes from both of the articles to give an idea of what to expect:

source: yellow-bricks.com
Virsto has developed an appliance and a host level service which together forms an abstraction layer for existing storage devices. In other words, storage devices are connected directly to the Virsto appliance and Virsto aggregates these devices in to a large storage pool. This pool is in its turn served up to your environment as an NFS datastore.

source: cormachogan.com
Virsto Software aims to provide the advantages of VMware’s linked clones (single image management, thin provisioning, rapid creation) but deliver better performance than EZT VMDKs.

Just to give an idea, what do I have running in my lab?

  • 3 x ESXi 5.1 hosts
  • 1 x vCenter Server (Windows install, as the VCVA isn’t supported)
  • VNX 5500

After quickly glancing at the quickstart guide I noticed I needed a Windows VM to install some of Virsto’s components; that VM is what Virsto refers to as the “vMaster”. I also need a bunch of empty LUNs which will be used for storage, and I noticed references to Namespace VMs and vIOService VMs. Hmmm, it sounds complicated, but is it? I am guessing these components will need to be connected somehow. The diagram below shows the general idea; note that the empty LUNs will be connected automatically to the vIOService VMs. I did not add those connections to the diagram as that would make it more complex than needed.

Virsto Network Architecture Diagram


Software Defined Storage; just some random thoughts

I have been reading many articles over the last weeks on Software Defined Storage and wrote an article on this topic a couple of weeks ago. While reading up, one thing that stood out was that every single storage/flash vendor out there has jumped on the bandwagon and (ab)uses this term wherever and whenever possible. In most of those cases, however, the term isn’t backed by SDS enabling technology or even a strategy, but let’s not get into a finger pointing contest, as I think my friends who work for storage vendors are more effective at that.

The article which triggered me to write this post was released a week and a half ago by CRN. The article was a good read, so don’t expect me to tear it down. It just had me thinking about various things, and what better way to clear your head than to write an article about it. Let’s start with the following quote:

While startups and smaller software-focused vendors are quick to define software-defined storage as a way to replace legacy storage hardware with commodity servers, disk drives and flash storage, large storage vendors are not giving ground in terms of the value their hardware offers as storage functionality moves toward the software layer.

Let me also pull out this comment by Keith Norbie in the same article, as I think Keith hit the nail on the head:

Norbie said to think of the software-defined data center as a Logitech Harmony remote which, when used with a home theater system, controls everything with the press of a button.

If you take a look at how Keith’s quote relates to Software Defined Storage, it would mean that you should be able to define EVERYTHING via software. Just like you can simply program the Logitech Harmony remote to work with all your devices, you should be able to configure your platform in such a way that spinning up new storage objects can be done at the touch of one button! Now, getting back to the first quote: whether functionality moves out of a storage system to a management tool, or even to the platform, is irrelevant if you ask me. If your storage system has an API and it allows you to do everything programmatically, you are halfway there.

I understand that many of the startups like to make potential customers believe differently, but the opportunity is there for everyone if you ask me. Yes, that includes old-timers like EMC / NetApp / IBM (etc.) and their “legacy” arrays. (As some of the startups like to label them.) Again, don’t get me wrong… playing in the SDS space will require significant changes to most storage platforms, as most were never architected for this use case. Most are currently not capable of creating thousands of new objects programmatically. Many don’t even have a public API.

However, what is missing today is not just a public API on most storage systems; the platform which would allow you to efficiently manage these storage systems through those APIs is also missing. When I say platform I refer to vSphere, but I guess the same applies to Hyper-V, KVM, Xen etc. Although various sub-components are already there, like the vSphere APIs for Array Integration (VAAI) and the vSphere APIs for Storage Awareness (VASA), there are still a lot of capabilities missing. A good example would be defining and setting specific data services at a virtual disk level of granularity, or end-to-end Quality of Service for virtual disks or virtual machines, or automatic instantiation of storage objects during virtual machine provisioning without manual action required from your storage admin. Of course, all of this from a single management console…
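To make the “everything programmatically” point a bit more tangible, here is a purely hypothetical sketch of the kind of interface a storage platform would need to expose for this model to work. None of these method names correspond to a real product API; it only illustrates the per-object programmability (create, policy, QoS, snapshot) argued for above:

```python
# Purely hypothetical interface sketch; none of these methods map to a real
# product API. It illustrates the kind of per-object programmability an SDS
# capable storage platform would need to offer.
from abc import ABC, abstractmethod

class ProgrammableStorage(ABC):
    @abstractmethod
    def create_object(self, name: str, size_gb: int, policy: dict) -> str:
        """Create a per-VM / per-disk storage object and return its identifier."""

    @abstractmethod
    def set_qos(self, object_id: str, min_iops: int, max_iops: int) -> None:
        """Apply Quality of Service settings to a single storage object."""

    @abstractmethod
    def snapshot(self, object_id: str) -> str:
        """Snapshot the object and return the snapshot identifier."""
```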

If you look at VMware vSphere and what is being worked on for the future, you know those capabilities will come at some point; in this case I am referring to what was previewed at VMworld as “virtual volumes” (sometimes also referred to as VVOLs), but this will take time… Yes, I know some storage vendors already offer some of this granularity (primarily the startups out there), but can you define/set this from your favorite virtual infrastructure management solution during the provisioning of a new workload? Or do you need to use various tools to get the job done? If you can define QoS on a per-VM basis, is this end-to-end? What about availability / disaster recovery: do they offer a full solution for that? If so, is it possible to simply integrate this with other solutions like, for instance, Site Recovery Manager?

I think exciting times are ahead of us; but let’s all be realistic… they are ahead of us. There is no “Logitech Harmony” experience yet, but I am sure we will get there in the (near) future.