
Yellow Bricks

by Duncan Epping



RE: Is VSA the future of Software Defined Storage? (OpenIO)

Duncan Epping · Apr 24, 2013 ·

I was reading this article on the HP blog about the future of Software Defined Storage and how the VSA fits perfectly. Although I agree that a VSA (virtual storage appliance) could potentially be a Software Defined Storage solution, I do not really agree with the IDC quote used as the basis for the article, and on top of that I think some crucial factors are left out. Let's start with the IDC quote:

IDC defines software-based storage as any storage software stack that can be installed on any commodity resources (x86 hardware, hypervisors, or cloud) and/or off-the-shelf computing hardware and used to offer a full suite of storage services and federation between the underlying persistent data placement resources to enable data mobility of its tenants between these resources.

Software Defined Storage solutions to me are not necessarily just software-based storage products. Yes, as said, a VSA, or something like Virtual SAN (VSAN), could be part of your strategy, but what about the storage you have today? Do we really expect customers to forget about their “legacy” storage and just write it off? Surely that won’t happen, especially not in this economic climate, and considering many companies invested heavily in storage when they started virtualizing production workloads. What is missing in this quote, and in that article (although briefly mentioned in the linked article), is the whole concept of “abstract, pool, automate”. I guess some of you will say: well, that is VMware’s motto, right? Well, yes and no. Yes, “abstract, pool, automate” is the way of the future if you ask VMware. However, this is not something new. Think about Software Defined Networking, for instance; it is fully based on the “abstract, pool, automate” concept.

This had me thinking: what is missing today? There are various initiatives around networking (OpenFlow, etc.), but what about storage? I created a diagram that explains, from a logical perspective, what I think we need when it comes to Software Defined Storage. I guess this is what Ray Lucchesi is referring to in his article on OpenFlow and the storage world. Brent Compton from FusionIO also had an insightful article on this topic, worth reading.

[Diagram: future of software defined storage - OpenIO?]

If you look at my diagram… (yes, FCoE/InfiniBand/DAS etc. are missing; not because they shouldn’t be supported, but simply to keep the diagram readable) I drew a hypervisor at the top. The reason is that I have been in the hypervisor business for years, but in reality this could be anything. From a hypervisor perspective, all you should see is a pool: a pool for your IO, a pool where you can store your data. This layer should provide you with various things. Let’s start at the bottom and work our way up.

  • IO Connect = Abstract / Pool logic. This should allow you to connect to various types of storage, abstract them for your consumers, and of course pool them.
  • API = Do I need to explain this? It addresses the “automate” part, but probably even more importantly the integration aspect. Integration is key for a Software Defined Datacenter. The API(s) should provide north-, south-, east- and west-bound capabilities (for an explanation, read this article; although it is about Software Defined Networking, it should get the point across).
  • PE = Policy Engine. Takes care of making sure your data ends up on the right tier with the right set of services.
  • DS = Data Services. Although the storage itself can provide specific data services, this is also an opportunity for the layer in between the hypervisor and the storage. As a matter of fact, data services should be possible at every layer: physical storage, appliance, host-based, etc.
  • $$ = Caching. I drew this out separately for a reason. Yes, it could be seen as a data service, but any layer you insert has an impact on performance, and an elegant, efficient caching solution at the right level could mitigate that impact. Again, caching could be part of this framework, but could just as well sit outside of it: host-side, at the storage layer, or appliance-based. (A rough sketch of how these layers could stack follows this list.)
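
To make that stack a little more tangible, here is a minimal sketch of how the layers could compose. Everything in it is a hypothetical illustration (the class names, the write-through cache, the compression data service are all my assumptions); it is not any product’s API, just the “abstract, pool, insert services at any layer” idea expressed in code:

```python
# Hypothetical sketch of the layered model; all names are illustrative.
import zlib
from abc import ABC, abstractmethod


class Storage(ABC):
    """IO Connect: one common abstraction over any backend (NFS, iSCSI, ...)."""

    @abstractmethod
    def write(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def read(self, key: str) -> bytes: ...


class Backend(Storage):
    """One physical or virtual array, reduced to a dict for illustration."""

    def __init__(self):
        self._blocks = {}

    def write(self, key, data):
        self._blocks[key] = data

    def read(self, key):
        return self._blocks[key]


class Compression(Storage):
    """DS: a data service that can be inserted at any layer of the stack."""

    def __init__(self, lower: Storage):
        self.lower = lower

    def write(self, key, data):
        self.lower.write(key, zlib.compress(data))

    def read(self, key):
        return zlib.decompress(self.lower.read(key))


class Cache(Storage):
    """$$: a simple write-through read cache in front of a lower layer."""

    def __init__(self, lower: Storage):
        self.lower, self._hot = lower, {}

    def write(self, key, data):
        self._hot[key] = data
        self.lower.write(key, data)  # write-through for simplicity

    def read(self, key):
        if key not in self._hot:
            self._hot[key] = self.lower.read(key)
        return self._hot[key]


# The hypervisor (the "northbound" consumer) only ever sees one pooled object:
pool = Cache(Compression(Backend()))
pool.write("vmdk-1", b"hello" * 100)
assert pool.read("vmdk-1") == b"hello" * 100
```

The decorator-style stacking is also why I called out caching separately: every layer you insert sits in the IO path, so a caching layer has to earn its place.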

One thing I want to emphasize here is the importance of the API. I briefly mentioned enabling north-, south-, east- and west-bound capabilities, and for a solution like this to be successful that is a must. Although with automation alone you can go a long way, integration is key here! Whether it is seamless integration with your physical systems, integration with your virtual management solution, or integration with an external cloud storage solution… these APIs should be able to provide that kind of functionality and enable a true pluggable framework experience.

If you look at this approach (and I drew this out before I even looked at Virsto), it somewhat resembles what Virsto offers today. Although there are components missing, the concept is similar. It also resembles VVols in a way, which were discussed at VMworld in 2011 and 2012. I would say that what I described combines both with what Software Defined Networking promises.

So where am I going with this? Good question; honestly, I don’t know… For me, articles like these are a nice way of blowing off steam, getting the creative juices flowing, and opening up the conversation. I do feel the world is ready for the next step from a Software Defined Storage perspective; I guess the real question is who is going to take that next step, and when? I would love to hear your feedback.

Why the world needs Software Defined Storage

Duncan Epping · Mar 6, 2013 ·

Yesterday I was at a Software Defined Datacenter event organized by IBM and VMware. The famous Cormac Hogan presented on Software Defined Storage, and I very much enjoyed hearing about the VMware vision and of course Cormac’s take on it. Coincidentally, last week I read this article by long-time community guru Jason Boche on VAAI and the number of VMs per volume, and after a discussion yesterday (at the event) with a customer about their operational procedures for provisioning new workloads, I figured it was time to write down my thoughts.

I have seen many different definitions of Software Defined Storage so far, and I guess there is a grain of truth in all of them. Before I explain what it means to me, let me describe the challenges people commonly face today.

In a lot of environments, managing storage and the associated workloads is a tedious task. It is not uncommon to see large spreadsheets with long lists of LUNs, IDs, capabilities, groupings, and whatever else is relevant to the environment and its workloads. These spreadsheets are typically used to decide where to place a virtual machine or virtual disk. Based on the requirements of the application, a specific destination is selected. On top of that, a selection needs to be made based on the currently available disk space of a datastore and, of course, the current IO load. You do not want to randomly place your virtual machine and find out two days later that you are running out of disk space… Well, that is if you have a relatively mature provisioning process. Of course, it is also not uncommon to just pick a random datastore and hope for the best.

To be honest, I can understand why many people randomly provision virtual machines. Keeping track of virtual disks, datastores, performance, disk space, and other characteristics… it is simply too much, and boring. Didn’t we invent computer systems to do these repeatable, boring tasks for us? That leads us to the question of where and how Software Defined Storage should help you.

A recurring theme in many “Software Defined” solutions presented by VMware is:

Abstract, Pool, Automate.

This also applies to Software Defined Storage, in my opinion. These are three basic requirements that a Software Defined Storage solution should meet. But what does this mean and how does it help you? Let me try to make some sense out of that nice three-word marketing slogan:

Software Defined Storage should enable you to provision workloads to a pool of virtualized physical resources based on service level agreements (defined in a policy) in an automated fashion.

I understand that is a mouthful, so let’s elaborate a bit more. Think about the challenges I described above, or what Jason described with regard to “VMs per Volume” and how various components can impact your service level. A Software Defined Storage (SDS) solution should be able to intelligently place virtual disks (virtual machines / vApps) based on the policy selected for the object (virtual disk / machine / appliance). These policies typically describe the characteristics of the provided service level. On top of that, a Software Defined Storage solution should take risks and constraints into account, meaning that you don’t want your workload deployed to a volume that is running out of disk space, for instance.

What about those characteristics, what are they? Characteristics could be anything; here are two simple examples to make it a bit more obvious (a small sketch of how such a policy could drive placement follows the list):

  • Does your application require recoverability after a disaster? –> SDS selects a destination which is replicated, or instructs the storage system to create a replicated object for the VM
  • Does your application require a certain level of performance? –> SDS selects a destination that can provide this performance, or instructs the storage system to reserve storage resources for the VM
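
To make this a bit more concrete without naming products, below is a minimal sketch of what such policy-driven placement could look like. The Policy and Datastore shapes, the capability strings, and the free-space headroom are all assumptions for illustration, not any vendor’s implementation:

```python
# Hypothetical sketch: policy-driven placement with a capacity constraint.
# All names and capability strings are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Datastore:
    name: str
    free_gb: int
    capabilities: set = field(default_factory=set)  # e.g. {"replicated"}


@dataclass
class Policy:
    required: set           # characteristics the destination must offer
    headroom_gb: int = 100  # constraint: never land on a nearly full volume


def place(vm_size_gb: int, policy: Policy, pool: list) -> Datastore:
    """Return a datastore that satisfies the policy, or fail loudly."""
    candidates = [ds for ds in pool
                  if policy.required <= ds.capabilities
                  and ds.free_gb - vm_size_gb >= policy.headroom_gb]
    if not candidates:
        # A real SDS layer could instead instruct the array to create a
        # replicated object or reserve performance resources, as above.
        raise LookupError("no datastore satisfies the policy")
    return max(candidates, key=lambda ds: ds.free_gb)  # simple tie-breaker


pool = [Datastore("bronze-01", 2000, {"thin"}),
        Datastore("gold-01", 900, {"replicated", "high-iops"})]
print(place(200, Policy(required={"replicated"}), pool).name)  # gold-01
```

Note that the policy describes what the application needs rather than which LUN to use; that is exactly the difference with the spreadsheet-driven approach described earlier.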

Now, this all sounds a bit vague, but I am purposely avoiding product and feature names. Software Defined Storage is not about a particular feature, product, or storage system. Although I dropped the word policy, note that enabling Profile Driven Storage within vCenter Server does not give you a Software Defined Storage solution. It shouldn’t matter either (to a certain extent) whether you are using EMC, NetApp, Nimbus, a VMware software solution, or any of the thousands of other storage systems out there. Any of those systems, or even a combination of them, should work in the software defined world. To be clear: in my opinion there is (today) no such thing as a Software Defined Storage product; it is a strategy. It is a way of operating that particular part of your datacenter.

To be fair, there are huge differences between the various solutions. There are products and features out there that will enable you to build a solution like this and transform the way you manage your storage and provision new workloads; products and features that will allow you to create a flexible offering. VMware has been, and is, working hard to be a part of this space: vSphere Replication, Storage DRS, Storage IO Control, Virsto, and Profile Driven Storage are part of the “now”, but they are just the beginning… Virtual Volumes, Virtual Flash, and Distributed Storage have all been previewed at VMworld and are potentially what comes next. Who knows what else is in the pipeline or what other vendors are working on.

If you ask me, there are exciting times ahead. Software Defined Storage is a big part of the Software Defined Datacenter story, and you can bet it will change datacenter architecture and operations.

** There are two excellent articles on this topic: the first by Bill Earl and the second by Christos Karamanolis. Make sure to read their perspectives. **
