
Yellow Bricks

by Duncan Epping


Storage

Why the world needs Software Defined Storage

Duncan Epping · Mar 6, 2013 ·

Yesterday I was at a Software Defined Datacenter event organized by IBM and VMware. The famous Cormac Hogan presented on Software Defined Storage, and I very much enjoyed hearing about the VMware vision and, of course, Cormac's take on it. Coincidentally, last week I read this article by long-time community guru Jason Boche on VAAI and the number of VMs per volume, and after a discussion with a customer yesterday (at the event) about their operational procedures for provisioning new workloads, I figured it was time to write down my thoughts.

I have seen many different definitions of Software Defined Storage so far, and I guess there is a kernel of truth in all of them. Before I explain what it means to me, let me describe the challenges people commonly face today.

In a lot of environments managing storage and associated workloads is a tedious task. It is not uncommon to see large spreadsheets with long lists of LUNs, IDs, capabilities, groupings and whatever else is relevant to a given workload. These spreadsheets are typically used to decide where to place a virtual machine or virtual disk: based on the requirements of the application, a specific destination is selected. On top of that, a selection needs to be made based on the currently available disk space of a datastore and, of course, its current IO load. You do not want to randomly place your virtual machine and find out two days later that you are running out of disk space... well, that is if you have a relatively mature provisioning process. Of course, it is also not uncommon to just pick a random datastore and hope for the best.

To be honest, I can understand why many people provision virtual machines randomly. Keeping track of virtual disks, datastores, performance, disk space and other characteristics is simply too much work, and boring work at that. Didn't we invent computer systems to do these repetitive, boring tasks for us? That leads us to the question: where and how should Software Defined Storage help you?

A common theme recurring in many “Software Defined” solutions presented by VMware is:

Abstract, Pool, Automate.

This also applies to Software Defined Storage, in my opinion. These are the three basic requirements that a Software Defined Storage solution should meet. But what does this mean, and how does it help you? Let me try to make some sense out of that nice three-word marketing slogan:

Software Defined Storage should enable you to provision workloads to a pool of virtualized physical resources based on service level agreements (defined in a policy) in an automated fashion.

I understand that is a mouthful, so let's elaborate a bit more. Think about the challenges I described above, or what Jason described with regards to "VMs per Volume" and how various components can impact your service level. A Software Defined Storage (SDS) solution should be able to intelligently place virtual disks (virtual machines / vApps) based on the policy selected for the object (virtual disk / machine / appliance). These policies typically contain characteristics of the provided service level. On top of that, a Software Defined Storage solution should take risks and constraints into account, meaning that you don't want your workload deployed to a volume which is running out of disk space, for instance.

What about those characteristics, what are they? Characteristics could be anything; here are two simple examples to make it a bit more concrete (a minimal placement sketch follows the list):

  • Does your application require recoverability after a disaster? –> SDS selects a destination which is replicated, or instructs the storage system to create a replicated object for the VM
  • Does your application require a certain level of performance? –> SDS selects a destination that can provide this performance, or instructs the storage system to reserve storage resources for the VM
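To make that policy-driven placement idea a bit more tangible, here is a minimal sketch in Python. Everything in it is invented for illustration (datastore names, capability tags, thresholds); it is not a VMware API, just the decision logic described above: filter candidates on required capabilities, then apply capacity and load constraints before picking a destination.

```python
# Hypothetical policy-driven placement logic; names and numbers are invented.
from dataclasses import dataclass

@dataclass
class Datastore:
    name: str
    capabilities: set          # e.g. {"replicated", "ssd"}
    free_gb: int               # currently available disk space
    avg_latency_ms: float      # stand-in for current IO load

@dataclass
class Policy:
    required: set              # capabilities the service level demands
    min_free_gb: int           # constraint: avoid nearly-full volumes
    max_latency_ms: float      # constraint: avoid overloaded volumes

def place(vm_size_gb, policy, datastores):
    """Return the best datastore satisfying the policy, or None."""
    candidates = [
        ds for ds in datastores
        if policy.required <= ds.capabilities               # service level match
        and ds.free_gb - vm_size_gb >= policy.min_free_gb   # capacity headroom
        and ds.avg_latency_ms <= policy.max_latency_ms      # performance headroom
    ]
    # Prefer the least loaded candidate; tie-break on free space.
    return min(candidates,
               key=lambda ds: (ds.avg_latency_ms, -ds.free_gb),
               default=None)

datastores = [
    Datastore("gold-replicated-01", {"replicated", "ssd"}, 800, 4.0),
    Datastore("silver-01", {"ssd"}, 1500, 2.5),
]
tier1 = Policy(required={"replicated"}, min_free_gb=200, max_latency_ms=10.0)
print(place(100, tier1, datastores).name)   # -> gold-replicated-01
```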

Now, this all sounds a bit vague, but I am purposely avoiding product or feature names. Software Defined Storage is not about a particular feature, product or storage system. Although I dropped the word policy, note that enabling Profile Driven Storage within vCenter Server does not, by itself, provide you with a Software Defined Storage solution. It shouldn't matter either (to a certain extent) whether you are using EMC, NetApp, Nimbus, a VMware software solution or any of the thousands of other storage systems out there. Any of those systems, or even a combination of them, should work in the software defined world. To be clear, in my opinion there isn't (today) such a thing as a Software Defined Storage product; it is a strategy. It is a way of operating that particular part of your datacenter.

To be fair, there are huge differences between the various solutions. There are products and features out there that will enable you to build a solution like this and transform the way you manage your storage and provision new workloads: products and features that will allow you to create a flexible offering. VMware has been, and is, working hard to be a part of this space. vSphere Replication, Storage DRS, Storage IO Control, Virsto and Profile Driven Storage are part of the "now", but they are just the beginning. Virtual Volumes, Virtual Flash and Distributed Storage have all been previewed at VMworld and are potentially what comes next. Who knows what else is in the pipeline or what other vendors are working on.

If you ask me, there are exciting times ahead. Software Defined Storage is a big part of the Software Defined Data Center story and you can bet this will change datacenter architecture and operations.

There are two excellent articles on this topic: the first by Bill Earl, and the second by Christos Karamanolis. Make sure to read their perspectives.

Introducing startup PernixData – Out of stealth!

Duncan Epping · Feb 20, 2013 ·

There are many startups out there doing something with storage these days. To be honest, many of them do the same thing, and at times I wonder why on earth everyone focuses on the same segment and tries to attack it with the same product / feature set. One of the golden rules for any startup should be to have a unique solution that will sell itself. Yes, I realize that is difficult, but if you want to succeed you will need to stand out.

About a year ago Satyam Vaghani (former VMware principal engineer, responsible for VMFS, VAAI, VVOLs etc.) and Poojan Kumar (former VMware Data products lead and ex-Oracle Exadata founder) decided to start a company: PernixData. PernixData was conceived based on their experiences working at the intersection of virtualization, flash-based storage and data. Today PernixData is revealed to the world. For those who don't know, Pernix means "agile". But what is PernixData about?

How many of you haven't experienced storage performance problems? It probably is, in fact, the number one bottleneck in most virtualized environments. Convincing your manager (or director, or VP) that you need a new ultra-fast (and expensive) storage device is not easy, far from it. On top of that, data will always hit the network before being acknowledged, and every read will go over your storage network. How cool would it be if there was a seamless software solution that solved your storage performance problems without requiring you to rip and replace your existing storage assets?

Server-side flash overcomes the problems associated with network-based storage, and server-side caching solutions provide some respite. Yet server-side caching solutions usually neither satisfy enterprise-class requirements for availability nor transparently support clustered hypervisor features such as VMware vMotion. In addition, while they accelerate reads, they fail to do much for writes. Customers are then stuck between overhauling their entire storage infrastructure or going with caching solutions that only work for limited use cases. PernixData is about to release a cool new product, a flash virtualization platform, that bridges this gap. By picking up where the hypervisor left off, PernixData is planning to become the VMware of server flash: it aims to do for server flash what VMware did for CPU and memory. So what is this flash virtualization platform, and why would you need it?

PernixData's flash virtualization platform virtualizes all flash resources across all server nodes in a vCenter Server cluster into a single high-performance, enterprise-class data tier. The great thing is that this happens transparently: PernixData sits completely within the hypervisor, in the data path of your virtual machines. Note that there is no requirement to install anything in the guest (virtual machine). PernixData is also not a virtual appliance, because virtual appliances introduce performance overhead and would need to be managed, with all the cost and complexity that entails.

PernixData is also flash technology agnostic: it can leverage SSD or PCIe flash (or both) within the platform. The nice thing is that PernixData uses a scale-out architecture; as you add hosts with flash, they can be dynamically added to the platform. On top of that, PernixData does both read and write acceleration while providing full data protection, and it is fully compatible with VM mobility solutions like vMotion, Storage vMotion, HA, DRS and Storage DRS.
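As a rough mental model of that scale-out pooling, consider this tiny sketch. The host and device names are made up, and this is purely a conceptual illustration of aggregating per-host flash into one cluster-wide tier, not PernixData's actual implementation: adding a host with flash simply grows the pool.

```python
# Conceptual only: per-host flash devices (SSD or PCIe) flattened into one
# cluster-wide tier. Host and device names are invented for illustration.
hosts = {
    "esx01": [("pcie0", 785), ("ssd0", 400)],   # (device, GB)
    "esx02": [("ssd0", 400)],
}

def cluster_flash_pool(hosts):
    """Flatten per-host flash into a single addressable pool."""
    pool = [(host, dev, gb)
            for host, devices in hosts.items()
            for dev, gb in devices]
    return pool, sum(gb for _, _, gb in pool)

pool, total = cluster_flash_pool(hosts)
print(f"{len(pool)} devices, {total} GB cluster-wide")

# Scale-out: adding a host with flash simply grows the pool.
hosts["esx03"] = [("pcie0", 785)]
print(cluster_flash_pool(hosts)[1])   # total grows to 2370 GB
```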

Even more exciting, PernixData will support both write-through and write-back modes. The cool part is that PernixData also ensures IO is replicated for high availability purposes: you don't want to run your VM in write-back mode when you cannot guarantee the data is highly available, right?! I guess that is one of the unique selling points of the solution: a distributed, scale-out flash virtualization platform which is not only flash agnostic but also non-disruptive for your virtual workloads.
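To illustrate why replication matters specifically for write-back mode, here is a toy sketch (my own illustration, not PernixData code): in write-through mode the write is safe on the array before it is acknowledged, while in write-back mode the acknowledgement happens from flash, so a peer replica is needed to survive a host failure.

```python
# Toy illustration (not PernixData code) of the two modes and why
# write-back needs a peer replica to keep data highly available.
class FlashCache:
    def __init__(self, name):
        self.name = name
        self.dirty = {}                  # acknowledged but not yet on the array

    def write(self, block, data):
        self.dirty[block] = data

class Array:
    def __init__(self):
        self.blocks = {}

def write_through(block, data, cache, array):
    # Acknowledge only after the array has the data: safe, but every
    # write still pays the full network/array round trip.
    array.blocks[block] = data
    cache.write(block, data)             # flash serves future reads
    return "ack"

def write_back(block, data, local, replica):
    # Acknowledge once local flash AND a peer's flash have the data;
    # the array is updated later (de-staged) in the background.
    local.write(block, data)
    replica.write(block, data)
    return "ack"

local, peer, array = FlashCache("esx01"), FlashCache("esx02"), Array()
write_through(3, b"cfg", local, array)
assert array.blocks[3] == b"cfg"         # safe on the array before the ack
write_back(7, b"payload", local, peer)
assert peer.dirty[7] == b"payload"       # replica survives loss of esx01
```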

I would imagine this is many times cheaper than buying a new storage array. Even without knowing what the cost of PernixData will be, or which flash device (PCIe or SSD) you would decide to use, I bet that when it comes to the overall cost of the solution (product + implementation) it will be many times cheaper.

As I started off with, the golden rule for any startup should be to have a unique solution that sells itself. I am confident that PernixData FVP has just that: it is a disruptive technology that solves a big problem in virtualized environments in a scale-out and transparent manner, while leveraging your existing storage investments.

If you want to be kept up to date, make sure to follow Satyam, Poojan, Charlie and PernixData on Twitter. If you are interested in joining the PernixData FVP beta, make sure to sign up!

Make sure to also read Frank’s article on PernixData.

Update: I recommend watching the Storage Field Day videos for more details from Satyam Vaghani himself; note that the playlist contains four videos!

VMware to acquire Virsto; Brief look at what they offer today

Duncan Epping · Feb 11, 2013 ·

Most of you have seen the announcement around Virsto by now; for those who haven't, read this blog post: VMware to acquire Virsto. Virsto is a storage company which offers a virtual storage solution. I bumped into Virsto various times in the past, and around VMworld 2012 I was reminded of them when Cormac Hogan wrote an excellent article about what they have to offer VMware customers. (Credit goes to Cormac for the detailed info in this post.) When visiting Virsto's website, one thing stands out: "software defined storage". Let's take a look at what Virsto offers and what software defined storage means to them.

Let's start with the architecture. Virsto has developed an appliance and a host-level service which together form an abstraction layer for existing storage devices. In other words, storage devices are connected directly to the Virsto appliance, and Virsto aggregates these devices into a large storage pool. This pool is in turn served up to your environment as an NFS datastore. Now I can hear you think: what is so special about this?

Because Virsto has abstracted the storage, and raw devices are connected to its appliance, Virsto controls the on-disk format. What does this mean? Devices that are attached to the Virsto appliance are not formatted with VMFS; rather, Virsto has developed its own highly scalable filesystem, and this is what makes the solution really interesting. This filesystem is what allows Virsto to offer specific data services, increase performance and scale, and reduce storage capacity consumption.

Let's start with performance. As Virsto sits between your storage device and your host, it can do certain things to your IO. Not only does Virsto increase read performance, their product also increases write performance; customers have experienced performance increases of between 5x and 10x. For the exact technical details, read Cormac's article. For now, let me say that they sequentialise IO in a smart way and de-stage writes to allow for a more contiguous IO flow to your storage device. As you can imagine, this also means that the IO utilisation of your storage device can, and probably will, go down.
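As a simplified picture of the sequentialise-and-de-stage idea (my own sketch of the general log-structured technique, not Virsto's actual on-disk format): random writes are first appended sequentially to a log, and only later de-staged to the backing device as a more contiguous stream.

```python
# My own sketch of the general technique, not Virsto's on-disk format:
# random writes land in a sequential log, then are de-staged in batches.
class LogStructuredWriter:
    def __init__(self, destage_batch=4):
        self.log = []                    # append-only staging area
        self.destage_batch = destage_batch
        self.backing = {}                # stands in for the storage device

    def write(self, offset, data):
        self.log.append((offset, data))  # random write -> cheap sequential append
        if len(self.log) >= self.destage_batch:
            self.destage()

    def destage(self):
        # Sort by offset so the device sees an as-contiguous-as-possible stream.
        for offset, data in sorted(self.log):
            self.backing[offset] = data
        self.log.clear()

w = LogStructuredWriter()
for offset in (90, 10, 50, 30):          # a random write pattern
    w.write(offset, b"x")
print(sorted(w.backing))                 # de-staged as 10, 30, 50, 90
```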

From an efficiency perspective, Virsto optimizes your storage capacity by provisioning every single virtual disk as a thin disk. However, these thin disks do not introduce the performance overhead traditionally associated with thin provisioning, so you no longer need to waste precious disk space just to avoid performance penalties. What about functionality like snapshotting and cloning? That must introduce overhead and slow things down, I can hear you think... Again, Virsto has done an excellent job of reducing overhead and optimizing for scale and performance. Virsto allows for hundreds, if not thousands, of clones of a gold template without sacrificing performance, while saving storage capacity. Not surprisingly, Virsto is often used in virtual desktop and large test and development environments, as it has proven to reduce the cost of storage by as much as 70%.
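The following minimal copy-on-write sketch shows why hundreds of clones of a gold template can be near-instant and consume almost no capacity; it is purely illustrative and not Virsto's implementation. Each clone stores only the blocks it has overwritten, while reads fall through to the shared gold image.

```python
# Purely illustrative copy-on-write clones, not Virsto's implementation.
class GoldImage:
    def __init__(self, blocks):
        self.blocks = blocks             # shared, read-only template

class Clone:
    def __init__(self, gold):
        self.gold = gold
        self.delta = {}                  # only blocks this clone overwrote

    def read(self, block):
        return self.delta.get(block, self.gold.blocks.get(block))

    def write(self, block, data):
        self.delta[block] = data         # gold image is never touched

gold = GoldImage({0: b"boot", 1: b"os"})
clones = [Clone(gold) for _ in range(1000)]   # near-instant, near-zero capacity
clones[0].write(1, b"patched")
assert clones[1].read(1) == b"os"             # other clones are unaffected
```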

Personally, I am excited about what Virsto has to offer and what they have managed to achieve in a relatively short time frame. The solution they have developed, and especially their data services framework, promises a lot for the future. Hopefully I will have some time on my hands soon to play with their product and provide you with more insights and experiences.

vSphere Metro Storage Cluster storage latency requirements

Duncan Epping · Feb 5, 2013 ·

I received some questions today around the storage latency requirements for vSphere Metro Storage Cluster (vMSC) solutions. In the past the support limits were strict:

  • 5ms RTT for vMotion with an Enterprise license and lower, 10ms RTT for vMotion with Enterprise Plus
  • 5ms RTT for storage replication

RTT stands for Round Trip Time, by the way. Recently the support limits have changed (I noticed today that I never blogged about this). For instance, EMC VPLEX now supports up to 10ms RTT (not fully tested for stretched cluster / vSphere HA), which indeed makes a lot of sense, as it aligns with the vMotion limits; more than likely the same connection between sites is used for both storage replication and vMotion traffic.
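For what it's worth, here is a hypothetical pre-flight check that validates a measured inter-site RTT against the support limits quoted above. The function and the limit table are my own planning sketch, not an official tool; always confirm the current numbers with your storage vendor.

```python
# Hypothetical planning check; limits below are the numbers quoted in this
# post, and the whole function is a sketch, not an official tool.
VMOTION_RTT_MS = {"enterprise": 5.0, "enterprise_plus": 10.0}
STORAGE_RTT_MS = {"default": 5.0, "emc_vplex": 10.0}

def vmsc_rtt_ok(measured_rtt_ms, license_edition, storage_platform="default"):
    """Return (ok, reasons) for a proposed stretched-cluster link."""
    reasons = []
    if measured_rtt_ms > VMOTION_RTT_MS[license_edition]:
        reasons.append(f"exceeds {VMOTION_RTT_MS[license_edition]}ms vMotion limit")
    if measured_rtt_ms > STORAGE_RTT_MS[storage_platform]:
        reasons.append(f"exceeds {STORAGE_RTT_MS[storage_platform]}ms storage limit")
    return (not reasons, reasons)

print(vmsc_rtt_ok(7.2, "enterprise_plus", "emc_vplex"))  # (True, [])
print(vmsc_rtt_ok(7.2, "enterprise"))                    # fails both limits
```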

So I would recommend that anyone considering implementing (or architecting) a vMSC environment contact their storage vendor about the supported limits when it comes to storage latency.

Converged compute and storage solutions

Duncan Epping · Jan 28, 2013 ·

Lately I have been looking more and more into converged compute and storage solutions, or "datacenter in a box" solutions as some like to call them. I am a big believer in this concept, as some of you may have noticed. For those who have never heard of these solutions: examples would be Nutanix or SimpliVity. I have written about both Nutanix and SimpliVity in the past, and for a quick primer on those respective solutions I suggest reading those articles. In short, these solutions run a hypervisor with a software-based storage solution that creates a shared storage platform from local disks. In other words: no SAN/NAS required, or, as stated, a full datacenter experience in just a couple of U's.

One thing that has stood out to me over the last six months, though, is that Nutanix, for instance, is often tied to VDI/View solutions. In a way I can understand why, as it has been part of their core message and go-to-market strategy for a long time. In my opinion, though, there is no limit to where these solutions can go and grow. Managing storage, or better said your full virtualization infrastructure, should be as simple as creating or editing a virtual machine. That was one of the core principles mentioned during the vCloud Distributed Storage talk at VMworld (vCloud Distributed Storage, by the way, is a VMware software defined storage initiative).

Hopefully people are starting to realize that these so-called Software Defined Storage solutions will fit most, if not all, scenarios out there today. I've been having several discussions with people about these solutions and wanted to give some examples of how they could fit into your strategy.

Just a week ago I was having a discussion with a customer about disaster recovery. They wanted to add a secondary site and replicate their virtual machines to that site, but the cost associated with a second storage array was holding them back. After an introduction to converged storage and compute solutions, they realized they could step into the world of disaster recovery slowly: these solutions allow them to protect their Tier-1 applications first and expand their DR-protected estate when required. By using a converged storage and compute solution they avoid the high upfront cost, and they can scale out when needed (or when they are ready).

One of the service providers I talk to on a regular basis is planning a new cloud service. Their current environment is reaching its limits, and predicting how the new environment will grow over the upcoming 12 months is difficult due to the agile and dynamic nature of the service they are developing. The great thing about a converged storage and compute solution is that they can scale out whenever needed, without a lot of hassle; typically the only requirement is the availability of 10Gbps ports in your network. For the provider, though, the biggest benefit is probably that services are defined by software: they can up-level or expand their offerings whenever they please, or whenever there is demand.

These are just two simple examples of how a converged infrastructure solution could fit into your software defined datacenter strategy. The mentioned vendors, Nutanix and SimpliVity, are just two examples out of the various companies offering these solutions. I know of multiple start-ups working on similar products, and of course there are the likes of Pivot3 who already offer turnkey converged solutions. As stated earlier, I am personally a big believer in these architectures, and if you are looking to renew your datacenter or are on the verge of a green-field deployment, I highly recommend researching these solutions.

Go Software Defined – Go Converged!

