
Yellow Bricks

by Duncan Epping


software defined storage

Looking back: Software Defined Storage…

Duncan Epping · May 30, 2014 ·

Over a year ago I wrote an article (multiple, actually) about Software Defined Storage, VSAs, the different types of solutions out there, and how flash impacts the world. One of those articles contained a diagram I would like to pull up again for this article. The diagram below is what I used to explain how I see a potential software defined storage solution. Of course I am severely biased as a VMware employee, and I fully understand there are various scenarios here.

As I explained, the type of storage connected to this layer could be anything: DAS/NFS/iSCSI/block, who cares… The key thing here is that there is a platform sitting in between your storage devices and your workloads. All your storage resources would be aggregated into a large pool, and the layer should sort things out for you based on the policies defined for the workloads running there. Now, I drew this layer coupled with the "hypervisor", but that's just because that is the world I live in.

Looking back at this article and at the state of the industry today, a couple of things stood out. First and foremost, the term "Software Defined Storage" has been abused by everyone and doesn't mean much to me personally anymore. If someone says during a blogger briefing "we have a software defined storage solution", I will typically ask them to define it, or explain what it means to them. Anyway, why did I show that diagram? Mainly because I realised over the last couple of weeks that a couple of companies/products are heading down this path.

If you look at the diagram and, for instance, think about VMware's own Virtual SAN product, then you can see what would be possible. I would even argue that technically a lot of it is possible today; the product is still lacking in some of these areas (data services), but I expect that to be a matter of time. Virtual SAN sits right in the middle of the hypervisor, the API and policy engine are provided by the vSphere layer, and it has its own caching service… For now connecting SAN storage isn't supported, but if I wanted to I could do it even today, simply by tagging "LUNs" as local disks.

Another product which comes to mind when looking at the diagram is PernixData's FVP. PernixData managed to build a framework that sits in the hypervisor, in the data path of the VMs. They provide a highly resilient caching layer, and will be able to do both flash and memory caching in the near future. With the upcoming release they will also support different types of storage… If you ask me, they are in the right position to slap additional data services like deduplication / compression / encryption / replication on top of it. I am just speculating here, and I don't know the PernixData roadmap, so who knows…

Something completely different is EMC's ViPR (read Chad's excellent post on ViPR). Although it may not entirely fit the picture I drew, EMC is aiming to be that layer in between you and your storage devices: abstract it all for you, allow for a single API to ease automation, and do this "end to end", including the storage networks in between. If they extend this to allow certain data services to sit in a different layer, they would pretty much be there.

Last but not least, Atlantis USX. Although Atlantis is a virtual appliance, and as such a different implementation than Virtual SAN and FVP, they did manage to build a platform that does basically everything I mentioned in my original article. One thing it doesn't directly solve is the management of the physical storage devices, but today neither does FVP or Virtual SAN (well, to a certain extent VSAN does…). I am confident that this will change when Virtual Volumes is introduced, as Atlantis should be able to leverage Virtual Volumes for those purposes.

Some may say: well, what about VMware's Virsto? Indeed, Virsto would also fit the picture, but its end of availability was announced not too long ago. However, it has been hinted at multiple times that Virsto technology will be integrated into other products over time.

Although by now "Software Defined Storage" is seen as a marketing-bingo buzzword, the world of storage is definitely changing. The question now, I guess, is: are you ready to change as well?

Virtual SAN (related) PEX Updates

Duncan Epping · Feb 12, 2014 ·

I am at VMware Partner Exchange this week and figured I would share some of the Virtual SAN related updates.

  • On the 6th of March there is an online Virtual SAN event with Pat Gelsinger, Ben Fathi and John Gilmartin… Make sure to register for it!
  • Ben Fathi (VMware CTO) stated that VSAN will be GA in Q1, more news in the upcoming weeks
  • Maximum cluster size has been increased from 8 (beta) to 16 according to Ben Fathi; the VMware VSAN engineering team is ahead of schedule!
  • VSAN has linear scalability: close to a million IOPS with 16 hosts in a cluster (100% read, 4K blocks), and mixed IOPS close to half a million. All of this with less than 10% CPU/memory overhead. That is impressive if you ask me. Yeah yeah, I know, numbers like these are just part of the overall story… still, it is nice to see that these kinds of performance numbers can be achieved with VSAN.
  • I noticed a tweet by Chetan Venkatesh, and it looks like Atlantis ILIO USX (an in-memory storage solution) has been tested on top of VSAN and was capable of hitting 120K IOPS using 3 hosts, WOW. There is a white paper on this topic to be found here, an interesting read.
  • It was also reiterated that customers who sign up and download the beta will get a 20% discount on their first purchase of 10 VSAN licenses or more!
  • Several hardware vendors announced support for VSAN; a nice short summary by Alberto can be found here.

Operational simplicity through Flash

Duncan Epping · Feb 11, 2014 ·

A couple of weeks back I had the honor of being one of the panel members at the opening of the Pure Storage office in the Benelux. The topic of course was flash, and the discussion primarily revolved around its benefits. The next day I tweeted a quote of one of the answers I gave during the session, which was picked up by Frank Denneman in one of his articles. This is the quote:

https://twitter.com/DuncanYB/status/425920926325411840

David Owen responded to my tweet saying that many performance acceleration platforms introduce an additional layer of complexity, and Frank followed up on that in his article. However, this is not what my quote was referring to. First of all, I don't agree with David that many performance acceleration solutions increase operational complexity. That said, I do agree that they don't always make life a whole lot easier either.

I guess it is fair to say that performance acceleration solutions (hypervisor-based SSD caching) are not designed to replace your storage architecture or to simplify it. They are designed to enhance it, to boost performance. During the Pure Storage panel session I talked about how flash changed the world of storage, or better said, is changing the world of storage. When you purchased a storage array over the past two decades, it would come with days' worth of consultancy: two days typically being the minimum, and in some cases a week or even more, depending on the size of the environment and the functionality used. And that was just the install/configure part. It also required the administrators to be trained, in some cases (not uncommonly) through multiple five-day courses. This says something about the complexity of these systems.

The complexity, however, was not introduced by storage vendors just because they wanted to sell extra consultancy hours. It was simply the result of how the systems were architected, which in itself was the result of one major constraint: magnetic disks. But the world is changing, primarily because a new type of storage was introduced: flash!

Flash allowed storage companies to re-think their architecture. It is probably fair to state that this was kickstarted by the startups out there, who took flash and saw it as their opportunity to innovate. Innovating by removing complexity. Removing (front-end) complexity by flattening their architecture.

Complex constructs to improve performance are no longer required, as (depending on which type you use) a single flash disk delivers more IOPS than a thousand magnetic disks typically do. Even when it comes to resiliency, most new storage systems introduced different types of solutions to mitigate (disk) failures. No longer is a 5-day training course required to manage your storage systems. No longer do you need weeks of consultancy just to install/configure your storage environment. In essence, flash removed a lot of the burden that was placed on customers. That is the huge benefit of flash, and that is what I was referring to with my tweet.
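To put a rough number on that claim, here is a back-of-the-envelope sketch. The IOPS figures are illustrative assumptions of mine, not vendor specifications:

```python
# Back-of-the-envelope: how many magnetic disks does one flash device
# replace for random I/O? Both figures are assumptions for the sake
# of illustration, not vendor specifications.
hdd_random_iops = 150       # assumed: a 15K RPM SAS drive
ssd_random_iops = 150_000   # assumed: a mainstream enterprise SSD

print(f"One SSD ~ {ssd_random_iops // hdd_random_iops} magnetic disks")
# Output: One SSD ~ 1000 magnetic disks
```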

One thing left to say: Go Flash!

How about an All Flash Virtual SAN?

Duncan Epping · Jan 10, 2014 ·

Yeah, that title got your attention, right… For now it is just me writing about it; nothing has been announced or promised. At VMworld, I believe it was Intel who demonstrated the possibilities in this space: an all-flash Virtual SAN. A couple of weeks back, during my holiday, someone pointed me to a couple of articles about SSD endurance. Typically these types of articles deal with the upper end of the spectrum and as such are irrelevant to most of us, and some of the articles I have read in the past around endurance were, to be honest, disappointing.

TechReport.com, however, decided to look at consumer-grade SSDs. We are talking about SSDs like the Intel 335, Samsung 840 series, Kingston HyperX and the Corsair Neutron. All of the SSDs used have a capacity of around 250GB and are priced anywhere between $175 and $275. Now if you look at the guarantees given in terms of endurance, we are talking about anything ranging from "20GB of writes per day for the length of its three-year warranty" for the Intel (22TB in total) to 192TB in total over a three-year warranty for the Kingston, with the other SSDs somewhere in between.
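Those guarantees are easy to sanity check. A quick sketch of the arithmetic (my own, assuming 365 days per year):

```python
# Intel 335: "20GB of writes per day" over the three-year warranty.
intel_total_tb = 20 * 365 * 3 / 1000
print(f"Intel 335: ~{intel_total_tb:.0f} TB total")        # ~22 TB

# Kingston: 192TB total over the same three years, expressed per day.
kingston_gb_day = 192_000 / (365 * 3)
print(f"Kingston HyperX: ~{kingston_gb_day:.0f} GB/day")   # ~175 GB/day
```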

Tech Report set their first checkpoint at 22TB. After running through a series of tests, which are described in the article, they compared the results between the various SSDs after 22TB of writes. Great to see that all SSDs did what they promised: all of them passed the 22TB mark without any issues. The next checkpoint, at the 200TB mark, showed the first signs of weakness; as expected, the lower-end SSDs dropped out first. At the 300TB checkpoint they also added an unpowered retention test to see how well the drives retain data when unplugged. So far impressive results, and a blog series I will follow with interest. The articles clearly show that from an endurance perspective the SSDs perform a lot better than most had assumed in past years. It is fair to say that consumer-grade SSDs are up to the challenge.

Considering the low price points of these flash devices, I can see how an all-flash Virtual SAN solution would be possible, leveraging consumer-grade SSDs as the capacity tier (reads) and enterprise-grade SSDs to provide write performance (write buffer). Hopefully we will see the capacity of these devices increase even further; today some of them go up to 500GB, others up to 800GB. Wouldn't it be nice to have a 1TB (or larger) version?

Anyway, I am excited and definitely planning on running some tests with an all-flash Virtual SAN solution in the future… What about you?

Updates to the endurance series as it progressed:

  • 500TB blog update
  • 600TB blog update
  • 1PB blog update
  • 2PB blog update
  • Conclusion

How to calculate what your Virtual SAN datastore size should be

Duncan Epping · Jan 8, 2014 ·

I have had this question so many times that I figured I would write an article about it: how do you calculate what your Virtual SAN datastore size should be? Ultimately this determines what kind of server hardware you can use, which disk controller you need, and which disks… So it is important that you get it right. I know the VMware Technical Marketing team is developing collateral around this topic; when that has been published I will add a link here. Let's start with a quote by Christian Dickmann, one of our engineers, as it is the foundation of this article:

In Virtual SAN your whole cluster acts as a hot-spare

Personally I like to work top-down, meaning that I start with an average for virtual machines or a total combined number. Let's go through the exercise with an example, as that makes it a bit easier to digest.

Let's assume the average VM disk size is 50GB, that on average the VMs have 4GB of memory provisioned, and that we have 100 virtual machines in total that we want to run on a 4-host cluster. Based on that info, the formula would look something like this:

(total number of VMs * average VM size) + (total number of VMs * average VM memory size) = total capacity required

In our case that would be:

(100 * 50GB) + (100 * 4GB) = 5400 GB

So that is it? Well, not really. Like every storage / file system there is some overhead, and we will need to take "failures to tolerate" into account. If I set "failures to tolerate" to 1, then I will have 2 copies of my VMs, which means I need 5400 GB * 2 = 10800 GB. Personally I also add an additional 10% in disk capacity to ensure there is room for things like metadata, log files, vmx files and some small snapshots when required. Note that VSAN by default provisions all VMDKs as thin objects (swap files are thick, as Cormac explained here), so there should be room available regardless. Better safe than sorry, though. This means that 10800 GB becomes 11880 GB, which I prefer to round up to 12TB. The formula I have been using thus looks as follows:

(((Number of VMs * Avg VM size) + (Number of VMs * Avg mem size)) * (FTT + 1)) + 10%
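For those who want to play with the numbers, here is that formula expressed as a minimal Python sketch. The variable names are mine; it simply reproduces the worked example above:

```python
# Worked example: 100 VMs, 50GB average disk, 4GB average memory,
# failures to tolerate (FTT) of 1, plus 10% slack for metadata,
# log files, vmx files and small snapshots.
num_vms = 100
avg_disk_gb = 50
avg_mem_gb = 4
ftt = 1
slack = 0.10

raw_gb = num_vms * avg_disk_gb + num_vms * avg_mem_gb   # 5400 GB
mirrored_gb = raw_gb * (ftt + 1)                        # 10800 GB
total_gb = mirrored_gb * (1 + slack)                    # 11880 GB

print(f"Datastore capacity required: {total_gb:.0f} GB (~12 TB)")
```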

Now the next step is to see how you divide that across your hosts. I mentioned we would have 4 hosts in our cluster, which gives us two options: we create a cluster that can re-protect itself after a full host failure, or one that cannot. Just to clarify: in order to have 1 host worth of spare capacity available, we will need to divide the total capacity by 3 instead of 4. Let's look at those two options and their impact:

  • 12TB / 3 hosts = 4TB per host (for each of the 4 hosts)
    • Allows you to re-protect (sync/mirror) all virtual machine objects even when you lose a full host
    • All virtual machines will maintain availability levels when doing maintenance
    • Requires an additional 1TB per host!
  • 12TB / 4 hosts = 3TB per host (for each of the 4 hosts)
    • If all disk space is consumed, when a host fails virtual machines cannot be “re-protected” as there would be no capacity to sync/mirror the objects again
    • When entering maintenance mode data availability cannot be maintained as there would be no room to sync/mirror the objects to another disk

Now if you look at the numbers, we are talking about an additional 1TB per host. With 4 hosts, and assuming we are using 2.5″ SAS 900GB Hitachi drives, that would be 4 additional drives, at a cost of around $1,000 per drive. When using 3.5″ SATA drives the cost would be even lower. Although this is just a number I found on the internet, it does illustrate that the cost of providing additional availability can be small. Prices could differ depending on the server brand used, but even at double the cost I would go for the additional drive and, as such, the additional "hot spare capacity".

To make life a bit easier, I created a calculator. I hope it helps everyone who is looking at configuring hosts for their Virtual SAN based infrastructure.
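For those who would rather script it than use the calculator, here is a minimal sketch of the host-division step described above. The function is hypothetical, my own illustration rather than the actual calculator's code:

```python
# Divide the required datastore capacity across the hosts in the
# cluster. With reprotect=True, one host's worth of capacity is held
# back so objects can be re-mirrored after a full host failure.
def capacity_per_host_tb(total_tb: float, hosts: int,
                         reprotect: bool = True) -> float:
    divisor = hosts - 1 if reprotect else hosts
    return total_tb / divisor

print(capacity_per_host_tb(12, 4, reprotect=True))   # 4.0 TB per host
print(capacity_per_host_tb(12, 4, reprotect=False))  # 3.0 TB per host
```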

