A couple of weeks back I had the honor of being one of the panel members at the opening of the Pure Storage office in the Benelux. The topic of course was flash, and the discussion primarily revolved around its benefits. The next day I tweeted a quote of one of the answers I gave during the session, which was picked up by Frank Denneman in one of his articles. This is the quote:
Many people talk about performance when the subject of new (flash) arrays come up. I think the operational simplicity is evenly important!
— Duncan Epping (@DuncanYB) January 22, 2014
David Owen responded to my tweet saying that many performance acceleration platforms introduce an additional layer of complexity, and Frank followed up on that in his article. However, this is not what my quote was referring to. First of all, I don't agree with David that many performance acceleration solutions increase operational complexity. However, I do agree that they don't always make life a whole lot easier either.
I guess it is fair to say that performance acceleration solutions (hypervisor-based SSD caching) are not designed to replace your storage architecture or to simplify it. They are designed to enhance it, to boost performance. During the Pure Storage panel session I talked about how flash changed the world of storage, or better said, is changing the world of storage. When you purchased a storage array in the past two decades, it would come with days' worth of consultancy: two days typically being the minimum, and in some cases a week or even more, depending on the size of the environment, the functionality used, and so on. And that was just the install/configure part. It also required the administrators to be trained, in some cases (not uncommonly) via multiple five-day courses. That says something about the complexity of these systems.
The complexity, however, was not introduced by storage vendors just because they wanted to sell extra consultancy hours. It was simply the result of how the systems were architected, which in turn was the result of one major constraint: magnetic disks. But the world is changing, primarily because a new type of storage was introduced: flash!
Flash allowed storage companies to rethink their architecture. It is probably fair to state that this was kickstarted by the startups out there who took flash and saw it as their opportunity to innovate. Innovating by removing complexity; removing (front-end) complexity by flattening their architecture.
Complex constructs to improve performance are no longer required, as (depending on which type you use) a single flash disk delivers more IOPS than 1,000 magnetic disks typically do. Even when it comes to resiliency, most new storage systems introduced different types of solutions to mitigate (disk) failures. No longer is a five-day training course required to manage your storage systems. No longer do you need weeks of consultancy just to install and configure your storage environment. In essence, flash removed a lot of the burden that was placed on customers. That is the huge benefit of flash, and that is what I was referring to with my tweet.
One thing left to say: Go Flash!
Excellent post as always Duncan.
I’m in the middle of testing a number of flash arrays and ran into an issue which I think more vendors and application owners will soon experience. The issue is a new one and specifically relates to how applications are currently tailored to accept the limitations of spinning-disk arrays. With those limitations in mind, they throttle the number of actions they attempt to execute simultaneously.
In my example I was performing VDI bootstorm testing and found that vCenter would not allow me to boot more than 100 desktops at a time. The workaround was to skip vCenter and simply execute scripts directly against each ESXi host. This in turn brought us to our second limitation: Citrix PVS streaming services, which were also configured to limit simultaneous booting to 100 desktops. Once we removed both of these constraints we were able to see the true ceiling of each array being benchmarked. None of these scenarios exhausted the SSD storage arrays or the ESXi servers hosting the VDI desktops.
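The per-host workaround described above could be sketched roughly like this. ESXi ships a local CLI, vim-cmd, that can list registered VMs and power them on without going through vCenter (and thus without vCenter's concurrent-boot throttle). The function name and the exact invocation here are illustrative, not the commenter's actual script:

```shell
# Hypothetical per-host bootstorm helper: power on every VM registered
# on this ESXi host directly, bypassing vCenter. "vim-cmd" is ESXi's
# local management CLI; run this on each host (e.g. via SSH).
boot_all_vms() {
  # getallvms prints a header line, then one row per VM with the
  # numeric Vmid in the first column.
  vim-cmd vmsvc/getallvms 2>/dev/null | awk 'NR > 1 { print $1 }' |
  while read -r vmid; do
    vim-cmd vmsvc/power.on "$vmid" &   # kick off boots in parallel
  done
  wait   # block until all power-on calls have returned
}
```

In a real test you would still want PVS (or whatever streams the desktop images) configured to keep up, since removing the vCenter throttle just moves the bottleneck downstream.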
My question to you: when do you expect VMware to come out with tailored options for running ESXi against all-SSD arrays vs. spinning disk?
I see hints that this type of logic is in the not-so-distant future, as highlighted in this VMware Labs article. https://labs.vmware.com/vmtj/redefining-esxi-io-multipathing-in-the-flash-era
Duncan Epping says
That is a good point indeed Chris. Although the array and its architecture have changed, the application or the OS has not necessarily been optimized or architected for that.
I was at the Pure Storage event. I feel this is the start of a whole new storage decade. But the lack of some features, or of a total ecosystem with management tools or backup options like EMC or NetApp have, did make it feel a bit like a brand-new product which has to develop to get mature. On the other hand, when I looked around at the biggest storage vendors, there is no one who has an “all-flash storage” array integrated into their ecosystem. They are all stand-alone operating systems with a reduced set of management options. No different from the Pure Storage flash array, or even worse ;-). I don’t think people realise how all-flash arrays will change the IT world. Storage first.
Duncan Epping says
Good point, but then again, for things like backup there are many solutions “outside” of the array that would work fine. Especially now that many of those arrays run VMs instead of “direct data”, it may make sense for them to partner with other solutions instead of building their own.