3 weeks ago I announced the availability of the ebook of “Essential Virtual SAN”. Today I have the pleasure of informing you that the paper copy has also hit the streets and is being shipped by Amazon as of today. So for those who were waiting to order until the paper version was available: go here, order it today, and have it in house by tomorrow! The book covers the architecture of Virtual SAN, operational and architectural gotchas, sizing guidance, design examples, and much more. Just pick it up!
Working within R&D at VMware means you typically work with technology which is 1-2 years out, and discuss futures of products which are 2-3 years out. Especially in the storage space a lot has changed. Not just innovations within the hypervisor by VMware, like Storage DRS, Storage IO Control, VMFS-5, VM Storage Policies (SPBM), vSphere Flash Read Cache, Virtual SAN, etc. But also by partners who offer software-based solutions like PernixData (FVP), Atlantis (ILIO), and SanDisk FlashSoft. Of course there is the whole Server SAN / hyper-converged movement with Nutanix, ScaleIO, Pivot3, SimpliVity, and others. Then there is the whole slew of new storage systems, some of which are scale-out and all-flash, while others focus more on simplicity; here we are talking about Nimble, Tintri, Pure Storage, XtremIO, Coho Data, SolidFire, and many, many more.
Looking at it from my perspective, I would say there are multiple phases when it comes to the SDS journey:
- Phase 0 – Legacy storage with NFS / VMFS
- Phase 1 – Legacy storage with NFS / VMFS + Storage IO Control and Storage DRS
- Phase 2 – Hybrid solutions (Legacy storage + acceleration solutions or hybrid storage)
- Phase 3 – Object granular policy driven (scale out) storage
Maybe I should have abstracted a bit more:
- Phase 0 – Legacy storage
- Phase 1 – Legacy storage + basic hypervisor extensions
- Phase 2 – Hybrid solutions with hypervisor extensions
- Phase 3 – Fully hypervisor / OS integrated storage stack
I have written about Software Defined Storage multiple times in the last couple of years and have worked with various solutions which are considered to be “Software Defined Storage”. I have a certain view of what the world looks like. However, when I talk to some of our customers, reality is different; some seem very happy with what they have in Phase 0. Although all of the above is the way of the future, and for some may be reality today, I do realise that Phase 1, 2, and 3 may be far away for many. I would like to invite all of you to share:
- Which phase you are in, and where you would like to go?
- What you are struggling with most today that is driving you to look at new solutions?
I was reading the Virtual SAN Data Locality white paper. I think it is a well written paper, and really enjoyed it. I figured I would share the link with all of you and provide a short summary. (http://blogs.vmware.com/vsphere/files/2014/07/Understanding-Data-Locality-in-VMware-Virtual-SAN-Ver1.0.pdf)
The paper starts with an explanation of what data locality is (also referred to as “locality of reference”) and explains the different types of latency experienced in Server SAN solutions (network, SSD). It then explains how Virtual SAN caching works, how locality of reference is implemented within VSAN, and why VSAN does not move data around: the cost of doing so is high compared to the benefit for VSAN. It also demonstrates how VSAN delivers consistent performance, even without a local read cache. The key word here is consistent performance, something that is not the case for all Server SAN solutions. In some cases, a significant performance degradation is experienced for minutes after a workload has been migrated. As hopefully all of you know, vSphere DRS runs every 5 minutes by default, which means that migrations can and will happen various times a day in most environments. (I have seen environments where 30 migrations a day were not uncommon.) The paper then explains where and when data locality can be beneficial, primarily when RAM is used and with specific use cases (like View), and then explains how CBRC, aka View Accelerator (an in-RAM deduplicated read cache), could be used for this purpose. (It does not explain in depth how other Server SAN solutions leverage RAM for local read caching, but I am sure those vendors will have more detailed posts on that, which are worth reading!)
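To give a feel for why the network hop matters so little on a modern fabric, here is a back-of-the-envelope comparison of a local versus a remote flash read. The latency figures below are illustrative assumptions on my part, not measurements from the paper:

```python
# Back-of-the-envelope comparison: local flash read vs. a read served
# over the network from another host. All numbers are assumptions
# chosen for illustration, not measurements.

ssd_read_latency_us = 100.0   # assumed flash read service time (microseconds)
network_rtt_us = 10.0         # assumed 10GbE round trip within a rack

local_read_us = ssd_read_latency_us
remote_read_us = ssd_read_latency_us + network_rtt_us

# Relative overhead of going over the network
overhead_pct = (remote_read_us - local_read_us) / local_read_us * 100

print(f"local read : {local_read_us:.0f} us")
print(f"remote read: {remote_read_us:.0f} us ({overhead_pct:.0f}% overhead)")
```

With these (assumed) numbers the network adds roughly 10% to the read latency, which helps explain why shuffling gigabytes of data around after every migration is a poor trade-off.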
There are a couple of real gems in this paper, which I will probably read a couple more times in the upcoming days!
Yes, the day has finally come… Our pet project, the Essential Virtual SAN book, is finally out! Cormac and I decided to take the “e-book first” route, which enables us to have it out weeks before the printed copy. Before doing the thank-yous and providing you with some details on what the book is about, I want to thank my co-author Cormac! It was a great pleasure working with you on this project Cormac, thanks for asking me to be part of this exciting book!
We want to thank our technical editors Paudie O’Riordan and Christos Karamanolis, who spent countless hours reading and editing our raw materials. We would like to thank the VMware Virtual SAN engineering team for the countless hours spent discussing the ins and outs of Virtual SAN. Especially Christian Dickmann and (again) Christos Karamanolis; it would not have been possible without your help! We also want to acknowledge William Lam, Wade Holmes, Rawlinson Rivera, Simon Todd, Alan Renouf, and Jad El-Zein for their help and contributions to the book. Last but not least, we want to thank the Pearson team for their flexibility and agility in getting things done, and our management (Phil Weiss, Adam Zimman, and Mornay van der Walt) for supporting us on this journey!
Cormac and I are also very pleased to say that we have two awesome forewords by none other than VMware CTO Ben Fathi and SVP of Storage and Availability at VMware Charles Fan! Thanks for taking the time out of your busy schedules, we very much appreciate it.
What does the book cover?
Understand and implement VMware Virtual SAN: the heart of tomorrow’s Software-Defined Datacenter (SDDC)
VMware’s breakthrough Software-Defined Datacenter (SDDC) initiative can help you virtualize your entire datacenter: compute, storage, networks, and associated services. Central to SDDC is VMware Virtual SAN (VSAN): a fully distributed storage architecture seamlessly integrated into the hypervisor and capable of scaling to meet any enterprise storage requirement.
Now, the leaders of VMware’s wildly popular Virtual SAN previews have written the first authoritative guide to this pivotal technology. You’ll learn what Virtual SAN is, exactly what it offers, how to implement it, and how to maximize its value.
Writing for administrators, consultants, and architects, Cormac Hogan and Duncan Epping show how Virtual SAN implements both object-based storage and a policy platform that simplifies VM storage placement. You’ll learn how Virtual SAN and vSphere work together to dramatically improve resiliency, scale-out storage functionality, and control over QoS.
Both an up-to-the-minute reference and hands-on tutorial, Essential Virtual SAN uses realistic examples to demonstrate Virtual SAN’s most powerful capabilities. You’ll learn how to plan, architect, and deploy Virtual SAN successfully, avoid gotchas, and troubleshoot problems once you’re up and running.
- Understanding the key goals and concepts of Software-Defined Storage and Virtual SAN technology
- Meeting physical and virtual requirements for safe Virtual SAN implementation
- Installing and configuring Virtual SAN for your unique environment
- Using Storage Policy Based Management to control availability, performance, and reliability
- Simplifying deployment with VM Storage Policies
- Discovering key Virtual SAN architectural details: caching I/O, VASA, witnesses, pass-through RAID, and more
- Ensuring efficient day-to-day Virtual SAN management and maintenance
- Interoperating with other VMware features and products
- Designing and sizing Virtual SAN clusters
- Troubleshooting, monitoring, and performance optimization
Today I was answering some questions on the VMTN forums, and one of the questions was around the quality of components in some of the all-flash / hybrid arrays. This person kept coming back to the type of flash used (eMLC vs MLC, SATA vs NL-SAS vs SAS). One of the comments he made was the following:
I talked to Pure Storage but they want $$$ for 11TB of consumer grade MLC.
I am guessing he did a quick search on the internet, found a price for some SSDs, multiplied it, and figured that Pure Storage was asking way too much… And even compared to some more traditional arrays filled with SSDs, they could sound more expensive. I guess this also applies to other solutions, so I am not calling out Pure Storage here. One thing some people seem to forget when it comes to these new storage architectures is that they are built with flash in mind.
What does that mean? Well, everyone has heard all of the horror stories about consumer-grade flash wearing out extremely fast and blowing up in your face. Fortunately, that is only true to a certain extent, as some consumer-grade SSDs easily reach 1PB of writes these days. On top of that, there are a couple of things I think you should know and consider before making statements like these, or before being influenced by a sales team that says “well, we offer SLC versus MLC, so we are better than them”.
For instance (as Pure Storage lists on their website), there are many more MLC drives shipped than any other type at this point. Which means the technology has been tested inside out by the group that can break devices in more ways than you or your QA team ever could: the consumer! More importantly, if you ask me, ALL of these new storage architectures have in-depth knowledge of the type of flash they are using. That is how their system was architected! They know how to leverage flash, they know how to write to flash, and they know how to avoid fast wear-out. They developed an architecture which was not only designed but also highly optimized for flash… This is what you pay for. You pay for the “total package”, which means the whole solution, not just the flash devices that are leveraged. The flash devices are a part of the solution, and just a relatively small part if you ask me. You pay for total capacity with low latency and functionality like deduplication, compression, and replication (in some cases). You pay for the ease of deployment and management (operational efficiency), meaning you get to spend your time on stuff that matters to your customers… their applications.
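To put the wear-out fear in perspective, a quick endurance calculation helps. Every input below (drive endurance, daily write volume, write amplification) is an assumption I picked for illustration; a well-designed flash controller works hard to keep the amplification factor low:

```python
# Rough SSD wear-out estimate. All inputs are illustrative assumptions.

rated_endurance_pb = 1.0     # assumed total writes the drive can absorb (~1 PB)
writes_per_day_tb = 0.5      # assumed sustained host writes per day (0.5 TB)
write_amplification = 2.0    # assumed controller write amplification factor

total_writes_tb = rated_endurance_pb * 1000.0
effective_writes_per_day_tb = writes_per_day_tb * write_amplification

# Days until the rated endurance is consumed at this write rate
lifetime_days = total_writes_tb / effective_writes_per_day_tb

print(f"estimated lifetime: {lifetime_days:.0f} days (~{lifetime_days / 365:.1f} years)")
```

Even with these deliberately pessimistic assumptions the drive lasts years, and an architecture that spreads writes evenly across many devices and minimizes write amplification stretches that further, which is exactly the kind of engineering you are paying for.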
You can summarize all of it in a single sentence: the physical components used in all of these solutions are just a small part of the solution. Whenever someone tries to sell you the “hardware”, that is when you need to be worried!