
Yellow Bricks

by Duncan Epping



Part 2: Is VSA the future of Software Defined Storage? (Customer use case)

Duncan Epping · Nov 12, 2019 ·

About 6.5 years ago I wrote this blog post about the future of Software-Defined Storage and whether the VSA (virtual storage appliance) is the future for it. Last week at VMworld a customer reminded me of this article. Not because they read the article and pointed me back at it, but because they implemented what I described in this post, almost to the letter.

This customer had an interesting implementation, which closely resembles the diagram I added to that blog post. Note that I have added a part to the diagram which I originally left out but had mentioned in the blog (yes, that is why the diagram looks ancient… it is):

I want to share with you what the customer is doing, because there are still plenty of customers who do not realize that this is supported. Note that this is supported by both vSAN and VMware Cloud Foundation, providing you a future-proof, scalable, and flexible full-stack HCI architecture which does not need to be implemented in a rip-and-replace approach!

This customer basically leverages almost all functionality of our Software-Defined Storage offering. They have vSAN with locally attached storage devices (all NVMe) for certain workloads. They have storage arrays with vVols enabled for particular workloads. They have a VAIO (vSphere APIs for IO Filtering) filter driver which they use for replication. They also heavily rely on our APIs for monitoring and reporting, and as you can imagine they are big believers in Policy-Based Management, as that is what helps them place workloads on a particular type of storage.

Now you may ask yourself, why on earth would they have vSAN and vVols sitting next to each other? Well, they already had a significant investment in storage, the storage solution was fully vVols capable, and when they started using vSAN for certain projects they simply fell in love with Storage Policy-Based Management and decided to get it enabled for their storage systems as well. Even though the plan is to go all-in on vSAN over time, the interesting part here, in my opinion, is the “openness” of the platform. Want to go all-in on vSAN? Go ahead! Want to have traditional storage next to HCI? Go ahead! Want to use software-based data services? Go ahead! You can mix and match, and it is fully supported.
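To make that policy-driven placement idea a bit more tangible, below is a minimal Python sketch of capability matching: datastores advertise what they can do, policies describe what a workload needs, and placement is simply a compatibility check. This is purely an illustrative model with made-up names, not the actual SPBM API.

from dataclasses import dataclass, field

@dataclass
class Datastore:
    """A datastore advertising the capabilities it offers (illustrative only)."""
    name: str
    capabilities: dict = field(default_factory=dict)

@dataclass
class StoragePolicy:
    """A workload policy expressed as required capability values (illustrative only)."""
    name: str
    rules: dict = field(default_factory=dict)

    def compatible(self, ds: Datastore) -> bool:
        # A datastore is compatible when it satisfies every rule in the policy.
        return all(ds.capabilities.get(k) == v for k, v in self.rules.items())

# Mixed environment: a vSAN datastore for some workloads, a vVols-enabled array for others.
datastores = [
    Datastore("vsan-nvme", {"type": "vsan", "flash": True}),
    Datastore("vvol-array", {"type": "vvol", "flash": True, "replicated": True}),
]

policies = [
    StoragePolicy("gold-replicated", {"type": "vvol", "replicated": True}),
    StoragePolicy("fast-local", {"type": "vsan", "flash": True}),
]

for policy in policies:
    matches = [d.name for d in datastores if policy.compatible(d)]
    print(f"{policy.name} -> {matches}")

The point of the sketch is simply that the policy, not the administrator, decides whether a workload lands on vSAN, vVols, or anything else that advertises the right capabilities.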

Anyway, just wanted to share that bit, and figured it would also be fun to bring up this 6.5-year-old article again. One more thing, I think it is also good to realize how long these transitions tend to take. If you had asked me in 2013 when we would see customers using this approach, my guess would have been 2-3 years. Almost 6.5 years later we are starting to see this being seriously looked at. Of course, platforms have to mature, but customers also have to get comfortable with the idea. Change simply takes a lot of time.

Hyper-Converged is here, but what is next?

Duncan Epping · Oct 11, 2016 ·

Last week I was talking to a customer and they posed some interesting questions: what excites me in IT (why I work for VMware), and what is next for hyper-converged? I thought they were interesting and very relevant questions. I am guessing many customers have that same question (what is next for hyper-converged, that is). They see this shiny thing out there called hyper-converged, but if they take those steps, where does the journey end? I truly believe that those who went the hyper-converged route simply took the first steps on an SDDC journey.

Hyper-converged, I think, is a term which was hyped and over-used, just like “cloud” a couple of years ago. Let’s break down what it truly is: hardware + software. Nothing really groundbreaking. It is different in terms of how it is delivered. Sure, it is a different architectural approach, as you utilize a software-based / server-side scale-out storage solution which sits within the hypervisor (or on top, for that matter). Still, that hypervisor is something you were already using (most likely), and I am sure that “hardware” isn’t new either. Then the storage aspect must be the big differentiator, right? Wrong. The fundamental difference, in my opinion, is how you manage the environment and the way it is delivered and supported. But does it really need to stop there, or is there more?

There definitely is much more if you ask me. That is one thing that has always surprised me: many see hyper-converged as a complete solution, but the reality is that in many cases essential parts are missing. Networking, security, automation/orchestration engines, logging/analytics engines, BC/DR (and the orchestration of it), etc. Many different aspects and components seem to be overlooked. Just look at networking: even including a switch is not something you see too often, and what about the configuration of that switch, or overlay networks, firewalls / load balancers? None of it appears to be part of hyper-converged systems. The funny thing is, though, that if you are going on a software-defined journey, if you want an enterprise-grade private cloud that allows you to scale in a secure but agile manner, these components are a requirement; you cannot go without them. You cannot extend your private cloud to the public cloud without any type of security in place, and one would assume that you would like to orchestrate everything from that same platform and have the same networking / security capabilities at your disposal both private and public.

That is why I was so excited about the VMworld US keynote. Cross Cloud Services on top of hyper-converged, leveraging all the tools VMware provides today (vSphere, VSAN, NSX), will allow you to do exactly what I describe above. Whether that is to IBM, vCloud Air or any other of the mega clouds listed in the slide below is beside the point. Extending your datacenter services into public clouds is what we have been talking about for a while, this hybrid approach which could bring (dare I say) elasticity. This is a fundamental aspect of SDDC, of which a hyper-converged architecture is simply a key pillar.

Hyper-converged by itself does not make a private cloud. Hyper-converged does not deliver a full SDDC stack; it is a great step in the right direction, however. But before you take that (necessary) hyper-converged step, ask yourself what is next on the journey to SDDC. Networking? Security? Automation/Orchestration? Logging? Monitoring? Analytics? Hybridity? Who can help you reach full potential, who can help you take those next steps? That’s what excites me, that is why I work for VMware. I believe we have a great opportunity here, as we are the only company who holds all the pieces of the SDDC puzzle. And with regards to what is next? Deliver all of that in an easy-to-consume manner, that is what is next!


Software Defined Storage, which phase are you in?!

Duncan Epping · Jul 24, 2014 ·

Working within R&D at VMware means you typically work with technology which is 1-2 years out, and discuss futures of products which are 2-3 years out. Especially in the storage space a lot has changed. Not just innovations within the hypervisor by VMware, like Storage DRS, Storage IO Control, VMFS-5, VM Storage Policies (SPBM), vSphere Flash Read Cache, Virtual SAN etc., but also by partners who do software-based solutions like PernixData (FVP), Atlantis (ILIO) and SanDisk FlashSoft. Of course there is the whole Server SAN / hyper-converged movement with Nutanix, ScaleIO, Pivot3, SimpliVity and others. Then there is the whole slew of new storage systems, some of which are scale-out and all-flash, others which focus more on simplicity; here we are talking about Nimble, Tintri, Pure Storage, XtremIO, Coho Data, SolidFire and many, many more.

Looking at it from my perspective, I would say there are multiple phases when it comes to the SDS journey:

  • Phase 0 – Legacy storage with NFS / VMFS
  • Phase 1 – Legacy storage with NFS / VMFS + Storage IO Control and Storage DRS
  • Phase 2 – Hybrid solutions (Legacy storage + acceleration solutions or hybrid storage)
  • Phase 3 – Object granular policy driven (scale out) storage

<edit>

Maybe I should have abstracted a bit more:

  • Phase 0 – Legacy storage
  • Phase 1 – Legacy storage + basic hypervisor extensions
  • Phase 2 – Hybrid solutions with hypervisor extensions
  • Phase 3 – Fully hypervisor / OS integrated storage stack

</edit>

I have written about Software Defined Storage multiple times in the last couple of years and have worked with various solutions which are considered to be “Software Defined Storage”, so I have a certain view of what the world looks like. However, when I talk to some of our customers, reality is different; some seem very happy with what they have in Phase 0. Although all of the above is the way of the future, and for some may be reality today, I do realise that Phases 1, 2 and 3 may be far away for many. I would like to invite all of you to share:

  1. Which phase are you in, and where would you like to go?
  2. What are you struggling with most today that is driving you to look at new solutions?

Looking back: Software Defined Storage…

Duncan Epping · May 30, 2014 ·

Over a year ago I wrote an article (multiple, actually) about Software Defined Storage, VSAs, different types of solutions, and how flash impacts the world. One of the articles contained a diagram, and I would like to pull that up for this article. The diagram below is what I used to explain how I see a potential software defined storage solution. Of course I am severely biased as a VMware employee, and I fully understand there are various scenarios here.

As I explained, the type of storage connected to this layer could be anything: DAS / NFS / iSCSI / block, who cares… The key thing here is that there is a platform sitting in between your storage devices and your workloads. All your storage resources would be aggregated into a large pool, and the layer should sort things out for you based on the policies defined for the workloads running there. Now, I drew this layer coupled with the “hypervisor”, but that’s just because that is the world I live in.
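As a thought experiment, that “layer in between” could be modelled roughly as in the sketch below: heterogeneous backends register into one pool, and the layer decides placement based on what the workload asks for. The names and structure are hypothetical and only illustrate the aggregation idea; this is not any actual product API.

from dataclasses import dataclass

@dataclass
class Backend:
    """Any storage device handed to the layer: DAS, NFS, iSCSI, block... who cares."""
    name: str
    kind: str      # e.g. "das", "nfs", "iscsi"
    tier: str      # e.g. "flash", "capacity"
    free_gb: int

class StoragePool:
    """Aggregates all backends and places workloads based on what their policy asks for."""

    def __init__(self):
        self.backends = []  # every registered Backend ends up in one big pool

    def register(self, backend):
        self.backends.append(backend)

    def provision(self, workload, size_gb, tier):
        # Pick the first backend that matches the requested tier and has enough free space.
        for b in self.backends:
            if b.tier == tier and b.free_gb >= size_gb:
                b.free_gb -= size_gb
                return f"{workload}: {size_gb} GB placed on {b.name} ({b.kind}, tier={tier})"
        raise RuntimeError(f"no backend satisfies tier={tier}, size={size_gb} GB")

pool = StoragePool()
pool.register(Backend("local-nvme", "das", "flash", 2000))
pool.register(Backend("nfs-filer", "nfs", "capacity", 10000))

print(pool.provision("db-vm", 500, "flash"))
print(pool.provision("backup-vm", 2000, "capacity"))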

Looking back at that article and looking at the state of the industry today, a couple of things stood out. First and foremost, the term “Software Defined Storage” has been abused by everyone and doesn’t mean much to me personally anymore. If someone says during a blogger briefing “we have a software defined storage solution”, I will typically ask them to define it, or explain what it means to them. Anyway, why did I show that diagram? Well, mainly because I realised over the last couple of weeks that a couple of companies/products are heading down this path.

If you look at the diagram and, for instance, think about VMware’s own Virtual SAN product, then you can see what would be possible. I would even argue that technically a lot of it is possible today; the product is still lacking in some of these areas (data services), but I expect that to be a matter of time. Virtual SAN sits right in the middle of the hypervisor, the API and policy engine are provided by the vSphere layer, and it has its own caching service… For now, connecting SAN storage isn’t supported, but if I wanted to I could do it even today, simply by tagging “LUNs” as local disks.

Another product which comes to mind when looking at the diagram is PernixData’s FVP. PernixData managed to build a framework that sits in the hypervisor, in the data path of the VMs. They provide a highly resilient caching layer, and will be able to do both flash and memory caching in the near future. They will also support different types of connected storage with the upcoming release… If you ask me, they are in the right position to layer additional data services like deduplication / compression / encryption / replication on top of it. I am just speculating here, and I don’t know the PernixData roadmap, so who knows…
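To illustrate what such an in-hypervisor acceleration layer conceptually does, here is a toy write-through cache in Python: reads are served from a fast tier when possible and fall back to the backing datastore, while writes go to both. This is only a sketch of the general idea, not how FVP actually works, and every name in it is invented.

class BackingDatastore:
    """The (slower) datastore underneath the acceleration layer."""

    def __init__(self):
        self.blocks = {}

    def read(self, lba):
        return self.blocks.get(lba)

    def write(self, lba, data):
        self.blocks[lba] = data

class WriteThroughCache:
    """Toy write-through cache sitting in the VM data path (think flash or memory tier)."""

    def __init__(self, backend, capacity=1024):
        self.backend = backend
        self.capacity = capacity
        self.cache = {}  # lba -> data, acting as the fast tier

    def read(self, lba):
        if lba in self.cache:              # hit: served from the fast tier
            return self.cache[lba]
        data = self.backend.read(lba)      # miss: fall back to the backing datastore
        if data is not None:
            self._insert(lba, data)
        return data

    def write(self, lba, data):
        # Write-through: the write is acknowledged only after the backing datastore has it.
        self.backend.write(lba, data)
        self._insert(lba, data)

    def _insert(self, lba, data):
        if len(self.cache) >= self.capacity:
            self.cache.pop(next(iter(self.cache)))  # naive eviction to keep the sketch bounded
        self.cache[lba] = data

datastore = BackingDatastore()
accelerated = WriteThroughCache(datastore)
accelerated.write(42, b"hello")
print(accelerated.read(42))  # served from the cache

A real acceleration layer would of course worry about resiliency, write-back acknowledgements, and failure handling, which is exactly where the interesting engineering sits.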

Something completely different is EMC’s ViPR (read Chad’s excellent post on ViPR). Although it may not entirely fit the picture I drew, they are aiming to be that layer in between you and your storage devices: abstract it all for you, allow for a single API to ease automation, and do this “end to end”, including the storage networks in between. If they extended this to allow certain data services to sit in a different layer, they would pretty much be there.

Last but not least, Atlantis USX. Although Atlantis is a virtual appliance and as such a different implementation than Virtual SAN and FVP, they did manage to build a platform that basically does everything I mentioned in my original article. One thing it doesn’t directly solve is the management of the physical storage devices, but today neither does FVP or Virtual SAN (well, to a certain extent VSAN does…). I am confident that this will change when Virtual Volumes is introduced, as Atlantis should be able to leverage Virtual Volumes for those purposes.

Some may say, well, what about VMware’s Virsto? Indeed, Virsto would also fit the picture, but its end of availability was announced not too long ago. However, it has been hinted at multiple times that Virsto technology will be integrated into other products over time.

Although by now “Software Defined Storage” is seen as a marketing bingo buzzword, the world of storage is definitely changing. The question now, I guess, is: are you ready to change as well?

Software Defined Storage articles to read…

Duncan Epping · Jun 4, 2013 ·

Last week there was a floodstorm of articles published around Software Defined Storage, and of course there was the SDS Tweetstorm caused by the NetApp chat (find a summary here and here). One thing that is clear from these articles, and the chat, is that everyone has their own spin on what Software Defined Storage is or should be. Every vendor takes the fairly high-level definition and then molds it in such a way as to make it seem they offer a true Software Defined Storage solution today. Personally, I will leave it up to you, the consumer, to decide if you agree with them or not… I know that we at VMware are certainly not claiming to have those capabilities today, but we are working very hard to get there! Then again, some of my colleagues would argue that Storage DRS, Storage IO Control, Swap to SSD and vSphere Replication are part of that solution.

Anyway, I enjoyed reading these articles, especially back to back, as some state the exact opposite about what SDS is. Now, as a customer this isn’t necessarily a bad thing, as it offers you choice and various strategic directions to achieve the same goal: reduce operational complexity and increase agility / time-to-market. I have linked all three articles below with a quote from each; just take the time to read them. Let me know which of the three concepts you liked most… or, if you do not agree with any of them, what SDS means to you.

  1. Nutanix – Software-Defined Storage, Our take!
    “A software-defined controller must not use any proprietary hardware. That means no dependence on special-purpose FPGA, ASIC, NVRAM, battery-backup, UPS, modem etc. Use dynamic HTTP-based tunnels instead of modems. Use inexpensive flash instead of ASIC or NVRAM. Use industry standard PCIe passthru if you must bypass the hypervisor.”
  2. NetApp – OK, Sure, We’ll Call it ‘Software-Defined Storage’
    “NetApp has been leading the way with storage virtualization for years. If you go back and look at some of our slide decks, as recently as 2011, we were calling Data ONTAP the “Storage Hypervisor,” but we stopped because, at the end of the day, it’s bigger than that. It’s the Control Plane AND the Data Plane. SVM’s (Vservers) are the virtualized part (i.e. Control Plane) and Data ONTAP’s inner workings, APIs, and inter-node communications, and ability to move data around within itself between nodes across a 10GbE highly-redundant cluster network, with little-to-no loss in performance”
  3. HDS – Software-Defined Storage is not about commodity storage
    “This has led some analysts to predict that storage functions and intelligence will shift to the server and purchasing will shift to commoditized hardware. On the contrary, storage hardware must become more intelligent to include the storage logic, which can be initiated or automated by events or polices that are defined by application or server software. This has already happened with VMware, as evidenced by the company’s motivation to off load server functions to storage systems through APIs like VAAI and VASA in order for VMware to be more efficient in supporting virtual machines. This requires more intelligence in the storage to support these APIs and provide visibility through vCenter.”


** The NetApp article is by Nick Howell on his personal blog and doesn’t necessarily 100% align with NetApp’s vision. Considering Nick works at NetApp in the Tech Marketing team, it probably represents their view. **
