Episode 004 is out! This time we talk to Cody Hosterman, Director of Product Management at Pure Storage, about Virtual Volumes, aka vVols! Cody shares with us the past, present, and future of vVols. I especially enjoyed his explanations of the benefits of vVols for traditional and cloud-native workloads. It is also great to hear that VMware is working with Pure Storage on designing and developing a stretched cluster capability for vVols-based environments. Listen below, or via Apple, Google, Spotify etc.
vSphere FT and vVols/SPBM an unsupported config? Why?
A customer pointed out to me (thanks Johan) that vSphere FT is not supported when using SPBM on non-vSAN based storage systems. You may wonder why this is; at least I did. I figured it would be a testing constraint of some sort, but after emailing product management, engineering, and our quality engineering team, I now understand the reason. Before I explain it: the constraint is documented here, let me quote the relevant section for you:
- Virtual Volume datastores.
- Storage-based policy management. Storage policies are supported for vSAN storage.
So why is this, and why would vSAN be supported when it also uses SPBM? The difference is in the implementation. For vVols there is a dependency on vCenter Server being available when creating new VMs, and that is essentially what happens when an FT instance needs to be restarted: an SPBM policy needs to be associated with the VM, and that policy can only be retrieved via vCenter Server. With vSAN, FT/HA can also retrieve the needed info via the ESXi host. This is why FT and vSAN are a supported configuration, and vVols and FT, unfortunately, are not at the moment. Hopefully this will change in the future. (Yes, I filed a feature request before anyone asks.)
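To make that vCenter dependency concrete, here is a minimal PowerCLI sketch (the server address and VM name are placeholders) that retrieves the SPBM policy associated with a VM. Both cmdlets go through vCenter Server, which is exactly the component that needs to be available in the FT restart scenario described above:

```powershell
# Connect to vCenter Server (address is a placeholder);
# all SPBM cmdlets below go through this connection.
Connect-VIServer -Server "vcenter.lab.local"

# Retrieve the storage policy currently associated with a VM.
$vm = Get-VM -Name "ft-protected-vm"   # placeholder VM name
Get-SpbmEntityConfiguration -VM $vm |
    Select-Object Entity, StoragePolicy, Status
```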
SRM support for VVols coming!
VMworld is coming up, which means it is "announcement season". The first announcement I can share with you is that VVols support for SRM is now officially on the roadmap. This is something Cormac and I have pushed hard for over the past couple of years, and it is great to see it is finally being planned! A post about this was just published on the VMware Virtual Blocks blog, and I think the following piece says it all. Read the blog for more info.
Some of our storage partners such as HP Enterprise 3PAR, HP Enterprise Nimble, and Pure Storage have developed and certified against the latest VVol 2.0 VASA provider specification. VVol 2.0 is part of the vSphere 6.5 release and supports array-based replication with VVol. To support VVol replication operations on these storage arrays, VMware also developed a set of PowerCLI cmdlets so common BC/DR operations such as failover, test failover, and recovery workflows can be scripted as needed. The use of PowerCLI works well for many VVol customers, but we believe many more customers will be able to take advantage of SRM orchestrated BC/DR workflows with VVol.
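The post does not show the cmdlets themselves, but to give a flavor of the scripted approach it describes, here is a rough sketch of a test-failover flow using the SPBM replication cmdlets introduced with PowerCLI 6.5 (the vCenter address and the group-selection logic are placeholders; adapt them to your environment):

```powershell
# Connect to the recovery-site vCenter Server (address is a placeholder).
Connect-VIServer -Server "vcenter-dr.lab.local"

# Find a target-side (recovery) replication group; in practice you would
# filter for the specific group protecting your VMs.
$targetGroup = Get-SpbmReplicationGroup |
    Where-Object { $_.State -eq "Target" } | Select-Object -First 1

# Run a non-disruptive test failover; it returns the VMX paths
# of the test copies of the replicated VMs.
$vmxPaths = Start-SpbmReplicationTestFailover -ReplicationGroup $targetGroup

# Register the test VMs so they can be powered on and validated.
$vmxPaths | ForEach-Object {
    New-VM -VMFilePath $_ -VMHost (Get-VMHost | Select-Object -First 1)
}

# Clean up the test when done.
Stop-SpbmReplicationTestFailover -ReplicationGroup $targetGroup
```

SRM would orchestrate these kinds of steps for you, plus ordering, IP customization, and so on, which is why this announcement matters.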
I can’t wait for this to be made available, and I am sure many VVol customers (and potential customers) will agree with me that this is a highly anticipated feature!
VVols design and procurement considerations
Over the past couple of months I have had more and more discussions with customers and partners about VVols. It seems that Policy Based Management and the granular VVol capabilities are really starting to sink in, and more and more customers are starting to see the benefit of using vSphere as the management plane. The other option, of course, is pre-defining what is enabled on a datastore/LUN level and using spreadsheets and complex naming schemes to determine where a VM should land, which is far from optimal. I am not going to discuss the VVols basics at this point; if you need to know more about that, simply do a search on VVol.
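To show the difference in practice, here is a minimal PowerCLI sketch of policy-driven placement (the policy, VM, and resource pool names are placeholders, and a suitable policy is assumed to exist): instead of consulting a spreadsheet, you ask SPBM which storage is compatible with the policy and then associate the policy with the VM:

```powershell
# Pick an existing storage policy (name is a placeholder).
$policy = Get-SpbmStoragePolicy -Name "Gold"

# Ask SPBM which datastores satisfy the policy,
# instead of looking it up in a spreadsheet.
$datastore = Get-SpbmCompatibleStorage -StoragePolicy $policy |
    Select-Object -First 1

# Create the VM on compatible storage and associate the policy with it.
$vm = New-VM -Name "app01" -Datastore $datastore `
    -ResourcePool (Get-ResourcePool -Name "Resources")
Get-SpbmEntityConfiguration -VM $vm |
    Set-SpbmEntityConfiguration -StoragePolicy $policy
```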
In these discussions a number of things typically come up, all related to design and procurement considerations for VVol-capable storage. VMware provided a framework and an API, and based on this each vendor has developed their own implementation. Implementations vary from vendor to vendor, as not all storage systems are created equal. So what do you have to think about when designing a VVols environment, or when procuring new VVol-capable storage? Below you will find a list of questions to ask, with a short explanation of why each may be important. I will try to add new questions and considerations as I come up with them.
- What level of software is needed for my storage system to support VVol?
In many cases, especially with existing legacy storage systems, a software upgrade is needed to support VVols. Ask:
- What does this upgrade entail?
- What is the risk?
When it is clear what you need to support VVols from a software point of view, ask:
- What are the constraints and limits?
- How many Protocol Endpoints can I have per storage system?
- Do you support all protocols? (FC, NFS, iSCSI etc)
- Is the IO proxied via the Protocol Endpoint? If it is, is there an impact with a large number of VMs? (See the sketch after this list for how to inspect PEs on a host.)
- Some systems distinguish between traffic types, and normal IO will not go through the PE, which means you do not hit any PE limitations (queue depth being one of them)
- How many Storage Pools can you have per storage system?
- In some cases (legacy storage systems) the storage pool equals an existing physical construct on the array; what is it, and what is the impact of this?
- What options do I select during the creation of the pool? Anything you select on a per-pool level means that when you change the policy, VVols may have to migrate to other pools; I prefer to avoid data movement. In some cases, for instance, "replication" is enabled at the storage pool level; I prefer to have this as a policy option
- How many VVols can I have per storage system? (How many VMs do you have, and how many VVols do you expect to have per VM?)
- In some cases, usually with legacy storage systems, the number of VVols per array is limited. I have seen numbers as "low" as 2000; with a minimum of 3 VVols per VM (typically 5), you can imagine this restricts the number of VMs you can run on a single storage system
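To see how these constructs actually surface on a host, here is a small PowerCLI sketch using Get-EsxCli (the host name is a placeholder; the esxcli namespaces shown are the vSphere 6.x ones):

```powershell
# Grab the esxcli interface of a host (name is a placeholder).
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esx01.lab.local") -V2

# List the Protocol Endpoints the host sees.
$esxcli.storage.vvol.protocolendpoint.list.Invoke()

# List the VVol storage containers exposed to the host.
$esxcli.storage.vvol.storagecontainer.list.Invoke()
```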
And then there is the control / management plane:
- How is the VASA (vSphere APIs for Storage Awareness) Provider implemented?
- There are two options here, either it comes as part of the storage system or it is provided as a virtual machine.
- Then, as part of that, there is also the decision around the availability model of the VASA Provider:
- Is it a single instance?
- Active/Standby?
- Active/Active?
- Scale-out?
Note that, as it stands today, the VASA Provider needs to be available in order to power on or create a VM. Hence the availability model is probably of importance, depending on the type of environment you are designing. Also, some prefer to avoid having it implemented on the storage system, as any update means touching the storage system. Others prefer to have it as part of the storage system, as it removes the need for a separate VM that has to be managed and maintained.
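Whichever model your vendor uses, it is worth knowing how to inspect the registration. A minimal PowerCLI sketch (the provider name, URL, and credentials are placeholders):

```powershell
# List the VASA Providers registered with vCenter and their health.
Get-VasaProvider | Select-Object Name, Status, Version, Url

# Register a new provider (name, URL, and credentials are placeholders).
New-VasaProvider -Name "array-vasa" `
    -Url "https://array.lab.local:8443/vasa" `
    -Username "vasa-admin" -Password "VMware1!"
```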
Last but not least, policy capabilities:
- What is exposed through policy? (An example of building a policy from these capabilities follows this list.)
- Availability? (RAID type / number of copies of object)
- QoS?
- Reservations?
- Limits?
- Replication?
- Snapshot (scheduling)?
- Encryption?
- Application type?
- Thin provisioning?
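To make this tangible: once the VASA Provider registers its capabilities, you can build policies out of them with PowerCLI (or the vSphere Client). The capability name below is hypothetical; run Get-SpbmCapability to see what your array actually advertises:

```powershell
# List the capabilities advertised by the registered VASA Providers.
Get-SpbmCapability | Select-Object Name, ValueType

# Build a policy from a capability (the name below is hypothetical).
$cap     = Get-SpbmCapability -Name "com.example.storage.replication"
$rule    = New-SpbmRule -Capability $cap -Value $true
$ruleSet = New-SpbmRuleSet -AllOfRules $rule
New-SpbmStoragePolicy -Name "Gold-Replicated" -AnyOfRuleSets $ruleSet
```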
I hope this helps you have the conversation with your storage vendor, develop your design, or guide the discussion during the procurement process. If anyone has additional considerations, please leave a comment so I can add them to the list where applicable.
Virtually Speaking Podcast episode 32 – VVol 2.0
Just wanted to share the Virtually Speaking Podcast with you; this episode (32) is on the topic of VVol 2.0 and features Pete Flecha, Ben Meadowcroft (PM for VVol), and me. Make sure to listen to it, it has some good info on where VVol is today and where it may be going in the near future!