Last year at VMworld there was a lot of talk about the VMware vStorage APIs for VM and Application Granular Data Management, aka Virtual Volumes, aka VVOL / VVOLs. The video of this session was just posted on YouTube, and I am guessing people will have questions about it after watching it. What I want to do in this article is explain what VMware is trying to solve and how VMware intends to solve it. I have tried to keep this article as close to the information provided during the session as possible. Note that this session was a Technology Preview; in no shape or form has VMware committed to ever delivering this solution, let alone mentioned a timeline. Before we go any further… if you want to hear more and are attending VMworld, sign up for this session by Vijay Ramachandra and Tom Phelan!
INF-STO2223 – Tech Preview: vSphere Integration with Existing Storage Infrastructure
Background
The storage integration effort started with the vSphere API for Array Integration, also known as VAAI. VAAI aimed to offload data operations to the array to reduce the load and overhead on the hypervisor, but more importantly to allow for greater scale and better performance. In vSphere 5.0 the vSphere Storage APIs for Storage Awareness (aka VASA) were introduced, which allow for an out-of-band communications channel to discover storage characteristics. For those who are interested in VASA, I would recommend reading Cormac’s excellent article where he explains what it is and shows how VMware partners have implemented it.
Although these APIs have bridged a huge gap, they do not solve all of the problems customers are facing today.
What is VMware trying to solve?
In general, VMware is trying to increase the agility and flexibility of its storage stack by providing a general framework in which any current and future data operations can be implemented with minimal effort for both VMware and its partners. Customers have asked for a solution which allows them to differentiate services to their customers at a per-application level. Currently, when provisioning LUNs (typically large LUNs), this is impossible.
Another area of improvement is granularity. It is desirable, for instance, to have failover at a per-VM level, or deduplication at a per-VMDK level. This is currently impossible with VMFS. A VMFS volume is usually a single LUN, and data management happens at LUN/volume granularity. In other words, the LUN is the level at which you operate from a storage perspective, but it is shared by many VMDKs or VMs which might each have different requirements.
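To make this mismatch concrete, here is a minimal sketch in plain Python. The classes are hypothetical, my own invention and not any VMware construct; they simply show that with VMFS the policy knob sits at the LUN, not the VMDK:

```python
# Hypothetical illustration only -- not a VMware API.
# One policy per LUN means every VMDK on it inherits that policy.

class Lun:
    def __init__(self, name, policy):
        self.name = name
        self.policy = policy   # applies to the whole LUN, e.g. "replicated"
        self.vmdks = []

    def add_vmdk(self, vmdk_name):
        self.vmdks.append(vmdk_name)

lun = Lun("datastore01", policy="replicated")
lun.add_vmdk("web-vm.vmdk")    # genuinely needs replication
lun.add_vmdk("test-vm.vmdk")   # does not need it, but gets it anyway

for vmdk in lun.vmdks:
    # There is no per-VMDK knob; the LUN's policy is all there is.
    print(f"{vmdk}: {lun.policy}")
```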
As mentioned in last year’s VMworld presentation, the current wish list is:
- Ability to offload to the storage system at a per-VMDK level
- Snapshots, cloning, replication, deduplication, etc.
- A framework where any current or future storage system operation can be leveraged
- No disruption to the existing VM creation workflows
- Highly scalable
These four should maximize the ROI on your hardware investment and reduce the operational effort associated with storage management and virtual machine deployment. They will also allow you to enforce application-level SLAs by specifying policies at a VMDK or VM level instead of at a datastore level. The granularity this will allow for is, in my opinion, the most important part here!
How does VMware intend to solve it?
During the research phase many different options were looked at. Many of these, however, did not take full advantage of the capabilities of the storage system, and they introduced more complexity around data layout. The best way of solving this problem turned out to be leveraging well-known objects… volumes / LUNs.
These objects are referred to as VM Volumes, but also sometimes as vVOLs. A VM Volume is a VMDK (or a derivative of it) stored natively inside a storage system. Each VM Volume will have a representation on the storage system. By creating a volume for each VMDK you can set policies at the lowest possible level. Not only that, the SAN vs NAS debate is over. This does imply, however, that when every VMDK is a storage object there could be thousands of VM Volumes. Will this require a complete redesign of storage systems to allow for this kind of scalability? Just think about the current limit of 256 LUNs per host, for instance. Will this limit the number of VMs per host/cluster?
To solve this potential problem a new concept is introduced, called an “IO De-multiplexer” or “IO Demux”. This is a single device which exists on the storage system and represents a logical I/O channel from the ESXi hosts to the entire storage system. Multipathing and path policies will be defined on a per-IO-Demux basis, which typically means just once. Behind this IO Demux device there could be thousands of VM Volumes.
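To picture how that changes the scaling math, here is a rough sketch of the concept. The class and method names are mine, purely illustrative, and not any VMware API:

```python
# Hypothetical sketch of the IO Demux concept as presented in the session.
# All names here are my own, not a VMware API.

class IoDemux:
    """A single logical I/O channel from the ESXi hosts to a storage system.

    Multipathing and path policies are defined once, per demux, instead of
    per LUN; the demux then routes I/O to the right VM Volume behind it.
    """

    def __init__(self, path_policy):
        self.path_policy = path_policy   # configured once per demux
        self.vm_volumes = {}             # volume id -> I/O handler

    def register(self, volume_id, handler):
        # Thousands of VM Volumes can sit behind one demux without each
        # consuming a host LUN slot (cf. the 256-LUNs-per-host limit).
        self.vm_volumes[volume_id] = handler

    def route_io(self, volume_id, io_request):
        # "De-multiplex": hand the host's I/O to the VM Volume it targets.
        return self.vm_volumes[volume_id](io_request)

demux = IoDemux(path_policy="round-robin")
demux.register("vvol-web-vm", lambda io: f"web-vm volume handled: {io}")
print(demux.route_io("vvol-web-vm", "READ 4K @ LBA 0"))
```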
This, however, introduces a new challenge. Where in the past the storage administrator was in control, now the VM administrator could possibly create hundreds of large disks without ever discussing it with the storage admin. To solve this problem a new concept called Capacity Pools is introduced. A Capacity Pool is an allocation of physical storage space plus a set of allowed services for any part of that storage space. Services could be replication, cloning, backup, etc. Provisioning would be allowed until the set threshold is exceeded. The intention is to allow Capacity Pools to span multiple storage systems and even physical sites.
To allow specific QoS parameters to be set, yet another new concept is introduced, called Profiles. A Profile is a set of QoS parameters (performance and data services) which applies to a VM Volume, or even to a Capacity Pool. The storage administrator can create these profiles and assign them to Capacity Pools, which allows the tenant of such a pool to assign those policies to his VM Volumes.
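Here is a minimal sketch of how Capacity Pools and Profiles could fit together, based purely on the description above. Again, the names and structure are my own assumptions, not a VMware API:

```python
# Hypothetical sketch of Capacity Pools and Profiles as described in the
# session; names and structure are mine, not a VMware API.

class Profile:
    """A set of QoS parameters (performance and data services)."""
    def __init__(self, name, services):
        self.name = name
        self.services = services             # e.g. {"replication", "cloning"}

class CapacityPool:
    """An allocation of physical space plus the services allowed within it."""
    def __init__(self, capacity_gb, allowed_services):
        self.capacity_gb = capacity_gb       # threshold set by the storage admin
        self.allowed_services = allowed_services
        self.used_gb = 0

    def provision_vm_volume(self, size_gb, profile):
        # The VM admin self-provisions, but only within the pool's limits.
        if self.used_gb + size_gb > self.capacity_gb:
            raise ValueError("capacity pool threshold exceeded")
        if not profile.services <= self.allowed_services:
            raise ValueError("profile requests a service this pool does not allow")
        self.used_gb += size_gb
        return f"VM Volume of {size_gb} GB provisioned with profile '{profile.name}'"

# The storage admin defines the pool and its profiles once...
pool = CapacityPool(capacity_gb=1024, allowed_services={"replication", "cloning"})
gold = Profile("gold", services={"replication", "cloning"})

# ...and the tenant consumes it without opening a ticket per disk:
print(pool.provision_vm_volume(100, gold))
```

The point of the threshold check is exactly the governance trade-off discussed above: the storage admin stays in control of the envelope, while the VM admin gets self-service inside it.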
As you can imagine, this shifts responsibilities between teams within the organization; however, it will allow for greater granularity, scale, flexibility and, most importantly, business agility.
Summarizing
Many customers have found it difficult to manage storage in virtualized environments. VMFS volumes typically contain dozens of virtual machines and VMDKs, making differentiation at a per-application level very difficult. VM Volumes will allow for more granular data management by leveraging the strength of the storage system: its volume manager. VM Volumes will simplify data and virtual infrastructure management by shifting responsibilities between teams and removing multiple layers of complexity.
Sean Duffy says
Would the concept of the “IO De-multiplexer” device be a bit of software? If that is the case, I guess it/they would need to provide some kind of fault tolerance if they were handling many VM Volumes.
Interesting read, but I must go and find out more by watching the session!
Paul Sheard says
Great blog post Duncan, looking forward to the VMworld session with Vijay and Tom.
Trevor Roberts Jr says
Great article, Duncan!
I’ll be interested to see if vVols offer any performance improvements over standard VMFS, aside from per-VMDK QoS.
Duncan Epping says
VMFS has little overhead as it is, to be honest. I would suspect that it would decrease the overhead… but I guess that is not the aim of vVOLs. It is all about reducing operational complexity.
Jephtah says
Thanks Duncan!
What is the protocol used between ESX and the storage array? Is it the T10 OSD protocol for object storage?
Shriram says
Does this mean VAAI support is not required anymore, since all the operations are offloaded directly to the storage arrays (SAN or NAS)?
Duncan Epping says
No statements have been made with regards to this. But do note that VVOL will require a certain firmware/code drop on the array itself. So it would be safe to assume that arrays which do not even support VAAI today will also not support VVOL when it is released.
Deepak C Shetty says
1) Today we can exploit sub-LUN array offloads if the array supports them, given that the OS/FS is able to map the vmdisk to the LBA ranges on the LUN. Does this mean that an array supporting vVOLs will implement it using sub-LUN offloads, or is this a totally different implementation? If vVOLs need a totally different implementation, why?
2) Is the I/O Demux nothing but a mapping of which vmdisk (hence vVOL) is mapped to which LBA ranges of the backing LUN, or is there more to it?
3) How does this address the scalability issue? Today, IIUC, array controllers have resources reserved per LUN, hence you cannot scale beyond some maximum limit. How does that get solved with vVOLs, since they are still backed by the traditional LUN? The only advantage I see is that the host can now talk at vVOL granularity instead of LUN granularity, thanks to the I/O Demux doing the magic of the mapping. Is this understanding correct?
memerson says
The concern is allowing one group the complete ability to control everything. The logic is sound, but human nature and too many eggs in one basket are a concern. Personal experience says that most admins will consume all of the space provided to them with minimal checks and balances. If you now give the admin the opportunity to both provide and consume the space without checks and balances, that opens up governance issues. An admin can say “I need another array”, and another, and another, because no proper planning was done, and cause the budget to spiral out of control. It is not a problem of the product per se but a problem regarding process, and it should require checks and balances. The capacity pools are interesting, but they are created by admins, and who owns the whole basket? The admins do, and they can continue to overfill their basket with eggs until the basket breaks. I envision two people owning the keys to the rocket command central, and they have to turn them at the same time :D.