
Yellow Bricks

by Duncan Epping


Storage

Time to update your RSS reader / bookmarks…

Duncan Epping · Aug 16, 2012 ·

Time to update your RSS reader / bookmarks… There is a new blog in town. This new blog is owned by one of my personal favorite VMware bloggers, Cormac Hogan.

http://cormachogan.com/

I highly recommend bookmarking his website / adding him to your RSS feed; knowing Cormac, he is going to crank out a lot of material. The first article is a great one: Nimble Storage 2.0 Architecture Updates. Make sure to read it.

VMware Storage APIs for VM and Application Granular Data Management

Duncan Epping · Aug 7, 2012 ·

Last year at VMworld there was a lot of talk about the VMware vStorage APIs for VM and Application Granular Data Management, aka Virtual Volumes, aka VVOL / VVOLs. The video of this session was just posted on YouTube, and I am guessing people will have questions about it after watching it. What I wanted to do in this article is explain what VMware is trying to solve and how VMware intends to solve it. I tried to keep this article as close to the information provided during the session as possible. Note that this session was a Technology Preview; in no shape or form did VMware commit to ever delivering this solution, let alone mention a timeline. Before we go any further… if you want to hear more and are attending VMworld, sign up for this session by Vijay Ramachandra and Tom Phelan!

INF-STO2223 – Tech Preview: vSphere Integration with Existing Storage Infrastructure

Background

The Storage Integration effort started with the vSphere API for Array Integration, also known as VAAI. VAAI was aimed at offloading data operations to the array to reduce the load and overhead on the hypervisor, but more importantly to allow for greater scale and better performance. In vSphere 5.0 the vSphere Storage APIs for Storage Awareness (aka VASA) were introduced, which allow for an out-of-band communications channel to discover storage characteristics. For those who are interested in VASA, I recommend reading Cormac’s excellent article, where he explains what it is and shows how VMware partners have implemented it.

Although these APIs have bridged a huge gap, they do not solve all of the problems customers are facing today.

What is VMware trying to solve?

In general, VMware is trying to increase the agility and flexibility of the VMware storage stack by providing a general framework in which any current and future data operations can be implemented with minimal effort for both VMware and partners. Customers have asked for a solution which allows them to differentiate services to their customers at a per-application level. Currently, when provisioning LUNs, typically large LUNs, this is impossible.

Another area of improvement is granularity. For instance, it is desirable to have failover at a per-VM level, or deduplication at a per-VMDK level. This is currently impossible with VMFS. A VMFS volume is usually a LUN, and data management happens at LUN / volume granularity. In other words, the LUN is the level at which you operate from a storage perspective, but it is shared by many VMDKs or VMs which might have different requirements.

As mentioned in last year’s VMworld presentation, the current wish list is:

  1. Ability to offload to storage system on a per VMDK level
  2. Snapshots / cloning / replication / deduplication etc
  3. A framework where any current or future storage system operation can be leveraged
  4. No disruption to the existing VM creation workflows
  5. Highly scalable

These five items should maximize the ROI on your hardware investment and reduce the operational effort associated with storage management and virtual machine deployment. They will also allow you to enforce application-level SLAs by specifying policies at a VMDK or VM level instead of a datastore level. The granularity this allows for is, in my opinion, the most important part here!

How does VMware intend to solve it?

During the research phase many different options were looked at. Many of these, however, did not take full advantage of the capabilities of the storage system, and they introduced more complexity around data layout. The best way of solving this problem is leveraging well-known objects… volumes / LUNs.

These objects are referred to as VM Volumes, but also sometimes referred to as vVOLs. A VM Volume is a VMDK (or its derivative) stored natively inside a storage system. Each VM Volume will have a representation on the storage system. By creating a volume for each VMDK you can set policies at the lowest possible level. Not only that, the SAN vs NAS debate is over. This does imply, however, that when every VMDK is a storage object there could be thousands of VM Volumes. Will this require a complete redesign of storage systems to allow for this kind of scalability? Just think about the current 256 LUNs per host limit, for instance. Will this limit the number of VMs per host/cluster?

In order to solve this potential problem, a new concept is introduced called an “IO De-multiplexer” or “IO Demux”. This is a single device which exists on a storage system and represents a logical I/O channel from the ESXi hosts to the entire storage system. Multipathing and path policies will be defined on a per-IO Demux basis, which typically would be configured only once. Behind this IO Demux device there could be thousands of VM Volumes.

This however introduces a new challenge. Where in the past the storage administrator was in control, now the VM administrator could possibly create hundreds of large disks without discussing it with the storage admin. To solve this problem, a new concept called Capacity Pools is introduced. A Capacity Pool is an allocation of physical storage space and a set of allowed services for any part of that storage space. Services could be replication, cloning, backup, etc. Provisioning would be allowed until the set threshold is exceeded. The intention is to allow Capacity Pools to span multiple storage systems and even physical sites.

In order to allow specific QoS parameters to be set, another new concept called Profiles is introduced. Profiles are a set of QoS parameters (performance and data services) which apply to a VM Volume, or even a Capacity Pool. The storage administrator can create these profiles and assign them to Capacity Pools, which allows the tenant of a pool to assign these policies to his VM Volumes.
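
Since this was only a technology preview, no public API existed for any of these concepts. Purely as an illustration, the short Python sketch below models how the relationship between Capacity Pools, Profiles and VM Volumes described above could fit together; every class, field and service name here is my own assumption, not VMware's actual design.

from dataclasses import dataclass, field

@dataclass
class Profile:
    """A set of QoS parameters (performance and data services)."""
    name: str
    services: set                 # e.g. {"replication", "cloning"}
    max_latency_ms: int           # illustrative performance parameter

@dataclass
class CapacityPool:
    """An allocation of physical space plus the services allowed within it."""
    capacity_gb: int
    allowed_services: set
    profiles: list = field(default_factory=list)   # profiles offered by the storage admin
    used_gb: int = 0

    def provision_vm_volume(self, size_gb: int, profile: Profile) -> dict:
        # The VM admin may only consume space until the pool threshold is hit...
        if self.used_gb + size_gb > self.capacity_gb:
            raise RuntimeError("Capacity Pool threshold exceeded")
        # ...and may only pick from the profiles offered on this pool.
        if profile not in self.profiles or not profile.services <= self.allowed_services:
            raise RuntimeError("Profile not permitted on this Capacity Pool")
        self.used_gb += size_gb
        return {"type": "VM Volume", "size_gb": size_gb, "profile": profile.name}

# Storage admin defines the pool and its profiles; the VM admin consumes it.
gold = Profile("gold", {"replication"}, max_latency_ms=5)
pool = CapacityPool(capacity_gb=1024, allowed_services={"replication", "cloning"}, profiles=[gold])
vmdk_volume = pool.provision_vm_volume(40, gold)    # one VM Volume per VMDK

In this toy model the storage administrator defines the pool, its threshold and the profiles offered on it, while the VM administrator consumes it one VM Volume per VMDK, which mirrors the shift in responsibilities described below.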

As you can imagine, this shifts responsibilities between teams within the organization; however, it will allow for greater granularity, scale, flexibility and, most importantly, business agility.

Summarizing

Many customers have found it difficult to manage storage in virtualized environments. VMFS volumes typically contain dozens of virtual machines and VMDKs, making differentiation at a per-application level very difficult. VM Volumes will allow for more granular data management by leveraging the strength of the storage system: the volume manager. VM Volumes will simplify data and virtual infrastructure management by shifting responsibilities between teams and removing multiple layers of complexity.

Cool Fling: Reclaiming deadspace from a thin provisioned guest disk

Duncan Epping · Jul 3, 2012 ·

Yesterday a very cool fling was released. This fling allows you to reclaim “dead space” from a thin provisioned guest disk! I have written some articles about dead space reclamation at the VMFS layer, but this is one layer up: the guest itself! Yes, we used to have a workaround for “in-guest” reclamation of blocks by using sdelete and Storage vMotion, but due to the VAAI offloading that vSphere does, this workaround no longer works. Now there is a cool fling released by Faraz Shaikh and Prasanna Aithal that addresses this. To be clear here, sdelete by itself on vSphere doesn’t make your disk thin again… it just zeroes out blocks… GuestReclaim actually also issues a SCSI unmap command to allow the underlying storage to reclaim the dead space! For now, however, this only works for RDMs.

The fling has been tested with Windows 7, Windows XP, Windows Server 2003 and 2008. Note that it requires administrator rights to run. GuestReclaim can reclaim dead space from “simple volumes” and also when full volumes / disks are deleted. Although it is called out in the documentation, I wanted to make sure everyone is clear on this: there needs to be disk space available in order to be able to reclaim disk space!
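
To make the distinction concrete, here is a toy Python illustration of my own (not the fling's implementation): it contrasts zeroing free blocks, which is all sdelete does, with deallocating them the way a SCSI UNMAP hint allows the array to.

def zero_free_blocks(disk: dict, free_blocks: list) -> None:
    """What an in-guest zeroing tool such as sdelete effectively does."""
    for lba in free_blocks:
        disk[lba] = b"\x00" * 512      # blocks are zeroed, but still allocated

def unmap_free_blocks(disk: dict, free_blocks: list) -> None:
    """The extra step GuestReclaim adds conceptually: hint to the backing
    storage that the blocks are free so it can return the space to the pool."""
    for lba in free_blocks:
        disk.pop(lba, None)            # block no longer backed by physical space

# Toy "thin" disk backed only by the blocks that were actually written.
thin_disk = {0: b"data" * 128, 1: b"old " * 128, 2: b"temp" * 128}
deleted_file_blocks = [1, 2]

zero_free_blocks(thin_disk, deleted_file_blocks)
print(len(thin_disk))                  # 3 -> still consuming space on the array

unmap_free_blocks(thin_disk, deleted_file_blocks)
print(len(thin_disk))                  # 1 -> space reclaimed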

I suggest you head over to the Fling page, download it, and give it a spin in your test environment. More details about the fling itself and some common Q&A can be found in this Doc.

If I have the time I will definitely give it a spin in my lab in the upcoming week.

vSphere Metro Storage Cluster white paper released!

Duncan Epping · May 23, 2012 ·

I wanted to point you guys to a white paper that I have worked on for the last few months. This white paper was written in collaboration with Lee Dilworth, Ken Werneburg, Frank Denneman and Stuart Hardman. Thanks guys for taking time out of your busy schedules to work with me on this project! This white paper is about vSphere Metro Storage Cluster solutions (aka stretched clusters) and specifically looks at things from a VMware perspective. Enjoy!

  • VMware vSphere Metro Storage Cluster (VMware vMSC) is a new configuration within the VMware Hardware Compatibility List. This type of configuration is commonly referred to as a stretched storage cluster or metro storage cluster. It is implemented in environments where disaster/downtime avoidance is a key requirement. This case study was developed to provide additional insight and information regarding operation of a VMware vMSC infrastructure in conjunction with VMware vSphere. This paper will explain how vSphere handles specific failure scenarios and will discuss various design considerations and operational procedures.
    http://www.vmware.com/resources/techresources/10284

An introduction to Storage DRS

Duncan Epping · May 22, 2012 ·

Today someone asked for a Storage DRS intro; I wrote one for our book a year ago and figured I would share it with the world. I still feel that Storage DRS is one of the coolest features in vSphere 5.0, and I think that everyone should be using it! I know there are some caveats (1, 2) when you are using specific array functionality or, for instance, SRM, but nevertheless… this is one of those features that will make an admin’s life that much easier! If you are not using it today, I highly suggest evaluating this cool feature.

*** excerpt from the vSphere 5.0 Clustering Deepdive ***

vSphere 5.0 introduces many great new features, but everyone will probably agree with us that vSphere Storage DRS is the most exciting new feature. vSphere Storage DRS helps resolve some of the operational challenges associated with virtual machine provisioning, migration and cloning. Historically, monitoring datastore capacity and I/O load has proven to be very difficult. As a result, it is often neglected, leading to hot spots and over- or underutilized datastores. Storage I/O Control (SIOC) in vSphere 4.1 solved part of this problem by introducing a datastore-wide disk scheduler that allows for allocation of I/O resources to virtual machines based on their respective shares during times of contention.
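
As a rough, hypothetical illustration of what “allocation of I/O resources based on shares” means, the Python snippet below divides a contended datastore’s throughput proportionally to shares. This is a simplification of my own; the real SIOC scheduler enforces fairness by adjusting per-host device queue depths rather than handing out IOPS directly.

def allocate_iops(available_iops: int, vm_shares: dict) -> dict:
    """Divide the available IOPS proportionally to each VM's shares."""
    total_shares = sum(vm_shares.values())
    return {vm: available_iops * shares // total_shares
            for vm, shares in vm_shares.items()}

# During contention a VM with twice the shares gets roughly twice the throughput.
print(allocate_iops(3000, {"vm-normal": 1000, "vm-high": 2000}))
# {'vm-normal': 1000, 'vm-high': 2000}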

Storage DRS (SDRS) brings this to a whole new level by providing smart virtual machine placement and load balancing mechanisms based on space and I/O capacity. In other words, where SIOC reactively throttles hosts and virtual machines to ensure fairness, SDRS proactively makes recommendations to prevent imbalances from both a space utilization and latency perspective. More simply, SDRS does for storage what DRS does for compute resources.

There are five key features that SDRS offers:

  • Resource aggregation
  • Initial Placement
  • Load Balancing
  • Datastore Maintenance Mode
  • Affinity Rules

Resource aggregation enables grouping of multiple datastores into a single, flexible pool of storage called a Datastore Cluster. Administrators can dynamically populate Datastore Clusters with datastores. The flexibility of separating the physical from the logical greatly simplifies storage management by allowing datastores to be efficiently and dynamically added or removed from a Datastore Cluster to deal with maintenance or out-of-space conditions. The load balancer will take care of initial placement as well as future migrations based on actual workload measurements and space utilization.

The goal of Initial Placement is to speed up the provisioning process by automating the selection of an individual datastore and leaving the user with the much smaller-scale decision of selecting a Datastore Cluster. SDRS selects a particular datastore within a Datastore Cluster based on space utilization and I/O capacity. In an environment with multiple seemingly identical datastores, initial placement can be a difficult and time-consuming task for the administrator. Not only will the datastore with the most available disk space need to be identified, but it is also crucial to ensure that the addition of this new virtual machine does not result in I/O bottlenecks. SDRS takes care of all of this and substantially lowers the amount of operational effort required to provision virtual machines; that is the true value of SDRS.
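
To show the kind of decision Initial Placement takes off the administrator’s plate, here is a simplified Python heuristic of my own; it is not SDRS’s actual algorithm, just an illustration of weighing free space and observed latency when picking a datastore within the cluster.

def pick_datastore(datastores: list, vmdk_size_gb: int) -> dict:
    """Pick the datastore that, after placement, has the best combined
    space-utilization and latency score (purely illustrative scoring)."""
    candidates = [ds for ds in datastores if ds["free_gb"] >= vmdk_size_gb]
    if not candidates:
        raise RuntimeError("No datastore in the cluster can fit this VMDK")
    def score(ds):
        used_fraction = 1 - (ds["free_gb"] - vmdk_size_gb) / ds["capacity_gb"]
        return used_fraction + ds["latency_ms"] / 15.0   # 15 ms = default I/O threshold
    return min(candidates, key=score)

datastore_cluster = [
    {"name": "ds01", "capacity_gb": 1024, "free_gb": 300, "latency_ms": 4},
    {"name": "ds02", "capacity_gb": 1024, "free_gb": 500, "latency_ms": 12},
    {"name": "ds03", "capacity_gb": 1024, "free_gb": 450, "latency_ms": 3},
]
print(pick_datastore(datastore_cluster, vmdk_size_gb=100)["name"])   # ds03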

However, it is probably safe to assume that many of you are most excited about the load balancing capabilities SDRS offers. SDRS can operate in two distinct modes: No Automation (manual mode) or Fully Automated. Where initial placement reduces complexity in the provisioning process, load balancing addresses imbalances within a datastore cluster. Prior to vSphere 5.0, placement of virtual machines was often based on current space consumption or the number of virtual machines on each datastore. I/O capacity monitoring and space utilization trending was often regarded as too time consuming. Over the years, we have seen this lead to performance problems in many environments, and in some cases even result in downtime because a datastore ran out of space. SDRS load balancing helps prevent these unfortunately common scenarios by making placement recommendations based on both space utilization and I/O capacity when the configured thresholds are exceeded. Depending on the selected automation level, these recommendations will be applied automatically by SDRS or will need to be applied by the administrator.

Although we see load balancing as a single feature of SDRS, it actually consists of two separately configurable options. When either of the configured thresholds for Utilized Space (80% by default) or I/O Latency (15 milliseconds by default) is exceeded, SDRS will make recommendations to prevent problems and resolve the imbalance in the datastore cluster. I/O capacity load balancing can even be explicitly disabled.
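
The two thresholds mentioned above are easy to picture as a simple check; the sketch below is a hypothetical illustration of that idea (the recommendation logic is invented, not the actual SDRS load balancer), including the option to leave I/O load balancing disabled.

SPACE_THRESHOLD_PCT = 80        # default Utilized Space threshold
LATENCY_THRESHOLD_MS = 15       # default I/O Latency threshold

def recommendations(datastores: list, io_balancing_enabled: bool = True) -> list:
    recs = []
    for ds in datastores:
        if ds["used_pct"] > SPACE_THRESHOLD_PCT:
            recs.append(f"{ds['name']}: space threshold exceeded, migrate VMDKs off")
        if io_balancing_enabled and ds["latency_ms"] > LATENCY_THRESHOLD_MS:
            recs.append(f"{ds['name']}: latency threshold exceeded, rebalance I/O load")
    return recs

cluster = [{"name": "ds01", "used_pct": 86, "latency_ms": 9},
           {"name": "ds02", "used_pct": 55, "latency_ms": 22}]
print(recommendations(cluster))
# In Fully Automated mode SDRS applies such recommendations itself;
# in manual mode the administrator reviews and applies them.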

Before anyone forgets, SDRS can be enabled on fully populated datastores and environments. It is also possible to add fully populated datastores to existing datastore clusters. It is a great way to solve actual or potential bottlenecks in any environment with minimal required effort or risk.

Datastore Maintenance Mode is one of those features that you will typically not use often, but you will appreciate it when you need it. Datastore Maintenance Mode can be compared to Host Maintenance Mode: when a datastore is placed in Maintenance Mode, all registered virtual machines on that datastore are migrated to the other datastores in the datastore cluster. Typical use cases are data migration to a new storage array or maintenance on a LUN, such as migration to another RAID group.

Affinity Rules enable control over which virtual disks should or should not be placed on the same datastore within a datastore cluster in accordance with your best practices and/or availability requirements. By default, a virtual machine’s virtual disks are kept together on the same datastore.
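
As a final illustration, the small check below sketches what a VMDK anti-affinity rule boils down to; again, this is my own simplified example, not how SDRS implements rules internally.

def violates_anti_affinity(placement: dict, anti_affinity_groups: list) -> bool:
    """Return True if any two virtual disks that a rule says must be kept
    apart end up on the same datastore (illustrative check only)."""
    for group in anti_affinity_groups:
        datastores = [placement[vmdk] for vmdk in group]
        if len(datastores) != len(set(datastores)):
            return True
    return False

# By default a VM's disks stay together; an anti-affinity rule overrides that,
# e.g. to keep database data and log disks on separate datastores.
placement = {"db-data.vmdk": "ds01", "db-log.vmdk": "ds01"}
print(violates_anti_affinity(placement, [["db-data.vmdk", "db-log.vmdk"]]))   # True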

For those who want more details, Frank Denneman wrote an excellent series about Datastore Clusters which might interest you:

Part 1: Architecture and design of datastore clusters.
Part 2: Partially connected datastore clusters.
Part 3: Impact of load balancing on datastore cluster configuration.
Part 4: Storage DRS and Multi-extents datastores.
Part 5: Connecting multiple DRS clusters to a single Storage DRS datastore cluster.
Part 6: Aggregating datastores from multiple storage arrays into one Storage DRS datastore cluster.

Some other articles that might be of use:

  • SDRS and Auto-Tiering solutions – The Injector (Duncan)
  • Storage DRS Load Balance Frequency (Frank)
  • SDRS Out-Of-Space avoidance (Frank)
  • Storage vMotion and the mirror-mode driver (Duncan)

The following video will give an overview of the above-mentioned features… worth checking out.

