Yellow Bricks

by Duncan Epping


Punch Zeros!

Duncan Epping · Jul 15, 2011 ·

I was just playing around with vSphere 5.0 and noticed something cool in the ESXi Shell which I hadn't spotted before. I typed a command I used a lot in the past, vmkfstools, and noticed an option called -K. (I've just been informed that 4.1 has it as well; I never noticed it though…)

-K --punchzero
This option deallocates all zeroed-out blocks and leaves only those blocks that were allocated previously and contain valid data. The resulting virtual disk is in thin format.

This is one of those options which many have asked for, as re-"thinning" a disk would normally require a Storage vMotion. Unfortunately, it currently only works when the virtual machine is powered off, but I guess that is just the next hurdle that needs to be taken.
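To make the idea concrete, here is a minimal sketch of what "punching zeros" does conceptually. This is purely illustrative and not the actual ESXi implementation: we scan a flat disk image block by block and keep only the blocks containing non-zero data, so everything else becomes unallocated and the layout ends up thin.

```python
# Conceptual sketch of "vmkfstools -K" (punch zero); illustrative only, not
# the real ESXi code. All-zero blocks are deallocated, leaving a thin disk.

BLOCK_SIZE = 1024 * 1024  # 1 MB, the VMFS-5 unified block size

def punch_zeros(blocks):
    """Given a dict {block_number: bytes}, drop blocks that are all zeroes.

    Only blocks containing valid (non-zero) data remain allocated.
    """
    zero_block = b"\x00" * BLOCK_SIZE
    return {n: data for n, data in blocks.items() if data != zero_block}

# A 4-block "thick" disk where only blocks 0 and 2 hold real data:
thick = {
    0: b"boot" + b"\x00" * (BLOCK_SIZE - 4),
    1: b"\x00" * BLOCK_SIZE,   # zeroed out inside the guest earlier
    2: b"data" + b"\x00" * (BLOCK_SIZE - 4),
    3: b"\x00" * BLOCK_SIZE,
}
thin = punch_zeros(thick)
print(sorted(thin))  # → [0, 2]: only the blocks with valid data survive
```

Note that this is exactly why you would zero out deleted-file space inside the guest first (e.g. with a zeroing tool): the punch only reclaims blocks that actually read back as zeroes.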

vSphere 5.0: Storage vMotion and the Mirror Driver

Duncan Epping · Jul 14, 2011 ·

**disclaimer: this article is an excerpt from our book: vSphere 5 Clustering Technical Deepdive**

There’s a cool and exciting new feature as part of Storage vMotion in vSphere 5.0. This new feature is called Mirror Mode and it enables faster and highly efficient Storage vMotion processes. But what is it exactly, and what does it replace?

Prior to vSphere 5.0 we used a mechanism called Changed Block Tracking (CBT) to track blocks that changed after they had already been copied to the destination, so that they could be copied again in a subsequent iteration. Although CBT was efficient compared to legacy mechanisms (snapshots), the Storage vMotion engineers came up with an even more elegant and efficient solution, which is called Mirror Mode. Mirror Mode does exactly what you would expect it to do: it mirrors the I/O. In other words, when a virtual machine that is being Storage vMotioned writes to disk, the write is committed to both the source and the destination disk. The write is only acknowledged to the virtual machine when both the source and the destination have acknowledged it. Because of this, re-iterative copies are unnecessary and the Storage vMotion process completes faster than ever before.

The questions remain: How does this work? Where does Mirror Mode reside? Is this something that happens inside or outside of the guest? A diagram will make this more obvious.

By leveraging DISKLIB, the Mirror Driver can be enabled for the virtual machine that needs to be Storage vMotioned. Before this driver can be enabled the virtual machine needs to be stunned, and of course unstunned after it has been enabled. The new driver leverages the datamover to do a single-pass block copy of the source disk to the destination disk. Additionally, the Mirror Driver mirrors writes between the two disks. Not only has efficiency increased, but so has migration-time predictability, making it easier to plan migrations. I've seen data where the "down time" associated with the final copy pass was virtually eliminated (from 13 seconds down to 0.22 seconds) in the case of rapidly changing disks, and the migration time went from 2900 seconds down to 1900 seconds. Check this great paper by Ali Mashtizadeh for more details.
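The mirrored write path can be sketched in a few lines. This is a hedged, purely illustrative model (not VMware's actual mirror driver): a guest write is committed to both the source and the destination disk, and is only acknowledged back to the virtual machine once both sides have acknowledged it.

```python
# Illustrative model of the Mirror Mode write path; not VMware's actual code.
# A write is acknowledged to the guest only when BOTH disks have acked it.

class MirroredDisk:
    def __init__(self, source, destination):
        self.source = source            # dict: offset -> data
        self.destination = destination  # dict: offset -> data

    def write(self, offset, data):
        acked_src = self._commit(self.source, offset, data)
        acked_dst = self._commit(self.destination, offset, data)
        # Acknowledge to the guest only when both sides acked the write.
        return acked_src and acked_dst

    @staticmethod
    def _commit(disk, offset, data):
        disk[offset] = data
        return True  # assume the underlying disk acks synchronously

src, dst = {}, {}
mirror = MirroredDisk(src, dst)
acked = mirror.write(0, b"guest write")
# Both copies now hold the write, so it never needs to be re-copied later.
```

Because every in-flight write lands on both sides, the already-copied region of the destination can never go stale, which is what makes the single-pass copy possible.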

The Storage vMotion process is fairly straightforward and not as complex as one might expect.

  1. The virtual machine working directory is copied by VPXA to the destination datastore.
  2. A “shadow” virtual machine is started on the destination datastore using the copied files. The “shadow” virtual machine idles, waiting for the copying of the virtual machine disk file(s) to complete.
  3. Storage vMotion enables the Storage vMotion Mirror driver to mirror writes of already copied blocks to the destination.
  4. In a single pass, a copy of the virtual machine disk file(s) is completed to the target datastore while mirroring I/O.
  5. Storage vMotion invokes a Fast Suspend and Resume of the virtual machine (similar to vMotion) to transfer the running virtual machine over to the idling shadow virtual machine.
  6. After the Fast Suspend and Resume completes, the old home directory and VM disk files are deleted from the source datastore.
    1. It should be noted that the shadow virtual machine is only created when the virtual machine home directory is moved. If it is a "disks-only" Storage vMotion, the virtual machine will simply be stunned and unstunned.
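Steps 3 and 4 above can be sketched as follows. This is a hypothetical simplification: a single-pass datamover copy walks the source disk while any concurrent guest writes are mirrored, so already-copied blocks on the destination are kept up to date and no extra copy passes are needed.

```python
# Simplified sketch of the single-pass copy with write mirroring
# (steps 3-4 above); illustrative only, not the actual datamover.

def storage_vmotion(source, destination, guest_writes):
    """source/destination: lists of blocks.
    guest_writes: list of (pass_index, block, data) tuples, simulating
    writes the guest issues while block `pass_index` is being copied."""
    writes = {i: (blk, data) for i, blk, data in guest_writes}
    for i in range(len(source)):
        destination[i] = source[i]      # datamover: single-pass block copy
        if i in writes:                 # a guest write arrives mid-copy...
            blk, data = writes[i]
            source[blk] = data          # ...committed to the source
            if blk <= i:                # already-copied region:
                destination[blk] = data # mirror it to the destination too
            # blocks not yet copied simply pick it up in the main pass
    return destination

src = [b"a", b"b", b"c", b"d"]
dst = [None] * 4
storage_vmotion(src, dst, guest_writes=[(2, 0, b"A")])  # write hits block 0
assert dst == src  # identical after ONE pass; no re-iterative copies
```

Contrast this with the pre-5.0 CBT approach, where that mid-copy write to block 0 would have dirtied the block and forced another copy iteration.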

Of course I tested it, as I wanted to make sure Mirror Mode was actually enabled when doing a Storage vMotion. I opened up the VM's log files and this is what I dug up:

2011-06-03T07:10:13.934Z| vcpu-0| DISKLIB-LIB   : Opening mirror node /vmfs/devices/svm/ad746a-1100be4-svmmirror
2011-06-03T07:10:47.986Z| vcpu-0| HBACommon: First write on scsi0:0.fileName='/vmfs/volumes/4d884a16-0382fb1e-c6c0-0025b500020d/VM_01/VM_01.vmdk'
2011-06-03T07:10:47.986Z| vcpu-0| DISKLIB-DDB   : "longContentID" = "68f263d7f6fddfebc2a13fb60560e8e7" (was "dcbd5c17ac7e86a46681af33ef8049e5")
2011-06-03T07:10:48.060Z| vcpu-0| DISKLIB-CHAIN : DiskChainUpdateContentID: old=0xef8049e5, new=0x560e8e7 (68f263d7f6fddfebc2a13fb60560e8e7)
2011-06-03T07:11:29.773Z| Worker#0| Disk copy done for scsi1:0.
2011-06-03T07:15:16.218Z| Worker#0| Disk copy done for scsi0:0.
2011-06-03T07:15:16.218Z| Worker#0| SVMotionMirroredMode: Disk copy phase completed

Is that cool or what? One can only imagine what kind of new features can be introduced in the future using this new mirror mode driver. (FT enabled VMs across multiple physical datacenters and storage arrays anyone? Just guessing by the way…)

Thanks!!

Duncan Epping · Jul 13, 2011 ·

** Update: Available now: paperback full | paperback black & white **

I’ve seen a lot of crazy things, but when I clicked the amazon link for our book yesterday I literally jumped up and started cheering… Number 1 in “Computers & Internet”. These are the kind of things that make it all worth it! PS: We asked amazon/createspace to get the printed copy up asap and they are looking into it, as it should have been ready by now.

vSphere 5.0: Profile-Driven Storage, what is it good for?

Duncan Epping · Jul 13, 2011 ·

By now most of you have heard about the new feature called Profile-Driven Storage that will be introduced with vSphere 5.0, but what is it good for? Some of you, depending on the size of the environment, currently have a nice long operational procedure for deploying virtual machines. The procedure usually consists of gathering information about the requirements of the virtual machine’s disks, finding the right datastore to meet these requirements, deploying the virtual machine, and occasionally checking if the virtual machine’s disks are still placed correctly. This is what Profile-Driven Storage aims to solve.

Profile-Driven Storage, referred to in the vCenter UI as VM Storage Profiles, decreases the amount of administration required to properly deploy virtual machines by allowing for the creation of profiles. These profiles typically list storage requirements and can be linked to a virtual machine. I know it all sounds a bit vague, so let me visualize it:

In this scenario a virtual machine requires “Gold Storage”; let’s assume for now that that means RAID-10 and replicated. By linking the profile to this virtual machine it is possible to validate whether the virtual machine is actually located on the right tier of storage. This profile can of course be linked to a virtual machine / virtual disk after it has been provisioned, but more importantly it can be used during the provisioning of the virtual machine to ensure the user picks a datastore (cluster) which is compatible with the requirements! Just check the following screenshot of what that would look like:

Now you might wonder where this storage tier comes from. It is a VM Storage Profile containing storage capabilities provided by:

  • VASA aka vSphere Storage APIs – Storage Awareness
  • User defined capabilities

User-defined capabilities are fairly simple to explain: the profile you create (gold / silver / bronze) will be linked to a user-defined “tag” you set on a datastore. For instance, you could tag a datastore as “RAID-10”. When would you do this? Typically when your storage vendor doesn’t offer a Storage Provider for VASA (yet). That takes us to the second method of selecting storage capabilities for your VM Storage Profile: VASA. VASA is a new API which enables you to see the characteristics of a datastore through vCenter. With characteristics I am referring to things like RAID level, deduplication, replication, etc. You know what, maybe a step-by-step guide makes it clearer:

  • Go to VM Storage Profiles
  • Create a VM Storage Profile
  • Provide a Name
  • Select the correct Capabilities
  • Finish the creation
  • Create a new VM and select the correct VM Storage Profile, note that only 1 datastore is compatible
  • After creation you can easily check if it is compliant or not by going to the VMs summary tab
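The compatibility check behind these steps boils down to simple set matching. Here is a hedged sketch of that idea (the names are illustrative, not the actual vCenter API): a profile lists required capabilities, and a datastore is compatible when it offers all of them, whether those capabilities come from VASA or from user-defined tags.

```python
# Illustrative sketch of Profile-Driven Storage compatibility matching.
# Names are hypothetical; this is NOT the vCenter/VASA API, just the idea.

GOLD_PROFILE = {"RAID-10", "Replicated"}   # requirements in the profile

datastores = {
    "DS-Gold":   {"RAID-10", "Replicated", "Deduplication"},  # via VASA
    "DS-Silver": {"RAID-5", "Replicated"},                     # via VASA
    "DS-Bronze": {"RAID-5"},                                   # user tag
}

def compatible(profile, capabilities):
    # Compatible when every required capability is offered by the datastore.
    return profile <= capabilities  # subset test

matches = [name for name, caps in datastores.items()
           if compatible(GOLD_PROFILE, caps)]
print(matches)  # → ['DS-Gold']: only one datastore satisfies the profile
```

This is also essentially what the post-provisioning compliance check does: re-run the same test against the datastore the VM's disks currently live on.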

A couple of simple initial steps, as you can clearly see, but a huge help when provisioning virtual machines and when validating storage/VM requirements!

vSphere 5.0: What has changed for VMFS?

Duncan Epping · Jul 13, 2011 ·

A lot has changed with vSphere 5.0, and so has one of the most under-appreciated “features”: VMFS. VMFS has been substantially changed, and I wanted to list some of the major changes and express my appreciation for the great work the VMFS team has done!

  • VMFS-5 uses GPT instead of MBR
  • VMFS-5 supports volumes up to 64TB
    • This includes Pass-through RDMs!
  • VMFS-5 uses a Unified Blocksize –> 1MB
  • VMFS-5 uses smaller Sub-Blocks
    • ~30,000 8KB sub-blocks versus ~3,000 64KB sub-blocks with VMFS-3
  • VMFS-5 has support for very small files (1KB)
  • Non-disruptive upgrade from VMFS-3 to VMFS-5
  • ATS locking enhancements (as part of VAAI)

Although some of these enhancements may seem “minor”, I beg to differ. These enhancements and new capabilities will reduce the number of volumes needed in your environment and increase the VM-to-volume density, ultimately leading to less management! Yes, I can hear the skeptics thinking: “do I really want to introduce such a large failure domain? My standard is a 500GB LUN.” Think about it for a second: although that standard might have been valid years ago, it probably isn’t today. The world has changed; recovery times have decreased, disks continue to grow, and locking mechanisms have been improved and can be offloaded through VAAI. Max 10 VMs on a volume? I don’t think so!
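The sub-block change in the list above is easy to illustrate with some quick arithmetic (illustrative only): a small file is rounded up to at least one sub-block, so shrinking the sub-block from 64KB to 8KB cuts the slack space for tiny files by a factor of eight.

```python
# Back-of-envelope arithmetic for the VMFS-5 sub-block change (illustrative).
# Small files consume whole sub-blocks: 64 KB on VMFS-3, 8 KB on VMFS-5.

KB = 1024

def on_disk_size(file_size, sub_block):
    """Space a small file consumes, rounded up to whole sub-blocks."""
    blocks = -(-file_size // sub_block)  # ceiling division
    return blocks * sub_block

tiny_file = 2 * KB                         # e.g. a small descriptor file
vmfs3 = on_disk_size(tiny_file, 64 * KB)   # one 64 KB sub-block
vmfs5 = on_disk_size(tiny_file, 8 * KB)    # one 8 KB sub-block
print(vmfs3 // KB, vmfs5 // KB)            # → 64 8: an 8x reduction in slack
```

And for the very smallest files (around 1KB), VMFS-5 can do even better than a sub-block, as noted in the list above.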

About the Author

Duncan Epping is a Chief Technologist and Distinguished Engineering Architect at Broadcom. Besides writing on Yellow-Bricks, Duncan is the co-author of the vSAN Deep Dive and the vSphere Clustering Deep Dive book series. Duncan is also the host of the Unexplored Territory Podcast.
