
Yellow Bricks

by Duncan Epping


Storage vMotion

Write-Same vs XCopy when using Storage vMotion

Duncan Epping · Mar 6, 2013 ·

I had a question last week about Storage vMotion and when Write-Same vs XCopy is used. I was confident I knew the answer, but I figured I would do some testing. So what exactly was the question, and what scenario did I test?

Imagine you have a virtual machine with a “lazy zero thick” disk and an “eager zero thick” disk. When initiating a Storage vMotion while preserving the disk format, would the pre-initialized blocks in the “eager zero thick” disk be copied through XCopy, or would “write-same” (aka zero out) be used?

So that is what I tested. I created a virtual machine with two disks: one “lazy zero thick” and about half filled, the other “eager zero thick”. I did a Storage vMotion to a different datastore (keeping the same format as the source) and checked esxtop while the migration was ongoing:

CLONE_WR = 21943
ZERO = 2

In other words, when preserving the disk format the hypervisor issues the “XCopy” command (CLONE_WR). The reason is that when doing a SvMotion and keeping the disk formats the same, the copy command for each chunk is handed to the array without the hypervisor reading the blocks first. The hypervisor therefore doesn’t know that these are “zero” blocks in the “eager zero thick” disk and simply offloads the copy to the array.
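
For completeness, this is roughly how such a format-preserving migration can be kicked off from PowerCLI; a minimal sketch, with placeholder vCenter, VM, and datastore names:

  # Minimal PowerCLI sketch; server, VM, and datastore names are placeholders
  Connect-VIServer -Server "vcenter.lab.local"
  $vm = Get-VM -Name "TestVM"
  # No -DiskStorageFormat specified: the disk format is preserved,
  # which is the path where the array offload (CLONE_WR / XCopy) kicks in
  Move-VM -VM $vm -Datastore (Get-Datastore -Name "Datastore-B")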

Of course it would be interesting to see what happens if, during the migration, I specify that all disks need to become “eager zero thick”; remember, one of the disks was “lazy zero thick”:

CLONE_WR = 21928
ZERO = 35247

It is clear that in this case the blocks are zeroed out (ZERO). As there is a range of blocks which isn’t used by the virtual machine yet, the hypervisor ensures these blocks are zeroed so that they can be used immediately when the virtual machine wants to… as that is what the admin requested: “eager zero thick”, aka pre-zeroed.
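
The same migration with a format conversion looks like this in PowerCLI; again a sketch with placeholder names:

  # Same migration, but convert all disks to eager zeroed thick;
  # blocks not yet written by the VM are zeroed out (ZERO / Write-Same)
  Move-VM -VM (Get-VM -Name "TestVM") `
          -Datastore (Get-Datastore -Name "Datastore-C") `
          -DiskStorageFormat EagerZeroedThick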

For those who want to play around with this, check esxtop and then the VAAI stats. I described how to do that in this article.
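
If you would rather check VAAI support from PowerCLI than from esxtop, something along these lines should work; a sketch using the esxcli pass-through, assuming a PowerCLI version with the -V2 interface and a placeholder host name:

  # Query per-device VAAI support through the esxcli pass-through
  $esxcli = Get-EsxCli -VMHost (Get-VMHost -Name "esxi01.lab.local") -V2
  $esxcli.storage.core.device.vaai.status.get.Invoke() |
      Select-Object Device, CloneStatus, ZeroStatus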

Storage vMotion does not rename files?

Duncan Epping · Jan 25, 2013 ·

A while back I posted that 5.0 U2 re-introduced the renaming behavior for VM file names. I was just informed by our excellent Support Team that unfortunately the release notes missed something crucial: Storage vMotion does not rename files by default. In order to get the renaming behavior you will have to set an advanced setting within vCenter. This is how you do it:

  • Go to “Administration”
  • Click on “vCenter Server Settings”
  • Click “Advanced Settings”
  • Add the key “provisioning.relocate.enableRename” with value “true” and click “add”
  • Restart vCenter service or vCenter Server

Now the renaming of the files during the SvMotion process should work again!
All of you who need this functionality, please make sure to add this advanced setting.
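
For those who prefer PowerCLI, a rough equivalent could look like this; a sketch only, as the exact entity binding and key prefix for vCenter-level advanced settings can differ between PowerCLI versions:

  # Sketch: add the vCenter advanced setting through PowerCLI
  # ($global:DefaultVIServer must be a connection to vCenter itself)
  New-AdvancedSetting -Entity $global:DefaultVIServer `
      -Name "provisioning.relocate.enableRename" -Value "true" -Confirm:$false
  # A restart of the vCenter service is still required afterwards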


Renaming virtual machine files using SvMotion back in 5.0 U2

Duncan Epping · Dec 21, 2012 ·

I have been pushing for this heavily internally, together with Frank Denneman, and it pleases me to say that it is finally back… You can rename your virtual machine files again using Storage vMotion as of 5.0 U2.

vSphere 5 Storage vMotion is unable to rename virtual machine files on completing migration
In vCenter Server, when you rename a virtual machine in the vSphere Client, the vmdk disks are not renamed following a successful Storage vMotion task. When you perform a Storage vMotion of the virtual machine to have its folder and associated files renamed to match the new name, the virtual machine folder name changes, but the virtual machine file names do not change.

This issue is resolved in this release.

src: https://www.vmware.com/support/vsphere5/doc/vsp_vc50_u2_rel_notes.html#resolvedissues

Those who want to know what else is fixed, you can find the full release notes here of both ESXi 5.0 U2 and vCenter 5.0 U2:

  • ESXi – https://www.vmware.com/support/vsphere5/doc/vsp_esxi50_u2_rel_notes.html
  • vCenter – https://www.vmware.com/support/vsphere5/doc/vsp_vc50_u2_rel_notes.html

** do note that this fix is not part of 5.1 yet **

Scripts release for Storage vMotion / HA problem

Duncan Epping · Apr 17, 2012 ·

Last week, when the Storage vMotion / HA problem went public, I asked both William Lam and Alan Renouf if they could write a script to detect the problem. I want to thank both of them for their quick response and turnaround; they cranked the scripts out in literally hours. The scripts were validated multiple times in a VDS environment and worked flawlessly. Note that these scripts can detect the problem in environments using either a regular Distributed vSwitch or a Nexus 1000v, but they can only mitigate the problem in a Distributed vSwitch environment. Here are the links to the scripts:

  • Perl: Identifying & Fixing Virtual Machines Affected By SvMotion / VDS Issue (William Lam)
  • PowerCLI – Identifying and fixing VMs Affected By SvMotion / VDS Issue (Alan Renouf)

Once again thanks guys!

Clarifying the SvMotion / VDS problem

Duncan Epping · Apr 14, 2012 ·

<Update>I asked William Lam if he could write a script to detect this problem and possibly even mitigate it. William worked on it over the weekend and just posted the result! Head over to his blog for the script! Thanks William for cranking it out this quickly! For those who prefer PowerCLI… Alan Renouf just posted his version of the script! Both scripts provide the same functionality though!</Update>

I think there is some confusion around the SvMotion / VDS problem I described a couple of days back. Let me try to clarify it in a couple of simple steps.

First of all, this only applies to virtual machines that have been Storage vMotioned by vCenter 5.0 and are connected to a Distributed vSwitch. The migration could have been initiated either manually or by Storage DRS. So what is the exact problem?

  • When a VM is attached to a dvPortgroup it is connected to a port. This information is stored locally on the host and on the VMFS volume the VM is stored on.
  • This volume contains a file whose name is equal to the port number of this VM.
  • When the VM is Storage vMotioned to a different datastore, this file is not created on the destination datastore.
  • When the host on which the Storage vMotioned VM resides fails, HA will attempt to restart that VM.
  • In order for HA to restart the VM and connect it to the dvPortgroup, this file is required.
  • As the file is not available, the restart fails.

You can simply resolve this by temporarily connecting the impacted VMs to a different dvPortgroup and then reconnecting them to the original portgroup. As soon as you have done that, the file will be created on the datastore. For now this is a manual task, but I am sure some of my team members are working on a scripted solution as we speak… right Alan / William? 🙂
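
A minimal PowerCLI sketch of that manual workaround, assuming a PowerCLI version with the VDS cmdlets and using placeholder names (the scripts by William and Alan mentioned above are the proper way to do this at scale):

  # Flip the NIC to a temporary dvPortgroup and back to force
  # the dvPort file to be recreated on the destination datastore
  $vm   = Get-VM -Name "AffectedVM"
  $nic  = Get-NetworkAdapter -VM $vm
  $orig = Get-VDPortgroup -Name $nic.NetworkName
  $temp = Get-VDPortgroup -Name "Temporary-PG"
  Set-NetworkAdapter -NetworkAdapter $nic -Portgroup $temp -Confirm:$false
  Set-NetworkAdapter -NetworkAdapter $nic -Portgroup $orig -Confirm:$false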

