Yellow Bricks

by Duncan Epping

Write-Same vs XCopy when using Storage vMotion

Duncan Epping · Mar 6, 2013 ·

I had a question last week about Storage vMotion and when Write-Same vs XCopy is used. I was confident I knew the answer, but I figured I would do some testing. So what exactly was the question, and what scenario did I test?

Imagine you have a virtual machine with a “lazy zero thick disk” and an “eager zero thick” disk. When initiating a Storage vMotion while preserving the disk format, would the pre-initialized blocks in the “eager zero thick” disk be copied through XCopy or would “write-same” (aka zero out) be used?

So that is what I tested. I created a virtual machine with two disks: one lazy zero thick and about half filled, the other eager zero thick. I did a Storage vMotion to a different datastore (preserving the source format) and checked esxtop while the migration was ongoing:

CLONE_WR = 21943
ZERO = 2

In other words, when preserving the disk format, the XCopy command (CLONE_WR) is issued by the hypervisor. The reason is that when doing a Storage vMotion and keeping the disk format the same, the copy command is issued for a chunk without the hypervisor reading the blocks first. The hypervisor therefore doesn’t know that these are zeroed blocks in the “eager zero thick” disk and simply offloads the copy to the array.

Of course it would be interesting to see what happens if I specify during the migration that all disks need to become “eager zero thick”; remember, one of the disks was “lazy zero thick”:

CLONE_WR = 21928
ZERO = 35247

It is clear that in this case the blocks are zeroed out (ZERO). As there is a range of blocks not yet used by the virtual machine, the hypervisor ensures these blocks are zeroed so they can be used immediately when the virtual machine needs them… which is exactly what the admin requested: “eager zero thick”, aka pre-zeroed.

For those who want to play around with this, check esxtop and the VAAI stats; I described how to do that in this article.
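For those who prefer to script the test itself, below is a rough pyVmomi sketch of the two migrations described above, so you can watch the CLONE_WR and ZERO counters while it runs. This is my own illustrative sketch, not a tested tool; the vCenter address, credentials, VM name and datastore name are placeholders, so adjust before use.

# Rough sketch of the two tests above with pyVmomi: a Storage vMotion that
# preserves the disk format, and one that converts all disks to eager zero
# thick. vCenter address, credentials, VM and datastore names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    # Walk the inventory and return the first object of this type with this name.
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

vm = find_by_name(vim.VirtualMachine, "testvm")
target_ds = find_by_name(vim.Datastore, "datastore-target")

# Test 1: preserve the disk format. Only the target datastore is specified, so
# the array is asked to XCopy (CLONE_WR) the eager zero thick disk as well.
same_format_spec = vim.vm.RelocateSpec(datastore=target_ds)

# Test 2: convert to eager zero thick during the migration. One DiskLocator per
# disk with an eagerlyScrub backing; unused blocks then show up as ZERO in esxtop.
ezt_spec = vim.vm.RelocateSpec(datastore=target_ds)
for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualDisk):
        backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
            fileName="", diskMode="persistent",
            thinProvisioned=False, eagerlyScrub=True)
        ezt_spec.disk.append(vim.vm.RelocateSpec.DiskLocator(
            diskId=dev.key, datastore=target_ds, diskBackingInfo=backing))

task = vm.RelocateVM_Task(same_format_spec)  # use ezt_spec for the second test
# Wait for the task and watch CLONE_WR / ZERO in esxtop's VAAI stats meanwhile.
Disconnect(si)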

How to disable Datastore Heartbeating

Duncan Epping · Feb 25, 2013 ·

I have had this question multiple times now: how do I disable datastore heartbeating? Personally, I don’t know why you would ever want to do this… but as multiple people have asked, I figured I would write it down. Unfortunately there is no “disable” button, but there is a work-around. Below are the steps you need to take to disable datastore heartbeating.

vSphere Client:

  • Right-click the cluster object
  • Click “Edit Settings”
  • Click “Datastore Heartbeating”
  • Click “Select only from my preferred datastores”
  • Do not select any datastores

Web Client:

  • Click the cluster object
  • Click the “Manage” tab
  • Click “vSphere HA”
  • Click the “Edit” button on the right side
  • Click “Datastore Heartbeating”
  • Click “Select only from my preferred datastores”
  • Do not select any datastores

It is as simple as that… However, let me stress that this is not something I would recommend doing. Only do it when you are troubleshooting and need it disabled for whatever reason, and please make sure to enable it again when you are done.
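For those who want to script this work-around, something along these lines should do the same via pyVmomi. Treat it as a rough sketch rather than a tested script; the vCenter address, credentials and cluster name are placeholders.

# Rough sketch: apply the same work-around programmatically by setting the
# heartbeat datastore policy to "user selected datastores only" and selecting
# none. vCenter address, credentials and cluster name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Cluster-01")
view.Destroy()

das = vim.cluster.DasConfigInfo()
das.hBDatastoreCandidatePolicy = "userSelectedDs"  # "Select only from my preferred datastores"
das.heartbeatDatastore = []                        # ... and do not select any datastores

spec = vim.cluster.ConfigSpecEx(dasConfig=das)
task = cluster.ReconfigureComputeResource_Task(spec, modify=True)
# Wait for the task to complete; to re-enable heartbeating, set the policy back
# to "allFeasibleDsWithUserPreference" (the default).
Disconnect(si)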

vSphere HA 5.x restart attempt timing

Duncan Epping · Feb 18, 2013 ·

I wrote about how vSphere HA 5.x restart attempt timing works a long time ago, but there still appears to be some confusion about it. I figured I would clarify it a bit more; I don’t think I can make it simpler than this:

  • Initial restart attempt
  • If the initial attempt failed, a restart will be retried 2 minutes after the previous attempt
  • If that attempt failed, a restart will be retried 4 minutes after the previous attempt
  • If that attempt failed, a restart will be retried 8 minutes after the previous attempt
  • If that attempt failed, a restart will be retried 16 minutes after the previous attempt

After the fifth failed attempt the cycle ends. Well, that is, unless a new master host is selected (for whatever reason) between the first and the fifth attempt. In that case we start counting again, meaning that if a new master is selected after attempt 3, the new master will start with the “initial restart attempt”.
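To make the math explicit, here is a quick Python snippet (my own, just for illustration) that prints the resulting timeline, assuming no master re-election in between:

delays = [2, 4, 8, 16]  # minutes between consecutive restart attempts
elapsed = 0
print("attempt 1 at T+0 minutes (initial restart attempt)")
for attempt, delay in enumerate(delays, start=2):
    elapsed += delay
    print("attempt %d at T+%d minutes" % (attempt, elapsed))
# Prints attempts at T+0, T+2, T+6, T+14 and T+30 minutes; a master
# re-election in between resets the sequence to the initial attempt.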

Or as Frank Denneman would say:

[Image: vSphere HA 5.x restart attempt timing]

** Disclaimer: This article contains references to the words master and/or slave. I recognize these as exclusionary words. The words are used in this article for consistency because they are currently the words that appear in the software, in the UI, and in the log files. When the software is updated to remove the words, this article will be updated to be in alignment. **

Do I still need to set “HaltingIdleMsecPenalty” with vSphere 5.x?

Duncan Epping · Feb 4, 2013 ·

I received a question last week from a customer. They have a fairly big VDI environment and are researching the migration to vSphere 5.1. One of the changes they made in the 4.1 time frame was the advanced setting “HaltingIdleMsecPenalty”, in order to optimize hyper-threading fairness for their specific desktop environment. I knew that this was no longer needed but didn’t have an official reference for them (there is a blog post by Tech Marketing performance guru Mark A. that mentions it, though). Today I noticed it was mentioned in a recently released whitepaper titled “The CPU Scheduler in VMware vSphere 5.1”. I recommend everyone read this whitepaper, as it gives you a better understanding of how the scheduler works and how it has been improved over time.

The following section is an excerpt from that whitepaper.

Improvement in Hyper-Threading Utilization

In vSphere 4.1, a strict fairness enforcement policy on HT systems might not allow achieving full utilization of all logical processors in a situation described in KB article 1020233 [5]. This KB also provides a work-around based on an advanced ESX host attribute, “HaltingIdleMsecPenalty”. While such a situation should be rare, a recent change in the HT fairness policy described in “Policy on Hyper-Threading,” obviates the need for the work-around. Figure 8 illustrates the effectiveness of the new HT fairness policy for VDI workloads. In the experiments, the number of VDI users without violating the quality of service (QoS) requirement is measured on vSphere 4.1, vSphere 4.1 with “HaltingIdleMsecPenalty” tuning applied, and vSphere 5.1. Without the tuning, vSphere 4.1 supports 10% fewer users. On vSphere 5.1 with the default setting, it slightly exceeds the tuned performance of vSphere 4.1.
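If you want to check whether the old work-around is still configured on your hosts before you upgrade, a pyVmomi sketch along these lines could help. Note that this is my own rough example and the option key “Cpu.HaltingIdleMsecPenalty” is an assumption based on the setting’s name; verify the exact key against KB article 1020233 before relying on it.

# Rough sketch: list the current value of the legacy advanced setting on every
# host. The option key below is an assumption based on the setting's name;
# verify it against KB 1020233. vCenter address and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.lab.local", user="administrator",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    opt_mgr = host.configManager.advancedOption
    try:
        for opt in opt_mgr.QueryOptions("Cpu.HaltingIdleMsecPenalty"):
            print("%s: %s = %s" % (host.name, opt.key, opt.value))
    except vim.fault.InvalidName:
        print("%s: option not present on this host" % host.name)
view.Destroy()
Disconnect(si)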

Disk.SchedulerWithReservation aka mClock

Duncan Epping · Jan 23, 2013 ·

A long time ago, when playing around in my lab with vSphere 5.1, I stumbled across an advanced setting called Disk.SchedulerWithReservation. I started digging to see what it was about and what I could do with it… if I could do anything with it at all.

The description was kind of vague, but it revealed what this disk scheduler was: it mentioned “mClock”. For those who don’t collect academic papers for night-time reading like me, mClock is a new disk scheduler that is being researched by VMware and partners. In contrast to the current scheduler, SFQ, it will allow you to do some more advanced things.

For instance, mClock will allow you to set an IOPS reservation on a virtual machine. In other words, when you have a virtual machine that needs 500 IOPS guaranteed, you will be able to do so with mClock. Now, I have been digging and asking around, and unfortunately this logic to set reservations has not been implemented in 5.1.

If you are interested in mClock and its benefits, I would recommend reading this academic paper by my colleague Ajay Gulati (one of the leads on DRS, Storage DRS, and SIOC). I find it very interesting and hope it will be fully available sometime soon. And before you ask: no, I don’t know when or even if this will ever be available.
