
Yellow Bricks

by Duncan Epping



vSphere 5.0: ESXCLI

Duncan Epping · Jul 18, 2011 ·

Many of us have been logging in to the ESX console for ages and have (ab)used the esxcfg-* commands on a regular basis. We’ve used them in scripts and during troubleshooting and all of that is about to change… vSphere 5.0 introduces a new command line utility: esxcli.

Some of you will say, “Hey, esxcli was already available before 5.0”, and yes, you are correct: it was around, however it has been completely revamped. It feels different… it is different, hence I said “new”. A unified command line is most definitely the direction we are heading in, and as such it is of utmost importance that you get familiar with it. Although the esxcfg-* commands are still available, they have been deprecated and are no longer supported.

What has changed? Very simply, many new namespaces have been introduced, and the namespaces that were already there moved up a layer to allow for a more scalable and flexible tool. Under the “root” of esxcli you will find namespaces such as network, storage and system, all of which appear in the examples below.

So how are these constructed?

esxcli [dispatcher options] <namespace> [<namespace> …] <cmd> [cmd options]

With dispatcher options we are referring to, for instance, the ability to connect to a remote host, possibly with a different username/password. Namespace is mentioned twice because namespaces can be nested, using what I would like to call a drill-down approach. The <cmd> is the command executed against that namespace, such as “get”, “list” or “set”, and the cmd options are its arguments.
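To make the anatomy concrete, here is a small sketch that assembles an invocation from its parts; the host name and credentials are hypothetical examples, not from this article:

```shell
# Assemble an esxcli invocation from its parts (host/credentials are made up).
dispatcher_opts="--server=esxi01.lab.local --username=root"  # connect to a remote host
namespaces="network ip interface"                            # nested, drill-down namespaces
cmd="list"                                                   # command run against the namespace

full_cmd="esxcli $dispatcher_opts $namespaces $cmd"
echo "$full_cmd"
```

When running locally on the ESXi Shell you would simply drop the dispatcher options.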

I guess most namespaces actually make a lot of sense. Let's give a couple of examples to show the power of esxcli:

  • Add a portgroup to a vSwitch -> esxcli network vswitch standard portgroup add --portgroup-name=<portgroup> --vswitch-name=<vSwitch>
  • List all storage devices -> esxcli storage nmp device list
  • Add a dns-server -> esxcli network ip dns server add --server=<dns server name or ip>
  • Add an nfs-share -> esxcli storage nfs add --host=<host_name> --share=<share_name> --volume-name=<volume_name>
  • Change MTU of a vmkernel interface -> esxcli network ip interface set -m <mtu size> -i <interface_name>

It is all fairly straightforward, as you've seen, but I have found myself lost in the trenches of esxcli a couple of times already. If this happens to you, remember that you can also list all namespaces and commands very simply by doing the following:

  • esxcli esxcli command list

For more detailed and in-depth info check this excellent article by William Lam.

vSphere 5.0: UNMAP (vaai feature)

Duncan Epping · Jul 15, 2011 ·

With vSphere 5.0, a brand new primitive has been introduced called Dead Space Reclamation, part of the overall thin provisioning primitive. Dead Space Reclamation is also sometimes referred to as unmap, and it enables you to reclaim blocks on thin-provisioned LUNs by telling the array that specific blocks are obsolete; and yes, the command used is the SCSI UNMAP command.

Now you might wonder when you would need this, but think about it for a second… what happens when you enable Storage DRS? Indeed, virtual machines might be moved around. When a virtual machine is migrated from a thin-provisioned LUN to a different LUN, you probably would like to reclaim the blocks that were originally allocated by the array to this volume, as they are no longer needed on the source LUN. That is what unmap does. And of course this applies not only when a virtual machine is Storage vMotioned, but also when a virtual machine or, for instance, a virtual disk is deleted. One thing I need to point out is that this is about unmapping blocks associated with a VMFS volume; if you delete files within a VMDK, those blocks will not be unmapped!

When playing around with this, I had a question from one of my colleagues: he did not have the need to unmap blocks from these thin-provisioned LUNs, so he asked if you could disable it, and yes you can (an int-value of 0 disables the primitive, 1 enables it):

esxcli system settings advanced set --int-value 0 --option /VMFS3/EnableBlockDelete
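A small sketch of my own (not from the article) that wraps the call so you can preview the exact command without touching a host; on a real ESXi host you can read the current value back with "esxcli system settings advanced list --option /VMFS3/EnableBlockDelete":

```shell
# Sketch: toggle the UNMAP primitive; pass 0 to disable it, 1 to enable it.
# Set DRY_RUN=1 to print the command instead of executing it on an ESXi host.
set_block_delete() {
  cmd="esxcli system settings advanced set --int-value $1 --option /VMFS3/EnableBlockDelete"
  if [ -n "$DRY_RUN" ]; then
    echo "$cmd"
  else
    $cmd
  fi
}

DRY_RUN=1 set_block_delete 0
```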

The cool thing is that it works not only with net-new VMFS-5 volumes, but also with VMFS-3 volumes that have been upgraded to VMFS-5:

  1. Open the command line and go to the folder of the datastore:
    cd /vmfs/volumes/datastore_name
  2. Reclaim a percentage of free capacity on the VMFS5 datastore for the thin-provisioned device by running:
    vmkfstools -y <value>

The value should be between 0 and 100, with 60 being the maximum recommended value. I ran it on a thin-provisioned LUN with 60% as the percentage to reclaim. Unfortunately, I didn't have access to the back-end of the array, so I could not validate whether any disk space was actually reclaimed.

/vmfs/volumes/4ddea74d-5a6eb3bc-f95e-0025b5000217 # vmkfstools -y 60
Attempting to reclaim 60% of free capacity 3.9 TB on VMFS-5 file system 'tm-pod04-sas600-sp-4t'.
Done.
/vmfs/volumes/4ddea74d-5a6eb3bc-f95e-0025b5000217 #

Storage DRS interoperability

Duncan Epping · Jul 15, 2011 ·

I was asked about this a couple of times over the last few days, so I figured it might be an interesting topic. This is also described in our book, in the Datastore Cluster chapter, but I decided to rewrite it and put some of it into a table to make it easier to digest. Let's start off with the table and then explain the why/where/what… Keep in mind that this is my opinion and not necessarily the best practice or recommendation of your storage vendor. When you implement Storage DRS, make sure to validate this against their recommendations. I have marked the areas where I feel caution needs to be taken with (*).

Capability          Mode         Space     I/O Metric
Thin Provisioning   Manual       Yes (*)   Yes
Deduplication       Manual       Yes (*)   Yes
Replication         Manual (*)   Yes       Yes
Auto-tiering        Manual       Yes       No (*)

Yes, you are reading that correctly: Storage DRS can be enabled with all of them, and even with the I/O metric enabled, except for auto-tiering. Although I listed “Manual” for all of them, I believe that in some of these cases Fully Automated mode would be perfectly fine. As it will of course depend on the environment, I would suggest starting out in Manual mode if any of these four storage capabilities are used, to see what the impact is after applying a recommendation.

First of all, “Manual Mode”… what is it? Manual Mode basically means that Storage DRS will make recommendations when the configured thresholds for latency or space utilization have been exceeded. It will also provide recommendations for placement during the provisioning process of a virtual machine or a virtual disk. In other words, when setting Storage DRS to manual you will still benefit from it, as it will monitor your environment for you and, based on that, recommend where to place or migrate virtual disks.

In the case of Thin Provisioning I would like to expand a bit. Before migrating virtual machines, I would recommend verifying that the “dead space” left behind on the source datastore after the migration can be reclaimed with the unmap primitive that is part of VAAI.

Deduplication is a difficult one. The question is: will the deduplication process be as efficient after the migration as it was before? Will it be able to deduplicate the same amount of data? There is always a chance that this is not the case… But then again, do you really care all that much when you are running out of disk space on your datastore or are exceeding your latency threshold? Those are very valid reasons to move a virtual disk, as both can lead to degradation of service.

In an environment where replication is used, care should be taken when balancing recommendations are applied. The reason is that the full virtual disk that is migrated will need to be replicated again after the migration. This temporarily leads to an “unprotected state”, and as such it is recommended to only migrate virtual disks which are protected during scheduled maintenance windows.

Auto-tiering arrays have been a hot debate lately. Not many seem to agree with my stance, but up until today no one has managed to give me a great argument or explain to me exactly why I would not want to enable Storage DRS on auto-tiering solutions. Yes, I fully understand that when I move a virtual machine from datastore A to datastore B, the virtual machine will more than likely end up on relatively slow storage, and the auto-tiering solution will need to optimize the placement again. However, when you are running out of disk space, what would you prefer: downtime or a temporary slowdown? In the case of I/O balancing this is different, and in a follow-up post I will explain why it is not supported.

** This article is based on vSphere 5.0 information **

Punch Zeros!

Duncan Epping · Jul 15, 2011 ·

I was just playing around with vSphere 5.0 and noticed something cool which I hadn't noticed before. I logged in to the ESXi Shell and typed a command I used a lot in the past, vmkfstools, and I noticed an option called -K. (I've just been informed that 4.1 has it as well; I never noticed it, though…)

-K --punchzero
This option deallocates all zeroed-out blocks and leaves only those blocks that were allocated previously and contain valid data. The resulting virtual disk is in thin format.

This is one of those options which many have asked for, as re-“thinning” disks would normally require a Storage vMotion. Unfortunately, it currently only works when the virtual machine is powered off, but I guess that is just the next hurdle that needs to be taken.
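A minimal usage sketch; the datastore and VM names below are placeholders of my own, and remember that the VM owning the disk must be powered off first:

```shell
# Placeholder path: substitute your own datastore and VM folder.
disk="/vmfs/volumes/datastore1/MyVM/MyVM.vmdk"

# -K / --punchzero deallocates the zeroed-out blocks of the disk in place,
# leaving a thin-format disk; the owning VM must be powered off.
punch_cmd="vmkfstools -K $disk"
echo "$punch_cmd"   # on an ESXi host, run the command itself instead of echoing it
```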

vSphere 5.0: Storage vMotion and the Mirror Driver

Duncan Epping · Jul 14, 2011 ·

**disclaimer: this article is an out-take of our book: vSphere 5 Clustering Technical Deepdive**

There’s a cool and exciting new feature as part of Storage vMotion in vSphere 5.0. This new feature is called Mirror Mode and it enables faster and highly efficient Storage vMotion processes. But what is it exactly, and what does it replace?

Prior to vSphere 5.0, we used a mechanism called Changed Block Tracking (CBT) to ensure that blocks which changed after they had already been copied to the destination were marked and copied again during the next iteration. Although CBT was efficient compared to legacy mechanisms (snapshots), the Storage vMotion engineers came up with an even more elegant and efficient solution, which is called Mirror Mode. Mirror Mode does exactly what you would expect it to do; it mirrors the I/O. In other words, when a virtual machine that is being Storage vMotioned writes to disk, the write will be committed to both the source and the destination disk. The write will only be acknowledged to the virtual machine when both the source and the destination have acknowledged it. Because of this, re-iterative copies are unnecessary, and the Storage vMotion process will complete faster than ever before.

The questions remain: how does this work? Where does Mirror Mode reside? Is this something that happens inside or outside of the guest?

By leveraging DISKLIB, the Mirror Driver can be enabled for the virtual machine that needs to be Storage vMotioned. Before this driver can be enabled, the virtual machine will need to be stunned, and of course unstunned after it has been enabled. The new driver leverages the datamover to do a single-pass block copy of the source disk to the destination disk. Additionally, the Mirror Driver will mirror writes between the two disks. Not only has efficiency increased, but so has migration time predictability, making it easier to plan migrations. I've seen data where the “down time” associated with the final copy pass was virtually eliminated (from 13 seconds down to 0.22 seconds) in the case of rapidly changing disks, and the migration time itself went from 2900 seconds down to 1900 seconds. Check this great paper by Ali Mashtizadeh for more details.

The Storage vMotion process is fairly straightforward and not as complex as one might expect:

  1. The virtual machine working directory is copied by VPXA to the destination datastore.
  2. A “shadow” virtual machine is started on the destination datastore using the copied files. The “shadow” virtual machine idles, waiting for the copying of the virtual machine disk file(s) to complete.
  3. Storage vMotion enables the Storage vMotion Mirror driver to mirror writes of already copied blocks to the destination.
  4. In a single pass, a copy of the virtual machine disk file(s) is completed to the target datastore while mirroring I/O.
  5. Storage vMotion invokes a Fast Suspend and Resume of the virtual machine (similar to vMotion) to transfer the running virtual machine over to the idling shadow virtual machine.
  6. After the Fast Suspend and Resume completes, the old home directory and VM disk files are deleted from the source datastore.
    1. It should be noted that the shadow virtual machine is only created in the case that the virtual machine home directory is moved. If and when it is a “disks-only” Storage vMotion, the virtual machine will simply be stunned and unstunned.

Of course I tested it, as I wanted to make sure Mirror Mode was actually enabled when doing a Storage vMotion. I opened up the VM's log files and this is what I dug up:

2011-06-03T07:10:13.934Z| vcpu-0| DISKLIB-LIB   : Opening mirror node /vmfs/devices/svm/ad746a-1100be4-svmmirror
2011-06-03T07:10:47.986Z| vcpu-0| HBACommon: First write on scsi0:0.fileName='/vmfs/volumes/4d884a16-0382fb1e-c6c0-0025b500020d/VM_01/VM_01.vmdk'
2011-06-03T07:10:47.986Z| vcpu-0| DISKLIB-DDB   : "longContentID" = "68f263d7f6fddfebc2a13fb60560e8e7" (was "dcbd5c17ac7e86a46681af33ef8049e5")
2011-06-03T07:10:48.060Z| vcpu-0| DISKLIB-CHAIN : DiskChainUpdateContentID: old=0xef8049e5, new=0x560e8e7 (68f263d7f6fddfebc2a13fb60560e8e7)
2011-06-03T07:11:29.773Z| Worker#0| Disk copy done for scsi1:0.
2011-06-03T07:15:16.218Z| Worker#0| Disk copy done for scsi0:0.
2011-06-03T07:15:16.218Z| Worker#0| SVMotionMirroredMode: Disk copy phase completed
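If you want to check your own VMs, a simple filter over the vmware.log stream picks out the Mirror Mode related lines; the log path in the usage comment is a placeholder:

```shell
# Keep only the Mirror Mode related lines from a vmware.log stream.
mirror_lines() {
  grep -E "svmmirror|SVMotionMirroredMode"
}

# Usage on a host (placeholder path):
#   mirror_lines < /vmfs/volumes/datastore1/MyVM/vmware.log
```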

Is that cool or what? One can only imagine what kind of new features can be introduced in the future using this new mirror mode driver. (FT enabled VMs across multiple physical datacenters and storage arrays anyone? Just guessing by the way…)

