Yellow Bricks

by Duncan Epping

vmfs

Punch Zeros!

Duncan Epping · Jul 15, 2011 ·

I was just playing around with vSphere 5.0 and noticed something cool I hadn't seen before. I logged in to the ESXi Shell, typed a command I have used a lot in the past, vmkfstools, and spotted an option called -K. (I've just been informed that 4.1 has it as well; I never noticed it there though…)

-K --punchzero
This option deallocates all zeroed out blocks and leaves only those blocks that were allocated previously and contain valid data. The resulting virtual disk is in thin format.

This is one of those options many have asked for, as re-thinning a disk would normally require a Storage vMotion. Unfortunately it currently only works when the virtual machine is powered off, but I guess that is just the next hurdle to be taken.
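
As a quick sketch of how you would use it (the datastore and VM paths below are just placeholder examples), power off the VM and point vmkfstools at the disk's descriptor file:

vmkfstools -K /vmfs/volumes/datastore1/myvm/myvm.vmdk

When the command completes, the zeroed-out blocks have been deallocated and the disk is left in thin format.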

vSphere 5.0: What has changed for VMFS?

Duncan Epping · Jul 13, 2011 ·

A lot has changed with vSphere 5.0, and so has one of the most under-appreciated "features"… VMFS. VMFS has been substantially changed, and I wanted to list some of the major changes and express my appreciation for the great work the VMFS team has done!

  • VMFS-5 uses GPT instead of MBR
  • VMFS-5 supports volumes up to 64TB
    • This includes Pass-through RDMs!
  • VMFS-5 uses a unified block size of 1MB
  • VMFS-5 uses smaller Sub-Blocks
    • ~30,000 8KB sub-blocks versus ~3,000 64KB sub-blocks with VMFS-3
  • VMFS-5 has support for very small files (1KB)
  • Non-disruptive upgrade from VMFS-3 to VMFS-5
  • ATS locking enhancements (as part of VAAI)

Although some of these enhancements may seem "minor", I beg to differ. These enhancements and new capabilities will reduce the number of volumes needed in your environment and increase the VM-to-volume density, ultimately leading to less management overhead! Yes, I can hear the skeptics thinking "do I really want to introduce such a large failure domain, my standard is a 500GB LUN". Think about it for a second: although that standard might have been valid years ago, it probably isn't today. The world has changed, recovery times have decreased, disks continue to grow, and locking mechanisms have been improved and can be offloaded through VAAI. A maximum of 10 VMs on a volume? I don't think so!
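
If you want to verify what an existing datastore is actually using, you can query it from the ESXi Shell with vmkfstools (the datastore name below is just an example):

vmkfstools -Ph /vmfs/volumes/datastore1

The output reports the exact file system version, block size and capacity, which makes it easy to see whether a volume is still VMFS-3 or already VMFS-5.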

Per-volume management features in 4.x

Duncan Epping · Apr 7, 2011 ·

Last week I noticed that one of the articles I wrote in 2008 is still very popular. This article explains the various possible combinations of the advanced settings "EnableResignature" and "DisallowSnapshotLUN". For those who don't know what these options do in a VI3 environment: they allow you to access a volume that is marked as "unresolved" because the VMFS metadata doesn't match the physical properties of the LUN. In other words, the LUN you are trying to access could be a snapshot of a LUN or a copy (think replication), and vSphere is denying you access.

These advanced options were often used in DR scenarios where a fail-over of a LUN needed to occur or, for instance, when a virtual machine needed to be restored from a snapshot of a volume. Many of our users would simply change the setting of either EnableResignature to 1 or DisallowSnapshotLUN to 0 and force the LUN to become available again. Those readers who paid attention noticed that I used "LUN" instead of "LUNs", and here lies one of the problems… These settings were global settings, which means that ANY LUN marked as "unresolved" would be resignatured or mounted. This more often than not led to problems where incorrect volumes were mounted or resignatured. Those volumes probably should not have been presented in the first place, but that is not the point. The point is that a global setting increases the chance that issues like these occur. With vSphere 4 this problem was solved when VMware introduced "esxcfg-volume -r".

This command enables you to resignature on a per-volume basis… Well, not only that: "esxcfg-volume -m" enables you to mount volumes and "-l" is used to list them (a quick command-line example follows the steps below). Of course you can also do this through the vSphere Client:

  • Click the Configuration tab and click Storage in the Hardware panel.
  • Click Add Storage.
  • Select the Disk/LUN storage type and click Next.
  • From the list of LUNs, select the LUN that has a datastore name displayed in the VMFS Label column and click Next.
    The name present in the VMFS Label column indicates that the LUN contains a copy of an existing VMFS datastore.
  • Under Mount Options, select Assign a New Signature and click Next.
  • In the Ready to Complete page, review the datastore configuration information and click Finish.
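
As a minimal command-line sketch of the same workflow, you would first list the unresolved volumes and then either mount the copy or resignature it:

esxcfg-volume -l
esxcfg-volume -m <VMFS UUID|label>
esxcfg-volume -r <VMFS UUID|label>

Note that mounting and resignaturing are alternatives: mount the volume if you want to keep the original signature, resignature it if you want it treated as a new datastore.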

But what if you do want to resignature all these volumes at once? What if you have a corner-case scenario where this is a requirement? Well, in that case you could still use the advanced settings, as they haven't exactly disappeared: they have been hidden in the UI (vSphere Client), but they are still around. From the command line you can still query the status:

esxcfg-advcfg -g /LVM/EnableResignature

Or you can change the global configuration option:

esxcfg-advcfg -s 1 /LVM/EnableResignature
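
If you do flip this setting for a one-off scenario, it is worth setting it back to its default of 0 afterwards, so that future rescans don't silently resignature every unresolved volume that happens to be presented:

esxcfg-advcfg -s 0 /LVM/EnableResignature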

Please note that hiding them was only the first step; they will be deprecated in a future release. It is recommended to use "esxcfg-volume" and resignature on a per-volume basis.

Mythbusters: ESX/ESXi caching I/O?

Duncan Epping · Apr 7, 2011 ·

We had a discussion internally about ESX/ESXi caching I/Os. In particular, this discussion was about the caching of writes, as a customer was concerned about the consistency of their data. I fully understand their concern, and I know that in the past some vendors did cache writes; VMware, however, does not do this, for obvious reasons. Although performance is important, it is worthless when your data is corrupt or inconsistent. Of course I looked around for data to back this claim up and bust this myth once and for all. I found a KB article that acknowledges it, and I have a quote from one of our VMFS engineers.

Source: Satyam Vaghani (VMware Engineering)
ESX(i) does not cache guest OS writes. This gives a VM the same crash consistency as a physical machine: i.e. a write that was issued by the guest OS and acknowledged as successful by the hypervisor is guaranteed to be on disk at the time of acknowledgement. In other words, there is no write cache on ESX to talk about, and so disabling it is moot. So that’s one thing out of our way.

Source: Knowledge Base
VMware ESX acknowledges a write or read to a guest operating system only after that write or read is acknowledged by the hardware controller to ESX. Applications running inside virtual machines on ESX are afforded the same crash consistency guarantees as applications running on physical machines or physical disk controllers.

RE: VMFS 3 versions – maybe you should upgrade your vmfs?

Duncan Epping · Feb 25, 2011 ·

I was just answering some questions on the VMTN forum when someone asked the following question:

Should I upgrade our VMFS luns from 3.21 (some in 3.31) to 3.46 ? What benefits will we get?

This person was referred to an article by Frank Brix Pedersen who states the following:

Ever since ESX 3.0 we have used the VMFS-3 filesystem, and we are still using it on vSphere. What most people don't know is that there actually are sub-versions of VMFS.

  • ESX 3.0 VMFS 3.21
  • ESX 3.5 VMFS 3.31 key new feature: optimistic locking
  • ESX 4.0 VMFS 3.33 key new feature: optimistic IO

The good thing about it is that you can use all features on all versions. In ESX 4 thin provisioning was introduced, but it does not require the VMFS to be 3.33; it will still work on 3.21. The changes in VMFS are primarily related to the handling of SCSI reservations. SCSI reservations happen a lot of the time: creation of a new VM, growing a snapshot delta file, growing a thin provisioned disk, etc.

I want to make sure everyone realizes that this is actually not true. All the enhancements made in 3.5, 4.0 and even 4.1 are not implemented at the filesystem level, but rather at the VMFS driver level, or through the addition of specific filters or even a new datamover.

Just to give an extreme example: you can leverage VAAI capabilities on a VMFS volume with VMFS filesystem version 3.21; however, in order to invoke VAAI you will need the VMFS 3.46 driver. In other words, a migration to a new datastore is not required to leverage new features!

