
Yellow Bricks

by Duncan Epping


unmap

vSphere 6.5 what’s new – VMFS 6 / Core Storage

Duncan Epping · Oct 18, 2016 ·

I haven’t spent a lot of time looking at VMFS lately. I was looking into what was new for vSphere 6.5 and noticed a VMFS section. Good to see that work is still being done on new features and functionality for the core vSphere file system. So what is new in VMFS 6:

  • Support for 4K Native Drives in 512e mode
  • SE Sparse Default
  • Automatic Space Reclamation
  • Support for 512 devices and 2000 paths (versus 256 and 1024 in the previous versions)
  • CBRC aka View Storage Accelerator

Let’s look at them one by one. I think support for 4K native drives in 512e mode speaks for itself. Spindle sizes keep growing, and these new “advanced format” drives come with a 4K-byte sector instead of the usual 512-byte sector, primarily for better handling of media errors. As of vSphere 6.5 these drives are fully supported, but note that for now they are only supported when running in 512e mode! The same applies to Virtual SAN in the 6.5 release: only supported in 512e mode. This basically means that 512-byte sectors are emulated on a 4K drive. Hopefully we will have more on full 4Kn support for vSphere/VSAN soon.
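
If you want to check how your drives present themselves to the host, esxcli can help. I believe the command below lists the logical and physical block size per device, with a Format Type column showing 512n, 512e or 4Kn, but double-check the exact output on your own build:

esxcli storage core device capacity list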

From an SE Sparse perspective: right now SE Sparse is used primarily for View and for LUNs larger than 2TB. On VMFS 6, SE Sparse will be the default snapshot format. Not much more to it than that. If you want to know more about SE Sparse, read this great post by Cormac.

Automatic Space Reclamation is something that I know many of my customers have been waiting for. Note that this is based on VAAI Unmap, which has been around for a while and allows you to unmap previously used blocks. In other words, storage capacity is reclaimed and released to the array so that other volumes can use those blocks when needed. In the past you needed to run a command to reclaim the blocks; now this has been integrated in the UI and can simply be turned on or off. You can find this in the UI when you go to your datastore object and click Configure: you can set it to “none”, which disables it, or set it to “low”, as shown in the screenshot below.

If you prefer “esxcli”, you can do the following to get the info for a particular datastore (sharedVmfs-0 in my case):

esxcli storage vmfs reclaim config get -l sharedVmfs-0
   Reclaim Granularity: 1048576 Bytes
   Reclaim Priority: low

Or set the datastore to a particular level. Note that using esxcli you can also set the priority to medium or high if desired:

esxcli storage vmfs reclaim config set -l sharedVmfs-0 -p high
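
And if you do not want to wait for the automatic process, the manual unmap command that was introduced back in 5.5 should still work. The -n parameter sets how many blocks are reclaimed per iteration (200 being the default, if I am not mistaken):

esxcli storage vmfs unmap -l sharedVmfs-0 -n 200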

Next up: support for 512 devices and 2000 paths. In previous versions the limit was 256 devices and 1024 paths, and some customers were hitting those limits in their clusters. Especially when RDMs are used, when people have a limited number of VMs per datastore, or when 8 paths to each device are configured, it is easy to hit those limits. Hopefully with 6.5 that will not happen anytime soon. On the other hand, personally I would hope more and more people are considering moving towards either VSAN or Virtual Volumes.
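
If you are wondering how close you are to those limits today, a quick and dirty way to count the devices and paths on a host from the shell (this simply counts the relevant fields in the esxcli output, so treat it as a rough sketch):

esxcli storage core device list | grep -c 'Display Name'
esxcli storage core path list | grep -c 'Runtime Name'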

This is one I accidentally ran into. It is not really directly related to VMFS, but I figured I would add it here anyway, otherwise I would forget about it. In the past CBRC, aka View Storage Accelerator, was limited to 2GB of memory cache per host. I noticed in the advanced settings that the limit is now 32GB, which is a big difference compared to the 2GB in previous releases. I haven’t done any testing, but I assume our EUC team has, and hopefully we will see some good performance data on this big increase soon.
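
If you want to verify this on your own host, I believe the advanced option involved is CBRC.DCacheMemReserved (a value in MB), but do double-check the exact option name on your build, and note that it will only show up when CBRC is available:

esxcli system settings advanced list --option /CBRC/DCacheMemReserved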

And that was it… some great enhancements in the core storage space if you ask me. I am sure there is even more, and if I find out additional details I will share those with you as well.

vSphere 5.0: UNMAP (vaai feature)

Duncan Epping · Jul 15, 2011 ·

With vSphere 5.0 a brand new primitive has been introduced called Dead Space Reclamation, part of the overall thin provisioning primitive. Dead Space Reclamation is also sometimes referred to as unmap, and it enables you to reclaim blocks on thin-provisioned LUNs by telling the array that specific blocks are obsolete. And yes, the command used is the SCSI “unmap” command.

Now you might wonder when you would need this, but think about it for a second… what happens when you enable Storage DRS? Indeed, virtual machines might be moved around. When a virtual machine is migrated from a thin-provisioned LUN to a different LUN, you probably would like to reclaim the blocks that the array originally allocated to that volume, as they are no longer needed on the source LUN. That is what unmap does. Of course this applies not only when a virtual machine is Storage vMotioned, but also when a virtual machine or, for instance, a virtual disk is deleted. One thing I need to point out is that this is about unmapping blocks associated with a VMFS volume; if you delete files within a VMDK, those blocks will not be unmapped!

When playing around with this I had a question from one of my colleagues: he did not have the need to unmap blocks on these thin-provisioned LUNs, so he asked if you could disable it. Yes you can, by setting the following advanced option to 0 (set it back to 1 to re-enable it):

esxcli system settings advanced set --int-value 0 --option /VMFS3/EnableBlockDelete
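
To check the current value before or after changing it:

esxcli system settings advanced list --option /VMFS3/EnableBlockDelete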

The cool thing is that it works with net-new VMFS-5 volumes, but also with VMFS-3 volumes that have been upgraded to VMFS-5:

  1. Open the command line and go to the folder of the datastore:
    cd /vmfs/volumes/datastore_name
  2. Reclaim a percentage of the free capacity on the VMFS-5 datastore for the thin-provisioned device by running:
    vmkfstools -y <value>

The value should be between 0 and 100, with 60 being the maximum recommended value. The reason for that maximum, as far as I know, is that vmkfstools -y temporarily claims the specified percentage of free space with a balloon file before unmapping those blocks, so higher values risk (temporarily) filling up the datastore. I ran it on a thin-provisioned LUN with 60% as the percentage to reclaim. Unfortunately I didn’t have access to the back end of the array, so I could not validate whether any disk space was actually reclaimed.

/vmfs/volumes/4ddea74d-5a6eb3bc-f95e-0025b5000217 # vmkfstools -y 60
Attempting to reclaim 60% of free capacity 3.9 TB on VMFS-5 file system 'tm-pod04-sas600-sp-4t'.
Done.
/vmfs/volumes/4ddea74d-5a6eb3bc-f95e-0025b5000217 #
