
Yellow Bricks

by Duncan Epping


vstorage

Automatic rescan of your HBAs….

Duncan Epping · Aug 4, 2009 ·

As some of you, and I hope all of you, have noticed by now, when you create, expand, add an extent to, or delete a datastore, a rescan of your HBAs is automatically initiated. This, however, can lead to a “rescan storm” when you are building a new environment.

You can imagine that it’s pointless to rescan your HBAs 25 times in a row when you are adding more than one new datastore. I can also imagine you would like to be in control: when, on which server, and at what time. This behavior was introduced with vCenter 2.5 U2, I believe, but as I just found out it can be disabled. Keep in mind that disabling it is not a best practice; it should be avoided as a default setting, but it comes in handy when you are building a new site.

  1. Open up the vSphere Client
  2. Go to Administration -> vCenter Server
  3. Go to Settings -> Advanced Settings
  4. If the key “config.vpxd.filter.hostRescanFilter” is not available, add it and set it to false

Make sure to set it back to “true” as soon as you are done, so that the environment stays consistent when you or your customer adds, removes, or expands a datastore in the future.
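With the filter disabled you are the one deciding when a rescan happens. As a minimal sketch, assuming a classic ESX host where vmhba1 is the HBA you want to rescan from the service console once all datastores have been created:

esxcfg-rescan vmhba1   # rescan this HBA for new or removed LUNs
vmkfstools -V          # rescan the discovered LUNs for new VMFS volumes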

Storage VMotion and moving to a Thin Provisioned disk

Duncan Epping · Jul 31, 2009 ·

I was just reading this article by Vladan about Storage VMotion. He explains how you can get your unused disk space back by using Storage VMotion and moving to a thin provisioned disk at the same time. I agree that this is one of the best new features out there. It’s easy and effective.

However, you need to keep in mind that although disk space may appear unused according to the guest OS, it might have been used in the past. (An OS usually only removes the pointer to the data, not the actual data itself.) If you do not zero out your disk before you do the Storage VMotion and migration to a thin provisioned disk, you will be copying all the “filled” blocks. This is the same concept as, for instance, a VCB full image dump, which I addressed in the beginning of 2008.

So to optimize migrations to thin provisioned disks, either use sdelete from Microsoft/Sysinternals or use the “shrink” option within VMware Tools. Both work fine, but keep in mind they can be time consuming. You could also script sdelete and zero out every disk once a week.
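As a minimal sketch, assuming sdelete.exe has been copied into the guest and C: is the volume you are about to migrate (the switch for zeroing free space differs between sdelete versions, so check sdelete /? first):

sdelete -z c:   # zero out the free space on C: so previously used blocks are empty before the Storage VMotion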

NetApp’s vSphere best practices and EMC’s SRM in a can

Duncan Epping · Jul 15, 2009 ·

This week both NetApp and EMC released updated versions of documents I highly recommend to everyone interested in virtualization! Some might think: why would I want to read a NetApp document when we are an EMC shop? Or why would I want to read an EMC document when my environment is hosted on a NetApp FAS3050? The answer is simple: although both documents contain vendor-specific information, there’s more to be learned from them because the focus is on VMware products. No marketing nonsense, just the good stuff!

NetApp’s guide dives into the basics of multipathing, for instance. The section on iSCSI/NFS is especially useful: how do I set up multiple VMkernel interfaces for load balancing, and what are the pros and cons? EMC’s SRM and Celerra guide includes a full how-to for setting this up, covering not only the EMC side but also the VMware SRM side of it. Like I said, both documents are highly recommended!

  • TR-3749: vSphere on NetApp Storage Best Practices
  • EMC Celerra VSA and VMware SRM setup and configurations guide

Change the default pathing policy to round robin

Duncan Epping · Jul 10, 2009 ·

I just received an email from one of my readers, Mike Laskowski, who wanted to share the following with us:

I have over 100 LUNs in my environment. Round Robin is officially supported on ESX 4. In the past we had a script that would manually load balance the LUNs across FAs. ESX 4 has a different way to balance the LUNs with Round Robin. What you can do is build the ESX server and then run the following in the CLI:

esxcli nmp satp setdefaultpsp --psp VMW_PSP_RR --satp VMW_SATP_SYMM

Note: You should do this before presenting LUNs and adding datastores. If you already have LUNs presented and datastores added, you can still run the command, but you’ll have to reboot the ESX server for it to take effect. This will make Round Robin the default on all LUNs. It would take forever if you had to change each LUN manually.

THX Mike Laskowski

Please note that this example is specifically for the “SYMM” SATP. SATP stands for Storage Array Type Plug-in, and SYMM refers to the EMC Symmetrix DMX. If you are using a different array, find out which SATP it uses and change the command accordingly.
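As a minimal sketch of finding the right SATP on an ESX 4 host; VMW_SATP_CX below is just an example for a CLARiiON array, substitute whatever the list command shows for your array:

esxcli nmp satp list   # lists every SATP and its current default PSP
esxcli nmp device list   # shows which SATP and PSP each LUN is currently using
esxcli nmp satp setdefaultpsp --psp VMW_PSP_RR --satp VMW_SATP_CX   # make Round Robin the default for that SATP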

That’s why I love blogging…

Duncan Epping · Jun 2, 2009 ·

I’m an outspoken person, as most of you have noticed by now, but I’m also open to discussion, and that’s why I particularly like blogging. Every now and then a good discussion starts based on one of my blog articles. (Or a blog article of any of the other bloggers, for that matter.) These usually start in the form of a comment on an article, but also via email or Twitter; even within VMware some of my articles have been discussed extensively.

A couple of weeks ago I voiced my opinion about VMFS block sizes and growing your VMFS. Growing a VMFS volume is a new feature introduced with vSphere. In the article I stated that a large block size, 8MB, would be preferable because you would have less locking when using thin provisioned disks.

If you create a thin provisioned disk on a datastore with a 1MB block size, the thin provisioned disk will grow in increments of 1MB. Hopefully you can see where I’m going: a thin provisioned disk on a datastore with an 8MB block size will grow in 8MB increments. Each time the thin provisioned disk grows, a SCSI reservation takes place because of metadata changes. As you can imagine, an 8MB block size decreases the number of metadata changes needed, which means fewer SCSI reservations. Fewer SCSI reservations equals better performance in my book.

As a consultant I get a lot of questions on VMFS locking, and with the understanding I had at the time I assumed that a larger block size would be beneficial in terms of performance. I’m no scientist or developer; I rely on the information I find on the internet, in manuals, in course material and on the occasional internal mailing list… In this case that information wasn’t correct, or rather, it hadn’t been updated yet for the changes that vSphere introduced. Luckily for me, and for you guys, one of my colleagues jumped in to give us some good insights:

I am a VMware employee and I wrote VMFS with a few cronies, but the following is a personal opinion:

Forget about locking. Period. Yes, SCSI reservations do happen (and I am not trying to defend that here) and there will be some minor differences in performance, but the suggestion on the (very well written) blog post goes against the mission of VMFS, which is to simplify storage virtualization.

Here’s a counter example: if you have a nearly full 8MB VMFS volume and a less full 1MB VMFS volume, you’ll still encounter less IO overheads allocating blocks on the 1MB VMFS volume compared to the 8MB volume, because the resource allocator will sweat more trying to find a free block in the nearly full volume. This is just one scenario, but my point is that there are tons of things to consider if one wants to account for overheads in a holistic manner, and the VMFS engineers don’t want you to bother with these “tons” of things. Let us handle all that for you.

So in summary, blocksizes and thin provisioning should be treated orthogonally. Since thin provisioning is an official feature, the thing for users to know is that it will work “well” on all VMFS blocksize configurations that we support. Thinking about reservations or # IOs the resource manager does, queue sizes on a host vs the blocksize, etc will confuse the user with assertions that are not valid all the time.

I like the post in that it explains blocks vs sub-blocks. It also appeals to power users, so that’s great too. But reservation vs. thin provisioning considerations should be academic only. I can tell you about things like non-blocking retries, optimistic IO (not optimistic locking) and tons of other things that we have done under the covers to make sure reservations and thin provisioning don’t belong in the same sentence with vSphere 4. But conversely, I challenge any user to prove that 1MB incurs a significant overhead compared to 8MB with thin provisioning :)

Satyam Vaghani

Does this mean that I would not pick an 8MB block size over a 1MB block size any more?

Not exactly, but it depends on the customer’s specific situation. My other reason for picking an 8MB block size was VMFS volume growing. If you grow a VMFS volume, the reason probably is that you need to grow a VMDK. If the VMDK needs to grow beyond the maximum file size, which is dictated by the chosen block size, you would need to move the VMDK (via Storage VMotion or cold migration) to a different datastore. If you had selected an 8MB block size when you created the VMFS volume, you would not be in this position. In other words, I would still prefer a larger block size, but that preference is based on flexibility in terms of administration, not on performance or possible locking issues.
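For reference, on VMFS-3 the block size chosen at creation time dictates the maximum file size: 1MB allows files up to 256GB, 2MB up to 512GB, 4MB up to 1TB and 8MB up to 2TB (minus a little overhead), and the block size cannot be changed afterwards without reformatting the volume. As a minimal sketch, creating a datastore with an 8MB block size from the service console, where the device path and datastore name are placeholders:

vmkfstools -C vmfs3 -b 8m -S MyDatastore /vmfs/devices/disks/&lt;naa.id&gt;:1   # format partition 1 of this LUN as VMFS-3 with an 8MB block size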

I want to thank Satyam for his very useful comment, thanks for chipping in!



About the Author

Duncan Epping is a Chief Technologist and Distinguished Engineering Architect at Broadcom. Besides writing on Yellow-Bricks, Duncan is the co-author of the vSAN Deep Dive and the vSphere Clustering Deep Dive book series. Duncan is also the host of the Unexplored Territory Podcast.
