
Yellow Bricks

by Duncan Epping


vstorage

Block sizes and growing your VMFS

Duncan Epping · May 14, 2009 ·

I had a discussion on block sizes with some of my colleagues after the post on thin-provisioned disks. For those who did not read that post, here's a short recap:

If you create a thin-provisioned disk on a datastore with a 1MB block size, the disk will grow in increments of 1MB. Hopefully you can see where I'm going: a thin-provisioned disk on a datastore with an 8MB block size grows in 8MB increments. Each time the thin-provisioned disk grows, a SCSI reservation takes place because of the metadata changes. As you can imagine, an 8MB block size decreases the number of metadata changes needed, which means fewer SCSI reservations. Fewer SCSI reservations equals better performance in my book.
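The arithmetic above can be sketched in a few lines of Python. This is illustrative only; `growth_reservations` is a hypothetical helper for the back-of-the-envelope math, not a VMware API:

```python
def growth_reservations(grow_bytes, block_size_bytes):
    """Number of allocation events (each triggering a SCSI
    reservation for the metadata update) needed to grow a
    thin-provisioned VMDK by grow_bytes on a VMFS datastore
    with the given block size."""
    # Thin disks grow in whole-block increments, so round up.
    return -(-grow_bytes // block_size_bytes)

MB = 1024 * 1024
grow = 64 * MB  # grow the VMDK by 64 MB

# A 1 MB block size needs 8x the reservations of an 8 MB block size.
print(growth_reservations(grow, 1 * MB))  # 64
print(growth_reservations(grow, 8 * MB))  # 8
```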

As some of you know, the locking mechanism has been improved in vSphere; there's a good reason why they call it "optimistic locking". In other words, why bother increasing your block size if the locking mechanism has improved?

Although the mechanism behaves differently, that does not mean locking no longer occurs. In my opinion it's still better to have 1 lock instead of 8 locks when a VMDK needs to grow. But there's another good reason: with vSphere come growable VMFS volumes. You might start with a 500GB VMFS volume and a 1MB block size, but when you expand the volume that block size might not be sufficient for the VMs you create afterwards. Keep in mind that you can't modify the block size, while you may just have given people the option to create disks beyond the limit imposed by the block size. (Mind you: you will receive an error; it's not possible.)
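To make the "limit imposed by the block size" concrete: on VMFS-3, if memory serves, each file can address roughly 256K blocks, so the maximum file size scales with the block size (1MB gives roughly 256GB, 8MB roughly 2TB; the real limits are a few hundred bytes lower). A quick sketch, with `BLOCKS_PER_FILE` as my assumed constant:

```python
GB = 1024 ** 3
MB = 1024 ** 2

# Approximate VMFS-3 limit: a file can address ~256K blocks,
# so the maximum file size grows linearly with the block size.
BLOCKS_PER_FILE = 256 * 1024

for bs_mb in (1, 2, 4, 8):
    max_file_gb = bs_mb * MB * BLOCKS_PER_FILE // GB
    print(f"{bs_mb} MB block size -> ~{max_file_gb} GB max file size")
```

Which is exactly why a 1MB block size chosen for a small volume can bite you later: the volume can grow, the per-file limit cannot.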

So what about overhead? Will my 1KB log files all be created in 8MB blocks? That would mean a large overhead and might be a valid reason to use a 1MB block size!

No, it will not. VMFS-3 solves this issue with a sub-block allocator: small files use a sub-block to reduce overhead. A sub-block on a volume with a 1MB block size is 1/16th the size of the block; on an 8MB block size volume it's 1/128th. In other words, the sub-blocks are 64KB in both cases, and thus the overhead is the same in both cases as well.
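The sub-block arithmetic checks out either way, as this tiny sketch shows (the 1/16th and 1/128th divisors are the ones mentioned above; `sub_block_size` is just an illustrative helper):

```python
MB = 1024 * 1024
KB = 1024

def sub_block_size(block_size_bytes):
    """VMFS-3 sub-block size: 1/16th of a 1 MB block,
    1/128th of an 8 MB block -- 64 KB either way."""
    divisor = {1 * MB: 16, 8 * MB: 128}[block_size_bytes]
    return block_size_bytes // divisor

print(sub_block_size(1 * MB) // KB)  # 64
print(sub_block_size(8 * MB) // KB)  # 64
```

So a 1KB log file costs you 64KB of space regardless of the block size you picked.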

Now my question to you: what do you think? Would it make sense to always use an 8MB block size? I think it would.

Storage views, exploring the next version of ….

Duncan Epping · Apr 20, 2009 ·

I was playing around with vSphere this weekend while replying to topics on the VMTN Community. One of the things often asked about is storage reporting (snapshot info, disk utilization, etc.). With ESX 3.5 / vCenter 2.5 it needs to be scripted and can be integrated into vCenter by using custom fields, but as you can imagine, not everyone wants to add custom functionality to vCenter.

As of the next version of ESX/vCenter, aka vSphere, you can just click the storage tab on a host or VM. The following is the storage tab of a VM; click the picture for a larger version:

And of course the storage view on a host:


And what about the new map functionality, drilling down to HBA level?

EMC announced the Symmetrix V-Max!

Duncan Epping · Apr 14, 2009 ·

EMC has just announced the Symmetrix V-Max. Here are the specs, what a beast:

The first new Symmetrix model based on the Virtual Matrix Architecture is the Symmetrix V-Max storage system, the world’s largest high-end storage array, featuring:

  • Up to 128 Intel Xeon processor cores
  • Up to 1 TB (terabyte) of global memory
  • Fibre Channel/FICON/Gigabit Ethernet/iSCSI connectivity
  • Latest generation Flash/Fibre Channel/SATA drive support
  • Scale to 2,400 drives
  • Maximum usable, protected capacity of 2 PBs (petabytes)

Of course, as the name suggests, the Symmetrix has been optimized for Virtualized Datacenters. Here’s just one example of how the Symmetrix V-Max will make the life of storage admins a lot easier:

Re-architecting the way that Enginuity interacts with the host OS layers, especially in the virtualization space. Creating the ability to dynamically provision entire port groups, initiator groups, even topologies with less steps than it takes to create a MetaLUN in Navisphere. I’m sure that Chad Sakac will have more to say regarding this but, let’s be REALLY clear on this: putting a Symmetrix V-Max into your virtualized environment is now going to be even easier than…well, most other things out there. We’re calling this Auto-provisioning, by the way, and to put it in the words of wizened genius within the walls of EMC (Duane Olson, if I may be so bold):

What a concept. Create a group for a particular host(ESX farm as an example), and now all you do is create/assign storage to this group and magically, the host has new storage.. One command, and its done. No more create the device, assign a device to a frontend channel, and then mask the device to a host…One step, and its all done. This will significantly decrease the time needed to allocate new storage to an existing host/hosts.

And for those thinking about BC-DR:

EMC is introducing the new zero-data-loss SRDF Extended Distance Protection (EDP) feature for Symmetrix V-Max systems, which can reduce the cost of multi-site replication by up to 50 percent. SRDF is ideal for virtual server environments and has been integrated with VMware Site Recovery Manager (SRM) and supports EMC Replication Manager for automated protection of VMware environments.

For more info check out these blog articles:

  • Chad Sakac – EMC’s VMware Storage Strategy – The 3rd Shoe Drops
  • Chuck Hollis – Symmetrix V-Max: A New Paradigm For Storage Virtualization?
  • Chuck Hollis – Symmetrix V-Max: Storage Architecture Redefined
  • Dave Graham – Welcome to the next generation: Symmetrix V-Max is here…
  • Barry Burke – Symmetrix v-max – a revolutionary evolution
  • Barry Burke – Symmetrix v-max – scale up, scale out, scale away!

Rescan for datastores, exploring the next version of ESX/vCenter

Duncan Epping · Apr 8, 2009 ·

I love exploring new products; no matter how many times I click through the GUI and browse through the directory structure on the console, you will discover something new every day.

I never noticed this one, but as of the next version of ESX/vCenter you can rescan your complete cluster or datacenter just by right-clicking the object and clicking "Rescan for Datastore". Cool, now I will not need to run a script anymore…

Virtual Geek Week?

Duncan Epping · Apr 3, 2009 ·

It must have been Virtual Geek week this week! I guess most of you already know Virtual Geek, and if you didn't, you've been missing out on the good stuff. Virtual Geek is maintained by Chad Sakac of EMC, and let's say there's a reason why his blog is called "Virtual Geek". Chad posted a series of blog articles that are a must-read for anyone interested in storage related to VMware, and in storage/VMware in general.

It started out with the "VMFS best practices and counter FUD" article, where he sets the facts straight and debunks several myths, like the maximum number of VMs per VMFS volume and the use of extents. Besides countering this FUD, there are also some very valuable tips in the article, for instance the advanced setting "Disk.SchedNumReqOutstanding" and the why/where/when.

In his second post this week he revealed that the upcoming release of ESX/vCenter (vSphere) will include the counterpart of the EMC Storage Viewer (a vCenter plugin; a YouTube demo can be found here). For all CLARiiON/Celerra customers who are planning to upgrade to vSphere, a nice "little" extra!

The third one was the one I had personally been waiting for: the brand-new version of the Celerra VSA. If you want to run a virtual "virtual environment", this virtual storage appliance is a must-have. Especially if you want to test SRM, this VSA will come in handy. Be sure to also download the how-to guide that Chad provided in the "HOWTO 401" article.

Numbers four and five deal with multipathing and MRU behavior. I fully agree that understanding how MRU works is essential if you are using that policy. Post number five contains the script that is demoed in post four. The script load-balances the LUNs across the back end of the array (the storage processors) and of course makes sure this is reflected in ESX for optimal performance.

Let’s hope there’s more to come over the next weeks…


About the Author

Duncan Epping is a Chief Technologist and Distinguished Engineering Architect at Broadcom. Besides writing on Yellow-Bricks, Duncan is the co-author of the vSAN Deep Dive and the vSphere Clustering Deep Dive book series. Duncan is also the host of the Unexplored Territory Podcast.

Copyright Yellow-Bricks.com © 2025 · Log in