
Yellow Bricks

by Duncan Epping


ESX

Changing the block size of your local VMFS during the install…

Duncan Epping · Nov 11, 2009 ·

I did not even know it was possible, but on the VMTN Community Forums user PatrickD revealed a workaround to set a different block size for your local VMFS during installation. Of course the question remains why you would want to do this rather than create a dedicated VMFS for your Service Console and a separate one for your VMs. Either way, it is most definitely a great workaround; thanks, Patrick, for sharing.

There isn’t an easy way of doing that right now. Given that a number of people have asked for it we’re looking at adding it in future versions.

If you want to do this now, the only way to do it is by mucking around with the installer internals (and knowing how to use vi). It’s not that difficult if you’re familiar with using a command line. Try these steps for changing it with a graphical installation:

  1. boot the ESX installation DVD in text mode
  2. switch to the shell (Alt-F2)
  3. ps | grep Xorg
  4. kill the PID which comes up with something like “Xorg -br -logfile …”. On my system this comes up as PID 590, so “kill 590”
  5. cd /usr/lib/vmware/weasel
  6. vi fsset.py
  7. scroll down to the part which says “class vmfs3FileSystem(FileSystemType):”
  8. edit the “blockSizeMB” parameter to the block size that you want. It will currently be set to ‘1’. The only values that will probably work are 1, 2, 4, and 8.
  9. save and exit the file
  10. cd /
  11. /bin/weasel

After that, run through the installer as you normally would. To check that it worked, after the installer has completed you can go back to a different terminal (try Ctrl-Alt-F3, since weasel is now running on tty2) and look through /var/log/weasel.log for the vmkfstools creation command.
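Step 8 above boils down to a one-line substitution in fsset.py. As a rough sketch of that edit (the `blockSizeMB` attribute and the class name come from the steps above; the exact file layout around them is an assumption), the same change could be made programmatically rather than with vi:

```python
import re

def set_block_size(source: str, size_mb: int) -> str:
    """Rewrite the blockSizeMB assignment in weasel's fsset.py source.

    VMFS-3 only accepts 1, 2, 4 and 8 MB block sizes.
    """
    if size_mb not in (1, 2, 4, 8):
        raise ValueError("VMFS-3 block size must be 1, 2, 4 or 8 MB")
    # Replace e.g. "blockSizeMB = 1" with the requested size.
    return re.sub(r"(blockSizeMB\s*=\s*)\d+", r"\g<1>%d" % size_mb, source)

# Example against a snippet shaped like the installer's fsset.py:
snippet = "class vmfs3FileSystem(FileSystemType):\n    blockSizeMB = 1\n"
patched = set_block_size(snippet, 8)
# patched now contains "blockSizeMB = 8"
```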

Hope that helps.

Block sizes, think before you decide

Duncan Epping · Nov 10, 2009 ·

I have written about block sizes a couple of times already, but I had the same discussion twice over the last couple of weeks, at a customer site and on Twitter (@VirtualKenneth), so let’s recap. First, the three articles that started these discussions: vSphere VM Snapshots and block size, That’s why I love blogging… and Block sizes and growing your VMFS.

I think the key takeaways are:

  • Block size does not impact performance, whether large or small, as the guest OS dictates the block sizes actually used.
  • Large block sizes do not increase storage overhead, as sub-blocks are used for small files. Sub-blocks are always 64KB.
  • With thin provisioning there are theoretically more locks while a thin disk grows, but the locking mechanism has been vastly improved in vSphere, so this can be neglected. A thin-provisioned VMDK on a VMFS volume with a 1MB block size grows in chunks of 1MB, and so on.
  • When separating the OS from data it is important to select the same block size for both VMFS volumes, as otherwise it might be impossible to create snapshots.
  • When using a virtual RDM for data, the VMFS volume holding the OS must have an appropriate block size. In other words, the maximum file size must accommodate the size of the RDM.
  • When growing a VMFS volume there is no way to increase the block size; you may need to grow the volume to grow a VMDK, and that VMDK could end up beyond the limit of the maximum file size.
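The “maximum file size” mentioned above is a direct function of the block size on VMFS-3: 1, 2, 4 and 8MB blocks allow files of roughly 256, 512, 1024 and 2048GB respectively (the exact limits are 512 bytes short of these figures). A minimal sketch of the check you would do before settling on a block size:

```python
# Approximate maximum file size per VMFS-3 block size, in GB
# (the real limits are 512 bytes short of these round figures).
MAX_FILE_SIZE_GB = {1: 256, 2: 512, 4: 1024, 8: 2048}

def fits(block_size_mb: int, vmdk_size_gb: float) -> bool:
    """Can a VMDK of this size live on a volume with this block size?"""
    return vmdk_size_gb <= MAX_FILE_SIZE_GB[block_size_mb]

# A 300GB data disk will not fit on a 1MB block size volume,
# which is why "go big" is the safe standardization choice:
print(fits(1, 300))  # False
print(fits(8, 300))  # True
```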

My recommendation would be to forget about the block size: make your life easier and standardize. Go big and make sure you have the flexibility you need now and in the future.

HA admission control, the answers…

Duncan Epping · Nov 9, 2009 ·

I received a whole bunch of questions about my two latest posts on HA admission control. I have added all the info to my HA Deepdive page, but in case you don’t regularly read that section, I will post the answers here as well:

  1. The default of 256MHz when no reservations are set is too conservative in my environment. What happens if you set a 100MHz reservation?
    Nothing. The minimum value VMware HA calculates with is 256MHz. Keep in mind that this applies both to slots and to the percentage-based admission control policy. It can be overruled with an advanced setting (das.slotCpuInMHz), but I don’t recommend doing this.
  2. What happens if you have an unbalanced cluster and the largest host fails?
    If your admission control policy is based on the number of host failures, VMware HA will take this into account. When you select a percentage, however, this is not the case. You will need to make sure that you specify a percentage equal to, or preferably larger than, the percentage of resources provided by the largest host in the cluster. Otherwise there is a chance that VMware HA can’t restart all virtual machines.
  3. What would your recommendation be: reserve a specific percentage, or set the number of host failures VMware HA can tolerate?
    It depends. Yes, I know that is the obvious answer, but it really does. There are three options, and each has its own advantages and disadvantages:

    • Number of host failures
      Pros: Fully automated; when a host is added to a cluster, HA calculates how many slots are available.
      Cons: Can be very conservative and inflexible when reservations are used, as the largest reservation dictates slot sizes.
    • Percentage reserved
      Pros: Flexible. Although reservations still affect the amount of available resources, the impact on the environment is smaller.
      Cons: Manual calculations need to be done when adding hosts to a cluster. Unbalanced clusters can be a problem when the chosen percentage is too low.
    • Designated failover host
      Pros: What you see is what you get.
      Cons: What you see is what you get.
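The conservatism of the first option can be illustrated with a small sketch. This simplifies to CPU only (real slot sizes also take memory reservations and overhead into account) and uses the 256MHz default from answer 1 above:

```python
DEFAULT_CPU_SLOT_MHZ = 256  # minimum HA uses when no reservation is set

def cpu_slot_size(reservations_mhz):
    """Slot size is dictated by the largest reservation (256MHz minimum)."""
    return max([DEFAULT_CPU_SLOT_MHZ, *reservations_mhz])

def slots_per_host(host_capacity_mhz, reservations_mhz):
    """How many slots fit on one host, given the cluster-wide slot size."""
    return host_capacity_mhz // cpu_slot_size(reservations_mhz)

# One VM with a large 2GHz reservation drags the slot size up for everyone:
print(slots_per_host(12000, [0, 0, 0]))     # 46 slots
print(slots_per_host(12000, [2000, 0, 0]))  # 6 slots
```

This is why the text above calls the policy conservative and inflexible: a single large reservation shrinks the slot count for the entire cluster.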

How to avoid HA slot sizing issues with reservations?

Duncan Epping · Nov 6, 2009 ·

Can I avoid large HA slot sizes due to reservations without resorting to advanced settings? That is the question I get almost daily. The answer used to be no: HA uses reservations to calculate the slot size, and pre-vSphere there was no way to tell HA to ignore them without using advanced settings. So there is your answer: pre-vSphere.

With vSphere, VMware introduced a percentage-based admission control policy next to the number-of-host-failures policy. The percentage avoids the slot size issue as it does not use slots for admission control. So what does it use?

When you select a specific percentage, that percentage of the total amount of resources is kept unused for HA purposes. First of all, VMware HA adds up all available resources to see how much it has in total. Then VMware HA calculates how many resources are currently consumed by adding up all memory and CPU reservations of powered-on virtual machines. For virtual machines that do not have a reservation, a default of 256MHz is used for CPU and a default of 0MB plus the memory overhead is used for memory. (The amount of overhead per configuration type can be found on page 28 of the resource management guide.)

In other words:

((total amount of available resources – total reserved VM resources) / total amount of available resources)

Where total reserved VM resources includes the default reservation of 256MHz and the memory overhead of the VM.

Let’s use a diagram to make it a bit more clear:

Total cluster resources are 24GHz (CPU) and 96GB (memory). This leads to the following calculations:

((24GHz – (2GHz + 1GHz + 256MHz + 4GHz)) / 24GHz) = 69% available
((96GB – (1.1GB + 114MB + 626MB + 3.2GB)) / 96GB) = 85% available

As you can see, the amount of memory differs from the diagram: even if a reservation has been set, the memory overhead is added on top of the reservation. For both metrics HA admission control constantly checks whether the policy has been violated. When either of the two thresholds is reached, memory or CPU, admission control disallows powering on any additional virtual machines. Pretty simple, huh?!
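The CPU half of the example can be reproduced in a few lines (the reservations are the ones from the calculation above; truncating to a whole percentage matches the 69% figure):

```python
def percent_available(total_mhz, reservations_mhz, default_mhz=256):
    """Percentage-based admission control: the share of cluster CPU
    not claimed by the (default) reservations of powered-on VMs."""
    # VMs without a reservation count as the 256MHz default.
    reserved = sum(r if r > 0 else default_mhz for r in reservations_mhz)
    return int((total_mhz - reserved) / total_mhz * 100)

# The example cluster: 24GHz total; reservations of 2GHz, 1GHz,
# none (256MHz default applies) and 4GHz:
print(percent_available(24000, [2000, 1000, 0, 4000]))  # 69
```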

Document it…

Duncan Epping · Nov 4, 2009 ·

Something I noticed over the last few months while doing design reviews is that hardly anyone documents the decisions in a design. Most designs I review are physical designs, which is understandable as most IT people are technical people who could not care less about logical designs. I am perfectly fine with that, although I do recommend taking a different approach, as long as you document why you are going down a specific path.

There can be specific constraints or requirements (both technical and business related) which justify your decision, but if you don’t document these constraints or requirements, chances are someone will change the design based on a false assumption, and who knows what that will lead to…


About the Author

Duncan Epping is a Chief Technologist and Distinguished Engineering Architect at Broadcom. Besides writing on Yellow-Bricks, Duncan is the co-author of the vSAN Deep Dive and the vSphere Clustering Deep Dive book series. Duncan is also the host of the Unexplored Territory Podcast.

Copyright Yellow-Bricks.com © 2025