
Yellow Bricks

by Duncan Epping



VMFS Metadata size?

Duncan Epping · Nov 11, 2009 ·

When designing your VMware vSphere / VI3 environment there are so many variables to take into account that it is easy to get lost. Something hardly anyone seems to take into account when creating VMFS volumes is that the metadata will also take up a specific amount of disk space. You might think that everyone has at least 10% free disk space on a VMFS volume, but this is not the case. Several of my customers have dedicated VMFS volumes for a single VMDK and noticed during the creation of the VMDK that they had just lost a specific amount of MBs. Most of you will have guessed by now that this is due to the metadata, but how much disk space will the metadata actually consume?

There’s a simple formula that can be used to calculate how much disk space the metadata will consume. This formula used to be part of the “SAN System Design and Deployment Guide” (January 2008) but seems to have been removed in the updated versions.

Approximate metadata size in MB = 500MB + ((LUN size in GB – 1) x 0.016MB)

For a 500GB LUN this would result in the following:

500 MB + ((500 - 1) x 0.016MB) = 507.984 MB
Roughly 0.1% of the total disk size used for metadata

For a 1500MB LUN this would result in the following:

500 MB + ((1.5 - 1) x 0.016MB) = 500.008 MB
Roughly 33% of the total disk size used for metadata

As you can see, for a large VMFS volume (500GB) the disk space taken up by the metadata is only about 0.1% and is almost negligible, but for a very small LUN it will consume a lot of the disk space and needs to be taken into account…

[UPDATE]: As mentioned in the comments, the formula seems to be incorrect. I’ve looked into it, and it appears that this is the reason it was removed from the documentation. The current limit for metadata is 1200MB, and that is the number you should use when sizing your datastores.
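For reference, here is a minimal Python sketch of the sizing math. It reproduces the two worked examples above using the (since retracted) formula and notes the 1200MB limit from the update; the function name is mine, purely for illustration.

def metadata_size_mb(lun_size_gb: float) -> float:
    """Retracted approximation: 500MB base plus 0.016MB per GB above 1GB."""
    return 500 + (lun_size_gb - 1) * 0.016

METADATA_LIMIT_MB = 1200  # the current metadata limit, per the update above

for lun_gb in (1.5, 500):
    approx = metadata_size_mb(lun_gb)
    pct = approx / (lun_gb * 1024) * 100  # metadata as a share of the LUN
    print(f"{lun_gb:>6} GB LUN -> ~{approx:.3f} MB metadata (~{pct:.1f}% of the LUN)")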

Block sizes, think before you decide

Duncan Epping · Nov 10, 2009 ·

I wrote about block sizes a couple of times already, but I had the same discussion twice over the last couple of weeks, at a customer site and on Twitter (@VirtualKenneth), so let’s recap. First, the three articles that started these discussions: vSphere VM Snapshots and block size, That’s why I love blogging… and Block sizes and growing your VMFS.

I think the key takeaways are:

  • Block sizes do not impact performance, neither large nor small, as the OS dictates the block sizes used.
  • Large block sizes do not increase storage overhead as sub-blocks are used for small files. The sub-blocks are always 64KB.
  • With thin provisioning there are theoretically more locks when a thin disk is growing, but the locking mechanism has been vastly improved with vSphere, which means this can be neglected. A thin-provisioned VMDK on a 1MB block size VMFS volume grows in chunks of 1MB, and so on…
  • When separating OS from data it is important to select the same block size for both VMFS volumes, as otherwise it might be impossible to create snapshots.
  • When using a virtual RDM for data, the OS VMFS volume must have an appropriate block size. In other words, the maximum file size must accommodate the RDM size.
  • When growing a VMFS volume there is no way to increase the block size, and you may need to grow the volume in order to grow a VMDK, which could push the VMDK beyond the maximum file size for that block size (see the sketch below).
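To illustrate the last two bullets, here is a minimal Python sketch of the block size versus maximum file size relationship on VMFS-3. The limits are the commonly cited approximate values (the actual 8MB limit is 2TB minus 512 bytes), and the helper function is hypothetical.

# Approximate VMFS-3 limits: block size in MB -> maximum file size in GB.
MAX_FILE_SIZE_GB = {1: 256, 2: 512, 4: 1024, 8: 2048}

def fits(block_size_mb: int, vmdk_size_gb: int) -> bool:
    """Can a VMDK of this size live on a volume with this block size?"""
    return vmdk_size_gb <= MAX_FILE_SIZE_GB[block_size_mb]

print(fits(1, 300))  # False -> a 300GB VMDK needs at least a 2MB block size
print(fits(8, 300))  # True  -> going big avoids the problem entirely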

My recommendation would be to forget about the block size. Make your life easier and standardize, go big and make sure you have the flexibility you need now and in the future.

HA admission control, the answers…

Duncan Epping · Nov 9, 2009 ·

I received a whole bunch of questions about my two latest posts on HA admission control. I’ve added all the info to my HA Deepdive page, but just in case you don’t regularly read that section I will post them here as well:

  1. The default of 256MHz when no reservations are set is too conservative in my environment. What happens if you set a 100MHz reservation?
    Nothing. The minimum value VMware HA calculates with is 256MHz. Keep in mind that this applies both to slots and to the percentage-based admission control policy. Of course this can be overruled with an advanced setting (das.slotCpuInMHz), but I don’t recommend doing this.
  2. What happens if you have an unbalanced cluster and the largest host fails?
    If your admission control policy is based on the amount of host failures, VMware HA will take this into account. However, when you select a percentage this is not the case. You will need to make sure that you specify a percentage which is equal to, or preferably larger than, the percentage of resources provided by the largest host in the cluster. Otherwise there’s a chance that VMware HA can’t restart all virtual machines.
  3. What would your recommendation be: reserve a specific percentage, or set the amount of host failures VMware HA can tolerate?
    It depends. Yes, I know that is the obvious answer, but it really does. There are three options and each has its own advantages and disadvantages. Here you go:

    • Amount of host failures
      Pros: Fully automated; when a host is added to a cluster, HA calculates how many slots are available.
      Cons: Can be very conservative and inflexible when reservations are used, as the largest reservation dictates slot sizes (see the sketch after this list).
    • Percentage reserved
      Pros: Flexible. Although reservations still have an effect on the amount of available resources, the impact on the environment is smaller.
      Cons: Manual calculations need to be done when adding hosts to a cluster. Unbalanced clusters can be a problem when the chosen percentage is too low.
    • Designated failover host
      Pros: What you see is what you get.
      Cons: What you see is what you get.
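To show the slot mechanics from answers 1 and 3 in action, here is a rough Python sketch of the CPU side. The host and VM values are made up, and real HA admission control has more subtleties (memory slot size, the distribution of slots across hosts) than this toy example shows.

CPU_SLOT_MIN_MHZ = 256  # HA's minimum CPU slot size (das.slotCpuInMHz overrides it)

def slot_cpu_mhz(cpu_reservations_mhz):
    """CPU slot size = the largest CPU reservation, but never below 256MHz."""
    return max([CPU_SLOT_MIN_MHZ] + list(cpu_reservations_mhz))

def slots_per_host(host_capacity_mhz, slot_mhz):
    """How many slots a single host provides, given the cluster slot size."""
    return host_capacity_mhz // slot_mhz

# Ten VMs without a CPU reservation and one VM with a 2000MHz reservation:
reservations = [0] * 10 + [2000]
slot = slot_cpu_mhz(reservations)        # -> 2000MHz, not 256MHz
print(slot, slots_per_host(9600, slot))  # a 9.6GHz host now offers only 4 slots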

How to avoid HA slot sizing issues with reservations?

Duncan Epping · Nov 6, 2009 ·

Can I avoid large HA slot sizes caused by reservations without resorting to advanced settings? That’s a question I get almost daily. The answer used to be NO: HA uses reservations to calculate the slot size, and pre-vSphere there was no way to tell HA to ignore them without using advanced settings. So there is your answer: pre-vSphere.

With vSphere, VMware introduced a percentage-based option next to the amount of host failures. The percentage avoids the slot size issue as it does not use slots for admission control. So what does it use?

When you select a specific percentage, that percentage of the total amount of resources will stay unused for HA purposes. First of all, VMware HA adds up all available resources to see how much it has. Then VMware HA calculates how many resources are currently consumed by adding up all reservations of both memory and CPU for powered-on virtual machines. For those virtual machines that do not have a reservation, a default of 256MHz is used for CPU and a default of 0MB plus the memory overhead is used for memory. (The amount of overhead per configuration type can be found on page 28 of the Resource Management Guide.)

In other words:

((total amount of available resources – total reserved VM resources) / total amount of available resources)
Where total reserved VM resources includes the default reservation of 256MHz and the memory overhead of the VM.

Let’s use an example to make it a bit more clear:

Total cluster resources are 24GHz (CPU) and 96GB (MEM). This would lead to the following calculations:

((24GHz - (2GHz + 1GHz + 256MHz + 4GHz)) / 24GHz) ≈ 70% available
((96GB - (1.1GB + 114MB + 626MB + 3.2GB)) / 96GB) ≈ 95% available

Note that the memory side reserves more than just the configured reservations: even if a reservation has been set, the memory overhead of the virtual machine is added on top of it. For both metrics, HA admission control will constantly check whether the policy has been violated. When either of the two thresholds is reached, memory or CPU, admission control will disallow powering on any additional virtual machines. Pretty simple, huh?!
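Here is a minimal Python sketch of the percentage-based calculation, reproducing the example above; all values come from the example, and the function name is mine.

def available_pct(total, reserved_per_vm):
    """((total available resources - total reserved VM resources) / total)."""
    return (total - sum(reserved_per_vm)) / total * 100

# CPU: 24GHz cluster; reservations of 2GHz, 1GHz, the 256MHz default, and 4GHz.
cpu = available_pct(24000, [2000, 1000, 256, 4000])
# Memory: 96GB cluster; reservations plus each VM's memory overhead, in MB.
mem = available_pct(96 * 1024, [1.1 * 1024, 114, 626, 3.2 * 1024])
print(f"CPU: {cpu:.0f}% available, memory: {mem:.0f}% available")  # 70% / 95%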

Best Practices: running vCenter virtual (vSphere)

Duncan Epping · Oct 9, 2009 ·

Yesterday we had a discussion about running vCenter virtually on one of the internal mailing lists. One of the gaps identified was the lack of a best practices document. Although there are multiple documents for VI3, and there are some KB articles, these do not seem to be easy to find or complete. This is one of the reasons I wrote this article. Keep in mind that these are my recommendations and they do not necessarily align with VMware’s recommendations or requirements.

Sizing

Sizing is one of the most difficult parts in my opinion. As of vSphere the minimum requirements for vCenter have changed, but they go against my personal opinion on this subject. My recommendation would be to always start with 1 vCPU for environments with fewer than 10 hosts, for instance. Here’s my suggestion (a small helper that encodes it follows the list):

  • < 10 ESX hosts
    • 1 x vCPU
    • 3GB of memory
    • Windows 64-bit OS (preferred) or Windows 32-bit OS
  • > 10 ESX hosts but < 50 ESX hosts
    • 2 x vCPU
    • 4GB of memory
    • Windows 64-bit OS (preferred) or Windows 32-bit OS
  • > 50 ESX hosts but < 200 ESX hosts
    • 4 x vCPU
    • 4GB of memory
    • Windows 64-bit OS (preferred) or Windows 32-bit OS
  • > 200 ESX hosts
    • 4 x vCPU
    • 8GB of memory
    • Windows 64-bit OS (requirement)
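As a quick reference, here is a small Python helper encoding the sizing suggestions above. The behavior at the exact boundaries (10, 50, and 200 hosts) is my interpretation, and the values are this post’s recommendations, not VMware’s official minimums.

def vcenter_sizing(esx_hosts: int) -> dict:
    """Suggested vCenter VM sizing for a given number of ESX hosts."""
    if esx_hosts < 10:
        return {"vcpu": 1, "memory_gb": 3, "os": "Windows 64-bit (preferred)"}
    if esx_hosts < 50:
        return {"vcpu": 2, "memory_gb": 4, "os": "Windows 64-bit (preferred)"}
    if esx_hosts < 200:
        return {"vcpu": 4, "memory_gb": 4, "os": "Windows 64-bit (preferred)"}
    return {"vcpu": 4, "memory_gb": 8, "os": "Windows 64-bit (required)"}

print(vcenter_sizing(35))  # -> 2 vCPUs, 4GB of memory, 64-bit Windows preferred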

My recommendations differ from VMware’s. The reason for this is that in small environments (<10 hosts) there’s usually more flexibility for increasing resources in terms of scheduling downtime. Although 2 vCPUs are a requirement, I’ve seen multiple installations where a single vCPU was more than sufficient. Another argument for starting with a single vCPU would be “practice what you preach”. (How many times have you convinced an application owner to downscale after a P2V?!) I do, however, personally prefer to always use a 64-bit OS to enable upgrades to configurations with more than 4GB of memory when needed.

vCenter Server in an HA/DRS Cluster

  1. Disable DRS (change the automation level!) for your vCenter Server and make sure to document where the vCenter Server is located (my suggestion would be the first ESX host in the cluster).
  2. Make sure HA is enabled for your vCenter Server, and set the startup priority to high. (Default is medium for every VM.)
  3. Make sure the vCenter Server VM gets enough resources by setting the shares for both Memory and CPU to “high”.
  4. Make sure other services and servers on which vCenter depends also start automatically, with a high priority and in the correct order, such as:
    1. Active Directory.
    2. DNS.
    3. SQL.
  5. Write a procedure to boot the vCenter / AD / DNS / SQL servers manually in case a complete power outage occurs.

Most of these recommendations are pretty obvious, but you would be surprised how many environments I’ve seen where, for instance, MS SQL had a medium startup priority and vCenter a high priority. Or where, after a complete power outage, no one knew how to boot the vCenter Server. Documenting standard procedures is key here, especially now that with vSphere, vCenter is more important than ever before.

Sources:
http://kb.vmware.com/kb/1009080
http://kb.vmware.com/kb/1009039
ESX and vCenter Server Installation Guide
Upgrade Guide

