
Yellow Bricks

by Duncan Epping


ESX

vSphere and Service Console Memory

Duncan Epping · Nov 24, 2009 ·

Today I read something I have not seen anywhere else before. I have always been under the impression that the memory reserved for the Service Console was increased from 272MB to 300MB. Although the bare minimum is indeed 300MB, there's another side to this story, something I did not expect but which actually does make sense. As of ESX 4.0 the allocated Service Console memory automatically scales up and down based on the amount of memory available during installation. Let's try to make that crystal clear:

  • ESX Host – 8GB RAM -> Default allocated Service Console RAM = 300MB
  • ESX Host – 16GB RAM -> Default allocated Service Console RAM = 400MB
  • ESX Host – 32GB RAM -> Default allocated Service Console RAM = 500MB
  • ESX Host – 64GB RAM -> Default allocated Service Console RAM = 602MB
  • ESX Host – 96GB RAM -> Default allocated Service Console RAM = 661MB
  • ESX Host – 128GB RAM -> Default allocated Service Console RAM = 703MB

Lessons learned:

  1. Allocated Service Console memory is based on a formula which takes the available RAM into account. (I haven't found the exact formula yet; if I do, I will of course add it to this article. A rough interpolation based on the values above follows this list.)
  2. Always make your swap partition 1600MB, as an increase in RAM might automatically lead to a swap partition which is too small.
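Since the exact formula isn't published anywhere I could find, the Python sketch below simply interpolates between the values observed above. It is an approximation for planning purposes only, not the installer's actual logic, and the names used are my own.

# Rough estimate of the default Service Console memory allocation in ESX 4.0,
# based only on the (host RAM in GB, Service Console RAM in MB) data points above.
OBSERVED = [(8, 300), (16, 400), (32, 500), (64, 602), (96, 661), (128, 703)]

def estimate_sc_memory(host_ram_gb):
    """Interpolate linearly between observed values; clamp below 8GB and above 128GB."""
    if host_ram_gb <= OBSERVED[0][0]:
        return OBSERVED[0][1]
    if host_ram_gb >= OBSERVED[-1][0]:
        return OBSERVED[-1][1]
    for (x0, y0), (x1, y1) in zip(OBSERVED, OBSERVED[1:]):
        if x0 <= host_ram_gb <= x1:
            return round(y0 + (y1 - y0) * (host_ram_gb - x0) / (x1 - x0))

for ram in (8, 24, 48, 128):
    print(f"{ram}GB host -> ~{estimate_sc_memory(ram)}MB Service Console RAM")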

vSphere ESX/vCenter 4.0 Update 1

Duncan Epping · Nov 20, 2009 ·

VMware just released ESX 4.0 Update 1 and vCenter 4.0 Update 1. Most people have already reported on this by now. Two things that stood out for me personally are the following:

  1. HA Cluster Configuration Maximum — HA clusters can now support 160 virtual machines per host in HA clusters of 8 hosts or fewer. The maximum number of virtual machines per host in cluster sizes of 9 hosts and above is still 40, allowing a maximum of 1280 virtual machines per HA cluster. (A quick sketch of this arithmetic follows below.)
  2. Enhanced Clustering Support for Microsoft Windows – Microsoft Cluster Server (MSCS) for Windows 2000 and 2003 and Windows Server 2008 Failover Clustering is now supported on a VMware High Availability (HA) and Distributed Resource Scheduler (DRS) cluster in a limited configuration. HA and DRS functionality can be effectively disabled for individual MSCS virtual machines, as opposed to disabling HA and DRS on the entire ESX/ESXi host. Refer to the Setup for Failover Clustering and Microsoft Cluster Service guide for additional configuration guidelines.

Especially the second is important, as many people have been building separate non-DRS/HA clusters solely for their MSCS VMs. As of now this is no longer needed: you can simply disable DRS and HA via the cluster properties to make sure your MSCS VMs do not move around. I think Update 1 is an important release for everyone running vSphere at this moment.
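For reference, the arithmetic behind the new HA maximums in the first item works out as in the quick Python sketch below. It only illustrates the quoted limits (160 VMs per host up to 8 hosts, 40 per host from 9 hosts onward, 1280 per cluster); the function name is mine, not anything from vCenter.

# Maximum VMs per HA cluster under the Update 1 limits quoted above.
def max_vms_per_ha_cluster(hosts):
    per_host = 160 if hosts <= 8 else 40
    return min(hosts * per_host, 1280)

for hosts in (4, 8, 9, 16, 32):
    print(f"{hosts:>2} hosts -> up to {max_vms_per_ha_cluster(hosts)} VMs")

# Note how 8 hosts x 160 and 32 hosts x 40 both land on the same 1280 VM ceiling.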

Of course you View guys were all waiting for Update 1 to drop:

  • VMware View 4.0 support – This release adds support for VMware View 4.0, a solution built specifically for delivering desktops as a managed service from the protocol to the platform.

Full ESX 4.0 U1 Release Notes
Full vCenter 4.0 U1 Release Notes

Something else I noticed… The release notes for ESX talk about “vMotion” whereas the release notes for vCenter talk about “VMotion”. It seems that VMotion is about to be renamed to vMotion.

in the ghetto….

Duncan Epping · Nov 18, 2009 ·

William Lam just updated two of his most popular scripts. If you haven’t looked at them yet, make sure you do, as they are worth it. ghettoVCB(g2) enables the backup of virtual machines residing on either an ESX or ESXi host. ghettoVCBg2 is a completely rewritten and enhanced version of ghettoVCB, or as William puts it: “harder, better, faster, stronger”.

ghettoVCBg2

11/17/09 – The following enhancements and fixes have been implemented in this release of ghettoVCBg2. Special thanks goes out to Gerhard Ostermann for assisting with some of the logic in the ghettoVCBg2 script, and to the rest of the ghettoVCBg2 BETA testers. Thanks for everyone’s time and comments to make this script better!

Enhancements:

  • Email log support
  • Include/exclude specific VMDK(s)
  • Additional logging + dry run mode

Fixes:

  • Independent disk aware
  • Large VMDK backups

Original script, but updated with new features and a bug fix:

ghettoVCB

11/17/09 – The following enhancements and fixes have been implemented in this release of ghettoVCB. Special thanks goes out to all the ghettoVCB BETA testers for providing time and their environments to test features/fixes of the new script!

Enhancements:

  • Individual VM backup policy
  • Include/exclude specific VMDK(s)
  • Logging to file
  • Timeout variables
  • Configurable snapshot memory/quiesce
  • Adapter format
  • Additional logging + dry run mode
  • Support for both physical/virtual RDMs

Fixes:

  • Independent disk aware

Resource Pools and Shares

Duncan Epping · Nov 13, 2009 ·

I just wanted to write a couple of lines about Resource Pools. During most engagements I see environments where Resource Pools have been implemented together with shares. These Resource Pools are usually labeled “Low”, “Normal” and “High”, with the shares set accordingly. This is the traditional example used during the VMware vSphere / VI3 course. Why am I writing about this, you might ask, when many have successfully deployed environments with resource pools?

The problem I have with default implementations is the following:

Sibling resource pools share resources according to their relative share values.

Please read this line a couple of times, and then look at the following example:

What’s the issue here?

RP-01 -> 2000 Shares -> 6 VMs
RP-02 -> 1000 Shares -> 3 VMs

So what happens if these 9 VMs start fighting for resources? Most people assume that the 6 VMs which are part of RP-01 get more resources than the 3 VMs in RP-02. Especially when you name the pools “Low” and “Normal”, you expect the VMs which are part of “Low” to get a lower amount of resources than those which belong to the “Normal” resource pool. But is this the case?

No, it is not. Sibling resource pools share resources according to their relative share values. In other words, resources are divided at the resource pool level, not at the per-VM level. So what happens here? RP-01 will get 66% of the resources and RP-02 will get 33% of the resources. But because RP-01 contains twice as many VMs as RP-02, this makes no difference when all VMs are fighting over resources: each VM will roughly get the same amount of processor time. This is something that not many people take into account when designing an infrastructure or when implementing resource pools.
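To make that concrete, here is a small Python sketch of the arithmetic, assuming the VMs within each pool are treated equally:

# Shares are divided between sibling resource pools first,
# and only then between the VMs inside each pool.
pools = {
    "RP-01": {"shares": 2000, "vms": 6},
    "RP-02": {"shares": 1000, "vms": 3},
}

total_shares = sum(p["shares"] for p in pools.values())

for name, p in pools.items():
    pool_fraction = p["shares"] / total_shares   # what the pool gets under contention
    per_vm = pool_fraction / p["vms"]            # what each VM inside it ends up with
    print(f"{name}: pool gets {pool_fraction:.1%}, each VM gets ~{per_vm:.1%}")

# RP-01: pool gets 66.7%, each VM gets ~11.1%
# RP-02: pool gets 33.3%, each VM gets ~11.1%

Despite the 2:1 share ratio, each VM ends up with roughly the same entitlement, which is exactly the behavior described above.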

VMFS Metadata size?

Duncan Epping · Nov 11, 2009 ·

When designing your VMware vSphere / VI3 environment there are so many variables to take into account that it is easy to get lost. Something hardly anyone seems to take into account when creating VMFS volumes is that the metadata will also take up a certain amount of disk space. You might think that everyone has at least 10% disk space free on a VMFS volume, but this is not the case. Several of my customers have dedicated VMFS volumes for a single VMDK and noticed during the creation of the VMDK that they just lost a specific number of MBs. Most of you will have guessed by now that this is due to the metadata, but how much disk space will the metadata actually consume?

There’s a simple formula that can be used to calculate how much disk space the metadata will consume. This formula used to be part of the “SAN System Design and Deployment Guide” (January 2008) but seems to have been removed in the updated versions.

Approximate metadata size in MB = 500MB + ((LUN size in GB – 1) x 0.016)

For a 500GB LUN this would result in the following:

500 MB + ((500 - 1) x 0.016) = 507.984 MB
Roughly 1% of the total disk size used for metadata

For a 1500MB (1.5GB) LUN this would result in the following:

500 MB + ((1.5 - 1) x 0.016) = 500.008 MB
Roughly 33% of the total disk size used for metadata

As you can see, for a large VMFS volume (500GB) the disk space taken up by the metadata is only about 1% and can almost be neglected, but for a very small LUN it will consume a lot of the disk space and needs to be taken into account.

[UPDATE]: As mentioned in the comments, the formula seems to be incorrect. I’ve looked into it and it appears that this is the reason it was removed from the documentation. The current limit for metadata is 1200MB, and that is the number you should use when sizing your datastores.
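For completeness, here is a short Python sketch comparing both approaches mentioned above: the old guide's formula and the flat 1200MB figure from the update. Treat the numbers as rough planning values only; the function name and the usable-space calculation are my own illustration, not something ESX reports.

METADATA_CEILING_MB = 1200  # figure mentioned in the update above

def metadata_estimate_mb(lun_size_gb):
    """Old (since withdrawn) guide formula: 500MB + ((LUN size in GB - 1) x 0.016), in MB."""
    return 500 + (lun_size_gb - 1) * 0.016

for lun_gb in (1.5, 100, 500):
    formula = metadata_estimate_mb(lun_gb)
    usable = lun_gb * 1024 - METADATA_CEILING_MB
    print(f"{lun_gb:>5}GB LUN: formula ~{formula:.2f}MB, "
          f"a 1200MB budget leaves ~{usable:.0f}MB for VMDKs")

Either way, the smaller the LUN, the larger the relative chunk the metadata takes out of it.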

