
Yellow Bricks

by Duncan Epping



Service Console Memory, a common misunderstanding (ESX 4.0+)

Duncan Epping · Sep 21, 2010 ·

I was reading the Maximum vSphere book by Eric Siebert today and noticed something that I also spotted in Scott Lowe’s Mastering VMware vSphere book. Both Scott and Eric state that the default amount of Service Console memory assigned by ESX has been increased from 272MB to X. I deliberately use “X”, as Eric and Scott each mention a different value in their book.

The reason Scott and Eric mention different values can be explained easily, though, and I wrote about this a while ago: as of vSphere 4.0 there is no single default value anymore. I gave this feedback to Scott a while back, and of course he asked where this was documented. Back then it was nowhere to be found, except on my blog, which is not an official VMware publication. I asked our KB team to update the KB article that explains Service Console memory, and I just noticed that they have:


ESX 4.x hosts – the default amount of RAM is dynamically configured to a value between 300MB and 800MB, depending on the amount of RAM that is installed in the host. For example, if the host has 32GB of memory the service console RAM will be set to 500MB, while a host which has 128GB of RAM will see the service console RAM set to 700MB. The maximum has not changed from 800MB, which would be seen on hosts with 256GB of RAM or higher, if it is being dynamically allocated.

This is exactly what I observed almost a year ago:

  • ESX Host – 8GB RAM -> Default allocated Service Console RAM = 300MB
  • ESX Host – 16GB RAM -> Default allocated Service Console RAM = 400MB
  • ESX Host – 32GB RAM -> Default allocated Service Console RAM = 500MB
  • ESX Host – 64GB RAM -> Default allocated Service Console RAM = 602MB
  • ESX Host – 96GB RAM -> Default allocated Service Console RAM = 661MB
  • ESX Host – 128GB RAM -> Default allocated Service Console RAM = 703MB
  • ESX Host – 256GB RAM -> Default allocated Service Console RAM = 800MB
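
These observed values track a logarithmic curve fairly closely. Below is a minimal Python sketch of that approximation, assuming roughly 100MB extra per doubling of host RAM above 8GB, clamped between 300MB and 800MB; this is my curve fit to the numbers above, not an official VMware formula.

import math

def estimated_console_ram_mb(host_ram_gb):
    # Curve fitted to the observed values above: ~100MB extra per
    # doubling of host RAM beyond 8GB. NOT an official formula.
    estimate = 300 + 100 * math.log2(host_ram_gb / 8)
    return int(min(800, max(300, estimate)))

for gb in (8, 16, 32, 64, 96, 128, 256):
    print("%3dGB host -> ~%dMB Service Console RAM" % (gb, estimated_console_ram_mb(gb)))

This reproduces the observed values to within a few megabytes (for example, it yields 658MB for a 96GB host versus the observed 661MB).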

Just wanted to point this out, as I am certain that many people are not aware of this.

Memory states

Duncan Epping · Aug 17, 2010 ·

I was just browsing through the VSI/proc nodes and noticed the following:

Free memory state thresholds {
soft:64 pct
hard:32 pct
low:16 pct
}

As explained in Frank’s excellent article on memory reservations, ESX/ESXi uses memory states to determine which memory reclamation technique to use. The techniques available are transparent page sharing (TPS), ballooning, and swapping. Of course you will always want to avoid ballooning and swapping, but that is not the point here. The point is that, as far as I am aware, the thresholds for those states have always been:

  • High – 6%
  • Soft – 4%
  • Hard – 2%
  • Low – 1%

This is also what our documentation states. Now, if you do the math, you will notice that 64% of 6% is roughly 4% (3.84%, to be precise), and so on. Although the difference doesn’t seem substantial, it is something I wanted to document, just for completeness’ sake.
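
For completeness, here is the arithmetic as a tiny Python snippet; it reconstructs the documented percentages from the values shown in the VSI node, taking the 6% high threshold from the documentation quoted above:

high = 6.0  # the documented "high" state: 6% free memory

for name, pct_of_high in (("soft", 64), ("hard", 32), ("low", 16)):
    # The VSI node expresses each state as a percentage of "high".
    print("%s: %d%% of %.1f%% = %.2f%%" % (name, pct_of_high, high, pct_of_high / 100.0 * high))

# Prints: soft 3.84%, hard 1.92%, low 0.96% -- which round to the
# documented 4%, 2% and 1%.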

Storage Filters

Duncan Epping · Aug 11, 2010 ·

I was reading about Storage Filters last week and wanted to do a short write-up. I totally forgot about it until I noticed this new KB article. The KB article only discusses the LUN filters, though, and not the other filters that are available today.

Currently 4 filters have been made public:

  1. config.vpxd.filter.hostRescanFilter
  2. config.vpxd.filter.vmfsFilter
  3. config.vpxd.filter.rdmFilter
  4. config.vpxd.filter.SameHostAndTransportsFilter

The first filter on the list is one I discussed roughly a year ago. The “Host Rescan Filter” makes it possible to disable the automatic storage rescan that occurs on all hosts after a VMFS volume has been created. You might want to do this when you are adding multiple volumes and want to avoid repeated rescans, instead initiating a single rescan after you create the final volume. Setting “config.vpxd.filter.hostRescanFilter” to false disables the automatic rescan. In short, these are the steps needed:

  1. Open up the vSphere Client
  2. Go to Administration -> vCenter Server
  3. Go to Settings -> Advanced Settings
  4. If the key “config.vpxd.filter.hostRescanFilter” is not available add it and set it to false
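
The same change can also be scripted against vCenter. Below is a minimal sketch using pyVmomi, for illustration only; the hostname and credentials are placeholders, and I am assuming you want the key written vCenter-wide through the ServiceInstance OptionManager:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- replace with your own.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)

# vCenter advanced settings are exposed through the ServiceInstance
# OptionManager; adding the key with value "false" disables the filter.
opt = vim.option.OptionValue(key="config.vpxd.filter.hostRescanFilter",
                             value="false")
si.content.setting.UpdateOptions(changedValue=[opt])

Disconnect(si)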

To be honest, this is the only storage filter I would personally recommend using. For instance, setting “config.vpxd.filter.rdmFilter” to “false” will enable you to add a LUN as an RDM to a VM while that LUN is already used as an RDM by a different VM. That can be useful in very specific situations, like when MSCS is used, but in general it should be avoided, as data could be corrupted when the wrong LUN is selected.

The filter “config.vpxd.filter.vmfsFilter” can be compared to the RDM filter: when set to false, it enables you to overwrite a VMFS volume with a new VMFS volume or re-use it as an RDM. Again, not something I would recommend changing, as it could lead to loss of data, which has a serious impact on any organization.

The same goes for “config.vpxd.filter.SameHostAndTransportsFilter”. When it is set to “false”, you can add an “incompatible LUN” as an extent to an existing volume. An example of an incompatible LUN would be a LUN that is not presented to all hosts that have access to the VMFS volume it will be added to. To be honest, I can’t think of a single reason to change the default for this setting besides troubleshooting, but it is good to know these filters are there.

Each of the storage filters has its specific use case. In general, disabling storage filters should be avoided, except for “config.vpxd.filter.hostRescanFilter”, which has proven to be useful in specific situations.

Standby NICs in an “IP-Hash” configuration

Duncan Epping · Aug 6, 2010 ·

I was reviewing a document today and noticed something that I’ve seen a couple of times already. I wrote about Active/Standby setups for etherchannels a year ago, but this is a slightly different variant. Frank also wrote a more extensive article on it a while ago, and I just want to re-stress this point.

Scenario:

  • Two NICs
  • 1 Etherchannel of 2 links
  • Both Management and VMkernel traffic on the same switch

I created a simple diagram to depict this:

[Diagram: Service Console and VMkernel portgroups, each configured active/standby over a two-NIC etherchannel]

In the diagram above, each portgroup is configured active/standby. Take the Service Console: it has VMNIC0 as active and VMNIC1 as standby. The physical switch, however, is configured with both NICs active in a single channel.

Based on the algorithm etherchannels use, either of the two VMNICs may receive inbound traffic. The Service Console, however, will only send traffic outbound via VMNIC0. Even worse, the Service Console isn’t actively listening on VMNIC1 for incoming traffic, as it was placed in standby mode; standby means it will only be used when VMNIC0 fails. In other words, your physical switch will think it can use VMNIC1 for your Service Console, but your Service Console will not see the traffic coming in on VMNIC1, as it is configured as standby on the vSwitch. Or, to quote from Frank’s article…

it will sit in a corner, lonely and depressed, wondering why nobody calls it anymore.
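
To make the mismatch concrete, here is a rough Python sketch of how IP-hash picks an uplink; this is a simplified reading of the documented xor-based algorithm, not VMware’s actual implementation:

def ip_hash_uplink(src_ip, dst_ip, num_uplinks=2):
    # Simplified IP-hash: xor the last octets of the source and
    # destination IP, modulo the number of uplinks.
    src = int(src_ip.split(".")[-1])
    dst = int(dst_ip.split(".")[-1])
    return (src ^ dst) % num_uplinks

# The switch spreads return traffic for different peers across both
# uplinks -- including VMNIC1, which the vSwitch is not listening on.
for peer in ("10.0.0.1", "10.0.0.2", "10.0.0.3"):
    print(peer, "-> VMNIC%d" % ip_hash_uplink("10.0.0.10", peer))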

High physical switch CPU load?

Duncan Epping · Aug 4, 2010 ·

One of my customers experienced high CPU load on their physical switch. After some investigation, they noticed broadcast packets being sent every two seconds. The first reaction was that Beacon Probing was probably enabled.

Unfortunately this wasn’t the case. But VMware GSS came to the rescue and pointed us towards a KB article. Apparently a bug has been identified in 4.0 which causes this behaviour:

src: http://kb.vmware.com/kb/1024435

Problem:

  • ESX sends Beacon Packets when vDS/vSwitch are connected to more than one uplink.
  • ESX server sends periodic broadcast of Beacon Packets even if the vSwitch/vNetwork Distributed Switch (vDS) is not configured to use Beacon Probing for Network Failover Detection.
  • These packets have the virtual MAC of the vmnic in the Source MAC Address field.

Workaround:

# esxcfg-advcfg -s 0 /Net/MaxBeaconsAtOnce

The customer implemented this workaround and the problem is gone. From what I have been told, this issue does not exist in ESX(i) 4.1, so if you are experiencing it, an upgrade might be a better solution. In this case, due to the size of the environment, that was not an option.
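
In an environment too large to update by hand, the workaround could also be applied per host through the API. Below is a rough pyVmomi sketch; again, the connection details are placeholders, and I am assuming the setting is exposed as Net.MaxBeaconsAtOnce in each host’s advanced options:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)

# Walk every host in the inventory and apply the workaround.
view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    opt = vim.option.OptionValue(key="Net.MaxBeaconsAtOnce", value=0)
    host.configManager.advancedOption.UpdateOptions(changedValue=[opt])
    print("Updated", host.name)
view.DestroyView()

Disconnect(si)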

