
Yellow Bricks

by Duncan Epping


performance

Overcommitting memory, the war is on…

Duncan Epping · Mar 19, 2008 ·

I blogged about this yesterday, and a couple of hours later several articles on this topic popped up on the VMware website. Read them, and be prepared for another round of Citrix/MS blogs trying to discredit this feature. In my opinion it’s simple: any feature that lets me virtualize more users/servers on the same or less hardware, without significantly reducing performance, is worth looking into and testing. Looking at the “services” (SBC/VDI) discussed in these articles, I can imagine why Citrix is so busy defending its own technology and discrediting this feature: it’s something they didn’t think of or haven’t developed yet.

  1. VMware: Memory Overcommitment in the Real World
  2. Scott Lowe: More on memory overcommitment
  3. Cheap HyperVisors
  4. The comment
  5. The Discussion

Talking about competition and cheap hypervisors… rumors are going around that Dell will be supplying servers with VMware ESX 3i at no additional cost, free, no license… So the war is on, but before you guys start bashing, check the numbers in article number 1: 178 VMs with 512MB of memory only using 19.07GB instead of the 89GB they would be using otherwise.
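To put that number in perspective (my own back-of-the-envelope math, not taken from the VMware article): 178 VMs × 512MB = 91,136MB, or roughly 89GB of configured guest memory, yet only 19.07GB of it had to be backed by physical RAM. That is about a fifth of what a hypervisor without memory overcommitment would need for the same workload.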

Fiber channel round-robin load balancing

Duncan Epping · Mar 5, 2008 ·

There’s a nice article about “round-robin” load balancing on SystemsArchitech which got me a bit dazzled about this new functionality:

esxcfg-mpath --lun <*.lun> --policy custom -H minq -T any -C 0 -B 2048
The policy states that the LUN should use a custom policy that determines which (of two) HBAs to use, based on the minimum queue length. This HBA selection is triggered every 2048 blocks transmitted to a given LUN over the same target. The policy will use any targets available to either of the two HBAs. When using storage that manages host port load balancing, LUNs will only have two paths (one per fabric) and the storage array will perform host port balancing within its own management. With other storage arrays it is typical to perform this host port balancing from the hosts accessing the given storage array.
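Before (or after) experimenting with a custom policy, the standard multipathing list command on the ESX service console shows what you are starting from; this is simply the stock tool, not something specific to the SystemsArchitech article, and the exact output format differs per build:

esxcfg-mpath -l

Each LUN is listed with its current path selection policy and the state of every path, which makes it easier to spot whether I/O is bouncing between storage processors.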

With 10 ESX hosts set to this policy, wouldn’t it probably cause path thrashing on an active/active SAN? You never know which controller will access a specific LUN, and if you’re unlucky it will be switching from controller A to controller B every millisecond. Does anyone else have any thoughts on this new feature and the possible danger?

What’s New in VMware Infrastructure 3: Performance Enhancements

Duncan Epping · Mar 2, 2008 ·

There’s a new PDF about performance enhancements in ESX 3.5:

The new features in VMware® Infrastructure 3 make it even easier for organizations to virtualize their most demanding and intense workloads. The new version of VMware Infrastructure 3 provides significant performance enhancements, including the release of VMware ESX Server 3.5 and a new ultra-thin hypervisor called VMware ESX Server 3i that can significantly…

Download:
http://www.vmware.com/files/pdf/vi3_performance_enhancements_wp.pdf

CPU utilization increasing after VMotion in a DRS enabled cluster

Duncan Epping · Jan 24, 2008 ·

VMwarewolf already posted this fix on his blog but had to remove it… Now VMware has added it to their knowledge base. Check out the original article, because it may change over time. For the lazy people, I have included how to diagnose the problem and more…

Diagnose the problem:

  1. Use the VI Client to log in to VirtualCenter as an administrator.
  2. Disable DRS in the cluster and wait for 1 minute.
  3. In the VI Client, note the virtual machine’s CPU usage from performance tab.
  4. In the VI Client, note the virtual machine’s memory overhead in the summary tab.
  5. Enable DRS in the cluster.
  6. Use VMotion to move the problematic virtual machine to another host.
  7. Note the virtual machine CPU usage and memory overhead on the new host.
  8. Disable DRS in the cluster and wait for 1 minute.
  9. Note the virtual machine CPU usage and memory overhead on the new host.

If the CPU usage of the virtual machine increases in step 7 in comparison to step 3, and decreases back to the original state (similar to the behavior in step 3) in step 9 with an observable increase in the overhead memory, this indicates the issue discussed in this article.

You do not need to disable DRS to work around this issue.
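If you would rather watch this from the service console than from the VI Client performance tab, esxtop gives a live per-VM view; this is just an alternative way of observing the symptom, not part of the KB procedure:

esxtop

Press “c” for the CPU view and keep an eye on the %USED column of the affected virtual machine, or “m” for the memory view to watch its overhead-related columns while you VMotion it around.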

The workaround:

  1. Use the VI Client to log in to VirtualCenter as an administrator.
  2. Right-click your cluster from the inventory.
  3. Click Edit Settings.
  4. Ensure that VMware DRS is shown as enabled. If it is not enabled check the box to enable VMware DRS.
  5. Click OK.
  6. Click an ESX Server from the Inventory.
  7. Click the Configuration tab.
  8. Click Advanced Settings.
  9. Click the Mem option.
  10. Locate the Mem.VMOverheadGrowthLimit parameter.
  11. Change the value of this parameter to 5. (Note: By default this setting is set to -1. A service console alternative is sketched right after this list.)
  12. Click OK.
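For those who would rather use the service console than the VI Client, the same advanced setting can normally be read and changed with the standard esxcfg-advcfg tool (the option path below assumes it matches the name shown in the VI Client):

esxcfg-advcfg -g /Mem/VMOverheadGrowthLimit
esxcfg-advcfg -s 5 /Mem/VMOverheadGrowthLimit

The first command shows the current value (-1 by default), the second sets it to 5; either way, verify the result in the vmkernel log as described below.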

To verify the setting has taken effect:

Log in to your ESX Server service console as root from either an SSH Session or directly from the console of the server.

  1. Type less /var/log/vmkernel.

A successfully changed setting displays a message similar to the following and no further action is required:
vmkernel: 1:16:23:57.956 cpu3:1036)Config: 414: “VMOverheadGrowthLimit” = 5, Old Value: -1, (Status: 0x0)

If changing the setting was unsuccessful a message similar to the following is displayed:
vmkernel: 1:08:05:22.537 cpu2:1036)Config: 414: “VMOverheadGrowthLimit” = 0, Old Value: -1, (Status: 0x0)

Note: If you see a message changing the limit to 5 and then changing it back to -1, the fix is not successfully applied.
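Rather than paging through the entire log with less, a quick grep narrows it down to the relevant lines (just a convenience, not part of the KB steps):

grep VMOverheadGrowthLimit /var/log/vmkernel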

To fix multiple ESX Server hosts:

If this parameter needs to be changed on several hosts (or if the workaround fails for an individual host), use the following procedure to implement the workaround instead of changing every server individually:

  1. Log on to the VirtualCenter Server Console as an administrator.
  2. Make a backup copy of the vpxd.cfg file (typically it is located in C:\Documents and Settings\All Users\Application Data\VMware\VMware VirtualCenter\vpxd.cfg).
  3. In the vpxd.cfg file, add the following configuration after the <vpxd> tag:
    <cluster>
    <VMOverheadGrowthLimit>5</VMOverheadGrowthLimit>
    </cluster>
    This configuration provides an initial growth margin, in MB, for the virtual machine overhead memory. You can increase this value if doing so further improves virtual machine performance. (See the sketch after this list for how the snippet fits into the file.)
  4. Restart the VMware VirtualCenter Server Service. Note: When you restart the VMware VirtualCenter Server Service, the new value for the overhead limit should be pushed down to all the clusters in VirtualCenter.
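For reference, this is roughly how the snippet ends up nested inside vpxd.cfg (a simplified sketch; a real vpxd.cfg contains many other settings that should be left untouched):

<config>
  <vpxd>
    <cluster>
      <VMOverheadGrowthLimit>5</VMOverheadGrowthLimit>
    </cluster>
    <!-- existing vpxd settings remain here -->
  </vpxd>
</config>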

What about those Jumbo Frames?

Duncan Epping · Jan 3, 2008 ·

Support for Jumbo Frames is one of the major new features of ESX 3.5. Especially for people who are using an iSCSI SAN, configuring jumbo frames can be very beneficial. Instead of having an MTU (the maximum size of a transmitted packet) of 1500, an MTU of 9000 becomes possible, which cuts out a lot of the iSCSI overhead. But are jumbo frames supported in 3.5? Answer: yes and no.
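To give a rough idea of what enabling this looks like on an ESX 3.5 host (a sketch only: vSwitch1 is an example name, the physical switches and the iSCSI target have to support an MTU of 9000 as well, and the VMkernel interface needs a matching MTU too):

esxcfg-vswitch -m 9000 vSwitch1
esxcfg-vswitch -l

The first command raises the MTU of the vSwitch, the second lists the vSwitches so you can verify the new value.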

