
Yellow Bricks

by Duncan Epping


ESX

Re: Memory Compression

Duncan Epping · Mar 2, 2010 ·

I was just reading Scott Drummonds' article on Memory Compression. Scott explains where Memory Compression comes into play. I guess the part I want to respond to is the following:

VMware’s long-term prioritization for managing the most aggressively over-committed memory looks like this:

  1. Do not swap if possible.  We will continue to leverage transparent page sharing and ballooning to make swapping a last resort.
  2. Use ODMC to a predefined cache to decrease memory utilization.*
  3. Swap to persistent memory (SSD) installed locally in the server.**
  4. Swap to the array, which may benefit from installed SSDs.

(*) Demonstrated in the lab and coming in a future product.
(**) Part of our vision and not yet demonstrated.

I just love it when we give insights into upcoming features, but I am not sure I agree with the prioritization. There are several things that one needs to keep in mind. In other words, there is a cost associated with each of these decisions / features, and your design needs to be adjusted for these associated effects.

  1. TPS -> Although TPS is an amazing way of reducing the memory footprint, you will need to figure out what the deduplication ratio actually is (see the first sketch after this list). Especially when you are using Nehalem processors there is a serious decrease in effectiveness. The reasons for this decrease are the following:
    • NUMA – By default there is no inter-node transparent page sharing (read Frank’s article for more info on this topic)
    • Large Pages – By default TPS does not share large (2MB) pages; it only shares small (4KB) pages. Large pages will be broken down into small pages when memory is scarce, but it is definitely something you need to be aware of. (For more info read my article on this topic.)
  2. Use ODMC -> I haven’t tested with ODMC yet and I don’t know what the associated cost is at the moment.
  3. Swap on local SSD -> Swapping to a local SSD will most definitely improve the speed when swapping occurs. However, as Frank already described in his article, there is an associated cost:
    • Disk space – You will need to make sure there is enough disk space available to power on or migrate VMs, as these swap files are created at power-on or at migration.
    • Defaults – By default .vswp files are stored in the same folder as the .vmx. Changing this needs to be documented and taken into account during upgrades and design changes (the second sketch at the end of this post shows how to check the current placement per VM).
  4. Swap to array (SSD) -> This is the option most customers use, for the simple reason that it doesn’t require a local SSD disk. There are no changes needed to enable it, and it is easier to increase a SAN volume than it is to increase a local disk when needed. The associated costs, however, are:
    • Costs – Shared storage is relatively expensive compared to local disks
    • Defaults – If .vswp files need to be SSD-based, you will need to separate the .vswp files from the rest of the VM files and create dedicated shared SSD volumes.
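
To get a feel for how much TPS is actually reclaiming before you rely on it, you can compare the host-level shared memory counters. This is a minimal PowerCLI sketch, assuming an existing Connect-VIServer session and that the mem.shared.average and mem.sharedcommon.average realtime counters are available on your hosts; it is not an official tool.

# Estimate how much memory TPS is currently reclaiming per host (counter values are in KB)
foreach ($esx in Get-VMHost) {
    $shared = (Get-Stat -Entity $esx -Stat mem.shared.average -Realtime -MaxSamples 1 |
        Select-Object -First 1).Value
    $common = (Get-Stat -Entity $esx -Stat mem.sharedcommon.average -Realtime -MaxSamples 1 |
        Select-Object -First 1).Value
    # shared = guest memory backed by shared pages, sharedcommon = machine memory
    # actually consumed by those pages; the difference is what TPS saves
    "{0}: ~{1} MB reclaimed by TPS" -f $esx.Name, [math]::Round(($shared - $common) / 1024)
}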

I fully agree with Scott that it’s an exciting feature and I can’t wait for it to be available. Keep in mind though that there is a trade-off for every decision you make, and that the result of a decision might not always end up as you expected it would. Even though Scott’s list makes total sense, there is more than meets the eye.
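
Regarding points 3 and 4 in the list above: before moving .vswp files around it helps to know what the current placement policy per VM is. A minimal PowerCLI sketch, assuming an existing connection; SwapPlacement maps to the vSphere API property VirtualMachineConfigInfo.swapPlacement.

# Report the swapfile placement policy per VM so deviations from the default
# (.vswp next to the .vmx) are visible and can be documented. "inherit" means
# the cluster/host setting applies, "hostLocal" means a host-specified datastore is used.
Get-VM |
    Select-Object Name, @{ Name = 'SwapPlacement'; Expression = { $_.ExtensionData.Config.SwapPlacement } } |
    Sort-Object SwapPlacement, Name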

VMware vCenter Update Manager 4.0 Update 1 Patch 1

Duncan Epping · Feb 27, 2010 ·

VMware just released VMware vCenter Update Manager 4.0 Update 1 Patch 1.

This patch resolves the following issues:

  • After upgrading Cisco Nexus 1000V VSM to the latest version, you might not be able to patch the kernel of ESX hosts attached to the vDS (KB 1015717). Upgrading Cisco Nexus 1000V VSM to the latest version upgrades the Cisco Virtual Ethernet Module (VEM) on ESX hosts attached to the vDS. Subsequently, from the same vSphere Client instance, you might not be able to use a host patch baseline to apply patches to the ESX vmkernel64 or ESXi firmware of hosts attached to the vDS. Applying patches to ESX vmkernel64 or ESXi firmware requires that you include the compatible Cisco Nexus 1000V VEM patch in the baseline. However, this patch might not be available for selection in the Update Manager New Baseline wizard or in the Update Manager patch repository.
  • Upgrade of Cisco Nexus 1000V version 4.0(4)SV1(1) to version 4.0(4)SV1(2) with Update Manager might fail for hosts with certain patch levels (KB 1017069). If you are using Cisco Nexus 1000V version 4.0(4)SV1(1), and the ESX patch bulletins ESX400-200912401-BG or ESXi400-200912401-BG are installed on the host, you might not be able to upgrade to Cisco Nexus 1000V version 4.0(4)SV1(2).
  • Scanning of hosts in a cluster and staging of patches to hosts in a cluster might take a long time to finish. The scanning and staging operations of hosts in a cluster run sequentially, so if a cluster contains a lot of hosts, scanning and staging patches might take a long time to complete. With this patch, scanning and staging of hosts in a cluster run concurrently on all of the selected hosts in the cluster.

For details regarding these new fixes, please refer to the release notes.

VMware vCenter Update Manager 4.0 Update 1 Patch 1 is available for download.

VMware vCenter Update Manager 4.0 Update 1 is required for installation of this patch.

Custom shares on a Resource Pool, scripted

Duncan Epping · Feb 24, 2010 ·

We’ve spoken about Resource Pools a couple of times over the last months, and specifically about shares. (The Resource Pool Priority-Pie Paradox, Resource Pools and Shares.) The common question I received was how to solve this. The solution is simple: Custom Shares.

However, the operational overhead associated with custom shares is something most people want to avoid. Luckily for those who have the requirement to use share-based resource pools, one of my colleagues, Andrew Mitchell, shared a PowerShell script. This script sets custom shares based on a pre-defined weight and the number of VMs / vCPUs in the resource pool (a hedged sketch of the approach is included at the end of this post). I would recommend scheduling the script to run on a weekly basis to ensure the correct number of shares is set and to avoid running into one of the scenarios described in the articles above.

Please keep in mind that if you use nested resource pools you will need to run a separate script for each level in the hierarchy.

For example, if the resource pools are set up as follows, you will need one script to set the shares for RP1, RP2 and RP3, and another script to set the shares for RP1-Child1 and RP1-Child2.

RP1
>>RP1-Child1
>>RP1-Child2
RP2
RP3

Download the script here. Again, to emphasize: I am not the author, but we would appreciate it if you could share any modifications / enhancements to this script.
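
To illustrate the idea, here is a minimal PowerCLI sketch of the approach. This is not Andrew's script; the cluster name, the per-pool weights and the choice to hand out shares per vCPU are all assumptions made for the example.

# Give each resource pool directly under the cluster custom shares
# proportional to the vCPUs it contains, multiplied by a per-pool weight.
$cluster = Get-Cluster -Name "Cluster01"            # example cluster name
$weights = @{ "RP1" = 2; "RP2" = 1; "RP3" = 1 }     # example weight (shares per vCPU) per pool

foreach ($rp in Get-ResourcePool -Location $cluster | Where-Object { $weights.ContainsKey($_.Name) }) {
    # Count the vCPUs currently in (or below) this pool
    $vcpus = (Get-VM -Location $rp | Measure-Object -Property NumCpu -Sum).Sum
    if (-not $vcpus) { continue }                   # skip empty pools
    $shares = $vcpus * $weights[$rp.Name]
    # Custom share level so the pool's slice scales with what it actually contains
    Set-ResourcePool -ResourcePool $rp -CpuSharesLevel Custom -NumCpuShares $shares `
                     -MemSharesLevel Custom -NumMemShares $shares | Out-Null
}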

Cool Tool Update: RVTools 2.8.1

Duncan Epping · Feb 21, 2010 ·

Rob de Veij has just released a new version of RVTools. The update only contains bug fixes, but it is most definitely worth downloading again!

Version 2.8.1 (February 2010)

  • On vHost tab new field: number of running vCPUs
  • On vSphere, VMs in a vApp were not displayed.
  • Filter not working correctly when annotations or custom fields contain a null value.
  • When NTP server(s) = null, the time info fields were not displayed on the vHost tab page.
  • When the datastore name or virtual machine name contains spaces, the inconsistent folder name check was not working correctly.
  • Tools health check now only executed for running VMs.

VUM and downloading patches via PAC

Duncan Epping · Feb 16, 2010 ·

When I tried to download patches via a freshly installed VMware vSphere Update Manager today I received the following error:

https://hostupdate.vmware.com/software/VUM/PRODUCTION/index.xml;
hosting the patch definitions and patches cannot be reached or has no patch data

Although we configured a proxy, including the appropriate account, it would not work. As suggested in this KB article I removed the “http://” part of the proxy address, but it still bailed out with the error above. After trying several combinations I noticed that the configured address was actually a PAC address instead of a proxy server. A PAC file basically serves a list which contains the proxy details of the environment, which comes in handy when you’ve got multiple proxies for redundancy… In this case VMware Update Manager wasn’t fond of the PAC file. When I used the address of the proxy server itself instead of the host serving the PAC file, it worked like a charm…
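
If you want to see which proxy servers a PAC file actually points at, you can pull the PROXY entries out of it with a few lines of PowerShell. A minimal sketch; the PAC URL below is a placeholder, not the address from this environment, and Invoke-WebRequest assumes a reasonably recent PowerShell version.

$pacUrl  = "http://proxy.example.local/proxy.pac"   # placeholder URL
$pacBody = (Invoke-WebRequest -Uri $pacUrl -UseBasicParsing).Content

# PAC files are JavaScript; the return values look like "PROXY host:port; DIRECT"
[regex]::Matches($pacBody, 'PROXY\s+([^;"\s]+)') |
    ForEach-Object { $_.Groups[1].Value } |
    Sort-Object -Unique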

