
Yellow Bricks

by Duncan Epping


patches

VMware patches for #shellshock

Duncan Epping · Oct 2, 2014 ·

Last night a whole bunch of patches for the Shellshock security issue were released. Although I hope all of you have secured your datacenter against outside and inside threats by isolating networks, using firewalls, and so on, it would be wise to install these patches ASAP. The majority of Linux-based VMware appliances were impacted, but luckily patching them is not a huge effort. Below you can find a list of the patches and links to the downloads for your convenience.

  • VMware vCenter Server Appliance 5.0 U3b – download – release notes
  • VMware vCenter Server Appliance 5.1 U2b – download – release notes
  • VMware vCenter Server Appliance 5.5 U2a – download – release notes

Note that the downloads are in the middle of the list, so you need to scroll down before you see them. There are also patches for products like the VMware VSA, vSphere Replication, VC Ops etc. Make sure to download those as well!

VMware vCenter Update Manager 4.0 Update 1 Patch 1

Duncan Epping · Feb 27, 2010 ·

VMware just released VMware vCenter Update Manager 4.0 Update 1 Patch 1.

This patch resolves the following issues :

  • After upgrading Cisco Nexus 1000V VSM to the latest version, you might not be able to patch the kernel of ESX hosts attached to the vDS (KB 1015717). Upgrading the Cisco Nexus 1000V VSM to the latest version upgrades the Cisco Virtual Ethernet Module (VEM) on ESX hosts attached to the vDS. Subsequently, from the same vSphere Client instance, you might not be able to use a host patch baseline to apply patches to the ESX vmkernel64 or ESXi firmware of hosts attached to the vDS. Applying patches to ESX vmkernel64 or ESXi firmware requires that you include the compatible Cisco Nexus 1000V VEM patch in the baseline. However, this patch might not be available for selection in the Update Manager New Baseline wizard or in the Update Manager patch repository.
  • Upgrade of Cisco Nexus 1000V version 4.0(4)SV1(1) to version 4.0(4)SV1(2) with Update Manager might fail for hosts with certain patch levels (KB 1017069). If you are using Cisco Nexus 1000V version 4.0(4)SV1(1), and the ESX patch bulletins ESX400-200912401-BG or ESXi400-200912401-BG are installed on the host, you might not be able to upgrade to Cisco Nexus 1000V version 4.0(4)SV1(2).
  • Scanning of hosts in a cluster and staging of patches to hosts in a cluster might take a long time to finish. Previously, scanning and staging operations for hosts in a cluster ran sequentially, so these operations could take a long time to complete in clusters with many hosts. With this patch, scanning and staging run concurrently on all of the selected hosts in the cluster.

For details regarding these new fixes, please refer to the release notes.

VMware vCenter Update Manager 4.0 Update 1 Patch 1 is available for download.

VMware vCenter Update Manager 4.0 Update 1 is required for installation of this patch.

Fixed: Memory alarms triggered with AMD RVI and Intel EPT?

Duncan Epping · Sep 25, 2009 ·

I wrote about this two weeks ago and back in March, and the issues with false memory alerts caused by large page usage have finally been solved.

Source

Fixes an issue where a guest operating system shows high memory usage on Nehalem based systems, which might trigger memory alarms in vCenter. These alarms are false positives and are triggered only when large pages are used. This fix selectively inhibits the promotion of large page regions with sampled small pages. This provides a specific estimate instead of assuming a large page is active when one small page within it is active.
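The change the release notes describe can be illustrated with a toy model (not VMware code; the 2 MB/4 KB sizes and the sampling scheme here are assumptions for illustration): instead of counting a whole large page as active when any one sampled small page inside it is active, the patched estimator scales by the fraction of sampled small pages that are active.

```python
# Illustrative sketch of the "Guest Active Memory" estimation fix.
# Each region is a 2 MB large page; samples are activity bits for the
# sampled 4 KB small pages inside it.

LARGE_PAGE_KB = 2048  # assumed 2 MB large page
SMALL_PAGE_KB = 4     # assumed 4 KB small page

def active_estimate_old(regions):
    """Old behavior: if any sampled small page in a large page is active,
    the entire large page counts as active -> overestimates activity."""
    return sum(LARGE_PAGE_KB for samples in regions if any(samples))

def active_estimate_new(regions):
    """Patched behavior (as described): estimate activity from the
    fraction of sampled small pages that are active."""
    total = 0.0
    for samples in regions:
        if samples:
            total += LARGE_PAGE_KB * sum(samples) / len(samples)
    return total

# One large page where only 1 of 8 sampled small pages is active:
regions = [[True, False, False, False, False, False, False, False]]
print(active_estimate_old(regions))  # 2048 -> whole large page "active"
print(active_estimate_new(regions))  # 256.0 -> proportional estimate
```

With one active small page out of eight sampled, the old estimate charges the full 2 MB while the new estimate charges an eighth of it, which is exactly why the false alarms disappear.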

BEFORE INSTALLING THIS PATCH: If you have set Mem.AllocGuestLargePage to 0 to workaround the high memory usage issue detailed in the Summaries and Symptoms section, undo the workaround by setting Mem.AllocGuestLargePage to 1.
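On classic ESX, the Mem.AllocGuestLargePage option can be flipped from the service console; a sketch of the undo step, assuming service-console access (the same setting is also reachable through the vSphere Client under Configuration > Advanced Settings > Mem):

```shell
# Show the current value of the large-page allocation setting
esxcfg-advcfg -g /Mem/AllocGuestLargePage

# Restore the default of 1 before installing the patch, undoing the
# interim workaround of setting it to 0
esxcfg-advcfg -s 1 /Mem/AllocGuestLargePage
```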

Six patches have been released today, but this fix is probably the one people have talked about the most, which is why I wanted to make everyone aware of it! Download the patches here.

Site Recovery Manager 1.0 Update 1 Patch 4

Duncan Epping · Sep 14, 2009 ·

One of my colleagues, Michael White, just pointed out that VMware released a patch for Site Recovery Manager:

Site Recovery Manager 1.0 Update 1 Patch 4
File size: 7.9 MB
File type: .msi

Here are the most important fixes:

  • a problem that could cause a recovery plan to fail and log the message
    Panic: Assert Failed: “_pausing” @ d:/build/ob/bora-172907/santorini/src/recovery/secondary/recoveryTaskBase.cpp:328
  • a problem that caused the SRM SOAP API method getFinalStatus() to write all XML output on a single line
  • full session keys are no longer logged (partial keys are used in the log instead)
  • a problem that could cause SRM to crash during a test recovery and log the message
    Exception: Assert Failed: “!IsNull()” @ d:/build/ob/bora-128004/srm101-stage/santorini/public\common/typedMoRef.h:168
  • a problem that could cause a recovery plan test to fail to create test bubble network when recovering virtual machines that had certain types of virtual NICs
  • a problem that could cause incorrect virtual machine start-up order on recovery hosts that enable DRS
  • a problem that could cause the SRM server to crash while testing a recovery plan
  • a problem that could cause SRM to fail and log a “Cannot execute scripts” error when customizing Windows virtual machines on ESX 3.5 U1 hosts.
  • support for customizing Windows 2008 has been added
  • a problem that could prevent network settings from being updated during test recovery for guests other than Windows 2003 Std 32-bit
  • a problem that prevents protected virtual machines from following recommended Distributed Resource Scheduler (DRS) settings when recovering to more than one DRS cluster.
  • a problem observed at sites that support more than seven ESX hosts. If you refresh inventory mappings when connected to such a site, the display becomes unresponsive for up to ten minutes.
  • a problem that could prevent SRM from computing LUN consistency groups correctly when one or more of the LUNs in the consistency group did not host any virtual machines.
  • a problem that could cause the client user interface to become unresponsive when creating protection groups with over 300 members
  • several problems that could cause SRM to log a vim.fault.AlreadyExists error message when recomputing datastore groups
  • a problem that could cause SRM to log an Assert Failed: “ok” @ src/san/consistencyGroupValidator.cpp:64 error when two different datastores match a single replicated device returned by the SRA
  • a problem that could cause SRM to remove static iSCSI targets with non-test LUNs during test recovery
  • several problems that degrade the performance of inventory mapping

Memory alarms triggered with AMD RVI and Intel EPT?

Duncan Epping · Sep 11, 2009 ·

I’ve reported on this twice already, but it seems a fix will be offered soon. I discovered the problem back in March during a project where we virtualized a large number of Citrix XenApp servers on an AMD platform with RVI capabilities. As the hardware MMU increased performance significantly, it was enabled by default for 32-bit OSes. This is when we noticed that large pages (a side effect of enabling the hardware MMU) are not TPS’ed and thus give a totally different view of resource consumption than your average cluster. When vSphere and Nehalem were released, more customers experienced this behavior, as EPT (Intel’s version of RVI) is fully supported and utilized on vSphere, as reported in this article. To be absolutely clear: large pages were never supposed to be TPS’ed, so this is not a bug but working as designed. However, we did discover an issue with the algorithm used to calculate Guest Active Memory, which causes the alarms to be triggered, as “kichaonline” describes in this reply.
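As a rough illustration of why large pages change the memory picture (a sketch, not ESX’s actual implementation): TPS hashes small pages and collapses identical ones copy-on-write, and since two 2 MB pages are almost never byte-for-byte identical, large pages are simply never shared, so their memory shows up as consumed.

```python
# Illustrative sketch of content-based transparent page sharing (TPS)
# at 4 KB granularity. ESX skips 2 MB large pages because full-page
# matches are vanishingly rare, which is why enabling RVI/EPT (and thus
# large pages) makes guests appear to consume more memory.
import hashlib

def shared_savings(pages):
    """pages: list of bytes objects (4 KB page contents).
    Returns the number of pages reclaimed by collapsing duplicates."""
    seen = set()
    saved = 0
    for page in pages:
        digest = hashlib.sha1(page).digest()
        if digest in seen:
            saved += 1  # duplicate collapses onto the existing copy
        else:
            seen.add(digest)
    return saved

zero_page = bytes(4096)
pages = [zero_page, zero_page, zero_page, b"x" * 4096]
print(shared_savings(pages))  # 2 -> two duplicate zero pages reclaimed
```

Three identical zero pages collapse to one, reclaiming two; split the same memory into 2 MB chunks and the duplicates disappear, so nothing is reclaimed.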

I’m not going to reiterate everything that has been reported in this VMTN Topic about the problem, but what I would like to mention is that a patch will be released soon to fix the incorrect alarms:

Several people have, understandably, asked about when this issue will be fixed. We are on track to resolving the problem in Patch 2, which is expected in mid to late September.

In the meantime, disabling large page usage as a temporary work-around is probably the best approach, but I would like to reiterate that this causes a measurable loss of performance. So once the patch becomes available, it is a good idea to go back and reenable large pages.

Also a small clarification. Someone asked if the temporary work-around would be “free” (i.e., have no performance penalty) for Win2k3 x64 which doesn’t enable large pages by default. While this may seem plausible, it is however not the case. When running a virtual machine, there are two levels of memory mapping in use: from guest linear to guest physical address and from guest physical to machine address. Large pages provide benefits at each of these levels. A guest that doesn’t enable large pages in the first level mapping, will still get performance improvements from large pages if they can be used for the second level mapping. (And, unsurprisingly, large pages provide the biggest benefits when both mappings are done with large pages.) You can read more about this in the “Memory and MMU Virtualization” section of this document:

http://www.vmware.com/resources/techresources/10036

Thanks,
Ole
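Ole’s point about the two mapping levels can be sketched with a toy cost model (the level counts and the cost formula are a textbook simplification, not VMware’s numbers): in a nested walk, every step of the guest page walk itself requires a walk of the second-level (RVI/EPT) tables, so large pages at either level shorten the walk, and large pages at both levels shorten it most.

```python
# Toy model of a nested (two-dimensional) page-walk cost. With n guest
# levels and m host levels, a full nested walk costs roughly
# n*m + n + m = (n + 1) * (m + 1) - 1 memory accesses; large pages
# remove a level from the corresponding dimension.
def nested_walk_cost(guest_levels, host_levels):
    return (guest_levels + 1) * (host_levels + 1) - 1

print(nested_walk_cost(4, 4))  # 24 -> small pages at both levels
print(nested_walk_cost(4, 3))  # 19 -> host large pages help even if the
                               #       guest (e.g. Win2k3 x64) uses small pages
print(nested_walk_cost(3, 3))  # 15 -> large pages at both levels
```

This is why the workaround is not “free” for a guest that doesn’t use large pages itself: the second-level mapping still loses its large pages, and the nested walk gets longer.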

Mid to late September may sound too vague for some, and that’s probably why Ole reported the following yesterday:

The problem will be fixed in Patch 02, which we currently expect to be available approximately September 30.

Thanks,
Ole



About the author

Duncan Epping is a Chief Technologist in the Office of CTO of the Cloud Platform BU at VMware. He is a VCDX (# 007), the author of the "vSAN Deep Dive", the “vSphere Clustering Technical Deep Dive” series, and the host of the "Unexplored Territory" podcast.


Copyright Yellow-Bricks.com © 2023