
Yellow Bricks

by Duncan Epping


esxi

VMware ESX(i) 3.5 U4 released!

Duncan Epping · Mar 31, 2009 ·

VMware has just released ESX(i) 3.5 Update 4. You can find the release notes here. I’ve picked a couple of bullet points which I think are important; please read the release notes for the other improvements and known issues!

Be sure to also read the compatibility matrix: ESX(i) 3.5 U4 requires vCenter 2.5 U2 or higher, and be sure to check whether your Hardware Management Agent is still supported.

Newly supported features:

  • PXE booting VMware ESX Server 3i version 3.5 Update 4 Installable is an experimental feature and is supported as such! (http://kb.vmware.com/kb/1008971 , http://kb.vmware.com/kb/1009034)
  • LUN Queue depth throttling for 3PAR Arrays (http://kb.vmware.com/kb/1008113)
  • Increasing the VMklinux module heap size:
    esxcfg-advcfg -k <mbs> vmklinuxHeapMaxSizeMB

Newly supported Guest Operating Systems:

  • SUSE Linux Enterprise Server 11 (32-bit and 64-bit).
  • SUSE Linux Enterprise Desktop 11 (32-bit and 64-bit).
  • Ubuntu 8.10 Desktop Edition and Server Edition (32-bit and 64-bit).
  • Windows Preinstallation Environment 2.0 (32-bit and 64-bit).

Newly supported SAS/SATA controllers:

  • PMC 8011 (for SAS and SATA drives)
  • Intel ICH9
  • Intel ICH10
  • CERC 6/I SATA/SAS Integrated RAID Controller (for SAS and SATA drives)
  • HP Smart Array P700m Controller

Newly supported Storage Arrays:

  • Sun StorageTek 2530 SAS Array
  • Sun Storage 6580 Array
  • Sun Storage 6780 Array

Expanded support for the Enhanced VMXNET driver:

  • Microsoft Windows Server 2003, Standard Edition (32-bit)
  • Microsoft Windows Server 2003, Standard Edition (64-bit)
  • Microsoft Windows Server 2003, Web Edition
  • Microsoft Windows Small Business Server 2003
  • Microsoft Windows XP Professional (32-bit)

Update: VMware Health Check Report 0.94

Duncan Epping · Mar 27, 2009 ·

William Lam posted an update of his Health Check script on the VMTN Communities. I’ve been using this script extensively at several customer sites together with VIMA. Here are the release notes:

03-24-2009 – v0.9.4
Fixes:
-There was a bug reported by Duncan Epping and others regarding hosts that were appearing in the wrong cluster with respect to the portgroup listings, this should be fixed.

Enhancements:
-Detail Hardware Health sensor readings provided by CIM
-CDP Summary (individual cdp.pl available)

An 8MB VMFS blocksize doesn’t increase performance?

Duncan Epping · Mar 24, 2009 ·

VMFS blocksizes have always been a hot topic regarding storage performance. It has been discussed by many, including Eric Siebert on ITKE, and Gabe also opened a topic on VMTN and answered his own question at the bottom. Steve Chambers wrote a great article about disk alignment and blocksize on VI:OPS which also clearly states: “the VMFS block size is irrelevant for guest I/O.” Reading these articles/topics, we can conclude that an 8MB blocksize as opposed to a 1MB blocksize doesn’t increase performance.

But, is this really the case? Isn’t there more to it than meets the eye?

Think about thin provisioning for a second. If you create a thin provisioned disk on a datastore with a 1MB blocksize, the thin provisioned disk will grow in increments of 1MB. Hopefully you can see where I’m going. A thin provisioned disk on a datastore with an 8MB blocksize will grow in 8MB increments. Each time the thin provisioned disk grows, a SCSI reservation takes place because of the metadata changes. As you can imagine, an 8MB blocksize will decrease the number of metadata changes needed, which means fewer SCSI reservations. Fewer SCSI reservations equals better performance in my book.
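The arithmetic here is simple enough to sketch. The numbers below are invented for illustration (a 10GB thin vmdk), not measured on ESX, but they show why the block size matters for the reservation count:

```python
import math

def allocation_events(growth_mb: int, block_size_mb: int) -> int:
    """A thin disk grows one VMFS block at a time, so growing by
    growth_mb megabytes takes ceil(growth_mb / block_size_mb)
    allocations -- each one a metadata change, and thus a SCSI
    reservation on the LUN."""
    return math.ceil(growth_mb / block_size_mb)

growth = 10 * 1024  # a thin vmdk that eventually fills 10 GB
for bs in (1, 8):
    print(f"{bs}MB block size: {allocation_events(growth, bs)} allocation events")
```

Same amount of data written, eight times fewer reservations with the larger block size.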

For current VI3 environments, besides VDI, I hardly have any customers using thin provisioned vmdks. But with the upcoming version of ESX/vCenter this is likely to change, because the GUI will make it possible to create thin provisioned vmdks. Thin provisioned disks will not only be an option during the creation of vmdks; when you initiate a Storage VMotion you will also have the option to migrate to a thin provisioned disk. It’s more than likely that thin provisioned disks will become the standard in most environments to reduce storage costs. If they do, remember that when a thin provisioned disk grows a SCSI reservation takes place, and fewer reservations are definitely beneficial for the stability and performance of your environment.

HA enhancements, exploring the next version of ESX/vCenter

Duncan Epping · Mar 23, 2009 ·

Let’s start with a screenshot:

These are the properties of an HA cluster, as you can see there are two sections that changed:

  1. “Enable Host Monitoring” is a brand new feature. Anyone who did network maintenance while HA was enabled knows why this feature will come in handy. Those that didn’t: isolation response! If ESX is unable to send or receive its heartbeats and can’t ping its default isolation response address, it will shut down all VMs. To avoid this behavior you can switch off HA for a specific host with this new feature. In just four words: maintenance mode for HA.
  2. Besides the number of host failures a cluster can tolerate, you can also specify a percentage. With the “host failures” option VMware uses the highest values of CPU and memory reservations to calculate the number of slots. (For more on slots / slot sizes, read the Resource Management Guide for ESX 3.5.) With the new option “Percentage of cluster resources” this isn’t the case. This option uses the actual reservation of each VM and calculates the total percentage of cluster resources used based on those reservations. If no reservations have been made, it uses the default 256MHz / 256MB reservation. In other words, you will be more flexible and will get a higher consolidation ratio. If the default reservation values are too low, you can always use the advanced options to increase them. Another new option is “Specify a failover host”. This option can be compared to “das.defaultfailoverhost”. The good thing about this option is that the designated host will be used for failover only: DRS will not migrate VMs to this host, and it’s not possible to start VMs on this host.
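The two admission-control calculations above can be sketched side by side. The slot sizes and the 256MHz / 256MB defaults follow the text; the VM reservations and the host capacity are invented for illustration. One VM with a large reservation drags the slot size up for every VM, while the percentage policy only charges each VM for what it actually reserves:

```python
# (cpu_reservation_mhz, mem_reservation_mb) per VM -- hypothetical figures
vms = [
    (2000, 4096),            # one VM with large reservations
    (0, 0), (0, 0), (0, 0),  # most VMs with no reservation at all
]
DEFAULT_MHZ, DEFAULT_MB = 256, 256
host_mhz, host_mb = 20000, 65536  # capacity of one host (assumed)

# "Host failures" policy: the slot size is the largest reservation
# across all VMs, so every VM is counted at the worst-case size.
slot_mhz = max(max(cpu, DEFAULT_MHZ) for cpu, _ in vms)
slot_mb = max(max(mem, DEFAULT_MB) for _, mem in vms)
slots_per_host = min(host_mhz // slot_mhz, host_mb // slot_mb)

# "Percentage of cluster resources" policy: sum the actual reservations
# (falling back to the defaults) and compare against total capacity.
used_mhz = sum(max(cpu, DEFAULT_MHZ) for cpu, _ in vms)
used_mb = sum(max(mem, DEFAULT_MB) for _, mem in vms)

print("slots per host:", slots_per_host)
print("reserved MHz / MB:", used_mhz, used_mb)
```

With these numbers the slot policy admits only 10 VMs per host, while the percentage policy charges the cluster just 2768MHz / 4864MB, which is where the higher consolidation ratio comes from.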

Two new patches for ESX(i) 3.5

Duncan Epping · Mar 22, 2009 ·

VMware released two patches on the 20th of March. Both patches are related to an issue with the Broadcom bnx2x driver. If you are running hardware with Broadcom NetXtreme II 57710, 57711, or 57711E NICs, be sure to look into the patches below:

  1. ESX350-200903412-BG – PATCH – KB 1009232 – General
    Updates Kernel Source and VMNIX
  2. ESX350-200903411-BG – PATCH – KB 1009231 – Critical
    Updates the bnx2x Driver for Broadcom

Download them now!


About the Author

Duncan Epping is a Chief Technologist and Distinguished Engineering Architect at Broadcom. Besides writing on Yellow-Bricks, Duncan is the co-author of the vSAN Deep Dive and the vSphere Clustering Deep Dive book series. Duncan is also the host of the Unexplored Territory Podcast.
