
Yellow Bricks

by Duncan Epping



ESX 3.5 Update 2 available now!

Duncan Epping · Jul 26, 2008 ·

Am I the first one to notice this? VMware just released Update 2 for ESX(i) 3.5 and a whole bunch of new patches!

So what’s new?

  • Windows Server 2008 support – Windows Server 2008 (Standard, Enterprise, and Datacenter editions) is supported as a guest operating system. With VMware’s memory overcommit technology and the reliability of ESX, virtual machine density can be maximized with this new guest operating system to achieve the highest degree of ROI. Guest operating system customizations and Microsoft Cluster Server (MSCS) are not supported with Windows Server 2008.
  • Enhanced VMotion Compatibility – Enhanced VMotion compatibility (EVC) simplifies VMotion compatibility issues across CPU generations by automatically configuring server CPUs with Intel FlexMigration or AMD-V Extended Migration technologies to be compatible with older servers. Once EVC is enabled for a cluster in the VirtualCenter inventory, all hosts in that cluster are configured to ensure CPU compatibility for VMotion. VirtualCenter will not permit the addition of hosts which cannot be automatically configured to be compatible with those already in the EVC cluster.
  • Storage VMotion – Storage VMotion from a FC/iSCSI datastore to another FC/iSCSI datastore is supported. This support is extended on ESX/ESXi 3.5 Update 1 as well.
  • VSS quiescing support – When creating quiesced snapshot of Windows Server 2003 guests, both filesystem and application quiescing are supported. With Windows Server 2008 guests, only filesystem quiescing is supported. For more information, see the Virtual Machine Backup Guide and the VMware Consolidated Backup 1.5 Release Notes.
  • Hot Virtual Extend Support – The ability to extend a virtual disk while virtual machines are running is provided. Hot extend is supported for vmfs flat virtual disks without snapshots opened in persistent mode.
  • 192 vCPUs per host – VMware now supports increasing the maximum number of vCPUs per host to 192, given that the maximum number of virtual machines per host is 170 and that no more than 3 virtual floppy devices or virtual CDROM devices are configured on the host at any given time. This support is extended on ESX 3.5 Update 1 as well.

I really like the VSS support for snapshots; especially for VCB this is a great feature! And what about hot extending your hard disk? That makes a VMFS virtual disk as flexible as an RDM!
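For reference, extending a virtual disk from the service console looks roughly like this with vmkfstools; the path and new size below are made-up examples, and depending on your build you may prefer the VI Client “edit settings” route, so treat this as a sketch rather than the official procedure:

vmkfstools -X 30G /vmfs/volumes/datastore1/myvm/myvm.vmdk

Keep in mind that after extending the virtual disk you still have to extend the partition and filesystem inside the guest.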

For Hardware there are also a couple of really great additions:

  • 8Gb Fiber Channel HBAs – Support is available for 8Gb fiber channel HBAs. See the I/O Compatibility Guide for ESX Server 3.5 and ESX Server 3i for details.
  • SAS arrays – more configurations are supported.  See the Storage/SAN Compatibility Guide for ESX Server 3.5 and ESX Server 3i for details.
  • 10 GbE iSCSI initiator – iSCSI over a 10GbE interface is supported. This support is extended on ESX Server 3.5 Update 1, ESX Server version 3.5 Update 1 Embedded and ESX Server version 3.5 Update 1 Installable as well.
  • 10 GbE NFS support – NFS over a 10GbE interface is supported (a quick example of adding an NFS datastore follows this list).
  • IBM System x3950 M2 – x3950 M2 in a 4-chassis configuration is supported, complete with hardware management capabilities through multi-node Intelligent Platform Management Interface (IPMI) driver and provider. Systems with up to 32 cores are fully supported.  Systems with more than 32 cores are supported experimentally.
  • IPMI OEM extension support – Execution of IPMI OEM extension commands is supported.
  • System health monitoring through CIM providers – More Common Information Model (CIM) providers are added for enhanced hardware monitoring, including storage management providers provided by QLogic and Emulex.  LSI MegaRAID providers are also included and are supported experimentally.
  • CIM SMASH/Server Management API – The VMware CIM SMASH/Server Management API provides an interface for developers building CIM-compliant applications to monitor and manage the health of systems.  CIM SMASH is now a fully supported interface on ESX Server 3.5 and VMware ESX Server 3i.
  • Display of system health information – More system health information is displayed in VI Client for both ESX Server 3.5 and VMware ESX Server 3i.
  • Remote CLI – Remote Command Line Interface (CLI) is now supported on ESX Server 3.5 as well as ESX Server 3i. See the Remote Command-Line Interface Installation and Reference Guide for more information.
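As a side note, adding an NFS datastore from the service console is a one-liner; a rough sketch, where the server name, export path and datastore label are just placeholders:

esxcfg-nas -a -o nfs01.local -s /vols/vmware nfs_datastore1
esxcfg-nas -l

The second command simply lists the configured NAS datastores so you can verify the mount.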

One of the important things in my opinion is the full support for the CIM SMASH API! And iSCSI over a 10GbE interface, and the same goes for NFS! 8Gb Fibre Channel and SAS array support are great additions as well. VirtualCenter 2.5 Update 2 also adds a bunch of new features:

  • VMware High Availability – VirtualCenter 2.5 update 2 adds full support for monitoring individual virtual machine failures based on VMware tools heartbeats. This release also extends support for clusters containing mixed combinations of ESX and ESXi hosts, and minimizes previous configuration dependencies on DNS.
  • VirtualCenter Alarms – VirtualCenter 2.5 Update 2 extends support for alarms on the overall health of the server by considering the health of each of the individual system components such as memory and power supplies. Alarms can now be configured to trigger when host health degrades.
  • Guided Consolidation – now provides administrators with the ability to filter the list of discovered systems by computer name, IP address, domain name or analyzing status. Administrators can also choose to explicitly add physical hosts for analysis, without waiting for systems to be auto-discovered by the Consolidation wizard. Systems can be manually added for analysis by specifying either a hostname or IP address. Multiple hostnames or IP addresses, separated by comma or semi-colon delimiters, may also be specified for analysis. Systems can also be manually added for analysis by specifying an IP address range or by importing a file containing a list of hostnames or IP addresses that need to be analyzed for consolidation. Guided Consolidation also allows administrators to override the provided recommendations and manually invoke the conversion wizard.
  • Live Cloning – VirtualCenter 2.5 Update 2 provides the ability to create a clone of a powered-on virtual machine without any downtime for the running virtual machine. Administrators are therefore no longer required to power off a virtual machine in order to clone it.
  • Single Sign-On – You can now automatically authenticate to VirtualCenter using your current Windows domain login credentials on the local workstation, as long as the credentials are valid on the VirtualCenter server. This capability also supports logging in to Windows using Certificates and Smartcards. It can be used with the VI Client or the VI Remote CLI to ensure that scripts written using the VI Toolkits can take advantage of the Windows credentials of your current session to automatically connect to VirtualCenter.

One of the best new features described above, in my opinion, is the extension of Alarms! It’s awesome that VirtualCenter will report on hardware health! And what about that live cloning? That will definitely come in handy when troubleshooting a live production environment: just copy the server, start it without the network attached, and try to solve the problem!

DOWNLOAD it now:

ESX 3.5 Update 2
ESXi 3.5 installable Update 2

VirtualCenter 2.5 Update 2

VMware Consolidated Backup 1.5

Howto: Check if a LUN is being locked by the host?

Duncan Epping · Jul 23, 2008 ·

I just came across the following on the VMTN forum, which is a very useful command in my opinion. When metadata changes for a LUN, that LUN is locked by a host. Sometimes the lock isn’t released, which can cause weird situations. In that case you would want to know which host is locking the LUN, especially when you’ve got over a dozen hosts. Rubeck posted a reply to Vliegenmeppers’ question on the forum:

esxcfg-info -s | grep -i -B 12 pending
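Since the whole point is figuring out which of your hosts is holding the lock, a small loop can save a lot of typing. This is just a sketch: the hostnames are placeholders and it assumes you can SSH into the service console of every host:

for host in esx01 esx02 esx03; do
  echo "=== $host ==="
  ssh root@$host 'esxcfg-info -s | grep -i -B 12 pending'
done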

Thanks guys,

NIC reordering

Duncan Epping · Jul 19, 2008 ·

I’ve seen this happen a lot: you’ve got NICs from multiple vendors in your ESX hosts and for some reason the numbering is all screwed up. So the onboard NICs are vmnic0 and vmnic2 and the PCI NICs are vmnic1 and vmnic3. This can be really confusing, and even more confusing when the renumbering is inconsistent. Instead of manually editing your esx.conf file, Allen Sanabria created a Python script which fixes this issue. Check out this blog for the full article and the script:

Could you believe that VMware says that a feature of their software will reorder your NICs after the kickstart???
So if this was the order of our NICs:
03:02.0 Ethernet controller: Intel Corporation 82546EB Gigabit Ethernet Controller (Copper) (rev 01)
03:02.1 Ethernet controller: Intel Corporation 82546EB Gigabit Ethernet Controller (Copper) (rev 01)
eth0 == 03:02.0
eth1 == 03:02.1
When VMware comes up it will reorder them so that vmnic0 will point to 03:02.1 when it should be 03:02.0. Now this only happens when you have a box with multiple NICs from multiple vendors. This script will take care of it for you.
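By the way, a quick way to check the current mapping between vmnic numbers and PCI addresses on a host is the command below; it only lists the NICs and doesn’t change anything, so it’s safe to run before and after applying the script:

esxcfg-nics -l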

Command line tips and tricks #3

Duncan Epping · Jul 10, 2008 ·

Enter maintenance mode from the ESX command line:

vimsh -n -e /hostsvc/maintenance_mode_enter
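And the counterpart to leave maintenance mode again should be the following (same vimsh namespace, from memory, so double-check on your host):

vimsh -n -e /hostsvc/maintenance_mode_exit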

Back up every running VM via VCB in just one command:

for /f "tokens=2 delims=:" %%i in ('vcbvmname -h <virtualcenterserver> -u <user> -p <password> -s Powerstate:on ^| find "name:"') do cscript pre-command.wsf "c:\program files\vmware\vmware consolidated backup framework\" %%i fullvm

Enable VMotion from the command line:

vimsh -n -e "hostsvc/vmotion/vnic_set vmk0"

Multiple virtual CPU VMs

Duncan Epping · Jul 7, 2008 ·

I was always under the impression that ESX 3.x still used “strict co-scheduling” like 2.5.x did. In other words, when you have a multi-vCPU VM, all vCPUs need to be scheduled and started at the same time on separate cores/CPUs. You can imagine that this can cause the VM to have high “ready times”: waiting for physical CPUs to be ready to serve the multi-CPU workload.

About a week ago and a month ago, two blog posts appeared on this subject which clarify the way ESX does vCPU scheduling as of 3.x. Read them for more in-depth information. (1, 2)

So in short: since 3.x, VMware ESX uses “relaxed co-scheduling”. And is this as relaxed as the name implies? Yes it is, and for a simple reason:

Idle vCPUs, vCPUs on which the guest is executing the idle loop, are detected by ESX and descheduled so that they free up a processor that can be productively utilized by some other active vCPU. Descheduled idle vCPUs are considered as making progress in the skew detection algorithm. As a result, for co-scheduling decisions, idle vCPUs do not accumulate skew and are treated as if they were running. This optimization ensures that idle guest vCPUs don’t waste physical processor resources, which can instead be allocated to other VMs.

In other words, VMs with multiple vCPUs don’t take up cycles anymore when these vCPUs aren’t used by the OS. ESX checks the vCPUs for the idle process loop, and when a vCPU is idle it is descheduled so that the physical CPU is available for other vCPUs. This also means that when you are running an application that actually uses all vCPUs, the same problems still exist as they did in ESX 2.5. My advice: don’t overdo it, one vCPU is more than enough most of the time!

And something that appeared to be a sidenote in the blog, but deserves special attention, is the following statement:

The %CSTP column in the CPU statistics panel shows the fraction of time the VCPUs of a VM spent in the “co-stopped” state, waiting to be “co-started”. This gives an indication of the coscheduling overhead incurred by the VM. If this value is low, then any performance problems should be attributed to other issues, and not to the coscheduling of the VM’s virtual cpus.

In other words, check esxtop to determine if there are coscheduling problems.
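For those who want to check quickly: run esxtop on the host and look at the %CSTP column in the CPU panel. The key bindings below are from memory, so they may differ slightly per version:

esxtop
# press 'c' for the CPU panel, 'V' to show only virtual machines,
# 'e' to expand a VM's worlds; a consistently high %CSTP for a
# multi-vCPU VM points at co-scheduling overhead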
