
Yellow Bricks

by Duncan Epping



NIC reordering

Duncan Epping · Jul 19, 2008 ·

I’ve seen this happen a lot: you’ve got NICs from multiple vendors in your ESX hosts and for some reason the numbering is all mixed up. The onboard NICs end up as vmnic0 and vmnic2, the PCI NICs as vmnic1 and vmnic3. This can be really confusing, and even more confusing when the renumbering is inconsistent across hosts. Instead of manually editing your esx.conf file, Allen Sanabria created a Python script that fixes this issue. Check out his blog for the full article and the script:

Could you believe that VMware says that a feature of their software will reorder your NICs after the kickstart?
So if this was the order of our NICs:
03:02.0 Ethernet controller: Intel Corporation 82546EB Gigabit Ethernet Controller (Copper) (rev 01)
03:02.1 Ethernet controller: Intel Corporation 82546EB Gigabit Ethernet Controller (Copper) (rev 01)
eth0 == 03:02.0
eth1 == 03:02.1
When VMware comes up it will reorder them so that vmnic0 points to 03:02.1 when it should be 03:02.0. This only happens when you have a box with NICs from multiple vendors. This script will take care of it for you.
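On ESX you can verify the current mapping with esxcfg-nics -l, which lists each vmnic together with its PCI address, before and after running the script. As a minimal sketch (the lines below are made-up sample data in the style of that output, not real esxcfg-nics output), sorting on the PCI address column shows which vmnic should come first:

```shell
# Made-up sample in the style of `esxcfg-nics -l` output:
# name, PCI address, driver, link state, speed
cat > /tmp/nics.txt <<'EOF'
vmnic0 03:02.1 e1000 Up 1000Mbps
vmnic1 03:02.0 e1000 Up 1000Mbps
EOF

# Sort on the PCI address (field 2): the NIC at 03:02.0 should be
# enumerated first, yet here it is mapped to vmnic1 -- exactly the
# kind of mismatch the script corrects in esx.conf
sort -k2 /tmp/nics.txt
```

If the vmnic numbers come out in descending order while the PCI addresses ascend, the enumeration is inverted, which is the symptom described above.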

Command line tips and tricks #3

Duncan Epping · Jul 10, 2008 ·

Enter maintenance mode from the ESX command line:

vimsh -n -e /hostsvc/maintenance_mode_enter

Backup every running vm via vcb in just one command:

for /f "tokens=2 delims=:" %%i in ('vcbvmname -h <virtualcenterserver> -u <user> -p <password> -s Powerstate:on ^| find "name:"') do cscript pre-command.wsf "c:\program files\vmware\vmware consolidated backup framework\" %%i fullvm

Enable VMotion from the command line:

vimsh -n -e "hostsvc/vmotion/vnic_set vmk0"

Write cache enabled or disabled

Duncan Epping · Jul 9, 2008 ·

BernieT wrote a nice blog about why you should enable write cache. Check out his findings; below is a short excerpt.

Explaining write modes (the basics).
Write through -> when a write request is received by the RAID controller, the controller will not respond to the OS with a “write success” until the data is written to the physical disk(s).

Write back -> when a write request is received by the RAID controller, the controller caches the request/data, responds to the OS with a “write success”, and then writes the data to the physical disk(s).
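The same trade-off can be demonstrated one layer up, at the OS page cache, with GNU dd on Linux (the file names below are arbitrary): a plain dd reports success once the data is in the cache, much like write back, while conv=fsync withholds success until the data has reached stable storage, much like write through.

```shell
# "Write back" analogue: dd returns as soon as the data is in the
# page cache; it may not be on disk yet
dd if=/dev/zero of=/tmp/wb.img bs=1M count=8 2>/dev/null

# "Write through" analogue: conv=fsync flushes the data to stable
# storage before dd reports success (GNU dd)
dd if=/dev/zero of=/tmp/wt.img bs=1M count=8 conv=fsync 2>/dev/null

# Both files end up identical in size; the difference is *when*
# success was reported relative to the data hitting the disk
ls -l /tmp/wb.img /tmp/wt.img
```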

Multiple virtual CPU vm’s

Duncan Epping · Jul 7, 2008 ·

I was always under the impression that ESX 3.x still used “strict co-scheduling” as 2.5.x did. In other words, when you have a multi-vCPU VM, all vCPUs need to be scheduled and started at the same time on separate cores/CPUs. You can imagine that this can cause the VM to have high “ready times”, i.e. time spent waiting for enough physical CPUs to be available to serve the multi-vCPU workload.

About a week ago and about a month ago, two blog posts appeared on this subject which clarify the way ESX handles vCPU scheduling as of 3.x. Read them for more in-depth information. (1, 2)

So in short: since 3.x, VMware ESX uses “relaxed co-scheduling”. Is it as relaxed as the name implies? Yes it is, and for a simple reason:

Idle vCPUs, vCPUs on which the guest is executing the idle loop, are detected by ESX and descheduled so that they free up a processor that can be productively utilized by some other active vCPU. Descheduled idle vCPUs are considered as making progress in the skew detection algorithm. As a result, for co-scheduling decisions, idle vCPUs do not accumulate skew and are treated as if they were running. This optimization ensures that idle guest vCPUs don’t waste physical processor resources, which can instead be allocated to other VMs.

In other words, VMs with multiple vCPUs no longer take up cycles when those vCPUs aren’t being used by the guest OS. ESX detects the idle loop on a vCPU and deschedules it, freeing the physical CPU for other vCPUs. This also means that when you run an application that actually uses all vCPUs, the same problems still exist as they did in ESX 2.5. My advice: don’t overdo it; one vCPU is more than enough most of the time!

And what appeared to be just a side note in the blog deserves special attention:

The %CSTP column in the CPU statistics panel shows the fraction of time the VCPUs of a VM spent in the “co-stopped” state, waiting to be “co-started”. This gives an indication of the coscheduling overhead incurred by the VM. If this value is low, then any performance problems should be attributed to other issues, and not to the coscheduling of the VM’s virtual cpus.

In other words, check esxtop to determine if there are coscheduling problems.
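esxtop can also run in batch mode (esxtop -b) so the counters can be filtered offline. As a minimal sketch, assuming a heavily simplified CPU-panel extract (the VM names and values below are invented), awk can flag VMs with a high %CSTP; the 3% threshold used here is a common rule of thumb, not an official VMware number:

```shell
# Invented, heavily simplified CPU-panel extract for illustration
cat > /tmp/cpu-panel.txt <<'EOF'
NAME    %USED  %RDY  %CSTP
vm-web  120.0   4.2    0.3
vm-db   310.5  18.7   12.9
EOF

# Skip the header and flag any VM co-stopped more than 3% of the time
awk 'NR > 1 && $4 > 3 {print $1 ": possible coscheduling overhead (%CSTP=" $4 ")"}' /tmp/cpu-panel.txt
```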

Command line tips and tricks #1

Duncan Epping · Jun 29, 2008 ·

Because I will be posting less about problems I face at customer sites in the upcoming weeks, I will try to post some cool command-line tips and tricks I discovered or picked up somewhere…

Open the ESX console, via PuTTY for instance, and type the following:
vm-support -x
Result: all the VMIDs, also known as World IDs, are listed.

And if your colleagues hardly ever clean up their snapshots:

find /vmfs/volumes -iname "*delta.vmdk"
Result: every delta file gets listed, including unregistered and/or orphaned snapshots!
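The -iname switch matches case-insensitively, which is useful since snapshot file names on a datastore are not guaranteed to use consistent casing. A quick demonstration on a scratch directory (all paths made up for illustration):

```shell
# Build a scratch tree that mimics a datastore layout
mkdir -p /tmp/vmfs-demo/vm1 /tmp/vmfs-demo/vm2
touch /tmp/vmfs-demo/vm1/vm1-000001-delta.vmdk
touch /tmp/vmfs-demo/vm2/VM2-000001-DELTA.VMDK
touch /tmp/vmfs-demo/vm2/vm2-flat.vmdk        # a base disk, not a snapshot

# -iname matches both the lower- and upper-case delta files,
# but leaves the flat base disk alone
find /tmp/vmfs-demo -iname "*delta.vmdk"
```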


About the Author

Duncan Epping is a Chief Technologist and Distinguished Engineering Architect at Broadcom. Besides writing on Yellow-Bricks, Duncan is the co-author of the vSAN Deep Dive and the vSphere Clustering Deep Dive book series. Duncan is also the host of the Unexplored Territory Podcast.
