
Yellow Bricks

by Duncan Epping


ESX

Open source VI (vSphere) Java API 2.0 GA!

Duncan Epping · Jun 26, 2009 ·

For all the developers out there, I just received the following from my colleague Steve Jin:

VI (vSphere) Java API 2.0 was GAed last night. The 2.0 release represents 6 months of continuous (after work) engineering effort since this January. It is packed with many features:

  • New high-performance web service engine. When I told people that we had replaced AXIS, most of them wanted me to confirm what I said. The new engine loads 15x faster and deserializes more than 4x faster than AXIS 1.4, at only a quarter of the size.
  • vSphere 4 support.
  • REST client API.
  • Caching framework API.
  • Multiple version support with a single set of APIs.
  • Clean licenses. The API and the dependent dom4j are both BSD-licensed.

The open source project was sponsored by VMware but not supported by VMware. To download it, visit http://vijava.sf.net

load balancing active/active SANs part II

Duncan Epping · Jun 26, 2009 ·

About a year ago I wrote about a script that load balances your Active/Active SAN by evenly dividing LUNs across all available paths. A week ago I provided Kees van Vloten with this script so that it could be incorporated into a scripted install solution. Kees has enhanced the script and emailed it to me so that I could share it with you guys:


for N_PATHS in 2 4 6 8; do
    # These are the LUNs with N_PATHS paths:
    LUN_LIST=`esxcfg-mpath -l | egrep "^Disk.+has $N_PATHS paths" | awk '{print $2}'`
    N=1
    for LUN in $LUN_LIST; do
        echo "LUN: $LUN, Counter: $N, Possible paths:"
        esxcfg-mpath -q --lun=$LUN | grep "FC" | awk '{print $4}'
        # Take the Nth path for this LUN
        LUN_NEWPATH=`esxcfg-mpath -q --lun=$LUN | \
            grep "FC" | awk '{print $4}' | head -n $N | tail -n 1`
        # Make the Nth path the preferred path
        esxcfg-mpath --lun=$LUN --path=$LUN_NEWPATH --preferred
        echo ""
        # Increase N, wrapping back to 1 after N_PATHS
        N=$(($N+1))
        if [ $N -gt $N_PATHS ]; then
            N=1
        fi
    done
done
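To sanity-check the result, you can list the paths again afterwards and see which path each LUN now prefers (a quick check using the same ESX 3.x-era esxcfg-mpath tooling the script relies on):

```shell
# List all LUNs and their paths; the preferred path for each LUN is
# marked "preferred" in the esxcfg-mpath output on the service console.
esxcfg-mpath -l | egrep "^Disk|preferred"
```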

Thanks for sharing, Kees!

VMFS/LUN size?

Duncan Epping · Jun 23, 2009 ·

A question that pops up on the VMTN Community almost every day is: what size VMFS datastore should I create? The answer always varies; one person says “500GB”, another says “1TB”. The real answer should be: it depends.

In my opinion, most companies can use a simple formula. First, answer these questions:

  • What’s the maximum number of VMs you’ve set for a VMFS volume?
  • What’s the average size of a VM in your environment? (First exclude the really large VMs that typically get an RDM.)

If you don’t know what the maximum number of VMs should be, just use a safe number, anywhere between 10 and 15. Here’s the formula I always use:

round((maxVMs * avgSize) + 20%)

I usually use increments of 25GB. This is where the round comes into play: if you end up with 380GB, round it up to 400GB, and if you end up with 321GB, round it up to 325GB. Let’s assume your average VM size is 30GB and your maximum number of VMs per VMFS volume is 10:

(10 * 30) + 20% = 300 + 60 = 360
360 rounded up to the next 25GB increment -> 375GB
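The formula above can be sketched as a small shell function (a minimal sketch of my own; `vmfs_size` is a hypothetical name, not something from any VMware tooling):

```shell
# Hypothetical helper implementing the sizing formula above:
# round((maxVMs * avgSize) + 20%) up to the next 25GB increment.
vmfs_size() {
    MAXVMS=$1
    AVGSIZE=$2
    # Base capacity plus 20% headroom (integer arithmetic, in GB)
    RAW=$(( MAXVMS * AVGSIZE * 120 / 100 ))
    # Round up to the next 25GB increment
    echo $(( (RAW + 24) / 25 * 25 ))
}

vmfs_size 10 30   # 360GB raw -> 375
vmfs_size 15 25   # 450GB raw -> 450
```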

Cluster sizes and SRM

Duncan Epping · Jun 15, 2009 ·

A couple of weeks ago I published an article on the maximum number of VMs one could run on an ESX host. In short: if you enable HA on your cluster, it restricts the number of VMs you can run on a host, depending on the total number of hosts in the cluster. This means that, depending on your consolidation ratio, you would need to limit the number of hosts in a cluster.

Now, there might be another argument for taking a good look at your cluster size: Site Recovery Manager. Currently, vCenter 2.5 does not allow more than 16 VMs to boot simultaneously.

I’ve always been under the impression that the limit for SRM was 8 simultaneous boots, but one of my colleagues notified me that the limit is actually 16. If all you care about is a low RTO, 16 would be a good cluster size, wouldn’t it? There’s no point in having more than 16 hosts in an SRM-enabled cluster if you don’t need the resources. However, don’t forget to take the HA “limitation” into account when designing your environment for availability.

ESX 4.0 Web Access

Duncan Epping · Jun 15, 2009 ·

I just wanted to access my ESX 4.0 server over HTTPS. Unfortunately, I received a “503 Service unavailable” error. First I checked whether the service was running:

service vmware-webAccess status

It wasn’t running so I started it:

service vmware-webAccess start

But why did this happen? Well, page 7 of the vSphere Web Access Guide revealed it: as of ESX 4.0, this service is disabled by default. If you do need it on a regular basis, it might be smart to enable it at boot:

chkconfig --level 345 vmware-webAccess on
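To confirm the change took effect, you could check the runlevel configuration and the current service state (standard Red Hat-style service console commands, the same tooling as above):

```shell
# Show whether vmware-webAccess is enabled per runlevel
chkconfig --list vmware-webAccess
# Confirm the service is currently running
service vmware-webAccess status
```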
