
Yellow Bricks

by Duncan Epping


Management & Automation

Slot sizes

Duncan Epping · Oct 6, 2009 ·

I’ve been receiving a lot of questions about slot sizes lately. Although I point everyone to my HA Deepdive post, not everyone seems to understand what I am trying to explain. The foremost reason is that most people need to be able to visualize it, which is tough with slot sizes. Just to freshen up, an outtake from the article:

HA uses the highest CPU reservation of any given VM and the highest memory reservation of any given VM. If there is no reservation, a default of 256MHz will be used for the CPU slot and the memory overhead will be used for the memory slot!

If VM1 has 2GHz and 1024MB reserved and VM2 has 1GHz and 2048MB reserved, the slot size for memory will be 2048MB plus the memory overhead, and the slot size for CPU will be 2GHz.
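For those who prefer to see it rather than read it, the rule above can be sketched in a few lines of Python. This is purely illustrative (the function and constant names are mine, not part of any VMware API), and the memory overhead is a value you would look up per VM:

```python
# Illustrative sketch of the HA slot-size rule, not a VMware API.
# Reservations are given in MHz (CPU) and MB (memory).
DEFAULT_CPU_SLOT_MHZ = 256  # default CPU slot when no reservation is set

def slot_size(vms, memory_overhead_mb):
    """vms: list of (cpu_reservation_mhz, mem_reservation_mb) tuples.
    Returns (cpu_slot_mhz, mem_slot_mb)."""
    # Highest CPU reservation of any VM; default 256MHz if none set.
    cpu_slot = max((cpu for cpu, _ in vms if cpu > 0),
                   default=DEFAULT_CPU_SLOT_MHZ)
    # Highest memory reservation plus memory overhead; with no
    # reservations this degenerates to just the overhead.
    mem_slot = max((mem for _, mem in vms), default=0) + memory_overhead_mb
    return cpu_slot, mem_slot

# VM1: 2GHz / 1024MB, VM2: 1GHz / 2048MB, assuming a 100MB overhead
print(slot_size([(2000, 1024), (1000, 2048)], 100))  # (2000, 2148)
```

With no reservations at all, the same function returns (256, overhead), matching the default behavior described above.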

Now how does HA calculate how many slots are available per host?

Of course we need to know what the slot size for memory and CPU is first. Then we divide the total available CPU resources of a host by the CPU slot size, and the total available memory resources of a host by the memory slot size. This leaves us with a slot count for both memory and CPU; the most restrictive number is the number of slots for this host. If you have 25 CPU slots but only 5 memory slots, the number of available slots for this host will be 5.
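The per-host calculation boils down to taking the minimum of the two counts. Again a rough sketch with names of my own choosing, not a VMware API:

```python
# Illustrative sketch: a host's slot count is the more restrictive of
# its CPU-based and memory-based slot counts.
def slots_per_host(host_cpu_mhz, host_mem_mb, cpu_slot_mhz, mem_slot_mb):
    cpu_slots = host_cpu_mhz // cpu_slot_mhz  # integer division: partial slots don't count
    mem_slots = host_mem_mb // mem_slot_mb
    return min(cpu_slots, mem_slots)

# A host with 25 CPU slots' worth of MHz but only 5 memory slots' worth
# of MB ends up with 5 usable slots.
print(slots_per_host(25 * 256, 5 * 325, 256, 325))  # 5
```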

The first question I got was about unbalanced clusters. Unbalanced would, for instance, be a cluster with 5 hosts of which one contains substantially more memory than the others. What would happen to the total number of slots in a cluster with the following specs?

Five hosts, each with 16GB of memory, except for one host (esx05) which has recently been added and has 32GB of memory. One of the VMs in this cluster has 4 CPUs and 4GB of memory; because there are no reservations set, the memory overhead of 325MB is being used to calculate the memory slot size. (It’s more restrictive than the CPU slot size.)

This results in 50 slots each for esx01, esx02, esx03 and esx04. However, esx05 will have 100 slots available. Although this sounds great, admission control rules out the host with the most slots, as it takes the worst-case scenario into account. In other words, the end result is a 200-slot cluster.

With 5 hosts of 16GB, (5 x 50) – (1 x 50), the result would have been exactly the same. To make a long story short: balance your clusters when using admission control!
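The worst-case discount is easy to verify numerically. A small sketch (illustrative only, assuming a one-host-failure admission control policy) shows why the 32GB host buys you nothing here:

```python
# Illustrative only: with host-failure admission control, the usable
# cluster capacity excludes the host contributing the most slots
# (the worst-case failure).
def cluster_slots(per_host_slots):
    return sum(per_host_slots) - max(per_host_slots)

balanced   = [50, 50, 50, 50, 50]   # five 16GB hosts
unbalanced = [50, 50, 50, 50, 100]  # esx05 upgraded to 32GB
print(cluster_slots(balanced), cluster_slots(unbalanced))  # 200 200
```

The extra memory in the unbalanced cluster is exactly what admission control discounts, which is the point of the balancing advice above.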

The second question I received this week was about limiting the slot sizes with the advanced options das.slotCpuInMHz and/or das.slotMemInMB. If you need to use a high reservation for either CPU or memory, these options could definitely be useful. There is, however, something that you need to know. Check this diagram and see if you spot the problem; das.slotMemInMB has been set to 1024MB.

Notice that the memory slot size has been set to 1024MB. VM24 has a 4GB reservation set, and because of this VM24 spans 4 slots. As you might have noticed, none of the hosts has 4 slots left. Although in total there are enough slots available, they are scattered and HA might not be able to actually boot VM24. Keep in mind that admission control does not take the scattering of slots into account. It does count 4 slots for VM24, but it will not verify the number of available slots per host.
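The gap between what admission control checks and what a restart actually needs can be made concrete. A hypothetical check (names are mine) for a VM spanning several slots would look at individual hosts, not the cluster-wide total:

```python
# Illustrative sketch: admission control only counts the cluster-wide
# total, but restarting a VM that spans N slots needs N free slots on a
# *single* host.
def can_restart(free_slots_per_host, slots_needed):
    return any(free >= slots_needed for free in free_slots_per_host)

free = [3, 3, 2, 3]          # free slots per host; total = 11
print(sum(free) >= 4)        # True  -> admission control is satisfied
print(can_restart(free, 4))  # False -> VM24 (4 slots) fits on no host
```

This is exactly the scattering problem: the totals look fine, yet no single host can take the VM.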

To make sure you will always have enough slots, and to know what your current situation is, Alan Renouf wrote an excellent script. This script reports the following:

Example Output:

Cluster        : Production
TotalSlots     : 32
UsedSlots      : 10
AvailableSlots : 22
SlotNumvCPUs   : 1
SlotCPUMHz     : 256
SlotMemoryMB   : 118

My article was a collaboration with Alan, and I hope you find both articles valuable. We’ve put a lot of time into making things as straightforward and simple as we possibly can.

IO DRS – Providing Performance Isolation to VMs in Shared Storage Environments (TA3461)

Duncan Epping · Sep 16, 2009 ·

This was probably one of the coolest sessions of VMworld. Irfan Ahmad was the host of this session, and some of you might know him from Project PARDA. The PARDA whitepaper describes the algorithm being used and how customers could benefit from it in terms of performance. As Irfan stated, this is still in a research phase. Although the results are above expectations, it’s still uncertain whether this will be included in a future release and, if it is, when that will be. There are a couple of key takeaways that I want to share:

  • Congestion management on a per datastore level -> limits on IOPS and set shares per VM
  • Check the proportional allocation of the VMs to be able to identify bottlenecks.
  • With I/O DRS, throughput for tier 1 VMs will increase when demanded (more IOPS, lower latency), of course based on the limits/shares specified.
  • CPU overhead is limited -> my take: with the new hardware of today I wouldn’t worry about an overhead of a couple of percent.
  • “If it’s not broken, don’t fix it” -> if the latency is low for all workloads on a specific datastore do not take action, only above a certain threshold!
  • I/O DRS does not take SAN congestion into account, but the SAN is less likely to be the bottleneck
  • Researching the use of Storage VMotion to move VMDKs around when there’s congestion at the array level
  • Interacting with queue depth throttling
  • Dealing with end-points and would co-exist with Powerpath

That’s it for now… I just wanted to make a point. There’s a lot of cool stuff coming up. Don’t be fooled by the lack of announcements (according to some people, although I personally disagree) during the keynotes. Start watching the sessions, there’s a lot of knowledge to be gained!

Cool Tool Update: RVTools 2.6

Duncan Epping · Sep 13, 2009 ·

Rob de Veij just uploaded a new version of RVTools. Check it out; there are a whole bunch of cool new features added. Honestly one of the best free tools around, great work Rob! (Everyone keep in mind that Rob does this during the evening, so if you’re using this for commercial purposes it would be nice to make a small donation.)

Version 2.6 (September, 2009)

  • RVTools is now using the vSphere 4 SDK. The SDK has been enhanced to support new features of ESX/ESXi 4.0 and vCenter Server 4.0 systems.
  • On vNetwork tab the Vmxnet2 information is improved (due to the new SDK).
  • The name of the vCenter server or ESX host to which RVTools is connected is now visible in the windows title.
  • New menu option: Export All. Which exports all the data to csv files.
  • The Export All function can also be started from the command line. The output files are written to a unique directory in the user’s documents directory.
  • New vSwitch tab. The vSwitch tab displays for each virtual switch the name of the switch, number of ports, free ports, promiscuous mode value, mac address changes allowed value, forged transmits allowed value, traffic shaping flag, width, peak and burst, teaming policy, reverse policy flag, notify switch value, rolling order, offload flag, TSO support flag, zero copy transmits support flag, maximum transmission unit size, host name, datacenter name and cluster name.
  • New vPort tab. The vPort tab displays for each port the name of the port, the name of the virtual switch where the port is defined, VLAN ID, promiscuous mode value, mac address changes allowed value, forged transmits allowed value, traffic shaping flag, width, peak and burst, teaming policy, reverse policy flag, notify switch value, rolling order, offload flag, TSO support flag, zero copy transmits support flag, size, host name, datacenter name and cluster name.
  • Filter is now also working on vHost, vSwitch and vPort tab.
  • Health check change: number of virtual machines per core check is changed to number of virtual CPUs per core.

VMware Studio 2.0 GA’ed

Duncan Epping · Sep 7, 2009 ·

A couple of weeks ago I wrote about VMware Studio 2.0. VMware Studio 2.0 has just officially been released.

Source:
VMware Studio 2.0 helps author, configure, deploy and customize vApps and virtual appliances. vApps support the industry standard Open Virtualization Format (OVF). vApps can be deployed on VMware vSphere 4.0 or in the cloud. vCenter Server 4.0 now supports creating and running vApps, as well as importing and exporting them in compliance with OVF 1.0 standard.

Studio 2.0 is designed to be used by ISVs, developers, IT professionals and members of the virtualization community. It is a free product and is available as a virtual appliance.

The following new features have been added:

  • Ability to create multiple-VM appliances, or vApps, to run on VMware vSphere.
  • More provisioning engines including ESX/ESXi 3.5 and 4, VMware Workstation 6.5.1, and VMware Server 2.0.
  • Build support for Windows Server 2003 and 2008 (32-bit and 64-bit) virtual appliances.
  • Build support for 64-bit Red Hat Enterprise Linux (RHEL) and SUSE Enterprise Linux Server (SLES).
  • Build support for new Linux distributions RHEL 5.3, CentOS 5.3, and Ubuntu 8.04.1.
  • Extensible management services allow you to customize an interface into a new tab.
  • An Eclipse™ plug-in helps you package applications and create management services.
  • Automatic dependency resolution for application packages installed on Linux-based virtual appliances.
  • Existing VM build (input-as-VM) for Linux virtual appliances.
  • DMTF standard OVF 1.0 and open virtual appliance (OVA) packaging. VMware Studio 1.0 supported OVF 0.9.
  • Eclipse usability improvements.
  • Appliance updates from CDROM.
  • Web console footer customization in the appliance VM.
  • EULA first-boot display control in the appliance VM.
  • Host name editing in the Web console of the appliance VM.
  • Security fix for VMware Studio when uploading management services. See CVE-2009-2968.

Just download it and try it out!

HA Admission Control and DPM

Duncan Epping · Aug 20, 2009 ·

A couple of days ago we had a discussion on Admission Control and DPM internally at VMware. One of our customers had enabled DPM on an HA cluster. During the evening, 4 out of 5 hosts were placed into standby mode because of this.

This customer, as many of our customers do these days, had vCenter running as a VM. This of course led to the question: what happens if this one host fails while our virtual vCenter server is running on it?
That’s an easy one: nothing. It might not be the answer you are looking for, but when the host that runs vCenter fails, there’s no host or service left to bring these hosts out of standby mode or restart your VMs.

Now, maybe even more important: what causes this behavior?
This behavior is caused by the fact that admission control is disabled. If you disable admission control, DPM will put hosts into standby mode even if doing so violates failover requirements. This means that if you have virtualized your vCenter server, this is definitely something to be aware of.

For more info/background: http://kb.vmware.com/kb/1007006

