
Yellow Bricks

by Duncan Epping


5.1

DRS not taking CPU Ready Time into account? Need your help!

Duncan Epping · May 9, 2013 ·

For years, rumors have been floating around that DRS does not take CPU Ready Time (%RDY) into account when it comes to load balancing the virtual infrastructure. The fact is that %RDY has always been part of the DRS algorithm, not as a first-class citizen, but as part of CPU Demand, which is a combination of various metrics that includes %RDY. Still, one might ask why %RDY is not a first-class citizen.

There is a good reason, though, that %RDY isn't. Just think about what DRS is and does and how it actually goes about balancing out the environment, trying to please all virtual machines. There are a lot of possible ways to move virtual machines around in a cluster, so you can imagine that it is really complex (and expensive) to calculate the possible impact of migrating a virtual machine "from a host" or "to a host" for every first-class citizen metric.
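To illustrate that last point, here is a purely illustrative toy in Python. It is not the actual DRS algorithm or formula; it only shows the idea that %RDY feeds into a blended per-VM "CPU demand" figure rather than being weighed as a standalone first-class metric, and every number and weighting in it is invented.

  # Illustrative toy only -- NOT the actual DRS algorithm or formula.
  # It only sketches the idea that ready time contributes to a blended
  # per-VM "CPU demand" figure instead of being a standalone input.

  def cpu_demand_mhz(used_mhz, ready_pct, vcpu_capacity_mhz):
      """Blend measured CPU usage with an estimate of the CPU the VM
      wanted but could not get while sitting in the ready state.
      The blending itself is an assumption made for this sketch."""
      unmet_mhz = (ready_pct / 100.0) * vcpu_capacity_mhz
      return used_mhz + unmet_mhz

  # Hypothetical VMs: (name, used MHz, %RDY, total vCPU capacity in MHz)
  vms = [("vm01", 1800, 2.0, 4000), ("vm02", 900, 15.0, 4000)]

  for name, used, rdy, cap in vms:
      print(f"{name}: demand ~{cpu_demand_mhz(used, rdy, cap):.0f} MHz (includes a %RDY contribution)")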

Now, for a long time the DRS engineering team has been looking for situations in the field where a cluster is balanced according to DRS but virtual machines are still experiencing performance problems due to high %RDY. The DRS team really wants to fix this problem or bust the myth, and what they need is hard data. In other words: vc-support bundles from vCenter and vm-support bundles from all hosts with high ready times. So far, no one has been able to provide these logs / cold hard facts.

If you see this scenario in your environment regularly please let me know. I will personally get you in touch with our DRS engineering team and they will look at your environment and try to solve this problem once and for all. We need YOU!

What is static overhead memory?

Duncan Epping · May 6, 2013 ·

We had a discussion internally on static overhead memory. Coincidentally, I spoke with Aashish Parikh from the DRS team on this topic a couple of weeks ago when I was in Palo Alto. Aashish is working on improving the overhead memory estimation calculation so that both HA and DRS can be even more efficient when it comes to placing virtual machines. The question was what determines static overhead memory, and this is the answer that Aashish provided. I found it very useful, hence the reason I asked Aashish if it was okay to share it with the world. I added some bits and pieces where I felt additional details were needed.

First of all, what is static overhead and what is dynamic overhead:

  • When a VM is powered-off, the amount of overhead memory required to power it on is called static overhead memory.
  • Once a VM is powered-on, the amount of overhead memory required to keep it running is called dynamic or runtime overhead memory.

Static overhead memory of a VM depends upon various factors:

  1. Several virtual machine configuration parameters, like the number of vCPUs, the amount of vRAM, the number of devices, etc.
  2. The enabling/disabling of various VMware features (FT, CBRC, etc.)
  3. The ESXi build number

Note that the static overhead memory estimate is calculated fairly conservatively; a worst-case scenario is taken into account. This is the reason why engineering is exploring ways of improving it. One of the areas that can be improved is, for instance, including host configuration parameters. These parameters are things like the CPU model, family & stepping, various CPUID bits, etc. This means that, as a result, two similar VMs residing on different hosts would have different overhead values.
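To make the distinction a bit more tangible, here is a deliberately simplified sketch. Every number, weighting and parameter name in it is invented for illustration; the real estimate comes from the VMkernel and depends on the exact build:

  # Purely illustrative: the *static* overhead estimate is a conservative,
  # worst-case function of the VM's own configuration (vCPUs, vRAM, devices,
  # features) and the ESXi build. All numbers below are made up; they are
  # not the values ESXi actually uses.

  from dataclasses import dataclass

  @dataclass
  class VMConfig:
      vcpus: int
      vram_mb: int
      devices: int
      ft_enabled: bool = False
      cbrc_enabled: bool = False

  def static_overhead_mb(vm: VMConfig, esxi_build: int) -> float:
      base = 25.0                     # hypothetical fixed cost
      base += vm.vcpus * 10.0         # hypothetical per-vCPU cost
      base += vm.vram_mb * 0.01       # hypothetical per-MB-of-vRAM cost
      base += vm.devices * 1.5        # hypothetical per-device cost
      if vm.ft_enabled:
          base *= 1.3                 # enabled features add to the worst case
      if vm.cbrc_enabled:
          base += 32.0
      # Worst case across monitor modes and host types, because at
      # estimation time we do not know where the VM will be powered on.
      return base * 1.2

  print(static_overhead_mb(VMConfig(vcpus=4, vram_mb=8192, devices=6), esxi_build=1065491))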

But what about dynamic overhead? Dynamic overhead seems to be more accurate today, right? Well, there is a good reason for that: with dynamic overhead it is "known" on which host the VM is running, and the cost of running the VM on that host can easily be calculated. It is no longer a matter of estimating, but a matter of doing the math. That is the big difference: dynamic = the VM is running and we know where, versus static = the VM is powered off and we don't know where it might be powered on!

The same applies, for instance, to vMotion scenarios. Although the platform knows what the target destination will be, it still doesn't know how the target will treat that virtual machine. As such, the vMotion process aims to be conservative and uses static overhead memory instead of dynamic. One of the things, for instance, that changes the amount of overhead memory needed is the "monitor mode" used (BT, HV or HWMMU).

So what is being explored to improve it? First of all, including the additional host-side parameters mentioned above. But secondly, and equally important, the overhead memory should be calculated per VM -> "target host" combination. Or, as engineering calls it, calculating the "static overhead of VM v on host h".

Now why is this important? When is static overhead memory used? Static overhead memory is used by both HA and DRS. HA, for instance, uses it with Admission Control when calculating how many VMs can be powered on before unreserved resources are depleted. When you power on a virtual machine, the host-side "admission control" will validate whether it has sufficient unreserved resources available to guarantee the "static memory overhead"… But DRS and vMotion also use the static memory overhead metric, for instance to ensure a virtual machine can be placed on a target host during a vMotion process, as the static memory overhead needs to be guaranteed.
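A minimal sketch of that host-side power-on check, assuming hypothetical names and numbers (this is not the actual VMkernel admission control code):

  # Minimal sketch: a host may power on a VM only if it has enough
  # unreserved memory to guarantee the VM's memory reservation *plus*
  # its static overhead estimate. Names and numbers are hypothetical.

  def can_power_on(host_unreserved_mb, vm_mem_reservation_mb, vm_static_overhead_mb):
      required_mb = vm_mem_reservation_mb + vm_static_overhead_mb
      return host_unreserved_mb >= required_mb

  # 1024 MB reservation + 150 MB static overhead = 1174 MB needed, 2048 MB free
  print(can_power_on(host_unreserved_mb=2048,
                     vm_mem_reservation_mb=1024,
                     vm_static_overhead_mb=150))   # True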

As you can see, a fairly lengthy chunk of info on just a single simple metric in vCenter / ESXTOP… but very nice to know!

Increase Storage IO Control logging level

Duncan Epping · May 2, 2013 ·

I received a question today about how to increase the Storage IO Control logging level. I knew either Frank or I had written about this in the past, but I couldn't find it… which made sense, as it was actually documented in our book. I figured I would dump the blurb into an article so that everyone who needs it, for whatever reason, can use it.

Sometimes it is necessary to troubleshoot your environment and having logs to review is helpful in determining what is actually happening. By default, SIOC logging is disabled, but it should be enabled before collecting logs. To enable logging:

  1. Click Host Advanced Settings.
  2. In the Misc section, select the Misc.SIOControlLogLevel parameter. Set the value to 7 for complete logging.  (Min value: 0 (no logging), Max value: 7)
  3. SIOC needs to be restarted for the log level change to take effect. To stop and start SIOC manually, use: /etc/init.d/storageRM {start|stop|status|restart}
  4. After changing the log level, you will see the log level change logged in /var/log/vmkernel.

Please note that SIOC log files are saved in /var/log/vmkernel.

Awesome Fling: vCenter 5.1 Pre-Install Check

Duncan Epping · Mar 22, 2013 ·

One of the things that many people have asked me is how they could check whether their environment meets the requirements for an upgrade to 5.1. Until today I never really had a good answer for it, but fortunately that has changed. Alan Renouf has spent countless hours developing a script that validates your environment and assesses whether it is ready for an upgrade to vSphere 5.1.

This is a PowerShell script written to help customers validate their environment and assess whether it is ready for a 5.1.x upgrade. The script checks against known misconfigurations and issues raised with VMware Support. It checks the Windows Server and Active Directory configuration and provides an on-screen report of known issues or configuration problems; it also produces a text report which can help with further troubleshooting.

Is that helpful or what? Instead of going through the motions, you just run this pre-flight script and it will tell you whether you are good to go or whether changes are required. If you are planning an upgrade or are about to upgrade, make sure to run this script.

Awesome job Alan, let's keep these coming!

What is: Current Memory Failover Capacity?

Duncan Epping · Mar 14, 2013 ·

I have had this question many times by now: what is the "Current Memory Failover Capacity" that is shown in the cluster summary when you have selected the "Percentage Based Admission Control Policy"? What is that percentage? 99% of what? Will it go down to 0%? Or will it go down to the percentage that you reserved? Well, I figured it was time to put things to the test and no longer guess.

As shown in the screenshot above, I have selected 33% of memory to be reserved and currently have 99% memory failover capacity. Let's power on a bunch of virtual machines and see what happens. Below is the result, shown in a screenshot: "current memory failover capacity" went down from 99% to 94%.

Also, when I increase the reservation of a virtual machine, I can see "Current Memory Failover Capacity" drop even further. So it is not about "used" but about "unreserved / reserved" memory resources (including memory overhead); let that be absolutely clear! When will vCenter Server shout "Insufficient resources to satisfy configured failover level for vSphere HA"?

It shouldn't be too difficult to figure that one out: just power on new VMs until it says "stop it", as you can see in the screenshot below. This happens when you reach the percentage you specified to reserve as "memory failover capacity". In other words, in my case I reserved 33%; when "Current Memory Failover Capacity" reaches 33%, it doesn't allow the VM to be powered on, as this would violate the selected admission control policy.
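To make the math concrete, here is a small worked sketch with made-up numbers (not the values from my screenshots). It follows my understanding of how the percentage-based policy derives this number: the capacity is, roughly, the share of cluster memory that is still unreserved after reservations and overhead are subtracted.

  # Worked sketch with invented numbers -- not the actual vCenter code.
  # "Current Memory Failover Capacity" is treated here as the percentage
  # of cluster memory that is still unreserved (reservations + overhead).

  cluster_memory_mb = 100_000      # hypothetical total cluster memory
  configured_pct = 33              # the 33% reserved for failover capacity

  def current_memory_failover_capacity(reserved_mb):
      return int((cluster_memory_mb - reserved_mb) / cluster_memory_mb * 100)

  reserved_mb = 1_000                                   # a few small VMs
  print(current_memory_failover_capacity(reserved_mb))  # 99

  reserved_mb += 5_000                                  # more VMs / larger reservations
  print(current_memory_failover_capacity(reserved_mb))  # 94

  # A power-on is refused when it would push the capacity below the
  # configured percentage, i.e. less than 33% of memory would stay unreserved.
  new_vm_mb = 62_000
  if current_memory_failover_capacity(reserved_mb + new_vm_mb) >= configured_pct:
      print("power-on allowed")
  else:
      print("Insufficient resources to satisfy configured failover level for vSphere HA")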

I agree, this is kind of confusing…  But I guess when you run out of resources it will become pretty clear very quickly 😉

 
