Comments

  1. Sudharsan says

    Another great one from Duncan, again proving why you always top the list!! A good, handy reference whenever we troubleshoot performance issues.

  2. Didier says

    WOW a great idea to put all that on a score card.

    Personally I use a threshold of 10 for %RDY. I use TOP as well to compare that info (IOWAIT).

    For me KAVG/cmd above 3ms requires immediate attention!

    By the way, GAVG is the sum of DAVG and KAVG, so you should put 30 there…

    Cheers,
    Didier

  3. says

    No, I did not mean to use the sum of KAVG and DAVG. I think 25ms of latency is just bad overall; whether that is 20ms on disk and 5ms in the kernel, or 25ms on disk and 0ms in the kernel, it is still latency. I do understand what you are saying and will try to mention this explicitly.
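
    A minimal worked example of this reasoning in Python, assuming the usual esxtop relationship where GAVG/cmd is DAVG/cmd plus KAVG/cmd; the 25 ms cutoff and the small helper below only illustrate the point above and are not official values:

    # GAVG/cmd (latency seen by the guest) = DAVG/cmd (device) + KAVG/cmd (kernel).
    def gavg_ms(davg_ms, kavg_ms):
        """Guest-observed latency is device latency plus kernel latency."""
        return davg_ms + kavg_ms

    # Both splits below add up to 25 ms, so both cross the same overall threshold.
    for davg, kavg in [(20.0, 5.0), (25.0, 0.0)]:
        total = gavg_ms(davg, kavg)
        flag = "investigate" if total >= 25.0 else "ok"
        print(f"DAVG={davg}ms KAVG={kavg}ms -> GAVG={total}ms ({flag})")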

  4. Jas says

    You will be hard pressed to come up with one %RDY value because the value carries different weighting depending on the number of vCPUs in the VM. The %RDY value is a sum of all vCPU %RDY for the VM. Some examples:

    The max %RDY value of a 1vCPU VM is 100%
    The max %RDY value of a 4vCPU VM is 400%

    %RDY 20 for a 1vCPU VM is bad. It means 1vCPU is waiting 20% of the time to be scheduled by the VMkernel.

    %RDY 20 for a 2vCPU VM is moderately bad. It means 2vCPUs are each waiting 10% of the time to be co-scheduled by the VMkernel.

    %RDY 20 for a 4vCPU VM is borderline reasonable. It means 4vCPUs are each waiting 5% of the time to be co-scheduled by the VMkernel.

    %RDY 20 for a 1vCPU VM is roughly equivalent to %RDY 80 for a 4vCPU VM (the per-vCPU math is sketched below).

    In the end, the best judge of %RDY severity is end user perception and the threshold is going to vary depending on application characteristics and end user tolerance.

    Jas
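
    A minimal sketch in Python of the normalization Jas describes; the 10%-per-vCPU cutoff is one of the values suggested elsewhere in these comments, not a VMware-published threshold, and the helper name is made up for illustration:

    # esxtop reports %RDY summed across all vCPUs of a VM, so divide by the
    # vCPU count before judging severity.
    def per_vcpu_ready(total_rdy_pct, vcpu_count):
        """Average %RDY per vCPU for a VM whose summed %RDY is total_rdy_pct."""
        return total_rdy_pct / vcpu_count

    for vcpus in (1, 2, 4):
        per_vcpu = per_vcpu_ready(20.0, vcpus)
        verdict = "bad" if per_vcpu >= 10.0 else "tolerable"
        print(f"{vcpus} vCPU VM, %RDY 20 -> {per_vcpu:.1f}% per vCPU ({verdict})")

    Note that this assumes the summed %RDY is spread evenly across the vCPUs, which, as Craig points out further down, is not guaranteed.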

  5. Cody says

    I agree with Jason about %RDY varying based on the number of vCPUs in a VM. A “threshold” that scales with the number of vCPUs could work, however.

    %RDY 10 for 1 vCPU
    %RDY 20 for 2 vCPU
    %RDY 40 for 4 vCPU

    Or so… User perception is key, but it’s good to have some thresholds set up for troubleshooting/alerting. Better to be a bit ahead of the phone calls.

    -Cody
    http://professionalvmware.com

  6. Lars says

    10% RDY could be a good threshold per vCPU; IMHO 20% per vCPU is a bit too much, even though it was mentioned in a VMTN doc somewhere. It depends on the workload running: for some workloads 10% could be too much, while it would be fine for others.

    Lars

  7. says

    I treat %RDY like I do context switching – in a green/yellow/red stoplight fashion.

    My threshold preference for %RDY is something like:
    0-4 per vCPU = green
    5-9 per vCPU = yellow
    10+ per vCPU = red
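
    A hedged sketch of that stoplight approach in Python, using the per-vCPU bands from this comment (0-4 green, 5-9 yellow, 10+ red); the bands are the commenter’s preference rather than an official recommendation:

    def rdy_stoplight(total_rdy_pct, vcpu_count):
        """Classify a VM's summed %RDY into green/yellow/red per vCPU."""
        per_vcpu = total_rdy_pct / vcpu_count
        if per_vcpu < 5:
            return "green"
        if per_vcpu < 10:
            return "yellow"
        return "red"

    # Example: %RDY 20 is yellow on a 4 vCPU VM but red on a 1 vCPU VM.
    print(rdy_stoplight(20.0, 4))   # -> "yellow"
    print(rdy_stoplight(20.0, 1))   # -> "red"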

  8. Doug Baer says

    Thanks Duncan. This seems to be a global issue with performance monitoring — everyone agrees that performance is generally a perception issue, but it is difficult to find ‘recommended’ or ‘guideline’ threshold values. Part of the problem is that, if they’re published by the vendor, admins tend to treat them as absolute; the other part is that vendors just don’t publish the numbers.

  9. says

    That’s because these are the sum of read+write. For both read and write there are separate views which can be enabled if needed.

  10. says

    Maybe it is me, but reading the table, it says “Look at “DAVG” and “KAVG” as the sum of both is GAVG.”

    I disagree with the values in the article. They mention 10 and 100 as thresholds? Weird.

  11. Fred Peterson says

    Over how many intervals would you consider these thresholds an issue?

    That was kind of a rhetorical question; as VM admins, we’ve all seen one or more of these values exceed the thresholds defined above… but we also recognize that once every 100 intervals isn’t necessarily a big deal :)

  12. Craig Risinger says

    Just to complicate matters with SMP VMs, you can’t assume the sum is evenly split among all vCPUs.

    Where possible, look at %Ready for each vCPU. If any is high, there’s probably a problem.

  13. Ochoa says

    In regards to disk performance:
    Is System Center Operations Manager (Microsoft) a good tool for gathering disk performance data for VMs? What metrics should one use to gather this type of information? My goal is to use SCOM to gather disk performance statistics for VMs, but I’m not sure how to go about it at this point, or whether this tool will do the job for the VM environment. Also, are the same metrics used to gather stats for the VM and the host?

  14. dharmesh says

    Is there any relation between SPLTCMD/s and the GAVG/rd or GAVG/wr latencies?
    I am confused, as I see high SPLTCMD/s where latency is high for guest reads or writes.

    Basically, is SPLTCMD/s about multipathing, or about IO sizes and partition boundary conditions?
    Does it affect latency per command? E.g., if GAVG/cmd = 300 ms and SPLTCMD/s = 30,
    does that mean the latency per read is now 10 ms (for whatever MB/s throughput)?
