I received a question last week about a recommendation in the vSphere 5.1 Hardening Guide. The recommendation is the following:
By default, all virtual machines on an ESXi host share the resources equally. By using the resource management capabilities of ESXi, such as shares and limits, you can control the server resources that a virtual machine consumes. You can use this mechanism to prevent a denial of service that causes one virtual machine to consume so much of the host’s resources that other virtual machines on the same host cannot perform their intended functions.
Now it might be just me, but I don’t get the recommendation. My answer to this customer was as follows:
Virtual machines can never use more CPU/memory resources than provisioned. For instance, when 4GB of memory is provisioned for a virtual machine, the Guest OS of that VM will never consume more than 4GB. The same applies to CPU: if a VM has a single vCPU, then that VM can never consume more than a single core of a CPU.
So how do I limit my VM? First of all: right-sizing! If your VM needs 4GB then don’t provision it with 12GB, as at some point it will consume it. Secondly: shares. Shares are the easiest way to ensure that the “noisy neighbor” isn’t pushing away the other virtual machines. Even by leaving the shares set to default you can ensure that at least all “alike” VMs have more or less the same priority when it comes to resources. So what about limits?
Try to avoid (VM-level) limits at all times! Why? Well, look at memory for a second. Let’s say you provision your VM with 4GB and limit it to 4GB, and now someone changes the memory to 8GB but forgets to change the limit. So what happens? Well, your VM uses up the 4GB and moves into the “extra” 4GB, but the limit is still there, so the VM will experience memory pressure and you will see ballooning / swapping etc. Not a scenario you want to find yourself in, right? Indeed! What about CPU then? Well again, it is a hard limit in ALL scenarios. So if you set a 1GHz limit but have a 2.3GHz CPU, your VM will never consume the 2.3GHz…. A waste? Yes it is. And it is not just VM-level limits; there is also an operational impact with resource pool limits.
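If you want to quickly check whether any of these limits are lurking in your environment, a small pyVmomi script along these lines could do it. This is a rough sketch only: the vCenter hostname and credentials are placeholders, and proper certificate handling is skipped. It simply reports every VM that has a CPU or memory limit configured, since a limit of -1 means “unlimited”:

```python
# Hypothetical audit sketch using pyVmomi; host/user/password are placeholders.
# Reports every VM with a CPU or memory limit configured (-1 means "unlimited"),
# which is one way to catch the "forgotten limit after a resize" scenario above.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; use proper certs in production
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        cpu_limit = vm.config.cpuAllocation.limit
        mem_limit = vm.config.memoryAllocation.limit
        if cpu_limit != -1 or mem_limit != -1:
            print(f"{vm.name}: CPU limit {cpu_limit} MHz, memory limit {mem_limit} MB")
finally:
    Disconnect(si)
```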
I can understand what the hardening guide is suggesting, but believe me, you don’t want to go there. So let it be clear: AVOID using limits at all times!
Loren Gordon says
Doesn’t vCloud Director impose limits automatically for some vDC types?
Matt Cowger (@mcowger) says
Duncan – I think the argument that you should avoid limits at all times is a bit harsh. Avoid limits at all times = never use limits. Which means you are arguing that there is *never* a scenario where you should use limits…. In that case, are you advocating to have the feature removed, because, as you suggest, it should never be used?
Duncan says
I don’t know about you, but there are hardly any use cases for implementing limits. There are some, but they should be avoided when possible if you ask me.
But if you know a lot of use cases I am interested in hearing them 🙂
Matt Cowger (@mcowger) says
Hardly any isn’t the same as none. Your post says there are none…. that’s what I’m suggesting. There are a few.
James Hess says
When it comes down to it, I would begin to make the argument that there are no reasonable use cases for utilizing Limits for production workloads in the datacenter; shares and reservations avoid creating unnecessary bottlenecks, and are more easily maintained than a random assortment of individual VM limits. Of course, if you can actually show a situation where using limits is not a much worse idea than sticking with shares and reservations, I am all ears. Otherwise… I feel like we’re just saying “limits can be useful in some cases” as a blind assumption: since limits let you do something shares and reservations can’t do, that thing must be useful, therefore there must be some case for it.
Well, that one case is: reducing the efficiency of host resource usage by preventing the entirety of the resources from being used. I would argue that makes no sense for any production workload; it would be like having a BIOS setting to turn off half the memory or cap all the CPUs at 50%.
What sense would that make on a mission critical server? Zip, nada, none.
I believe the reasonable uses of the Limit feature fall exclusively in the category of lab use, since you can’t really test contention on an uncontended host with shares. It makes sense that vSphere would provide this feature, as it is potentially very useful to facilitate application stress testing: you can use limits to help test how an application might perform under artificially harsh conditions.
Mirč says
Hi Duncan,
it is hard to object to such an authority, but I can’t help myself 🙂
I agree setting limits on VMs does not make sense.
But limits on resource pools are another thing.
2 possible use cases (I have met them both, although both were on small systems with only 3 hosts, where management was not a big issue):
– you have several departments (each locked into its own resource pool) using the cluster and you want to strictly divide the system among them according to some criteria (possibly cost sharing)
– you have developers using a test resource pool and you want to absolutely prevent them from interfering with the production system on the same cluster by using too many resources
Michael Webster says
I would go with limits should be used almost never. I know one customer, a service provider, that imposes a limit on each vDisk equal to 500 IOPS per TB. This is to enforce an SLA and the performance that customers pay for, and also to simplify backups, which use vADP. Also vCD does use resource pool and some VM limits; the resource pool ones are ok, the VM ones are only ok because they are automatically modified by vCD and vCD manages it. I would not suggest placing limits on a VM manually and then having that additional management overhead; it just makes no sense. I’ve also seen several customers get into trouble with limits on resource pools when they forgot about them. They started to add hosts and VMs and wondered why there were no performance gains. This got to the point that all VMs started swapping heavily, and this took down their storage. So I would say don’t use limits unless you have a very well justified reason for it and you have the management processes and disciplines in place.
Duncan Epping says
Limits on storage are a different thing indeed; I agree that there are of course use cases for using limits. Resource pool based limits in a multi-tenant environment could be one of them, but indeed there are huge operational implications. Hence when you can, you should avoid using them!
Gavin Hamill says
@Michael
I’m intrigued by this comment: “vCD does use resource pool and some VM limits, the resource pool ones are ok, the VM ones are only ok because they are automatically modified by vCD and vCD manages it.”
Can you tell me which VM level limits that vCD imposes? I get that a resource pool is created for each customer VDC with optional CPU+RAM shares/limits – it’s great to give a customer a limit on elasticity.
I’m really interested in the per-VM limits, though; we’re a service provider, so for us, a ‘small’ VM should always have ‘small’ CPU performance (we would never enforce limits on RAM)
Michael Webster says
@Gavin, when using the Allocation Pool and PAYG resource allocation models, every VM will get a CPU and memory reservation and limit applied automatically by vCD (depending on version; it’s changed slightly between versions). Then vCD will modify the reservation and limit for CPU and memory if an end user changes the parameters of the VM. For RAM it sets the limit equal to the allocation (so it doesn’t constrain performance), which is no problem for vCD, which manages it, but if you try something like this in a normal environment people always forget to update it.
Gavin Hamill says
Interesting! I was proposing to implement CPU limits automatically by using vCO as part of the provisioning process; vCO would listen for a ‘New vApp’ event from vCD and set “appropriate” limits based on metadata that I hadn’t worked out yet.
If vCD is already manipulating MHz limits on a per-VM (rather than per-Resource Pool) basis then I’ll pay particular attention to that behaviour when I have a vCD lab again – thanks for the tip! 🙂
Michael Webster says
One thing to watch out for is the vCPU MHz default on PAYG, which is new in 5.1. Also, there is a limit concept for a PAYG pool now too, whereas there wasn’t before. There have been quite a few changes and enhancements in the way that 5.1 does its allocation models, and who knows, maybe we’ll hear a few more interesting things about vCD next week at VMworld too.
Cédric Blomart says
Limits…We don’t use them…
There is truth in these two words: “right size”.
Naturally this is in a “Private Cloud”: we know what we do and why we do it.
Multitenancy makes room for limits: you can’t get more than what you pay for.
So if it doesn’t impact your “budget” don’t touch it.
James Hess says
I agree. Use shares, avoid limits at almost all costs. Limits create unnecessary performance caps,
and actually result in more swapping, and more overall I/O, which hurts the performance of even the higher priority workload.
I often take the top 10% of important VMs in the environment and guesstimate 30% to 50% of their memory size as a reasonable reservation, apply at least half of that through the VM’s share of a resource pool reservation plus a tiny bit of memory reserved directly on the VMs, then manage the “excess memory requirement” using shares.
I once had a host die as a result of someone accidentally configuring and powering on a VM with much more RAM than they intended (more RAM than the physical host had); I believe having at least 1 VM per host with a non-zero memory reservation has the side effect of putting a stop to that foolishness.
Having reservations also helps make HA admission control do its job appropriately.
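If you wanted to script that guesstimate, a rough pyVmomi sketch could look like this. Purely illustrative: it assumes you already have a connected session and a VM object looked up elsewhere, and the 40% figure is just one point in the 30-50% range mentioned above.

```python
# Sketch only: give a VM a memory reservation equal to a fraction of its configured
# memory (e.g. 40%, in the spirit of the 30-50% guesstimate above), leaving
# shares and limits untouched.
from pyVmomi import vim

def reserve_memory_fraction(vm, fraction=0.4):
    configured_mb = vm.config.hardware.memoryMB
    reservation_mb = int(configured_mb * fraction)
    spec = vim.vm.ConfigSpec(
        memoryAllocation=vim.ResourceAllocationInfo(reservation=reservation_mb))
    return vm.ReconfigVM_Task(spec)  # returns a vCenter task to wait on
```

A non-zero reservation applied this way also ties into the HA admission control point above.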
What VMware doesn’t even talk about is I/O. Typically, if there’s a bad neighbor, it is CPU, disk I/O, or network I/O usage. Memory is a lot easier to measure; RAM is so much cheaper than CPU nowadays that memory overcommit really makes no sense anymore, outside VDI and disaster scenarios.
The I/O is the bigger problem…. Storage I/O Control shares are very difficult to manage, because there are no “SIOC resource pools” we can assign groups of VMs to; instead we are stuck using multiple LUNs and separate SANs as the primary storage I/O contention management.
And just a reminder… this is the MAIN kind of contention that I find occurs frequently in vSphere environments; this may be a side effect of application users and various managers not understanding the issue. People have difficulty grasping that disk capacity is NOT just gigabytes: disk I/O is a very limited resource, and it is very difficult to measure, because not all I/O workload patterns are “equal in cost”, even if the same number of I/Os or bytes are transferred.
“So how do I limit my VM? First of all: right-sizing! If your VM needs 4GB then don’t provision it with 12GB, as at some point it will consume it.”
Absolutely right; in principle I would agree 100%, but it’s a little harder than that in the real world.
There are all these sociological dynamics that can get in the way of right-sizing in practice. Those are some of the challenges with keeping things right-sized; it would be hard to achieve in most organizations, even with an edict from upper management supporting it.
There are “lots of hands in the virtualization resource pot”, as it were.
If there are any “performance issues” at all with their application later, they will insist that it must be that 4 vCPUs is not enough, even if the vCPUs sit 95% idle 24×7 and there is no significant CPU usage; the “issue” could be something atrocious like “every now and then, there is a web page that takes 100ms to load instead of 80ms”.
How do you prove a VM doesn’t need more than 4GB? And even if the virtualization team has been allowed to buy the expensive capacity management tools to efficiently show it, is the application owner persuaded to allow _their_ precious VM’s size to dare be reduced? Very often you just have to live with oversizing, and right-sizing is not an option 🙂
Because there are often business units in orgs that tend to have developers or outside vendors who will specify “requirements” for their workload. The purpose of specifying requirements is to make sure specifications are padded by 400%, to guarantee more capacity than required is available: 16GB of RAM and 4 vCPUs are frequently required for apps that need 2GB of RAM and 1 vCPU.
Of course if you right-size… at some later date, your right-sizing is bound to get blamed by the application vendor’s Tier 1 support for unrelated app issues.
Then you have Windows servers with memory leaks that will gradually (over weeks of uptime) expand to fill whatever RAM you assign, and application owners who demand 100% uptime; there’s no such thing as a maintenance window. “What’s a scheduled reboot?” 🙂
SQL Server or Exchange with its caching, and the “heap” space used by .NET IIS worker threads for OWA, come to mind as well. You need about 6 gigabytes and 2 vCPUs for 100 users, but someone will wind up giving it 16GB and 32 vCPUs ‘just in case’.
In organizations, in the real world, maintaining constant “right sizing” is very hard in practice.
Because “Virtual machines are free”, and there is no shortage of opinions (whoever is responsible for a particular VM wants to make sure that their VM runs as fast as it possibly can, even if the business doesn’t need that), an unexpected demand to create 50 16-vCPU, 12GB-of-RAM VMs on a 3-node ESX cluster of 16-way servers sometimes comes from management, without warning or allowance for planning by the infrastructure teams. Of course the virtualization team doesn’t get notified until the last minute, when it’s suddenly become urgent and “We MUST have all this stuff provisioned, online, and ready to go within 2 hours, for outside_vendor to start installing their unpredictably bloated software not tested by IT, which was purchased solely because someone in Accounting was impressed by the shiny marketing material and it’s the cheapest on the market they could find”.
“Hey, we designed the cluster for failover, right?” “Let’s just use that extra bit of ‘failover’ (or DR) capacity to run extra production VMs…”
“What if a node fails and we don’t have enough memory to run all those VMs? Then can’t we get some more memory?”
“Not in the budget for 4 years. If a server crashes, they will just run slower after a failover, right? In that case we’ll just expect an immediate detailed report of the reason the node failed, and for you to work around the clock to get that server repaired.”
Marko says
James, this is exactly how it happens out in the wild – every day!
And I/O is what causes most of the trouble. A lot of guys out there would smile from ear to ear if there were an easy way to limit I/O and to guarantee I/O to VMs.
Ian Campbell says
+1000. Point me at the doc, ’cause I really, really need this.
Simon Green says
You can limit IOPS; we do it for every single customer VM. Here’s a screenshot showing the setting:
http://i.imgur.com/fAKxdkC.png
And I can confirm this works perfectly.
Also a tip/bug/gotcha: if you don’t set the same limit on every disk on the VM then the highest will apply to all disks.
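If you wanted to script that across all disks, a rough pyVmomi sketch could look like this. Illustrative only: it assumes an existing connection and a VM object you have already retrieved, and the 1000 IOPS value is just an example.

```python
# Sketch: set the same IOPS limit on every virtual disk of a VM.
# Per the gotcha above, mixing different per-disk limits is best avoided anyway.
from pyVmomi import vim

def set_iops_limit_on_all_disks(vm, iops_limit=1000):
    device_changes = []
    for device in vm.config.hardware.device:
        if isinstance(device, vim.vm.device.VirtualDisk):
            device.storageIOAllocation = vim.StorageResourceManager.IOAllocationInfo(
                limit=iops_limit)
            device_changes.append(vim.vm.device.VirtualDeviceSpec(
                operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
                device=device))
    spec = vim.vm.ConfigSpec(deviceChange=device_changes)
    return vm.ReconfigVM_Task(spec)
```

If you wanted something like Michael’s 500 IOPS per TB instead of a flat value, you could derive the limit from each disk’s capacityInKB before building the spec.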
Loren Gordon says
Here’s a new post that seems relevant to this discussion. Such rules help us make good choices.
http://www.daedtech.com/seeing-the-value-in-absolutes
Rudy says
How about virtualizing very old operating systems like Win95? Sometimes old software doesn’t play nicely with all that megahertz. There are still companies out there that use that old stuff. IMO that would be a case for CPU limits.