How does Mem.MinFreePct work with vSphere 5.0 and up?

With vSphere 5.0 VMware changed the way Mem.MinFreePct worked. I had briefly explained Mem.MinFreePct in a blog post a long time ago. Basically, pre-vSphere 5.0, Mem.MinFreePct was the percentage of memory set aside by the VMkernel to ensure there are always sufficient system resources available. I received a question on Twitter yesterday based on the explanation in the vSphere 5.1 Clustering Deepdive, and after exchanging more than 10 tweets I figured it made sense to just write an article.

Mem.MinFreePct used to be 6% with vSphere 4.1 and lower. You can imagine that on a host with 10GB of memory you wouldn’t worry about 600MB being kept free. That is slightly different for a host with 100GB, as it would result in 6GB being kept free, but even that is still not an extreme amount, right? What would happen when you have a host with 512GB of memory… Yes, that would result in roughly 30GB of memory being kept free. I am guessing you can see the point now. So what changed with vSphere 5.0?
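Before getting to that, here is a tiny back-of-the-envelope Python sketch of the old flat-percentage behavior (just my own illustration of the arithmetic, not anything that ships with ESXi):

  # Pre-vSphere 5.0: a flat 6% of host memory is kept free by the VMkernel
  OLD_MINFREEPCT = 0.06

  for host_memory_gb in (10, 100, 512):
      kept_free_gb = host_memory_gb * OLD_MINFREEPCT
      print("%dGB host -> ~%.1fGB kept free" % (host_memory_gb, kept_free_gb))

  # Output: 10GB -> ~0.6GB, 100GB -> ~6.0GB, 512GB -> ~30.7GB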

In vSphere 5.0 a “sliding scale” principle was introduced instead of the flat Mem.MinFreePct percentage. Let me call it “Mem.MinFree”, as I wouldn’t view this as a percentage but rather do the math and view it as a number instead. Let’s borrow Frank’s table for this sliding scale concept:

Percentage kept free    Memory range
6%                      0-4GB
4%                      4-12GB
2%                      12-28GB
1%                      Remaining memory

What does this mean if you have 100GB of memory in your host? It means that from the first 4GB of memory we set aside 6%, which equates to ~245MB. For the next 8GB (the 4-12GB range) we set aside another 4%, which equates to ~327MB. For the next 16GB (the 12-28GB range) we set aside 2%, which also equates to ~327MB. From the remaining 72GB (100GB host minus 28GB) we set aside 1%, which equates to ~720MB. In total the value of Mem.MinFree is ~1619MB, and that is the amount being kept free for the system.
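For those who prefer code over prose, here is a minimal Python sketch of the sliding scale calculation (my own illustration of the math above, not anything VMware ships; small differences with the numbers above are just rounding):

  # Sliding scale: 6% of the first 4GB, 4% of the next 8GB (4-12GB),
  # 2% of the next 16GB (12-28GB) and 1% of all remaining memory
  SLIDING_SCALE = [(4, 0.06), (8, 0.04), (16, 0.02), (float("inf"), 0.01)]

  def mem_minfree_mb(host_memory_gb):
      remaining_gb = float(host_memory_gb)
      minfree_gb = 0.0
      for range_size_gb, pct in SLIDING_SCALE:
          portion_gb = min(remaining_gb, range_size_gb)
          minfree_gb += portion_gb * pct
          remaining_gb -= portion_gb
          if remaining_gb <= 0:
              break
      return minfree_gb * 1024  # GB to MB

  print("Mem.MinFree for a 100GB host: ~%dMB" % mem_minfree_mb(100))  # ~1.6GB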

Now, what happens when the host has less than 1619MB of free memory? That is when the various memory reclamation techniques come into play. We all know the famous “high, soft, hard, and low” memory states; these used to be explained as 6% (High), 4% (Soft), 2% (Hard) and 1% (Low). FORGET THAT! Yes, I mean that… forget those values, as that is what we used in the “old world” (pre 5.0). With vSphere 5.0 and up these water marks should be viewed as a percentage of Mem.MinFree. Using the example from above, this is what that results in:

Free memory state    Threshold in percentage                  Threshold in MB
High water mark      Higher than or equal to Mem.MinFree      1619MB
Soft water mark      64% of Mem.MinFree                       1036MB
Hard water mark      32% of Mem.MinFree                       518MB
Low water mark       16% of Mem.MinFree                       259MB
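The same thresholds expressed as a quick Python sketch, using the ~1619MB Mem.MinFree value from the example above (again, purely an illustration of the table):

  # Memory state thresholds as a fraction of Mem.MinFree (vSphere 5.0 and up)
  MEM_MINFREE_MB = 1619  # from the 100GB host example above

  WATERMARKS = [
      ("high", 1.00),  # reclamation starts once free memory drops below this
      ("soft", 0.64),
      ("hard", 0.32),
      ("low", 0.16),
  ]

  for state, fraction in WATERMARKS:
      print("%-4s water mark: %4dMB" % (state, MEM_MINFREE_MB * fraction))

  # high water mark: 1619MB, soft: 1036MB, hard: 518MB, low: 259MB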

I hope this clarifies a bit how vSphere 5.0 (and up) ensures there is sufficient memory available for the VMkernel to handle system tasks…

Software Defined Storage – What are our fabric friends doing?

I have been discussing Software Defined Storage for a couple of months now and I have noticed that many of you are passionate about this topic as well. One thing that stood out to me during these discussions is that the focus is on the “storage system” itself. What about the network in between your storage system and your hosts? Where does that come into play? Is there something like a Software Defined Storage Network? Do we need it, or is that just part of Software Defined Networking?

When thinking about it, I can see some clear advantages of a Software Defined Storage Network; I think the answer to all of the questions below is: YES.

  • Wouldn’t it be nice to have end-to-end QoS? Yes, from the VM up to the array, including the network sitting in between your host and your storage system!
  • Wouldn’t it be nice to have Storage DRS and DRS be aware of the storage latency, so that the placement engine can factor that in? It is nice to have improved CPU/memory performance, but what good is that when your slowest component (storage + network) is the bottleneck?
  • Wouldn’t it be nice to have a flexible/agile but also secure zoning solution which is aware of your virtual infrastructure? I am talking VM mobility here from a storage perspective!
  • Wouldn’t it be nice to have a flexible/agile but also secure masking solution which is VM-aware?

I can imagine that some of you are iSCSI or NFS users and are less concerned with things like zoning, but end-to-end QoS could be very useful, right? For everyone, a tighter integration between the three different layers (compute -> network <- storage) would be useful from a VM mobility perspective, not just for performance but also to reduce operational complexity. Which datastore is connected to which cluster? Where does VM-A reside? If there is something wrong with a specific zone, which workloads does it impact? There are many different use cases for tighter integration, and I am guessing most of you can see the common one: a storage administrator making zoning/masking changes that lead to a permanent device loss, and ultimately your VMs hopelessly crashing. Yes, that could be prevented if all three layers were aware of each other and the integration warned both sides about the impact of changes. (You could also try communicating with each other of course, but I can understand you want to keep that to a bare minimum ;-))

I don’t hear too many vendors talking about this yet, to be honest. Recently I saw Jeda Networks making an announcement around Software Defined Storage Networks, or at least a bunch of statements and a high-level white paper. Brocade is working with EMC to provide some more insight/integration and automation through ViPR… and maybe others are working on something similar, but so far I haven’t seen much.

I am wondering what you would be looking for and what you would expect. Please chip in!

Contribute: Tweet sized vSphere Design Considerations

Most of you have probably seen the announcement Frank made yesterday… Frank, Cormac, Vaughn and I are working on a book project called “Tweet sized vSphere Design Considerations” and we need YOU, the community, to contribute! What are you saying? You are writing a book and want us to do the work for you? Yes indeed, but this book belongs to all of us! As Frank stated:

The current working title is “Tweet-sized vSphere design considerations”. As this book is created by people from the virtualization community for the virtualization community, this book will be available free of cost.

So how does it work? Let me briefly recap the rules from Frank’s blog post:

  • Each design consideration should be tweet-sized: a maximum of 200 characters (excluding spaces)
  • It should fit in one of the following categories:
    • Host design
    • vCenter design
    • Cluster design
    • Networking and Security design
    • Storage design
  • Max 3 submissions per category per person

So what would that look like? I know some folks have already submitted entries, but for those who are considering it and don’t know where to start, or are not certain whether what they have in mind meets the requirements, here are two examples of what I submitted:

  1. For your “Management Network” portgroup make sure you combine different physical NICs connected to different physical switches. This will increase resiliency and decrease the chance of an HA false positive.
  2. Ensure syslog is correctly configured for your virtual infrastructure and log files are offloaded to a safe location outside of your virtual infrastructure. This will allow for root cause analysis in case disaster strikes.

So head over to Frank’s blog, read the rules a couple of times, and start entering those design considerations! You might end up in this cool book; if you do, you will get a free paper copy provided by PernixData. The book will also be made available to everyone for free as an ebook through the various channels.

Available now: VMware Technical Journal, Summer 2013

For those who, like me, love reading research papers by developers: you might want to head over to labs.vmware.com, as a new edition of the VMware Technical Journal was released today, the Summer 2013 edition. You can download it as a PDF on the website, or you can read the individual articles straight in your web browser. Below you can find the Table of Contents, and the titles alone convinced me that these are worth reading. Personally I found “Redefining ESXi IO Multipathing in the Flash Era” very interesting… but I suggest you read all of them, as the journal typically gives a good hint of what VMware engineering is working on now or will be working on in the future!

Thanks for making the book promotion a huge success!

I just want to thank everyone for making the vSphere 4.1 and vSphere 5.0 Clustering Deepdive book promotion a huge success. Thanks for sharing the news with your colleagues, friends, and readers! Frank and I never expected that so many copies would be downloaded… As I said on Facebook:

  • vSphere 4.1 HA/DRS Deepdive on Amazon –> $ 0,-
  • vSphere 5.0 Clustering Deepdive ebook on Amazon –> $ 0,-
  • Revenue for the authors –> $ 0,-
  • 7000+ downloads of the books, and the number 1 spot on Amazon again –> PRICELESS!