How to register a Storage Provider using the vSphere Web Client

I needed to register a Storage Provider for vSphere Storage APIs for Storage Awareness (VASA) today. I am forcing myself to use the vSphere Web Client, and it had me looking for this option for a couple of minutes. It was actually the second time this week I had to do this, so I figured that if I need to search for it, there will probably be more people hitting the same issue. So where can you register those VASA Storage Providers in the Web Client?

  • In your vSphere Web Client “home screen” click “vCenter”
  • Now in the “Inventory Lists” click “vCenter Servers”
  • Select your “vCenter Server” in the left pane
  • Click the “Manage” tab in the right pane
  • Click “Storage Provider” in the right pane
  • Click on the “green plus”
  • Fill out your details and hit “OK” just like the example below (VNX, block storage)
    (Screenshot: registering a Storage Provider)

I personally find this not very intuitive and would prefer to have it in the Rules and Profiles section of the Web Client. And when I do configure it, I should be able to configure it for all vCenter Server instances by selecting all, or individual, vCenter Servers. Do you agree? I am going to push for this within VMware, so if you don’t agree, please speak up and let me know why :-).

Big changes for the Dutch VMUG, show your support!

Those who follow me on Twitter have probably seen me “moaning” about the Dutch VMUG for a long time now. For years the Dutch VMUG was not a VMUG like any other VMUG in the world. Yes, we had a HUGE event in the Netherlands every year with over 700 attendees, but it was a commercial event (my opinion!) and not a user group event. User groups throughout the world organize events and meetings by VMware users for VMware users, free of charge (or for a minimal fee): independent events striving to make the life of the user better by sharing experiences and knowledge! In the Netherlands this was different; the VMUG was controlled by a single company. But that has changed… finally!

As of June 15th, 2013 there is an official Dutch VMUG. This VMUG is part of the worldwide VMUG organization and is controlled by a board of VMware users called the Customer Council!

So what does this mean? Let’s make this absolutely clear: there is only one official VMUG in the Netherlands, and that is NLVMUG.nl. Also, the yearly event in December (already announced for the 13th) is not an event by the official Dutch VMUG. Personally, I did not attend the VMUG event in 2012, as I do not want to support an event which is supposed to be a user group event but doesn’t come close to it, at least not to what I perceive a user group to be. Both the Customer Council and VMware apparently agree with me on this, as in the announcement they state that neither of them will support or attend the December event that was announced.

(Translated from Dutch:) “The event announced on vmug.nl is not an official VMUG event, and will therefore not be supported or attended by the Customer Council or VMware. We will try to inform you as soon as possible about an event that will be held by the official Dutch VMUG, the Customer Council, VMware Benelux and, of course, the community (bloggers).”

A couple of things before I wrap up this blog post with a call to action for all of my readers. First of all, I want to thank Ferry Limpens, Joep Piscaer, Viktor van den Berg, Dennis Hoegen Dijkhof, Robert van den Nieuwendijk, Laurens van Gunst, Sander Daems and Arjan Timmerman aka “The Customer Council” for taking this bold step. It is great to see that you guys are not afraid of change and are willing to take a risk. Congrats!

Secondly, I would like to ask every single person who has ever attended a Dutch VMUG event to let all of their friends and colleagues know about these changes and sign up for the official VMUG using the link below! You can read it in Dutch on the official Dutch VMUG website: nlvmug.com. If you are on twitter, make sure to follow @nlvmug, and if you have a question / would like to present at a VMUG / help organizing… feel free to drop these guys an email: customercouncil@nlvmug.com. This is a user event, so try to participate, you are the community!

Last but not least, I would like to ask all of the sponsors of the Dutch VMUG to contact the Customer Council to see how you can help take this user group to the next level. The official Dutch VMUG can use all the help it can get!

SIGN UP NOW! Join the Official Dutch VMUG!

How does Mem.MinFreePct work with vSphere 5.0 and up?

With vSphere 5.0 VMware changed the way Mem.MinFreePct worked. I had briefly explained Mem.MinFreePct in a blog post a long time ago. Basically Mem.MinFreePct, pre vSphere 5.0, was the percentage of memory set aside by the VMkernel to ensure there are always sufficient system resources available. I received a question on twitter yesterday based on the explanation in the vSphere 5.1 Clustering Deepdive and after exchanging > 10 tweets I figured it made sense to just write an article.

Mem.MinFreePct used to be 6% with vSphere 4.1 and lower. Now, you can imagine that on a host with 10GB of memory you wouldn’t worry about 600MB being kept free. That is slightly different for a host with 100GB, where it would result in 6GB being kept free, but still not an extreme amount. What would happen when you have a host with 512GB of memory? Yes, that would result in roughly 30GB of memory being kept free. I am guessing you can see the point now. So what changed with vSphere 5.0?

In vSphere 5.0 a “sliding scale” principle was introduced instead of the flat Mem.MinFreePct percentage. Let me call it “Mem.MinFree”, as I wouldn’t view this as a percentage but rather do the math and view it as a number instead. Let’s borrow Frank’s table for this sliding scale concept:

Percentage kept free    Memory range
6%                      0-4GB
4%                      4-12GB
2%                      12-28GB
1%                      Remaining memory

What does this mean if you have 100GB of memory in your host? It means that from the first 4GB of memory we will set aside 6% which equates to ~ 245MB. For the next 8GB (4-12GB range) we set aside another 4% which equates to ~327MB. For the next 16GB (12-28GB range) we set aside 2% which equates to ~ 327MB. Now from the remaining 72GB (100GB host – 28GB) we set aside 1% which equates to ~ 720MB. In total the value of Mem.MinFree is ~ 1619MB. This number, 1619MB, is being kept free for the system.
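The sliding scale can be sketched in a few lines of Python. This is a rough illustration of the principle, not official VMware code; it uses 1024MB per GB throughout, so the total for a 100GB host comes out at roughly 1638MB rather than the rounded ~1619MB above, but the mechanics are the same.

```python
def mem_min_free_mb(host_mb):
    """Approximate vSphere 5.x Mem.MinFree (in MB) using the sliding scale."""
    scale = [(4 * 1024, 0.06),   # first 0-4GB: keep 6% free
             (12 * 1024, 0.04),  # 4-12GB range: keep 4% free
             (28 * 1024, 0.02)]  # 12-28GB range: keep 2% free
    reserved = 0.0
    lower = 0
    for upper, pct in scale:
        reserved += max(0, min(host_mb, upper) - lower) * pct
        lower = upper
    # Remaining memory above 28GB: keep 1% free
    reserved += max(0, host_mb - 28 * 1024) * 0.01
    return reserved

print(round(mem_min_free_mb(100 * 1024)))  # 100GB host -> 1638
```

Running this for smaller hosts shows how gentle the scale is: a 10GB host only sets aside ~492MB, instead of the 600MB the old flat 6% would have reserved.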

Now, what happens when the host has less than 1619MB of free memory? That is when the various memory reclamation techniques come into play. We all know the famous “high, soft, hard, and low” memory states; these used to be explained as 6% (High), 4% (Soft), 2% (Hard) and 1% (Low). FORGET THAT! Yes, I mean that… forget those, as that is what we used in the “old world” (pre 5.0). With vSphere 5.0 and up these watermarks should be viewed as a percentage of Mem.MinFree. I used the example above to clarify what this results in:

Free memory state    Threshold in percentage                Threshold in MB
High water mark      Higher than or equal to Mem.MinFree    1619MB
Soft water mark      64% of Mem.MinFree                     1036MB
Hard water mark      32% of Mem.MinFree                     518MB
Low water mark       16% of Mem.MinFree                     259MB
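To make the relationship explicit, the watermarks can be derived straight from Mem.MinFree; this minimal sketch plugs in the ~1619MB value from the 100GB host example:

```python
min_free_mb = 1619  # Mem.MinFree from the 100GB host example

# Each memory state threshold is a fixed fraction of Mem.MinFree
watermarks = {"high": 1.00, "soft": 0.64, "hard": 0.32, "low": 0.16}

for state, fraction in watermarks.items():
    print(f"{state}: {round(min_free_mb * fraction)}MB")
# high: 1619MB, soft: 1036MB, hard: 518MB, low: 259MB
```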

I hope this clarifies a bit how vSphere 5.0 (and up) ensures there is sufficient memory available for the VMkernel to handle system tasks…

Software Defined Storage – What are our fabric friends doing?

I have been discussing Software Defined Storage for a couple of months now, and I have noticed that many of you are passionate about this topic as well. One thing that stood out to me during these discussions is that the focus is on the “storage system” itself. What about the network between your storage system and your hosts? Where does that come into play? Is there something like a Software Defined Storage Network? Do we need it, or is that just part of Software Defined Networking?

When thinking about it, I can see some clear advantages of a Software Defined Storage Network; I think the answer to each of the questions below is: YES.

  • Wouldn’t it be nice to have end-to-end QoS? Yes from the VM up to the array and including the network sitting in between your host and your storage system!
  • Wouldn’t it be nice to have Storage DRS and DRS be aware of storage latency, so that the placement engine can factor that in? It is nice to have improved CPU/memory performance, but what does that buy you when your slowest component (storage + network) is the bottleneck?
  • Wouldn’t it be nice to have a flexible/agile but also secure zoning solution which is aware of your virtual infrastructure? I am talking VM mobility here from a storage perspective!
  • Wouldn’t it be nice to have a flexible/agile but also secure masking solution which is VM-aware?

I can imagine that some of you are iSCSI or NFS users and are less concerned with things like zoning, but end-to-end QoS could be very useful, right? For everyone, a tighter integration between the three different layers (compute -> network <- storage) would be useful from a VM mobility perspective, not just for performance but also to reduce operational complexity. Which datastore is connected to which cluster? Where does VM-A reside? If there is something wrong with a specific zone, which workloads does it impact? There are so many different use cases for a tighter integration; I am guessing most of you can see the common one: a storage administrator making zoning/masking changes leading to a permanent device loss and ultimately your VMs hopelessly crashing. Yes, that could be prevented if all three layers were aware of each other and the integration warned both sides about the impact of changes. (You could also try communicating with each other of course, but I can understand you want to keep that to a bare minimum ;-))

I don’t hear too many vendors talking about this yet, to be honest. Recently I saw Jeda Networks making an announcement around Software Defined Storage Networks, or at least a bunch of statements and a high-level white paper. Brocade is working with EMC to provide some more insight/integration and automation through ViPR… and maybe others are working on something similar, but so far I haven’t seen much else.

I am wondering what you would be looking for and what you would expect. Please chip in!

Contribute: Tweet sized vSphere Design Considerations

Most of you have probably seen the announcement Frank made yesterday… Frank, Cormac, Vaughn and I are working on a book project called “Tweet sized vSphere Design Considerations” and we need YOU, the community, to contribute! What are you saying? You are writing a book and want us to do the work for you? Yes indeed, but this book belongs to all of us! As Frank stated:

The current working title is “Tweet-sized vSphere design considerations”. As this book is created by people from the virtualization community for the virtualization community, this book will be available free of cost.

So how does it work? Let me briefly recap the rules from Frank’s blog post:

  • Each design consideration should be tweet-sized: a maximum of 200 characters (excluding spaces)
  • It should fit in one of the following categories:
    • Host design
    • vCenter design
    • Cluster design
    • Networking and Security design
    • Storage design
  • A maximum of 3 submissions per category per person
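Since the 200-character limit excludes spaces, a plain character count will mislead you. Here is a small helper I put together to check a submission before sending it in (my own sketch, not part of the official rules):

```python
def fits_tweet_rule(text, limit=200):
    """Return True if text is at most `limit` characters, excluding spaces."""
    return len(text.replace(" ", "")) <= limit

consideration = ("Ensure syslog is correctly configured and log files are "
                 "offloaded to a safe location outside of your virtual "
                 "infrastructure.")
print(fits_tweet_rule(consideration))  # True
```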

So what would that look like? I know some folks have already submitted entries, but for those who are considering it and don’t know where to start, or are not certain whether their ideas meet the requirements, here are two examples of what I submitted:

  1. For your “Management Network” portgroup ensure to combine different physical NICs connected to different physical switches. This will increase resiliency and decrease chances of an HA false positive.
  2. Ensure syslog is correctly configured for your virtual infrastructure and log files are offloaded to a safe location outside of your virtual infrastructure. This will allow for root cause analysis in case disaster strikes.

So head over to Frank’s blog, read the rules a couple of times and start entering those design considerations! You might end up in this cool book; if you do, you will get a free paper copy provided by PernixData. It will also be made available to everyone for free through the various channels as an ebook.