SIOControlFlag2: what is it?

I was asked this week what Misc SIOControlFlag2 is. Some refer to it as SIOControlFlag2 and I’ve also seen Misc.SIOControlFlag2; in the end it is the same thing. It is something that sometimes pops up in the log files, or you may stumble onto the setting in the “advanced settings” at the host level. The question I had was why the value is 0 on some hosts, 2 on others, or even 34 on yet other hosts.

Let me start by saying that it is nothing to worry about, even when you are not using Storage IO Control. It is an internal setting which is set by ESXi (hostd sets it) when an operation occurs that opens disk files on a volume (vMotion, power-on, etc.). It ensures that when Storage IO Control is used, the “SIOC injector” knows when to use, or not to use, the volume to characterize it. Do not worry about this setting being different across the hosts in your cluster; it is an internal setting which has no impact on your environment, other than helping SIOC make the right decision when you do use SIOC.
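If you are curious what the value is on a given host, you can read it from the ESXi shell or via PowerCLI. This is a quick read-only check; the setting is managed by hostd and should not be changed by hand. The cluster name below is just a placeholder:

```shell
# Show the current value of Misc.SIOControlFlag2 on this ESXi host
# (read-only check; hostd manages this value internally)
esxcli system settings advanced list -o /Misc/SIOControlFlag2

# Or from PowerCLI, list the value across all hosts in a cluster
# ("MyCluster" is a placeholder name):
# Get-Cluster "MyCluster" | Get-VMHost |
#   Get-AdvancedSetting -Name Misc.SIOControlFlag2 |
#   Select-Object Entity, Value
```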

Virtualization networking strategies…

I was asked a question on LinkedIn about the different virtualization networking strategies from a host point of view. The question came from someone who recently had 10GbE infrastructure introduced into his data center. The network was originally architected with 6 x 1Gbps carved up into three bundles of 2 x 1Gbps, with each type of traffic using its own pair of NICs: Management, vMotion and VM. 10GbE was added to this infrastructure, and the question which came up was: should I use 10GbE while keeping my 1Gbps links for things like management? The classic model has a nice separation of network traffic, right?

Well, I guess from a visual point of view the classic model is nice, as it provides a lot of clarity around which type of traffic uses which NIC and which physical switch port. In the end, however, you typically still end up leveraging VLANs, so on top of the physical separation you also provide a logical separation. This logical separation is the most important part if you ask me. Especially when you leverage Distributed Switches and Network IO Control, you can create a simple architecture which is fairly easy to implement and maintain, both from a physical and a virtual point of view. Yes, from a visual perspective it may be a bit more complex, but I think the flexibility and simplicity you get in return definitely outweigh that. In almost all cases I would recommend keeping it simple: converge physically, separate logically.
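To make “converge physically, separate logically” concrete, here is a sketch using esxcli against a standard vSwitch for brevity (a Distributed Switch with Network IO Control would be the recommended equivalent, but it is not configured through esxcli). The vSwitch name, vmnic names and VLAN IDs are illustrative, not prescriptive:

```shell
# Two converged 10GbE uplinks on one vSwitch...
# (vSwitch0, vmnic0/vmnic1 and the VLAN IDs below are made-up examples)
esxcli network vswitch standard uplink add -v vSwitch0 -u vmnic0
esxcli network vswitch standard uplink add -v vSwitch0 -u vmnic1

# ...with the traffic types separated logically by VLAN-tagged port groups
esxcli network vswitch standard portgroup add -v vSwitch0 -p Management
esxcli network vswitch standard portgroup set -p Management --vlan-id 10

esxcli network vswitch standard portgroup add -v vSwitch0 -p vMotion
esxcli network vswitch standard portgroup set -p vMotion --vlan-id 20

esxcli network vswitch standard portgroup add -v vSwitch0 -p VM-Network
esxcli network vswitch standard portgroup set -p VM-Network --vlan-id 30
```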

Geek Whisperers episode: Marketing, Blogging & Community

I had the honor a couple of weeks ago to be on the Geek Whisperers podcast. It was very entertaining with John, Amy and Matt. The podcast covers how I got started with blogging and communities, and many other random topics.

There are a couple of things which I wanted to share. First and foremost, blogging and social media have nothing to do with marketing for me personally; it is what I do, it is who I am. Everyone has a different way of digesting information, learning new things, dealing with complex matters or even dealing with emotions… Some sit down behind a whiteboard, some discuss it with their colleagues; I write and share.

Secondly, when it comes to social media I am (more and more) a believer in the “social aspect”. I’ve seen the rise of the “message boards” and online communities and all the flame wars that came with them, and I’ve seen the same on Twitter, Facebook, etc. Recently I decided to be more hardline when it comes to social media and following people or accepting friend requests. Take Facebook for instance, which is more personal for me than Twitter: I have pictures of my kids up there, so I want to make sure I “trust” the person before I accept. And then there is the whole unfollowing / unfriending thing… Anyway, enough said… just have a listen.

And euuhm, thanks Matt for the nice pic of me riding a unicorn shooting rainbows, not sure what to think of it yet :)

http://geek-whisperers.com/2014/12/marketing-blogging-community-talking-with-duncan-epping-episode-69/

Operational Efficiency (You’re not Facebook/Google/Netflix)

In previous roles, also before I joined VMware, I was a system administrator and a consultant. The tweets below reminded me of the kind of work I did in the past and triggered a train of thought that I wanted to share…

Howard has a great point here. For some reason many people started using Google, Facebook or Netflix as the prime example of operational efficiency. Startups use it in their pitches to describe what they can bring and how they can simplify your life, and yes, I’ve also seen companies like VMware use it in their presentations. When I look back at when I managed these systems, my pain was not the infrastructure (servers / network / storage), even though the environment I was managing was based on what many refer to as legacy: EMC Clariion, NetApp FAS or HP EVA. The servers were never really a problem to manage either; sure, updating firmware was a pain, but not my biggest pain point. Provisioning virtual machines was never a huge deal… My pain was caused by the application landscape many of my customers had.

At companies like Facebook and Google the ratio of applications to admins is different, as Howard points out. I would also argue that in many cases their applications are developed in-house and are designed around agility, availability and efficiency… Unfortunately, for most of you this is not the case. Most applications are provided by vendors which don’t really seem to care about your requirements; they don’t design for agility and availability. No, instead they do what is easiest for them. In the majority of cases these are legacy, monolithic (cr)applications with a simple database which all need to be hosted on a single VM, and when you get an update, that is where the real pain begins. At one of the companies I worked for, a single department used over 80 different applications to calculate mortgages for the different banks and offerings out there. Believe me when I say that is not easy to manage, and that is where I would spend most of my time.

I do appreciate the whole DevOps movement, and I do see the value in optimizing your operations to align with your business needs, but we also need to be realistic. Expecting your IT org to run as efficiently as Google/Facebook/Netflix is just not realistic and is not going to happen. Unless, of course, you invest deeply and develop the majority of your applications in-house, using the same design principles these companies use. Even then I doubt you would reach the same efficiency, as most simply won’t have the scale to reach it. This does not mean you should not aim to optimize your operations, though! Everyone can benefit from optimizing operations, from re-aligning the IT department to the demands of today’s world, from revising procedures… Everyone should go through this motion, constantly, but at the same time stay realistic. Set your expectations based on what lands on the infrastructure, as that is where a lot of the complexity comes in.

Startup Intro: Eco4Cloud

This week I had the pleasure of being briefed by Eco4Cloud on what it is they bring to the world of IT. The first thing which stood out instantly is that this startup is based out of Italy. Yes indeed, Europe and not a Silicon Valley based startup… that is a nice change if you ask me! And they are different than most startups today not just from a geographical perspective, but also in terms of the solution they are building. Eco4Cloud is all about datacenter optimization and efficiency. What does this mean?

Most of you have probably heard of vSphere DRS and DPM. If you look at DPM from a conceptual perspective, you could say it is all about lowering cost by consolidating more virtual machines on fewer physical hosts and powering off the unneeded hosts. Eco4Cloud aims to do something similar, but doesn’t stop there. Let’s look at what they can do today.

Workload Consolidation is the name of their core piece of technology (in my opinion). Workload Consolidation analyses your hosts and virtual machines and tries to increase consolidation to allow hosts to be powered off without impacting the virtual machine SLAs. In other words, if your VM is using 1024MB and 2GHz, it should have this available after the consolidation as well. (vMotion is used to move VMs around.) It does this in a smart way, of course, by ensuring that resources are properly balanced from both a CPU and a memory point of view. E4C has done many proofs of concept by now, and they have shown that they can reduce power consumption by 30-60%; as you can imagine, this is huge for larger datacenters. And it is not just the decrease in power consumption, but also the reduction in carbon footprint, etc.

Besides consolidation of your workload, E4C also has a number of features that can help with optimizing the workloads themselves. For instance, Smart Ballooning will preemptively, and in a smart way, claim unused memory from specific virtual machines so that other virtual machines can use the memory when needed. More importantly, it frees up claimed resources which are not being used anyway, to avoid the scenario where you reach a state of (false) overcommitment.

Of course it is best to right-size your virtual machines in the first place, but as we all know this is fairly difficult, and with the ever-growing demands of the application owners it is not going to get any easier. E4C can also help with that part: Capacity Decision Support Manager can provide the data needed to show which VMs are oversized and help provide them with the correct resources. It doesn’t just allow you to analyze the current scenario, but also provides the option to run “what if” scenarios. These “what if” scenarios are very useful when you expect growth: CDSM will be able to tell you how many hosts you will need to add, and can also help identify which type of hosts.

Last but not least there is E4C Troubleshooter, a monitoring solution that helps identify configuration problems for hosts and virtual machines. It can help you identify problems in different areas, but for now the focus seems to be SLA compliance, VM mobility and resource accessibility.

So who is doing this? E4C showed me a case study they did with Telecom Italia: out of the 500 hosts Telecom Italia had, they were able to place 100 hosts in hibernation mode, leading to a 440MWh decrease (avg 20%). What I like about the solution, by the way, is that you can run it in analysis mode without having it apply the recommendations. That way you can see first what the potential savings are.

So how does this thing work? Well, it is fairly straightforward, as far as I understand. It is a simple appliance and installing it is no rocket science… Of course you will need to ask yourself how you would benefit from this solution; if you have 2 hosts then it probably will not make sense, but in large(r) environments I can definitely see how costs can be dramatically lowered by leveraging their datacenter optimization solution.

** disclaimer: I was briefed by E4C; I have no direct experience with their products. E4C is actively looking for Enterprise customers who are willing to test out their solution in their data center. If you work for an Enterprise and are wondering whether you can benefit from this, please leave a comment and I can get you in touch with them directly! **