SIOControlFlag2: what is it?

I was asked this week what Misc SIOControlFlag2 is. Some refer to it as SIOControlFlag2 and I’ve also seen Misc.SIOControlFlag2; in the end it is the same thing. It is something that sometimes pops up in the log files, or you may stumble into the setting in the “advanced settings” on a host level. The question I got was why the value is 0 on some hosts, 2 on others, or even 34 on others.

Let me start by saying that it is nothing to worry about, even when you are not using Storage IO Control. It is an internal setting which is used by ESXi (hostd sets it) when an operation opens disk files on a volume (vMotion, power-on, etc.). It is set to ensure that, when Storage IO Control is used, the “SIOC injector” knows when (and when not) to use the volume to characterize it. Do not worry about this setting being different on the hosts in your cluster; it is an internal setting which has no impact on your environment itself, other than that when you use SIOC it helps SIOC make the right decision.
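If you are curious what the value is on a given host, you can look it up through esxcli rather than clicking through the advanced settings UI. This is just an illustrative sketch run on the ESXi host itself; the option path is /Misc/SIOControlFlag2 and the output layout may differ slightly between ESXi versions.

```shell
# Show the current value of the internal SIOC flag on this host.
# /Misc/SIOControlFlag2 is the advanced option discussed above;
# run this directly on the ESXi host (local shell or SSH).
esxcli system settings advanced list -o /Misc/SIOControlFlag2
```

Again, whatever value comes back (0, 2, 34, …) is set by hostd and there is no need to change or “align” it across hosts.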

Virtualization networking strategies…

I was asked a question on LinkedIn about the different virtualization networking strategies from a host point of view. The question came from someone who recently had 10GbE infrastructure introduced into his data center. The network was originally architected with 6 x 1Gbps NICs carved up into three bundles of 2 x 1Gbps, with each type of traffic (Management, vMotion and VM) getting its own pair of NICs. 10GbE was added to the current infrastructure and the question which came up was: should I use 10GbE while keeping my 1Gbps links for things like management, for instance? The classic model has a nice separation of network traffic, right?

Well, I guess from a visual point of view the classic model is nice as it provides a lot of clarity around which type of traffic uses which NIC and which physical switch port. However, in the end you typically still end up leveraging VLANs, so on top of the physical separation you also provide a logical separation. This logical separation is the most important part if you ask me. Especially when you leverage Distributed Switches and Network IO Control you can create a simple architecture which is fairly easy to implement and maintain, both from a physical and a virtual point of view. Yes, from a visual perspective it may be a bit more complex, but I think the flexibility and simplicity you get in return definitely outweigh that. I would recommend, in almost all cases, keeping it simple. Converge physically, separate logically.
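To make “converge physically, separate logically” a bit more concrete, here is a minimal sketch using esxcli on a standard vSwitch (with a Distributed Switch you would do the equivalent through vCenter, where Network IO Control also lives). The portgroup names and VLAN IDs below are example values, not a recommendation.

```shell
# Illustrative only: all three traffic types share one converged pair of
# 10GbE uplinks on vSwitch0, separated logically with VLAN-tagged
# portgroups. Names and VLAN IDs are example values.
esxcli network vswitch standard portgroup add -p Management -v vSwitch0
esxcli network vswitch standard portgroup set -p Management --vlan-id 10

esxcli network vswitch standard portgroup add -p vMotion -v vSwitch0
esxcli network vswitch standard portgroup set -p vMotion --vlan-id 20

esxcli network vswitch standard portgroup add -p VM-Network -v vSwitch0
esxcli network vswitch standard portgroup set -p VM-Network --vlan-id 30
```

The physical side stays dead simple (two 10GbE uplinks), while the logical separation per traffic type is still there.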

Geek Whisperers episode: Marketing, Blogging & Community

I had the honor a couple of weeks ago of being on the Geek Whisperers podcast. It was very entertaining with John, Amy and Matt. The podcast covers how I got started with blogging and communities, and many other random topics.

There are a couple of things which I wanted to share. First and foremost, blogging and social media have got nothing to do with marketing for me personally; it is what I do, it is who I am. Everyone has a different way of digesting information, learning new things, dealing with complex matters or even dealing with emotions… Some sit down behind a whiteboard, some discuss it with their colleagues; I write / share.

Secondly, when it comes to social media I am (more and more) a believer in the “social aspect”. I’ve seen the rise of the “message boards” and online communities and all the flame wars that came with them, and I’ve seen the same on Twitter / Facebook etc. Recently I decided to be more hardline when it comes to social media and following people / accepting friend requests. If you look at Facebook for instance, which is more personal for me than Twitter, I have pictures of my kids up there, so in that case I want to make sure I “trust” the person before I accept. And then there is the whole unfollowing / unfriending thing… Anyway, enough said… just have a listen.

And euuhm, thanks Matt for the nice pic of me riding a unicorn shooting rainbows, not sure what to think of it yet :)

Must read post on Cloud Native Apps

I don’t do this too often but I wanted to share an excellent blog post by one of my colleagues. I was writing something along the same lines, as it seems there is a lot of confusion around what cloud native apps are and what they bring. Even when it comes to containers there still seems to be a lot of confusion. What fits where, and how you can leverage certain technologies to their full potential, will all depend on your application architecture if you ask me. If you read the examples of how these types of apps are (or aren’t) administered, you can also see that, with the wrong understanding and knowledge, applying the same logic to an app which was not designed that way could lead to a world of pain.

Anyway, Massimo’s post is a great start for everyone who wants to have a better understanding of the evolution which is going on in the developer world. Thanks Massimo for taking the time to write this great article. Below is a short excerpt and the link; I urge all of you to read it and soak it in.

Cloud Native Applications for dummies

This is where the virtual machines (aka instances) hosting the code of our cloud native application live. They are completely stateless, they are an army of VMs all identically configured (on a role-basis) and whose entire life cycle is automated. In such an environment traditional IT concepts often associated with virtual machines do not even make any sense. See below for some examples.

  • You don’t install (in the traditional way) these servers, because they are generated by automated scripts that are either triggered by an external event or by a policy (e.g. autoscale a front end layer based on user demand)
  • You don’t operate these servers, for the same reason as above.

Operational Efficiency (You’re not Facebook/Google/Netflix)

In previous roles, also before I joined VMware, I was a system administrator and a consultant. The tweets below reminded me of the kind of work I did in the past and triggered a train of thought that I wanted to share…

Howard has a great point here. For some reason many people started using Google, Facebook or Netflix as the prime example of operational efficiency. Startups use it in their pitches to describe what they can bring and how they can simplify your life, and yes, I’ve also seen companies like VMware use it in their presentations. When I look back at when I managed these systems, my pain was not the infrastructure (servers / network / storage), even though the environment I was managing was based on what many refer to as legacy: EMC Clariion, NetApp FAS or HP EVA. The servers were never really a problem to manage either; sure, updating firmware was a pain, but it was not my biggest pain point. Provisioning virtual machines was never a huge deal… My pain was caused by the application landscape many of my customers had.

At companies like Facebook and Google the ratio of applications to admins is different, as Howard points out. I would also argue that in many cases the applications are developed in-house and are designed around agility, availability and efficiency… Unfortunately, for most of you this is not the case. Most applications are provided by vendors which don’t really seem to care about your requirements; they don’t design for agility and availability. No, instead they do what is easiest for them. In the majority of cases these are legacy monolithic (cr)applications with a simple database, which all need to be hosted on a single VM, and when you get an update that is where the real pain begins. At one of the companies I worked for we had a single department using over 80 different applications to calculate mortgages for the different banks and offerings out there; believe me when I say that that is not easy to manage, and that is where I would spend most of my time.

I do appreciate the whole DevOps movement and I do see the value in optimizing your operations to align with your business needs, but we also need to be realistic. Expecting your IT org to run as efficient as Google/Facebook/Netflix is just not realistic and is not going to happen. Unless of course you invest deep and develop the majority of your applications in-house, and do so using the same design principles these companies use. Even then I doubt you would reach the same efficiency, as most simply won’t have the scale to reach it. This does not mean you should not aim to optimize your operations though! Everyone can benefit from optimizing operations, from re-aligning the IT department to the demands of todays world, from revising procedures… Everyone should go through this motion, constantly, but at the same time stay realistic. Set your expectations based on what lands on the infrastructure as that is where a lot of the complexity comes in.