Congratulations Virtual SAN team, more than 1000 customers reached in 2014!

I want to congratulate the Virtual SAN team on their huge success in 2014. I was listening to the Q4 earnings call yesterday and was amazed by what was achieved. Of course I knew that Virtual SAN was doing well, but I didn’t know that they had already reached 1000 customers utilizing the Virtual SAN platform in 2014. (Page 6) I am sure that these numbers will keep growing strongly in 2015 and that Virtual SAN is unstoppable, especially knowing what 2015 has to offer in terms of features and functionality.

I know many of you must be interested as well in what is coming in the near future. If you haven’t registered yet for the launch event on February 2nd, 3rd, or 5th (depending on your region), make sure you do so now. It is going to be an interesting event with some great announcements! Besides that, simply by registering you will have a chance to win a VMworld 2015 ticket, and who wouldn’t want that? Register now!


EZT Disks with VSAN, why would you?

I noticed a tweet today which made a statement about the use of eager zero thick (EZT) disks in a VSAN setup for running applications like SQL Server. The reason this user felt this was needed was to avoid the performance hit on the first write to a block of a VMDK. It is not the first time I have heard this, and I have even seen some FUD around it, so I figured I would write something up. On a traditional storage system, or at least in some cases, this first write to a new block incurs a performance penalty. The main reason is that when the VMDK is thin, or lazy zero thick, the hypervisor first needs to allocate the new block being written to and zero it out.

First of all, this was indeed true with a lot of the older (non-VAAI) storage system architectures. However, even back in 2009 the notion that this was a huge problem was dispelled. And with the arrival of all-flash arrays the problem disappeared completely. VSAN isn’t an all-flash solution (yet), but for VSAN there is something different to take into consideration. I want to point out that, by default, when you deploy a VM on VSAN you typically do not touch the disk format at all: it will be deployed as “thin”, potentially with a space reservation setting that comes from the storage policy. But what if you use an old template which has a zeroed-out disk, deploy that, and compare it to a regular VSAN VM: will it make a difference? For VSAN, eager zero thick vs thin will (typically) make no difference to your workload at all. You may wonder why; well, it is fairly simple… just look at this diagram:
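To make the storage-policy point concrete, here is a minimal sketch (the function name and numbers are my own illustration, not VSAN internals) of how a space reservation percentage translates into capacity reserved up front for an otherwise thin object:

```python
def vsan_reserved_space_gb(vmdk_size_gb, object_space_reservation_pct):
    # "Object Space Reservation" is the VSAN storage policy capability
    # expressed as a percentage: 0% behaves like thin provisioning,
    # 100% reserves the full logical size up front.
    # (Replica copies from the failures-to-tolerate setting are
    # ignored here for simplicity.)
    return vmdk_size_gb * object_space_reservation_pct / 100

print(vsan_reserved_space_gb(100, 0))    # 0.0   -> plain thin
print(vsan_reserved_space_gb(100, 25))   # 25.0
print(vsan_reserved_space_gb(100, 100))  # 100.0 -> fully reserved
```

Note that even at 100% reservation the object is still not zeroed out in advance; the reservation is purely a capacity guarantee.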

If you look at the diagram you will see that the acknowledgement is sent to the application as soon as the write to flash has happened. So in the case of thick vs thin you can imagine that it makes no difference, as the allocation (and zeroing) of that new block would happen minutes (or longer) after the application has received the acknowledgement. A person paying attention would now come back and say: hey, you said “typically”, what does that mean? Well, it means that the above is based on the assumption that your working set fits in cache. Of course there are ways to manipulate performance tests to prove that the above is not always the case, but having seen customer data I can tell you that this is not a typical scenario… it is extremely unlikely.
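As a toy model of the reasoning above (all latency numbers and names are made up for illustration; this is not how VSAN is implemented internally):

```python
FLASH_ACK_US = 50       # assumed flash-tier write latency
ZERO_PENALTY_US = 2000  # assumed cost of allocating and zeroing a new block

def observed_write_latency_us(block_preallocated, working_set_in_cache=True):
    # Write-back model: the write is acknowledged as soon as it lands
    # on the flash tier; allocation/zeroing of the underlying block
    # happens asynchronously, outside the I/O path.
    if working_set_in_cache:
        return FLASH_ACK_US
    # Only if the write missed the cache entirely could the
    # allocation penalty become visible to the application.
    return FLASH_ACK_US + (0 if block_preallocated else ZERO_PENALTY_US)

# With the working set in cache, thin and eager zero thick are identical:
print(observed_write_latency_us(block_preallocated=False))  # 50 (thin)
print(observed_write_latency_us(block_preallocated=True))   # 50 (EZT)
```

The second branch is what the “typically” caveat is about: only when the working set does not fit in cache could pre-allocation make any observable difference.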

So if you deploy Virtual SAN and have “old” templates floating around with EZT disks, I would recommend overhauling them, as EZT doesn’t add much besides a longer wait during deployment.


A Lego VSAN EVO:RACK

I know a lot of you have home labs and are always looking for that next cool thing. Every once in a while you see something cool floating by on Twitter, and in this case it was so cool I needed to share it with you. Nick posted a picture of his version of “EVO:RACK” leveraging Intel NUCs, a small switch, and Lego… How awesome is a Lego VSAN EVO:RACK?! It is difficult to see in the pics below, but if you look at this picture you will see how the top-of-rack switch was included.

Besides the awesome tweet, Nick also shared how he built his lab in a couple of blog posts which are definitely worth reading!


E1000 VMware issues

Lately I’ve noticed that various people have been hitting my blog through the search string “e1000 VMware issues”. I want to make sure people end up in the right spot, so I figured I would write a quick article that points them there. I’ve hit the issues described in the various KB articles myself, and I know how frustrating they can be. The majority of problems seen with the E1000 and E1000E drivers have been solved in the newer releases. I always run the latest and greatest version, so it isn’t something I encounter any longer, but you may see the following:

  • vmkernel.log entries with “Heap netGPHeap already at its maximum size. Cannot expand.”
  • PSOD with “E1000PollRxRing@vmkernel#nover+”
  • vmware.log entries with “[msg.ethernet.e1000.openFailed] Failed to connect ethernet0.”
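As a quick illustration, you can match the symptom strings above against your own log files with a few lines of Python (the function name and labels here are my own, not from any VMware tool):

```python
# Known symptom strings for the E1000/E1000E issues, keyed by where
# they show up.
SIGNATURES = {
    "vmkernel.log": "Heap netGPHeap already at its maximum size. Cannot expand.",
    "PSOD backtrace": "E1000PollRxRing@vmkernel#nover+",
    "vmware.log": "[msg.ethernet.e1000.openFailed] Failed to connect ethernet0.",
}

def find_e1000_symptoms(log_text):
    """Return the labels of known E1000/E1000E symptom signatures
    that appear in the given log text."""
    return [label for label, sig in SIGNATURES.items() if sig in log_text]

sample = ("2015-01-27T10:00:00Z cpu0: Heap netGPHeap already at its "
          "maximum size. Cannot expand.")
print(find_e1000_symptoms(sample))  # ['vmkernel.log']
```

If any of these match, you are most likely hitting the issues covered in the KB articles below.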

These problems occur with vSphere 5.1 Update 2 and earlier, and patches have been released to mitigate them. If you are running one of those versions, either apply the patch or, preferably, upgrade: when you are still running 5.0 or 5.1, go to at least vSphere 5.1 Update 3, or move up to the latest 5.5 release.

KB articles with more details can be found here:

Platform9 manages private clouds as a service

A couple of months ago I introduced you to Platform9, a new company founded by four former VMware employees. I have occasionally been having discussions with them about what they were working on, and I’ve been very intrigued by what they are building. I am very pleased to see their first version go GA and want to congratulate them on hitting this major milestone. For those who are not familiar with what they do, this is what their website says:

Platform9 Managed OpenStack is a cloud service that enables Enterprises to manage their internal server infrastructure as efficient private clouds.

In short, they have a SaaS-based solution which allows you to easily manage KVM-based virtualization hosts. It is a very simple way of creating a private cloud, and it will literally get your KVM-based environment up and running in minutes, which is very welcome in a world where things seem to become increasingly complex, especially when you talk about KVM/OpenStack.

Besides the GA announcement, the pricing model was also announced. It follows the same “pay per month” model as CloudPhysics. In the case of Platform9 the cost is $49 per CPU per month, with an annual subscription being required. This is for what they call their “Business tier”, which has unlimited scale. There is also a “Lite tier” which is free but limited in scale, mainly aimed at people who want to test Platform9 and learn about their offering. An Enterprise tier is also in the works and will offer more advanced features and premium support. The features it will add on top of the Business tier appear to be mainly in the software-defined networking and security space, so I would expect things like firewalling, network isolation, single sign-on, etc.
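To put the Business tier pricing in perspective, a quick back-of-the-envelope calculation (the function name and cluster size are my own example, not Platform9’s):

```python
def business_tier_annual_cost(cpu_count, price_per_cpu_month=49):
    # Business tier: $49 per CPU per month, billed as an
    # annual subscription.
    return cpu_count * price_per_cpu_month * 12

# A hypothetical 8-host cluster of dual-socket servers = 16 CPUs:
print(business_tier_annual_cost(16))  # 9408
```

So a modest dual-socket 8-host cluster would come in at $9,408 per year on the Business tier.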

I highly recommend watching the Virtualization Field Day 4 videos, as they demonstrate perfectly what they are capable of. The video that is probably most interesting to you is the one where they demonstrate a beta of the offering they are planning for vSphere (embedded below). The beta shows vSphere hosts and KVM hosts in a single pane of glass. The end user can deploy “instances” (virtual machines) in the environment of choice using a single tool, which from an operational perspective is great. On top of that, Platform9 discovers existing workloads on KVM and vSphere and non-disruptively adds them to their management interface.