Lego VSAN EVO:RACK

I know a lot of you have home labs and are always looking for that next cool thing. Every once in a while something floats by on Twitter that is simply too good not to share, and this was one of those cases. Someone posted a picture of his version of “EVO:RACK” leveraging Intel NUCs, a small switch and Lego… How awesome is a Lego VSAN EVO:RACK?! It is a bit difficult to make out in the pics below, but if you look at this picture you will see how the top-of-rack switch was included.

Besides the awesome tweet, Nick also shared how he built his lab in a couple of blog posts, which are definitely worth reading!

Enjoy,

You wanted VMTN back? VMUG to the rescue!

I’ve written about VMTN in the past and discussed its return many times within VMware with various people, all the way up to our CTO. Unfortunately, for various reasons it never happened, but fortunately the VMUG organization jumped on it not too long ago and managed to get it revamped. If you are interested, see the blurb below, visit the VMUG website and sign up. I can’t tell you how excited I am about this, and how surprised I was that the VMUG team managed to pull this off in such a relatively short time frame. Thanks VMUG!

Source: VMUG – EVALExperience!
VMware and VMUG have partnered with Kivuto Solutions to provide VMUG Advantage Subscribers a customized web portal that provides VMUG Advantage Subscribers with self-service capability to download software and license keys. Licenses to available VMware products are regularly updated and posted to the self-service web portal. The licenses available to VMUG Advantage Subscribers are 365-day evaluation licenses that require a one-time, annual download. Annual product downloads ensure that Subscribers receive the most up-to-date versions of products.

Included products are:

A new 365-day entitlement will be offered with the renewal of your yearly VMUG Advantage Subscription. Software is provided to VMUG Advantage Subscribers with no associated entitlement to support services, and users may not purchase such services in association with the EVALExperience licenses.

ScaleIO in the ESXi Kernel, what about the rest of the ecosystem?

Before reading my take on this, please read this great article by Vijay Ramachandran, as he explains the difference between ScaleIO and VSAN in the kernel. And before I say anything, let me reinforce that this is my opinion and not necessarily VMware’s. I’ve seen some negative comments around ScaleIO / VMware / EMC, most of them about the availability of a second storage solution in the ESXi kernel next to VMware’s own Virtual SAN. The big complaint typically is: why is EMC allowed and the rest of the ecosystem isn’t? The question, though, is whether VMware really isn’t allowing other partners to do the same. While flying to Palo Alto I read an article by Itzik which stated the following:

ScaleIO 1.31 introduces several changes in the VMware environment. First, it provides the option to install the SDC natively in the ESX kernel instead of using the SVM to host the SDC component. The V1.31 SDC driver for ESX is VMware PVSP certified, and requires a host acceptance level of “PartnerSupported” or lower in the ESX hosts.

Let me point out here that the solution EMC developed falls under PVSP (Partner Verified and Supported Products) support. What strikes me is that many seem to think what ScaleIO achieved is unique, despite that “PartnerSupported” statement. I admit there aren’t many storage solutions that sit within the hypervisor, and this is great innovation, but being in the hypervisor is not unique to ScaleIO.
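If you want to see where your own hosts stand before installing a PVSP-supported driver like the ScaleIO SDC, a quick look at the image acceptance level is enough. Below is a minimal pyVmomi sketch; the vCenter name and credentials are placeholders, and QueryHostAcceptanceLevel() is the vim.host.ImageConfigManager method as I understand it, so treat this as a starting point and verify it against your pyVmomi version.

```python
# Minimal sketch: report the image acceptance level of every host in vCenter.
# The vCenter hostname and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip certificate validation
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
for host in view.view:
    # Assumed API: vim.host.ImageConfigManager.QueryHostAcceptanceLevel()
    # returns "community", "partner", "vmware_accepted" or "certified".
    level = host.configManager.imageConfigManager.QueryHostAcceptanceLevel()
    print("%s: acceptance level = %s" % (host.name, level))

view.Destroy()
Disconnect(si)
```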

If you look at flash caching solutions, for instance, you will see that some sit in the hypervisor (PernixData, SanDisk’s FlashSoft) and some sit on top (Atlantis, Infinio). It is not like VMware favours one over the other in the case of these partners. It was their design, their way to get around a problem they had… Some managed to develop a solution that sits in the hypervisor, others did not focus on that. Some probably felt that optimizing the data path first was most important, and, maybe even more important, they had the expertise to do so.

Believe me when I say that it isn’t easy to create these types of solutions. There is no standard framework for this today, hence they end up being partner supported, as they leverage existing APIs and frameworks in an innovative way. Until there is one, you will see some partners sitting on top of the hypervisor and others within it, depending on what they want to invest in and what skill set they have… (Yes, a framework is being explored, as talked about in this video by one of our partners; I don’t know when or if this will be released however!)

What ScaleIO did is innovative for sure, but there are others who have done something similar and I expect more will follow in the near future. It is just a matter of time.

Two logical PCIe flash devices for VSAN

A couple of days ago I was asked whether I would recommend using two logical PCIe flash devices carved out of a single physical PCIe flash device. The reason for the question was VMware’s recommendation to have two Virtual SAN disk groups instead of (just) one.

First of all, I want to make it clear that this is a recommended practice but definitely not a requirement. People have started recommending it because of “failure domains”. As some of you may know, when a flash device that is used for read caching / write buffering and fronts a given set of disks becomes unavailable, all the disks in the disk group associated with that flash device become unavailable. As such, a disk group can be considered a failure domain, and when it comes to availability it is typically best to spread risk, so having multiple failure domains is desirable.
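To make that failure domain tangible, you can list which capacity disks sit behind which flash device on each host. The sketch below uses pyVmomi with a placeholder vCenter name and credentials; vsanSystem.config.storageInfo.diskMapping is how I understand the VSAN host configuration object, so double-check it against your environment before relying on it.

```python
# Minimal sketch: print each host's VSAN disk groups, i.e. which capacity
# disks share a flash device and therefore share a failure domain.
# The vCenter hostname and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip certificate validation
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
for host in view.view:
    vsan = host.configManager.vsanSystem
    if not vsan or not vsan.config.storageInfo:
        continue  # host is not contributing storage to VSAN
    print(host.name)
    for i, dg in enumerate(vsan.config.storageInfo.diskMapping, start=1):
        print("  disk group %d, flash device: %s" % (i, dg.ssd.canonicalName))
        for disk in dg.nonSsd:
            print("    capacity disk: %s" % disk.canonicalName)

view.Destroy()
Disconnect(si)
```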

When it comes to PCIe devices, would it make sense to carve up a single physical device into multiple logical devices? From a failure point of view I personally think it doesn’t add much value: if the device fails, it is likely that both logical devices fail. From an availability point of view two logical devices don’t add much either; however, having multiple logical devices could be beneficial if you have more than 7 disks per server.

As most of you will know, each disk group can hold at most 7 disks and each host can have at most 5 disk groups. If a server needs more than 7 disks, multiple flash devices are required, and in that scenario creating multiple logical devices would do the job, although from a failure tolerance perspective I would still prefer multiple physical devices over multiple logical ones. But I guess it all depends on the type of devices you use, whether you have sufficient PCIe slots available, and so on. In the end the decision is up to you, but do make sure you understand the impact of that decision.
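The sizing math is simple enough to script if you want a sanity check. The snippet below is plain Python illustrating how the 7-disks-per-disk-group and 5-disk-groups-per-host maximums translate into the minimum number of flash devices (logical or physical) a host needs; the function name is mine, not anything from vSphere.

```python
# Back-of-the-envelope sketch of the disk group limits discussed above.
import math

MAX_DISKS_PER_DISK_GROUP = 7
MAX_DISK_GROUPS_PER_HOST = 5

def min_disk_groups(capacity_disks: int) -> int:
    """Minimum number of disk groups (and thus flash devices) for a host."""
    groups = math.ceil(capacity_disks / MAX_DISKS_PER_DISK_GROUP)
    if groups > MAX_DISK_GROUPS_PER_HOST:
        raise ValueError("%d disks exceeds the %d-disk host maximum"
                         % (capacity_disks,
                            MAX_DISK_GROUPS_PER_HOST * MAX_DISKS_PER_DISK_GROUP))
    return groups

for disks in (5, 7, 8, 14, 20):
    print("%2d capacity disks -> %d disk group(s) / flash device(s)"
          % (disks, min_disk_groups(disks)))
```

Running it shows, for example, that 8 capacity disks already push you to two disk groups, which is exactly the point where a single PCIe card would need to be split into two logical devices.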

SIOControlFlag2, what is it?

I had a question this week about what Misc SIOControlFlag2 is. Some refer to it as SIOControlFlag2 and I’ve also seen Misc.SIOControlFlag2; in the end it is the same thing. It is something that sometimes pops up in the log files, or something you may stumble upon in the “advanced settings” at the host level. The question I had was why the value is 0 on some hosts, 2 on others, or even 34 on yet other hosts.

Let me start by saying that it is nothing to worry about, even when you are not using Storage IO Control. It is an internal setting which ESXi (hostd) sets whenever an operation is performed that opens the disk files on a volume (vMotion, power on, etc.). It is set to ensure that, when Storage IO Control is used, the “SIOC injector” knows when to and when not to use the volume to characterize it. Do not worry about this setting being different across the hosts in your cluster; it is an internal setting which has no impact on your environment, other than helping SIOC make the right decision when you do use it.
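If you are curious what the value currently is across your cluster, you can read it without touching it. Below is a minimal pyVmomi sketch, again with a placeholder vCenter name and credentials, that queries Misc.SIOControlFlag2 through each host’s OptionManager; it is purely for looking, since hostd manages the value itself.

```python
# Minimal sketch: read Misc.SIOControlFlag2 from every host via the per-host
# advanced options (OptionManager). Read-only on purpose.
# The vCenter hostname and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip certificate validation
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
for host in view.view:
    try:
        opts = host.configManager.advancedOption.QueryOptions("Misc.SIOControlFlag2")
        print("%s: Misc.SIOControlFlag2 = %s" % (host.name, opts[0].value))
    except vim.fault.InvalidName:
        print("%s: setting not present" % host.name)

view.Destroy()
Disconnect(si)
```

Expect the numbers to differ per host, just as described above; that is normal and not something to “fix”.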