Startup News Flash part 15

Number 15 of the Startup News Flash… What happened in the world of (storage / flash related) startups in the last couple of weeks? Not too much news, but I felt it was worth releasing anyway, as otherwise the items below would be really old news.

One of the most interesting BC/DR startups of the last couple of years, if you ask me, just announced a new round of funding: $100 million. Investors include North Bridge, Greylock, Advanced Technology Ventures, Andreessen Horowitz, and Technology Crossover Ventures. For those who don’t know Actifio… Actifio offers what is commonly referred to as a “Copy Data Management” solution. It sits in between your storage solution and your hypervisor and can do things like backup, cloning, replication, archiving, etc. Really neat solution, with a brilliant, super simple UI. Worth checking out if you are looking to improve your business continuity story!

A while back I wrote an introduction to SoftNAS. One thing that stood out to me during that review was that SoftNAS didn’t have a great availability story. I spoke with Rick Braddy about that, and he said it would be one of the first things they would tackle in an upcoming release. In the just-announced release SoftNAS introduces Snap HA. Snap HA provides an active/passive solution: when an issue arises, ownership is transferred to the “passive” node, which then of course becomes “active”. More details can be found in this blog post by Rick Braddy. Awesome work guys!

VSAN Basics – Changing a VM’s storage policy

I have been talking a lot about the architecture of VSAN and have written many articles on it. It seems that somehow some of the more basic topics, like changing a VM’s storage policy, have not been fully addressed yet. One of our field folks had a question from a customer which was based on this video.

The question was: how do you change the policy of a single VM, and why would you change the policy for a group of VMs?

Let’s answer the “group of VMs” question first. You can imagine setting a policy for VMs that perform a specific function, for instance web servers. It could be that after a period of monitoring you notice that these VMs are not performing as expected when data needs to come from spindles. By changing the policy, as demonstrated in the video, you can simply increase the stripe width for all of those virtual machines at once.
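
To give an idea of what that could look like when scripted rather than clicked, here is a minimal PowerCLI sketch. It assumes the SPBM cmdlets that ship with more recent PowerCLI releases, and the names (“web*” for the web server VMs, “Web-StripeWidth2” for a pre-created policy with a higher stripe width) are purely hypothetical:

```powershell
# Hypothetical names: "web*" matches the web server VMs, "Web-StripeWidth2"
# is a policy you created up front with a higher stripe width.
Connect-VIServer -Server "vcenter.lab.local"

$policy = Get-SpbmStoragePolicy -Name "Web-StripeWidth2"

foreach ($vm in Get-VM -Name "web*") {
    # Apply the policy to the VM home object...
    Get-SpbmEntityConfiguration -VM $vm |
        Set-SpbmEntityConfiguration -StoragePolicy $policy
    # ...and to every hard disk (object) of the VM
    Get-HardDisk -VM $vm | Get-SpbmEntityConfiguration |
        Set-SpbmEntityConfiguration -StoragePolicy $policy
}
```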

Now the question remains: how do I change the policy of a single VM? It is actually really straightforward:

  • Create a new policy
    • Go to VM Storage Policies
    • Click “Create a new storage policy”
    • Select the capabilities
  • Now go to your virtual machines and right-click the VM which needs the new policy
  • Click on “all vCenter actions”
  • Click on “VM Storage Policies”
  • Click on “Manage…”
  • Select a new policy
  • Apply to disks
  • Click “OK”

Now the new policy will be applied to the VM. Depending on the selected policy this can take some time, as new components may need to be created for your objects.
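
If you want to verify what happened, the same SPBM cmdlets can show which policy each object ended up with and whether it is compliant yet; right after a change the objects may still be resyncing. Again a hedged sketch, with “app01” as a hypothetical VM name:

```powershell
# Hypothetical VM name; shows the policy and compliance status per object
$vm = Get-VM -Name "app01"

# The VM home object
Get-SpbmEntityConfiguration -VM $vm |
    Select-Object Entity, StoragePolicy, ComplianceStatus

# Each of the VM's hard disks
Get-HardDisk -VM $vm | Get-SpbmEntityConfiguration |
    Select-Object Entity, StoragePolicy, ComplianceStatus
```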


VSAN HCL: more than VSAN Ready Nodes

Over the last couple of weeks, basically since VSAN was launched, I noticed something and figured I would blog about it. Many people seem to be under the impression that VSAN Ready Nodes are your only option if you want to buy new servers to run VSAN on. This is definitely NOT the case. VSAN Ready Nodes are a great solution for people who do not want to bother going through the exercise of selecting components themselves from the VSAN HCL. However, that process is not as complicated as it sounds.

There are a couple of “critical aspects” when it comes to configuring a VSAN host, and those are:

  • A server which is on the vSphere HCL (pick any)
  • An SSD, disk controller, and HDD which are on the VSAN HCL: vmwa.re/vsanhcl

Yes, that is it! So if you look at the current list of Ready Nodes, for instance, it contains a short list of Dell servers (T620 and R720). However, the vSphere HCL has a long list of Dell servers, and you can use ANY of those. You just need to make sure your (critical) VSAN components are certified, and you can simply verify that using the VSAN HCL. For instance, even the low-end PowerEdge R320 can be configured with components that are supported by VSAN today, as it supports the H710 and H310 disk controllers, which are also on the VSAN HCL.

So let me recap that: you can select ANY host from the vSphere HCL; as long as you ensure the SSD, disk controller, and HDD are on the VSAN HCL, you should be good.
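
If you are not sure what is actually inside a host you already own, PowerCLI can list the storage adapters and disks so you can look the models up on the VSAN HCL. A small sketch, with “esx01.lab.local” as a hypothetical host name:

```powershell
# Hypothetical host name
$esx = Get-VMHost -Name "esx01.lab.local"

# Disk controllers / HBAs: the Model column is what you match against the VSAN HCL
Get-VMHostHba -VMHost $esx | Select-Object Device, Type, Model

# Local disks (SSDs and HDDs) behind those controllers
Get-ScsiLun -VmHost $esx -LunType disk |
    Select-Object CanonicalName, Vendor, Model, CapacityGB
```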

VSAN – The spoken reality

Yesterday Maish and Christian had a nice little back and forth on their blogs about VSAN. Maish published a post titled “VSAN – The Unspoken Truth”, which basically talks about how VSAN doesn’t fit blade environments, and how many enterprise environments adopted blades to get better density from a physical point of view, meaning a higher ratio of physical servers to the number of rack units consumed. According to Maish, the centralized management aspect of many of these blade solutions is also a major consideration.

Christian countered this with a great article titled “VSAN – The Unspoken Future”. I very much agree with Christian’s vision. Christian’s point basically is that when virtualization was introduced, IT started moving to blade infrastructures as that was a good fit for the environment they needed to build. Christian then explains how you can leverage, for instance, the SuperMicro Twin architecture to get a similar (high physical) density while using VSAN at the same time. (See my Twin posts here.) However, the essence of the article is: “it shows us that the Software-Defined Data Center (SDDC) is not just about the software, it’s about how we think, manage AND design our back-end infrastructure.”

There are three aspects here in my opinion:

  • Density – the old physical servers vs rack units discussion.
  • Cost – investment in new equipment and (potential) licensing impact.
  • Operations – how do you manage your environment, will this change?

First of all, I would like to kill the whole density discussion. Do we really care how many physical servers you can fit in a rack? Do we really care that you can fit 8 or maybe even 16 blades in 8U? Especially when you take into consideration that the storage system sitting next to it takes up another full rack, and on top of that there is the impact density has in terms of power and cooling (hot spots). I mean, if I can run 500 VMs on those 8 or 16 blades and that 20U storage system, is that better or worse than 500 VMs on 12 x 1U rack-mounted servers with VSAN? I guess the answer to that one is simple: it depends… It all boils down to the total cost of ownership and the return on investment. So let’s stop looking at a simple metric like physical density, as it doesn’t say much!

Before I forget… how often have we had those “eggs in a basket” discussions in the last two years? This was a huge debate five years back, in 2008/2009: did you really want to run 20 virtual machines on a single physical host? What if that host failed? Those discussions are not as prevalent any longer, for good reason. Hardware improved, the stability of the platforms increased, admins became more skilled, and fewer mistakes are made… the chances of hitting failures simply declined. Kind of like the old Microsoft blue screen of death joke: people probably still make the joke today, but ask yourself how often it actually happens.

Of course there is the cost impact. As Christian indicated, you may need to invest in new equipment… As people mentioned on Twitter: so did we when we moved to a virtualized environment. And I would like to add: we all know what that brought us. Yes, there is a cost involved. The question is how you balance that cost. Does it make sense to use a blade system for VSAN when each blade can only hold a couple of disks at this point in time? It means you need a lot of hosts, and also a lot of VSAN licenses (plus maintenance costs). It may be smarter, from an economic point of view, to invest in new equipment. Especially when you factor in operations…

Operations, indeed… what does it take / cost today to manage your environment “end to end”? Do you need specialized storage experts to operate your environment? Do you need to hire storage consultants to add more capacity? What about when things go bad, can you troubleshoot the environment by yourself? How about the compute layer? Most blade environments offer centralized management for those 8 or 16 hosts, but could I reduce the number of physical hosts from 16 or 8 to, for instance, 5 with a slightly larger form factor? What would the management overhead be, if there is any? Each of these things needs to be taken into consideration and somehow quantified so you can compare.

The reality is that VSAN (and all other hyper-converged solutions) brings something new to the table, just like virtualization did years ago. These (hyper-converged) solutions are changing the way the game is played, so you had better revise your playbook!

VSAN and the AHCI controller (hint: not supported!)

I have seen multiple people reporting this already, so I figured I would write a quick blog post. Several folks are going from the beta to the GA release of VSAN, and so far people have been very successful, except for those using disk controllers which are not on the HCL, like the on-board AHCI controller. For whatever reason it appeared on the HCL for a short time during the beta, but it is not supported (and not listed) today. I have had similar issues in my lab, and as far as I am aware there is no workaround at the moment. The errors that will appear in the various log files contain keywords like “APD”, “PDL”, “Path lost” or “NMP device <xyz> is blocked”.
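
If you want to check whether you are hitting this, you can pull the vmkernel log through vCenter and filter for those keywords. A quick sketch, again with a hypothetical host name:

```powershell
# Hypothetical host name; filters the vmkernel log for the failure keywords
$esx = Get-VMHost -Name "esx01.lab.local"

(Get-Log -VMHost $esx -Key "vmkernel").Entries |
    Where-Object { $_ -match "APD|PDL|Path lost|NMP device .* is blocked" }
```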

Before you install / configure Virtual SAN, I highly recommend validating the HCL: http://vmwa.re/vsanhcl (I figured I will need this URL a couple of times in the future, so I created this nice short URL.)

Update: with 5.5 U2 it is reported that AHCI works; however, it is still not supported!