Free Kindle copy of vSphere 5.0 Clustering Deepdive?

Do you want a free Kindle copy of the vSphere 5.0 Clustering Deepdive or the vSphere 4.1 HA and DRS Deepdive? Well, make sure to check Amazon next week! I just put both books up for a promotional offer… For 48 hours, Wednesday June the 5th and Thursday June the 6th, you can download the Kindle (US Kindle Store) copy of both these books for free. Yes, that is correct: ZERO dollars.

So make sure you pick it up on either Wednesday June the 5th or Thursday June the 6th, as it might be the only time this year it is on promo.

Pinging from different VMkernel NICs using esxcli?

Today I had a network issue in my lab. I still don’t have a clue what the issue was, but I did discover something useful. I had three different VMkernel NICs set up and I wanted to make sure each of the three had a network connection to a specific destination address. While going through the esxcli namespaces I bumped into the following command, which I found very helpful:

esxcli network diag ping -I vmk0 -H 10.27.51.132

In this case I use VMkernel interface “vmk0” to ping the address “10.27.51.132”. If I want to use a different VMkernel interface I just specify it, so swap “vmk0” with “vmk1” for instance. Useful right?!
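If you want to test all of your VMkernel interfaces in one go, a simple loop in the ESXi shell does the trick. A minimal sketch, assuming the interfaces are named vmk0 through vmk2 and reusing the destination address from my example (adjust both to your environment):

# Ping the same destination from each VMkernel interface, 3 packets each
for vmk in vmk0 vmk1 vmk2; do
  echo "Pinging from ${vmk}:"
  esxcli network diag ping -I ${vmk} -H 10.27.51.132 -c 3
done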

How to change the IP Address of ESXi through the commandline

I was building out my virtualized lab and instead of re-installing ESXi over and over again I figured I would just quickly clone the hosts. Of course this leads to a “minor” problem, as the virtualized ESXi hosts will all boot with the same IP address. As I don’t have DHCP at my disposal I needed to change them manually, so how do you change the IP address of ESXi through the commandline?

It is actually pretty straightforward with esxcli these days. The first thing I did was list all VMkernel NICs:

esxcli network ip interface ipv4 get

This will give you a list of all VMkernel interfaces with their details (see screenshot below). Changing the IP address is just a matter of adding some parameters:

esxcli network ip interface ipv4 set -i vmk1 -I 10.27.51.143 -N 255.255.255.0 -t static

In your situation you will need to replace “vmk1” with the appropriate VMkernel NIC of course, and change the IP details.

[Screenshot: changing the IP address of ESXi through esxcli]
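To double-check the change you can pull the details for just that one interface, and if the cloned host also needs a different default gateway, that can be set through esxcli as well. A minimal sketch, with a hypothetical gateway address of 10.27.51.254; note that the route command is available on the more recent ESXi builds, on older ones esxcfg-route does the same:

# Show the IPv4 details for vmk1 only
esxcli network ip interface ipv4 get -i vmk1

# Set the default gateway (10.27.51.254 is just an example)
esxcli network ip route ipv4 add --gateway 10.27.51.254 --network default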

Replaced certificates and get vSphere HA Agent unreachable?

Replaced certificates and now getting “vSphere HA Agent unreachable”? I have heard this multiple times in the last couple of weeks. I started looking into it, and it seems that in many of these scenarios the common issue was the thumbprints. The log files typically give a lot of hints that look like this:

[29904B90 verbose 'Cluster' opID=SWI-d0de06e1] [ClusterManagerImpl::IsBadIP] <ip of the ha master> is bad ip

Also note that the UI will state “vSphere HA agent unreachable” in many of these cases. Yes I know, these error messages can certainly be improved.
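If you want to verify you are actually hitting this, a quick grep on the host helps. A minimal sketch, assuming the HA agent (FDM) writes to /var/log/fdm.log, which is where I have typically seen these entries:

# Look for the "bad ip" symptom in the FDM log
grep -i "bad ip" /var/log/fdm.log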

You can simply solve this by disconnecting and reconnecting the hosts. Yes, it really is as simple as that, and you can do it without any downtime. You don’t even need to move the VMs off; just right-click the host and disconnect it. Then, when the disconnect task has finished, reconnect it.

Number of vSphere HA heartbeat datastores less than 2 error, while having more?

Last week on twitter someone mentioned he received the error that he had fewer than two vSphere HA heartbeat datastores configured. I wrote an article about this error a while back, so I asked him if he had two or more. This was the case, so the next thing to do was a “reconfigure for HA”, hopefully clearing the message.

The number of vSphere HA heartbeat datastores for this host is 1 which is less than required 2

Unfortunately, after reconfiguring for HA the error was still there. The next suggestion was looking at the “heartbeat datastore” section in the HA settings. For whatever reason HA was configured to “Select only from my preferred datastores” while no datastores were selected, just like in the screenshot below. HA does not override this setting, so when configured like this NO heartbeat datastores are used, resulting in this error within vCenter. Luckily the fix is easy: just set it to “Select any of the cluster datastores”.

[Screenshot: heartbeat datastore selection, “the number of heartbeat datastores for host is 1”]

Is flash the saviour of Software Defined Storage?

I have a search column open on twitter with the term “software defined storage”. One thing that kept popping up over the last couple of days was a tweet from various IBM people about how SDS will change flash. Or let me quote the tweet:

What does software-defined storage mean for the future of #flash?

It is part of a twitter chat scheduled for today, initiated by IBM. It might be just me misreading the tweets, or the IBM folks look at SDS and flash in a completely different way than I do. Yes, SDS is a nice buzzword these days, and I guess with the billion-dollar investment in flash IBM has announced they are going all-in with regards to marketing. If you ask me they should have flipped it, and the tweet should have stated: “What does flash mean for the future of software-defined storage?” Or, to make it sound even more like marketing: is flash the saviour of Software Defined Storage?

Flash is a disruptive technology, and it is changing the way we architect our datacenters. Not only has it allowed many storage vendors to introduce additional tiers of storage, it has also allowed them to add an additional layer of caching in their storage devices. Some vendors have even created all-flash storage systems offering thousands of IOps (some will claim millions); performance issues are a thing of the past with those devices. On top of that, host-local flash is the enabler of scale-out virtual storage appliances. Without flash those types of solutions would not be possible, or at least not with decent performance.

Over the last couple of years host-side flash has also become more common, especially since several companies jumped into the huge gap in the market and started offering caching solutions for virtualized infrastructures. These solutions allow companies that cannot move to hybrid or all-flash arrays to increase the performance of their virtual infrastructure without changing their storage platform. Basically, what these solutions do is make a distinction between “data at rest” and “data in motion”. Data in motion should reside in cache, if configured properly, and data at rest should reside on your array. These solutions will once again change the way we architect our datacenters. They provide a significant performance increase, removing many of the performance constraints linked to traditional storage systems; your storage system can once again focus on what it is good at… storing data / capacity / resiliency.

I think I have answered the question, but for those who have difficulty reading between the lines: how does flash change the future of software defined storage? Flash is the enabler of many new storage devices and solutions, be it a virtual storage appliance in a converged stack, an all-flash array, or host-side IO accelerators. Through flash new opportunities arise, new options for virtualizing existing (I/O intensive) workloads. With it, many new storage solutions were developed from the ground up: storage solutions that run on standard x86 hardware, storage solutions with tight integration with the various platforms, solutions which offer things like end-to-end QoS capabilities and a multitude of data services. These solutions can change your datacenter strategy and be part of your software defined storage strategy, taking that next step forward in optimizing your operational efficiency.

Although flash is not a must for a software defined storage strategy, I would say that it is here to stay and that it is a driving force behind many software defined storage solutions!

EMC ViPR; My take

When I started writing this article I knew people were going to say that I am biased considering I work for VMware (EMC owns a part of VMware), but so be it. It is not like that has ever stopped me from posting about potential competitors, so it will not stop me now either. After seeing all the heated debates on twitter between the various storage vendors I figured it wouldn’t hurt to provide my perspective. I am looking at this from a VMware infrastructure point of view and with my customer hat on. Considering I have a huge interest in Software Defined Storage solutions, this should be my cup of tea. So here you go, my take on EMC ViPR. Note that I have not actually played with the product yet (like most people providing public feedback), so this is purely about the concept of ViPR.

First of all, when I wrote about Software Defined Storage, one of the key requirements I mentioned was the ability to leverage existing legacy storage infrastructures… The primary reason for this is that I don’t expect customers to deprecate their legacy storage all at once, if they will at all. Keep that in mind when reading the rest of the article.

Let me briefly summarize what EMC introduced last week. EMC introduced a brand new product called ViPR. ViPR is a Software Defined Storage product; at least this is how EMC labels it. Those who read my articles on SDS know the “abstract / pool / automate” motto by now, and that is indeed what ViPR can offer:

  • It allows you to abstract the control path from the actual underlying hardware, enabling management of different storage devices through a common interface
  • It enables grouping of different types of storage into a single virtual storage pool. Based on policies/profiles the right type of storage can be consumed
  • It offers a single API for managing various devices; in other words, a lower barrier to automation. On top of that, when it comes to integration it allows you, for instance, to use a single “VASA” (vSphere APIs for Storage Awareness) provider instead of the many needed in a multi-vendor environment

So what does that look like?

What surprised me is that ViPR not only works with EMC arrays of all kinds but will also work with 3rd party storage solutions. For now NetApp support has been announced, but I can see that being extended, and I know EMC is aiming to do so. You can also manage your fabric using ViPR; do note that this is currently limited to just a couple of vendors, but how cool is that? When I did vSphere implementations, the one thing I never liked doing was setting up the FC zones. ViPR makes that a lot easier, and I can also see how this will be very useful in environments where workloads move around clusters. (Chad has a great article with awesome demos here.) So what does this all mean? Let me give an example from a VMware point of view:

Your infrastructure has 3 different storage systems. Each of these systems has various data services and different storage tiers. Without ViPR, when you need to add new datastores or introduce a new storage system, you will need to add new VASA providers, create LUNs, present these, potentially label these, figure out how automation works as API implementations typically differ, etc. Yes, a lot of work. But what if you had a system sitting in between you and your physical systems that takes on some of these burdens? That is indeed where ViPR comes into play: a single VASA provider on vSphere, a single API, a single UI and self-service.

Now, “what is all the drama about then?” I can hear some of you think, as it sounds pretty compelling. To be honest, I don’t know. Maybe it was the messaging used by EMC, or maybe the competition in the Software Defined space thought the world was crowded enough already? Maybe it is just the way of the storage industry today; considering all the heated debates witnessed over the last couple of years, that is a perfectly viable explanation. Or maybe the problem is that ViPR enables a Software Defined Storage strategy without necessarily introducing new storage, meaning that where some pitch a full new stack, in this case the current solution is kept and a man-in-the-middle solution is introduced.

Don’t get me wrong, I am not saying that ViPR is THE solution for everyone. But it definitely bridges a gap and enables you to realise your SDS strategy. (Yes I know, there are other vendors who offer something similar.) ViPR can help those who have an existing storage solution to abstract / pool / automate. Indeed, not everyone can afford to swap out their full storage infrastructure for a new so-called Software Defined Storage device, and that is where ViPR will come in handy. On top of that, some of you have, and probably always will have, a multi-vendor strategy… again, this is where ViPR can help simplify your operations. The nice thing is that ViPR is an open platform; according to Chad, source code and examples of all critical elements will be published so that anyone can ensure their storage system works with ViPR.

I would like to see ViPR integrate with host-local caching solutions; it would be nice to be able to accelerate specific datastores (read caching / write back / write through) using a single interface / policy, meaning as part of the policy ViPR surfaces to vCenter. The same applies to host-side replication solutions, by the way. I would also be interested in seeing how ViPR will integrate with solutions like Virtual Volumes (VVOLs) when it is released… but I guess time will tell.

I am about to start playing with ViPR in my lab, so this is all based on what I have read and heard about ViPR (I like this series by Greg Schultz on ViPR). My understanding, and opinion, might change over time and if so I will be the first to admit it and edit this article accordingly.

I wonder how those of you who are on the customer side look at ViPR, and I want to invite you to leave a comment.