vSphere HA 5.x restart attempt timing

I wrote about how vSphere HA 5.x restart attempt timing works a long time ago, but there still appears to be some confusion about this. I figured I would clarify it a bit more; I don’t think I can make it simpler than this:

  • Initial restart attempt
  • If the initial attempt failed, a restart will be retried 2 minutes after the previous attempt
  • If that attempt failed, a restart will be retried 4 minutes after the previous attempt
  • If that attempt failed, a restart will be retried 8 minutes after the previous attempt
  • If that attempt failed, a restart will be retried 16 minutes after the previous attempt

After the fifth failed attempt the cycle ends. Well, that is, unless a new master host is selected (for whatever reason) between the first and the fifth attempt. In that case, we start counting again, meaning that if a new master is selected after attempt 3, the new master will start with the “initial restart attempt”.
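To make the pattern explicit, here is a small Python sketch (purely illustrative, not VMware code) of the schedule above: five attempts in total, with the retry delay doubling from 2 to 16 minutes, so the fifth attempt fires roughly 30 minutes after the first.

```python
from datetime import timedelta

# Illustrative sketch of the vSphere HA 5.x restart schedule described
# above (not VMware code): an initial attempt, then retries after
# 2, 4, 8 and 16 minutes. A master re-election resets the sequence.

RETRY_DELAYS = [timedelta(minutes=m) for m in (2, 4, 8, 16)]

def restart_schedule():
    """Yield the delay before each of the five restart attempts."""
    yield timedelta(0)            # attempt 1: the initial restart attempt
    for delay in RETRY_DELAYS:    # attempts 2 through 5
        yield delay

elapsed = timedelta(0)
for attempt, delay in enumerate(restart_schedule(), start=1):
    elapsed += delay
    print(f"attempt {attempt} at T+{elapsed}")
# Prints attempts at T+0:00:00 through T+0:30:00 -- after the fifth
# failed attempt the cycle ends, unless a new master restarts the count.
```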

Or as Frank Denneman would say:

[Image: vSphere HA 5.x restart attempt timing]

VMware to acquire Virsto; Brief look at what they offer today

Most of you have seen the announcement around Virsto by now; for those who haven’t, read this blog post: VMware to acquire Virsto. Virsto is a storage company which offers a virtual storage solution. I bumped into Virsto various times in the past, and around VMworld 2012 I got reminded about them when Cormac Hogan wrote an excellent article about what they have to offer for VMware customers. (Credits go to Cormac for the detailed info in this post.) When visiting Virsto’s website there is one thing that stands out, and that is “software defined storage”. Let’s take a look at what Virsto offers and what software defined storage means to them.

Let’s start with the architecture. Virsto has developed an appliance and a host-level service which together form an abstraction layer for existing storage devices. In other words, storage devices are connected directly to the Virsto appliance, and Virsto aggregates these devices into a large storage pool. This pool is in turn served up to your environment as an NFS datastore. Now I can hear you think: what is so special about this?

As Virsto has abstracted storage and raw devices are connected to their appliance, they control the on-disk format. What does this mean? Devices that are attached to the Virsto appliance are not formatted with VMFS. Rather, Virsto has developed their own highly scalable filesystem, and that is what makes this solution really interesting. This filesystem is what allows Virsto to offer specific data services, increase performance and scale, and reduce storage capacity consumption.

Let’s start with performance. As Virsto sits between your storage device and your host, they can do certain things to your IO. Not only does Virsto increase read performance, but their product also increases write performance; customers have experienced performance increases between 5x and 10x. For the exact technical details read Cormac’s article. For now let me say that they sequentialize IO in a smart way and de-stage writes to allow for a more contiguous IO flow to your storage device. As you can imagine, this also means that the IO utilization of your storage device can, and probably will, go down.
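Virsto has not published their internals in detail, but the general technique of de-staging writes is easy to illustrate. Below is a hypothetical Python sketch (all names made up, not Virsto’s actual code) of the idea: random writes are appended to a sequential log and acknowledged quickly, then flushed to the backing store in sorted order, turning a random write stream into a mostly contiguous one.

```python
# Hypothetical sketch of log-structured write de-staging in general,
# not Virsto's implementation.

class WriteLog:
    def __init__(self):
        self.log = []  # append-only log: fast, sequential on disk

    def write(self, block_addr: int, data: bytes) -> None:
        # Append sequentially and acknowledge the write immediately.
        self.log.append((block_addr, data))

    def destage(self, backend: dict) -> None:
        # Later, flush to the backing store sorted by address, so the
        # storage device sees a largely contiguous IO flow.
        for addr, data in sorted(self.log):
            backend[addr] = data
        self.log.clear()

backend = {}
log = WriteLog()
for addr in (907, 12, 510, 13):  # a random-looking guest write pattern
    log.write(addr, b"x")
log.destage(backend)             # de-staged in address order: 12, 13, 510, 907
```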

From an efficiency perspective Virsto optimizes your storage capacity by provisioning every single virtual disk as a thin disk. However, these thin disks do not introduce the performance overhead traditionally associated with thin provisioning, so there is no need to waste precious disk space just to avoid performance penalties. What about functionality like snapshotting and cloning? That must introduce overhead and slow things down, I can hear you think… Again, Virsto has done an excellent job of reducing overhead and optimizing for scale and performance. Virsto allows for hundreds, if not thousands, of clones of a gold master without sacrificing performance, all while saving storage capacity. Not surprisingly, Virsto is often used in Virtual Desktop and large Test and Development environments, as it has proven to reduce the cost of storage by as much as 70%.
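Purely as an illustration of the general concept (not Virsto’s implementation): the classic way to make clones this cheap is copy-on-write, where a clone shares every block with the gold master and stores only the blocks written after cloning. A minimal sketch:

```python
# Minimal copy-on-write clone sketch; hypothetical, not Virsto's code.

class Disk:
    def __init__(self, parent=None):
        self.parent = parent  # the gold master this clone was taken from
        self.blocks = {}      # only blocks written *after* cloning

    def read(self, addr: int) -> bytes:
        if addr in self.blocks:
            return self.blocks[addr]
        return self.parent.read(addr) if self.parent else b"\x00"

    def write(self, addr: int, data: bytes) -> None:
        self.blocks[addr] = data  # divergence only; the master is untouched

gold = Disk()
gold.write(0, b"OS image")
clones = [Disk(parent=gold) for _ in range(1000)]  # a thousand clones,
assert clones[42].read(0) == b"OS image"           # zero blocks copied
```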

Personally I am excited about what Virsto has to offer and what they have managed to achieve in a relatively short time frame. The solution they have developed, and especially their data services framework, promises a lot for the future. Hopefully I will have some time on my hands soon to play with their product and provide you with more insights and experiences.

SRM vs Stretched Cluster solution /cc @sakacc

I was reading this article by Chad Sakac on vSphere DR / HA, or in other words SRM versus Stretched (vMSC) solutions. I have presented on vSphere Metro Storage Cluster solutions at VMworld together with Lee Dilworth, wrote a white paper on this topic a while back, and have written various blog posts since. I agree with Chad that too many people are misinformed about the benefits of both solutions. I have been on calls with customers where people were indeed saying SRM is a legacy solution and the next big thing is “Active / Active”. Funny thing is that in a way I agree when they say SRM has been around for a long time and the world is slowly changing; I do not agree with the term “legacy” though.

I guess it depends on how you look at it. Yes, SRM has been around for a long time, but it also is a proven solution that does what it says it does: it is an orchestration solution for disaster recovery. Think about a disaster recovery scenario for a second and then read that last sentence again. When you are planning for DR, isn’t it nice to use a solution that does what it says it does? Although I am a big believer in “active / active” solutions, there is a time and place for them; in many of the discussions I have been part of, a stretched cluster solution was just not what people were looking for. On top of that, stretched cluster solutions aren’t always easy to operate. That is, I guess, what Chad was also referring to in his post. Don’t get me wrong, a stretched cluster is a perfectly viable solution when your organization is mature enough and you are looking for a disaster avoidance and workload mobility solution.

If you are at the point of making a decision around SRM vs Stretched Cluster, make sure to think about your requirements / goals first. Hopefully all of you have read this excellent white paper by Ken Werneburg. Ken describes the pros and cons of each of these solutions perfectly; read it carefully and then make your decision based on your business requirements.

So, to briefly recap for those who are interested but don’t have time to read the full paper (make time though… really do!):

Where does SRM shine:

  • Disaster Recovery
  • Orchestration
  • Testing
  • Reporting
  • Disaster Avoidance (will incur downtime when VMs fail over to the other site)

Where does a Stretched Cluster solution shine:

  • Workload mobility
  • Cross-site automated load balancing
  • Enhanced downtime avoidance
  • Disaster Avoidance (VMs can be vMotioned, no downtime incurred!)


vCloud Suite equals a Software Defined Datacenter

I was on the VMTN podcast this week with Frank Denneman and Rawlinson Rivera, hosted by John Troyer. One of the discussions we had was around the Software Defined Datacenter and the vCloud Suite. Often people make a direct connection between a Software Defined Datacenter and the vCloud Suite, and I can understand why. I have heard some people comment that, because some components are not fully integrated yet, the vCloud Suite does not allow you to build a full Software Defined Datacenter.

On the call I mentioned that a Software Defined Datacenter is not just about the vCloud Suite. Using the vCloud Suite does not magically provide you with a Software Defined Datacenter. I guess the same could be said for a cloud, using the vCloud Suite does not magically provide you with a cloud.

What a lot of people tend to forget is that a cloud or an SDDC is not about the infrastructure or the individual components. (Let’s use SDDC from now on instead of the full name or the word cloud.) An SDDC is about how you are providing services to your customers; these could of course be external or internal customers. An SDDC is about software defined services, about flexibility and agility. What does that mean? There are two points of view: the consumer of the platform and the platform administrator. Let’s explain from both views what it means, or at least what I think it means…

  1. The consumer of the platform
    The consumer should be able to select a specific service level, or a specific service, for their workload. When they select a service or service level, the platform should sort things out for them fully automated: whether it is DR / Backup / Resources / Storage Tiering / Security… the selected software defined service characteristics should be applied to the workload automatically.
  2. The platform administrator
    The platform administrator should be able to define services and policies which can be consumed. These services or policies could be as simple as “enabling vSphere Replication” on a virtual machine, or as complex as deploying a 3 tier vApp including a full application stack and security services using vCloud Automation Center in combination with Application Director and vCloud Networking and Security.

In some cases that means you will need to deploy the full vCloud Suite and potentially more; in other cases it might mean you will deploy less but use 3rd party solutions to provide a fully automated solution stack and experience to your consumers. In the end it is about having the ability to define and offer services in a specific way and enabling your customers to consume these in a specific way.
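To make that a bit more concrete, here is a toy Python sketch of the policy-driven model described above. Every name in it is hypothetical (this is not a real VMware API); it just shows the administrator defining consumable services and the platform applying a selected service level to a workload automatically.

```python
from dataclasses import dataclass, field

# Toy sketch of policy-driven services; all names hypothetical,
# not a real VMware API.

@dataclass
class ServicePolicy:
    name: str
    settings: dict = field(default_factory=dict)

# 1. The platform administrator defines the services that can be consumed.
catalog = {
    "bronze": ServicePolicy("bronze", {"replication": False, "tier": "capacity"}),
    "gold":   ServicePolicy("gold",   {"replication": True,  "tier": "performance",
                                       "backup": "daily"}),
}

# 2. The consumer only selects a service level for their workload...
def provision(vm_name: str, service_level: str) -> None:
    policy = catalog[service_level]
    # 3. ...and the platform applies every characteristic automatically.
    for setting, value in policy.settings.items():
        print(f"{vm_name}: applying {setting} = {value}")

provision("web-01", "gold")
```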

Although an SDDC could be architected and built using the vCloud Suite, using the vCloud Suite does not automagically provide you with an SDDC. An SDDC is about your operating model and service offering, not about the components you are using.

Feel free to chip in,

Software Defined Datacenter Roadshow – Benelux – Free Event!

Would you like to hear more about Software Defined Datacenters from experts like Frank Denneman, Mike Laverick, Cormac Hogan, Kamau Wanguhu and many others? VMware and IBM are organizing an awesome event in the Benelux. Yes, this is a full-day event, and it is free for everyone. If you just want to sign up… go here. If you need to be convinced, keep reading, as there are some awesome sessions scheduled.

Agenda
09.00 - 09.30 Registration
09.30 - 09.45 Welcome
09.45 - 10.30 Keynote VMware: Software-Defined Data Center
10.30 - 11.15 Keynote IBM: Converged Systems: beyond NextGen DC’s
11.15 - 11.30 Break and split into parallel sessions
11.30 - 12.15 Parallel track 1 or meet the expert
12.15 - 13.00 Lunch
13.00 - 13.45 Parallel track 2 or meet the expert
14.00 - 14.45 Parallel track 3 or meet the expert
15.00 - 15.45 Parallel track 4 or meet the expert
16.00 - 16.45 Parallel track 5 or meet the expert
16.45 - 17.30 Networking drink

The awesome part is that at this event you will also have the ability to sit down with one of the experts for a 1:1 discussion and get your questions answered. Below is the list of people you can sit down with; make sure to register for that!

VMware
Frank Denneman – Resource Management Expert
Cormac Hogan – Storage Expert
Kamau Wanguhu – Software Defined Networking Expert
Mike Laverick – Cloud Infrastructure Expert
Ton Hermes – End User Computing Expert

IBM
Tikiri Wanduragala – IBM PureSystems Expert
Dennis Lauwers – Converged Systems Expert
Geordy Korte – Software Defined Networking Expert
Andreas Groth – End User Computing Expert

So if you live in The Netherlands, Belgium or Luxemburg… make sure to sign up. As mentioned, it is a free event. And with people like Cormac Hogan, Frank Denneman, Mike Laverick and Kamau Wanguhu you know it is going to get deeply technical.

  • 5th March – Amsterdam
  • 7th March – Brussels
  • 8th March – Luxemburg

–> Sign up now <–