vSphere 4.1 HA/DRS Deepdive promo was a huge hit!

Thanks to each and every one of you who took the time to download the vSphere 4.1 HA/DRS Deepdive Kindle copy during our promo days. Over 6,000 downloads in just 2 days is nothing short of amazing. Frank and I were talking about this promo opportunity for the 4.1 book a week ago and never anticipated these kinds of numbers. We expected a couple of hundred copies to be given away, maybe close to 1,000, but definitely not 6,000+. Just some facts about this promo:

  • 175+ retweets of my tweets
  • 600+ tweets
  • 30,000+ people reached
  • 6000+ Kindle copies

We were shocked by these numbers. Thanks to everyone who helped drive this; all the tweets, Facebook and Google+ mentions contributed to this huge success.

 

How do I use das.isolationaddress[x]?

Recently I received a question on Twitter about how the vSphere HA advanced option “das.isolationaddress” should be used. This setting is used when there is a desire or a requirement to specify an additional isolation address. The isolation address is used by a host that “believes” it is isolated. In other words, if a host is no longer receiving heartbeats, it pings the isolation address to validate whether it still has network access. If it does still have network access (a response from the isolation address), then no action is taken; if the isolation address does not respond, the “isolation response” is triggered.

Out of the box, the default gateway is used as the isolation address. In most cases it is recommended to specify at least one extra isolation address. This is done as follows:

  • Right click your vSphere Cluster and select “Edit settings”
  • Go to the vSphere HA section and click “Advanced options”
  • Add “das.isolationaddress0” under the option column
  • Add the IP address of the device you want to use as an isolation address under the value column

Now if you want to specify a second isolation address, you should add “das.isolationaddress1”. In total, up to 10 isolation addresses can be used (0 – 9). Keep in mind that all of these are pinged in parallel! Many seem to be under the impression that this happens sequentially, but that is not the case!
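
To make that parallel behavior concrete, here is a minimal Python sketch of the decision logic. This is purely an illustration, not HA’s actual FDM code: the address list is made up and the ping helper assumes a Linux-style ping command. All configured isolation addresses are checked at the same time, and the isolation response only fires when none of them respond.

```python
from concurrent.futures import ThreadPoolExecutor
import subprocess

# Hypothetical set of addresses: default gateway plus two extra isolation addresses
isolation_addresses = ["192.168.1.1", "192.168.1.253", "192.168.1.254"]

def ping(address):
    # Returns True when the address answers a single ICMP echo request
    result = subprocess.run(["ping", "-c", "1", "-W", "1", address],
                            capture_output=True)
    return result.returncode == 0

def is_isolated(addresses):
    # All isolation addresses are checked in parallel, not sequentially;
    # the host only declares itself isolated when none of them respond.
    with ThreadPoolExecutor(max_workers=len(addresses)) as pool:
        responses = list(pool.map(ping, addresses))
    return not any(responses)

if is_isolated(isolation_addresses):
    print("No isolation address responded: trigger the isolation response")
else:
    print("At least one isolation address responded: take no action")
```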

Now if, for whatever reason, the default gateway should not be used, you can disable it by setting “das.usedefaultisolationaddress” to “false”. A use case for this would be when the default gateway is a “non-pingable” device; in most scenarios, though, there is no need to use “das.usedefaultisolationaddress”.
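
For those who would rather script this than click through the UI, the same advanced options can also be set through the vSphere API. Below is a rough pyVmomi sketch; the vCenter address, credentials, cluster name and IP addresses are placeholders, and it is worth double-checking the property names against the SDK version you are running before using something like this.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details; certificate verification disabled for lab use only
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Locate the cluster by its (placeholder) name
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Cluster01")

# Two extra isolation addresses and, only if really needed, disabling the
# default gateway as an isolation address
options = [
    vim.option.OptionValue(key="das.isolationaddress0", value="192.168.1.253"),
    vim.option.OptionValue(key="das.isolationaddress1", value="192.168.1.254"),
    vim.option.OptionValue(key="das.usedefaultisolationaddress", value="false"),
]
spec = vim.cluster.ConfigSpecEx(dasConfig=vim.cluster.DasConfigInfo(option=options))
cluster.ReconfigureComputeResource_Task(spec, modify=True)

Disconnect(si)
```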

I hope this helps when implementing your cluster,

Want a free Kindle version of the VMware vSphere 4.1 HA and DRS technical deepdive?

Just a limited offer, two days only… The VMware vSphere 4.1 HA and DRS tech deepdive is free. WHAT? Yes, it is free… $0. So hop over to Amazon and pick up your free Kindle copy today!

For those who have been living under a rock, this book will explain the ins and outs of both vSphere HA and DRS. Admission Control, Resource Pools, Limits, HA Restart Timelines… it is all in there. Pick it up!

vSphere Metro Storage Cluster white paper released!

I wanted to point you guys to a white paper that I have worked on over the last few months. This white paper was written in collaboration with Lee Dilworth, Ken Werneburg, Frank Denneman and Stuart Hardman. Thanks, guys, for taking time out of your busy schedules to work with me on this project! This white paper is about vSphere Metro Storage Cluster solutions (aka stretched clusters) and specifically looks at things from a VMware perspective. Enjoy!

  • VMware vSphere Metro Storage Cluster (VMware vMSC) is a new configuration within the VMware Hardware Compatibility List. This type of configuration is commonly referred to as a stretched storage cluster or metro storage cluster. It is implemented in environments where disaster/downtime avoidance is a key requirement. This case study was developed to provide additional insight and information regarding operation of a VMware vMSC infrastructure in conjunction with VMware vSphere. This paper will explain how vSphere handles specific failure scenarios and will discuss various design considerations and operational procedures.
    http://www.vmware.com/resources/techresources/10284

An introduction to Storage DRS

Today someone asked for a Storage DRS intro. I wrote one for our book a year ago and figured I would share it with the world. I still feel that Storage DRS is one of the coolest features in vSphere 5.0 and I think everyone should be using it! I know there are some caveats (1, 2) when you are using specific array functionality or, for instance, SRM, but nevertheless… this is one of those features that will make an admin’s life that much easier! If you are not using it today, I highly suggest evaluating this cool feature.

*** excerpt from the vSphere 5.0 Clustering Deepdive ***

vSphere 5.0 introduces many great new features, but everyone will probably agree with us that vSphere Storage DRS is the most exciting new feature. vSphere Storage DRS helps resolve some of the operational challenges associated with virtual machine provisioning, migration and cloning. Historically, monitoring datastore capacity and I/O load has proven to be very difficult. As a result, it is often neglected, leading to hot spots and over- or underutilized datastores. Storage I/O Control (SIOC) in vSphere 4.1 solved part of this problem by introducing a datastore-wide disk scheduler that allows for allocation of I/O resources to virtual machines based on their respective shares during times of contention.
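
To give you an idea of what that shares mechanism looks like in practice, here is a small pyVmomi sketch that assigns a custom number of disk shares to one virtual disk of a VM. The VM name, share value and the find_vm_by_name helper are made up for the example, and it assumes an existing connection (the content object from the earlier HA snippet); verify the type names against your SDK version.

```python
from pyVmomi import vim

def find_vm_by_name(content, name):
    # Placeholder helper: look the VM up through a container view
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    return next(v for v in view.view if v.name == name)

# 'content' is assumed to come from an existing SmartConnect session
vm = find_vm_by_name(content, "DB01")

# Give the VM's first virtual disk a custom number of disk shares; during
# contention SIOC divides I/O resources based on these shares
disk = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualDisk))
disk.storageIOAllocation = vim.StorageResourceManager.IOAllocationInfo(
    shares=vim.SharesInfo(level="custom", shares=2000))

change = vim.vm.device.VirtualDeviceSpec(operation="edit", device=disk)
vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
```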

Storage DRS (SDRS) brings this to a whole new level by providing smart virtual machine placement and load balancing mechanisms based on space and I/O capacity. In other words, where SIOC reactively throttles hosts and virtual machines to ensure fairness, SDRS proactively makes recommendations to prevent imbalances from both a space utilization and latency perspective. More simply, SDRS does for storage what DRS does for compute resources.

There are five key features that SDRS offers:

  • Resource aggregation
  • Initial Placement
  • Load Balancing
  • Datastore Maintenance Mode
  • Affinity Rules

Resource aggregation enables grouping of multiple datastores into a single, flexible pool of storage called a Datastore Cluster. Administrators can dynamically populate Datastore Clusters with datastores. The flexibility of separating the physical from the logical greatly simplifies storage management by allowing datastores to be efficiently and dynamically added or removed from a Datastore Cluster to deal with maintenance or out-of-space conditions. The load balancer will take care of initial placement as well as future migrations based on actual workload measurements and space utilization.
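
A minimal pyVmomi sketch of that aggregation step could look like the following. It reuses the content object from the earlier HA snippet, and the datacenter, datastore cluster and datastore names are placeholders; CreateStoragePod() and MoveIntoFolder_Task() are the relevant API calls here, but verify them against your SDK version.

```python
from pyVmomi import vim

# 'content' is assumed to come from an existing SmartConnect session.
# Find the datacenter (placeholder name).
dc = next(entity for entity in content.rootFolder.childEntity
          if isinstance(entity, vim.Datacenter) and entity.name == "DC01")

# Resource aggregation: create the datastore cluster (storage pod)...
pod = dc.datastoreFolder.CreateStoragePod(name="DatastoreCluster01")

# ...and move a couple of existing datastores into it (placeholder names)
datastores = [ds for ds in dc.datastore
              if ds.name in ("Datastore01", "Datastore02", "Datastore03")]
pod.MoveIntoFolder_Task(datastores)
```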

The goal of Initial Placement is to speed up the provisioning process by automating the selection of an individual datastore and leaving the user with the much smaller-scale decision of selecting a Datastore Cluster. SDRS selects a particular datastore within a Datastore Cluster based on space utilization and I/O capacity. In an environment with multiple seemingly identical datastores, initial placement can be a difficult and time-consuming task for the administrator. Not only will the datastore with the most available disk space need to be identified, but it is also crucial to ensure that the addition of this new virtual machine does not result in I/O bottlenecks. SDRS takes care of all of this and substantially lowers the amount of operational effort required to provision virtual machines; that is the true value of SDRS.
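
Purely to illustrate the idea, and definitely not VMware’s actual placement algorithm, the toy sketch below picks a datastore from a cluster by filtering out candidates that would breach a space threshold and then preferring the lowest observed latency; all names and numbers are made up.

```python
# Toy illustration of initial placement, not the real SDRS algorithm
datastores = [
    {"name": "Datastore01", "capacity_gb": 1024, "used_gb": 700, "latency_ms": 12},
    {"name": "Datastore02", "capacity_gb": 1024, "used_gb": 400, "latency_ms": 18},
    {"name": "Datastore03", "capacity_gb": 1024, "used_gb": 500, "latency_ms": 6},
]

def place(vm_size_gb, candidates, space_threshold=0.8):
    # Keep only datastores that stay below the space threshold after placement
    fits = [d for d in candidates
            if (d["used_gb"] + vm_size_gb) / d["capacity_gb"] <= space_threshold]
    if not fits:
        raise RuntimeError("No datastore can take this VM without breaching the threshold")
    # Among those, prefer the datastore with the lowest observed latency
    return min(fits, key=lambda d: d["latency_ms"])

print(place(100, datastores)["name"])   # "Datastore03" with this made-up data
```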

However, it is probably safe to assume that many of you are most excited about the load balancing capabilities SDRS offers. SDRS can operate in two distinct modes: No Automation (manual mode) or Fully Automated. Where initial placement reduces complexity in the provisioning process, load balancing addresses imbalances within a datastore cluster. Prior to vSphere 5.0, placement of virtual machines was often based on current space consumption or the number of virtual machines on each datastore. I/O capacity monitoring and space utilization trending was often regarded as too time consuming. Over the years, we have seen this lead to performance problems in many environments and, in some cases, even result in downtime because a datastore ran out of space. SDRS load balancing helps prevent these unfortunately common scenarios by making placement recommendations based on both space utilization and I/O capacity when the configured thresholds are exceeded. Depending on the selected automation level, these recommendations will be automatically applied by SDRS or will need to be applied by the administrator.

Although we see load balancing as a single feature of SDRS, it actually consists of two separately configurable options. When either of the configured thresholds for Utilized Space (80% by default) or I/O Latency (15 milliseconds by default) is exceeded, SDRS will make recommendations to prevent problems and resolve the imbalance in the datastore cluster. I/O load balancing can even be explicitly disabled if required.
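
These thresholds and the automation level can also be configured through the API. The pyVmomi sketch below enables SDRS on a datastore cluster in fully automated mode with the default thresholds mentioned above; it assumes the content and pod objects from the earlier examples, and the vim.storageDrs.* type and property names should be verified against your SDK version.

```python
from pyVmomi import vim

# 'content' and 'pod' are assumed to come from the earlier examples
spec = vim.storageDrs.ConfigSpec(
    podConfigSpec=vim.storageDrs.PodConfigSpec(
        enabled=True,
        defaultVmBehavior="automated",      # use "manual" for No Automation
        ioLoadBalanceEnabled=True,          # set to False to disable I/O load balancing
        spaceLoadBalanceConfig=vim.storageDrs.SpaceLoadBalanceConfig(
            spaceUtilizationThreshold=80),  # percent
        ioLoadBalanceConfig=vim.storageDrs.IoLoadBalanceConfig(
            ioLatencyThreshold=15)))        # milliseconds

content.storageResourceManager.ConfigureStorageDrsForPod_Task(
    pod=pod, spec=spec, modify=True)
```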

Before anyone forgets, SDRS can be enabled on fully populated datastores and environments. It is also possible to add fully populated datastores to existing datastore clusters. It is a great way to solve actual or potential bottlenecks in any environment with minimal required effort or risk.

Datastore Maintenance Mode is one of those features that you will typically not use often, but you will appreciate it when you need it. Datastore Maintenance Mode can be compared to Host Maintenance Mode: when a datastore is placed in Maintenance Mode, all registered virtual machines on that datastore are migrated to the other datastores in the datastore cluster. Typical use cases are data migration to a new storage array or maintenance on a LUN, such as migration to another RAID group.
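
Programmatically this boils down to a single call on the datastore object. The pyVmomi sketch below assumes the datastore is already part of an SDRS-enabled datastore cluster; the datastore name is a placeholder and content again comes from an existing session.

```python
from pyVmomi import vim

# 'content' is assumed to come from an existing SmartConnect session
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
ds = next(d for d in view.view if d.name == "Datastore02")

# Entering maintenance mode makes SDRS generate (and, depending on the
# automation level, apply) migration recommendations for all registered
# virtual machines on this datastore
result = ds.DatastoreEnterMaintenanceMode()
```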

Affinity Rules enable control over which virtual disks should or should not be placed on the same datastore within a datastore cluster in accordance with your best practices and/or availability requirements. By default, a virtual machine’s virtual disks are kept together on the same datastore.
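
As an example of overriding that default through the API, the pyVmomi sketch below disables intra-VM affinity for a single VM so that its disks may be spread across datastores in the cluster. It reuses the content, pod and find_vm_by_name placeholders from the earlier snippets, and the vim.storageDrs.* names should again be checked against your SDK version.

```python
from pyVmomi import vim

# 'content', 'pod' and find_vm_by_name() are placeholders from earlier examples
vm = find_vm_by_name(content, "DB01")

# Override the default "keep virtual disks together" behavior for this VM
vm_override = vim.storageDrs.VmConfigSpec(
    operation="add",
    info=vim.storageDrs.VmConfigInfo(vm=vm, intraVmAffinity=False))

spec = vim.storageDrs.ConfigSpec(vmConfigSpec=[vm_override])
content.storageResourceManager.ConfigureStorageDrsForPod_Task(
    pod=pod, spec=spec, modify=True)
```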

For those who want more details, Frank Denneman wrote an excellent series about Datastore Clusters which might interest you:

Part 1: Architecture and design of datastore clusters.
Part 2: Partially connected datastore clusters.
Part 3: Impact of load balancing on datastore cluster configuration.
Part 4: Storage DRS and Multi-extents datastores.
Part 5: Connecting multiple DRS clusters to a single Storage DRS datastore cluster.
Part 6: Aggregating datastores from multiple storage arrays into one Storage DRS datastore cluster.


The following video will give an overview of the above-mentioned features… worth checking out.

KB article about SvMotion / VDS / HA problem republished with script to mitigate!

Just a second ago, the GSS/KB team republished the KB article that explains the vSphere 5.0 problem around SvMotion / vDS / HA. I wrote about this problem various times and would like to refer you to those articles for more details. What I want to point out here, though, is that the KB article now has a script attached which will help prevent problems until a full fix is released. This script is basically the script that William Lam wrote, but it has been fully tested and vetted by VMware. For those running vSphere 5.0 and using SvMotion on VMs attached to distributed switches, I urge you to use the script. I expect that the PowerCLI version will also be attached soon.

http://kb.vmware.com/kb/2013639