Want a free digital copy of the vSphere Design Pocketbook?

I just noticed that PernixData is offering free copies of the vSphere Design Pocketbook. The only thing you will need to do is register here. I believe at VMworld they handed out roughly 1500 copies, and they have been very well received. For those who don’t know, this book was “authored by the community”, with people like Frank Denneman, Cormac Hogan, Jason Nash, Eric Sloof, Vaughn Stewart and myself deciding which considerations were in and which were out. (Somehow we needed to ensure messages weren’t conflicting or potentially “damaging” to an environment.)

Hopefully PernixData will have some more physical copies in Barcelona, but just in case they don’t… sign up for the free e-copy!

How do you know where an object is located with Virtual SAN?

You must have been wondering the same thing after reading the introduction to Virtual SAN. Last week at VMworld I received many questions on this topic, so I figured it was time for a quick blog post. How do you know where a storage object resides with Virtual SAN when you are striping across multiple disks and keeping copies on multiple hosts for availability? Even with just multiple hosts for resiliency this can be difficult to grasp: where exactly are things placed? The diagram gives an idea, but only from an availability perspective (in this example “failures to tolerate” is set to 1). If you also configure a stripe width of 2 disks, imagine what that picture would look like. (Before I published this article, I spotted this excellent primer by Cormac on this exact topic…)

Luckily you can use the vSphere Web Client to figure out where objects are placed:

  • Go to your cluster object in the Web Client
  • Click “Monitor” and then “Virtual SAN”
  • Click “Virtual Disks”
  • Click your VM and select the object

The below screenshot depicts what you could potentially see. In this case the policy was configured with “1 host failure to tolerate” and “disk striping set to 2”. I think the screenshot explains it pretty well, but let’s go over it.

The “Type” column shows what it is: a “witness” (no data) or a “component” (data). The “Component state” column shows whether it is available (active) or not at the moment. The “Host” column shows on which host it currently resides, and the “SSD Disk Name” column shows which SSD is used for read caching and write buffering. If you scroll to the right you can also see on which magnetic disk the data is stored, in the column called “Non-SSD Disk Name”.

Now in our example below you can see that “Hard disk 2” is configured as RAID 1, immediately followed by RAID 0. The “RAID 1” refers to availability, in this case the “failures to tolerate” setting, and the “RAID 0” is all about disk striping. As we configured “failures to tolerate” to 1 you see two copies of the data, and because we asked to stripe across two disks for performance you see a “RAID 0” underneath each copy. Note that this is just an example to illustrate the concept, not a best practice or recommendation; that should be based on your requirements! Last but not least we see the “witness”, which is used in case of a host failure. If host 10.20.177.19 were to fail or somehow become isolated from the network, the witness would be used by host 10.20.177.17 to claim ownership. Makes sense right?

Virtual SAN object location
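If you prefer the command line over the Web Client: newer builds of the Ruby vSphere Console (RVC) that ship with the vCenter Server Appliance include a set of vsan.* commands that can show the same layout. This is just a rough sketch, assuming your build has these commands; the login and inventory path depend on your environment:

rvc administrator@localhost
vsan.vm_object_info /localhost/<Datacenter>/vms/<VM name>

This prints the RAID_1 / RAID_0 component tree per object, including the witness and on which host and disks each component lives.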

Hope this helps you understand Virtual SAN object location a bit better… When I have time available, I will try to dive a bit more into the details of Storage Policy Based Management.

vSphere 5.5 nuggets: High Availability Enhancement

There aren’t a lot of changes in 5.5 when it comes to vSphere High Availability aka HA, but one is worth noting. As most of you are probably aware, vSphere HA in the past did nothing with VM-to-VM affinity or anti-affinity rules. Typically, for people using “affinity” rules this was not an issue, but those using “anti-affinity” rules did see it as a problem. They created these rules to ensure specific virtual machines would never run on the same host, but vSphere HA would simply ignore the rules when a failure occurred and place the VMs “randomly”. With vSphere 5.5 this has changed: vSphere HA is now “anti-affinity” aware. In order to ensure anti-affinity rules are respected you will need to set an advanced setting:

das.respectVmVmAntiAffinityRules - Values: "false" (default) and "true"

Now note that this also means that when you have configured anti-affinity rules, have this advanced setting set to “true”, and somehow there aren’t sufficient hosts available to respect these rules… the rules will still be respected, which could result in HA not restarting a VM. Make sure you understand this potential impact before configuring this setting and these rules.

vSphere 5.5 nuggets: Change Disk.SchedNumReqOutstanding per device!

Always wanted to change Disk.SchedNumReqOutstanding per device instead of per host? Well, with vSphere 5.5 you can! I didn’t know about this either, but my colleague Paudie pointed it out. It is a useful feature when you have several storage arrays and need to tweak these values. Now let’s be clear: I do not recommend tweaking this, but in case you need to, you can now do it per device using esxcli.

Get the current configured value for a specific device:
esxcli storage core device list --device <device>

Set the value for a specific device:
esxcli storage core device set -d <device> -O <value between 1-256>
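To make that a bit more concrete, this is roughly what it looks like for a single device (the device identifier below is just an example, use your own naa/mpx identifier):

esxcli storage core device list -d naa.60a98000572d54724a34642d71325763 | grep -i outstanding
esxcli storage core device set -d naa.60a98000572d54724a34642d71325763 -O 64

The first command shows the currently configured value (listed as the number of outstanding IOs with competing worlds), the second sets it to 64 for just that one device.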

Testing vSphere Virtual SAN in your virtual lab with vSphere 5.5

For those who want to start testing the beta of vSphere Virtual SAN in their lab with vSphere 5.5, I figured it would make sense to describe how I created my nested lab. (Do note that performance will be far from optimal.) I am not going to describe how to install ESXi nested, as there are a billion articles out there that describe how to do that. I suggest creating ESXi hosts with 3 disks each and a minimum of 5GB of memory per host:

  • Disk 1 – 5GB
  • Disk 2 – 20GB
  • Disk 3 – 200GB

After you have installed ESXi and imported a vCenter Server Appliance (my preference for lab usage, so easy and fast to set up!) you add your ESXi hosts to your vCenter Server. Note: add them to the vCenter Server, NOT to a cluster yet.

Login via SSH to each of your ESXi hosts and run the following commands:

  • esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device mpx.vmhba2:C0:T0:L0 --option "enable_local enable_ssd"
  • esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device mpx.vmhba3:C0:T0:L0 --option "enable_local"
  • esxcli storage core claiming reclaim -d mpx.vmhba2:C0:T0:L0
  • esxcli storage core claiming reclaim -d mpx.vmhba3:C0:T0:L0

These commands ensure that the disks are seen as “local” disks by Virtual SAN and that the 20GB disk is seen as an “SSD”, even though it isn’t one. There is another option which might even be better: you can simply add a VMX setting to specify that the disk is an SSD. Check William’s awesome blog post for the how-to.
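You can quickly verify the result of the claim rules; the disk that should act as flash will now report as an SSD:

esxcli storage core device list -d mpx.vmhba2:C0:T0:L0 | grep -iE "is ssd|is local"

And if I remember William’s trick correctly, the VMX alternative boils down to adding a line like scsi0:1.virtualSSD = 1 to the nested ESXi VM (the SCSI id depends on which virtual disk you want to mark as an SSD), but check his post for the exact details.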

After running these commands we will need to make sure the hosts are configured properly for Virtual SAN. First we add them to our vCenter Server, but without adding them to a cluster! So just add them at the Datacenter level.

Now we will properly configure the hosts. We will need to create an additional VMkernel adapter; do this for each of the three hosts (an esxcli alternative is sketched after the list):

  1. Click on your host within the web client
  2. Click “Manage” -> “Networking” -> “VMkernel Adapters”
  3. Click the “Add host networking” icon
  4. Select “VMkernel Network Adapter”
  5. Select the correct vSwitch
  6. Provide an IP-Address and tick the “Virtual SAN” traffic tickbox!
  7. Next -> Next -> Finish
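For those who prefer doing this from the shell, this is roughly the esxcli equivalent (the portgroup name, vmk number and IP address are just examples, adjust to your environment):

esxcli network vswitch standard portgroup add -p VSAN -v vSwitch0
esxcli network ip interface add -i vmk1 -p VSAN
esxcli network ip interface ipv4 set -i vmk1 -I 192.168.1.11 -N 255.255.255.0 -t static
esxcli vsan network ipv4 add -i vmk1

The last command is what tags the VMkernel interface for Virtual SAN traffic.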

When this is configured for all three hosts, configure a cluster:

  1. Click your “Datacenter” object
  2. On the “Getting started” tab click “Create a cluster”
  3. Give the cluster a name and tick the “Turn On” tickbox for Virtual SAN
  4. Also enable HA and DRS if required

Now you should be able to move your hosts into the cluster. With the Web Client in vSphere 5.5 you can simply drag and drop the hosts one by one into the cluster. VSAN will now be automatically configured for these hosts… Nice right? When all configuration tasks are completed, just click on your Cluster object and then “Manage” -> “Settings” -> “Virtual SAN”. Now you should see the number of hosts that are part of the VSAN cluster, the number of SSDs and the number of data disks.
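You can also do a quick sanity check from the shell of any of the hosts:

esxcli vsan cluster get
esxcli vsan storage list

The first command shows whether the host has joined the VSAN cluster (including the cluster and member UUIDs), the second lists the local disks that were claimed by VSAN.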

Now before you get started there is one thing you will need to do, and that is enable “VM Storage Policies” on your cluster / hosts. You can do this via the Web Client as follows:

  • Click the “home” icon
  • Click “VM Storage Policies”
  • Click the little policy icon with the green checkmark, second from the left
  • Select your cluster and click “Enable” and then close

Now note that although you have enabled VM Storage Policies, there are no pre-defined policies. Yes, there is a “default policy”, but you can only see that on the command line. For those interested, just open up an SSH session to one of the hosts and run the following command:

~ # esxcli vsan policy getdefault
Policy Class  Policy Value
------------  --------------------------------------------------------
cluster       (("hostFailuresToTolerate" i1) )
vdisk         (("hostFailuresToTolerate" i1) )
vmnamespace   (("hostFailuresToTolerate" i1) )
vmswap        (("hostFailuresToTolerate" i1) ("forceProvisioning" i1))
~ #
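Sidenote: should you ever want to change one of these defaults from the shell (normally you wouldn’t, VM Storage Policies are the way to go), the same namespace has a setdefault command. Just a sketch, bumping the default for regular virtual disks to tolerate two failures:

esxcli vsan policy setdefault -c vdisk -p '(("hostFailuresToTolerate" i2))'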

Going back to the output: “hostFailuresToTolerate” set to 1 means Virtual SAN can tolerate one host failure before you potentially lose data. In other words, in a 3-node cluster you will have 2 copies of your data and a witness. Now if you would like to have N+2 resiliency instead of N+1, that is fairly straightforward. You do the following:

  • Click the “home” icon
  • Click “VM Storage Policies”
  • Click the “New VM Storage Policy” icon
  • Give it a name, I used “N+2 resiliency” and click “Next”
  • Click “Next” on Rule-Sets and select a vendor, which will be “vSan”
  • Now click <add capability> and select “Number of failures to tolerate” and set it to 2 and click “Next”
  • Click “Next” -> “Finish”

That is it for creating a new profile. Of course you can make these as complex as you want; there are various other options like “Number of disk stripes” and “Flash read cache reservation (%)”. For now I wouldn’t recommend tweaking these too much unless you absolutely understand the impact of changing them.

In order to use the profile, you go to an existing virtual machine, right click it and do the following:

  • Click “All vCenter Actions”
  • Click “VM Storage Service Policies”
  • Click “Manage VM Storage Policies”
  • Select the appropriate policy on “Home VM Storage Policy” and do not forget to hit the “Apply to disks” button
  • Click OK

Now the new policy will be applied to your virtual machine and its disk objects! When deploying a new virtual machine you can also select the correct policy immediately in the provisioning workflow, so that it is deployed correctly from the start.

These are some of the basics for testing VSAN in a virtual environment… now register and get ready to play!