RVTools 3.7 is here!

It has been a year since the last version. This weekend Rob de Vries finally released RVTools 3.7. With over 400,000 downloads so far, this is without question the most widely used free health check tool out there. And there is a good reason for it: it is also one of the best tools for getting valuable insights into your environment. I've personally been using it since the very first version and have discovered a bunch of potential problems at customer sites with it. Make sure to follow RVTools on Twitter for any updates.

Version 3.7 (March 2015)

  • VI SDK reference changed from 5.0 to 5.5
  • Extended the timeout value from 10 to 20 minutes for really big environments
  • New field VM Folder on vCPU, vMemory, vDisk, vPartition, vNetwork, vFloppy, vCD, vSnapshot and vTools tabpages
  • On vDisk tabpage new Storage IO Allocation Information
  • On vHost tabpage new fields: service tag (serial #) and OEM specific string
  • On vNic tabpage new field: Name of (distributed) virtual switch
  • On vMultipath tabpage added multipath info for path 5, 6, 7 and 8
  • On vHealth tabpage new health check: Multipath operational state
  • On vHealth tabpage new health check: Virtual machine consolidation needed check
  • On vInfo tabpage new fields: boot options, firmware and Scheduled Hardware Upgrade Info
  • On statusbar last refresh date time stamp
  • On vHealth tabpage: Search datastore errors are now visible as health messages
  • You can now export the csv files separately from the command line interface (just like the xls export)
  • You can now set an auto refresh data interval in the preferences dialog box
  • All datetime columns are now formatted as yyyy/mm/dd hh:mm:ss
  • The export dir / filenames now have a formatted datetime stamp yyyy-mm-dd_hh:mm:ss
  • Bug fix: on dvPort tabpage not all networks were displayed
  • Overall improved debug information


Get your download engines running, vSphere 6.0 is here!

Yes, the day is finally here: vSphere 6.0 / SRM / VSAN (and more) are now available. So where do you find it? Well, that is simple: head over to the VMware download pages.

Have fun!

All Flash VSAN – One versus multiple disk groups

A while ago I wrote this article on the topic of “one versus multiple disk groups“. The summary was that you can start with a single disk group, but that from a failure domain perspective having multiple disk groups is definitely preferred. There could also be a benefit from a performance standpoint.

So the question now is: what about all-flash VSAN? First of all, the same rules apply: a maximum of 5 disk groups, each with 1 SSD for caching and up to 7 devices for capacity. There is something extra to consider though. It isn’t something I was aware of until I read the excellent Design and Sizing Guide by Cormac. It states the following:

In version 6.0 of Virtual SAN, if the flash device used for the caching layer in all-flash configurations is less than 600GB, then 100% of the flash device is used for cache. However, if the flash cache device is larger than 600GB, then only 600GB of the device is used for caching. This is on a per-disk group basis.

Now for the majority of environments this won’t really be an issue as they typically don’t hit the above limit, but it is good to know when doing your design/sizing exercise. The recommendation of a 10% cache-to-capacity ratio still stands, and this is based on used capacity before FTT. If you have a requirement for a total of 100TB, then with FTT=1 that is roughly 50TB of usable capacity, which means you will need a total of 5TB of flash cache. With 10 hosts that would be 500GB per host, which is below the limit. But with 5 hosts that would be 1TB per host, which is above the 600GB mark and would result in 400GB per host going unused.

When there is a requirement for more than 600GB of write cache capacity in an all-flash configuration, you will need to create multiple disk groups. Personally I would always recommend this anyway. So when you do the sizing, make sure to take this into consideration and create multiple disk groups when you have the chance! A quick sketch of the math follows below.
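To make the sizing exercise easy to repeat with your own numbers, here is a minimal sketch in Python. The 10% ratio and the 600GB per disk group cache limit come from the sizing guidance quoted above; the function name and everything else in it is just illustrative:

```python
import math

# Minimal all-flash VSAN cache sizing sketch, using the numbers from the
# example above. The 10% ratio and 600GB per disk group cache limit come
# from the sizing guidance; the rest is illustrative.

CACHE_RATIO = 0.10      # recommended cache : capacity ratio (used capacity before FTT)
CACHE_LIMIT_GB = 600    # usable write cache per all-flash disk group
MAX_DISK_GROUPS = 5     # VSAN maximum number of disk groups per host

def cache_sizing(raw_capacity_tb, ftt, hosts):
    usable_tb = raw_capacity_tb / (ftt + 1)          # FTT=1 roughly halves usable capacity
    total_cache_gb = usable_tb * 1000 * CACHE_RATIO  # 10% of used capacity before FTT
    per_host_gb = total_cache_gb / hosts
    # A single disk group only uses 600GB of its cache device, so split when needed
    groups = max(1, math.ceil(per_host_gb / CACHE_LIMIT_GB))
    assert groups <= MAX_DISK_GROUPS, "cache requirement exceeds 5 disk groups per host"
    return per_host_gb, groups

for hosts in (10, 5):
    per_host, groups = cache_sizing(raw_capacity_tb=100, ftt=1, hosts=hosts)
    print(f"{hosts} hosts: {per_host:.0f}GB cache per host -> {groups} disk group(s)")

# Output:
# 10 hosts: 500GB cache per host -> 1 disk group(s)
# 5 hosts: 1000GB cache per host -> 2 disk group(s)
```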

Virtual Volumes primer

I was digging through my blog for a link to a Virtual Volumes primer article and I realized I never wrote one. I did write an article in 2012 describing what Virtual Volumes (VVol) is, but that is it. I am certain that Virtual Volumes is a feature that will be heavily used with vSphere 6.0 and beyond, so it was time to write a primer. What is Virtual Volumes about? What does it bring to the table?

First and foremost, Virtual Volumes was developed to make your life (as a vSphere admin) and that of the storage administrator easier. This is done by providing a framework that enables the vSphere administrator to assign policies to virtual machines or virtual disks. In these policies, capabilities of the storage array can be defined. These capabilities can be things like snapshotting, deduplication, RAID level, thin/thick provisioning and so on. What is offered to the vSphere administrator is up to the storage administrator, and of course up to what the storage system can offer to begin with. When a virtual machine is deployed and a policy is assigned, the storage system will enable certain functionality of the array based on what was specified in the policy. So there is no longer a need to assign capabilities to a LUN which holds many VMs; instead you get control at a per-VM or even per-VMDK level. So how does this work? Well, let’s take a look at an architectural diagram first.

[Diagram: Virtual Volumes architecture]

The diagram shows a number of components which are important in the VVol architecture. Let’s list them:

  • Protocol Endpoints aka PE
  • Virtual Datastore and a Storage Container
  • Vendor Provider / VASA
  • Policies
  • Virtual Volumes

Let’s take a look at each of these in the above order. Protocol Endpoints, what are they?

Protocol Endpoints are literally the access points to your storage system. All I/O to virtual volumes is proxied through a Protocol Endpoint, and you can have 1 or more of these per storage system, if your storage system supports having multiple of course. (Implementations will vary per vendor.) PEs are compatible with different protocols (FC, FCoE, iSCSI, NFS), and if you ask me, with Virtual Volumes that whole protocol discussion will come to an end. You could see a Protocol Endpoint as a “mount point” or a device, and yes, they will count towards your maximum number of devices per host (256). (Virtual Volumes themselves won’t count towards that!)

Next up is the Storage Container. This is the place where you store your virtual machines, or better said, where your virtual volumes end up. The Storage Container is a logical construct on the storage system and is represented within vSphere as a “virtual datastore”. You need at least 1 per storage system, but you can have many if desired. To this Storage Container you can apply capabilities. So if you would like your virtual volumes to be able to use array-based snapshots, the storage administrator will need to assign that capability to the storage container. Note that a storage administrator can grow a storage container without even informing you. A storage container isn’t formatted with VMFS or anything like that, so you don’t need to increase the volume in order to use the space.

But how does vSphere know which container is capable of doing what? In order to discover a storage container and its capabilities, we need to be able to talk to the storage system first. This is done through the vSphere APIs for Storage Awareness (VASA). You simply point vSphere to the Vendor Provider, and the vendor provider will report to vSphere what’s available; this includes both the storage containers and the capabilities they possess. Note that a single Vendor Provider can manage multiple storage systems, which in turn can have multiple storage containers with many capabilities. These vendor providers also come in different flavours: for some storage systems it is part of their software, but for others it will come as a virtual appliance that sits on top of vSphere.

Now that vSphere knows which systems there are and which containers are available with which capabilities, you can start creating policies. These policies are a combination of capabilities and will ultimately be assigned to virtual machines, or even to individual virtual disks. You can imagine that in some cases you would like Quality of Service enabled to ensure performance for a VM, while in other cases it isn’t as relevant but you need to have a snapshot every hour. All of this is enabled through these policies. No longer will you be maintaining that spreadsheet with all your LUNs, which data services were enabled and whatnot; you simply assign a policy. (Yes, a proper naming scheme will be helpful when defining policies.) When requirements change for a VM, you don’t move the VM around; you change the policy and the storage system will do what is required to make the VM (and its disks) compliant again with the policy. Not the VM really, but its Virtual Volumes.
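To make this a bit more tangible, here is a tiny conceptual sketch in Python. The container names and capability strings are invented for illustration, and the real matching is done by vSphere/SPBM via VASA, not by an API like this:

```python
# Conceptual sketch only: container names and capability strings are made up;
# the actual matching is handled by vSphere/SPBM through the VASA APIs.

# Capabilities per storage container, as the vendor provider would report
# them to vSphere.
containers = {
    "gold-container":   {"snapshot", "replication", "qos", "thin"},
    "silver-container": {"snapshot", "thin"},
}

# A policy is simply a combination of required capabilities, assigned per VM
# or even per individual virtual disk.
policies = {
    "db-policy":   {"snapshot", "qos"},  # hourly snapshots plus QoS
    "test-policy": {"thin"},             # thin provisioning is enough
}

def compliant_containers(policy_name):
    """Return the containers that offer every capability the policy requires."""
    required = policies[policy_name]
    return [name for name, caps in containers.items() if required <= caps]

print(compliant_containers("db-policy"))    # ['gold-container']
print(compliant_containers("test-policy"))  # ['gold-container', 'silver-container']
```

The point is simply that a policy is a set of requirements, and vSphere places (and keeps) virtual volumes only on containers that can satisfy all of them.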

The great thing about Virtual Volumes is the fact that you now have granular control over your workloads. Some storage systems will even allow you to assign I/O profiles to your VM to ensure optimal performance. Also, when you delete a VM, the virtual volumes will be deleted and the space will automatically be reclaimed by the storage system; no more fiddling with vmkfstools. Another great thing about Virtual Volumes is that even when you delete something within your VM, that space can also be reclaimed by the storage system, provided your storage system supports T10 UNMAP.

That is, in short, how Virtual Volumes work and what they bring. You as the vSphere administrator create policies and assign them to VMs, while the storage administrator manages capacity and capabilities. Easy, right?!

Outlook 2016 OSX requires activation?

I ran into this issue where Outlook 2016 for OS X claimed it needed to be activated. Very annoying, as I don’t have an Office 365 account and that is what it requires before you can use it. It seems this problem was caused by the fact that I had used an older version of the preview/beta in the past. It can simply be solved by taking the following steps:

  1. Open Terminal
  2. Type: defaults delete com.microsoft.Outlook
  3. Type: killall cfprefsd
  4. Exit the Terminal session and launch Outlook

This basically deletes all the current settings for Outlook and wipes the cache. Now you can enter your details again and it will all work as expected.