
Yellow Bricks

by Duncan Epping


Server

Project Octopus Beta

Duncan Epping · May 3, 2012 ·

I’ve been using Octopus internally for months, as I discussed in my enterprise social collaboration post, and I think it is an awesome tool! I would recommend that everyone interested in an enterprise-level file-sharing solution, not unlike Dropbox, sign up for the beta; Octopus is the way to go!

Project Octopus is the successful marriage of Zimbra and Mozy technologies, with some additional code jointly developed by the two teams. Prior to GA release, it will be folded into Horizon, providing a centralized policy and entitlement engine that will broker user access to applications, virtual desktops and data resources. The result will be a simple, seamless end-user experience when accessing work resources across private and public clouds on whatever device the user chooses.

The beta is open to all and will last through VMworld. Due to limited support resources, priority will be placed on customers with active engagements.

With vSphere 5.0 and HA can I share datastores across clusters?

Duncan Epping · Apr 30, 2012 ·

I have had this question multiple times by now, so I figured I would write a short blog post about it. The question is whether you can share datastores across clusters with vSphere 5.0 and HA enabled. It comes from the fact that HA has a new feature called “datastore heartbeating”, which uses the datastore as a communication mechanism.

The answer is short and sweet: Yes.

For each cluster a folder is created. The folder structure is as follows:

/<root of datastore>/.vSphere-HA/<cluster-specific-directory>/

The cluster-specific directory name is based on the UUID of the vCenter Server, the MoID of the cluster, a random 8-character string, and the name of the host running vCenter Server. So even if you use dozens of vCenter Servers, there is no need to worry about naming collisions.
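To see why collisions are so unlikely, here is a small illustrative sketch in Python. The layout of the directory name is hypothetical (the real format is internal to vSphere HA); the point is that combining a vCenter UUID, a cluster MoID, a random 8-character string, and a hostname makes duplicate names across clusters effectively impossible.

```python
import random
import string

def ha_cluster_dir(vc_uuid: str, cluster_moid: str, vc_hostname: str) -> str:
    """Build an illustrative cluster-specific directory name.

    Hypothetical layout, not VMware's actual algorithm; it just combines
    the four components the post describes.
    """
    # random 8-character string, regenerated per cluster
    rand = "".join(random.choices(string.ascii_lowercase + string.digits, k=8))
    return f"{vc_uuid}-{cluster_moid}-{rand}-{vc_hostname}"

# Example path on a datastore (datastore name and inputs are made up):
path = f"/vmfs/volumes/datastore1/.vSphere-HA/{ha_cluster_dir('564d1234', 'domain-c7', 'vcenter01')}"
```

Even two clusters with the same MoID managed by different vCenter Servers end up with distinct directories, because the vCenter UUID and the random component differ.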

Each folder contains the files HA needs and uses, as shown in the screenshot below. So there is no need to worry about sharing datastores across clusters. Frank also wrote an article about this from a Storage DRS perspective; make sure you read it!

PS: all these details can be found in our Clustering Deepdive book… find it on Amazon.

What is das.maskCleanShutdownEnabled about?

Duncan Epping · Apr 25, 2012 ·

I had a question today about what the vSphere HA advanced setting das.maskCleanShutdownEnabled does. I described why it was introduced for Stretched Clusters, but will give a short summary here:

Two advanced settings were introduced in vSphere 5.0 Update 1 to enable HA to fail over virtual machines located on datastores that are in a Permanent Device Loss (PDL) state. This is very specific to stretched cluster environments. The first setting is configured at the host level: “disk.terminateVMOnPDLDefault”. It can be configured in /etc/vmware/settings and should be set to “True”. This setting ensures that a virtual machine is killed when the datastore it resides on is in a PDL state.
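For reference, enabling the host-level setting amounts to a single line in /etc/vmware/settings on each host (a sketch; verify the exact syntax against the documentation for your build before applying it):

```
disk.terminateVMOnPDLDefault = "True"
```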

The second setting is a vSphere HA advanced setting called “das.maskCleanShutdownEnabled”. This setting is also not enabled by default and will need to be set to “True”. It allows HA to trigger a restart response for a virtual machine which has been killed automatically due to a PDL condition, by letting HA differentiate between a virtual machine that was killed due to the PDL state and a virtual machine that was powered off by an administrator.

But why is “das.maskCleanShutdownEnabled” needed for HA? From a vSphere HA perspective there are two different types of “operations”. The first is a user-initiated power-off (clean) and the other is a kill. When a virtual machine is powered off by a user, part of the process is setting the property “runtime.cleanPowerOff” to true.

Remember that when “disk.terminateVMOnPDLDefault” is configured, your VMs will be killed when they issue I/O. This is where the problem arises: in a PDL scenario it is impossible to set “runtime.cleanPowerOff”, as the datastore, and with it the .vmx file, is unreachable. Since the property defaults to “true”, vSphere HA would assume the VMs were cleanly powered off and would take no action. By setting “das.maskCleanShutdownEnabled” to true, you avoid a scenario where all VMs are killed but never restarted: you are telling vSphere HA to assume that VMs were not shut down cleanly. In that case vSphere HA will assume VMs were killed unless the property is explicitly set.
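The decision logic above can be sketched as follows. This is my reading of the post expressed as Python, not VMware’s implementation; the property and setting names come from the post, while the function and its parameters are hypothetical.

```python
from typing import Optional

def should_restart(vm_powered_off: bool,
                   clean_power_off: Optional[bool],
                   mask_clean_shutdown_enabled: bool) -> bool:
    """Sketch of vSphere HA's restart decision for a VM found powered off.

    clean_power_off models runtime.cleanPowerOff: True for a user-initiated
    power-off, None when the property could not be written (e.g. the
    datastore was in PDL when the VM was killed).
    """
    if not vm_powered_off:
        return False  # VM is still running; nothing to do
    if clean_power_off is True:
        return False  # user powered the VM off on purpose; leave it alone
    if clean_power_off is None:
        # Property unset. The default assumption is a clean shutdown;
        # das.maskCleanShutdownEnabled flips that assumption so the
        # PDL-killed VM gets restarted.
        return mask_clean_shutdown_enabled
    return True  # explicitly marked unclean: restart
```

Without das.maskCleanShutdownEnabled the third branch returns False, which is exactly the “killed but never restarted” scenario the post warns about.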

If you have a stretched cluster environment, make sure to configure these settings accordingly!

Cool tool update: RVTools 3.3 released!

Duncan Epping · Apr 24, 2012 ·

Rob de Veij just published RVTools 3.3. I know many of you are using it and I definitely suggest downloading the latest version! RVTools has been downloaded more than 100,000 times, so it is definitely worth checking out if you haven’t already! Here are the changes in this release:

Version 3.3 (April, 2012)

  • GetWebResponse timeout value changed from 5 minutes to 10 minutes (for very large environments)
  • New tab page with HBA information
  • On the vDatastore tab the definitions of the Provisioned MB and In Use MB columns were confusing; this has been changed
  • RVToolsSendMail now accepts multiple recipients (semicolon is used as separator)
  • Folder information for VMs and templates is now visible on the vInfo tab page
  • Bugfix: data in comboboxes on the filter form is now sorted
  • Bugfix: problem with API version 2.5.0 solved
  • Bugfix: improved exception handling on the vCPU tab
  • Bugfix: improved exception handling on the vDatastore tab

VMworld call for papers just opened up…

Duncan Epping · Apr 18, 2012 ·

The call for papers for VMworld just opened up and I am finalizing the two sessions I will submit. Besides these two sessions, I suspect I will be part of the expert program again, meaning that I will be available for 15-minute one-on-ones and several group discussions. Currently I am planning to submit the following sessions:

  • DR of the Cloud – In this session Chris Colotti and I will focus on vCloud Director infrastructure resilience. We will go over the concept Chris and I developed and discuss the recommended practices and operational aspects of DR of the Cloud.
  • Architecting and Operating a vSphere Metro Storage Cluster – In this session Lee Dilworth and I will discuss the design and operational considerations for vSphere Metro Storage Cluster environments. Our focus will primarily be vSphere though!

I am considering submitting another session. I know many have enjoyed the open-floor/Q&A style sessions, but the main topic was always HA and DRS, aka vSphere Clustering. Currently I am thinking “Cloud Infrastructure Q&A”… but if you could pick a topic, what would you like to see, and who would you love to see on the panel? (Max 4 people.) I’ll try to make it happen!



About the Author

Duncan Epping is a Chief Technologist and Distinguished Engineering Architect at Broadcom. Besides writing on Yellow-Bricks, Duncan is the co-author of the vSAN Deep Dive and the vSphere Clustering Deep Dive book series. Duncan is also the host of the Unexplored Territory Podcast.



Copyright Yellow-Bricks.com © 2026