Every year a percentage of VMworld sessions is selected based on community voting. This voting process started this week, and as I submitted a couple of sessions myself I would like to draw some attention to them and ask you to consider voting for them… that is, if you like the sessions of course. Below you can find the details of the two sessions I personally submitted. Just log in and search on the session ID, which is in bold below.
- Frank Denneman and Duncan Epping – Five Functions of Software Defined Availability (**4535**)
In this session Frank and Duncan will discuss five functions of Software Defined Availability that are part of vSphere 6.0. For each of these functions, scenarios will be discussed to explain how vSphere can help improve the availability of your workloads. These range from "how Site Recovery Manager and Storage DRS are loosely coupled but tightly integrated" with vSphere 6.0 to "how vSphere HA responds in the case of a certain failure". Be prepared to get into the trenches of workload availability…
- Lee Dilworth and Duncan Epping – Five Common Customer Use Cases for Virtual SAN (**4650**)
In this quick talk Lee Dilworth and Duncan Epping will discuss the five most common use cases seen within the Virtual SAN install base. The session will not just focus on the use cases themselves but will also include common hardware configuration details, to provide a better understanding of the flexibility that Virtual SAN offers.

I’ve been thinking about the term Software Defined Data Center for a while now. “Software defined” is a great term, but many would argue that things have been defined by software for a long time already. When talking about the SDDC with customers, it is typically described as the ability to abstract, pool, and automate all aspects of an infrastructure. Those are very important factors, but to me they are not the most important ones, as they don’t necessarily speak to the agility and flexibility a solution like this should bring. But what is an even more important aspect?
With Virtual Volumes, placement of a VM (or VMDK) is based on how the policy is constructed and what is defined in it. The Storage Policy Based Management engine gives you the flexibility to define policies any way you like. Of course, you are limited to what your storage system is capable of delivering, but from the vSphere platform point of view you can create many different variations. If you specify that the object needs to be thin provisioned, has a specific IO profile, or needs to be deduplicated, then those requirements are passed down to the storage system, which makes its placement decisions based on them and ensures the demands can be met. As stated earlier, requirements like QoS and availability are passed down as well; think of latency, IOPS, and how many copies of an object are needed (the number of 9s of resiliency). On top of that, when requirements change, or when for whatever reason the SLA is breached, a requirements-driven environment will assess the situation and remediate to ensure the requirements are met. A minimal sketch of this placement-and-remediation loop follows below.
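To make the requirements-driven model concrete, here is a purely illustrative Python sketch. None of the names below (StoragePolicy, Datastore, place_object, remediate) come from the actual vSphere or SPBM APIs; they are hypothetical stand-ins showing the idea: a policy captures requirements, placement picks a target whose advertised capabilities satisfy them, and remediation re-runs that check when something changes.

```python
# Illustrative sketch only: every class and function name here is
# hypothetical and does NOT exist in the real vSphere/SPBM APIs.

from dataclasses import dataclass


@dataclass
class StoragePolicy:
    """Admin-defined requirements, passed down to the storage system."""
    thin_provisioned: bool = True
    deduplicated: bool = False
    max_latency_ms: float = 5.0   # QoS requirement: latency ceiling
    min_iops: int = 1000          # QoS requirement: performance floor
    copies: int = 2               # availability: number of object copies


@dataclass
class Datastore:
    """Capabilities a storage system advertises back to the platform."""
    name: str
    supports_thin: bool
    supports_dedup: bool
    latency_ms: float
    iops: int
    max_copies: int

    def satisfies(self, p: StoragePolicy) -> bool:
        return ((not p.thin_provisioned or self.supports_thin)
                and (not p.deduplicated or self.supports_dedup)
                and self.latency_ms <= p.max_latency_ms
                and self.iops >= p.min_iops
                and self.max_copies >= p.copies)


def place_object(policy: StoragePolicy,
                 datastores: list[Datastore]) -> Datastore:
    """Requirements-driven placement: the object lands wherever the
    advertised capabilities meet everything the policy demands."""
    for ds in datastores:
        if ds.satisfies(policy):
            return ds
    raise RuntimeError("no datastore can meet the policy requirements")


def remediate(policy: StoragePolicy, current: Datastore,
              datastores: list[Datastore]) -> Datastore:
    """When requirements change or the SLA is breached, reassess and
    move the object so the demands are met again."""
    if current.satisfies(policy):
        return current                       # still compliant, nothing to do
    return place_object(policy, datastores)  # find a compliant home


if __name__ == "__main__":
    gold = StoragePolicy(max_latency_ms=2.0, min_iops=5000, copies=3)
    pool = [
        Datastore("bronze", True, False, 8.0, 2000, 2),
        Datastore("gold",   True, True,  1.5, 8000, 3),
    ]
    print(place_object(gold, pool).name)  # -> gold
```

The point of the sketch is the direction of the conversation: the administrator states requirements once in the policy, and the infrastructure, not the admin, is responsible for finding and keeping a placement that meets them.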