Must attend VMworld sessions 2014

Every year I do a post on the must attend VMworld sessions, and I just realized I had not done this for 2014 yet. So here it is: the list of sessions I feel are most definitely worth attending. I tend to focus on sessions which I know will have solid technical info and great presenters, many of whom I have seen present over the years and respect very much. I tried to limit the list to 20 this year (edit: 21, 22), so of course it could be that your session (or your favorite session) is missing; unfortunately I cannot list them all, as that would defeat the purpose.

Here we go:

  1. STO3008-SPO – Decoupled Storage: Practical Examples of Leveraging Server Flash in a Virtualized Datacenter by Satyam Vaghani and Frank Denneman. What more do I need to say? Both rock stars!
  2. STO1279 – Virtual SAN Architecture Deep Dive. Christian and Christos were the leads on VSAN; who can tell you more about it than they can?
  3. SDDC1176 – Ask the Expert vBloggers featuring Chad Sakac, Scott Lowe, William Lam, myself and moderated by Rick Scherer. This session has been a hit for the past few years and will be one you cannot miss!
  4. STO2996-SPO – The vExpert Storage Game Show featuring Vaughn Stewart, Cormac Hogan, Rawlinson Rivera and many others… It will be educational and entertaining for sure! Not the standard “death by powerpoint” session. If you do want “DBP”, this is not for you!
  5. STP3266 – Web-Scale Converged Infrastructure for Enterprise. Josh Odgers talking web scale for enterprise organizations. Still running legacy apps? Then this is a must-attend.
  6. SDDC2492 – How the New Software-defined Paradigms Will Impact Your vSphere Design. Forbes Guthrie and Scott Lowe talking vSphere design; you bet you will learn something here!
  7. HBC2068 – vCloud Hybrid Service Networking Technical Deep Dive. Want to know more about vCHS networking? I am sure David Hill is going to dive deep!
  8. NET2747 – VMware NSX: Software Defined Networking in the Real World. Chris Wahl and Jason Nash talking networking; what is there not to like?
  9. BCO1893 – Site Recovery Manager and vCloud Automation Center: Self-service DR Protection for the Software-Defined Data Center. Lee Dilworth was my co-presenter at the previous two VMworlds, and he knows what he is talking about! Here he is co-hosting a DR session with one of the BC/DR PMs, Ben Meadowcroft. This will be good.
  10. NET1674 – Advanced Topics & Future Directions in Network Virtualization with NSX. I have seen Bruce Davie present multiple times; always a pleasure and always educational!
  11. STO2496 – vSphere Storage Best Practices: Next-Gen Storage Technologies. Chad and Vaughn in one session… this will be good!
  12. BCO2629 – Site Recovery Manager and vSphere Replication: What’s New Technical Deep Dive. Jeff Hunter and Ken Werneburg are the DR experts in VMware Tech Marketing, so this is worth attending for sure!
  13. HBC2638 – Ten Vital Best Practices for Effective Hybrid Cloud Security by Russel Callen and Matthew Probst… These guys are the vCHS architects; you can bet this will be useful!
  14. STO3162 – Software Defined Storage: Satisfy the Requirements of Your Application at the Granularity of a Virtual Disk with Virtual Volumes (VVols). Cormac Hogan talking VVols with Rochna from Nimble; this is one I would like to see!
  15. STO2480 – Software Defined Storage – The VCDX Way Part II: The Empire Strikes Back. The title by itself is enough of a reason to attend this one… (Wade Holmes and Rolo Rivera)
  16. SDDC3281 – A DevOps Story: Unlocking the Power of Docker with the VMware platform and its ecosystem. You may not know these guys, but I do… Aaron and George are rock stars, and Docker seems to be the new buzzword. Find out what it is about!
  17. VAPP2979 – Advanced SQL Server on vSphere Techniques and Best Practices. Scott and Jeff are the experts when it comes to virtualizing SQL; what more can I say?!
  18. STO2197 – Storage DRS: Deep Dive and Best Practices. Mustafa Uysal is the lead on SDRS/SIOC; I am sure this session will contain some gems!
  19. HBC1534 – Recovery as a Service (RaaS) with vCloud Hybrid Service. David Hill and Chris Colotti talking; always a pleasure to attend!
  20. MGT1876 – Troubleshooting With vCenter Operations Manager (Live Demo). Wondering why your VM is slow? Sam McBride and Praveen Kannan will show you live…
  21. INF1601 – Taking Reporting and Command Line Automation to the Next Level with PowerCLI with Alan Renouf and Luc Dekens. All I would like to know is whether PowerCLI-man is going to be there or not!
  22. MGT1923 – vCloud Automation Center 6 and Storage Policy-Based Management Framework Integration with Rawlinson Rivera and Chen Wei… They are doing things with VCAC and SPBM that have never been seen before!

As stated, some of your favorite sessions may be missing… feel free to leave a suggestion so that others know which sessions they should attend.

Software Defined Storage, which phase are you in?!

Working within R&D at VMware means you typically work with technology which is 1-2 years out, and discuss futures of products which are 2-3 years out. Especially in the storage space a lot has changed. Not just through innovations within the hypervisor by VMware, like Storage DRS, Storage IO Control, VMFS-5, VM Storage Policies (SPBM), vSphere Flash Read Cache and Virtual SAN, but also through partners who build software-based solutions like PernixData (FVP), Atlantis (ILIO) and SanDisk (FlashSoft). Of course there is the whole Server SAN / hyper-converged movement with Nutanix, ScaleIO, Pivot3, SimpliVity and others. Then there is the whole slew of new storage systems, some of which are scale-out and all-flash, others of which focus more on simplicity; here we are talking about Nimble, Tintri, Pure Storage, XtremIO, Coho Data, SolidFire and many many more.

Looking at it from my perspective, I would say there are multiple phases when it comes to the SDS journey:

  • Phase 0 – Legacy storage with NFS / VMFS
  • Phase 1 – Legacy storage with NFS / VMFS + Storage IO Control and Storage DRS
  • Phase 2 – Hybrid solutions (Legacy storage + acceleration solutions or hybrid storage)
  • Phase 3 – Object granular policy driven (scale out) storage

<edit>

Maybe I should have abstracted a bit more:

  • Phase 0 – Legacy storage
  • Phase 1 – Legacy storage + basic hypervisor extensions
  • Phase 2 – Hybrid solutions with hypervisor extensions
  • Phase 3 – Fully hypervisor / OS integrated storage stack

</edit>

I have written about Software Defined Storage multiple times in the last couple of years, and have worked with various solutions which are considered to be “Software Defined Storage”. I have a certain view of what the world looks like. However, when I talk to some of our customers, reality is different: some seem very happy with what they have in Phase 0. Although all of the above is the way of the future, and for some may be reality today, I do realize that Phases 1, 2 and 3 may be far away for many. I would like to invite all of you to share:

  1. Which phase you are in, and where you would like to go.
  2. What you are struggling with most today that is driving you to look at new solutions.

Good Read: Virtual SAN data locality white paper

I was reading the Virtual SAN Data Locality white paper. I think it is a well-written paper, and I really enjoyed it. I figured I would share the link with all of you and provide a short summary. (http://blogs.vmware.com/vsphere/files/2014/07/Understanding-Data-Locality-in-VMware-Virtual-SAN-Ver1.0.pdf)

The paper starts with an explanation of what data locality is (also referred to as “locality of reference”), and explains the different types of latency experienced in Server SAN solutions (network, SSD). It then explains how Virtual SAN caching works, how locality of reference is implemented within VSAN, and why VSAN does not move data around: the cost of doing so is high compared to the benefit. It also demonstrates how VSAN delivers consistent performance, even without a local read cache. The key phrase here is consistent performance, something that is not the case for all Server SAN solutions. In some cases, significant performance degradation is experienced for minutes after a workload has been migrated. As hopefully all of you know, vSphere DRS runs every 5 minutes by default, which means that migrations can and will happen various times a day in most environments. (I have seen environments where 30 migrations a day was not uncommon.)

The paper then explains where and when data locality can be beneficial, primarily when RAM is used and with specific use cases (like View), and explains how CBRC aka View Accelerator (an in-RAM deduplicated read cache) could be used for this purpose. (It does not explain in depth how other Server SAN solutions leverage RAM for local read caching, but I am sure those vendors will have more detailed posts on that, which are worth reading!)

Couple of real gems in this paper, which I will probably read a couple of times in the upcoming days!

vSphere 5.5 and disk limits (mClock scheduler caveat)

I mentioned the new disk IO scheduler in vSphere 5.5 yesterday. When discussing this new disk IO scheduler, one thing that was brought to my attention is a caveat around disk limits. Let's start by saying that disk limits are a function of the host local disk scheduler and not, I repeat, not Storage IO Control. This is a mistake many people make.

Now, when setting a limit on a virtual disk you define a limit in IOPS. The IOPS specified is the maximum number of IOPS the virtual machine can drive. The caveat is as follows: IOPS takes the IO size into account. (It does this because a 64KB IO has a different cost than a 4KB IO.) The accounting is done in multiples of 32KB: a 4KB IO is counted as one IO, but a 64KB IO is counted as two IOs. Any IO larger than 32KB will be at least 2 IOs, as the count is rounded up; in other words, a 40KB IO would be 2 IOs and not 1.25 IOs. This also implies that there could be an unexpected result when you have an application doing relatively large IOs. If you set a limit of 100 IOPS but your app is doing 64KB IOs, then you will see your VM being limited to 50 IOPS, as each 64KB IO counts as 2 IOs instead of 1. So the formula here is: ceil(IO size / 32KB).
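To make the accounting concrete, here is a minimal Python sketch of the cost calculation described above. The function names are mine, purely for illustration; this is obviously not actual scheduler code:

```python
import math

def io_cost(io_size_kb):
    """Number of IOs a single IO of the given size counts as:
    charged in multiples of 32KB, rounded up."""
    return math.ceil(io_size_kb / 32)

def effective_iops(limit_iops, io_size_kb):
    """Actual IOPS a VM can drive under a given limit,
    assuming the workload issues IOs of a fixed size."""
    return limit_iops / io_cost(io_size_kb)

print(io_cost(4))               # 1 -> a 4KB IO counts as one IO
print(io_cost(40))              # 2 -> rounded up, not 1.25
print(io_cost(64))              # 2 -> a 64KB IO counts as two IOs
print(effective_iops(100, 64))  # 50.0 -> a 100 IOPS limit yields 50 IOPS
```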

I think that is useful to know when you are limiting your virtual machines, especially because this is a change in behavior compared to vSphere 5.1.

New disk IO scheduler used in vSphere 5.5

When 5.1 was released I noticed the mention of “mClock” in the advanced settings of a vSphere host. I tried enabling it but failed miserably. A couple of weeks back I noticed the same advanced setting again, but this time I also noticed it was enabled. So what is this mClock thingie? Well, mClock is the new disk IO scheduler used in vSphere 5.5. There isn't much detail available on mClock by itself, other than an academic paper by Ajay Gulati.


The paper describes in depth why mClock was designed and developed: primarily to provide a better IO scheduling mechanism, one that allows for limits, shares and yes, also reservations. The paper also describes some interesting details around how different IO sizes and latencies are taken into account. I recommend anyone who likes reading brain-hurting material to take a look at it. I am also digging internally for some more human-readable material; if I find out more I will let you guys know!
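Since the paper can be heavy going, here is a highly simplified Python sketch of the tag-based idea behind mClock as I understand it from the paper. All names here are mine and purely illustrative; this is in no way the actual ESXi implementation:

```python
# Simplified mClock-style scheduling: every VM carries three tags,
# one per control (reservation, limit, shares).
class VM:
    def __init__(self, name, reservation, limit, shares):
        self.name = name
        self.reservation = reservation  # minimum IOPS to guarantee
        self.limit = limit              # maximum IOPS allowed
        self.shares = shares            # relative weight
        self.r_tag = self.l_tag = self.p_tag = 0.0

def tag_request(vm, now):
    # Each new request advances the VM's tags; the spacing between
    # consecutive tags is the inverse of the rate that tag controls.
    vm.r_tag = max(now, vm.r_tag + 1.0 / vm.reservation)
    vm.l_tag = max(now, vm.l_tag + 1.0 / vm.limit)
    vm.p_tag = max(now, vm.p_tag + 1.0 / vm.shares)

def pick_next(vms, now):
    # Phase 1: meet reservations -- serve any VM whose R-tag is due.
    due = [vm for vm in vms if vm.r_tag <= now]
    if due:
        return min(due, key=lambda vm: vm.r_tag)
    # Phase 2: proportional sharing by shares, but only among VMs
    # that have not yet hit their limit.
    eligible = [vm for vm in vms if vm.l_tag <= now]
    if eligible:
        return min(eligible, key=lambda vm: vm.p_tag)
    return None  # every VM is currently throttled by its limit
```

The gist: reservations are honored first, limits act as hard caps, and whatever capacity is left over is divided proportionally by shares.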