
Yellow Bricks

by Duncan Epping


vstorage

VMTN Podcast number 71, vStorage API

Duncan Epping · Nov 4, 2009 ·

This week's topic of the VMware Communities Roundtable was the vStorage API. It was probably one of the most technical roundtables in months; the part about the Virtual Disk Development Kit (aka VDDK) was especially educational! If you missed the episode you can download it here or subscribe to the podcast with iTunes here!

Here are the links that were dropped in the chat window for those who are interested:

  • VMware – vStorage API for Data Protection
  • Anton Gostev – What is VMware vStorage API?
  • Chad Sakac – So, what does vStorage really mean?
  • VMware – Virtual Disk Development Kit Documentation
  • VMware – VDDK FAQ
  • VMTN – Install VDDK on vMA
  • SNIA.org – Hypervisor Storage Interfaces for Storage Optimization White Paper DRAFT rev 0.5a

EMC Powerpath/VE

Duncan Epping · Oct 30, 2009 ·

My colleague Lee Dilworth, SRM/BC-DR Specialist, pointed me to an excellent whitepaper by EMC. This whitepaper describes the differences between Powerpath/VE and MRU, Fixed and Round Robin.

Key results:

  • Powerpath/VE provides superior load-balancing performance across multiple paths using FC or iSCSI.
  • Powerpath/VE seamlessly integrates and takes control of all device I/O, path selection, and failover without the need for additional configuration.
  • VMware NMP requires that certain configuration parameters be specified to achieve improved performance.

I recommend reading the whitepaper to get a good understanding of where a customer would benefit from using EMC Powerpath/VE. The whitepaper gives a clear picture of the load balancing capabilities of Powerpath/VE compared to MRU, Fixed and Round Robin. It also shows that there's less manual configuration to be done when using Powerpath/VE, and as Chad Sakac just revealed on Twitter, an integrated patching solution will be introduced with ESX/vCenter 4.0 Update 1!
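
If you are wondering which multipathing plugin actually owns your devices after installing Powerpath/VE, a quick look at the claim rules and the NMP device list on the host will tell you. Below is a rough sketch, assuming the ESX/ESXi 4.x esxcli namespace (later releases moved these commands around); treat it as an illustration only:

  # Show the multipathing claim rules; Powerpath/VE registers its own rules
  # so that the PowerPath MPP claims devices instead of the native NMP
  esxcli corestorage claimrule list

  # Devices still listed here are owned by NMP and use MRU, Fixed or Round Robin
  esxcli nmp device list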

VMTN Podcast number 70, Storage…

Duncan Epping · Oct 28, 2009 ·

As John Troyer is extremely busy and has a huge backlog of podcast notes to publish, and I don't have a life, I thought I would throw all the links in a blog post so that you have something to read. The topic of Podcast Number 70 was Storage, and to be more specific, Thin Provisioning. If you missed the episode you can download it here or subscribe to the podcast with iTunes here!

  • vSphere Blog – Details on blogging contest
  • vSphere Blog – Winning Blogging Contest Cycle 1
  • vSphere Blog – Winning Blogging Contest Cycle 2
  • vSphere Blog – Winning Blogging Contest Cycle 3
  • VMware.com – What’s new in VMware vSphere Storage
  • Vroom Blog – Storage performance improvements
  • Virtual Geek – Thin on Thin?
  • NetApp on YouTube – The finer points of dedupe
  • Yellow-Bricks – 8MB Block size
  • vCritical – PowerShell Prevents Datastore Emergencies
  • Virtu-Al – VI Toolkit One-Liner: VM Guest Disk Sizes
  • VirtualInsanity – Get Thin Provisioning working for you in vSphere
  • Yellow-Bricks – Storage VMotion and moving to a Thin Provisioned disk
  • NetApp on YouTube – vStorage APIs for Array Integration

What’s that ALUA exactly?

Duncan Epping · Sep 29, 2009 ·

Of course by now we have all read the excellent and lengthy posts by Chad Sakac on ALUA. I'm just a simple guy and usually try to summarize posts like Chad's in a couple of lines, which makes it easier for me to remember and digest.

First of all, ALUA stands for “Asymmetric Logical Unit Access”. As Chad explains, and as a Google search shows, it's common for midrange arrays these days to have ALUA support; with midrange we are talking about the EMC Clariion, HP EVA and others. My interpretation of ALUA is that you can see any given LUN via both storage processors as active, but only one of these storage processors “owns” the LUN, and because of that there will be optimized and unoptimized paths. The optimized paths are the ones with a direct path to the storage processor that owns the LUN. The unoptimized paths connect to the storage processor that does not own the LUN and reach the owning storage processor indirectly via an interconnect bus.

In the past, when you configured your HP EVA (Active/Active according to VMware terminology) attached VMware environment, you had two (supported) options as pathing policies: Fixed and MRU. Most people used Fixed, however, and tried to balance the I/O manually. As Frank Denneman described in his article, this does not always lead to the expected results, because the path selection might not be consistent within the cluster. This could lead to path thrashing, as one half of the cluster accesses the LUN through storage processor A and the other half through storage processor B.

This “problem” has been solved with vSphere. VMware vSphere is aware of the optimal path to the LUN; in other words, VMware knows which storage processor owns which LUN and preferably sends traffic directly to the owner. If the optimized path to a LUN is dead, an unoptimized path will be selected, and within the array the I/O will be directed via an interconnect to the owner again. The MRU pathing policy also takes optimized / unoptimized paths into account: whenever there's no optimized path available, MRU will use an unoptimized path; when an optimized path returns, MRU will switch back to the optimized path. Cool huh!?!

What does this mean in terms of selecting the correct PSP? Like I said, you will have three options: MRU, Fixed and RR. Picking between MRU and Fixed is easy in my opinion: as MRU is aware of optimized and unoptimized paths, it is less static and error-prone than Fixed. When using MRU, however, be aware of the fact that your LUNs need to be equally balanced between the storage processors; if they are not, you might be overloading one storage processor while the other is doing absolutely nothing. This might be something you want to make your storage team aware of. The other option of course is Round Robin. With RR, 1000 commands will be sent down a path before it switches over to the next one. Although theoretically this should lead to higher throughput, I haven't seen any data to back this “claim” up. Would I recommend using RR? Yes I would, but I would also recommend performing benchmarks to ensure you are making the right decision.
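
To make the above a bit more concrete, here is a rough sketch of how you could check and change the PSP for a LUN from the command line. This assumes the ESX/ESXi 4.x esxcli namespace (later releases moved these commands under “esxcli storage nmp”), and the device identifier is a made-up example, so treat it as an illustration rather than a recommendation:

  # List the devices claimed by NMP, including the SATP and the current PSP
  esxcli nmp device list

  # Switch a specific LUN to Round Robin (the naa identifier below is fictitious)
  esxcli nmp device setpolicy --device naa.60060160a0b10000 --psp VMW_PSP_RR

  # Check the Round Robin settings for that LUN; by default 1000 I/Os are sent
  # down a path before the next path is used, as mentioned above
  esxcli nmp roundrobin getconfig --device naa.60060160a0b10000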

Long Distance VMotion

Duncan Epping · Sep 21, 2009 ·

As you might have noticed last week, I'm still digesting all the info from VMworld. One of the coolest newly supported technologies is Long Distance VMotion. A couple of people (Chad Sakac, Joep Piscaer) have already written full articles on this session, so I will not be doing that. However, I do want to stress some of the best practices / requirements to make this work.

Requirements:

  • An IP network with a minimum bandwidth of 622 Mbps is required.
  • The maximum latency between the two VMware vSphere servers cannot exceed 5 milliseconds (ms). (A quick sanity check is shown after this list.)
  • The source and destination VMware ESX servers must have a private VMware VMotion network on the same IP subnet and broadcast domain.
  • The IP subnet on which the virtual machine resides must be accessible from both the source and destination VMware ESX servers. This requirement is very important because a virtual machine retains its IP address when it moves to the destination VMware ESX server to help ensure that its communication with the outside world (for example, with TCP clients) continues smoothly after the move.
  • The data storage location including the boot device used by the virtual machine must be active and accessible by both the source and destination VMware ESX servers at all times.
  • Access from VMware vCenter, the VMware Virtual Infrastructure (VI) management GUI, to both the VMware ESX servers must be available to accomplish the migration.
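
As a quick sanity check of the latency and subnet requirements above, you can ping the destination host's VMotion vmkernel interface from the source host. The sketch below uses made-up IP addresses and assumes the vmkping utility available in the ESX console; it is only a rough illustration, not an official validation method:

  # On the source ESX host: ping the destination host's VMotion vmkernel IP
  # (10.10.10.12 is a fictitious address on the shared VMotion subnet)
  vmkping 10.10.10.12

  # The reported round-trip times should stay well below the 5 ms maximum,
  # and both VMotion vmkernel ports should live in the same subnet and
  # broadcast domain, for example 10.10.10.11/24 and 10.10.10.12/24.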

Best practices:

  • Create HA/DRS Clusters on a per-site basis. (Make sure I/O stays local!)
  • A single vDS (like the Cisco Nexus 1000v) across clusters and sites.
  • Network routing and policies need to be synchronized or adjusted accordingly.

Most of these are listed in this excellent whitepaper from VMware, Cisco and EMC by the way.

Combining this currently available technology with what Banjot discussed during his VMworld session regarding HA futures, I think the possibilities are endless. One of the most obvious ones is of course Stretched HA Clusters. When adding VMotion into the mix, a stretched HA/DRS Cluster would be a possibility. This would require other thresholds of course, but how cool would it be if DRS would re-balance your clusters based on specific pre-determined and configurable thresholds?!

Stretched HA/DRS Clusters would however mean that the cluster needs to be carved into sub-clusters to make sure I/O stays local. You don't want to run your VMs on site A while their VMDKs are stored on site B. This of course depends on the array technology being used. (Active/Active, as in one virtual array, would solve this.) During Banjot's session it was described as “tagged” hosts in a cross-site Cluster, and during the Long Distance VMotion session it was described as “DRS being aware of WAN link and sidedness”. I would rather use the term “sub-cluster” or “host-group”. Although this all seems to be still far away, it is probably much closer than we expect. Long Distance VMotion is supported today. Sub-clusters aren't available yet, but knowing VMware, and looking at the competition, they will go full steam ahead.

