IO DRS – Providing Performance Isolation to VMs in Shared Storage Environments (TA3461)

Duncan Epping · Sep 16, 2009 ·

This was probably one of the coolest sessions of VMworld. Irfan Ahmad hosted the session, and some of you might know him from Project PARDA. The PARDA whitepaper describes the algorithm being used and how customers could benefit from it in terms of performance. As Irfan stated, this is still in a research phase. Although the results are above expectations, it’s still uncertain whether this will be included in a future release and, if it is, when. There are a couple of key takeaways that I want to share:

  • Congestion management at the per-datastore level -> set IOPS limits and shares per VM
  • Check the proportional allocation of the VMs to identify bottlenecks.
  • With I/O DRS, throughput for tier 1 VMs increases when demanded (more IOPS, lower latency), of course based on the limits/shares specified.
  • CPU overhead is limited -> my take: with today’s hardware I wouldn’t worry about an overhead of a couple of percent.
  • “If it’s not broken, don’t fix it” -> if latency is low for all workloads on a specific datastore, don’t take action; only act above a certain threshold! (See the sketch after this list.)
  • I/O DRS does not take SAN congestion into account, but the SAN is less likely to be the bottleneck
  • Researching the use of Storage VMotion to move VMDKs around when there’s congestion at the array level
  • Interacts with queue depth throttling
  • Deals with end-points and would co-exist with PowerPath
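
To make the shares and threshold ideas from the list above a bit more concrete, here is a minimal Python sketch of a PARDA-style control loop: it only adjusts a host’s issue-queue window when datastore latency exceeds a congestion threshold, and it sizes the window proportionally to the shares of the VMs running on that host. All names, constants, and the shares model are my own illustrative assumptions, not VMware’s actual I/O DRS implementation.

```python
# Hypothetical sketch of a PARDA-style per-host window control loop.
# Constants and names are illustrative assumptions, not VMware's implementation.

GAMMA = 0.2                    # smoothing factor for the window update
LATENCY_THRESHOLD_MS = 30.0    # congestion threshold ("if it's not broken, don't fix it")
MIN_WINDOW, MAX_WINDOW = 4.0, 256.0


def aggregate_shares(vm_shares):
    """Sum of the disk shares of the VMs this host runs against the datastore."""
    return sum(vm_shares.values())


def update_window(window, measured_latency_ms, vm_shares, total_shares_all_hosts):
    """Return the new issue-queue depth (window) for one host.

    Below the latency threshold nothing changes; above it the window is nudged
    toward a value proportional to this host's fraction of the total shares,
    so tier 1 VMs (high shares) keep getting more throughput under contention.
    """
    if measured_latency_ms <= LATENCY_THRESHOLD_MS:
        return window  # datastore is healthy: take no action

    share_fraction = aggregate_shares(vm_shares) / float(total_shares_all_hosts)
    beta = share_fraction * MAX_WINDOW          # proportional-share component

    # Latency-based update: the further latency overshoots the threshold,
    # the more the window shrinks toward the share-proportional target.
    latency_ratio = LATENCY_THRESHOLD_MS / measured_latency_ms
    new_window = (1 - GAMMA) * window + GAMMA * (latency_ratio * window + beta)
    return max(MIN_WINDOW, min(MAX_WINDOW, new_window))


# Example: a host running two tier 1 VMs (high shares) and one tier 3 VM.
if __name__ == "__main__":
    window = 64.0
    vm_shares = {"vm-tier1-a": 2000, "vm-tier1-b": 2000, "vm-tier3": 500}
    for latency in [12.0, 45.0, 60.0, 35.0, 20.0]:
        window = update_window(window, latency, vm_shares,
                               total_shares_all_hosts=9000)
        print(f"latency={latency:5.1f} ms -> window={window:6.1f}")
```

The actual algorithm described in the PARDA whitepaper is more involved (for example, it also lets the window grow back once latency drops), but the proportional-share and latency-threshold ideas from the list above are at its core.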

That’s it for now… I just wanted to make a point: there’s a lot of cool stuff coming up. Don’t be fooled by the lack of announcements (according to some people, although I personally disagree) during the keynotes. Start watching the sessions; there’s a lot of knowledge to be gained!



Comments

  1. Chris Wolf says

    16 September, 2009 at 16:30

    Good post, Duncan. Regarding “I/O DRS does not take SAN congestion into account,” that is part of my case for vCenter extensibility (and associated APIs that allow external input into DRS criteria). The technology is already there – Virtual Instruments demoed a proof-of-concept on this topic at VMworld Europe. If VMware’s going to offer the feature, why not take the time to get it right? Of course, it’s easier said than done. I/O accounting is a complex issue. Still, I’d like to see capabilities for external input, or exchange of control to an external I/O accounting mechanism. My two cents…

  2. Duncan says

    16 September, 2009 at 21:29

    My guess would be that they are working on it, and knowing EMC, they are taking the lead on this. It should indeed be possible with vStorage.

  3. David Owen says

    16 September, 2009 at 22:42

    “but SAN is less likely to be the bottleneck”

    Yup, in my experience this is usually the case and is often not so easy to diagnose. It would be the next logical step for DRS to manage this a lot better.
    Lab Manager has some cool features. I like that you can clean up the datastore of expired VMs and that you can spread the load.
    I hope they bring something like this into vCenter.

  4. Chad Sakac says

    18 September, 2009 at 02:46

    Duncan, we are indeed working on this furiously. The project views the end-to-end picture as the goal. PARDA is part of the answer. Storage VMotion based on the datastore I/O envelope is another (with EMC’s FAST as a “hardware accelerated” variant), and a third part is the end-to-end I/O path (PowerPath and end-to-end I/O tagging in the unified fabric).

    There’s so much exciting stuff that I can’t talk about.

    The usual guidance applies. If you look at SS5140 and SS5240, and imagine the things I’m implying – we’re working on it 🙂

  5. Narasimha says

    28 January, 2011 at 12:43

    Hi,

    Can anyone please briefly explain RDM, the benefits of RDM, and what will happen to an RDM during migration?

    Thanks.

    • Duncan says

      28 January, 2011 at 13:36

      While migrating what?
      I hardly ever use RDMs as they are less flexible than VMDKs. Sometimes you need to use RDMs when you want to use native array snapshotting in combination with specific applications, but in a typical environment there is no real benefit.

      • Narasimha says

        28 January, 2011 at 14:59

        Thanks for your time, Duncan. I mean: during “vMotion”, is there any impact on the RDM attached to a VM?
