This was probably one of the coolest sessions of VMworld. Irfan Ahmad hosted this session, and some of you might know him from Project PARDA. The PARDA whitepaper describes the algorithm being used and how customers could benefit from it in terms of performance. As Irfan stated, this is still in a research phase. Although the results are above expectations, it’s still uncertain whether this will be included in a future release, and if so, when. There are a couple of key takeaways that I want to share:
- Congestion management on a per-datastore level -> set IOPS limits and shares per VM (see the sketch after this list)
- Check the proportional allocation of the VMs to be able to identify bottlenecks.
- With I/O DRS, throughput for tier 1 VMs will increase when demanded (more IOPS, lower latency), of course based on the limits/shares specified.
- CPU overhead is limited -> my take: with today’s hardware I wouldn’t worry about an overhead of a couple of percent.
- “If it’s not broken, don’t fix it” -> if latency is low for all workloads on a specific datastore, do not take action; only act above a certain threshold!
- I/O DRS does not take SAN congestion into account, but the SAN is less likely to be the bottleneck
- Researching the use of Storage VMotion to move VMDKs around when there’s congestion at the array level
- Interacting with queue depth throttling
- Deals with end-points and would co-exist with PowerPath
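To make the shares/limits and latency-threshold mechanics a bit more concrete, here’s a rough Python sketch of how I understand the PARDA-style control loop from the whitepaper. Keep in mind this is my own simplified illustration, not VMware code; the threshold, smoothing factor, and all names are made up.

```python
# Illustrative sketch of a PARDA-style control loop -- NOT VMware code.
# Each host tracks datastore-wide latency against a congestion threshold,
# adjusts its I/O budget accordingly, and divides that budget among VMs
# in proportion to their shares, honoring per-VM IOPS limits.

LATENCY_THRESHOLD_MS = 30.0  # hypothetical threshold; "if it's not broken, don't fix it"
SMOOTHING = 0.8              # made-up factor to dampen oscillation

def adjust_budget(budget_iops, observed_latency_ms,
                  min_budget=200.0, max_budget=10000.0):
    """Shrink the host's IOPS budget when latency exceeds the threshold,
    grow it gently while the datastore is healthy."""
    if observed_latency_ms > LATENCY_THRESHOLD_MS:
        target = budget_iops * (LATENCY_THRESHOLD_MS / observed_latency_ms)
    else:
        target = budget_iops * 1.05  # gentle increase while under threshold
    smoothed = SMOOTHING * budget_iops + (1 - SMOOTHING) * target
    return max(min_budget, min(max_budget, smoothed))

def allocate(budget_iops, vms):
    """Proportional allocation: each VM gets budget * shares / total_shares,
    capped by its IOPS limit if one is set."""
    total_shares = sum(vm["shares"] for vm in vms)
    result = {}
    for vm in vms:
        entitlement = budget_iops * vm["shares"] / total_shares
        if vm["iops_limit"] is not None:
            entitlement = min(entitlement, vm["iops_limit"])
        result[vm["name"]] = round(entitlement)
    return result

# Example: a tier-1 VM with double the shares gets double the throughput
# when it demands it, exactly as described in the session.
vms = [
    {"name": "tier1-db", "shares": 2000, "iops_limit": None},
    {"name": "web-01",   "shares": 1000, "iops_limit": 500},
    {"name": "test-01",  "shares": 1000, "iops_limit": None},
]
budget = adjust_budget(budget_iops=4000.0, observed_latency_ms=45.0)
print(allocate(budget, vms))
```

Note that this simplified sketch doesn’t redistribute the slack from a capped VM to the others, which a real proportional-share scheduler would.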
That’s it for now… I just wanted to make a point. There’s a lot of cool stuff coming up. Don’t be fooled by the lack of announcements (according to some people, although I personally disagree) during the keynotes. Start watching the sessions; there’s a lot of knowledge to be gained!
Chris Wolf says
Good post, Duncan. Regarding “I/O DRS does not take SAN congestion into account,” that is part of my case for vCenter extensibility (and associated APIs that allow external input into DRS criteria). The technology is already there – Virtual Instruments demoed a proof-of-concept on this topic at VMworld Europe. If VMware’s going to offer the feature, why not take the time to get it right? Of course, it’s easier said than done. I/O accounting is a complex issue. Still, I’d like to see capabilities for external input, or exchange of control to an external I/O accounting mechanism. My two cents…
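To sketch the shape of what I mean (purely hypothetical, since no such vCenter API exists): an external I/O accounting engine would implement a small contract and feed fabric-side congestion into the throttling decision, something like this in Python:

```python
# Purely hypothetical sketch of an external I/O accounting hook.
# No such vCenter API exists; this only illustrates the extensibility idea.
from abc import ABC, abstractmethod

class ExternalIOAccounting(ABC):
    """Contract a fabric-monitoring tool (Virtual Instruments, for example)
    could implement so SAN-side congestion is visible to I/O DRS."""

    @abstractmethod
    def path_congestion(self, host: str, datastore: str) -> float:
        """Return a 0.0-1.0 congestion score for the host-to-array path."""

def should_act(local_latency_ms: float, threshold_ms: float,
               fabric: ExternalIOAccounting, host: str, datastore: str) -> bool:
    # Act when either the datastore latency or the fabric says so, catching
    # cases like a saturated ISL that datastore-level latency alone misses.
    return (local_latency_ms > threshold_ms
            or fabric.path_congestion(host, datastore) > 0.8)
```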
Duncan says
My guess would be that they are working on it, and knowing EMC, they are taking the lead on this. It should indeed be possible with vStorage.
David Owen says
“but SAN is less likely to be the bottleneck”
Yup, in my experience this is usually the case, and it is often not so easy to diagnose. It would be the next logical step for DRS to manage this a lot better.
Lab Manager has some cool features. I like that you can clean up the datastore of expired VMs and that you can spread the load.
I hope they bring something like this into vCenter.
Chad Sakac says
Duncan, we are indeed working on this furiously. The project views the end-to-end picture as the goal. PARDA is part of the answer. Storage VMotion based on the datastore I/O envelope is another (with EMC’s FAST as a “hardware accelerated” variant), and a third part is the end-to-end I/O path (PowerPath and end-to-end I/O tagging in the unified fabric).
There’s so much exciting stuff that I can’t talk about.
The usual guidance applies. If you look at SS5140 and SS5240, and imagine the things I’m implying – we’re working on it 🙂
Narasimha says
Hi,
Can anyone please briefly explain RDMs, the benefits of RDM, and what happens to an RDM during migration?
Thanks.
Duncan says
While migrating what?
I hardly ever use RDMs as they are less flexible than VMDKs. Sometimes you need to use RDMs when you want to use native array snapshotting in combination with specific applications, but in a typical environment there is no real benefit.
Narasimha says
Thanks for your time, Duncan. I meant: during a VMotion, is there any impact on an RDM attached to a VM?