
Yellow Bricks

by Duncan Epping


performance

Performance Week?!

Duncan Epping · Mar 14, 2009 ·

It seems to be performance week at VMware. It started with Eric Horschman’s reply to Virtualization Review’s hypervisor performance comparison. What amazed me the most, besides the results, is that Keith Ward writes that the methodology was discussed with VMware and that VMware agreed it was fair. Reading Eric’s response, this clearly wasn’t the case. What stood out in Eric’s reply, and what also surprised me when reading the original article by Rick Vanover, is the following:

“The fact that ESX is completing so many more CPU, memory, and disk operations than Hyper-V obviously means that cycles were being used on those components as opposed to SQL Server.”

Now I’m not going to analyse Rick’s test or Eric’s response; there’s no need to. Others are far more capable of doing that, like Chris Wolf, for instance. Chris has a valid point: we really need SPECvirt, and we need it fast, to get rid of these endless discussions. For those not familiar with the SPECvirt initiative, you can dive into it here. SPEC’s motto describes best what it’s about: “The key realization was that an ounce of honest data was worth more than a pound of marketing hype”. [Read more…]

vmktree 0.3.0 out of beta!

Duncan Epping · Mar 10, 2009 ·

If we look at the VMTN community today, there are a whole lot of people sharing PowerShell scripts, Perl scripts and even .NET programs. Back in the days of ESX 2.x there wasn’t such a huge community, but there was one tool that everyone knew about and that probably everyone tested and used at some point: vmktree!

Lars Trøen is the man behind vmktree, and he just released 0.3.0. For those of you who don’t know what vmktree is:

vmktree is a free web tool that shows you the graphs of resource usage of VMware ESX Server, VMware Server (on Linux), GSX Server (on Linux) and a few other data center devices (ilo/ilo2/rsa2/ds4000).

On VMware Server (and GSX), vmktree provides its own agent that collects system statistics, so it does not depend on vmkusage as it does on ESX 2.x. On ESX 3.x no agent is installed on the ESX server itself, as all values are polled from the machine vmktree is installed on.

vmktree is compatible with ESX and ESXi and, contrary to previous versions, needs to be installed outside of the Service Console. Lars created a great howto, which includes a CentOS JeOS VM. I hope Lars can find some extra time and get that live esxtop back in again!

Virtualized MMU and Transparent page sharing

Duncan Epping · Mar 6, 2009 ·

I’ve been doing Citrix XenApp performance tests over the last couple of days. Our goal was simple: fit as many user sessions on a single ESX host as possible, without taking per-VM cost into account. After reading the Project VRC performance tests, we decided to give both 1 vCPU and 2 vCPU VMs a try. Because the customer was using brand new Dell hardware with AMD processors, we also wanted to test with “virtualized MMU” set to forced. For a 32-bit Windows OS this setting needs to be forced, otherwise it will not be used. (Alan Renouf was so kind as to write a couple of lines of PowerShell that enable this feature for a specific VM, a cluster, or every single VM you have; a sketch of the idea follows below. Thanks Alan!)
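A minimal sketch of what such a script might look like, assuming PowerCLI (the VI Toolkit) is loaded and you already have a Connect-VIServer session; the cluster name is hypothetical and this is my own illustration, not Alan’s actual script. It forces the virtualized MMU by setting flags.virtualMmuUsage to “on” in each VM’s configuration; a VM needs a power cycle before the change takes effect.

    $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
    $spec.Flags = New-Object VMware.Vim.VirtualMachineFlagInfo
    $spec.Flags.VirtualMmuUsage = "on"     # "automatic" (default), "on" (force), "off"

    Get-Cluster "XenAppCluster" | Get-VM | ForEach-Object {
        (Get-View $_.Id).ReconfigVM($spec) # reconfigure every VM in the cluster
    }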

We wanted to make sure that the user experience wasn’t degraded and that ESX would still be able to schedule tasks within a reasonable %RDY time, < 20% per VM. Combine 1 vCPU and 2 vCPU with and without virtualized MMU and you’ve got four test situations. Like I said, our goal was to get as many user sessions on a box as possible. Now, we didn’t conduct a rigorously prepared, objective performance test, so I’m not going to elaborate on the results in depth; in this situation, 1 vCPU with virtualized MMU and scaling out the number of VMs resulted in the most user sessions per ESX host. [Read more…]
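As a side note, here is a rough sketch, my own and not from the post, of how you could check per-VM CPU ready time with PowerCLI’s Get-Stat, again assuming a Connect-VIServer session. Real-time samples cover a 20-second interval and cpu.ready.summation is reported in milliseconds, so %RDY = ready ms / 20,000 ms * 100, summed across vCPUs as in esxtop.

    Get-VM | ForEach-Object {
        $sample = Get-Stat -Entity $_ -Stat "cpu.ready.summation" -Realtime |
                  Where-Object { $_.Instance -eq "" } |  # "" = aggregate over all vCPUs
                  Select-Object -First 1                 # take a single sample
        $pctReady = ($sample.Value / 20000) * 100
        "{0}: {1:N1}% RDY" -f $_.Name, $pctReady
    }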

RE: max num vCPUs Malaysia VMware Communities

Duncan Epping · Feb 28, 2009 ·

I was just reading this article on the Malaysia VMware Communities website. I’ve read a couple of articles on their website that didn’t make sense, but this time I’m going to respond, because it might set people on the wrong foot. Anyone is of course entitled to their own opinion and views, but please reread your article and check the facts before you publish, especially when your blog is featured on Planet V12n. A short excerpt from the blog:

If we refer to the current version which is ESX 3.5 u3, the maximum number of Vcpu per ESX server is only 128 per ESX Servers. Personally, I think the number of Vcpu per ESX servers is too minimal. Imagine if we do run a servers with 4 or 8 physical CPU sockets and we consolidate 40 : 1 Physical server in our virtualization environment, we will hit to the bottleneck on maximum numbers of Vcpu per ESX servers but not due to the CPU consumption

Reading this short section one might think: why reply, it makes sense, doesn’t it? No, it doesn’t make sense at all:

  • The current limit isn’t 128, it’s 192 vCPUs.
    So even with a 40:1 ratio and all VMs provisioned with 4 vCPUs you wouldn’t hit this limit (see the quick check after this list). Read the maximum configurations guide, it’s the bible for virtualization consultants.
  • But even more important: co-scheduling and overprovisioning will impact performance. With most VMs running 2 or even 4 vCPUs, scheduling will be almost impossible even with the relaxed co-scheduling technique ESX uses these days. In other words, please don’t use multi-vCPU VMs as a standard; you can read more on co-scheduling here.
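A quick check of the numbers above, my own illustration in PowerShell since that’s what this community mostly shares:

    $totalVcpus = 40 * 4    # 40:1 ratio, every VM with 4 vCPUs = 160 vCPUs on the host
    $totalVcpus -le 192     # True: well under the ESX 3.5 U3 limit of 192 vCPUs per host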

The author asked VMware to bump up the max number of vCPUs. Now, for a VDI environment this can and will be useful, I think. Then again, if you are hitting this limit on a 16-core machine, you might need to reconsider your provisioning strategy.

I expect the number to go up… especially after watching Stephen Herrod’s keynote at VMworld Europe 2009.

Project PARDA

Duncan Epping · Feb 15, 2009 ·

My colleague Irfan just emailed me about a new paper he has been working on, which will be presented at the FAST 2009 conference. Irfan wrote this mind-blowing paper, “PARDA: Proportional Allocation of Resources for Distributed Storage Access”, with Ajay Gulati and Carl Waldspurger.

Some of you might recognize Carl’s name, by the way, because he also wrote the famous “Memory Resource Management in VMware ESX Server” paper, which explains the effects of TPS and content-based page sharing.

I didn’t say mind-blowing just to make it sound cool; this is one of those papers that will make your brain hurt… well, at least it does for me as a consultant.

A short excerpt that explains what PARDA is about:

PARDA, a novel software system that enforces proportional-share fairness among distributed hosts accessing a storage array, without assuming any support from the array itself. PARDA uses latency measurements to detect overload, and adjusts issue queue lengths to provide fairness, similar to aspects of flow control in FAST TCP.
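To make that concrete, here is an illustrative sketch, in PowerShell for consistency, of the FAST TCP-style idea PARDA builds on; the constants and names are mine, a simplification rather than the paper’s exact control law. Each host periodically adjusts its issue-queue depth (its “window”) from observed average latency, so hosts back off when the array is overloaded, and a per-host beta that grows with its shares biases the equilibrium toward hosts that deserve more throughput.

    $latencyThreshold = 25.0   # ms, the system-wide latency target
    $gamma = 0.2               # smoothing factor for the window update
    $beta  = 2.0               # grows with this host's shares
    $window = 32.0             # current issue-queue depth for this host

    function Update-Window([double]$observedLatencyMs) {
        # Latency above the threshold shrinks the window, latency below grows it.
        $target = ($latencyThreshold / $observedLatencyMs) * $script:window + $beta
        $script:window = (1 - $gamma) * $script:window + $gamma * $target
        $script:window = [Math]::Max(1.0, [Math]::Min(256.0, $script:window))
    }

    Update-Window 40.0         # observed 40 ms > threshold, so the window shrinks
    $window                    # inspect the new issue-queue depth

Run once per control interval with the measured average latency; because hosts with more shares use a larger beta, their windows, and hence their throughput, settle proportionally higher.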

There’s one thing that stands out in my opinion after reading the paper a couple of times:

Combining a distributed flow control mechanism with a fair local scheduler allows us to provide end-to-end IO allocations to VMs. However, an interesting alternative is to apply PARDA flow control at the VM level, using per-VM latency measurements to control per-VM window sizes directly, independent of how VMs are mapped to hosts. This approach is appealing, but it also introduces new challenges that we are currently investigating. For example, per-VM allocations may be very small, requiring new techniques to support fractional window sizes, as well as efficient distributed methods to compensate for short-term burstiness.

So you can see where this might be going in the near future: memory shares, CPU shares and storage shares. Combine this with OVF and vApp, which give you the opportunity to add these details to a VM’s metadata, and you’ve got one big SLA-driven virtualized environment, or should I say a vCloud?! This is not what the paper is about, by the way; the paper describes what PARDA is and does, and how it has been tested… and it includes the results, of course.

Irfan also wrote a short blog article on the paper; you might want to check that one out as well, and don’t forget to add him to your bookmarks/RSS reader!

