Yellow Bricks

by Duncan Epping

performance

Re: Large Pages (@gabvirtualworld @frankdenneman @forbesguthrie)

Duncan Epping · Jan 26, 2011 ·

I was reading an article by one of my Tech Marketing colleagues, Kyle Gleed, and coincidentally Gabe published an article about the same topic, to which Frank replied, and just now Forbes Guthrie did as well… the topic being Large Pages. I have written about this topic many times in the past, and Kyle, Gabe, Forbes and Frank all mentioned the possible impact of Large Pages, so I won’t go into detail.

There appear to be a lot of concerns around the benefits of leaving Large Pages enabled and the possible downside in terms of monitoring memory usage. There are a couple of things I want to discuss, as I have the feeling that not everyone fully understands the concept.

First of all, what are Large and Small Pages? Small Pages are regular 4KB memory pages and Large Pages are 2MB pages. I guess the difference is pretty obvious. Now as Frank explained, when using Large Pages there is a difference in the number of TLB (translation lookaside buffer) entries; basically a VM provisioned with 2GB would need roughly 1,000 TLB entries with Large Pages and roughly 512,000 with Small Pages. Now you might wonder what this has got to do with your VM, well that’s easy… If you have a CPU that has EPT (Intel) or RVI (AMD) capabilities, the VMkernel will try to back ALL pages with Large Pages.
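
To put those numbers in perspective, here is a quick back-of-the-envelope calculation for that 2GB example. It is a simplification of course (real TLB behavior depends on the CPU), but the ratio is what matters:

    # Number of pages needed to map 2GB of guest memory, which is a rough
    # proxy for the number of TLB entries that mapping can consume.
    SMALL_PAGE = 4 * 1024            # 4KB
    LARGE_PAGE = 2 * 1024 * 1024     # 2MB
    VM_MEMORY = 2 * 1024 ** 3        # 2GB

    small_pages = VM_MEMORY // SMALL_PAGE    # 524,288
    large_pages = VM_MEMORY // LARGE_PAGE    # 1,024

    print(f"4KB pages needed: {small_pages:,}")
    print(f"2MB pages needed: {large_pages:,}")
    print(f"That is {small_pages // large_pages}x fewer entries with Large Pages")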

Please read that last sentence again and spot what I tried to emphasize. All pages. So in other words, where Gabe was talking about “does your application really benefit from it”, I would like to state that that is irrelevant. We are not merely talking about just your application, but about your VM as a whole. By backing all pages with Large Pages the chances of TLB misses are decreased, and for those who never looked into what the TLB does I would suggest reading this excellent wikipedia page. Let me give you the conclusion though: TLB misses will increase latency from a memory perspective.

That’s not all though; the other thing I wanted to share is the “impact” of breaking up the Large Pages into Small Pages when there is memory pressure. As Frank so elegantly stated, “the VMkernel will resort to share-before-swap and compress-before-swap”. There is no nicer way of expressing uber-sweetness I guess. Now one thing that Frank did not mention is that when the VMkernel detects that memory pressure has been relieved, it will start defragmenting Small Pages and form Large Pages again, so that the workload can once again benefit from the performance increase these bring.
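
To illustrate that order, the toy model below (and it really is just a toy, not the actual VMkernel logic) breaks a Large Page into 4KB Small Pages and tries sharing and compression before anything gets swapped. The 50% compression threshold is an assumption I made for the sketch:

    # Toy model of share-before-swap and compress-before-swap: each 4KB page
    # from a broken-up Large Page is shared if an identical page already
    # exists, compressed if it shrinks enough, and swapped only as a last
    # resort.
    import os
    import zlib

    PAGE = 4096

    def reclaim(small_pages, shared_store):
        actions = []
        for page in small_pages:
            if page in shared_store:                      # share-before-swap
                actions.append("shared")
            elif len(zlib.compress(page)) <= PAGE // 2:   # compress-before-swap
                actions.append("compressed")
            else:
                actions.append("swapped")                 # last resort
            shared_store.add(page)
        return actions

    shared = {b"\x00" * PAGE}                  # the zero page is already shared
    pages = [b"\x00" * PAGE,                   # zeroed page     -> shared
             b"A" * PAGE,                      # repetitive data -> compressed
             os.urandom(PAGE)]                 # random data     -> swapped
    print(reclaim(pages, shared))              # ['shared', 'compressed', 'swapped']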

Now the question remains what kind of performance benefits we can expect, as some appear to be under the impression that when the application doesn’t use Large Pages there is no benefit. I have personally conducted several tests with a XenApp workload and measured a 15% performance increase, and on top of that fewer peaks and lower response times. Now this isn’t a guarantee that you will see the same behavior or results, but I can assure you it is beneficial for your workload regardless of what types of pages are used. Small on Large or Large on Large, all will benefit and so will you…

I guess the conclusion is, don’t worry too much as vSphere will sort it out for you!

How cool is TPS?

Duncan Epping · Jan 10, 2011 ·

Frank and I have discussed this topic multiple times and it was briefly mentioned in Frank’s excellent series about over-sizing virtual machines; Zero Pages, TPS and the impact of a boot-storm. Pre-vSphere 4.1 we have all seen it happen: a host fails and multiple VMs need to be restarted, and temporary contention exists as it could take up to 60 minutes before TPS completes. Or of course the memory pressure thresholds are reached and the VMkernel requests TPS to scan memory and collapse pages if and where possible; however, this is usually already too late, resulting in ballooning or compressing (if you’re lucky) and ultimately swapping. Whether it is an HA initiated “boot-storm” or, for instance, your VDI users all powering up their desktops at the same time, the impact is the same.

Now one of the other things I also wanted to touch on is Large Pages, as this is the main argument our competitors are using against TPS. The reason being that Large Pages are not TPS’ed, as I have discussed in this article and many articles before that one. I have even heard people saying that TPS should be disabled as most Guest OSes being installed today are 64-bit, and as such ESX(i) will back even Small Pages (Guest OS) with Large Pages, so TPS will only add unnecessary overhead without any benefits… Well, I have a different opinion about that and will show you with a couple of examples why TPS should be enabled.

One of the major improvements in vSphere 4.0 is that it recognizes zeroed pages instantly and collapses them. I have dug around for detailed info but the best I could publicly find about it was in the esxtop bible and I quote:

A zero page is simply the memory page that is all zeros. If a zero guest physical page is detected by VMKernel page sharing module, this page will be backed by the same machine page on each NUMA node. Note that “ZERO” is included in “SHRD”.

(Please note that this metric was added in vSphere 4.1)

I wondered what that would look like in real life. I isolated one of my ESXi hosts (24GB of memory) in my lab and deployed 12 VMs with 3GB each, with Windows 2008 64-bit installed. I booted all of them up within literally seconds of each other, and as Windows 2008 zeroes out memory during boot I knew what to expect:

I added a couple of arrows so that it is a bit more obvious what I am trying to show here. On the top left you can see that TPS saved 16476MB while using only 15MB to store unique pages. As the VMs clearly show, most of those savings come from “ZERO” pages; just subtract ZERO from SHRD (Shared Pages) and you will see what I mean. Pre-vSphere 4.0 this would have resulted in severe memory contention and, as a result, more than likely ballooning (if the balloon driver had already started, remember it is a “boot-storm”) or swapping.
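
If you do like dry numbers after all, the math behind that screenshot is straightforward (the 16476MB and 15MB values come straight from the screenshot above):

    # 12 x 3GB VMs on a 24GB host is a 1.5x overcommit, yet TPS collapsed
    # roughly 16GB of (mostly zero) pages into 15MB of unique machine memory.
    host_memory_mb = 24 * 1024        # 24GB host
    configured_mb = 12 * 3 * 1024     # 12 VMs x 3GB = 36,864MB
    shared_saved_mb = 16476           # "saved" value from the screenshot
    unique_pages_mb = 15              # machine memory backing the shared pages

    overcommit = configured_mb / host_memory_mb
    net_saving_mb = shared_saved_mb - unique_pages_mb

    print(f"{configured_mb:,}MB configured on {host_memory_mb:,}MB physical "
          f"({overcommit:.1f}x overcommit)")
    print(f"TPS net saving: {net_saving_mb:,}MB")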

Just to make sure I’m not rambling I disabled TPS (by setting Mem.ShareScanGHz to 0) and booted up those 12 VMs again. This is the result:

As shown at the top, the host’s status is “hard” as a result of 0 page sharing and, even worse, as can be seen on a VM level, most VMs started swapping. We are talking about VMkernel swap here, not ballooning. I guess that clearly shows why TPS needs to be enabled and where and when you will benefit from it. Please note that you can also see “ZERO” pages in vCenter, as shown in the screenshot below.
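
For those who have never looked at those host states: the VMkernel picks its reclamation technique based on how much free memory is left. The sketch below uses the classic vSphere 4.x thresholds as I recall them, so treat the exact percentages as an assumption rather than gospel:

    # Rough sketch of the host free-memory states and the reclamation
    # technique each one triggers. Threshold percentages are assumed from the
    # classic vSphere 4.x defaults.
    def memory_state(free_pct):
        if free_pct > 6:
            return "high", "no reclamation needed (TPS keeps running in the background)"
        if free_pct > 4:
            return "soft", "ballooning"
        if free_pct > 2:
            return "hard", "memory compression and VMkernel swapping"
        return "low", "swapping continues and new memory allocations are blocked"

    for pct in (10, 5, 3, 1):
        state, action = memory_state(pct)
        print(f"{pct:>2}% free -> {state:<4}: {action}")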

One thing Frank and I discussed a while back, and I finally managed to figure out, is why after the boot of a Windows VM the “ZERO” pages still go up and fluctuate so much. I did not know this, but found the following explanation:

There are two threads that are specifically responsible for moving pages from one list to another. Firstly, the zero page thread runs at the lowest priority and is responsible for zeroing out free pages before moving them to the zeroed page list.

In other words, when an application, a service or even Windows itself “deprecates” a page, it will be zeroed out by the “zero page thread” (aka the garbage collector) at some point. The Page Sharing module will pick this up and collapse the page instantly.
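
For those wondering how the Page Sharing module finds identical pages in the first place: conceptually it hashes page contents and only does a full comparison on a hash match, after which the copies are backed by a single machine page copy-on-write. The toy below illustrates the concept; it is obviously not the real VMkernel implementation:

    # Toy illustration of content-based page sharing: hash each page and, on a
    # hash match, do a full byte-for-byte compare before collapsing the page
    # onto an existing machine page (copy-on-write).
    import hashlib

    PAGE = 4096
    machine_pages = {}      # hash -> contents of the single backing copy

    def share_page(page):
        key = hashlib.sha1(page).hexdigest()
        if key in machine_pages and machine_pages[key] == page:   # full compare
            return "collapsed onto existing machine page"
        machine_pages[key] = page                                 # first copy stays
        return "backed by a new machine page"

    zero_page = b"\x00" * PAGE
    print(share_page(zero_page))         # backed by a new machine page
    print(share_page(zero_page))         # collapsed onto existing machine page
    print(share_page(b"\x01" * PAGE))    # backed by a new machine page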

I guess there is only one thing left to say, how cool is TPS?!

Cool Tool: vmktree

Duncan Epping · Dec 23, 2010 ·

Ever since ESX 2.5 I have always been looking for cool free tools to monitor my hosts. I guess one of the oldest free tools out there is vmktree. Especially in the 2.x timeframe vmktree helped me out solving some weird performance issues. Back then vmktree was still dependent on vmkusage (who remembers that one?), but as of ESX 3.0 vmktree utilizes the API to gather the details needed to plot the graphs.

I lost track of vmktree for a while, but when I noticed the announcement this week that 0.4.1 was released I decided to give it a spin again. I logged into my vSphere Management Assistant (vMA), downloaded vmktree with wget and installed it following the procedure mentioned in the announcement, and literally minutes later I could see the first values coming in. To make sure I had something to show you guys I added a limit of 200MB on a virtual machine. As you know I love esxtop, but esxtop output is still just “dry numbers”, which makes it difficult to see a trend. As you can see in the following screenshot, vmktree makes this trend pretty obvious. (The balloon driver is really active and the size of the balloon is increasing.)

Besides memory, of course, vmktree has more to offer on both a per-VM and a host level. For instance, on a per-VM level you can also see CPU and storage statistics. On a host level you can see CPU, storage and network. These include things like latency, bus resets, dropped packets, disk space usage… you name it, it is in there.

I know there are a lot of vendors these days offering free monitoring solutions, but the cool thing about vmktree is that it is maintained by just a single person, Lars Troen. I can only imagine how much work maintaining a tool like this is. Thanks Lars for helping me out by writing this excellent tool! I would like to ask everyone to give it a try, and of course to provide feedback to Lars so that he can possibly improve vmktree over time.

Introducing voiceforvirtual.com

Duncan Epping · Nov 24, 2010 ·

At VMworld I met up with the guys presenting the Storage I/O Control session, Irfan Ahmad and Chethan Kumar. As many of you hopefully know, Irfan has always been active in the social media space (virtualscoop.org). Chethan however is “new” and has just started his own blog.

Chethan is a Senior Member of the Performance Engineering team at VMware. He focuses on characterizing and troubleshooting the performance of enterprise applications (mostly databases) in virtual environments using VMware products. Chethan has also studied the performance characteristics of the VMware storage stack and was one of the people who produced this great whitepaper on Storage I/O Control. Chethan just released his first article, and I am sure many excellent articles will follow. Make sure you add him to your bookmarks/RSS reader.

Running Virtual Center Database in a Virtual Machine

I just completed an interesting project. For years, we at VMware believed that SQL server databases run well when virtualized. We have illustrated this through several benchmark studies published as white papers. It was time for us to look at real applications. One such application that can be found in most of the vSphere based virtual environments is the database component of the vCenter server (the brain behind a vSphere environment). Using the vCenter database as the application and the resource intensive tasks of the vCenter databases (implemented as stored procedures in SQL server-based databases) as the load generator, I compared the performance of these resource intensive tasks in a virtual machine (in a vSphere 4.1 host) to that in a native server.

vStorage APIs for Array Integration aka VAAI

Duncan Epping · Nov 23, 2010 ·

It seems that a lot of vendors are starting to update their firmware to enable virtualized workloads to benefit from the vStorage APIs for Array Integration, also known as VAAI. Not only are the vendors starting to show interest, the bloggers are picking up on it as well. Hence the reason I wanted to reiterate some of the excellent details out there and make sure everyone understands what VAAI brings. Although there are currently “only” three major improvements, they can and probably will make a huge difference:

  1. Hardware Offloaded Copy
    Up to 10x faster VM deployment, cloning, Storage vMotion etc. VAAI offloads the copy task to the array, enabling the use of native storage-based mechanisms, which decreases deployment time but, equally important, reduces the amount of data flowing between the array and the server. Check this post by Bob Plankers and this one by Matt Liebowitz, which clearly demonstrate the power of hardware offloaded copies! (reducing cloning from 19 minutes to 6 minutes!)
  2. Write Same/Zero
    10x less I/O for common tasks. Take for instance a zero-out process: it typically sends the same zero-filled SCSI write command over and over. With this primitive enabled the command is sent once and repeated by the storage platform itself, reducing the utilization of the server while decreasing the time span of the action (see the sketch after this list).
  3. Hardware Offloaded Locking
    SCSI Reservation Conflicts…. How many times have I heard that during health checks, design reviews and while troubleshooting performance-related issues. Well, VAAI solves those issues as well by offloading the locking mechanism to the array, also known as Atomic Test & Set, aka ATS. It will more than likely reduce latency in environments where thin-provisioned disks, linked clones or even VMware based snapshots are used. ATS removes the need to lock the full VMFS volume and instead locks only a block when an update needs to occur.
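
As promised in the Write Same item above, here is a rough sketch of why offloading the zeroing matters. The disk size and transfer size are made-up numbers purely for the arithmetic:

    # Zeroing out a 40GB eager-zeroed thick disk without the primitive means
    # the host sends the same zero-filled write over and over; with the
    # primitive the array repeats the pattern internally.
    disk_gb = 40
    transfer_mb = 1

    host_writes = disk_gb * 1024 // transfer_mb
    print(f"Without Write Same: {host_writes:,} x {transfer_mb}MB zero writes sent over the fabric")
    print("With Write Same   : a handful of commands; the array does the repetition")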

One thing I wanted to point out here, which I haven’t seen mentioned yet, is that VAAI will actually allow you to have larger VMFS volumes. Now don’t get me wrong, I am not saying that you can go beyond the 2TB-512B limit by enabling VAAI… My point is that by having VAAI enabled you will reduce the “load” on the array and on the servers. I placed quotes around load as it will not reduce the load from a VM perspective. What I am trying to get at is that many people have limited the number of VMs per VMFS volume because of “SCSI Reservation Conflicts”. With VAAI this will change. Now you can keep your calculations “simple” and base your VMFS size on the number of eggs you are willing to have in a single basket and the sum of all the VMs’ IOPS requirements.
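
For those who want to put numbers behind that eggs-in-a-basket approach, a minimal sketch could look like this. Every input value is a made-up example, so substitute your own limits:

    # Size a VMFS volume from two constraints: how many VMs you are willing to
    # lose in one basket, and how many IOPS the backing LUN can deliver.
    max_vms_per_datastore = 25        # the "eggs in one basket" limit
    avg_vm_size_gb = 40               # average VM size incl. swap/snapshot headroom
    avg_vm_iops = 60                  # average IOPS requirement per VM
    datastore_iops_budget = 2000      # what the underlying LUN/RAID group can deliver

    vms_by_iops = datastore_iops_budget // avg_vm_iops
    vms = min(max_vms_per_datastore, vms_by_iops)
    datastore_size_gb = vms * avg_vm_size_gb

    print(f"VMs per datastore: {vms} (IOPS alone would allow {vms_by_iops})")
    print(f"Datastore size   : ~{datastore_size_gb}GB")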

After reading about all of this goodness I bet many of you want to use it straight away; well, of course your array will need to support it first. Tomi Hakala created a nice list of all storage platforms that are currently supported and those that will be supported soon, including a time frame. If your array is supported, this KB explains perfectly how to enable/disable it.

I started out by saying that there are currently only three major enhancements… that indeed means there is more coming in the future. Some of it I can’t discuss, and some of it I can, as it was already mentioned at VMworld. (If you have access to TA7121, watch it!) I can’t say when these will be available or in which release, but I think it is great to know more enhancements are being worked on.

  • Dead Space Reclamation
    Dead space is previously written blocks that are no longer used by the VM. Currently, in order to reclaim disk space (for instance when you’ve deleted a lot of files) you need to zero out these blocks with a tool like sdelete and then Storage vMotion the VM. Dead Space Reclamation will enable the storage system to reclaim these dead blocks by providing block liveness information.
  • Out-of-space condition notifications
    This is very much an improvement for day-to-day operations. It will enable notification of possible “out-of-space” conditions both in the array vendor’s tools and within the vSphere Client!

Must reads:

Chad Sakac – What does VAAI mean to you?
Bob Plankers – If you ever needed convincing about VAAI
AndreTheGiant – VAAI
VMware KB – VAAI FAQ
VMware Support Blog – VAAI changes the way storage is handled
Matt Liebowitz – Exploring the performance benefits of VAAI
Bas Raayman – What is VAAI, and how does it add spice to my life as a VMware admin?

