
Yellow Bricks

by Duncan Epping


performance

Benchmarking an HCI solution with legacy tools

Duncan Epping · Nov 17, 2016 ·

I was driving back home from Germany on the autobahn this week, thinking about the five or six conversations I have had over the past couple of weeks about performance tests for HCI systems. (Hence the pic on the right side being very appropriate ;-)) What stood out during these conversations is that many folks repeat the tests they once conducted on their legacy array and then compare the results 1:1 to their HCI system. Fairly often people even use a legacy tool like Atto Disk Benchmark. Atto is a great tool for testing the speed of the drive in your laptop, or maybe even a RAID configuration, but the name already more or less reveals its limitation: “disk benchmark”. It wasn’t designed to show the capabilities and strengths of a distributed / hyper-converged platform.

Now I am not trying to pick on Atto, as similar problems exist with tools like IOMeter for instance. I see people doing a single-VM IOMeter test with a single disk. In most hyper-converged offerings that doesn’t result in a spectacular outcome. Why? Simply because that is not what the solution is designed for. Sure, there are ways to demonstrate what your system is capable of with legacy tools: simply create multiple VMs with multiple disks. Even with a single VM you can produce better results by picking the right policy, as vSAN for instance allows you to stripe data across 12 devices (which can be spread across hosts, disk groups, etc.). Without selecting the right policy or using multiple VMs, you may not be hitting the limits of your system, but simply the limits of your VM’s virtual disk controller, the host disk controller, the capabilities of a single device, and so on.
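
To make that point a bit more tangible, here is a quick Python toy of my own (it is not a benchmark tool, and it has nothing to do with any of the products mentioned here): it writes data through one or more parallel streams and prints the aggregate throughput. The absolute numbers are meaningless; the point is that a single stream is bounded by a single queue and a single device, which is exactly what a single-VM, single-disk test measures, while how far the aggregate scales depends on what sits underneath.

```python
# Toy illustration only (not a benchmark tool): write the same amount of data
# through 1, 4 and 8 parallel streams and report aggregate throughput. A single
# stream is roughly what a single-VM, single-VMDK test exercises; how far the
# aggregate scales depends entirely on what sits underneath it.
import os
import tempfile
import time
from concurrent.futures import ThreadPoolExecutor

BLOCK = os.urandom(64 * 1024)   # 64 KB block of random data
BLOCKS_PER_STREAM = 2000        # ~125 MB per stream

def stream(path):
    """Sequentially write BLOCKS_PER_STREAM blocks to a file and flush to disk."""
    with open(path, "wb") as f:
        for _ in range(BLOCKS_PER_STREAM):
            f.write(BLOCK)
        f.flush()
        os.fsync(f.fileno())

def run(num_streams):
    tmpdir = tempfile.mkdtemp()
    paths = [os.path.join(tmpdir, "stream-%d.bin" % i) for i in range(num_streams)]
    start = time.time()
    with ThreadPoolExecutor(max_workers=num_streams) as pool:
        list(pool.map(stream, paths))
    elapsed = time.time() - start
    total_mb = num_streams * BLOCKS_PER_STREAM * len(BLOCK) / (1024 * 1024)
    print("%2d parallel stream(s): %8.1f MB/s aggregate" % (num_streams, total_mb / elapsed))

if __name__ == "__main__":
    for n in (1, 4, 8):
        run(n)
```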

But there is an even better option: pick the right toolset and select the right workload (surely only doing 4k blocks isn’t representative of your production environment). VMware has developed a benchmarking solution called HCIBench that works with both traditional and hyper-converged offerings. HCIBench can be downloaded and used for free through the VMware Flings website. Instead of that single-VM, single-disk test, you will now be able to test many VMs with multiple disks and show how a scale-out storage system behaves. It will provide you great insight into the capabilities of your storage system, whether that is vSAN, any other HCI solution, or even a legacy storage system for that matter. Just like the world of storage has evolved, so has the world of benchmarking.

How thermal paste can impact VM performance

Duncan Epping · Jun 30, 2016 ·

On Twitter a tweet from Frank flew by, pointing to an article written by one of my VMware colleagues: Matt Bradford, aka @VMSpot. I hadn’t seen the article, even though it was written in 2014, and I am surprised it never caught more attention. Matt describes in his post how the use and placement of thermal paste can influence VM performance. Who would have thought of that? I am seriously impressed they managed to get to the bottom of it!

We haven’t had our HP BL460c Gen8’s with the new Xeon E5-2697 v2 12 core processors long. Last week we started to get e-mails from the help desk that users were complaining about sluggish performance in Citrix. Oddly, all of the XenApp VM’s happened to live on the same ESXi host. I say oddly because performance issues rarely seem to fall in line as they did here. We immediately evacuated the host and admitted it to the infirmary cluster.

…..

It didn’t seem to matter if the CPU’s were under load or idle, the temperature would not stray from 69°c. This had to be an issue with the temperature sensors, I thought. So we pulled the host and removed the heat sinks so we could look at the CPU’s through a thermal camera we borrowed from engineering.

I am not going to post the full article here; go over to Matt’s blog and have a read. It is flabbergasting if you ask me, and definitely one of the coolest reads in a long time. Thanks, Frank, for bringing this one up. I just had to share it on a broader platform.

That reminds me, maybe it is time to bring back the “favourite reads” posts I did for a long time on the VMTN Blog, but host them here instead. Hmmm. Ah well, let’s make a start here and follow up with “Recommended reads” posts in the future:

  • Compare and Contrast: Photon Controller vs VIC (vSphere Integrated Containers) by Cormac Hogan explains the difference between these two products/solutions. It is a great way to learn more about how VMware enables cloud-native apps.
  • New Home Lab Hardware – Dual Socket Xeon v4 by Frank Denneman. I am starting to wonder who is the craziest when it comes to home labs. Maybe we should do a contest; I am not sure Frank would win, as there are folks like Erik Bussink who run 3-4 clusters at home. Nevertheless, I like how Frank breaks down each component of his new addition.
  • Test driving ContainerX on VMware vSphere by William Lam. I am always interested in learning what former VMware engineers are doing. Pradeep Padala is the CTO of ContainerX, which William tested and describes in this article.
  • VMware HCL in JSON format and VMware HCL check with PowerCLI by Florian Grehl. Very useful if you want to programmatically validate your current environment against the VMware HCL.

That’s it for now, enjoy reading.

Introduction to vSphere Flash Read Cache aka vFlash

Duncan Epping · Aug 26, 2013 ·

vSphere 5.5 was just announced, and of course there are a bunch of new features in there. One of the features I think people will appreciate is vSphere Flash Read Cache (vFRC), formerly known as vFlash. vFlash was tech previewed at VMworld last year, and I recall it being a very popular session. In the last 6-12 months host-local caching solutions have definitely become more popular as SSD prices keep dropping, which makes investing in local SSD drives to offload IO more and more interesting. Before anyone asks, I am not going to do a comparison with any of the other host-local caching solutions out there. I don’t think I am the right person for that, as I am obviously biased.

As stated, vSphere Flash Read Cache is a brand new feature that is part of vSphere 5.5. It allows you to leverage host-local SSDs and turn them into a caching layer for your virtual machines. The biggest benefit of using host-local SSDs is of course the offload of IO from the SAN to the local SSD. Every read IO that doesn’t need to go to your storage system means resources can be used for other things, like for instance write IO. That is probably the one caveat I need to call out: it is “write through” caching only at this point, so essentially a read-cache system. Now, by offloading reads it could potentially help improve write performance… This is not a given, but it could be a nice side effect.

Just a couple of things before we get into configuring it. vFlash aggregates local flash devices into a pool; this pool is referred to as a “virtual flash resource” in our documentation. In other words, if you have 4 x 200 GB SSDs you end up with an 800 GB virtual flash resource. This virtual flash resource has a filesystem sitting on top of it called “VFFS”, aka “Virtual Flash File System”. As far as I know it is a heavily flash-optimized version of VMFS, but don’t pin me down on this one as I haven’t broken it down yet.

So now that we know what it is and what it does, how do you install it, and what are the requirements and limitations? Let’s start with the requirements and limitations first.

Requirements and limitations:

  • vSphere 5.5 (both ESXi and vCenter)
  • SSD Drive / Flash PCIe card
  • Maximum of 8 SSDs per VFFS
  • Maximum of 4TB physical Flash-based device size
  • Maximum of 32TB virtual Flash resource total size (8x4TB)
  • Cumulative 2TB VMDK read cache limit
  • Maximum of 400GB of virtual Flash Read Cache per Virtual Machine Disk (VMDK) file

So now that we know the requirements, how do you enable / configure it? Well, as with most vSphere features these days, the setup is fairly straightforward. Here we go:

  • Open the vSphere Web Client
  • Go to your Host object
  • Go to “Manage” and then “Settings”
  • All the way at the bottom you should see “Flash Read Cache Resource Management”
    • Click “Add Capacity”
    • Select the appropriate SSD and click OK
  • Now that the cache is created, repeat these steps for the other hosts in your cluster.

You will now see another option below “Flash Read Cache Resource Management” called “Cache Configuration”. This is for the “Swap to host cache” / “Swap to SSD” functionality that was introduced with vSphere 5.0.

Now that you have enabled vFlash on your host, what is next? Well, you enable it on your virtual machine. Yes, I agree it would have been nice to be able to enable it for a full cluster or for a datastore as well, but unfortunately that is not part of the 5.5 release. It is something that will be added at some point in the future though. Anyway, here is how you enable it on a virtual machine:

  • Right click the virtual machine and select “Edit Settings”
  • Expand the hard disk you want to accelerate
  • Go to “Flash Read Cache” and enter the amount of GB you want to use as a cache
    • Note that there is an advanced option where you can also select the block size
    • The block size can be important when you want to optimize for a particular application

Not too complex, right? You enable it on your host and then on a per-virtual-machine basis, and that is it… From a licensing perspective it is included with Enterprise Plus, so those at the right licensing level get it “for free”.
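
If you would rather script this than click through the Web Client, below is a minimal pyVmomi sketch of the same per-VMDK setting. This is an illustration only: the vCenter address, credentials and VM name are placeholders, and the property names (VirtualDisk.vFlashCacheConfigInfo with reservationInMB and blockSizeInKB) are my reading of the vSphere 5.5 API reference, so verify them against the SDK documentation for your build before using anything like this.

```python
# Minimal pyVmomi sketch for the per-VMDK setting above. Illustration only:
# vCenter address, credentials and the VM name are placeholders, and the
# property names (VirtualDisk.vFlashCacheConfigInfo, reservationInMB,
# blockSizeInKB) are my reading of the vSphere 5.5 API reference -- verify
# against the SDK documentation for your build.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local", pwd="********")
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "my-test-vm")  # placeholder name

    # Take the first virtual disk and reserve a 10 GB read cache with 8 KB blocks.
    disk = next(d for d in vm.config.hardware.device
                if isinstance(d, vim.vm.device.VirtualDisk))
    disk.vFlashCacheConfigInfo = vim.vm.device.VirtualDisk.VFlashCacheConfigInfo(
        reservationInMB=10 * 1024,  # cache size per VMDK
        blockSizeInKB=8)            # the "advanced option" block size

    change = vim.vm.device.VirtualDeviceSpec(operation="edit", device=disk)
    task = vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[change]))
    # In a real script you would wait for the task to complete here.
finally:
    Disconnect(si)
```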

Cool Tool: VisualEsxtop

Duncan Epping · Jul 8, 2013 ·

My ESXTOP page is still one of the most visited pages I have; it actually comes in second, right after the HA Deepdive. Every once in a while I revise the page, and this week it was time to add VisualEsxtop to the list of tools people should use. I figured I would write a regular blog post first and roll it into the page at the same time. So what is VisualEsxtop?

VisualEsxtop is an enhanced version of resxtop and esxtop. VisualEsxtop can connect to VMware vCenter Server or ESX hosts, and display ESX server stats with a better user interface and more advanced features.

That sounds nice, right? Let’s have a look at how it works. This is what I did to get it up and running:

  • Go to “http://labs.vmware.com/flings/visualesxtop” and click “download”
  • Unzip “VisualEsxtop.zip” into the folder where you want to store the tool
  • Go to that folder
  • Double click “visualesxtop.bat” when running Windows (or follow William’s tip for the Mac)
  • Click “File” and “Connect to Live Server”
  • Enter the “Hostname”, “Username” and “Password” and hit “Connect”
  • That is it…

Now some simple tips:

  • By default the refresh interval is set to 5 seconds. You can change this by hitting “Configuration” and then “Change Interval”
  • You can also load batch output. This might come in handy when you are a consultant, for instance, and a customer sends you captured data; you can do this under File -> Load Batch Output (see the small parsing sketch below this list for working with such captures)
  • You can filter output, which is very useful if you are looking for info on a specific virtual machine / world! See the filter section.
  • When you click “Charts” and double click “Object Types” you will see a list of metrics that you can create a chart with. Just unfold the ones you need and double click them to add them to the right pane
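
As a side note on the batch output tip: esxtop/resxtop batch captures are plain CSV files with perfmon-style column headers, so they are easy to slice outside of VisualEsxtop as well. Below is a small Python helper, purely as an example; it is not part of VisualEsxtop, and the file name and VM name are just placeholders.

```python
# Small helper, purely as an example (not part of VisualEsxtop): pull only the
# columns that mention a given VM out of an esxtop/resxtop batch capture.
# Batch captures are CSV files with perfmon-style headers such as
# "\\hostname\Group Cpu(1234:vmname)\% Used".
import csv
import sys

def filter_batch_output(csv_path, vm_name):
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        # Column 0 holds the sample timestamp; keep it plus every column
        # whose header mentions the VM (world group) we are interested in.
        keep = [0] + [i for i, name in enumerate(header) if vm_name in name]
        yield [header[i] for i in keep]
        for row in reader:
            yield [row[i] for i in keep]

if __name__ == "__main__":
    capture, vm = sys.argv[1], sys.argv[2]   # e.g. esxtopcapture.csv myvmname
    out = csv.writer(sys.stdout)
    for row in filter_batch_output(capture, vm):
        out.writerow(row)
```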

There are a bunch of other cool features in there, like the color-coding of important metrics. The fact that you can show multiple windows at the same time is also useful, and of course there are the tooltips that provide a description of each counter! If you ask me, this is a tool everyone should download and check out.

If you have feedback, make sure to leave a comment on the flings site as the engineers of this tool will be tracking that to see where improvements can be made.

 

Startup Intro: Infinio

Duncan Epping · Jun 20, 2013 ·

Infinio is demo’ing their brand new product today at Tech Field Day #9. I was briefed by Infinio a couple of weeks back and figured I would share some details with you. Infinio is releasing a product called Infinio Accelerator and describes it as a “downloadable storage performance” solution. That sounds nice, but what does that mean?

Infinio has developed a virtual appliance that sits in between your virtual machine storage traffic and your NFS datastore. Note that I said “NFS datastore” and not just “datastore”, as NFS is their current focus. Why just NFS and not block storage? That is because of the architecture they have chosen, or rather, because of how they intercept traffic going to and coming from the datastore.

The Infinio virtual appliance enhances storage performance by caching IO, and their primary use case is caching in memory. So what does it look like? Basically, every host in the cluster gets an Infinio appliance installed. This appliance has 2 vCPUs and 8 GB of memory by default, and from that memory a shared caching pool is created to accelerate read IO. (Yes, there is a downside to using an appliance; read this article by Frank.) The nice thing is that this pool of memory is deduplicated cluster-wide, although considering the appliance only holds 8 GB of memory, that deduplication is a requirement if you ask me. (Just revealed at TFD: the appliance will be deployed with 4, 8 or 16 GB of memory based on the amount of memory in the host.) The other key phrase here is “read IO”: for now Infinio Accelerator is a read-cache solution, so no write-back caching, but that might change in the future, who knows. The video below also mentions SSD caching; the Tech Field Day session revealed that this is being worked on for inclusion in the future.
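
To give a feel for what a cluster-wide deduplicated read cache means conceptually, here is a tiny Python toy of my own. It says nothing about how Infinio actually implements theirs; it just shows the idea that caching blocks by a hash of their content lets identical data, read on behalf of different VMs, occupy a single slot in the shared pool.

```python
# Toy sketch of a content-deduplicated read cache (conceptual only -- it says
# nothing about how Infinio actually implements theirs). Blocks are cached by a
# hash of their content, so identical blocks read on behalf of different VMs
# occupy a single slot in the shared pool.
import hashlib
from collections import OrderedDict

class DedupReadCache:
    def __init__(self, capacity_blocks, backend_read):
        self.capacity = capacity_blocks
        self.backend_read = backend_read    # function: (vm, lba) -> bytes
        self.blocks = OrderedDict()         # content hash -> block data (LRU order)
        self.index = {}                     # (vm, lba) -> content hash
        self.hits = self.misses = 0

    def read(self, vm, lba):
        h = self.index.get((vm, lba))
        if h is not None and h in self.blocks:
            self.hits += 1
            self.blocks.move_to_end(h)      # refresh LRU position
            return self.blocks[h]
        self.misses += 1
        data = self.backend_read(vm, lba)   # fetch from the NFS datastore
        h = hashlib.sha1(data).hexdigest()
        self.index[(vm, lba)] = h
        self.blocks[h] = data               # identical content is stored once
        self.blocks.move_to_end(h)
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False) # evict the least recently used block
        return data

# Two VMs reading the same guest OS block consume only one slot in the pool.
cache = DedupReadCache(capacity_blocks=1024,
                       backend_read=lambda vm, lba: b"OS-block-%d" % (lba % 4))
cache.read("vm1", 0)
cache.read("vm2", 0)
print(cache.hits, cache.misses, len(cache.blocks))   # -> 0 2 1
```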

One thing where Infinio definitely excels is the installation / configuration process, and even the purchase options are simple. You download a simple installer, point it to your vCenter Server, do a couple of “next / next / finish” actions, and that is that. You want to buy the product? It will be even easier than installing it: just hit the website, grab your credit card, and that is it. Definitely something I always appreciate, companies keeping it simple.

One thing I want to call out (I asked this question during the TFD broadcast) is that today there is no direct integration with vCenter Server or with VC Ops. In my opinion that is a missed opportunity, especially considering the product is focused on the virtualization market.

How do they compare to other caching solutions out there? That is difficult to say at the moment; if I can find the time and get some proper SSDs in my lab, I might test and compare the various solutions at some point. If you ask me, there are benefits to both SSD/flash and in-memory caching. What will determine their success is how the solution is implemented (product quality), where it sits in the I/O stack, how resilient it is, and what kind of caching it offers. As I said, maybe more on this in the future.

That is about all I can share for now. For more details I suggest watching the 8-minute pitch by their co-founder and CEO Arun Agarwal all the way at the bottom, or the Tech Field Day introduction videos and deepdive.

When will it be available? The public beta is scheduled to be available around VMworld, and Infinio is aiming for a GA release in Q4 of 2013.

Videos:

  • Tech Field Day – Introductions
  • Tech Field Day – Demo
  • Tech Field Day – Deepdive / How it works
  • 8 Minute Pitch
