
Yellow Bricks

by Duncan Epping



Introduction to vSphere Flash Read Cache aka vFlash

Duncan Epping · Aug 26, 2013 ·

vSphere 5.5 was just announced and of course there are a bunch of new features in there. One of the features which I think people will appreciate is vSphere Flash Read Cache (vFRC), formerly known as vFlash. vFlash was tech previewed last year at VMworld and I recall it being a very popular session. In the last 6-12 months host local caching solutions have definitely become more popular as SSD prices keep dropping, which makes investing in local SSD drives to offload IO more and more attractive. Before anyone asks, I am not going to do a comparison with any of the other host local caching solutions out there. I don’t think I am the right person for that as I am obviously biased.

As stated, vSphere Flash Read Cache is a brand new feature which is part of vSphere 5.5. It allows you to leverage host local SSDs and turn them into a caching layer for your virtual machines. The biggest benefit of using host local SSDs of course is the offload of IO from the SAN to the local SSD. Every read IO that doesn’t need to go to your storage system means resources can be used for other things, like for instance write IO. That is probably the one caveat I will need to call out: it is “write through” caching only at this point, so essentially a read cache system. Now, by offloading reads it could potentially help improve write performance… This is not a given, but it could be a nice side effect.
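For those who like to see the behavior spelled out, here is a minimal sketch in plain Python (my own illustration, not actual vFRC code) of what a write-through read cache does: reads are served from local flash whenever possible, while every write is passed straight through to the array and merely updates the cache on the way.

```python
# Toy model of a write-through read cache (illustration only, not vFRC code).
# Reads hit local flash when possible; writes always go to the backing array.

class WriteThroughReadCache:
    def __init__(self, backing_store):
        self.backing = backing_store   # dict simulating the storage array
        self.cache = {}                # dict simulating the local SSD

    def read(self, block):
        if block in self.cache:        # cache hit: no SAN round trip needed
            return self.cache[block]
        data = self.backing[block]     # cache miss: read from the array...
        self.cache[block] = data       # ...and populate the cache
        return data

    def write(self, block, data):
        self.backing[block] = data     # write-through: array is always updated
        self.cache[block] = data       # keep the cache consistent

array = {1: "a", 2: "b"}
cache = WriteThroughReadCache(array)
cache.read(1)        # miss, fetched from the array
cache.read(1)        # hit, served from local flash
cache.write(2, "c")  # acknowledged by the array, cache updated on the way
```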

Just a couple of things before we get into configuring it. vFlash aggregates local flash devices into a pool, and this pool is referred to as a “virtual flash resource” in our documentation. So in other words, if you have 4 x 200GB SSDs you end up with an 800GB virtual flash resource. This virtual flash resource has a filesystem sitting on top of it called “VFFS” aka “Virtual Flash File System”. As far as I know it is a heavily flash-optimized version of VMFS, but don’t pin me down on this one as I haven’t broken it down yet.

So now that we know what it is and what it does, how do you install it, and what are the requirements and limitations? Well, let’s start with the requirements and limitations first.

Requirements and limitations:

  • vSphere 5.5 (both ESXi and vCenter)
  • SSD Drive / Flash PCIe card
  • Maximum of 8 SSDs per VFFS
  • Maximum of 4TB physical Flash-based device size
  • Maximum of 32TB virtual Flash resource total size (8x4TB)
  • Cumulative 2TB VMDK read cache limit
  • Maximum of 400GB of virtual Flash Read Cache per Virtual Machine Disk (VMDK) file
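If you want to sanity check a design against these maximums, a small sketch like the one below does the trick. This is purely my own illustration; the numbers are simply the limits from the list above.

```python
# Sanity check a vFRC design against the documented vSphere 5.5 maximums.
# Illustration only; the limits are taken from the list above.

MAX_SSDS_PER_VFFS = 8
MAX_DEVICE_TB = 4
MAX_RESOURCE_TB = 32           # 8 x 4TB
MAX_CACHE_PER_VMDK_GB = 400
MAX_CUMULATIVE_CACHE_TB = 2

def validate(ssd_sizes_tb, vmdk_caches_gb):
    errors = []
    if len(ssd_sizes_tb) > MAX_SSDS_PER_VFFS:
        errors.append("more than 8 SSDs in one VFFS")
    if any(s > MAX_DEVICE_TB for s in ssd_sizes_tb):
        errors.append("flash device larger than 4TB")
    if sum(ssd_sizes_tb) > MAX_RESOURCE_TB:
        errors.append("virtual flash resource larger than 32TB")
    if any(c > MAX_CACHE_PER_VMDK_GB for c in vmdk_caches_gb):
        errors.append("per-VMDK cache larger than 400GB")
    if sum(vmdk_caches_gb) / 1024 > MAX_CUMULATIVE_CACHE_TB:
        errors.append("cumulative VMDK read cache larger than 2TB")
    return errors or ["configuration fits within the limits"]

# The example from the text: 4 x 200GB SSDs gives an 800GB resource,
# here with two VMDKs using 50GB and 100GB of read cache.
print(validate([0.2, 0.2, 0.2, 0.2], [50, 100]))
```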

So now that we know the requirements, how do you enable / configure it? Well, as with most vSphere features these days the setup is fairly straightforward and simple. Here we go:

  • Open the vSphere Web Client
  • Go to your Host object
  • Go to “Manage” and then “Settings”
  • All the way at the bottom you should see “Flash Read Cache Resource Management”
    • Click “Add Capacity”
    • Select the appropriate SSD and click OK
  • Now you have a cache created; repeat this for the other hosts in your cluster (a scripted alternative is sketched below).
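For those who would rather script this step than click through the Web Client, something along the lines of the pyVmomi sketch below should work against the vSphere 5.5 API. Treat it as a sketch: the vFlashManager property and the ConfigureVFlashResourceEx_Task method reflect my reading of the 5.5 API reference, and the device path is a made-up example, so verify both in your own environment first.

```python
# Sketch: create a virtual flash resource on a host via pyVmomi (vSphere 5.5).
# The vFlashManager property and ConfigureVFlashResourceEx_Task method are my
# reading of the 5.5 API reference; verify before relying on this.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Find the host to configure (hostname is a hypothetical example).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = [h for h in view.view if h.name == "esxi01.local"][0]

# Device path of the local SSD to add (hypothetical example path).
ssd_path = "/vmfs/devices/disks/naa.500253825000a123"

vfm = host.configManager.vFlashManager
task = vfm.ConfigureVFlashResourceEx_Task(devicePath=[ssd_path])
# Wait for the task to complete, then repeat for the other hosts.

Disconnect(si)
```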

Now you will see another option below “Flash Read Cache Resource Management” called “Cache Configuration”. This is for the “Swap to host cache” / “Swap to SSD” functionality that was introduced with vSphere 5.0.

Now that you have enabled vFlash on your host, what is next? Well, you enable it on your virtual machine. Yes, I agree it would have been nice to enable it for a full cluster or for a datastore as well, but unfortunately this is not part of the 5.5 release. It is something that will be added at some point in the future though. Anyway, here is how you enable it on a virtual machine:

  • Right click the virtual machine and select “Edit Settings”
  • Expand the hard disk you want to accelerate
  • Go to “Flash Read Cache” and enter the amount of GB you want to use as a cache
    • Note that there is an advanced option in this section which also lets you select the block size
    • The block size can be important when you want to optimize for a particular application (a scripted sketch of the same change follows below)
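And here is the scripted equivalent of the per-VMDK change, again as a pyVmomi sketch. The VFlashCacheConfigInfo data object and its reservationInMB / blockSizeInKB properties reflect my reading of the 5.5 API reference, so treat the exact names as assumptions and verify them before use.

```python
# Sketch: enable vFRC on a VM's first virtual disk via pyVmomi (vSphere 5.5).
# VFlashCacheConfigInfo and its property names are my reading of the 5.5 API
# reference; verify against your own environment.
from pyVmomi import vim

def enable_vfrc(vm, cache_gb, block_size_kb=8):
    # Pick the first virtual disk on the VM.
    disk = next(d for d in vm.config.hardware.device
                if isinstance(d, vim.vm.device.VirtualDisk))

    cache = vim.vm.device.VirtualDisk.VFlashCacheConfigInfo()
    cache.reservationInMB = cache_gb * 1024
    cache.blockSizeInKB = block_size_kb   # the "advanced option" in the UI
    disk.vFlashCacheConfigInfo = cache

    change = vim.vm.device.VirtualDeviceSpec()
    change.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
    change.device = disk

    spec = vim.vm.ConfigSpec(deviceChange=[change])
    return vm.ReconfigVM_Task(spec)

# Usage (vm obtained as in the previous sketch): enable_vfrc(vm, cache_gb=10)
```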

Not too complex, right? You enable it on your host and then on a per virtual machine level, and that is it… It is included with Enterprise Plus from a licensing perspective, so those who are at the right licensing level get it “for free”.

Introducing startup PernixData – Out of stealth!

Duncan Epping · Feb 20, 2013 ·

There are many startups out there that do something with storage these days. To be honest, many of them do the same thing, and at times I wonder why on earth everyone focuses on the same segment and tries to attack it with the same product / feature set. One of the golden rules for any startup should be that you have a unique solution that will sell itself. Yes, I realize that it is difficult, but if you want to succeed you will need to stand out.

About a year ago Satyam Vaghani (former VMware principal engineer who was responsible for VMFS, VAAI, VVOLs etc.) and Poojan Kumar (former VMware Data products lead and ex-Oracle Exadata founder) decided to start a company – PernixData. PernixData was conceptualized based on their experience working at the intersection of virtualization, flash-based storage and data. Today PernixData is revealed to the world. For those who don’t know, Pernix means “agile”. But what is PernixData about?

How many of you haven’t experienced storage performance problems? It probably is, in fact, the number one bottleneck in most virtualized environments. Convincing your manager (director / VP) that you need a new ultra-fast (and expensive) storage device is not easy; far from it. On top of that, data will always hit the network first before being acknowledged, and every read will go over your storage network. How cool would it be if there were a seamless software solution that solved all your storage performance problems without requiring you to rip and replace your existing storage assets?

Server-side flash overcomes problems associated with network based storage, and server-side caching solutions provide some respite. Yet server-side caching solutions usually neither satisfy enterprise-class requirements for availability nor transparently support clustered hypervisor features such as VMware vMotion. In addition, while they accelerate reads they fail to do much for writes. Customers are then stuck between either overhauling their entire storage infrastructure or going with caching solutions that work for limited use cases only. PernixData is about to release a cool new product – a flash virtualization platform – that bridges this gap. By picking up where hypervisors left off, PernixData is planning to become the VMware of server flash and is aiming to do for server flash what VMware did for CPU and memory. So, what is this flash virtualization platform and why would you need it?

PernixData’s flash virtualization platform virtualizes all flash resources across all server nodes in a vCenter Server cluster into a single high-performance, enterprise-class data tier. The great thing is that this happens in a transparent way. PernixData sits completely within the hypervisor and in the data path of your virtual machines. Note that there is no requirement to install anything in the guest (virtual machine). PernixData is not a virtual appliance, because virtual appliances introduce performance overhead and would need to be managed, with all the cost and complexity associated with that.

PernixData is also flash technology agnostic. It can leverage SSD or PCIe flash (or both) within the platform. The nice thing is that PernixData uses a scale-out architecture. As you add hosts with flash they can be dynamically added to the platform. On top of that, PernixData does both read and write acceleration while providing full data protection and is fully compatible with VM mobility solutions like vMotion, Storage vMotion, HA, DRS and Storage DRS.

Even more exciting, PernixData will support both write-through and write-back modes. The cool part is that PernixData also ensures IO is replicated for high availability purposes. You don’t want to run your VM in write-back mode when you cannot guarantee data is highly available, right?! I guess that is one of the unique selling points of the solution: a distributed, scale-out flash virtualization platform which is not only flash agnostic but also non-disruptive for your virtual workloads.
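To make that replication point concrete, here is a toy sketch in Python (my own illustration, not PernixData code) of write-back caching with a synchronous copy to a peer host: the write is acknowledged from flash, but only after a replica exists, so a single host failure cannot lose data that has not been destaged yet.

```python
# Toy model of write-back caching with synchronous replication (illustration
# only, not PernixData code). A write is acknowledged once it sits on the
# local flash AND on a peer host's flash; destaging to the array happens later.

class WriteBackCache:
    def __init__(self, peer=None):
        self.flash = {}        # local flash device
        self.dirty = set()     # blocks not yet destaged to the array
        self.peer = peer       # replica target on another host

    def write(self, block, data):
        self.flash[block] = data
        self.dirty.add(block)
        if self.peer is not None:           # replicate before acknowledging,
            self.peer.flash[block] = data   # so a host failure loses nothing
        return "ack"                        # array has NOT seen this write yet

    def destage(self, array):
        for block in sorted(self.dirty):    # lazily flush dirty blocks
            array[block] = self.flash[block]
        self.dirty.clear()

peer = WriteBackCache()
cache = WriteBackCache(peer=peer)
array = {}
cache.write(7, "data")   # acknowledged from flash, replica lives on the peer
cache.destage(array)     # the array catches up asynchronously
```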

I would imagine this is many times cheaper than buying a new storage array. Even without knowing what the cost of PernixData will be, or which flash device (PCIe or SSD) you would decide to use… I bet that when it comes to the overall cost of the solution (product + implementation) it will be many, many times cheaper.

As I started off with, the golden rule for any startup should be that they have a unique solution that sells itself. I am confident that PernixData FVP has just that, by being a disruptive technology that solves a big problem in virtualized environments in a scale-out and transparent manner while leveraging your existing storage investments.

If you want to be kept up to date, make sure to follow Satyam, Poojan, Charlie and PernixData on Twitter. If you are interested in joining the PernixData FVP beta, make sure to sign up!

Make sure to also read Frank’s article on PernixData.

Update: I recommend watching the Storage Field Day videos for more details from Satyam Vaghani himself; note that the playlist contains 4 videos!

VMworld session report: INF-STO2223 – Tech Preview vSphere Integration with Existing Storage

Duncan Epping · Sep 7, 2012 ·

A couple of weeks ago I posted an article about Virtual Volumes aka vVOLs. This week at VMworld Thomas (Tom) Phelan and Vijay Ramachandran delivered a talk which again addressed this topic but they added Virtual Flash to the mix. The session was “INF-STO2223”.

For those attending Barcelona, sign up for it! It is currently scheduled once on Wednesday at 14:00.

The session started out with a clear disclaimer: this was a technology preview and there is no guarantee whatsoever that this piece of technology will ever be released.

Tom Phelan covered Virtual Flash and Vijay covered Virtual Volumes, but as Virtual Volumes was extensively covered in my other blog post I would like to refer back to that post for more details on that topic. This blog post will discuss the “Virtual Flash” portion of the presentation; virtual flash, or vFlash for short, is often also called “SSD caching”.

The whole goal of the Virtual Flash project is to allow vSphere to manage SSDs as a cluster resource, just like CPU and memory today. Sounds familiar, right, for those who read the blog post about vCloud Distributed Storage?! The result of this project should be a framework which allows partners to insert their caching solution and utilize SSD resources more effectively, without some of the current limitations.

Virtual Flash can be VM-transparent but also VM-aware, meaning that it should for instance be possible to allocate resources per virtual machine or virtual disk. Some of the controls that should be included are reservations, shares and limits. On top of that, it should fully work with vMotion and integrate with DRS.

Two concepts were explained:

  1. VM transparent caching
  2. VM-aware caching

VM transparent caching uses a hypervisor kernel caching module which sits directly in the virtual disk’s data path. It can be used in two modes: write-through cache (read only) and write-back cache (read and write). On top of that it will provide the ability to either migrate the cache content during a vMotion or discard the cache, as illustrated in the sketch below.
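The migrate-versus-discard trade-off is easy to picture with a toy example (again my own sketch, not the actual implementation): migrating keeps the cache warm at the cost of copying its content along with the vMotion, while discarding makes the vMotion cheap but leaves the VM with a cold cache on the destination host.

```python
# Toy sketch of the two vMotion options for VM-transparent caching
# (illustration only): migrate the cache content, or discard it.

def vmotion(cache_on_source, migrate_cache=True):
    if migrate_cache:
        # Copy the cache content to the destination host: more data to move,
        # but the VM keeps its warm cache and read hit rate.
        return dict(cache_on_source)
    # Discard: fast vMotion, but every early read on the destination host
    # is a cache miss that must be fetched from the array again.
    return {}

warm = {1: "a", 2: "b", 3: "c"}
print(vmotion(warm, migrate_cache=True))   # {1: 'a', 2: 'b', 3: 'c'}
print(vmotion(warm, migrate_cache=False))  # {} -> cold cache
```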

VM-aware caching is a type of caching where the Virtual Flash resource is presented directly to the virtual machine as a device. This allows the virtual machine to control the caching algorithm. The cache will in this case automatically “follow” the virtual machine during migration. It should be pointed out that if the VM is powered off the cache is flushed.

For those managing virtual environments, architecting them or providing health check services: think about the most commonly faced problem. Yes, that typically is storage performance related. Just imagine for a second having a caching solution at your disposal which could solve most of these problems… Indeed, that would be awesome. Hopefully we will hear more soon!

