
Yellow Bricks

by Duncan Epping



Frequently asked questions about vSphere Flash Read Cache

Duncan Epping · Sep 11, 2013 ·

Last week I received many emails on the topic of vFlash, so I figured I would start collecting these questions and answer them in a “frequently asked questions about vSphere Flash Read Cache” article. That way the info is out in the open and should be fairly easy to find using Google. If you haven’t done so yet, I would recommend starting by reading my introduction to vSphere Flash Read Cache. If you already have, here are the questions (and answers) I received so far:

  • In a cluster where some hosts have vFlash resources and others do not, can I vMotion a VM with vFlash assigned to a host without vFlash resources?
    • In order to successfully migrate a VM, vFlash resources are required on the destination host.
  • When migrating a VM I have various options, in essence I can move the cache or drop it. What is the impact of moving the cache?
    • When you select to move the cache, essentially an “X-vMotion” (cross vMotion / non-shared-disk migration) is performed, meaning that the “local” vFlash cache file stored on the VFFS filesystem is copied from source to destination. This makes the migration take longer, but also means that the cache is instantly hot and does not need to be warmed up (the impact of dropping the cache is the warm-up that needs to happen again).
  • What are the requirements and considerations to run vSphere Flash Read Cache from a VM/Guest perspective?
    • The requirements from a guest perspective are minimal: other than VM hardware level 10 (5.5 compatibility) there are no requirements. Considerations are also minimal; of course the IO pattern will matter, if you do 90% writes then the benefit will be minimal. If you have a specific IO size then a consideration could be changing the block size.
  • Will a VM be moved by DRS when it has vFlash enabled on a disk?
    • DRS treats a virtual machine with vFlash enabled as if a “should” VM-Host affinity rule is applied, meaning that it will only move the virtual machine when absolutely needed, for example when entering maintenance mode or when it is the only way to solve over-utilization (similar to the “low latency VM” functionality).
  • What additional considerations (if any) need to be made for backing up vFRC supported guests; at guest and/or VM layer?
    • There are no direct considerations with regards to backup. Considering vSphere Flash Read Cache is “write-through”, there is no such thing as “dirty data”; cache and storage will be in sync at all times. One thing to realize, though, is that when you do a full VM image-level backup, the VM will need to be restored to a host that has vFlash enabled in order for it to be powered on. Also note that with other solutions which do write-back caching, backup could be a potential caveat / concern; discuss the potential impact with those vendors, as each implementation differs.
  • What is the purpose of the “block size” which I can specify in the advanced settings? What is the minimum and maximum size?
    • The block size can be configured from 4KB to 1024KB. In order to optimize for I/O performance and minimize overhead try to align the vFlash blocksize to the blocksize used by your Guest OS / Application. You can leverage a tool like vscsiStats to identify the blocksize used by your virtual machine(s).
  • What happens to a virtual machine which is accelerated and the flash device on the host it is running on fails?
    • When a flash device being used for vFlash fails while a VM is actively using it, the VM will continue running but will experience performance degradation, as all reads will need to come from the storage device instead of the SSD. In other words, an SSD failure does not cause any form of disruption to the workload, but could potentially result in degraded performance (similar to the performance before enabling vFlash).
  • What if I run out of capacity on my flash device?
    • When a flash device runs out of capacity, you will simply not be able to reserve flash resources for new virtual machines, or for existing virtual machines to which you want to allocate flash resources.
  • Does vSphere Flash Read Cache do anything for writes like other solutions do?
    • Well, I guess the short answer is “no, not directly”. It is called “vSphere Flash Read Cache” for a reason… But although it doesn’t provide write-back caching, the read caching will free up expensive array resources that were used to serve those reads. These expensive resources can now be used for… indeed… writes. So although vSphere Flash Read Cache only provides write-through caching, it might lead to accelerated writes simply because of the decreased stress on your storage system.
  • Is it possible to configure vSphere Flash Read Cache on a virtual machine with Raw Device Mappings (RDM)?
    • vSphere Flash Read Cache does not support physical RDMs; it does support virtual RDMs.
  • Can I use vFlash on a virtual machine which is stored on a VSAN datastore?
    • No, you cannot enable vFlash on a virtual machine which is stored on a VSAN datastore. Considering VSAN already provides read caching and write buffering, there is no real value in enabling a second layer of caching, which would only add complexity.
  • If I have VSAN enabled and also a NAS/SAN datastore, can my VM that is running on the NAS/SAN datastore be enabled for vFRC?
    • Yes, when the VM is stored on the NAS/SAN datastore you can enable vFRC for this particular VM.
  • Is there a minimum size for vSphere Flash Read Cache allocation to a virtual machine?
    • The minimum size for vSphere Flash Read Cache depends on the selected blocksize. With a 4KB blocksize the minimum is 4MB, and with a 1MB blocksize the minimum is 276MB.
  • What happens if I have my block size configured to 1024KB and I do a 4KB IO?
    • Within that 1024KB cache block your 4KB IO will be stored, the rest of the block (1020KB in this scenario) will be marked as “empty”. Hence the reason it is important to align your IO block size with your cache block size to avoid wastage.
  • What happens to my cache when I resize it on a powered-on VM?
    • When the cache is resized all cache is discarded. This results in a period of time where the cache will need to warm up again.
  • If a host fails and virtual machines with vFRC enabled need to be failed over, but there are insufficient flash resources available, what happens?
    • vFRC allocations are reservations. The behavior is similar to that of a CPU or memory reservation: when there is not enough unreserved capacity available, HA will not be able to power on the virtual machine.
  • When I configure vSphere Flash Read Cache the maximum cache size seems to be limited to 200GB, what is supported and how do I change this?
    • The maximum cache size per virtual machine is 400GB, however in the current release the cache size is limited to 200GB. You can however change the advanced setting called “VFLASH.MaxCacheFileSizeMB” to 409600 if desired. Do note that you will need to change this setting on every host if you want the virtual machine to be able to vMotion between hosts or want vSphere HA to be able to restart it.
  • How do I monitor a VM to see how much cache is being used? Is there anything in vCenter or ESXTOP to help me?
    • vCenter contains multiple metrics on a virtual machine level. Just look at the advanced section, and more specifically the virtual disk counters starting with “Virtual Flash …”. There are also various helpful metrics to be found using “esxcli storage vflash cache stats get -c <cache name>”. This will, for instance, provide you with the cache hit rate, latency, etc. I have not found any esxtop counters yet.
  • When a virtual machine with vSphere Flash Read Cache enabled is vMotioned including the cache which network is used to transfer the cache?
    • When a VM is migrated / vMotioned including the cache, the “XvMotion / enhanced vMotion / non-shared migration” logic is used to move the cache content from source to destination. The data is transferred over the vMotion network, so ensure that sufficient bandwidth is available to shorten the migration time.
  • When flash content is migrated along with the virtual machine, will the migration process make a distinction between hot and cold cache?
    • XvMotion is responsible for migrating the cache from source to destination. Today it doesn’t know the difference and will copy the full file from source to destination. Meaning that if you have a cache size of 10GB the full 10GB will be copied regardless of it being used or not.
  • Can I use the same disk, divided into partitions, for vSphere Flash Read Cache and the VSAN SSD tier?
    • This is not supported! Both VSAN and vSphere Flash Read Cache require their own flash device.
  • The SSD in my host is being reported in vSphere as “non-SSD”. According to support this is a known issue with the generation of server I am using. Will this “mis-reporting” of the disk type affect my ability to configure a vFlash? <added 17/Sept/2013>
    • Yes it will; you will need to tag the device as “local” and as “SSD” using esxcli (the example below is what I use in my lab, your device identifier will be different):
      esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device mpx.vmhba2:C0:T0:L0 --option "enable_local enable_ssd"
  • If I disable Strict Admission Control for vSphere HA, would it be able to power-on a VM with vFRC enabled? <added 17/Sept/2013>
    • No; admission control does not have anything to do with powering on virtual machines. Even if you disable admission control, a virtual machine with vFRC enabled cannot be powered on when there are insufficient flash resources available.
    • Also note that HA Admission Control does not take vFRC resources into account whatsoever. So in an environment running vFRC you can technically use 100% of all flash resources, whilst Admission Control was set up in an N+1 fashion.
  • Is vFRC as good as the caching solution by vendor X? <added 18/Sept/2013>
    • I have had this question many times. It is difficult to answer as it depends. It depends on:
      • Your requirements from a functional perspective (read vs write back caching)
      • Type of workload you are running (read vs write, VDI vs Server)
      • Support requirements (Not all solutions out there are on the VMware compatibility list)
      • Operational requirements (vFRC needs to be enabled per virtual disk, while some vendors do it per datastore, for instance)
      • Budget and even current vSphere licenses used (vFRC is included with Enterprise Plus)

If you have any other questions, don’t hesitate to drop them here and I will add them to the FAQ when applicable to the broader audience.
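The block-size and cache-limit answers above lend themselves to a quick sanity check. The sketch below is hypothetical arithmetic only; the commented vscsiStats and esxcli commands are the ones mentioned in the answers and must be run on an ESXi host (the exact advanced-setting option path in the comment is an assumption on my part).

```shell
# Hypothetical sanity-check arithmetic for the vFRC answers above.

# 1. Block-size alignment: a 4KB IO stored in a 1024KB cache block
#    leaves the rest of that block marked "empty".
block_kb=1024
io_kb=4
wasted_kb=$((block_kb - io_kb))
echo "wasted per cache block: ${wasted_kb}KB"    # 1020KB, as in the FAQ
# Find your guest IO size on the host with:
#   vscsiStats -p ioLength -w <worldGroupID>

# 2. Raising the 200GB per-VM limit to the 400GB maximum, expressed in MB:
max_cache_mb=$((400 * 1024))
echo "VFLASH.MaxCacheFileSizeMB -> ${max_cache_mb}"
# Set this on every host in the cluster (option path is an assumption):
#   esxcli system settings advanced set -o /VFLASH/MaxCacheFileSizeMB -i 409600

# 3. Cache statistics per cache file, from the monitoring answer:
#   esxcli storage vflash cache stats get -c <cache name>
```

The arithmetic also shows why aligning the vFRC block size with the guest IO size matters: the smaller the gap between the two, the less cache capacity is wasted per block.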

Startup News Flash part 5

Duncan Epping · Sep 10, 2013 ·

After the VMworld storm has slowly died down it is back to business again… This also means less frequent updates, although we are slowly moving towards VMworld Barcelona and I suspect there will be some new announcements at that time. So what happened in the world of flash/startups the last two weeks? This is Startup News Flash part 5, and it seems to be primarily around either funding rounds or acquisitions.

Probably one of the biggest rounds of funding I have seen for a “new world” storage company… $150 million. Yes, that is a lot of money. Congrats Pure Storage! I would expect an IPO at some point in the near future, and hopefully they will be expanding their EMEA-based team. Pure Storage is one of those companies which has always intrigued me. As GigaOm suggests, this boost will probably be used to lower prices, but personally I would prefer a heavy investment in things like disaster recovery and availability. It is an awesome platform, but in my opinion it needs dedupe-aware sync and async replication! That should include VMware SRM integration from day 1 of course!

Flash is hot… Virident just got acquired by Western Digital for $685 million. It makes sense if you consider that WD is known as the “hard disk” company; they need to keep growing their business, and the business of hard disks is going to be challenging in the upcoming years with SSDs becoming cheaper and cheaper. Considering this is WD’s second flash-related acquisition (sTec being the other), you can say that they mean business.

I just noticed that Cisco announced their intent to acquire Whiptail for $415 million in cash. Interesting to see Cisco moving into the storage space, and definitely a smart move if you ask me. With UCS for compute and Whiptail for storage they will be able to deliver the full stack, considering they more or less already own the world of networking. It will be interesting to see how they integrate it into their UCS offerings. For those who don’t know, Whiptail is an all-flash array (AFA) which leverages a “scale out” approach: start small and increase capacity by adding new boxes. Of course they offer most functionality other AFA vendors do; for more details I recommend reading Cormac’s excellent article.

To be honest, no one knew what to expect from this public VSAN Beta announcement. Would we get a couple of hundred registrations or thousands? Well I can tell you that they are going through the roof, make sure to register though if you want to be a part of this! Run it at home nested, run it on your test cluster at the office, do whatever you want with it… but make sure to provide feedback!

 

VMware vSphere Virtual SAN design considerations…

Duncan Epping · Sep 9, 2013 ·

I have been playing a lot with vSphere Virtual SAN (VSAN) in the last couple of months… I figured I would write down some of my thoughts around creating a hardware platform or constructing the virtual environment when it comes to VSAN. There are some recommended practices and there are some constraints, I aim to use this blog post to gather all of these Virtual SAN design considerations. Please read the VSAN introduction, how to install VSAN in your virtual lab and “How do you know where an object is located” to get a better understanding of the product. There is a long list of VSAN blogs that can be found here: vmwa.re/vsan

The below is all based on vSphere 5.5 Virtual SAN (public) Beta and my interpretation and thoughts based on various conversations with colleagues, engineering and reading various documents.

  • vSphere Virtual SAN (VSAN) clusters are limited to a maximum total of 32 hosts and there is a minimum of 3 hosts. VSAN is also currently limited to 100 VMs per host, resulting in a maximum of 3200 VMs in a 32 host cluster. Please note that HA currently has a limit of 2048 protected VMs in a single Datastore.
  • It is recommended to dedicate a 10GbE NIC port to your VSAN VMkernel traffic, although 1GbE is fully supported it could be a limiting factor in I/O intensive environments. Both VSS and VDS are supported.
  • It is recommended to have a VSAN VMkernel on every physical NIC! Configure them in an “active/standby” configuration so that when you have 2 physical NIC ports and 2 VSAN VMkernels, each of them will have its own port. Do note that multiple VSAN VMkernel NICs on a single host in the same subnet is not a supported configuration; in different subnets it is supported.
  • IP Hash load balancing is supported by VSAN, but due to the limited number of IP addresses between source and destination, the load balancing benefits could be limited. In other words, an etherchannel formed out of 4x 1GbE NICs will most likely not result in 4GbE.
  • Although Jumbo Frames are fully supported with VSAN they do add a level of operational complexity. When Jumbo Frames are enabled ensure these are enabled end-to-end!
  • VSAN requires at a minimum 1 SSD and 1 Magnetic Disk per diskgroup on a host which is contributing storage. Each diskgroup can have a maximum of 1 SSD and 7 magnetic disks. When you have more than 7 HDDs or two or more SSDs you will need to create additional diskgroups.
  • Each host that is providing capacity to the VSAN datastore has at least one local diskgroup. There is a maximum of 5 disk groups per host!
  • It can be beneficial to create multiple smaller disk groups instead of fewer larger disk groups. More disk groups mean smaller failure domains and more cache drives / queues.
  • Ensure when sizing your environment to take data replicas into account. If your environment needs N+1 or N+2 (etc.) resiliency, factor this in accordingly.
  • SSD capacity does not count towards total VSAN datastore capacity. When sizing your environment, do not include SSD capacity in your totalized capacity calculation.
  • It is a recommended practice to have a minimum 1:10 ratio of SSD capacity to HDD capacity in each disk group. In other words, when you have 1TB of HDD capacity, it is recommended to have at least 100GB of SSD capacity. Note that VMware’s recommendation has changed since BETA, new recommendation is:
    • 10 percent of the anticipated consumed storage capacity before the number of failures to tolerate is considered
  • By default, 70% of the available SSD capacity will be used as read cache and 30% will be used as a write buffer. As in most designs, when it comes to cache/buffer –> more = better.
  • Selecting an SSD with the right performance profile can easily make a 5x-10x difference in VSAN performance, so choose carefully and wisely. Both SSD and PCIe flash solutions are supported, but there are requirements! Make sure to check the HCL before purchasing new hardware. My tip: the Intel S3700, a great price/performance balance.
  • VSAN relies on VM Storage Policies for policy based management. There is a default policy under the hood, but you cannot see this within the UI. As such it is a recommended practice to create a new standard policy for your environment after VSAN has been configured. It is recommended to start with all settings set to default, ensure “Number of failures to tolerate” is configured to 1. This guarantees that when a single host fails virtual machines can be restarted and recovered from this failure with minimal impact on the environment. Attach this policy to your virtual machines when migrating them to VSAN or during virtual machine provisioning.
  • Configure vSphere HA isolation response to “power-off” to ensure that virtual machines which reside on an isolated host can be safely restarted.
  • Ensure vSphere HA admission control policy (“host failures to tolerate” or the “percentage based” policy) aligns with your VSAN availability strategy. In other words, ensure that both compute and storage are configured using the same “N+x” availability approach.
  • When defining your VM Storage Policy avoid unnecessary usage of “flash read cache reservation”. VSAN has internal read cache optimization algorithms, trust it like you trust the “host scheduler” or DRS!
  • VSAN does not support virtual machine disks greater than 2TB-512b, VMs which require larger VMDKs are not suitable candidates at this point in time for VSAN.
  • VSAN does not support FT, DPM, Storage DRS or Storage I/O Control. It should be noted though that VSAN internally takes care of scheduling and balancing when required. Storage DRS and SIOC are designed for SAN/NAS environments.
  • Although supported by VSAN, it is recommended practice to keep the host/disk configuration for a VSAN cluster uniform. A non-uniform cluster configuration could lead to variations in performance and could make it more complex to stay compliant with defined policies after a failure.
  • When adding new SSDs or HDDs ensure these are not pre-formatted. Note that when VSAN is configured to “automatic mode” disks are added to existing disk groups or new disk groups are created automatically.
  • Note that vSphere HA behaves slightly different in a VSAN enabled cluster, here are some of the changes / caveats
    • Be aware that when HA is turned on in the cluster, FDM agent (HA) traffic goes over the VSAN network and not the Management Network. However, when a potential isolation is detected, HA will ping the default gateway (or specified isolation address) using the Management Network.
    • When enabling VSAN ensure vSphere HA is disabled. You cannot enable VSAN when HA is already configured. Either configure VSAN during the creation of the cluster or disable vSphere HA temporarily when configuring VSAN.
    • When there are only VSAN datastores available within a cluster then Datastore Heartbeating is disabled. HA will never use a VSAN datastore for heartbeating.
    • When changes are made to the VSAN network it is required to re-configure vSphere HA.
  • VSAN requires a RAID Controller / HBA which supports passthrough mode or pseudo passthrough mode. Validate with your server vendor if the included disk controller has support for passthrough. An example of a passthrough mode controller which is sold separately is the LSI SAS 9211-8i.
  • Ensure log files are stored externally to your ESXi hosts and VSAN by leveraging vSphere’s syslog capabilities.
  • ESXi can be installed on: USB, SD and Magnetic Disk. Hosts with 512GB or more memory are only supported when ESXi is installed on magnetic disk.
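The sizing guidelines above (the 1:10 SSD:HDD ratio and the replica overhead of “Number of failures to tolerate”) can be sketched with simple arithmetic. A hypothetical example with made-up capacities:

```shell
# Hypothetical VSAN sizing sketch, based on the guidelines above.

# SSD sizing: at least a 1:10 ratio of SSD to HDD capacity per disk group.
hdd_gb=1000                      # 1TB of HDD capacity in a disk group
ssd_gb=$((hdd_gb / 10))
echo "recommended SSD capacity: ${ssd_gb}GB"     # 100GB

# Replica overhead: "Number of failures to tolerate" (FTT) of N means
# N+1 copies of the data are stored on the VSAN datastore.
vm_gb=200                        # anticipated consumed capacity of a VM
ftt=1
raw_gb=$((vm_gb * (ftt + 1)))
echo "raw capacity consumed with FTT=${ftt}: ${raw_gb}GB"   # 400GB

# Remember: SSD capacity does not count towards VSAN datastore capacity,
# so only the HDD side goes into the totalized capacity calculation.
```

The values here are illustrative only; plug in your own disk group layout and VM footprints when doing an actual design.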

That is it for now. When more comes to mind I will add it to the list!

VMworld TV, it just rocks!

Duncan Epping · Sep 8, 2013 ·

VMworld TV has been around for a long time, and I truly enjoy the videos. I know what kind of work goes into recording these episodes and the long hours these guys make, so thanks for doing this Eric, Jeremy and production/camera crew! It is awesome work; even though I attended VMworld myself, these videos are a great way, and for those who couldn’t attend the only way, to get an idea about the atmosphere at the event and the awesomeness of some of the new products VMware and even VMware partners announced. I wanted to embed every video in this post, but considering they recorded over 40 that ended up being a bit too much. My favorite videos are in bold; here are the titles and links:

  • Welcome to VMworld 2013 San Francisco – http://bit.ly/1dPXg0t
  • VMworld TV Music Video for VMworld 2013 – http://bit.ly/1ar3NOW
  • VMworld 2013 Welcome Reception – http://bit.ly/15DR2sC
  • Interview with Douglas Gourlay at VMworld 2013 – http://bit.ly/19seOLj
  • Interview with Chad Sakac of EMC at VMworld 2013 – http://bit.ly/157ZyU3
  • Trend Micro Talks about VMware NSX Integration – http://bit.ly/17IJKDC
  • Interview with Sridhar Devarapalli of Citrix – http://bit.ly/1aVhncA
  • VMworld TV Investigates VMTurbo – http://bit.ly/1cSnCMu
  • VMware NSX Partners Talk about Working with VMware – http://bit.ly/1aVhrcq
  • Behind the Scenes at the VMworld 2013 Hands-On Labs – http://bit.ly/1cSnKLO
  • VMworld TV Crashes the CTO Party! – http://bit.ly/1aVhuoH
  • Interview with Brocade at VMworld 2013 – http://bit.ly/1dQaHxJ
  • Behind the Scenes with Simplivity – http://bit.ly/1cSnUTx
  • Interview with F5 Networks – http://bit.ly/17IKlW0
  • Exclusive Demo of CloudPhysics – http://bit.ly/1aVhDZ4
  • Interview with Aurelie Fonteny of Cumulus Networks – http://bit.ly/1cSocd1
  • Liquidware Labs to Discuss Profile Unity – http://bit.ly/1aVhFjN
  • Meet the Team Behind Nutanix – http://bit.ly/1cSolxi
  • Interview with Martin Casado – http://bit.ly/1cSopNz
  • Interview with Susan Gudenkauf – http://bit.ly/1dQbjDr
  • VMware CTO, Global Field, Paul Strong – http://bit.ly/1dQbnTL
  • Meet the Team Behind vCenter Log Insight – http://bit.ly/1cSowbT
  • Meet Mark Leake, Director of Product Marketing – http://bit.ly/1aVhLI2
  • Interview with Bruce Davie about VMware NSX – http://bit.ly/17ILg8K
  • Interview with Darla Hershberger about VMware Log Insight – http://bit.ly/1dQbxdP
  • Interview with Naomi Sullivan about VMware vCloud Automation – http://bit.ly/14ki3Gs
  • Interview with Neverfail at VMworld 2013 – http://bit.ly/17ILtJ6
  • Behind the Scenes for VMworld U.S. 2013 Keynotes – http://bit.ly/1cSoNLT
  • Interview and Demo with HyTrust – http://bit.ly/1aVhViH
  • Meet the Team Behind Rapid7 – http://bit.ly/1cSoTDi
  • Interview with Brad Hedlund on Network Virtualization – http://bit.ly/1aVhZir
  • Check out the VMworld 2013 U.S. Hang Space – http://bit.ly/1cSoZuz
  • Interview with Frank Denneman on his New Book – http://bit.ly/1aVi0CX
  • Interview with Jim Silvera on VMware vCenter Ops – http://bit.ly/1dQbXRf
  • Interview with Bethany Mayer – http://bit.ly/17IM25N
  • What is the Software-Defined Data Center? – http://bit.ly/160xW38
  • Meet the Team Behind the VMware vCloud Hybrid Service – http://bit.ly/18E3SrF
  • Behind the Scenes at VMworld Day 2 General Session – http://bit.ly/1dQcfHY
  • VMworld 2013 U.S. Day 1 Highlights – http://bit.ly/17IMpNE
  • VMworld 2013 U.S. Day 2 Highlights – http://bit.ly/17IMtx0
  • VMworld 2013 U.S. Day 3 Highlights – http://bit.ly/1dQcp1X
  • Wrap-up of VMworld 2013 U.S. – http://bit.ly/15DPnmJ

 

Want a free digital copy of the vSphere Design Pocketbook?

Duncan Epping · Sep 6, 2013 ·

I just noticed that PernixData is offering free copies of the vSphere Design Pocketbook. The only thing you will need to do is register here. I believe they handed out roughly 1500 copies at VMworld, and they have been very well received. For those who don’t know, this book was “authored by the community”, with people like Frank Denneman, Cormac Hogan, Jason Nash, Eric Sloof, Vaughn Stewart and me deciding which consideration was in and which one was out. (Somehow we needed to ensure messages weren’t conflicting or potentially “damaging” to an environment.)

Hopefully PernixData will have some more physical copies in Barcelona, but just in case they don’t… sign up for the free e-copy!

