Yellow Bricks

by Duncan Epping


VSAN 6.2 : Why going forward FTT=2 should be your new default

Duncan Epping · Mar 1, 2016 ·

I’ve been talking to a lot of customers over the past 12-18 months, and if one thing stood out it is that about 98% of our customers used Failures To Tolerate = 1. This means that one host or disk can die or disappear without losing data. When we discussed availability, most of these customers indicated that they would prefer to use FTT=2, but the cost was simply too high.

With VSAN 6.2 all of this will change. Today, for a 100GB disk, FTT=1 results in 200GB of required disk capacity. With FTT=2 you need 300GB of disk capacity for the same virtual machine, an extra 50% compared to FTT=1. For most people the risk did not appear to weigh up against that cost. With RAID-5 and RAID-6 the math changes, and with it the cost of extra availability.

Take the 100GB disk we just mentioned: with FTT=1 and the Failure Tolerance Method (FTM) set to “RAID-5/6” (only available for all-flash), that 100GB disk requires 133GB of capacity, a saving of 67GB compared to “RAID-1”. The saving is even bigger when going to FTT=2: the 100GB disk now requires 150GB of disk capacity. That is less than “FTT=1” with “RAID-1” today, and literally half of FTT=2 with FTM=RAID-1. On top of that, the delta between FTT=1 and FTT=2 is tiny: for an additional 17GB of disk space you can now tolerate two failures. Let’s put that into a table, so it is a bit easier to digest.

| FTT | FTM      | Overhead | VM size | Capacity required |
|-----|----------|----------|---------|-------------------|
| 1   | RAID-1   | 2x       | 100GB   | 200GB             |
| 1   | RAID-5/6 | 1.33x    | 100GB   | 133GB             |
| 2   | RAID-1   | 3x       | 100GB   | 300GB             |
| 2   | RAID-5/6 | 1.5x     | 100GB   | 150GB             |
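
If you want to play with these numbers yourself, here is a minimal Python sketch of the same math (the overhead factors are the ones from the table above; the helper name is purely illustrative):

# Illustrative sketch: raw VSAN capacity required per FTT/FTM combination.
OVERHEAD = {
    (1, "RAID-1"): 2.0,       # 2 full copies
    (1, "RAID-5/6"): 4 / 3,   # RAID-5: 3 data + 1 parity
    (2, "RAID-1"): 3.0,       # 3 full copies
    (2, "RAID-5/6"): 6 / 4,   # RAID-6: 4 data + 2 parity
}

def capacity_required_gb(vm_size_gb, ftt, ftm):
    return vm_size_gb * OVERHEAD[(ftt, ftm)]

for (ftt, ftm) in OVERHEAD:
    print(ftt, ftm, round(capacity_required_gb(100, ftt, ftm)), "GB")
    # prints 200, 133, 300 and 150 GB respectively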

Of course you need to ask yourself if your workload requires it. Does it make sense for desktops? For most desktops it probably doesn’t… But for your Exchange environment maybe it does, for your databases maybe it does, and for your file servers, print servers, even your web farm it can make a difference. That is why I feel that the commonly used “FTT” setting is slowly going to change, and will (should) become FTT=2 in combination with FTM set to “RAID-5/6”. Let it be clear though: there is a performance difference between FTT=2 with FTM=RAID-1 and FTT=2 with FTM=RAID-6 (the same applies to FTT=1), and of course there is a CPU resource cost as well. Make sure to benchmark what the “cost” is for your environment and make an educated decision based on that. I believe that in the majority of cases the extra availability will outweigh the cost / overhead, but that is up to you to determine. What is great about VSAN in my opinion is that it offers you the flexibility to decide per workload what makes sense.

VSAN 6.2 : Sparse Swap, what is it good for?

Duncan Epping · Feb 29, 2016 ·

I already briefly touched on Sparse Swap in my VSAN 6.2 launch article. When talking about space efficiency and VSAN 6.2, most people will immediately bring up RAID-5/6 and/or deduplication and compression. Those of course are definitely the big-ticket items for VSAN 6.2, there is no doubt about that. Sparse Swap however is one of those tiny little enhancements to VSAN that can make a big difference in terms of cost and space efficiency. And although I already briefly discussed it, I would like to go over it again and show with an example why it makes a difference and when it can make a big difference.

First of all, a bit of history. Up to VSAN 6.1 all “swap files” were created with a 100% space reservation. This means that when you deploy a VM with 4GB of memory and no memory reservation defined, a swap file is created and 4GB of disk space is reserved for it. Keep in mind that in order to ensure availability that swap file is not a single 4GB object, but actually 2 x 4GB. With a single VM the cost of that swap file is negligible. But with 100 VMs per host and 1600 in a cluster, that single 4GB swap file per VM results in:

1600 VMs * 4GB * 2 (FTT overhead) = 12800GB of capacity reservation

Note that even with RAID-5 or RAID-6 the FTT overhead would still be 2x; this is because swap is a special object and VM Storage Policies do not apply to it. Also note that no other VM or object can reserve or use the space which is reserved for those swap files. When Sparse Swap is enabled (an advanced host setting) no capacity is reserved for those VM swap files. This means that instead of losing 12800GB of capacity you now don’t lose anything.
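
As a quick back-of-the-envelope sketch in Python (purely illustrative), the reservation math looks like this:

def swap_reservation_gb(num_vms, vm_memory_gb, sparse_swap=False):
    # Swap objects are always stored as 2 copies, regardless of the VM Storage Policy.
    return 0 if sparse_swap else num_vms * vm_memory_gb * 2

print(swap_reservation_gb(1600, 4))        # 12800 GB reserved up front
print(swap_reservation_gb(1600, 4, True))  # 0 GB reserved with Sparse Swap enabled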

When should you use this? Well, first and foremost when you don’t overcommit on memory! If you are planning to overcommit on memory then please do not use this functionality, as you will need the swap file when there are no memory pages available. I hope that this is clear and that you only use it when you are not overcommitting on memory. Linked-clone desktops are one of those use cases where swap files are a significant portion of the total required datastore capacity; leveraging Sparse Swap will allow you to reduce the cost, especially when running all-flash. So now that we know why, how do you enable it? That is really simple:

esxcfg-advcfg -s 1 /VSAN/SwapThickProvisionDisabled

I hope this article makes it clear that this small enhancement can go a long way! Oh, and before I forget: this small but useful enhancement was the result of a feature request a customer filed about 3-4 months ago. Just think about that for a second, that is agility / flexibility right there, and yes, our customers come first.

EMC and VMware introduce VxRail, a new hyper-converged appliance

Duncan Epping · Feb 16, 2016 ·

As most of you know I’ve been involved in Virtual SAN in some shape or form since the very first release. The reason I was very excited about Virtual SAN is that I felt it would give anyone the ability to develop a hyper-converged offering. Many VMware partners have already done this, and with the VSAN Ready Node program growing and improving every day (more about this soon) customers have an endless list of options to choose from. Today EMC and VMware introduce a new hyper-converged appliance: VxRail.


I am not going to make this an extremely long post, as my friend Chad has already done that of course and there is no point in repeating his blog word for word. I do feel however that VxRail truly is the best both EMC and VMware have to offer. The great thing about VxRail in my opinion is that it can be configured any way you like: from 6 all the way up to 28 cores per CPU, from 64GB all the way up to 512GB of memory, from 3.6TB all the way up to 19TB of storage. And yes, that is per “node”, not per appliance. Considering the roadmap, I can see those numbers increasing fast as well. Also note that we are talking “hybrid” and “all-flash” models here. I have to agree with Chad: I think all-flash will be preferable to hybrid. The tipping point in terms of economics has definitely been reached, especially when you take the various data services into account that VSAN has to offer.

VCE will offer a range of all-flash models. Note that you can start with 3 nodes and scale up in 1-node increments.

What I think is great about VxRail (besides the fact that it comes with vSphere and VSAN) is that it comes with additional services, like RecoverPoint for VMs (15 VMs for free per appliance), which by the way is completely integrated with the Web Client. (For those who don’t know, RecoverPoint provides synchronous and asynchronous replication.) S3-compliant object storage is also provided out of the box, with a 10TB license included for free per appliance. On top of that there is built-in integration with Data Domain.

Must be expensive, right? Well, actually it isn’t. The smallest configuration starts at a $60k list price… Great price point, and I can’t wait for the first boxes to hit the street. Heck, I need to talk Chad into sending me one of those all-flash models for our lab at some point.

The 10% rule for VSAN caching, calculate it on a VM basis not disk capacity!

Duncan Epping · Feb 16, 2016 ·

Over the last couple of weeks I have been talking to customers a lot about VSAN 6.2 and how to design / size their environment correctly. Since the birth of VSAN we have always spoken about a 10% cache-to-capacity ratio to ensure performance is where it needs to be. When I say a 10% cache-to-capacity ratio, I should actually say the following:

The general recommendation for sizing flash capacity for Virtual SAN is to use 10% of the anticipated consumed storage capacity before the NumberOfFailuresToTolerate is considered.

The reality though is that most customers looked at their total capacity, cut it in half (FTT=1) and then said “we will take 10% of that”. So a 10TB VSAN datastore would require “10% of 5TB” in terms of cache. This is a fast way of calculating what your caching requirements are… that is, if ALL of your virtual machines have the same availability requirements. Because even in 6.1 and prior the outcome would change if you had VMs which required FTT=2, FTT=3 or even FTT=0 (although I would not recommend FTT=0).

With VSAN 6.2 this is amplified even more. Why? Well as you hopefully read, VSAN 6.2 introduces space efficiency functionality (for all-flash) like deduplication, compression, RAID-5 or RAID-6 over the network. The following diagram depicts what that looks like. In this case we show RAID-6 with 4 data blocks and 2 parity blocks, which is capable of tolerating 2 failures anywhere in the cluster of 6 hosts.

[Diagram: RAID-6 object layout with 4 data blocks and 2 parity blocks spread across a 6-host cluster]

If you look at the above, and keep the old “FTT=1” or “FTT=2” layout in mind, you quickly realize that the effective capacity per datastore is not as easy to calculate as it was in the past. Let’s take a look at a simple example to show the impact that using certain data services can have on your design / sizing.

  • 1000 VMs with on average 50GB disk space required
  • 1000 * 50GB = 50TB

Let’s take a look at both FTT=1 and FTT=2, with and without RAID-5/6 enabled. The calculations are pretty simple. Note that “FTT” stands for “Failures To Tolerate” and “FTM” stands for “Failure Tolerance Method”.

| FTT | FTM      | Calculation                       | Result |
|-----|----------|-----------------------------------|--------|
| 1   | RAID-1   | 1000 VMs * 50GB * 2 (overhead)    | 100TB  |
| 1   | RAID-5/6 | 1000 VMs * 50GB * 1.33 (overhead) | 66.5TB |
| 2   | RAID-1   | 1000 VMs * 50GB * 3 (overhead)    | 150TB  |
| 2   | RAID-5/6 | 1000 VMs * 50GB * 1.5 (overhead)  | 75TB   |

Now if you look at the table, you will see there is a big difference between the capacity requirements for FTT=2 using RAID-1 and FTT=2 using RAID-5/6; even between the FTT=1 variations the difference is significant. You can imagine that when you base your required cache capacity simply on the required disk capacity, the math will be off. Assuming that the amount of hot data is 10% in all cases, the difference could be substantial. However, when you base your cache requirements on 10% of the initial “1000 * 50GB”, the result never changes!

And in this case I haven’t even taken deduplication and compression into account. You can imagine that with a data reduction of 2x using VSAN compression and deduplication the math would change again for the caching tier; well, that is if you do it wrong and calculate it based on the actual capacity layer… To summarize: when you do your VSAN design and sizing, always base it on the virtual machine size. It is the safest and definitely the easiest way to do the math!
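
To make that concrete, here is a minimal Python sketch of the sizing logic (the 10% ratio and the overhead factors come straight from the text and table above; the function names are just for illustration):

def cache_required_tb(num_vms, vm_size_gb, cache_ratio=0.10):
    # Cache is sized on the anticipated consumed capacity BEFORE FTT is applied,
    # so the FTT/FTM choice does not change this number.
    return num_vms * vm_size_gb * cache_ratio / 1000

def capacity_required_tb(num_vms, vm_size_gb, overhead):
    # Raw capacity does depend on the protection overhead (2x, 1.33x, 3x or 1.5x).
    return num_vms * vm_size_gb * overhead / 1000

print(cache_required_tb(1000, 50))          # 5.0 TB of cache, whatever the policy
print(capacity_required_tb(1000, 50, 1.5))  # 75.0 TB raw capacity for FTT=2 / RAID-5/6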

For more details on RAID-5/6 and/or on Deduplication and Compression, make sure to read Cormac’s excellent articles on these topics.

What’s new for Virtual SAN 6.2?

Duncan Epping · Feb 10, 2016 ·

Yes, finally… the Virtual SAN 6.2 release has just been announced. Needless to say, I am very excited about this release. This is the release that I have personally been waiting for. Why? Well, I think the list of new functionality will make that obvious. There are a couple of clear themes in this release, and I think it is fair to say that data services / data efficiency is the most important one. Let’s take a look at the list of what is new first and then discuss the items one by one:

  • Deduplication and Compression
  • RAID-5/6 (Erasure Coding)
  • Sparse Swap Files
  • Checksum / disk scrubbing
  • Quality of Service / Limits
  • In-memory read caching
  • Integrated Performance Metrics
  • Enhanced Health Service
  • Application support

That is indeed a good list of new functionality, just 6 months after the previous release that brought you Stretched Clustering, 2-node ROBO, etc. I’ve already discussed some of these as part of the beta announcements, but let’s go over them one by one so we have all the details in one place. By the way, there is also an official VMware paper available here.

Deduplication and Compression has probably been the number one ask from customers when it comes to feature requests for Virtual SAN since version 1.0. Deduplication and Compression is a feature which can be enabled on an all-flash configuration only. Deduplication and Compression always go hand in hand and are enabled at the cluster level. Note that Deduplication and Compression are referred to as nearline dedupe / compression, which basically means that deduplication and compression happen during destaging from the caching tier to the capacity tier.


Now let’s dig a bit deeper. Deduplication granularity is 4KB; deduplication happens first and is then followed by an attempt to compress the unique block. A block is only stored compressed when it can be compressed down to 2KB or smaller. The domain for deduplication is the disk group in each host. Of course the question then remains: what kind of space savings can be expected? The answer is, it depends. Our environments and our testing have shown space savings between 2x and 7x, where 7x is full-clone desktops (the optimal situation) and 2x is a SQL database. Results, in other words, will depend on your workload.
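
To illustrate the destaging decision described above, here is a small Python sketch of the logic (purely illustrative, not actual VSAN code; only the 4KB granularity and the 2KB compression threshold come from the text):

import hashlib, zlib

BLOCK_SIZE = 4096        # dedupe granularity: 4KB
COMPRESS_LIMIT = 2048    # store compressed only if the result is 2KB or smaller

def destage_block(block, dedupe_map):
    # dedupe_map represents the per-disk-group deduplication domain
    fingerprint = hashlib.sha1(block).hexdigest()
    if fingerprint in dedupe_map:
        return "deduplicated"            # identical block already stored
    compressed = zlib.compress(block)
    if len(compressed) <= COMPRESS_LIMIT:
        dedupe_map[fingerprint] = compressed
        return "stored compressed"
    dedupe_map[fingerprint] = block
    return "stored uncompressed"         # did not compress below 2KB

dedupe_map = {}
print(destage_block(b"\x00" * BLOCK_SIZE, dedupe_map))  # stored compressed
print(destage_block(b"\x00" * BLOCK_SIZE, dedupe_map))  # deduplicated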

Next on the list is RAID-5/6, or Erasure Coding as it is also referred to. In the UI this is configurable through VM Storage Policies by defining the “Failure Tolerance Method” (FTM). When you configure this you have two options: RAID-1 (Mirroring) and RAID-5/6 (Erasure Coding). Depending on how FTT (Failures To Tolerate) is configured, when RAID-5/6 is selected you will end up with a 3+1 (RAID-5) configuration for FTT=1 and a 4+2 (RAID-6) configuration for FTT=2.


Note that “3+1” means you will have 3 data blocks and 1 parity block; in the case of 4+2 this means 4 data blocks and 2 parity blocks. The capacity overhead is therefore (data + parity) / data, which works out to 4/3 ≈ 1.33x for RAID-5 and 6/4 = 1.5x for RAID-6. Note again that this functionality is only available for all-flash configurations. There is a huge benefit to using it by the way:

Let’s take the example of a 100GB disk:

  • 100GB disk with FTT=1 & FTM=RAID-1 –> 200GB of disk space needed
  • 100GB disk with FTT=1 & FTM=RAID-5/6 –> 133GB of disk space needed
  • 100GB disk with FTT=2 & FTM=RAID-1 –> 300GB of disk space needed
  • 100GB disk with FTT=2 & FTM=RAID-5/6 –> 150GB of disk space needed

As demonstrated, the space savings are enormous; especially with FTT=2 the 2x saving can and will make a big difference. Having said that, do note that the minimum number of hosts required also changes: for RAID-5 this is 4 hosts (remember, 3+1) and for RAID-6 it is 6 hosts (remember, 4+2). The following two screenshots demonstrate how easy it is to configure and what the data layout looks like in the Web Client.


Sparse Swap Files is a new feature that can only be enabled through an advanced setting. It is one of those features that is a direct result of a customer feature request for cost optimization. As most of you hopefully know, when you create a VM with 4GB of memory, a 4GB swap file is created on a datastore at the same time. This is to ensure memory pages can be assigned to that VM even when you are overcommitting and there is no physical memory available. With VSAN this file is created “thick”, at 100% of the memory size. In other words, a 4GB swap file will take up 4GB which can’t be used by any other object/component on the VSAN datastore. When you have a handful of VMs there is nothing to worry about, but if you have thousands of VMs this adds up quickly. By setting the advanced host setting “SwapThickProvisionDisabled” the swap file will be provisioned thin and disk space will only be claimed when the swap file is consumed. Needless to say, we only recommend using this when you are not overcommitting on memory. Having no space for swap when you need to write to swap wouldn’t make your workloads happy.

Next up is the Checksum / disk scrubbing functionality. As of VSAN 6.2, for every 4KB write a 5-byte checksum is calculated and stored separately from the data. Note that this happens even before the write lands on the caching tier, so even an SSD corruption would not impact data integrity. On a read the checksum is of course validated, and if there is a checksum error it is corrected automatically. Also, in order to ensure that stale data does not decay over time in any shape or form, there is a disk scrubbing process which reads the blocks and corrects them when needed. Intel crc32c is leveraged to optimize the checksum process. Note that checksumming is enabled by default for ALL virtual machines as of this release, but it can be disabled through policy for VMs which do not require this functionality.
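
A minimal sketch of the checksum-on-write / verify-on-read pattern described above (illustrative only; zlib.crc32 is used here as a stand-in for the crc32c that VSAN leverages):

import zlib

def write_block(storage, checksums, lba, block):
    checksums[lba] = zlib.crc32(block)   # checksum is stored separately from the data
    storage[lba] = block

def read_block(storage, checksums, lba):
    block = storage[lba]
    if zlib.crc32(block) != checksums[lba]:
        # VSAN would correct this automatically (e.g. from another replica);
        # here we simply flag the corruption.
        raise IOError("checksum mismatch for block %d" % lba)
    return block

storage, checksums = {}, {}
write_block(storage, checksums, 0, b"\xab" * 4096)
print(read_block(storage, checksums, 0) == b"\xab" * 4096)  # True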

Another big ask, primarily from service providers, was Quality of Service functionality. There are many aspects to QoS, but one of the major asks was definitely the capability to limit VMs or virtual disks to a certain number of IOPS through policy, simply to prevent a single VM from consuming all available resources of a host. One thing to note is that VSAN normalizes I/O to a 32KB block size when enforcing the limit. This means that with a limit of 1000 IOPS and 64KB writes, the effective limit is 500 IOPS. When you are doing 4KB writes (or reads for that matter), they still count as one normalized 32KB I/O each. Keep this in mind when setting the limit.
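
A quick sketch of that normalization math (assuming, as described above, that every I/O is counted in 32KB units, rounded up):

import math

NORMALIZED_KB = 32

def normalized_ios(io_size_kb):
    # every I/O counts as at least one 32KB unit; larger I/Os count as multiple
    return max(1, math.ceil(io_size_kb / NORMALIZED_KB))

def effective_iops_limit(policy_limit, io_size_kb):
    return policy_limit // normalized_ios(io_size_kb)

print(effective_iops_limit(1000, 64))  # 500  -> a 64KB write consumes two 32KB units
print(effective_iops_limit(1000, 4))   # 1000 -> a 4KB I/O still counts as one unit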

When it comes to caching there was also a nice “little” enhancement. As of 6.2 VSAN also has a small in-memory read cache. Small in this case means 0.4% of a host’s memory capacity, up to a maximum of 1GB. Note that this in-memory cache is a client-side cache, meaning that the blocks of a VM are cached on the host where the VM is located.
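
The sizing of that cache is simple enough to express in one line (Python, illustrative; the 0.4% and 1GB figures come from the text above):

def read_cache_size_gb(host_memory_gb):
    # 0.4% of host memory, capped at 1GB
    return min(host_memory_gb * 0.004, 1.0)

print(read_cache_size_gb(256))  # 1.0   -> capped at 1GB
print(read_cache_size_gb(128))  # 0.512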

Besides all these great performance and efficiency enhancements, a lot of work has of course also been done around the operational aspects. As of VSAN 6.2 you as an admin no longer need to dive into VSAN Observer; you can just open up the Web Client to see all the performance statistics you want to see about VSAN. It provides a great level of detail, ranging from how a cluster is behaving down to the individual disk. What I personally find very interesting about this performance monitoring solution is that all the data is stored on VSAN itself. When you enable the performance service you simply select a VSAN storage policy and you are set. All data is stored on VSAN and all the calculations are done by your hosts. Yes indeed, a distributed and decentralized performance monitoring solution, where the Web Client just shows the data it is provided.

Of course all new functionality, where applicable, has health check tests. This is one of those things that I got used to so quickly that I already take it for granted. The Health Check will make your life as an admin so much easier, not just the regular tests but also the proactive tests which you can run whenever you desire.

Last but not least I want to call out the work that has been done around application support; I think especially the support for core SAP applications stands out!

If you ask me, but of course I am heavily biased, this release is the best release so far and contains all the functionality many of you have been asking for. I hope that you are as excited about it as I am, and will consider VSAN for new projects or when current storage is about to be replaced.
