
Yellow Bricks

by Duncan Epping



Are Nested Fault Domains supported with 2-node configurations with vSAN 8.0 ESA?

Duncan Epping · Oct 28, 2022 ·

Short answer: yes, 2-node configurations with vSAN 8.0 ESA support Nested Fault Domains. This means that with a 2-node configuration you can also protect your data within each host using RAID-1, RAID-5, or RAID-6! Configuring this is pretty straightforward: you create a policy with “Host Mirroring” and select the protection you want within each host. The screenshot below demonstrates this.

In the above example, I mirror the data across hosts and then have a RAID-5 configuration within each host. With vSAN ESA, RAID-5 within each host gives me the new 2+1 configuration (2 data blocks, 1 parity block). If you have 6 devices or more in your host, you can also create a RAID-6 configuration, which is 4+2 (4 data blocks, 2 parity blocks). This provides a lot of flexibility and can lower the capacity overhead compared to RAID-1 when desired (RAID-1 = 100% overhead, RAID-5 2+1 = 50% overhead, RAID-6 4+2 = 50% overhead). When you use RAID-5 or RAID-6 and look at the layout of the data, it will look as shown in the next two screenshots: the first screenshot shows the RAID-5 configuration, the second the RAID-6 configuration.
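To put that overhead in perspective, here is a quick back-of-the-envelope sketch. It is purely illustrative, not an official sizing formula, and it ignores the small performance-leg and witness components; it simply multiplies the host-level mirror by the in-host layout described above.

# Rough space-usage math for nested fault domains in a 2-node vSAN ESA cluster.
# Illustrative only; not an official sizing formula.

def raw_per_usable_gb(in_host_layout: str) -> float:
    """Roughly how many GB of raw capacity 1 GB of VM data consumes when the
    data is mirrored across both hosts (Host Mirroring) and protected again
    inside each host with the given layout."""
    in_host_multiplier = {
        "RAID-1": 2.0,        # full copy within the host, 100% overhead
        "RAID-5 (2+1)": 1.5,  # 2 data + 1 parity, 50% overhead
        "RAID-6 (4+2)": 1.5,  # 4 data + 2 parity, 50% overhead
    }[in_host_layout]
    host_mirror_copies = 2    # the data is mirrored across the two hosts
    return host_mirror_copies * in_host_multiplier

for layout in ("RAID-1", "RAID-5 (2+1)", "RAID-6 (4+2)"):
    print(f"{layout:12} -> {raw_per_usable_gb(layout):.1f} GB raw per GB of VM data")

In other words, host mirroring combined with RAID-5 or RAID-6 within each host consumes roughly 3x the raw capacity per GB of VM data, versus 4x for nested RAID-1.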

vSAN ESA 2-node nested fault domain raid-5

vSAN ESA 2-node nested fault domain raid-6

One thing you may wonder when looking at the screenshots is why they also have a RAID-1 configuration for the VMDK object: this is the “performance leg” that vSAN ESA implements. For RAID-5, which is FTT=1, this means the performance leg has 2 components. For RAID-6, which is FTT=2, it has 3 components, so that it can also tolerate 2 failures.
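For those who like to see the relationship spelled out, here is a tiny illustrative snippet of the pattern described above: the performance leg is simply a mirror with enough components to tolerate the same number of failures as the capacity leg.

# Illustrative only: performance-leg mirror size for a given failures-to-tolerate value.
def performance_leg_components(ftt: int) -> int:
    # FTT=1 (RAID-5 capacity leg) -> 2 mirror components
    # FTT=2 (RAID-6 capacity leg) -> 3 mirror components
    return ftt + 1

print(performance_leg_components(1))  # 2, matches the RAID-5 screenshot
print(performance_leg_components(2))  # 3, matches the RAID-6 screenshot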

I hope that helps answer some of the questions folks had on this subject!


Running vSAN ESA? Change the default storage policy to RAID-5/6!

Duncan Epping · Oct 14, 2022 ·

Most of you have read all about vSAN ESA by now. If you have not, you can find my article here, and a dozen articles on core.vmware.com by the Tech Marketing team. What is going to make a huge difference with the Express Storage Architecture is that you get RAID-5 efficiency at RAID-1 performance. Pete Koehler discusses this in depth in this blog post, so there is no point in me reiterating it. On top of that, the animated gif below demonstrates how it actually works and shows why it not only performs well, but also why it is so efficient from a capacity standpoint. As we only have a single tier of flash, the system uses it in a smart way and introduces additional layers so that both reads and writes are efficient.

Now, one thing I do want to point out is that when you create your ESA cluster, you will need to verify the default storage policy assigned to the vSAN Datastore. In my case this was the regular vSAN Storage Policy, which means a RAID-1 configuration for the performance leg and RAID-1 for the capacity leg. I want to get the most out of my system from a capacity perspective, and I want to test this new level of performance for RAID-5, even though I only have 4 hosts (which gives me a 2+1 RAID-5 set).
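Just to illustrate why this matters from a capacity perspective, here is a minimal sketch comparing RAID-1 with RAID-5 (2+1) for a hypothetical 4-host cluster with 10 TB of raw capacity per host. It ignores the performance leg, slack space, and other reserves, so treat the numbers as a rough indication only.

# Illustrative comparison of usable capacity under the default RAID-1 policy
# versus RAID-5 (2+1). Host count and per-host capacity are assumptions.
hosts = 4
raw_tb_per_host = 10.0
raw_tb = hosts * raw_tb_per_host

def usable_tb(raw: float, layout: str) -> float:
    multiplier = {"RAID-1 (FTT=1)": 2.0, "RAID-5 (2+1)": 1.5}[layout]
    return raw / multiplier

for layout in ("RAID-1 (FTT=1)", "RAID-5 (2+1)"):
    print(f"{layout:15} -> ~{usable_tb(raw_tb, layout):.1f} TB usable out of {raw_tb:.0f} TB raw")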

Of course you can select a policy every time you deploy a VM, but I prefer to keep things simple, so I change the default storage policy on my datastore. Simply click on the Datastore icon in the vSphere Client, then select your vSAN datastore and click on “Configure” and “General”. Next, click on “Edit” where it says “Default Storage Policy” and select the policy you want to have applied to VMs by default for this datastore. As shown below, for me that is RAID-5!

Unexplored Territory Podcast 28 – Data is the new oil! Featuring Christos Karamanolis

Duncan Epping · Oct 5, 2022 ·

After attending the session Christos hosted at VMware Explore, Frank and I felt it would be a good idea to record a podcast with him. In this episode, we discuss two relatively unknown projects and products, Project Moneta and VMware Data Service Manager. For those who don’t know, Christos used to be the CTO for Storage and Availability, and is now one of the two Fellows at VMware. Christos mainly focuses on data management, and how VMware can help customers solve their problems in this space. Listen via Spotify https://spoti.fi/3RvSniF, Apple https://apple.co/3SxMFOn, or just use the embedded player below.

vSAN File Services fails to create file share with error “Failed to create the VDFS File System”

Duncan Epping · Oct 4, 2022 ·

Last week on our internal Slack channel one of the field folks had a question. He was hitting a situation where vSAN File Services failed when creating a file share with the error “Failed to create the VDFS File System”. We went back and forth a bit, and after a while I jumped on Zoom to look at the issue and troubleshoot the environment. After testing various combinations of policies I noticed that one particular policy worked, while another policy did not. At first it looked like stretched cluster policies would not work, but after creating a new policy with a different name it did work. That left one thing: the name of the policy. It appears that the use of special characters in the VM Storage Policy name results in the error “Failed to create the VDFS File System”. In this particular case the VM Storage Policy that was used was “stretched – mirrored FTT=1 RAID-1”. The character that was causing the issue was the “=” character.

How do you resolve it? Simply change the name of the policy. For instance, the following would work: “stretched – mirrored FTT1 RAID-1”.
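If you want to check your existing policy names up front, something as simple as the hypothetical helper below would do. It is purely illustrative and not a VMware tool; the only character confirmed to cause the issue here is “=”.

# Illustrative helper: flag VM Storage Policy names containing characters
# that broke VDFS file share creation in the case described above.
SUSPECT_CHARACTERS = set("=")  # "=" was the culprit in this particular case

def is_safe_policy_name(name: str) -> bool:
    return not (SUSPECT_CHARACTERS & set(name))

print(is_safe_policy_name("stretched - mirrored FTT=1 RAID-1"))  # False -> rename it
print(is_safe_policy_name("stretched - mirrored FTT1 RAID-1"))   # True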

Cleaning up old vSAN File Services OVF files on vCenter Server

Duncan Epping · Oct 3, 2022 ·

There was a question last week about the vSAN File Services OVF files, specifically about the location where they are stored. I did some digging in the past, but I don’t think I ever shared this. The vSAN File Services OVF is stored on vCenter Server (VCSA) in a folder, one per version. The folder structure looks as shown below; basically, each version of the OVF has a directory with the required OVF files.

root@vcsa-duncan [ ~ ]# ls -lha /storage/updatemgr/vsan/fileService/
total 24K
vsan-health users 4.0K Sep 16 16:09 .
vsan-health root  4.0K Nov 11  2020 ..
vsan-health users 4.0K Nov 11  2020 ovf-7.0.1.1000
vsan-health users 4.0K Mar 12  2021 ovf-7.0.2.1000-17692909
vsan-health users 4.0K Nov 24  2021 ovf-7.0.3.1000-18502520
vsan-health users 4.0K Sep 16 16:09 ovf-7.0.3.1000-20036589
root@vcsa-duncan [ ~ ]# ls -lha /storage/updatemgr/vsan/fileService/ovf-7.0.1.1000/
total 1.2G
vsan-health users 4.0K Nov 11  2020 .
vsan-health users 4.0K Sep 16 16:09 ..
vsan-health users 179M Nov 11  2020 VMware-vSAN-File-Services-Appliance-7.0.1.1000-16695758-cloud-components.vmdk
vsan-health users 5.9M Nov 11  2020 VMware-vSAN-File-Services-Appliance-7.0.1.1000-16695758-log.vmdk
vsan-health users  573 Nov 11  2020 VMware-vSAN-File-Services-Appliance-7.0.1.1000-16695758_OVF10.mf
vsan-health users  60K Nov 11  2020 VMware-vSAN-File-Services-Appliance-7.0.1.1000-16695758_OVF10.ovf
vsan-health users 998M Nov 11  2020 VMware-vSAN-File-Services-Appliance-7.0.1.1000-16695758-system.vmdk

I’ve asked the engineering team, and yes, you can simply delete obsolete versions if you need the disk capacity.
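If you want to see how much space each version consumes before cleaning up, a small sketch like the one below would do the trick. It only uses the path from the listing above, and the deletion itself is commented out on purpose; double-check which versions are obsolete in your environment before removing anything.

# Illustrative sketch: report the size of each vSAN File Services OVF
# directory on the VCSA, and optionally remove an obsolete one.
import os
import shutil

BASE = "/storage/updatemgr/vsan/fileService"

def dir_size_mb(path: str) -> float:
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total / (1024 * 1024)

for entry in sorted(os.listdir(BASE)):
    full = os.path.join(BASE, entry)
    if os.path.isdir(full):
        print(f"{entry:30} {dir_size_mb(full):8.0f} MB")

# Example: reclaim space by deleting an obsolete version, e.g. the 7.0.1 OVF.
# shutil.rmtree(os.path.join(BASE, "ovf-7.0.1.1000"))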

