
Yellow Bricks

by Duncan Epping


vSAN

How VSAN handles a disk or host failure

Duncan Epping · Sep 18, 2013 ·

I have had this question multiple times by now. I wanted to answer it in the Virtual SAN FAQ, but I figured I would need some diagrams and probably more than 2 or 3 sentences to explain it. How are host or disk failures in a Virtual SAN cluster handled? Let's start at the beginning, and I am going to try to keep it simple.

I explained some of the basics in my VSAN intro post a couple of weeks back, but it never hurts to repeat them. I think it is good to explain the IO path first before talking about the failures. Let's look at a 4 host cluster with a single VM deployed. This VM is deployed with the default policy, meaning a “stripe width” of 1 and “failures to tolerate” of 1. When deployed in this fashion, the following is the result:

In this case you can see: 2 mirrors of the VMDKs and a witness. These VMDKs, by the way, are identical; they are exact copies of each other. What else did we learn from this (hopefully) simple diagram?

  • A VM does not necessarily have to run on the same host as where its storage objects are sitting
  • The witness lives on a different host than the components it is associated with, so that there is an odd number of hosts involved and a tiebreaker exists in the event of a network partition
  • The VSAN network is used for communication / IO etc

Okay, so now that we know these facts, it is also worth knowing that VSAN will never place the mirrors on the same host, for availability reasons. When a VM writes, the IO is mirrored by VSAN and will not be acknowledged back to the VM until all mirrors have completed. Meaning that in the example above, the acknowledgements from both “esxi-02” and “esxi-03” will need to have been received before the write is acknowledged to the VM. The great thing here, though, is that all writes go to flash/SSD; this is where the write buffer comes into play. At some point in time VSAN will then destage the data to your magnetic disks, but this will happen without the guest VM knowing about it…

Frequently asked questions about Virtual SAN / VSAN

Duncan Epping · Sep 16, 2013 ·

After I published the vSphere Flash Read Cache FAQ, many asked if I would also do a blog post with frequently asked questions about Virtual SAN / VSAN. I guess it makes sense, considering Virtual SAN / VSAN is such a hot topic. So here are the questions I have received so far, followed by the answers of course. If you have a question do not hesitate to leave a comment.

** updated to reflect VSAN GA **

  • Can I add a host to a VSAN cluster which does not have local disks?
    • Yes, a VSAN cluster can consist of hosts which are not contributing storage to VSAN. You will need to create a VSAN VMkernel interface and simply add the host to the cluster. Note that you will need a minimum of 3 hosts which contribute storage to VSAN.
  • VSAN requires an SSD, what is it used for?
    • The SSD is used for read caching (70%) and write buffering (30%). Every write will go to SSD first and will be destaged to HDD later.
  • When creating my VSAN VM Storage Policy, when do I use “failures to tolerate” and when do I use “stripe width”?
    • Failures to tolerate is all about availability; this is where you define how many host or disk group failures your virtual machine needs to be able to survive. So if you want to take 1 host failure into account, you set the policy to 1. This will then create 2 data objects and 1 witness in your cluster. Stripe width is about performance (read performance when not in cache, and write destaging). Setting it to two or higher will result in data being striped across multiple disks. When used in conjunction with “failures to tolerate” this could potentially result in the data of a single VM being stored on multiple disks on multiple hosts.
  • Is there a default storage policy for VSAN?
    • Yes, there is a policy applied by default to all VMs on a VSAN datastore, but you cannot see this policy within the vSphere UI. You can see the defaults defined for the various object classes using the following command: esxcli vsan policy getdefault. By default an N+1 (1 failure to tolerate) policy is applied, so that even when a user forgets to create and set a policy, objects are made resilient. It is not recommended to change the default policy.
  • How is data striped across multiple disks on a host when stripe width is set to 2?
    • When stripe width is set to 2, first of all, there is no guarantee that the data is striped across disks within a host. VSAN has its own algorithm to determine where data should be placed, and as such it could happen that although you have sufficient disks in all hosts, your data is striped across multiple hosts instead of disks within a host. When data is striped, this is done in chunks of 1MB.
  • What is the purpose of “disk groups” since VSAN will create one datastore anyway?
    • A disk group defines the SSD that is used for caching/buffering in front of a set of HDDs. Basically, a disk group is a way of mapping HDDs to an SSD. Each disk group will have 1 SSD and a maximum of 7 HDDs.
  • How many disks can a single host contribute to VSAN?
    • A maximum of 5 disk groups
    • Each disk group needs 1 SSD and 1 HDD at a minimum and 7 HDDs at a maximum
    • HDD count max per host = 5 x 7 = 35
    • SSD count max per host = 5 x 1 = 5
  • Are both SSD and PCIe Flash cards supported?
    • Yes, both are supported, but check the HCL for more details as there are guidelines and requirements.
  • Is 10GbE a hard requirement for VSAN?
    • 10GbE is not a hard requirement for VSAN. VSAN works perfectly fine in smaller environments, including labs, with 1GbE. Do note that 10GbE is a recommendation.
  • Why is it recommended for HA’s isolation response to be configured to “powered-off”?
    • When VSAN is enabled, vSphere HA uses the VSAN VMkernel network for heartbeating. When a host does not receive any heartbeats, it is most likely that the host is also isolated/partitioned from the rest of the cluster from a VSAN perspective. In this state it is recommended to power off the virtual machines, as a new copy will be powered on by HA on the remaining hosts in the cluster automatically. This way, when the host comes out of isolation, the situation where 2 VMs with the same identity are on the network does not occur.
  • Can I partition my SSD or disks so that I can use them for other (install ESXi / vFlash) purposes?
    • No, you cannot partition your SSD or HDD(s). Virtual SAN will only, and always, claim entire disks. With VSAN it probably makes most sense to install ESXi on an internal USB/SD card, to maximize the capacity available for VSAN.
  • Does VSAN support deduplication or compression?
    • In the current version VSAN does not support deduplication or compression. The most expensive resource in your VSAN cluster is SSD/flash, hence deduplication would be most relevant on that layer. While having multiple copies of your data results in two copies on the HDDs, and two temporary copies in the distributed write buffer (30% of the SSDs), the distributed read cache portion of the flash (70%) will only contain a single copy of any cached data.
  • Can VSAN leverage SAN/NAS datastores?
    • VSAN currently does not support the use of SAN/NAS datastores. Disks will need to be “local” and directly passed to the host.
  • I was told VSAN does thin disks by default, if I set Object Space Reservation to 100% does that mean the VMDK will be eager zero thick provisioned?
    • No, it does not mean the VM (or a portion of it) will be thick provisioned when you define Object Space Reservation. Object Space Reservation is all about the numbers used by VSAN when calculating used disk space / available disk space etc. When Object Space Reservation is set to 100% on a disk of 25GB, then this disk will still be a thin provisioned disk, but VSAN will do its math with 100% of the 25GB counted as used. I guess you can compare it to a memory reservation.
  • Does VSAN use iSCSI or NFS to connect hosts to the datastore?
    • VSAN does not use either of these two to connect hosts to a datastore. It uses a proprietary mechanism.
  • What is the impact of maintenance mode in a VSAN enabled cluster?
    • There are three ways of placing a host which is providing storage to your VSAN datastore in maintenance mode:
      1) Full Data Migration – All data residing on the host will be migrated. Impact: Could take a long time to complete.
      2) Ensure accessibility – VSAN ensures that all VMs will remain accessible by migrating the required data to other hosts. Impact: Potentially availability policies are violated.
      3) No Data Migration – No data will be migrated. Impact: Depending on the “failures to tolerate” policy defined some VMs might become unusable.
      The safest option is option 1, with option 2 being the preferred and default option as it is faster to complete. I guess the question is why you are placing the host in maintenance mode and how fast it will become available again. Option 3 is a fall back, in case you really need to get into maintenance mode fast and don't care about potential data loss.
  • Are there any features of vSphere which aren’t supported/compatible with VSAN?
    • Currently vSphere Distributed Power Management, Storage DRS and Storage IO Control are not supported with VSAN.
  • How do I add a Virtual SAN / VSAN license?
    • VSAN licenses are applied at the cluster level. Open the Web Client, click on your VSAN enabled cluster, click the “Manage” tab followed by “Settings”. Under “Configuration” click “Virtual SAN Licensing” and then click “Assign License Key”.
  • How will Virtual SAN be priced / licensed?
    • VSAN is licensed per socket; the price is $2,495 per socket or $50 per VDI user. Note that the license includes the Distributed Switch and VM Storage Policies, even when using a vSphere license lower than Enterprise Plus!
  • If a host has failed and as such data is lost and all VMs were protected N+1, how long will it take before VSAN starts rebuilding the lost data?
    • VSAN will identify which objects are out of compliance (those which had N+1 and were stored on that host) and start a time-out period of 60 minutes. It has a time-out period to avoid an unnecessary and costly full sync of data. If the host returns within those 60 minutes, then the differences will be copied to that host. When a VM has multiple mirrors it doesn't notice the failure; this 60 minute period is all about going back to full policy compliance, i.e. being able to satisfy additional failures should they occur.
  • When a virtual machines moves around in a cluster will its objects follow to keep IO local?
    • No, objects (virtual disks for instance) do not follow the virtual machine. Just imagine what the cost/overhead of moving virtual disks between hosts would be each time DRS suggests a migration. Instead IO can be done remotely. Meaning that although your virtual machine might run on host-1 from a CPU/Mem perspective, its virtual disks could be physically located on host-2 and host-3.
  • When a Virtual Machine is migrated to another host, is the situation such that after a vMotion the SSD cache is lost (temporary performance hit) and the cache will be rebuilt over time?
    • No, the cache will not be lost and there is no need to rebuild/warm the cache again. The cache will be accessed remotely when needed.
  • Does VSAN support Fault Tolerance aka FT?
    • No, VSAN does not support Fault Tolerance in this release.
  • The SSD in my host is being reported in vSphere as “non-SSD”. According to support this is a known issue with the generation of server I am using. Will this “mis-reporting” of the disk type affect my ability to configure a VSAN?
    • Yes it will; you will need to tag the disk as “local” and as “SSD” using the command below (this is what I use in my lab, your device identifier will be different).
      esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device mpx.vmhba2:C0:T0:L0 --option "enable_local enable_ssd"
  • It was mentioned that it will take 60 minutes after a failure before VSAN starts the automatic repair. Is it possible to shorten this time-out value?
    • **disclaimer: Although I do not recommend changing this value, I was told it is supported**
      Yes, it is possible to shorten this time-out value by configuring the advanced setting named “VSAN.ClomRepairDelay” on every host in your VSAN cluster (a command-line sketch follows this list).
  • Why can’t I use datastore heartbeat functionality in VSAN only cluster?
    • There is no requirement for heartbeat datastores. The reason you do not have this functionality when you only have a VSAN datastore is that HA will use the VSAN network for heartbeats. So if a host is isolated from the VSAN network and cannot send heartbeats, it is safe to say that it will also not be able to update a heartbeat region remotely, which makes it pointless to enable this feature in a VSAN only environment.
  • Are there specific Best Practices around deploying View on VSAN?
    • Yes there are, primarily around availability / caching and capacity reservations. Andre Leibovici wrote an article on this topic, read it!
  • Can the VSAN VMkernel of hosts in a cluster be part of a different subnet?
    • VSAN VMkernel interfaces need to be part of the same subnet. A different subnet for one (or multiple) hosts within a VSAN cluster is not supported. When using multiple VMkernel interfaces per host, each interface needs to be part of a different subnet!
  • Does VSAN support being stretched across multiple geographical locations?
    • In the current version VSAN will not support “metro” clustering.
  • Is there a difference between a host failing and a disk gradually failing?
    • Yes, there is a difference. There are various failure states, and the state determines how fast VSAN will spin up a new mirror. The two failure states are “absent” and “degraded”. Degraded is where a disk has failed and the system has recognized this as such and knows it isn't coming back. In this case VSAN recognizes this “degraded” state and will create a new mirror of the impacted objects immediately, as there is no point in waiting for 60 minutes when you know it isn't coming back soon. The “absent” state means that VSAN doesn't know if it is coming back any time soon; this could be a host that has failed or, for instance, a disk that was yanked out. In this case the 60 minute time-out starts.
  • Is there any explanation around how VSAN handles disk failures or host failures?
    • Yes, I wrote an article on this topic. Please read “How VSAN handles a disk or host failure” for more details.
  • What happens when an SSD fails in a VSAN cluster?
    • An SSD sits in front of a disk group as the read cache / write buffer. When the SSD fails, the disk group and all the components stored on it are marked as degraded. VSAN will then instantiate new mirror copies where applicable and when sufficient disk capacity is available. For more details read this post.
  • Does vSphere support TRIM for SSDs?
    • No, TRIM is currently not supported/leveraged.
  • What are the Maximum Numbers for Virtual SAN GA?
    • 32 hosts per cluster
    • 100 VMs per host maximum
    • 3200 VMs per cluster maximum
    • 2048 VMs HA protected per cluster maximum
    • 2 million IOPS tested
  • How do I size a VSAN datastore / cluster?
    • I developed a sizing calculator which can be found here.
  • How do I monitor VSAN performance?
    • Performance can easily be monitored using the VSAN Observer tool. This has been discussed by various people: here, here, here and here.
  • What's likely to affect VSAN performance?
    • Performance is most likely affected by leveraging cheap flash devices or incorrectly configured policies. When a workload is highly random and has a large “working set”, many of the IOs may need to come from magnetic disk; this can also impact performance, depending on the disk type used and the number of disk stripes.
  • Why is Storage DRS not supported in VSAN?
    • VSAN only provides a single datastore and has its own placement and balancing algorithms.
  • What will happen when the whole environment goes down and powers back on again? Do we run some sort of integrity check?
  • Is VSAN dependent on vCenter? Can I configure VSAN if vCenter is down?
    • VSAN is not dependent on vCenter. It can be configured from the console using “esxcli” and can even be configured and used before vCenter is up and running. William Lam wrote two articles on how to bootstrap vCenter on a single host running VSAN (here and here); see also the sketch after this list.
  • Could you have locality in VSAN? Does locality make sense at all compared to other solutions?
    • By default VSAN does not have a “data locality” concept as I explained here. However, for View environments CBRC is fully supported and that provides a local read cache for desktops.
  • Is vCops aware of VSAN datastore?
    • The current release of vC Ops has limited VSAN functionality. The upcoming version of vC Ops will include more statistics and ways of monitoring a VSAN datastore.
  • How do you back up your VMs in VSAN? Just the usual existing backup procedures?
    • VDP supports VSAN, and various backup vendors are testing/releasing new versions of their products as we speak. VMs stored on a VSAN datastore should not be treated differently than regular VMs.
  • Does VSAN support any data reduction mechanisms like deduplication or compression?
    • In the current version deduplication or compression is not included.
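
For the repair delay question above, here is a minimal sketch of changing that advanced setting from the ESXi shell. I am assuming the esxcli option path for the UI setting “VSAN.ClomRepairDelay” is /VSAN/ClomRepairDelay, so verify it on your build first, and remember the change needs to be made on every host in the cluster:

# Check the current value (the default is 60, i.e. 60 minutes)
esxcli system settings advanced list -o /VSAN/ClomRepairDelay

# Lower the repair delay to, for example, 30 minutes
esxcli system settings advanced set -o /VSAN/ClomRepairDelay -i 30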
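
And for the vCenter question: a rough sketch of configuring VSAN purely through esxcli, in line with the bootstrap approach William describes. The sub-command names are from memory, so double-check them with esxcli vsan --help; vmk1 and the cluster UUID are placeholders:

# Tag an existing VMkernel interface for Virtual SAN traffic
esxcli vsan network ipv4 add -i vmk1

# Create a new VSAN cluster on the first host and note the cluster UUID
esxcli vsan cluster new
esxcli vsan cluster get

# Join the remaining hosts to that cluster using the UUID
esxcli vsan cluster join -u <cluster-uuid>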

If you have a question, please don’t hesitate to ask… Over time I will add more and more to this list so come back regularly.

VMware vSphere Virtual SAN design considerations…

Duncan Epping · Sep 9, 2013 ·

I have been playing a lot with vSphere Virtual SAN (VSAN) in the last couple of months… I figured I would write down some of my thoughts around creating a hardware platform or constructing the virtual environment when it comes to VSAN. There are some recommended practices and there are some constraints, I aim to use this blog post to gather all of these Virtual SAN design considerations. Please read the VSAN introduction, how to install VSAN in your virtual lab and “How do you know where an object is located” to get a better understanding of the product. There is a long list of VSAN blogs that can be found here: vmwa.re/vsan

The below is all based on vSphere 5.5 Virtual SAN (public) Beta and my interpretation and thoughts based on various conversations with colleagues, engineering and reading various documents.

  • vSphere Virtual SAN (VSAN) clusters are limited to a maximum total of 32 hosts and there is a minimum of 3 hosts. VSAN is also currently limited to 100 VMs per host, resulting in a maximum of 3200 VMs in a 32 host cluster. Please note that HA currently has a limit of 2048 protected VMs in a single Datastore.
  • It is recommended to dedicate a 10GbE NIC port to your VSAN VMkernel traffic, although 1GbE is fully supported it could be a limiting factor in I/O intensive environments. Both VSS and VDS are supported.
  • It is recommended to have a VSAN VMkernel on every physical NIC! Ensure you configure them in an “active/standby” configuration, so that when you have 2 physical NIC ports and 2 VSAN VMkernel interfaces, each of them will have its own port. Do note that multiple VSAN VMkernel NICs on a single host on the same subnet is not a supported configuration; in different subnets it is supported. (See the command sketch below this list for how to check which interfaces are tagged for VSAN.)
  • IP Hash Load Balancing is supported by VSAN, but due to the limited number of IP addresses between source and destination, the load balancing benefits could be limited. In other words, an etherchannel formed out of 4 x 1GbE NICs will most likely not result in 4GbE.
  • Although Jumbo Frames are fully supported with VSAN they do add a level of operational complexity. When Jumbo Frames are enabled ensure these are enabled end-to-end!
  • VSAN requires at a minimum 1 SSD and 1 Magnetic Disk per diskgroup on a host which is contributing storage. Each diskgroup can have a maximum of 1 SSD and 7 magnetic disks. When you have more than 7 HDDs or two or more SSDs you will need to create additional diskgroups.
  • Each host that is providing capacity to the VSAN datastore has at least one local diskgroup. There is a maximum of 5 disk groups per host!
  • It can be beneficial to create multiple smaller disk groups instead of fewer larger disk groups. More disk groups means smaller failure domains and more cache drives / queues.
  • Ensure when sizing your environment to take data replicas into account. If your environment needs N+1 or N+2 (etc.) resiliency, factor this in accordingly.
  • SSD capacity does not count towards total VSAN datastore capacity. When sizing your environment, do not include SSD capacity in your total capacity calculation.
  • It is a recommended practice to have a minimum 1:10 ratio of SSD capacity to HDD capacity in each disk group. In other words, when you have 1TB of HDD capacity, it is recommended to have at least 100GB of SSD capacity. Note that VMware’s recommendation has changed since BETA, new recommendation is:
    • 10 percent of the anticipated consumed storage capacity before the number of failures to tolerate is considered
  • By default, 70% of the available SSD capacity will be used as read cache and 30% will be used as a write buffer. As in most designs, when it comes to cache/buffer –> more = better.
  • Selecting an SSD with the right performance profile can easily make a 5x-10x difference in VSAN performance, so choose carefully and wisely. Both SSD and PCIe flash solutions are supported, but there are requirements! Make sure to check the HCL before purchasing new hardware. My tip: the Intel S3700, a great price/performance balance.
  • VSAN relies on VM Storage Policies for policy based management. There is a default policy under the hood, but you cannot see it within the UI. As such, it is a recommended practice to create a new standard policy for your environment after VSAN has been configured. It is recommended to start with all settings at their defaults, and ensure “Number of failures to tolerate” is configured to 1. This guarantees that when a single host fails, virtual machines can be restarted and recovered from this failure with minimal impact on the environment. Attach this policy to your virtual machines when migrating them to VSAN or during virtual machine provisioning.
  • Configure vSphere HA isolation response to “power-off” to ensure that virtual machines which reside on an isolated host can be safely restarted.
  • Ensure the vSphere HA admission control policy (“host failures to tolerate” or the percentage based policy) aligns with your VSAN availability strategy. In other words, ensure that both compute and storage are configured using the same “N+x” availability approach.
  • When defining your VM Storage Policy avoid unnecessary usage of “flash read cache reservation”. VSAN has internal read cache optimization algorithms, trust it like you trust the “host scheduler” or DRS!
  • VSAN does not support virtual machine disks greater than 2TB-512b, VMs which require larger VMDKs are not suitable candidates at this point in time for VSAN.
  • VSAN does not support FT, DPM, Storage DRS or Storage I/O Control. It should be noted though that VSAN internally takes care of scheduling and balancing when required. Storage DRS and SIOC are designed for SAN/NAS environments.
  • Although supported by VSAN, it is a recommended practice to keep the host/disk configuration across a VSAN cluster similar. A non-uniform cluster configuration could lead to variations in performance and could make it more complex to stay compliant with defined policies after a failure.
  • When adding new SSDs or HDDs ensure these are not pre-formatted. Note that when VSAN is configured to “automatic mode” disks are added to existing disk groups or new disk groups are created automatically.
  • Note that vSphere HA behaves slightly different in a VSAN enabled cluster, here are some of the changes / caveats
    • Be aware that when HA is turned on in the cluster, FDM agent (HA) traffic goes over the VSAN network and not the Management Network. However, when a potential isolation is detected, HA will ping the default gateway (or the specified isolation address) using the Management Network.
    • When enabling VSAN ensure vSphere HA is disabled. You cannot enable VSAN when HA is already configured. Either configure VSAN during the creation of the cluster or disable vSphere HA temporarily when configuring VSAN.
    • When there are only VSAN datastores available within a cluster then Datastore Heartbeating is disabled. HA will never use a VSAN datastore for heartbeating.
    • When changes are made to the VSAN network it is required to re-configure vSphere HA.
  • VSAN requires a RAID Controller / HBA which supports passthrough mode or pseudo passthrough mode. Validate with your server vendor if the included disk controller has support for passthrough. An example of a passthrough mode controller which is sold separately is the LSI SAS 9211-8i.
  • Ensure log files are stored externally to your ESXi hosts and VSAN by leveraging vSphere's syslog capabilities (see the example below this list).
  • ESXi can be installed on: USB, SD and Magnetic Disk. Hosts with 512GB or more memory are only supported when ESXi is installed on magnetic disk.
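
Two of the recommendations above lend themselves to quick illustrations from the ESXi shell. First, to check which VMkernel interfaces on a host are tagged for Virtual SAN traffic and how they are configured (a minimal sketch):

# Show which VMkernel interfaces are tagged for Virtual SAN traffic
esxcli vsan network list

# Show the IP configuration of all VMkernel interfaces
esxcli network ip interface ipv4 get

Second, for the syslog recommendation, a minimal example of pointing a host at an external syslog server; the hostname and port are placeholders for your own syslog target:

# Send the host's logs to an external syslog server and reload the configuration
esxcli system syslog config set --loghost='udp://syslog.lab.local:514'
esxcli system syslog reload

# Allow outbound syslog traffic through the ESXi firewall
esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true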

That is it for now. When more comes to mind I will add it to the list!

How do you know where an object is located with Virtual SAN?

Duncan Epping · Sep 5, 2013 ·

You must have been wondering the same thing after reading the introduction to Virtual SAN. Last week at VMworld I received many questions on this topic, so I figured it was time for a quick blog post on the matter. How do you know where a storage object resides with Virtual SAN when you are striping across multiple disks and have multiple hosts for availability purposes? Yes, I know this is difficult to grasp; even with just multiple hosts for resiliency, where are things placed? The diagram gives an idea, but that is just from an availability perspective (in this example “failures to tolerate” is set to 1). If you have a stripe width of 2 configured, then imagine what happens to that picture. (Before I published this article, I spotted this excellent primer by Cormac on this exact topic…)

Luckily you can use the vSphere Web Client to figure out where objects are placed:

  • Go to your cluster object in the Web Client
  • Click “Monitor” and then “Virtual SAN”
  • Click “Virtual Disks”
  • Click your VM and select the object

The screenshot below depicts what you could potentially see. In this case the policy was configured with “1 host failure to tolerate” and “disk striping set to 2”. I think the screenshot explains it pretty well, but let's go over it.

The “Type” column shows what it is: a “witness” (no data) or a “component” (data). The “Component state” column shows whether it is available (active) or not at the moment. The “Host” column shows on which host it currently resides, and the “SSD Disk Name” column shows which SSD is used for read caching and write buffering. If you scroll to the right you can also see on which magnetic disk the data is stored, in the column called “Non-SSD Disk Name”.

Now, in our example below you can see that “Hard disk 2” is configured as RAID 1, immediately followed by RAID 0. The “RAID 1” refers to “availability”, in this case aka “component failures”, and the “RAID 0” is all about disk striping. As we configured “component failures” to 1, we can see two copies of the data, and as we said we would like to stripe across two disks for performance, you see a “RAID 0” underneath each copy. Note that this is just an example to illustrate the concept; it is not a best practice or recommendation, as that should be based on your requirements! Last but not least we see the “witness”; this is used in case of a host failure. If host 10.20.177.19 were to fail or be isolated from the network somehow, then the witness would be used by host 10.20.177.17 to claim ownership. Makes sense, right?

Virtual SAN object location

Hope this helps understanding Virtual SAN object location a bit better… When I have the time available, I will try to dive a bit more in to the details of Storage Policy Based Management.

Testing vSphere Virtual SAN in your virtual lab with vSphere 5.5

Duncan Epping · Sep 2, 2013 ·

For those who want to start testing the beta of vSphere Virtual SAN in their lab with vSphere 5.5, I figured it would make sense to describe how I created my nested lab. (Do note that performance will be far from optimal.) I am not going to describe how to install ESXi nested, as there are a billion articles out there that describe how to do that. I suggest creating ESXi hosts with 3 disks each and a minimum of 5GB of memory per host:

  • Disk 1 – 5GB
  • Disk 2 – 20GB
  • Disk 3 – 200GB

After you have installed ESXi and imported a vCenter Server Appliance (my preference for lab usage, so easy and fast to set up!) you add your ESXi hosts to your vCenter Server. Note: add them to the vCenter Server, NOT to a cluster yet.

Login via SSH to each of your ESXi hosts and run the following commands:

  • esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device mpx.vmhba2:C0:T0:L0 --option "enable_local enable_ssd"
  • esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device mpx.vmhba3:C0:T0:L0 --option "enable_local"
  • esxcli storage core claiming reclaim -d mpx.vmhba2:C0:T0:L0
  • esxcli storage core claiming reclaim -d mpx.vmhba3:C0:T0:L0

These commands ensure that the disks are seen as “local” disks by Virtual SAN and that the “20GB” disk is seen as an “SSD”, even though it isn't one: the first two commands add the claim rules and the two reclaim commands apply them. There is another option which might even be better: you can simply add a VMX setting to specify that the disks are SSDs, as sketched below. Check William's awesome blog post for the how-to.
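
For completeness, this is roughly what that VMX approach looks like. Treat it as a sketch rather than the exact steps from William's post: the assumption is that a virtualSSD entry for the disk in question makes the guest report it as flash, and that you add it while the nested ESXi VM is powered off:

# Example .vmx entry for the nested host's second disk (scsi0:1); adjust the
# controller/unit number to match the disk you want presented as an SSD
scsi0:1.virtualSSD = "1"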

After running these commands we will need to make sure the hosts are configured properly for Virtual SAN. If you have not added them to your vCenter Server yet, do so now, but without adding them to a cluster! So just add them at the Datacenter level.

Now we will properly configure the hosts. We will need to create an additional VMkernel adapter; do this for each of the three hosts (a command-line alternative follows these steps):

  1. Click on your host within the web client
  2. Click “Manage” -> “Networking” -> “VMkernel Adapters”
  3. Click the “Add host networking” icon
  4. Select “VMkernel Network Adapter”
  5. Select the correct vSwitch
  6. Provide an IP-Address and tick the “Virtual SAN” traffic tickbox!
  7. Next -> Next -> Finish
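
If you prefer the command line over the Web Client for this step, something along the following lines should achieve the same result. The portgroup name, interface name and IP address are examples from my lab (and the portgroup must already exist on a standard vSwitch); yours will differ:

# Create a new VMkernel adapter on an existing portgroup and give it a static IP
esxcli network ip interface add --interface-name=vmk1 --portgroup-name="VSAN"
esxcli network ip interface ipv4 set -i vmk1 -I 192.168.1.11 -N 255.255.255.0 -t static

# Tag the new interface for Virtual SAN traffic
esxcli vsan network ipv4 add -i vmk1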

When this is configured for all three hosts, configure a cluster:

  1. Click your “Datacenter” object
  2. On the “Getting started” tab click “Create a cluster”
  3. Give the cluster a name and tick the “Turn On” tickbox for Virtual SAN
  4. Also enable HA and DRS if required

Now you should be able to move your hosts into the cluster. With the Web Client for vSphere 5.5 you can simply drag and drop the hosts one by one into the cluster. VSAN will now be automatically configured for these hosts… Nice, right? When all configuration tasks are completed, just click on your Cluster object and then “Manage” -> “Settings” -> “Virtual SAN”. Now you should see the number of hosts that are part of the VSAN cluster, the number of SSDs and the number of data disks.
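
You can also verify this per host from the ESXi shell instead of the Web Client; a quick sketch:

# Confirm the host has joined the VSAN cluster (shows the cluster UUID and members)
esxcli vsan cluster get

# List the SSDs and magnetic disks this host contributes to the VSAN datastore
esxcli vsan storage list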

Now before you get started there is one thing you will need to do, and that is enable “VM Storage Policies” on your cluster / hosts. You can do this via the Web Client as follows:

  • Click the “home” icon
  • Click “VM Storage Policies”
  • Click the little policy icon with the green checkmark, second from the left
  • Select your cluster and click “Enable” and then close

Note that although you have now enabled VM Storage Policies, there are no pre-defined policies. Yes, there is a “default policy”, but you can only see that on the command line. For those interested, just open up an SSH session and run the following command:

~ # esxcli vsan policy getdefault
Policy Class  Policy Value
------------  --------------------------------------------------------
cluster       (("hostFailuresToTolerate" i1) )
vdisk         (("hostFailuresToTolerate" i1) )
vmnamespace   (("hostFailuresToTolerate" i1) )
vmswap        (("hostFailuresToTolerate" i1) ("forceProvisioning" i1))
~ #

Now this means that, in the case of “hostFailuresToTolerate”, Virtual SAN can tolerate 1 host failure before you potentially lose data. In other words, in a 3 node cluster you will have 2 copies of your data and a witness. Now if you would like to have N+2 resiliency instead of N+1, it is fairly straightforward. You do the following:

  • Click the “home” icon
  • Click “VM Storage Policies”
  • Click the “New VM Storage Policy” icon
  • Give it a name, I used “N+2 resiliency” and click “Next”
  • Click “Next” on Rule-Sets and select a vendor, which will be “vSan”
  • Now click <add capability> and select “Number of failures to tolerate” and set it to 2 and click “Next”
  • Click “Next” -> “Finish”

That is it for creating a new profile. Of course you can make these as complex as you want; there are various other options like “Number of disk stripes” and “Flash read cache reservation %”. For now I wouldn't recommend tweaking these too much unless you absolutely understand the impact of changing them.

In order to use the profile you will go to an existing virtual machine and you right click it and do the following:

  • Click “All vCenter Actions”
  • Click “VM Storage Service Policies”
  • Click “Manage VM Storage Policies”
  • Select the appropriate policy on “Home VM Storage Policy” and do not forget to hit the “Apply to disks” button
  • Click OK

Now the new policy will be applied to your virtual machine and its disk objects! Also, while deploying a new virtual machine you can immediately select the correct policy in the provisioning workflow, so that it is deployed in a correct fashion.

These are some of the basics for testing VSAN in a virtual environment… now register and get ready to play!

