
Yellow Bricks

by Duncan Epping



New beta of the vSphere 5.5 U1 Hardening Guide released

Duncan Epping · May 28, 2014 ·

Mike Foley just announced the new release of the vSphere 5.5 U1 Hardening Guide. Note that it is still labeled as a “beta”, as Mike is still gathering feedback; the document should be finalized in the first week of June.

For those concerned about security, this is an absolute must-read! As always, before implementing ANY of these recommendations, make sure to test them on a test cluster and validate the expected functionality of both the vSphere platform and the virtual machines and applications running on top of it.

Nice work Mike!

One versus multiple VSAN disk groups per host

Duncan Epping · May 22, 2014 ·

I received two questions on the same topic this week, so I figured it would make sense to write something up quickly. The questions were around an architectural decision for VSAN: one versus multiple VSAN disk groups per host. I have explained the concept of disk groups in various posts already, but in short this is what a disk group is and what the requirements are:

A disk group is a logical container for disks used by VSAN. Each disk group needs at a minimum 1 magnetic disk and can have a maximum of 7 magnetic disks. Each disk group also requires 1 flash device.

VSAN disk groups
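
For those who like to see that constraint written down, here is a tiny sketch in Python. The helper and constants are my own and purely illustrative, not a VMware API; it just encodes the rules above:

```python
# Minimal sketch (not a VMware API): check whether a proposed disk group
# composition follows the rules described above.
MIN_MAGNETIC_DISKS = 1
MAX_MAGNETIC_DISKS = 7
FLASH_DEVICES_PER_GROUP = 1

def is_valid_disk_group(flash_devices: int, magnetic_disks: int) -> bool:
    """Return True when the combination forms a valid VSAN disk group."""
    return (flash_devices == FLASH_DEVICES_PER_GROUP
            and MIN_MAGNETIC_DISKS <= magnetic_disks <= MAX_MAGNETIC_DISKS)

print(is_valid_disk_group(1, 2))  # True: 1 flash device + 2 magnetic disks
print(is_valid_disk_group(1, 8))  # False: more than 7 magnetic disks
```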

Now when designing your VSAN cluster, at some point the question will arise: should I have one or multiple disk groups per host? Can and will it impact performance? Can it impact availability?

There are a couple of things to keep in mind when it comes to VSAN, if you ask me. The flash device that is part of each disk group is the caching/buffer layer for the disks in that group; without the flash device those disks will also be unavailable. As such a disk group can be seen as a “failure domain”, because if the flash device fails the whole disk group is unavailable for that period of time. (Don’t worry, VSAN will automatically rebuild all impacted components.) Another thing to keep in mind is performance. Each flash device will provide an X amount of IOPS. A higher total number of IOPS could (and probably will) change performance drastically, although it should be noted that capacity could still be a constraint. If this all sounds a bit fluffy, let’s run through an example!

  • Total capacity required: 20TB
  • Total flash capacity: 2TB
  • Total number of hosts: 5

This means that per host we will require:

  • 4TB of disk capacity (20TB/5 hosts)
  • 400GB of flash capacity (2TB/5 hosts)

This could simply result in each host having 2 x 2TB NL-SAS and 1 x 400GB flash device. Let’s assume your flash device is capable of delivering 36000 IOPS… You can see where I am going, right? What if I had 2 x 200GB flash devices and 4 x 1TB magnetic disks instead? Typically the lower capacity drives will do fewer write IOPS; for the Intel S3700, for instance, that is about 4000 less. So instead of 1 x 36000 IOPS it would result in 2 x 32000 IOPS per host. Yes, that could have a nice impact indeed…

But not just that, we also end up with more disk groups and, as a result, smaller failure domains. On top of that we will have more magnetic disks, which means more IOPS per GB of capacity in general. (If a 2TB NL-SAS drive does 80 IOPS, then two 1TB NL-SAS drives will do 160 IOPS: the same TB capacity but twice the IOPS if you need it.)
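
If you like to see that math written out, here is a quick back-of-the-envelope sketch in Python. The IOPS figures are simply the example numbers used in this post (real devices will differ), and the helper name is purely illustrative:

```python
# Back-of-the-envelope comparison of the two example host layouts above.
# All IOPS figures are the example numbers from this post, not vendor specs.

def host_totals(flash_devices, flash_iops_each, magnetic_disks, magnetic_iops_each):
    """Return (aggregate flash IOPS, aggregate magnetic IOPS) for one host."""
    return flash_devices * flash_iops_each, magnetic_disks * magnetic_iops_each

# Option A: 1 disk group per host -> 1 x 400GB flash + 2 x 2TB NL-SAS
flash_a, magnetic_a = host_totals(1, 36000, 2, 80)
# Option B: 2 disk groups per host -> 2 x 200GB flash + 4 x 1TB NL-SAS
flash_b, magnetic_b = host_totals(2, 32000, 4, 80)

print(flash_a, magnetic_a)  # 36000 flash IOPS, 160 magnetic IOPS
print(flash_b, magnetic_b)  # 64000 flash IOPS, 320 magnetic IOPS
# Same 4TB of raw capacity per host, but option B offers more cache IOPS,
# more spindles, and two smaller failure domains.
```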

In summary, yes there is a benefit in having more disk groups per host and as such more flash devices…

How long will VSAN rebuilding take with large drives?

Duncan Epping · May 15, 2014 ·

I have seen this question popping up various times now: people want to know how long VSAN rebuilding will take with large drives. It was asked on Twitter as well today, and I think there are some common misconceptions out there when it comes to rebuilding. Maybe this tweet summarizes those misconceptions best:

https://twitter.com/rbergin/status/466908885165424641

There are a couple of things I feel need to be set straight here:

  1. VSAN is an object store storage solution, each disk is a destination for objects
  2. There is no filesystem or RAID set spanning disks

I suggest you read the above twice. Now, if you know that there is no RAID set spanning disks and no single filesystem formatted across multiple disks, you can conclude the following: if a disk fails, only what is on that disk will need to be rebuilt. Let’s look at an example:

I have a 4TB disk with 1TB of capacity used by virtual machine objects. The 4TB disk fails. Now the objects are more than likely out of compliance from an availability standpoint, and VSAN will start rebuilding the missing components of those objects. Notice I said “objects and components” and not “disk”. This means that VSAN will start reconstructing the 1TB worth of components of the impacted objects, and not the full 4TB! The total size of the lost components is what matters, not the total size of the lost disk.

Now when VSAN starts rebuilding, it is good to know that all hosts that hold components of impacted objects will contribute to the rebuild. Even better, VSAN does not have to wait for the failed disk to be replaced or return for duty… VSAN uses the whole VSAN cluster as a hot spare and will start rebuilding those components within your cluster, as long as there is sufficient disk capacity available of course. On top of that, the rebuilding logic of VSAN is smart… it will not just go all out, but will instead take the current workload into consideration. If you have virtual machines which are doing a lot of IO, then VSAN, while rebuilding, is smart enough to prioritize the rebuilding of those components in such a way that it will not hurt your workloads.

Now the question remains, how long will it take to rebuild 1TB worth of lost components? Well that depends… And what does it depend on?

  • Total size of the components to be rebuilt for the impacted objects
  • Number of hosts in the cluster
    • Number of hosts contributing to the rebuild
  • Number of disks per host
  • Network infrastructure
  • Current workload of VMs within the cluster

A lot of variables indeed, which makes it difficult for me to predict how long a rebuild will take.
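
As a pure thought experiment, you could sketch the relationship between those variables like this. The throughput figure below is completely hypothetical and this is not the formula VSAN uses; it just shows why more contributing hosts and more spare bandwidth shorten the rebuild:

```python
# Thought experiment only: lost data divided by the aggregate spare rebuild
# throughput of the surviving hosts. All numbers are hypothetical.

def estimated_rebuild_hours(lost_component_gb, contributing_hosts, spare_mb_per_sec_per_host):
    """Very rough estimate of rebuild time in hours."""
    aggregate_mb_per_sec = contributing_hosts * spare_mb_per_sec_per_host
    seconds = (lost_component_gb * 1024) / aggregate_mb_per_sec
    return seconds / 3600

# 1TB of lost components, 4 contributing hosts, ~50MB/s of spare rebuild
# bandwidth per host (hypothetical; depends on workload, disks and network).
print(round(estimated_rebuild_hours(1024, 4, 50), 1))  # ~1.5 hours
```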

Oh, and before I forget, congrats to the VSAN team for winning best of Microsoft TechEd in the virtualization category. WHAT? Yes you read that correctly…

Using differently sized disks in a VSAN environment

Duncan Epping · May 13, 2014 ·

Internally someone asked this question, and at the Italian VMUG I had someone ask me the same question… What if I want to scale out, or scale up, and need to add differently sized disks to my existing VSAN environment? Will that be as expensive an exercise as with (some) traditional RAID systems?

Some of you who have introduced new disks to RAID sets in the past may have seen this: you add a 2TB disk to a RAID config that only has 1TB disks and you waste 1TB, as the RAID set only uses capacity equal to that of the other disks. Fortunately, VSAN is not like this!

With VSAN you can scale up and scale out dynamically. VSAN does not, to a certain extent, care about the disk capacity. For VSAN the disk is just a destination to store objects; there is no filesystem or lower-level formatting going on to stripe blocks across the disks. Sure, it uses a filesystem… but this is “local” to the disk, and not across disks. So whether you add a 1TB disk to an environment with all 1TB disks, or you add a 2TB disk, it will not matter to VSAN. The same applies to replacing disks, by the way: if you need to replace a 1TB disk because it has gone bad and would like to use a 2TB disk instead… go ahead! Each disk will have its own filesystem, and the full capacity can be used by VSAN!

The question then arises: will it make a difference if I use RAID-0 or passthrough at the disk controller level? Again, it does not. Keep in mind that when you do RAID-0 configurations for VSAN, each disk is in its own RAID-0 configuration. Meaning that if you have 3 disks, you will have 3 RAID-0 sets, each containing 1 disk. Of course, there is a small implication here when you replace disks, as you will need to remove the old RAID-0 set with that disk and create a new RAID-0 set with the new disk, but that is fairly straightforward.

One thing to keep in mind though, from an architectural / operational perspective… if you swap out a 1TB disk for a 2TB disk, then you will need to ask yourself whether this will impact the experience for your customers. Will the performance be different? Because 100 IOPS coming from a 1TB disk is different than 100 IOPS coming from a 2TB disk, as you will (simply said) be sharing those 100 IOPS with more VMs (capacity). In short: 100 IOPS for a 1000GB disk = 0.1 IOPS per GB, but 100 IOPS for a 2000GB disk = 0.05 IOPS per GB; you can see the potential impact, right… You have more capacity per disk, but with the same number of IOPS being provided by that disk. Hopefully though the majority of IO (all writes for sure, and most reads) will be handled by flash, so the impact should be relatively low. Still, something to consider.
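
The division above in a couple of lines of Python, using just the numbers from this example:

```python
# IOPS density: the same 100 IOPS spread over more capacity means fewer
# IOPS per GB for the VMs placed on that disk.

def iops_per_gb(disk_iops, disk_capacity_gb):
    return disk_iops / disk_capacity_gb

print(iops_per_gb(100, 1000))  # 0.1 IOPS per GB for the 1TB disk
print(iops_per_gb(100, 2000))  # 0.05 IOPS per GB for the 2TB disk
```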

HA restarts in a DR/DA event

Duncan Epping · May 3, 2014 ·

I received a couple of questions last week about HA restarts in the scenario where a full site failure has occurred or a part of the storage system has failed and needs to be taken over by another datacenter. Yes indeed this is related to stretched clusters and HA restarts in a DR/DA event.

The questions were straightforward: how does the restart timeout work, and what happens after the last retry? I wrote about HA restarts and the restart sequence last year, so let’s just copy and paste that here:

  • Initial restart attempt
  • If the initial attempt failed, a restart will be retried 2 minutes after the previous attempt
  • If that attempt failed, a restart will be retried 4 minutes after the previous attempt
  • If that attempt failed, a restart will be retried 8 minutes after the previous attempt
  • If that attempt failed, a restart will be retried 16 minutes after the previous attempt

You can extend the number of restart retries by increasing the value “das.maxvmrestartcount”; after that, a new restart will be attempted every 15/16 minutes. The question this triggered though is: why would it even take 4 retries? The answer I got was: we don’t know if we will be able to fail over the storage within 30 minutes, and whether we will have sufficient compute resources…
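
To show where that 30 minute figure comes from, the little sketch below simply adds up the retry schedule listed above (an initial attempt, then retries after 2, 4, 8 and 16 minutes). It is an illustration of the schedule described in this post, not HA/FDM code:

```python
# Add up the restart schedule from the list above: an initial attempt,
# followed by retries 2, 4, 8 and 16 minutes after the previous one.
# Extra attempts (with das.maxvmrestartcount raised) follow every ~16 minutes.

def restart_attempt_minutes(total_attempts=5):
    """Return the minute mark (after the failure) of each restart attempt."""
    times, delay = [0], 2
    for _ in range(total_attempts - 1):
        times.append(times[-1] + delay)
        delay = min(delay * 2, 16)
    return times

print(restart_attempt_minutes())   # [0, 2, 6, 14, 30] -> last default retry at ~30 minutes
print(restart_attempt_minutes(7))  # [0, 2, 6, 14, 30, 46, 62]
```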

Here comes the sweet part about vSphere HA: it actually is a pretty smart solution and will know whether VMs can be restarted or not. In this case, as the datastore is not available, there is absolutely no point in even trying, so HA will not even bother. As soon as the storage becomes available, though, the restart attempts will start. The same applies to compute resources: if for whatever reason there are insufficient unreserved compute resources to restart your VMs, then HA will wait for them to become available… nice, right!? Do note I emphasized the word “unreserved”, as that is what HA cares about, not the actually used resources.

