
Yellow Bricks

by Duncan Epping


6.1

Removing stretched VSAN configuration?

Duncan Epping · Dec 15, 2015 ·

I had a question today about how to safely remove a stretched VSAN configuration without putting any of the workloads in danger. This is fairly straightforward to be honest, but there are one or two things which are important. (For those wondering why you would want to do this: some customers played with this option, started loading workloads on top of VSAN, and then realized it was still running in stretched mode.) Here are the steps required:

  1. Click on your VSAN cluster, go to Manage, and disable the stretched configuration
    • This will remove the witness host, but will leave the 2 fault domains intact
  2. Remove the two remaining fault domains
  3. Go to the Monitor section, click on Health, and check the “virtual san object health”. Most likely it will be “red” as the “witness components” have gone missing. By default VSAN will repair this automatically after 60 minutes, but we prefer to take step 4 as soon as possible after removing the fault domains! (See the sketch after this list for one way to check that repair delay.)
  4. Click “repair object immediately”; the witness components will be recreated and the VSAN cluster will be healthy again.
  5. Click “retest” after a couple of minutes
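
As a side note to step 3: the 60-minute window corresponds to the per-host advanced setting VSAN.ClomRepairDelay. Below is a minimal pyVmomi sketch that simply reads this setting for every host in a cluster, so you can verify what your repair delay actually is before deciding whether to wait or to use “repair object immediately”. The vCenter address, credentials, and cluster name are placeholders for your own environment.

```python
# Minimal sketch: report the vSAN repair delay (VSAN.ClomRepairDelay) for every
# host in a cluster. Hostname, credentials and cluster name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def get_cluster(content, name):
    """Find a ClusterComputeResource by name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    try:
        for cluster in view.view:
            if cluster.name == name:
                return cluster
    finally:
        view.Destroy()
    raise RuntimeError("Cluster %s not found" % name)

def main():
    ctx = ssl._create_unverified_context()  # lab use only
    si = SmartConnect(host="vcenter.local", user="administrator@vsphere.local",
                      pwd="password", sslContext=ctx)
    try:
        cluster = get_cluster(si.RetrieveContent(), "VSAN-Cluster")
        for host in cluster.host:
            opt_mgr = host.configManager.advancedOption
            # QueryOptions returns the OptionValue objects matching the key
            for opt in opt_mgr.QueryOptions("VSAN.ClomRepairDelay"):
                print("%s: repair delay = %s minutes" % (host.name, opt.value))
    finally:
        Disconnect(si)

if __name__ == "__main__":
    main()
```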

By the way, that “repair object immediately” feature can also be used in the case of a regular host failure where “components” have gone absent. Very useful feature, especially if you don’t expect a host to return any time soon (hardware failure for instance) and have the spare capacity.

Doing a stretched VSAN design?

Duncan Epping · Nov 27, 2015 ·

As I have already given various people individually the formulas needed to calculate how much bandwidth is required, I figured I would share this as well. If you are doing a stretched VSAN design you will want to read this excellent paper by Jase McCarty. The paper describes the bandwidth requirements between the “data sites” and from the data sites to the “witness site”. It provides the formulas needed, and it will show you that the “general guidelines” provided during launch were relatively conservative. In many cases, especially the connection to the witness location can be low bandwidth. Just have a read when you are designing a stretched VSAN and do the math.

http://www.vmware.com/files/pdf/products/vsan/vmware-virtual-san-6.1-stretched-cluster-bandwidth-sizing.pdf
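
To give an idea of the kind of math the paper walks through, here is a small Python sketch based on the rules of thumb I remember from the sizing guidance: the inter-site link carries the write bandwidth multiplied by a data multiplier and a resynchronization multiplier (1.4 and 1.25 respectively in the guidance), and the witness link needs roughly 1138 bytes per component every 5 seconds (about 2 Mbps per 1000 components). Treat the multipliers and constants here as illustrative assumptions and verify them against the paper before using them in a design.

```python
# Illustrative sketch of the stretched-cluster bandwidth math. The multipliers
# and the per-component witness constant are assumptions taken from the general
# guidance; always verify against the sizing paper.

def intersite_bandwidth_mbps(write_bandwidth_mbps, data_multiplier=1.4,
                             resync_multiplier=1.25):
    """Bandwidth needed between the two data sites: B = Wb * md * mr."""
    return write_bandwidth_mbps * data_multiplier * resync_multiplier

def witness_bandwidth_mbps(num_components, bytes_per_component=1138,
                           interval_seconds=5):
    """Bandwidth needed to the witness site, driven by the component count."""
    bits = num_components * bytes_per_component * 8
    return bits / interval_seconds / 1_000_000  # bits/s to Mbps

if __name__ == "__main__":
    # Example: workload writing 200 Mbps across the cluster, 1000 components.
    print("Data site link: %.0f Mbps" % intersite_bandwidth_mbps(200))
    print("Witness link:   %.2f Mbps" % witness_bandwidth_mbps(1000))
```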

Virtual SAN: Generic storage platform of the future

Duncan Epping · Nov 20, 2015 ·

Over the last couple of weeks I have been presenting at various events on the topic of Virtual SAN. One of the sections in my deck is a bit about the future of Virtual SAN and where it is heading. Someone recently tweeted one of the diagrams in my slides, which got picked up by Christian Mohn, who provided his thoughts on the diagram and what it may mean for the future. I figured I would share my story behind this slide, which is actually a new version of a slide that was originally presented by Christos and also discussed in one of his blog posts. First, let’s start with the diagram:

If you look at VSAN today and ask people what it is, most will answer: a “virtual machine” storage system. But to me VSAN is much more than that. VSAN is a generic object storage platform, which today is primarily used to store virtual machines. But these objects can be anything if you ask me, and on top of that they can be presented as anything.

So what is VMware working towards, what is our vision? VSAN was designed from the start to serve as a generic object storage platform, and is being extended to serve as a platform for different types of data by providing an abstraction layer. In the diagram you see “REST” and “FILE” and things like Mesos and Docker; it isn’t difficult to imagine what types of workloads we envision running on top of VSAN and what types of access you have to the resources managed by VSAN. This could be through a native REST API that is part of the platform, which developers can use directly to store their objects, or through a specific driver for direct “block” access, for instance.

Combine that with the prototype of the distributed filesystem which was demonstrated at VMworld, and I think it is fair to say that the possibilities are endless. VSAN isn’t just a storage system for virtual machines; it is a generic object-based storage platform which leverages local resources and will be able to share those in a clustered fashion in any shape or form in the future. Christian definitely had a point, but in which shape or form all of this will be delivered remains to be seen, and this is not something I can (or want to) speculate on. Whether that is through Photon Platform or something else is in my opinion beside the point. Even today VSAN has no dependencies on vCenter Server and can be fully configured, managed, and monitored using the APIs and/or the different command-line interface options we offer. Agility and choice have always been the key design principles for the platform.

Where things will go exactly and when this will happen is still to be seen. But if you ask me, exciting times are ahead for sure, and I can’t wait to see how everything plays out.

 

SMP-FT support for Virtual SAN ROBO configurations

Duncan Epping · Oct 12, 2015 ·

When we announced Virtual SAN 2-node ROBO configurations at VMworld we received a lot of great feedback and responses. A lot of people asked if SMP-FT was supported in that configuration. Apparently many of the customers using ROBO still have legacy applications which can use some form of extra protection against a host failure. Unfortunately the Virtual SAN team had not anticipated this and had not explicitly tested this scenario, so our response had to be: not supported today.

We took the feedback to the engineering and QA teams, and they managed to do full end-to-end tests for SMP-FT on 2-node Virtual SAN ROBO configurations. I am proud to announce that as of today this is fully supported with Virtual SAN 6.1! I do want to point out that all SMP-FT requirements still apply, which means 10GbE for SMP-FT! Nevertheless, if you have the need to provide that extra level of availability for certain workloads, now you can!

2 is the minimum number of hosts for VSAN if you ask me

Duncan Epping · Oct 1, 2015 ·

In 2013 I wrote an article about the minimum number of hosts for Virtual SAN. Since then this post has started living a life of its own; somehow people have misunderstood it and used/abused it in many shapes and forms. When I look at the size of a traditional (non-VSAN) cluster, the minimum size is 2. From an availability perspective I ask myself what risk I am willing to take. What does that mean?

In a previous life I did many projects for SMB customers. My SMB customers typically had somewhere in the range of 2-5 hosts, with the majority having 2-3. In many cases those with 2 hosts and those with 3 hosts were running roughly a similar number of virtual machines. The difference between the two situations, “2 hosts” versus “3 hosts”, was whether during maintenance (upgrading/updating) or a failure there was still the ability to restart the virtual machines after a secondary failure. Many customers decided to go with 2-node clusters, the key reason being price versus risk: during normal operations the risk is low, but the price of an additional host was relatively high.

Now compare this to Virtual SAN and you will see the same applies. With Virtual SAN we have a minimum of 3 hosts, although in a ROBO configuration you can have 2 with an external witness. This means that from a support perspective the bare minimum of dedicated physical hosts required for VSAN is 2. There you go: 2 is the bare minimum for ROBO, and for non-ROBO 3 is the minimum. Fully supported, and it offers all functionality, similar to a 4-host cluster.

Is having an extra host a good plan? Yes of course it is. HA / DRS / VSAN (and any other scale-out storage solution for that matter) will benefit from more hosts. You as a customer need to ask yourself what the risk is, and if the cost is justifiable.

PS1: A question just came in, so I want to make sure this is clear: even in a 2-host ROBO configuration you can do maintenance! A single copy of the data plus the witness remains available and will have quorum.

PS2: No, you cannot host your “witness” VM on the VSAN cluster itself. This is not supported, as the witness provides the quorum for the cluster and should be outside of the cluster to provide certainty about its state in the case of a failure.
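
For those wondering where the 3-host (or 2 hosts plus external witness) minimum comes from: with one failure to tolerate, each object is mirrored across two hosts and gets a witness component on a third, so you need 2n+1 hosts or fault domains to tolerate n failures and still keep a majority of the components available. A tiny, purely illustrative sketch of that arithmetic:

```python
# Illustrative sketch: minimum number of hosts (or fault domains) needed for a
# vSAN mirrored object to tolerate n host failures, based on the 2n+1 rule.

def minimum_hosts(failures_to_tolerate):
    """2 * FTT placements plus 1 extra component to keep majority quorum."""
    return 2 * failures_to_tolerate + 1

for ftt in range(1, 4):
    print("FTT=%d -> minimum of %d hosts/fault domains" % (ftt, minimum_hosts(ftt)))

# FTT=1 -> 3: two data copies plus one witness component. In a 2-node ROBO
# setup that third "host" is the external witness appliance, which is why the
# witness must live outside the cluster it is arbitrating for.
```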

