
Yellow Bricks

by Duncan Epping



Can I still provision VMs when a vSAN Stretched Cluster site has failed? Part II

Duncan Epping · Dec 18, 2019 ·

Three years ago I wrote the following post: Can I still provision VMs when a VSAN Stretched Cluster site has failed? Last week I received a question on this subject, and although officially I am not supposed to work on vSAN in the upcoming three months, I figured I could easily test this in the evening within 30 minutes. The question was simple: in my blog I described the failure of the Witness Host, but what if a single host fails in one of the two “data” fault domains? And what if I want to create a snapshot, for instance, will this still work?

So here’s what I tested:

  • vSAN Stretched Cluster
  • 4+4+1 configuration
    • Meaning, 4 hosts in each “data site” plus a witness host, for a total of 9 hosts in my vSAN cluster
  • Create a VM with cross-site protection and RAID-5 within the location

So I first failed a host in one of the two data sites. With that host down, the following is what happens when I create a VM with RAID-1 across sites and RAID-5 within a site:

  • Without “Force Provisioning” enabled the creation of the VM fails
  • When “Force Provisioning” is enabled the creation of the VM succeeds; the VM is created as a RAID-0 within a single location
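The decision logic described above can be sketched as a toy model. To be clear, this is purely illustrative and not vSAN's actual placement engine; the host counts per RAID level are the standard vSAN minimums:

```python
# Illustrative sketch only: a toy model of the provisioning behavior observed
# above, NOT vSAN's real placement engine. RAID-5 within a site needs 4 healthy
# hosts; with one host down, provisioning fails unless Force Provisioning is
# enabled, in which case vSAN falls back to a reduced-availability RAID-0.

# Minimum hosts per RAID level (RAID-1 = 2 replicas + witness component)
HOSTS_REQUIRED = {"RAID-0": 1, "RAID-1": 3, "RAID-5": 4}

def provision(raid_level, healthy_hosts_in_site, force_provisioning=False):
    """Return the layout the VM object ends up with, or None if creation fails."""
    if healthy_hosts_in_site >= HOSTS_REQUIRED[raid_level]:
        return raid_level
    if force_provisioning:
        # Policy cannot be met: fall back to a RAID-0 object.
        return "RAID-0"
    return None  # VM creation fails

# One host failed in a 4-host data site:
print(provision("RAID-5", 3))                           # None -> creation fails
print(provision("RAID-5", 3, force_provisioning=True))  # RAID-0 fallback
```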

Okay, so this sounds similar to the scenario I originally described in my 2016 blog post, where I failed the witness: vSAN will create a RAID-0 configuration for the VM. When the host returns for duty, the RAID-1 across locations and RAID-5 within each location are then automatically created. On top of that, you can snapshot VMs in this scenario; the snapshots will also be created as RAID-0. One thing to keep in mind: I would recommend removing “force provisioning” from the policy after the failure has been resolved! Below is a screenshot of the component layout of this scenario, by the way.

I also retried the witness host down scenario, and in that case you do not need to use the “force provisioning” option. One more thing to note: the above will only happen when you request a RAID configuration that is impossible to create as a result of the failure. If 1 host fails in a 4+4+1 stretched cluster and you want to create a RAID-1 across sites and a RAID-1 within sites, the VM would be created with the requested RAID configuration, as demonstrated in the screenshot below.
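To make the difference between the two policies concrete, here is a small illustrative check (again, a toy model of the behavior, not vSAN code): RAID-1 within a site needs 3 hosts (two replicas plus a witness component), while RAID-5 needs 4, which is why only the RAID-5 policy becomes impossible after a single host failure in a 4-host site.

```python
# Toy model, not vSAN code: why a RAID-1 within-site policy still provisions
# normally after one host failure in a 4+4+1 stretched cluster, while RAID-5
# does not. RAID-1 needs 3 hosts per site, RAID-5 needs 4.

MIN_HOSTS = {"RAID-1": 3, "RAID-5": 4}

def policy_satisfiable(secondary_raid, hosts_per_site, failed_hosts=0):
    """True if the within-site RAID level can still be placed after failures."""
    return hosts_per_site - failed_hosts >= MIN_HOSTS[secondary_raid]

print(policy_satisfiable("RAID-1", 4, failed_hosts=1))  # True  -> provisions as requested
print(policy_satisfiable("RAID-5", 4, failed_hosts=1))  # False -> needs Force Provisioning
```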

Joined GigaOm’s David S. Linthicum on a podcast about cloud, HCI and Edge.

Duncan Epping · Oct 14, 2019 ·

A while ago I had the pleasure of joining David S. Linthicum from GigaOm on their Voices in Cloud podcast. It is a 22-minute episode in which we discuss various VMware efforts in the cloud space, edge computing, and of course HCI. You can find the episode here, where they also have the full transcript for those who prefer to read instead of listening to a guy with a Dutch accent. It was a fun experience for sure; I always enjoy joining podcasts and talking tech… So if you run a podcast and are looking for a guest, don’t hesitate to reach out!

Of course you can also find Voices in Cloud on iTunes, Google Play, Spotify, Stitcher, and other platforms.

Can you move a vSAN Stretched Cluster to a different vCenter Server?

Duncan Epping · Sep 17, 2019 ·

I noticed a question today on one of our internal social platforms: can you move a vSAN Stretched Cluster to a different vCenter Server? I can be short, I tested it and the answer is yes! How do you do it? Well, we have a great KB that documents the process for a normal vSAN Cluster, and the same applies to a stretched cluster. When you add the hosts to your new vCenter Server and into your newly created cluster, it will pull in the fault domain details (stretched cluster configuration details) from the hosts themselves, so when you go to the UI the Fault Domains will pop up again, as shown in the screenshot below.

What did I do? In short (but please use the KB for the exact steps):

  • Powered off all VMs
  • Placed the hosts into maintenance mode (do not forget about the Witness!)
  • Disconnected all hosts from the old vCenter Server, again, do not forget about the witness
  • Removed the hosts from the inventory
  • Connected the Witness to the new vCenter Server
  • Created a new Cluster object on the new vCenter Server
  • Added the stretched cluster hosts to the new cluster on the new vCenter Server
  • Took the Witness out of Maintenance Mode first
  • Took the other hosts out of maintenance

That was it. Of course, you will need to make sure you have the storage policies in both locations, and you will also need to do some extra work if you use a VDS. Nevertheless, it is pretty straightforward and works as you would expect!
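For reference, the order of the steps above can be sketched as follows. This is only a sketch with made-up step labels; use the KB for the real procedure. The crucial detail it encodes is that the witness exits maintenance mode before the data hosts:

```python
# Sketch of the migration order described above (hypothetical step labels,
# not an automation script; follow the VMware KB for the real procedure).
# The key ordering constraint: the witness exits maintenance mode first.

def migration_steps():
    """Return the ordered steps for moving a stretched cluster to a new vCenter."""
    return [
        "power off all VMs",
        "enter maintenance mode on all hosts (including the witness)",
        "disconnect all hosts from the old vCenter (including the witness)",
        "remove the hosts from the old inventory",
        "connect the witness to the new vCenter",
        "create a new cluster object on the new vCenter",
        "add the stretched cluster hosts to the new cluster",
        "exit maintenance mode: witness first",
        "exit maintenance mode: data hosts",
    ]

steps = migration_steps()
# The witness must come out of maintenance mode before the data hosts:
assert steps.index("exit maintenance mode: witness first") < \
       steps.index("exit maintenance mode: data hosts")
```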

VMworld Reveals: vMotion innovations

Duncan Epping · Sep 3, 2019 ·

At VMworld, various cool new technologies were previewed. In this series of articles, I will write about some of those previewed technologies. Unfortunately, I can’t cover them all as there are simply too many. This article is about enhancements that will be introduced to vMotion in the future; the session was HBI1421BU. For those who want to see the session, you can find it here. This session was presented by Arunachalam Ramanathan and Sreekanth Setty. Please note that this is a summary of a session discussing a Technical Preview; this feature/product may never be released, this preview does not represent a commitment of any kind, and this feature (or its functionality) is subject to change. Now let’s dive into it: what can you expect for vMotion in the future?

The session starts with a brief history of vMotion and how we are capable today of vMotioning VMs with 128 vCPUs and 6 TB of memory. The expectation, though, is that vSphere will in the future support 768 vCPUs and 24 TB of memory. A crazy configuration if you ask me; that is a proper Monster VM.


Runecast Analyzer 3.0!

Duncan Epping · Aug 21, 2019 ·

This week I had a brief conversation with the folks from Runecast. I have been following them since day 1, and they have made a big impression on me from the start. During the conversation the Runecast folks shared with me that Runecast Analyzer 3.0 was going to be announced today, and they gave a quick overview and demo of what would be announced and included in 3.0. They also quickly went over the functionality that was added over the past year; features that were really well adopted by customers were the HIPAA and DISA STIG compliance checks, Horizon support, and security auto-remediation capabilities. Another thing that customers really appreciated was the upgradability simulation (a beta feature), where Runecast validates your environment against the HCL.

Stan (Runecast CEO) also mentioned that this year Runecast signed up a customer with over 10k hosts; as you can imagine, a lot of the work in the past 12 months was focused on performance at that level of scale. But that is not what today’s announcement is about: today Runecast is announcing 3.0, which again contains some great enhancements to the platform. First of all, production-ready HCL Analysis for vSphere and vSAN. On top of that, the ESXi Upgrade Simulation is now GA, and the log analysis has been improved. Runecast is also introducing a new H5 Client plug-in with new widgets and a dark theme! Just look at it below, you have got to love the dark theme!

But as I mentioned, there’s more to it than just the H5 Client plug-in; the HCL Analysis and the Upgrade Simulation are two key features if you ask me. During the demo, Stan showed me the below screen, and I think that by itself makes it worth testing out Runecast. It simply shows you in one overview whether your environment is compliant with the HCL, and if it is not, which combination of firmware and driver you should be using to make it compliant. In this example, the driver should be upgraded to 2.0.42. A very useful feature if you ask me. Note that this works for both vSphere and vSAN and all components needed to run either of these.
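Conceptually, this kind of HCL check boils down to looking up the detected device and firmware pair in the compatibility list. Below is a hypothetical sketch of that idea, not Runecast's implementation; the device name and firmware version are made up, and only the 2.0.42 driver version comes from the example above:

```python
# Hypothetical sketch of what an HCL compliance check does conceptually
# (NOT Runecast's implementation). The device name and firmware version
# below are made up for illustration; 2.0.42 is the recommended driver
# version from the example in the text.

# Toy HCL table: (device, firmware) -> list of supported driver versions
HCL = {("lsi_mr3", "4.650.00-7176"): ["2.0.42"]}

def check_compliance(device, firmware, installed_driver):
    """Return (status, recommended_driver) for the installed combination."""
    supported = HCL.get((device, firmware))
    if supported is None:
        return ("unknown", None)          # combination not on the list at all
    if installed_driver in supported:
        return ("compliant", None)
    return ("non-compliant", supported[0])  # suggest a supported driver version

print(check_compliance("lsi_mr3", "4.650.00-7176", "1.9.10"))
# ('non-compliant', '2.0.42')
```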

Just as useful is the Upgrade Simulation, by the way. Are you considering upgrading? Make sure to run this first so you know whether you will end up in a supported state! Some of you may say that VMware has similar capabilities in their products, but the Runecast appliance doesn’t need to be connected to the internet at all times: you can regularly update the dataset and run these compliance and upgrade checks (or any of the other checks) offline. Especially for customers where internet access is challenging (dark sites), this is very helpful.

All in all, some very useful updates to an already very useful solution.


About the Author

Duncan Epping is a Chief Technologist and Distinguished Engineering Architect at Broadcom. Besides writing on Yellow-Bricks, Duncan is the co-author of the vSAN Deep Dive and the vSphere Clustering Deep Dive book series. Duncan is also the host of the Unexplored Territory Podcast.
