
Yellow Bricks

by Duncan Epping



Can you move a vSAN Stretched Cluster to a different vCenter Server?

Duncan Epping · Sep 17, 2019 ·

I noticed a question today on one of our internal social platforms: can you move a vSAN Stretched Cluster to a different vCenter Server? I can be short about it: I tested it, and the answer is yes! How do you do it? We have a great KB that documents the process for a normal vSAN cluster, and the same applies to a stretched cluster. When you add the hosts to your new vCenter Server and into your newly created cluster, vCenter will pull in the fault domain details (the stretched cluster configuration) from the hosts themselves, so when you go to the UI the Fault Domains will pop up again, as shown in the screenshot below.

What did I do? In short (but please use the KB for the exact steps):

  • Powered off all VMs
  • Placed the hosts into maintenance mode (do not forget about the Witness!)
  • Disconnected all hosts from the old vCenter Server (again, do not forget about the Witness)
  • Removed the hosts from the inventory
  • Connected the Witness to the new vCenter Server
  • Created a new cluster object on the new vCenter Server
  • Added the stretched cluster hosts to the new cluster on the new vCenter Server
  • Took the Witness out of maintenance mode first
  • Took the other hosts out of maintenance mode

That was it, pretty straightforward. Of course, you will need to make sure you have the storage policies in both locations, and you will also need to do some extra work if you use a VDS. Nevertheless, it works as you would expect it to!
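If you prefer to script these steps, below is a minimal, hypothetical pyVmomi (Python) sketch of the same flow. All hostnames, credentials, and cluster names are placeholders, the witness is assumed to be a standalone host under the datacenter, and the storage policy and VDS work still has to be done separately; the KB remains the authoritative procedure.

# Hypothetical pyVmomi sketch of the manual sequence above; all names and
# credentials are placeholders, validate in a lab before using anywhere real.
import ssl
from pyVim.connect import SmartConnect
from pyVim.task import WaitForTask
from pyVmomi import vim

def find_objs(si, vimtype):
    # Collect all inventory objects of the given managed object type.
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    objs = list(view.view)
    view.DestroyView()
    return objs

ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
old_vc = SmartConnect(host="old-vc.lab.local", user="administrator@vsphere.local",
                      pwd="***", sslContext=ctx)
new_vc = SmartConnect(host="new-vc.lab.local", user="administrator@vsphere.local",
                      pwd="***", sslContext=ctx)

DATA_HOSTS = ["esxi-a1.lab.local", "esxi-b1.lab.local"]
WITNESS = "witness.lab.local"

# Old vCenter: maintenance mode, disconnect, remove from inventory (witness included).
for host in [h for h in find_objs(old_vc, vim.HostSystem)
             if h.name in DATA_HOSTS + [WITNESS]]:
    WaitForTask(host.EnterMaintenanceMode_Task(timeout=0))
    WaitForTask(host.DisconnectHost_Task())
    WaitForTask(host.Destroy_Task())

# New vCenter: connect the Witness first, as a standalone host under the datacenter.
dc = find_objs(new_vc, vim.Datacenter)[0]  # assumes a single datacenter
w_spec = vim.host.ConnectSpec(hostName=WITNESS, userName="root",
                              password="***", force=True)
w_task = dc.hostFolder.AddStandaloneHost_Task(spec=w_spec, addConnected=True)
WaitForTask(w_task)
WaitForTask(w_task.info.result.host[0].ExitMaintenanceMode_Task(timeout=0))

# Then add the data hosts to the newly created cluster and exit maintenance mode.
cluster = next(c for c in find_objs(new_vc, vim.ClusterComputeResource)
               if c.name == "StretchedCluster-New")
for name in DATA_HOSTS:
    spec = vim.host.ConnectSpec(hostName=name, userName="root",
                                password="***", force=True)
    task = cluster.AddHost_Task(spec=spec, asConnected=True)
    WaitForTask(task)
    WaitForTask(task.info.result.ExitMaintenanceMode_Task(timeout=0))

Note that this simply automates the manual order above (Witness out of maintenance mode first), nothing more.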

Runecast Analyzer 3.0!

Duncan Epping · Aug 21, 2019 ·

This week I had a brief conversation with the folks from Runecast. I have been following them since day 1, and they have made a big impression on me from the start. During the conversation, they shared that Runecast Analyzer 3.0 was going to be announced today, and they gave a quick overview and demo of what would be included in 3.0. They also quickly went over the functionality added over the past year; features that were particularly well adopted by customers were the HIPAA and DISA STIG compliance checks, Horizon support, and the security auto-remediation capabilities. Another thing customers really appreciated was the upgradability simulation (a beta feature), where Runecast validates your environment against the HCL.

Stan (Runecast CEO) also mentioned that this year Runecast signed up a customer with over 10k hosts; as you can imagine, a lot of the work in the past 12 months focused on scalability and performance at that level of scale. But that is not what today’s announcement is about: today Runecast is announcing 3.0, which again brings some great enhancements to the platform. First of all, there is production-ready HCL analysis for vSphere and vSAN. On top of that, the ESXi Upgrade Simulation is now GA, and the log analysis has been improved. Runecast is also introducing a new H5 Client plug-in with new widgets and a dark theme! Just look at it below, you have got to love the dark theme!

But as I mentioned, there’s more to it than just the H5 Client plug-in; the HCL Analysis and the Upgrade Simulation are two key features if you ask me. During the demo, Stan showed me the screen below, and I think that by itself makes it worth testing out Runecast. It shows you in one overview whether your environment is compliant with the HCL, and if it is not, which combination of firmware and driver you should be using to make it compliant. In this example, the driver should be upgraded to 2.0.42. A very useful feature if you ask me. Note that this works for both vSphere and vSAN, and for all components needed to run either of them.

Just as useful is the Upgrade Simulation, by the way. Are you considering upgrading? Make sure to run this first so you know whether you will end up in a supported state! Some of you may say that VMware has similar capabilities in its products, but the Runecast appliance does not need to be connected to the internet at all times. You can update the dataset periodically and run these compliance and upgrade checks (or any of the other checks) offline. Especially for customers where internet access is challenging (dark sites), this is very helpful.

All in all, some very useful updates to an already very useful solution.

Site locality in a vSAN Stretched Cluster?

Duncan Epping · May 28, 2019 ·

On the community forums, a question was asked about the use of site locality in a vSAN Stretched Cluster. When you create a stretched cluster in vSAN, you can define within a policy how the data needs to be protected. Do you want to replicate across datacenters? Do you want to protect the site-local data with RAID-1 or RAID-5/6? All of these options are available within the UI.

What if you decide not to stretch your object across locations? Is it mandatory to specify which datacenter the object should reside in?

The answer is simple: no, it is not. The real question, of course, is: should you define the location? Most definitely! If you wonder how to do this, simply specify it within the policy you define for these objects as follows:

The above screenshot is taken from the H5 client; if you are still using the Web Client, it probably looks slightly different (thanks Seamus for the screenshot):

Why would you do this? Well, that is easy to explain. When the objects of a VM get provisioned, the placement decision is made per object. If the VM has multiple disks and you have not specified the location, you could end up in a situation where the disks of a single non-stretched VM are located in different datacenters. This is, first of all, terrible for performance, but perhaps more importantly, it would also impact availability when anything happens to the network between the datacenters. So when you run non-stretched VMs on a stretched cluster, make sure to also configure the location so that your VM and its objects align, as demonstrated in the diagram below.
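To make the per-object placement argument concrete, here is a small, purely illustrative Python snippet (not a vSAN API) that mimics what can happen when each object is placed independently versus pinned to one site:

# Illustrative only: mimics why per-object placement without site affinity
# can scatter the disks of a single non-stretched VM across datacenters.
import random

SITES = ["Preferred", "Secondary"]

def place_objects(disks, affinity=None):
    # Each object gets its own placement decision; without affinity any
    # object can land in either site, with affinity they are co-located.
    return {disk: (affinity or random.choice(SITES)) for disk in disks}

random.seed(42)
disks = ["Disk1.vmdk", "Disk2.vmdk", "Disk3.vmdk"]
print("No affinity:  ", place_objects(disks))               # disks may end up in different sites
print("With affinity:", place_objects(disks, "Preferred"))  # all disks in the Preferred site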


Impact of adding Persistent Memory / Optane Memory devices to your VM

Duncan Epping · May 22, 2019 ·

I had some questions about this in the past month, so I figured I would share some details. As persistent memory (Intel Optane memory devices, for instance) is getting more affordable and readily available, more and more customers are looking to use it. Some are already using it for very specific use cases, usually in situations where the OS and the application actually understand the type of device being presented. What does that mean? At VMworld 2018 there was a great session on this topic, and I captured the session in a post. Let me copy/paste the important bit for you, which discusses the different modes in which a persistent memory device can be presented to a VM.

  • vPMEMDisk = Exposed to the guest as a regular SCSI/NVMe device; VMDKs are stored on a PMEM datastore
  • vPMEM = Exposes the NVDIMM device in a “passthrough” manner; the guest can use it as a block device or a byte-addressable direct access device (DAX). This is the fastest mode, and most modern OSes support it
  • vPMEM-aware = Similar to the mode above, but the difference is that the application understands how to take advantage of vPMEM

But what is the problem with this? What is the impact? Well, when you expose a persistent memory device to a VM, that VM is currently not protected by vSphere HA, even though HA may be enabled on your cluster. Say what? Yes, indeed, the VM which has the PMEM device presented to it will be disabled for vSphere HA! I had to dig deep to find this documented anywhere; it is documented in this paper (page 47, at the bottom). So what works and what does not? Well, if I understand it correctly:

  • vSphere HA >> Not supported on vPMEM enabled VMs, regardless of the mode
  • vSphere DRS >> Does not consider vPMEM enabled VMs, regardless of the mode
  • Migration of VM with vPMEM / vPMEM-aware >> Only possible when migrating to a host which has PMEM
  • Migration of VM with vPMEMDisk >> Possible to a host without PMEM

Also note that, as a result (the data is not replicated or mirrored), a failure could potentially lead to loss of data. Although persistent memory is a great mechanism to increase performance, this is something that should be taken into consideration when you are thinking about introducing it into your environment.
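If you want to know which VMs in your environment this applies to, a quick inventory scan helps. Below is a minimal, hypothetical pyVmomi (Python) sketch that lists VMs with an NVDIMM device configured (i.e. the vPMEM / vPMEM-aware modes; vPMEMDisk VMs would need a datastore-type check instead). Connection details are placeholders.

# Hypothetical sketch: list VMs with an NVDIMM (vPMEM) device configured,
# since those VMs are currently not protected by vSphere HA.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
si = SmartConnect(host="vc.lab.local", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)

for vm in view.view:
    devices = vm.config.hardware.device if vm.config else []
    if any(isinstance(dev, vim.vm.device.VirtualNVDIMM) for dev in devices):
        print(vm.name, "has a vPMEM (NVDIMM) device and is excluded from vSphere HA")
view.DestroyView()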

Oh, and if you are wondering why people take these risks in terms of availability: Niels Hagoort just posted a blog with a pointer to a new PMEM performance paper, which is worth reading.


DRS Advanced Setting IsClusterManaged

Duncan Epping · May 7, 2019 ·

On Reddit, someone asked what the DRS advanced setting IsClusterManaged does and whether it is even legit. I can confirm it is legit: it is a setting which was introduced to prevent customers from disabling DRS while the cluster is managed by a solution like vCloud Director. Disabling DRS would delete the resource pools, which would be a very bad situation to find yourself in when you run vCloud Director, as it leans heavily on DRS resource pools. So if you see the advanced setting IsClusterManaged for DRS in your environment, just leave it alone; it is there for a reason. (Most likely because you are using something like vCloud Director…)
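If you want to verify whether the setting is present on your clusters, the DRS advanced options can be read through the vSphere API. Here is a minimal, hypothetical pyVmomi (Python) sketch; connection details are placeholders.

# Hypothetical sketch: print the DRS advanced options of every cluster,
# e.g. to check whether IsClusterManaged is set.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
si = SmartConnect(host="vc.lab.local", user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)

for cluster in view.view:
    # DRS advanced options live in the cluster's DRS configuration.
    for opt in cluster.configurationEx.drsConfig.option:
        print(cluster.name, ":", opt.key, "=", opt.value)
view.DestroyView()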

