A while ago I had the pleasure of joining David S. Linthicum from GigaOm on their Voices in Cloud Podcast. It is a 22-minute episode in which we discuss various VMware efforts in the cloud space, edge computing, and of course HCI. You can find the episode here; the page also has the full transcript for those who prefer to read instead of listening to a guy with a Dutch accent. It was a fun experience for sure, I always enjoy joining podcasts and talking tech… So if you run a podcast and are looking for a guest, don’t hesitate to reach out!
I noticed a question today on one of our internal social platforms: can you move a vSAN Stretched Cluster to a different vCenter Server? I can be short about it, I tested it and the answer is yes! How do you do it? Well, we have a great KB that documents the process for a normal vSAN cluster, and the same applies to a stretched cluster. When you add the hosts to your new vCenter Server and into your newly created cluster, it will pull in the fault domain details (the stretched cluster configuration) from the hosts themselves, so when you go to the UI the Fault Domains will pop up again, as shown in the screenshot below.
What did I do? In short (but please use the KB for the exact steps):
- Powered off all VMs
- Placed the hosts into maintenance mode (do not forget about the Witness!)
- Disconnected all hosts from the old vCenter Server (again, do not forget about the Witness)
- Removed the hosts from the inventory
- Connected the Witness to the new vCenter Server
- Created a new Cluster object on the new vCenter Server
- Added the stretched cluster hosts to the new cluster on the new vCenter Server
- Took the Witness out of Maintenance Mode first
- Took the other hosts out of maintenance
That was it, pretty straightforward. Of course, you will need to make sure you have the storage policies in place on both vCenter Servers, and you will also need to do some extra work if you use a VDS. Nevertheless, it is a straightforward process and works as you would expect it to!
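For those who like to script or document such a migration, the sequence above can be sketched as a small planner that makes the ordering explicit, in particular that the Witness comes out of maintenance mode first. This is a minimal illustration only; the host names and the helper itself are hypothetical, and the VMware KB remains the authoritative source for the exact steps.

```python
# Minimal sketch of the step ordering for moving a vSAN Stretched Cluster
# to a new vCenter Server. All names are hypothetical; follow the VMware KB
# for the authoritative procedure.

def migration_plan(data_hosts, witness):
    """Return the ordered steps; the Witness must exit maintenance mode first."""
    steps = ["Power off all VMs"]
    # Maintenance mode for the data hosts and the Witness
    steps += [f"Enter maintenance mode: {h}" for h in data_hosts + [witness]]
    # Disconnect and remove everything from the old vCenter (Witness included)
    steps += [f"Disconnect from old vCenter: {h}" for h in data_hosts + [witness]]
    steps += [f"Remove from inventory: {h}" for h in data_hosts + [witness]]
    # Rebuild on the new vCenter
    steps.append(f"Connect Witness to new vCenter: {witness}")
    steps.append("Create new cluster object on new vCenter")
    steps += [f"Add host to new cluster: {h}" for h in data_hosts]
    # Exit maintenance mode: Witness FIRST, then the data hosts
    steps.append(f"Exit maintenance mode: {witness}")
    steps += [f"Exit maintenance mode: {h}" for h in data_hosts]
    return steps

for step in migration_plan(["esxi-01", "esxi-02"], "witness-01"):
    print(step)
```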
At VMworld, various cool new technologies were previewed. In this series of articles, I will write about some of those previewed technologies. Unfortunately, I can’t cover them all as there are simply too many. This article is about enhancements that may be introduced to vMotion in the future; the session was HBI1421BU. For those who want to see the session, you can find it here. It was presented by Arunachalam Ramanathan and Sreekanth Setty. Please note that this is a summary of a session discussing a Technical Preview: this feature/product may never be released, this preview does not represent a commitment of any kind, and this feature (or its functionality) is subject to change. Now let’s dive into it, what can you expect for vMotion in the future?
The session starts with a brief history of vMotion and how today we are capable of vMotioning VMs with 128 vCPUs and 6 TB of memory. The expectation, though, is that vSphere will in the future support 768 vCPUs and 24 TB of memory. A crazy configuration if you ask me, that is a proper Monster VM.
This week I had a brief conversation with the folks from Runecast. I have been following them since day one, and they have made a big impression on me from the start. During the conversation the Runecast folks shared with me that Runecast Analyzer 3.0 was going to be announced today, and they gave a quick overview and demo of what would be included in 3.0. They also quickly went over the functionality that was added over the past year; features that were really well adopted by customers were the HIPAA and DISA-STIG compliance checks, as well as Horizon support and security auto-remediation capabilities. Another thing customers really appreciated was the upgradability simulation (a beta feature), where Runecast validates your environment against the HCL.
Stan (Runecast CEO) also mentioned that this year Runecast signed up a customer with over 10k hosts; as you can imagine, a lot of the work in the past 12 months was focused on scalability and performance at that level of scale. But that is not what today’s announcement is about: today Runecast is announcing 3.0, which again brings some great enhancements to the platform. First of all, production-ready HCL Analysis for vSphere and vSAN. On top of that, the ESXi Upgrade Simulation is now GA, and log analysis has been improved. Runecast is also introducing a new H5 Client plug-in with new widgets and a dark theme! Just look at it below, you have got to love the dark theme!
But as I mentioned, there’s more to it than just the H5 Client plug-in; the HCL Analysis and the Upgrade Simulation are two key features if you ask me. During the demo, Stan showed me the screen below, and I think that by itself makes it worth testing out Runecast. It shows you in one overview whether your environment is compliant with the HCL and, if it is not, which combination of firmware and driver you should be using to make it compliant. In this example, the driver should be upgraded to 2.0.42. A very useful feature if you ask me. Note that this works for both vSphere and vSAN and all components needed to run either of them.
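To make concrete what such an HCL check boils down to, here is a hedged sketch of a driver version comparison. This is not Runecast’s actual implementation; the helper names and the version values are assumptions for illustration only.

```python
# Illustrative sketch of an HCL-style driver check: compare the installed
# driver version against the recommended one and suggest an upgrade.
# Not Runecast's actual logic; names and data are hypothetical.

def parse_version(v):
    """Turn a dotted version string like '2.0.42' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def hcl_check(installed, recommended):
    """Return a compliance verdict for a single driver."""
    if parse_version(installed) >= parse_version(recommended):
        return "compliant"
    return f"not compliant: upgrade driver to {recommended}"

print(hcl_check("1.2.10", "2.0.42"))  # not compliant: upgrade driver to 2.0.42
print(hcl_check("2.0.42", "2.0.42"))  # compliant
```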
Just as useful is the Upgrade Simulation, by the way. Are you considering upgrading? Make sure to run this first so you know whether you will end up in a supported state! Some of you may say that VMware has similar capabilities in their products, but the Runecast appliance doesn’t need to be connected to the internet at all times. You can update the dataset periodically and run these compliance and upgrade checks (or any of the other checks) regularly while offline. Especially for customers where internet access is challenging (dark sites), this is very helpful.
All in all, some very useful updates to an already very useful solution.
A question just came in, and I figured other people may have the same question, so I would share it. The question was whether a vSAN IO limit would impact resync traffic or, for instance, Storage vMotion (SvMotion). In this case the customer defines limits within each policy to ensure VMs do not interfere with other VMs or excessively use IO resources. This can be useful especially in cloud environments, or when running production and test/dev on the same cluster. The concern, of course, was whether this limit would impact recovery times after a failure, because as you can imagine, a limit of 50 IOPS would be devastating when a VM (or multiple VMs) needs to have objects resynced.
The answer is simple: no, the IO limit specified within a policy does not impact resync traffic (or SvMotion, for that matter). It only applies to Guest IO to a VMDK, namespace, or swap object, which means it is safe to set limits as far as recovery times are concerned.
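The behavior can be summarized in a few lines of code. This is purely an illustration of the rule described above; the function and the traffic-class names are hypothetical, not a vSAN API.

```python
# Sketch of which IO classes a vSAN policy IOPS limit applies to.
# Guest IO to VMDK, namespace, and swap objects is limited; resync and
# Storage vMotion traffic are not. Class names are illustrative only.

LIMITED_CLASSES = {"guest-vmdk", "guest-namespace", "guest-swap"}

def effective_limit(io_class, policy_limit_iops):
    """Return the IOPS limit that applies to this IO class (None = unlimited)."""
    if io_class in LIMITED_CLASSES:
        return policy_limit_iops
    return None  # resync, svmotion, etc. are not throttled by the policy limit

print(effective_limit("guest-vmdk", 50))  # 50
print(effective_limit("resync", 50))      # None
```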