Recently, I read a fantastic LinkedIn post by Francisco Perez van der Oord, the founder of ITQ, in which he explained why he believes in Broadcom’s strategic direction and the VMware portfolio. It was an interesting read, and for me, a great reason to invite one of our most valued partners in EMEA to the show. I want to thank Francisco for taking the time to sit down, as I know he has a crazy schedule. Listen via the embedded player below, or via Spotify (bit.ly/3WxwuV9) or Apple (bit.ly/43FWW2G)!
#105 – How do I enable vSAN ESA Global Deduplication in 9.0?
For this episode I invited our roving reporter on the scene, Pete Koehler! Pete goes over all the benefits, requirements, and limitations of vSAN ESA Global Deduplication. This feature was released with 9.0.1 and is available upon request, as discussed in this blog post. If you want to get access, make sure to sign up using the form in the blog post.
Just as a summary, right now vSAN ESA Global Deduplication requires:
- Version 9.0.1
- 25GbE (or higher) networking
- Telemetry needs to be enabled (CEIP)
There are a few limitations as well (a small sketch that checks all of these prerequisites follows the list):
- Minimum of 3 hosts and maximum of 16 hosts in a cluster
- Does not support the use of Stretched Cluster functionality
- Cannot be combined with data-at-rest encryption
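For those who prefer a checklist they can run, here is a minimal sketch that simply encodes the bullets above. It is purely illustrative: the `ClusterConfig` fields and the `global_dedup_blockers` name are made up for this post, not part of any VMware SDK or official tooling.

```python
# Purely illustrative sketch: encodes the requirements and limitations listed
# above. ClusterConfig and global_dedup_blockers are made-up names for this
# post, not part of any VMware SDK or official tooling.
from dataclasses import dataclass


def parse_version(v: str) -> tuple[int, ...]:
    """Turn '9.0.1' into (9, 0, 1) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))


@dataclass
class ClusterConfig:
    vsan_version: str          # e.g. "9.0.1"
    nic_speed_gbe: int         # e.g. 25 or 100
    ceip_enabled: bool         # telemetry (CEIP) enabled?
    host_count: int            # number of hosts in the cluster
    stretched_cluster: bool    # stretched cluster configured?
    data_at_rest_encryption: bool


def global_dedup_blockers(cfg: ClusterConfig) -> list[str]:
    """Return the reasons (if any) why Global Deduplication cannot be enabled."""
    blockers = []
    if parse_version(cfg.vsan_version) < (9, 0, 1):
        blockers.append("Requires vSAN ESA 9.0.1 or later")
    if cfg.nic_speed_gbe < 25:
        blockers.append("Requires 25GbE (or higher) networking")
    if not cfg.ceip_enabled:
        blockers.append("Telemetry (CEIP) needs to be enabled")
    if not 3 <= cfg.host_count <= 16:
        blockers.append("Requires a minimum of 3 and a maximum of 16 hosts")
    if cfg.stretched_cluster:
        blockers.append("Stretched Cluster functionality is not supported")
    if cfg.data_at_rest_encryption:
        blockers.append("Cannot be combined with data-at-rest encryption")
    return blockers


# Example: a 4-host, 25GbE cluster with CEIP enabled and no conflicting features.
print(global_dedup_blockers(
    ClusterConfig("9.0.1", 25, True, 4, False, False)))  # -> []
```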
Listen via Spotify (bit.ly/4oaCbnS), Apple (bit.ly/4np3slG), or the embedded player below:
#104 – Exploring recent Ransomware Recovery and Data Recovery announcements with Jatin Jindal
At VMware Explore it was obvious that the interest in VMware’s on-premises Ransomware Recovery solution is huge! Hence, I asked Jatin Jindal to join the show to go over what the VMware Ransomware Recovery solution entails and what the differences are between a ransomware recovery process and a disaster recovery scenario, and to talk about various roadmap items like tag-based selection, seeding, QLC support, and vSAN Cyber ReadyNodes. Interested in participating in the upcoming Storage, Data Protection, and Data Beta Programs? Sign up now by filling out this form: https://docs.google.com/forms/d/e/1FAIpQLSeXBC6_oAnkS8vCFztuLQFHx0qZ5xxJSmxbMkyPBvDFM0lHLg/viewform
You can listen to the episode via Spotify (bit.ly/3IWQCwz), Apple (bit.ly/4o6YVoG), or via the embedded player below!
vSAN Stretched Cluster vs Fault Domains in a “campus” setting?
I got this question internally recently: Should we create a vSAN Stretched Cluster configuration or a vSAN Fault Domains configuration when we have multiple datacenters in close proximity on our campus? In this case, we are talking about less than 1ms RTT latency between buildings, maybe a few hundred meters at most. I think it is a very valid question, and it kind of depends on what you are looking to get out of the infrastructure. I wrote down the pros and cons and wanted to share them with the rest of the world as well, as they may be useful for some of you out there; after the two lists you will also find a small sketch that checks the hard constraints for each option. If anyone has additional pros and cons, feel free to share those in the comments!
vSAN Stretched Clusters:
- Pro: You can replicate across fault domains AND additionally protect data within a fault domain with RAID-1/5/6 if required.
- Pro: You can decide per VM whether it should be stretched across fault domains or only protected within a single fault domain/site
- Pro: Requires less than 5ms RTT latency, which is easily achievable in this scenario
- Con/Pro: You probably also need to think about DRS/HA groups (VM-to-Host rules)
- Con: From an operational perspective, it also introduces a witness host and the notion of sites, which may complicate things, and at the very least requires a bit more thinking
- Con: Witness needs to be hosted somewhere
- Con: Limited to 3 Fault Domains (2x data + 1x witness)
- Con: Limited to a 20+20+1 configuration
vSAN Fault Domains:
- Pro: Usually no real considerations around VM-to-Host rules, although you can still use them to ensure certain VMs are spread across buildings
- Pro: No Witness Appliance to manage, update or upgrade. No overhead of running a witness somewhere
- Pro: No design considerations around a “dedicated” witness site and “data sites”; each site has the same function
- Pro: Can also be used with more than 3 Fault Domains or Datacenters, so could even be 6 Fault Domains, for instance
- Pro: Theoretically can go up to 64 hosts
- Con: No ability to additionally protect data within a fault domain
- Con: No ability to specify that you don’t want to replicate VMs across Fault Domains
- Con/Pro: Requires sub-1ms RTT latency at all times, which is low, but usually achievable in a campus cluster
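To make the hard constraints in both lists a bit more tangible, here is a minimal sketch that checks which of the two options is even feasible for a given campus. It is purely illustrative: the `feasible_campus_options` function and its parameters are invented for this post, and the soft pros and cons (witness overhead, per-VM placement, operational complexity, and so on) remain a judgment call.

```python
# Purely illustrative sketch: checks only the hard constraints from the two
# lists above. feasible_campus_options and its parameters are made-up names
# for this post, not part of any VMware SDK or official sizing tool.
def feasible_campus_options(rtt_ms: float, buildings: int,
                            hosts: int, can_host_witness: bool) -> list[str]:
    options = []
    # Stretched Cluster: <5ms RTT, exactly 2 data fault domains plus a witness
    # hosted somewhere, and at most a 20+20+1 configuration.
    if rtt_ms < 5.0 and buildings == 2 and can_host_witness and hosts <= 40:
        options.append("vSAN Stretched Cluster")
    # Fault Domains: sub-1ms RTT at all times, typically at least 3 fault
    # domains, and theoretically up to 64 hosts in the cluster.
    if rtt_ms < 1.0 and buildings >= 3 and hosts <= 64:
        options.append("vSAN Fault Domains")
    return options


# Example: three buildings, 0.5ms RTT, 12 hosts, witness could run somewhere.
print(feasible_campus_options(rtt_ms=0.5, buildings=3, hosts=12,
                              can_host_witness=True))  # -> ["vSAN Fault Domains"]
```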
#103 – The performance impact of Memory Tiering featuring Qasim Ali and Todd Muirhead
Over the last few months, I’ve had many discussions about Memory Tiering. When I saw a brand-new performance white paper being released, I knew it was time to invite two of the authors to the podcast. Qasim Ali and Todd Muirhead go over the ins and outs of Memory Tiering: they discuss the basics, but also explain in depth what the potential performance impact is of enabling this feature in your environment. You can listen on Apple Podcasts, Spotify, the embedded player below, or any podcast app of your choice!
If you’d like to know more, visit the following links!