This is the moment you have all been waiting for: vSphere 8.0 was just announced. There are some great new features and capabilities in this release, and in this blog post I am going to discuss some of them.
First of all, vSphere Distributed Services Engine. What is this? Well, basically it is Project Monterey. For those who have no idea what Project Monterey is: it is VMware’s story around SmartNICs, or Data Processing Units (DPUs) as they are typically called. These devices are basically NICs on steroids: NICs with a lot more CPU power, memory capacity, and bandwidth/throughput. They not only enable you to push more packets, and do it faster, they also provide the ability to run services directly on the card.
Services? Yes, with these devices you can, for instance, offload NSX services from the CPU to the DPU. This not only brings NSX to the layer where it belongs, the NIC, it also frees up x86 cycles. Note that in vSphere 8 this means an additional instance of ESXi is installed on the DPU itself. This instance is managed by vCenter Server, just like your normal hosts, and it is updated/upgraded using vLCM. In other words, from an operational perspective most people will feel familiar with it quickly. Having said that, in this first release the focus is very much on acceleration, not so much on services.
The next major item is Tanzu Kubernetes Grid 2.0. I am not the expert on this, Cormac Hogan is, so I want to point everyone to his blog. For me, the major feature this version brings is probably Workload Availability Zones. It is a feature that Frank, Cormac, and I were involved in during the design discussions a while back, and it is great to finally see it released. Workload Availability Zones basically enable you to deploy a Tanzu Kubernetes Cluster across vSphere clusters. As you can imagine, this enhances the resiliency of your deployment; the diagram below demonstrates this.
Various things were also introduced for lifecycle management. I already mentioned that vLCM now supports DPUs, which is great as it will make managing these new entities in your environment so much easier. vLCM can now also manage standalone hosts via the API, and it can remediate hosts that were placed into maintenance mode manually. Why is this important? Well, it will help customers who want to remediate hosts in parallel to decrease the maintenance window. For vCenter Server lifecycle management there also was a major improvement: vSphere 8.0 now has the ability to store the vCenter Server cluster state in a distributed key-value store running on the ESXi hosts in the cluster. Why would it do this? Well, it basically provides the ability to roll back to the last known state, not just the state captured by the last backup. In other words, if you added a host to the cluster after the last backup, this is now stored in the distributed key-value store. When a backup is restored after a failure, vCenter and the distributed key-value store will sync so that the last known state is restored.
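To illustrate the idea behind that restore/sync behavior, here is a minimal conceptual sketch. To be clear: this is not the actual implementation, and all names and data structures here are hypothetical; it only shows how entries recorded after the backup can survive a restore.

```python
# Conceptual sketch: reconciling a restored vCenter backup with the
# distributed key-value store (DKVS) kept on the ESXi hosts.
# All names and data here are hypothetical, for illustration only.

def reconcile(backup_state: dict, dkvs_state: dict) -> dict:
    """Start from the restored backup, then replay any cluster changes
    that the DKVS recorded after the backup was taken."""
    restored = dict(backup_state)
    for key, entry in dkvs_state.items():
        # A DKVS entry wins when it is newer than what the backup holds.
        if key not in restored or entry["ts"] > restored[key]["ts"]:
            restored[key] = entry
    return restored

# Backup taken before host esx-03 joined the cluster.
backup = {
    "esx-01": {"ts": 100, "member": True},
    "esx-02": {"ts": 100, "member": True},
}
# The DKVS also knows about the host added after the backup.
dkvs = {
    "esx-01": {"ts": 100, "member": True},
    "esx-02": {"ts": 100, "member": True},
    "esx-03": {"ts": 150, "member": True},  # added post-backup
}

state = reconcile(backup, dkvs)
print(sorted(state))  # esx-03 survives the restore
```

The point of the sketch: the host added after the backup (esx-03) is not lost when the backup is restored, because the key-value store still knows about it.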
The last lifecycle-management-related feature I want to discuss is vSphere Configuration Profiles. vSphere Configuration Profiles is a feature that is released as a Tech Preview and over time will replace Host Profiles. It introduces the “desired-state” model to host configuration, just like vLCM did for host updates and upgrades. You define the desired state, you attach it to a cluster, and it will be applied. Of course, the current state and desired state will be monitored to prevent configuration drift from occurring. If you ask me, this is long overdue, and I hope many of you are willing to test this feature and provide feedback so that it can be officially supported soon.
For AI and ML workloads, a feature is introduced which enables you to create Device Groups. What does this mean? It basically enables you to logically link two devices together (NIC and GPU, or GPU and GPU). This is typically done with devices that are either directly linked (GPUs, for instance, through something like NVIDIA NVLink) or tightly coupled, such as a GPU and a NIC that sit on the same PCIe switch connected to the same CPU. Bundling these and exposing them as a pair to a VM (through Assignable Hardware) with an AI/ML workload optimizes the communication/IO, as you avoid the hop across the interconnect, as shown in the diagram below.
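To make the "same PCIe switch" logic concrete, here is a toy sketch that pairs each GPU with the NIC behind the same PCIe switch. The device inventory and naming are entirely hypothetical; this only illustrates the affinity idea, not how vSphere actually builds Device Groups.

```python
# Conceptual sketch of why Device Groups help: pair a GPU with the NIC
# that sits behind the same PCIe switch, so I/O avoids the hop across
# the CPU interconnect. The inventory below is hypothetical.
from collections import defaultdict

def pair_devices(devices):
    """Group devices by PCIe switch, then pair GPUs with NICs that
    share that switch."""
    by_switch = defaultdict(lambda: {"gpu": [], "nic": []})
    for dev in devices:
        by_switch[dev["switch"]][dev["kind"]].append(dev["name"])
    pairs = []
    for groups in by_switch.values():
        for gpu, nic in zip(groups["gpu"], groups["nic"]):
            pairs.append((gpu, nic))
    return pairs

inventory = [
    {"name": "gpu0", "kind": "gpu", "switch": "pcie-sw0"},
    {"name": "nic0", "kind": "nic", "switch": "pcie-sw0"},
    {"name": "gpu1", "kind": "gpu", "switch": "pcie-sw1"},
    {"name": "nic1", "kind": "nic", "switch": "pcie-sw1"},
]
print(pair_devices(inventory))  # each GPU paired with its local NIC
```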
On top of the above fireworks, there are also many smaller enhancements. Virtual hardware version 20, for instance, is introduced, and this enables you to manage your vNUMA configuration via the UI instead of via advanced settings. Also, full support for Windows 11 at scale is introduced: the required vTPM device can now be automatically replaced when a Windows 11 VM is cloned, ensuring that each VM has a unique vTPM device.
There’s more, and I would like to encourage you to read the material on core.vmware.com/vsphere, and for TKG read Cormac’s material! I also highly recommend this post about what is new for core storage.
DP says
Real question – where will vSphere 8 be installed? My own experience is seeing the datacenter stagnate or shrink. Really curious what others are seeing?
Duncan Epping says
Considering VMware has 400,000+ customers and many very large cloud partners like Google, AWS, Microsoft, and Oracle (plus 5k+ service providers), I think it will be installed in plenty of places 🙂
Jay S says
Many government and higher education customers still run on-premises workloads. vSphere 8 will be a nice upgrade for customers looking to advance more into AI/ML and edge workloads.
Jason Kirk says
I am a tech consultant who works with some very large enterprises. I have NEVER seen cloud adoption shrink data center footprint. In most cases it is still growing contrary to customer plans.
VirtualVaibhav says
is vSphere 8 available for download?
Duncan Epping says
No, it was announced… GA will follow soon.
Chris says
Is there going to be another Technical Deep Dive book for vSphere 8?
Duncan Epping says
Undecided at the moment.
Chris says
Thx Duncan. IMO, considering how vSphere 8 changes how the CPU handles cycles by offloading to the DPU, I’d like to fully understand the changes. We have over 5,000 ESXi hosts and want to fully understand it before if/when we upgrade.
Duncan Epping says
Understood, but keep in mind, it is not just offloading “random” cycles; it is very specific. The first iteration will just focus on offloading the network stack. And even if you upgrade to 8.x, you will need to have supported DPUs, and you will need to configure the offload; it won’t happen automatically.
Chris says
Any news on Deep dive book for vsphere 8?
Duncan Epping says
No news, not sure if we are going to do one yet to be honest. If we do, it would probably be with U1 or U2.
Dave says
It’ll be interesting to see if ESXi v8 will require a DPU to run/be installed, forcing a host hardware retrofit or refresh in the datacenters. Or if you can run on existing hardware (without a DPU) in some reduced/legacy mode.
Duncan Epping says
It won’t require a DPU, that is optional.
sts098 says
Can vSphere 8 manage ESXi 7 hosts? ESXi6.7 Hosts?
Duncan Epping says
I have not seen those details being shared, but afaik vCenter 8 will be able to manage 7.x hosts (it works in our lab). I don’t have any 6.7 hosts so I can’t test that, unfortunately.
Alan says
I just went to upgrade my 6.7 licences to 7 as part of an upgrade due to the expiration of support on 6.7 and found I can also upgrade to vCenter 8. Oh dear, what to do!!
Olivier Blondeaux says
Duncan, I have a question about the “adaptive” RAID-5. Basically, it is a good idea. But in the particular case of 5 ESXi hosts, why not keep the 3+1 scheme? This would benefit from a disk space consumption of 1.33x instead of the 1.5x of the new 2+1 scheme.
Duncan Epping says
I can ask the team what the reasoning is, I wasn’t part of those design discussions/decisions.
Olivier Blondeaux says
Thanks, I am very interested! In fact, I have another question: with ESA, we have no dedup for the moment… but in a future release?
Duncan Epping says
That is something that is being considered for a future release indeed.
Dennis says
Duncan, I see that you can’t use vSphere Configuration Profiles if you use the vSphere Distributed Switch for your cluster. I think a lot of people are using distributed switches (at least I am) and would like to use Configuration Profiles. Can you tell when this will be solved?
Duncan Epping says
The feature is still a “tech preview,” which basically means it is not production-ready, but you can get comfortable with it in lab environments etc. I have not seen any public statements on when it will be GA and whether that will include support for the vDS, so unfortunately I can’t share anything at this point.