I just wanted to do a short post before the weekend: vSAN 6.7 U3 introduces a great capacity overview screen. It shows a couple of things. First of all, it provides a simple bar that shows “data written”, “reserved space” and “free space”. The second section lets you figure out what would happen to your capacity consumption if you were to change the policy on all VMs. The third section gives you a nice breakdown of capacity per category, along with a great circular diagram that shows at a glance what kind of data is consuming your capacity. Very useful!
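If you prefer to pull the raw numbers programmatically rather than browse the UI, something like the following pyVmomi sketch lists capacity and free space per vSAN datastore. To be clear, the vCenter hostname and credentials are placeholders, and this only reproduces the simple used/free view, not the per-category breakdown the new screen gives you.

```python
# Minimal pyVmomi sketch: list capacity and free space for vSAN datastores.
# The vCenter hostname and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # only for lab setups with self-signed certs
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        if ds.summary.type != "vsan":
            continue
        cap_gb = ds.summary.capacity / (1024 ** 3)
        free_gb = ds.summary.freeSpace / (1024 ** 3)
        print(f"{ds.summary.name}: {cap_gb - free_gb:.0f} GB used / {cap_gb:.0f} GB total")
finally:
    Disconnect(si)
```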
VMworld Reveals: Armed and Ready (ESXi on ARM, #OCTO2944BU)
At VMworld, various cool new technologies were previewed. In this series of articles, I will write about some of those previewed technologies. Unfortunately, I can’t cover them all as there are simply too many. This article is about ESXi on ARM, which was session OCTO2944BU. For those who want to see the session, you can find it here. This session was presented by Andrei Warkentin and Daniel Beveridge. Please note that this is a summary of a session discussing a tech preview: these features may never be released, this preview does not represent a commitment of any kind, and this feature (or its functionality) is subject to change. Now let’s dive into it, what is VMware doing with ARM?
First of all, what caught my interest in this session was the fact that Hivecell was mentioned. Hivecell is a rather unique solution that allows you to stack ARM hosts. What is unique about that? Well, when I say stack, I mean stack in the physical sense. The interesting part here is that only the first node needs power and networking cables; the rest receive power and networking through a magnetic link. William wrote about it extensively, so go here to read more about these guys. Really cool solution if you ask me.
The session started with an intro to ARM and the various use cases. I wrote about that extensively when Chris Wolf and Daniel discussed it at VMworld 2018. So I am not going to reiterate that either; just click the link to figure out why ARM could be interesting. What was new in this session compared to last year? Well, they showed a couple of things that I have not seen shown in public before.
The first thing discussed was that VMware is looking to support the AWS ARM instances (A1 instances), which were introduced a while ago. The plan is to not only support ARM, but also support Elastic Network Interfaces (ENI) and Elastic Block Store (EBS). All of it managed through vCenter Server, of course. VMware is now looking for early validation customers and partners.
VMworld Reveals: Disaster Recovery / Business Continuity enhancements! (#HCI2894BU and #HBI3109BU)
At VMworld, various cool new technologies were previewed. In this series of articles, I will write about some of those previewed technologies. Unfortunately, I can’t cover them all as there are simply too many. This article is about enhancements in the business continuity/disaster recovery space. There were two sessions where futures were discussed, namely HCI2894BU and HBI3109BU. Please note that this is a brief summary of those sessions, which discuss a Technical Preview: these features/products may never be released, these previews do not represent a commitment of any kind, and this feature (or its functionality) is subject to change. Now let’s dive into it, what can you expect for disaster recovery in the future?
The first session I watched was HCI2894BU, which was all about Site Recovery Manager. I think the most interesting part is the future support for Virtual Volumes (vVols) for Site Recovery Manager. It may sound like something simple, but it isn’t. When the version of SRM that supports vVols ships, keep in mind that your vVol-capable storage system also needs to support it. At day 1, HPE Nimble, HPE 3PAR and Pure Storage will support it, and Dell EMC and NetApp are actively working on support. The requirements are that the storage system needs to be vVols 2.0 compliant and support VASA 3.0. Before they dove into the vVols implementation, some history was shared, as well as the current implementation. I found it interesting to learn that SRM has over 25,000 customers and has protected more than 3,000,000 workloads over the last decade.
VMworld Reveals: vMotion innovations
At VMworld, various cool new technologies were previewed. In this series of articles, I will write about some of those previewed technologies. Unfortunately, I can’t cover them all as there are simply too many. This article is about enhancements that will be introduced in the future to vMotion; this was session HBI1421BU. For those who want to see the session, you can find it here. This session was presented by Arunachalam Ramanathan and Sreekanth Setty. Please note that this is a summary of a session discussing a Technical Preview: this feature/product may never be released, this preview does not represent a commitment of any kind, and this feature (or its functionality) is subject to change. Now let’s dive into it, what can you expect for vMotion in the future?
The session starts with a brief history of vMotion and how we are capable today of vMotioning VMs with 128 vCPUs and 6 TB of memory. The expectation, though, is that vSphere will in the future support 768 vCPUs and 24 TB of memory. A crazy configuration if you ask me; that is a proper Monster VM.
Impact of adding Persistent Memory / Optane Memory devices to your VM
I have had some questions around this in the past month, so I figured I would share some details. As persistent memory (Intel Optane Memory devices, for instance) is getting more affordable and readily available, more and more customers are looking to use it. Some are already using it for very specific use cases, usually in situations where the OS and the app actually understand the type of device being presented. What does that mean? At VMworld 2018 there was a great session on this topic and I captured the session in a post. Let me copy/paste the important bit for you, which discusses the different modes in which a Persistent Memory device can be presented to a VM; after the list you will find a quick sketch of what adding such a device looks like through the vSphere API.
- vPMEMDisk = exposed to the guest as a regular SCSI/NVMe device; VMDKs are stored on the PMEM datastore
- vPMEM = Exposes the NVDIMM device in a “passthrough” manner; the guest can use it as a block device or as a byte-addressable direct access device (DAX). This is the fastest mode and most modern OSes support it
- vPMEM-aware = This is similar to the mode above, but the difference is that the application understands how to take advantage of vPMEM
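To make the vPMEM mode a bit more concrete, here is a rough pyVmomi sketch of what adding an NVDIMM device to an existing VM could look like. The VM name, vCenter details and the 4 GB size are placeholders, the VM needs a recent hardware version (14 or later) and the host needs a PMem datastore, so treat this as a sketch rather than a recipe.

```python
# Rough pyVmomi sketch: add an NVDIMM (vPMEM) device to an existing VM.
# VM name, vCenter hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "pmem-test-vm")  # hypothetical VM name

    # The NVDIMM device hangs off an NVDIMM controller; add both in one reconfigure,
    # linking them with a temporary negative key.
    controller = vim.vm.device.VirtualNVDIMMController(key=-100)
    nvdimm = vim.vm.device.VirtualNVDIMM(
        controllerKey=-100,
        capacityInMB=4096,  # placeholder size
        backing=vim.vm.device.VirtualNVDIMM.BackingInfo(),
    )
    add = vim.vm.device.VirtualDeviceSpec.Operation.add
    spec = vim.vm.ConfigSpec(deviceChange=[
        vim.vm.device.VirtualDeviceSpec(operation=add, device=controller),
        vim.vm.device.VirtualDeviceSpec(operation=add, device=nvdimm),
    ])
    task = vm.ReconfigVM_Task(spec=spec)
    print(f"Reconfigure task submitted: {task.info.key}")
finally:
    Disconnect(si)
```

The point of the sketch is simply that the NVDIMM ends up as a regular virtual device on the VM, which also makes it easy to check for, as shown further below.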
But what is the problem with presenting PMEM to a VM this way? What is the impact? Well, when you expose a Persistent Memory device to a VM, that VM is not currently protected by vSphere HA, even though HA may be enabled on your cluster. Say what? Yes indeed, the VM which has the PMEM device presented to it will be disabled for vSphere HA! I had to dig deep to find this documented anywhere, and it is documented in this paper. (Page 47, at the bottom.) So what works and what doesn't? Well, if I understand it correctly (see below the list for a quick way to find affected VMs in your own environment):
- vSphere HA >> Not supported on vPMEM enabled VMs, regardless of the mode
- vSphere DRS >> Does not consider vPMEM enabled VMs, regardless of the mode
- Migration of VM with vPMEM / vPMEM-aware >> Only possible when migrating to host which has PMEM
- Migration of VM with vPMEMDISK >> Possible to a host without PMEM
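If you want to get a feel for the impact in your own environment, here is a minimal pyVmomi sketch (vCenter details are placeholders) that lists the VMs which have an NVDIMM (vPMEM) device configured, as those are the ones that will be disabled for vSphere HA. Note that VMs which only use vPMEMDISK (a VMDK on a PMEM datastore) will not show up in this check, since in that mode the PMEM is presented as a regular SCSI/NVMe device.

```python
# Minimal pyVmomi sketch: report VMs with an NVDIMM (vPMEM) device configured.
# vCenter hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.config is None:  # skip VMs without a readable config (e.g. inaccessible)
            continue
        nvdimms = [d for d in vm.config.hardware.device
                   if isinstance(d, vim.vm.device.VirtualNVDIMM)]
        if nvdimms:
            total_mb = sum(d.capacityInMB for d in nvdimms)
            print(f"{vm.name}: {len(nvdimms)} NVDIMM device(s), {total_mb} MB vPMEM "
                  "-> not protected by vSphere HA")
finally:
    Disconnect(si)
```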
Also note that, as the data is not replicated/mirrored, a failure could potentially lead to loss of data. Although Persistent Memory is a great mechanism to increase performance, this is something that should be taken into consideration when you are thinking about introducing it into your environment.
Oh, if you are wondering why people are taking these risks in terms of availability, Niels Hagoort just posted a blog with a pointer to a new PMEM Perf paper which is worth reading.