
Yellow Bricks

by Duncan Epping



Instant Clone in vSphere 6.7 rocks!

Duncan Epping · May 1, 2018 ·

I wrote a blog post a while back about VMFork, which was later rebranded to Instant Clone. In the vSphere 6.7 release there has been a major change to the architecture of VMFork aka Instant Clone, so I figured I would update that post. As an update doesn’t stand out from the rest of the content, I am sharing it as a new post instead.

Instant Clone was designed and developed to provide a mechanism that allows you to instantaneously create VMs. In the early days it was mainly used by folks who wanted to deploy desktops; the desktop community often referred to this as “just in time” desktops. These desktops would literally be created when the user tried to log in, it is indeed that fast. How did this work? Well, a good way to describe it is that it is essentially a “vMotion” of a VM on the same host with a linked clone disk. This leads to a situation which looks as follows:

On a host you had a parent VM and a child VM associated with it. They would share a base disk and memory, and each child would have its own unique memory pages and a delta disk for (potential) changes written to disk. The reason customers primarily used this with VDI at first was that there was no public API for it. Of course folks like Alan Renouf and William Lam fought hard internally for public APIs, and they managed to get things like the PowerCLI cmdlets and the python vSphere SDK pushed through. That was great, but unfortunately not fully supported. On top of that there were some architectural challenges with the 1.0 release of Instant Clone, mainly caused by the fact that VMs were pinned to a host (next to their parent VM), which meant things like HA, DRS and vMotion wouldn’t work. With version 2.0 this all changes. William already wrote an extensive blog post about it here. I just went over all of the changes and watched some internal training, and I am going to write down some of my findings/learnings as well, just so that it sticks… First let’s list the things that stood out to me:

  • Targeted use cases
    • VDI
    • Container hosts
    • Big data / hadoop workers
    • DevTest
    • DevOps
  • There are two workflows for instant clone
    • Instant clone a running VM; the source and the generated VMs continue running
    • Instant clone a frozen VM; the source is frozen using guestRPC at a point in time defined by the customer
  • No UI yet, but “simple API” available
  • Integration with vSphere Features
    • Now supported: HA, DRS, vMotion (Storage / XvMotion etc)
  • Even when TPS is disabled (the default), VMFork still leverages the P-Share technology to collapse memory pages for efficiency
  • There is no explicit parent-child relationship any longer

Let’s look at the use cases first; I think DevTest / DevOps is an interesting one. You could for instance do an Instant Clone (live) of a VM and then test an upgrade of the application running within that VM. For this you would use the first workflow I mentioned above: instant clone a running VM. What happens in this workflow is fairly straightforward. I am using William’s screenshots of the diagrams the developers created to explain it. Thanks William, and dev team 🙂

[Diagram: instant clone workflow for a running source VM]

Note that in the diagram above, when the first clone is created the source gets a delta disk as well. This is to ensure that the shared disk doesn’t change, as that would cause problems for the target. When a 2nd and a 3rd VM are created, the source VM gets additional deltas. As you can imagine this isn’t optimal, and over time it may even slow down the source VM. One thing to point out is that although the MAC address changes for the generated VM, you as the admin still need to make sure the Guest OS picks this up. As mentioned above, there’s no UI in vSphere 6.7 for this functionality, so you need to use the API. If you look at the MOB you can actually find InstantClone_Task and simply call that; for a demo scroll down. But as said, be careful, as you don’t want to end up with the same VM with the same IP on the same network multiple times. You can get around the MAC/IP conflict issue rather easily, and William has explained how in his post here. You can even change the port group for the NIC, for instance to switch over to an isolated network only used for testing these upgrade scenarios.
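For those who want to go a step beyond clicking through the MOB, below is a minimal pyVmomi sketch of this first workflow (instant cloning a running VM). This is just an illustration: the vCenter hostname, credentials and the VM names “source-vm” / “clone-01” are placeholders, and you would of course add proper task handling plus the Guest OS MAC/IP fix-up discussed above.

```python
# Minimal pyVmomi sketch: instant clone a running source VM (workflow 1).
# Placeholders: vcsa.lab.local, the credentials, "source-vm" and "clone-01".
from pyVim.connect import SmartConnectNoSSL, Disconnect
from pyVmomi import vim

si = SmartConnectNoSSL(host="vcsa.lab.local",
                       user="administrator@vsphere.local",
                       pwd="VMware1!")
content = si.RetrieveContent()

# Find the running source VM by name with a simple inventory walk.
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
source = next(vm for vm in view.view if vm.name == "source-vm")

# The spec only needs a name and a relocate spec; leaving the relocate
# spec empty keeps the generated VM next to the source.
spec = vim.vm.InstantCloneSpec(name="clone-01",
                               location=vim.vm.RelocateSpec())

task = source.InstantClone_Task(spec=spec)
# ... wait for the task to complete, then fix up the guest identity
# (MAC/IP) as described above before putting the clone on a shared network.

Disconnect(si)
```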

The second workflow would be used for the following use cases: VDI, container hosts, Hadoop workers… all more or less the same type of use case: scale out identical VMs fast! Let’s look at the diagram first:

[Diagram: instant clone workflow for a frozen source VM]

In the above scenario the source VM is what they call “frozen”. You can freeze a VM by leveraging vmware-rpctool and running it with “instantclone.freeze”. This needs to happen from within the guest, and note that you need VMware Tools installed for vmware-rpctool to be available. When this is executed the VM goes into a frozen state, meaning that no CPU instructions are executed. Now that you have frozen the VM you can go through the same instant clone workflow, and Instant Clone will know that the VM is frozen. After the instant clone is created you will notice that there’s a single delta disk for the source VM, and each generated VM has its own delta disk, as shown above. The big benefit is that the source VM won’t accumulate many delta disks. Plus, you know for sure that every single VM you create from this frozen VM is 100% identical, as they all resume from the exact same point in time. Of course when the instant clone is created the new VM is unfrozen / resumed; the source remains frozen. Note that if for whatever reason the source is restarted / power cycled, the frozen state is lost. Another added benefit of the frozen VM is that you can automate the identity / IP / MAC issue when leveraging the frozen source VM workflow. How do you do this? You disable the network, freeze the VM, instant clone it (the clone unfreezes automatically), make the network changes, and enable the network again. William just did a whole blog post on the various required Guest OS changes; I would highly recommend reading that one as well!
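To make that frozen-VM workflow a bit more tangible, here is a rough pyVmomi sketch of the vCenter side. Note that the guestinfo.ic.* key names and the idea of an in-guest script consuming them are just assumptions for illustration (William’s post covers the actual guest changes); only the InstantCloneSpec config list of OptionValues is part of the vSphere 6.7 API.

```python
# Sketch: instant clone a frozen source VM and hand each clone a unique
# network identity through guestinfo. The guestinfo.ic.* keys and the
# in-guest script that reads them are illustrative assumptions.
#
# Inside the guest, before cloning, the source was frozen with:
#   vmware-rpctool "instantclone.freeze"
from pyVmomi import vim

def instant_clone_from_frozen(source_vm, name, ip_address, netmask, gateway):
    """Clone a frozen source VM and pass per-clone identity hints along."""
    spec = vim.vm.InstantCloneSpec(
        name=name,
        location=vim.vm.RelocateSpec(),
        # These keys land in the clone's guestinfo namespace; a resume script
        # in the guest could read them, e.g.:
        #   vmware-rpctool "info-get guestinfo.ic.ipaddress"
        # reconfigure the NIC, and then re-enable the network.
        config=[
            vim.option.OptionValue(key="guestinfo.ic.ipaddress", value=ip_address),
            vim.option.OptionValue(key="guestinfo.ic.netmask", value=netmask),
            vim.option.OptionValue(key="guestinfo.ic.gateway", value=gateway),
        ],
    )
    return source_vm.InstantClone_Task(spec=spec)

# Example: scale out three identical workers from the same frozen point in time.
# for i in range(1, 4):
#     instant_clone_from_frozen(source, "worker-%02d" % i, "10.0.0.%d" % (10 + i),
#                               "255.255.255.0", "10.0.0.1")
```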

Before you start using Instant Clone, first think about which of the two workflows you prefer and why. So what else did I learn?

As mentioned, and this is something I never realized: even when TPS is disabled, Instant Clone will still share memory pages through the P-Share mechanism. P-Share is the same mechanism that TPS leverages to collapse memory pages. I always figured that you needed to re-enable TPS (with or without salting), but that is not the case. You can’t even disable the use of P-Share at this point in time… Personally I don’t think this is a security concern, but you may think differently about it. Either way, of course I tested this; below you see screenshots of the memory info before and after an instant clone. And yes, TPS was disabled. (Look at the shared / saving values…)

Before: [screenshot: memory sharing stats prior to the instant clone]

After: [screenshot: memory sharing stats after the instant clone]
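If you prefer to check this programmatically rather than through the UI, a small pyVmomi helper that reads the VM’s quick stats before and after the clone works as well. This is just a sketch and assumes you already have a connected ServiceInstance and the VM object at hand.

```python
# Sketch: print the shared vs. private memory counters for a VM so you can
# compare the values before and after creating an instant clone.
from pyVmomi import vim

def print_memory_sharing(vm):
    stats = vm.summary.quickStats  # values are reported in MB
    print("%s: shared=%d MB, private=%d MB, ballooned=%d MB"
          % (vm.name, stats.sharedMemory, stats.privateMemory,
             stats.balloonedMemory))
```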

Last but not least: the explicit parent-child relationship caused several problems from a functionality standpoint (HA, DRS, vMotion etc. not being supported). As of vSphere 6.7 this is no longer the case. There is no strict relationship, and as such all the features you love in vSphere can be fully leveraged, even for your Instant Clone VMs. This is why they call this new version of Instant Clone “parentless”.

If you are wondering how you can simply test it without diving too deep into the API and scripting: you can use the Managed Object Browser (MOB) to invoke the method as mentioned earlier. I recorded a quick demo that shows this, based on a demo from one of our Instant Clone engineers. I recommend watching it in full screen, or on YouTube in a larger window, as it is much easier to follow that way. Pay attention, as it is a quick demo; instant clone is extremely fast and the workflow is extremely simple.

And that’s it for now. I hope this helps those interested in Instant Clone / VMFork, and maybe some of you will come up with interesting use cases that we haven’t thought about. If you have use cases, it would be great if you shared them in the comment section below. Thanks!

Operational Efficiency (You’re not Facebook/Google/Netflix)

Duncan Epping · Dec 8, 2014 ·

In previous roles, also before I joined VMware, I was a system administrator and a consultant. The tweets below reminded me of the kind of work I did in the past and triggered a train of thought that I wanted to share…

@jtmcarthur56 That's only achievable when you have 50,000 servers running one application

— Howard Marks (@DeepStorageNet) December 3, 2014

Howard has a great point here. For some reason many people have started using Google, Facebook or Netflix as the prime example of operational efficiency. Startups use it in their pitches to describe what they can bring and how they can simplify your life, and yes, I’ve also seen companies like VMware use it in their presentations. When I look back at the time I managed these systems, my pain was not the infrastructure (servers / network / storage), even though the environment I was managing was based on what many refer to as legacy: EMC Clariion, NetApp FAS or HP EVA. The servers were never really the problem to manage either; sure, updating firmware was a pain, but it was not my biggest pain point. Provisioning virtual machines was never a huge deal… My pain was caused by the application landscape many of my customers had.

At companies like Facebook and Google the ratio of applications to admins is different, as Howard points out. I would also argue that in many cases the applications are developed in-house and are designed around agility, availability and efficiency… Unfortunately for most of you this is not the case. Most applications are provided by vendors which don’t really seem to care about your requirements; they don’t design for agility and availability. No, instead they do what is easiest for them. In the majority of cases these are legacy monolithic (cr)applications with a simple database, which all needs to be hosted on a single VM, and when you get an update, that is where the real pain begins. At one of the companies I worked for we had a single department using over 80 different applications to calculate mortgages for the different banks and offerings out there. Believe me when I say that that is not easy to manage, and that is where I would spend most of my time.

I do appreciate the whole DevOps movement and I do see the value in optimizing your operations to align with your business needs, but we also need to be realistic. Expecting your IT org to run as efficiently as Google/Facebook/Netflix is just not realistic and is not going to happen, unless of course you invest deeply and develop the majority of your applications in-house, using the same design principles these companies use. Even then I doubt you would reach the same efficiency, as most simply won’t have the scale to reach it. This does not mean you should not aim to optimize your operations though! Everyone can benefit from optimizing operations, from re-aligning the IT department to the demands of today’s world, from revising procedures… Everyone should go through this motion, constantly, but at the same time stay realistic. Set your expectations based on what lands on the infrastructure, as that is where a lot of the complexity comes in.

We are DevOps

Duncan Epping · Nov 27, 2014 ·

Over the last couple of months I have started running into more and more customers who are wondering what that DevOps thing is they keep hearing about. They want to know if they need to start hiring DevOps engineers and which software they need to procure for DevOps. I more or less already alluded to what I think it really is or means in my blog post about The Phoenix Project; let me re-use a quote from the review I wrote for that book:

After reading the book I am actually left wondering if DevOps is the right term, as it is more BizDevOps than anything else. All of IT enabling the development of business through operational efficiency / simplicity.

DevOps is not something you buy, it is not about specific tools you use, it is a state of mind… an operational model, a certain level of maturity. I would argue that it is just a new fancy way of describing IT maturity. At VMware we have had this professional services engagement called Operational Readiness, where IT (Ops and Dev) and business owners would be interviewed to identify the shortcomings in terms of the IT offerings and agility; the outcome would be a set of recommendations that would allow an organization to align better with the business demands. (This engagement has been around for at least 6 years now to my knowledge.)

Typically these types of engagements would revolve around people and process and focus less on the actual tools used. The theme of the recommendations generally was around breaking down the silos in IT (between the various teams in an IT department: dev / ops / security / networking / storage), and of course reviewing processes / procedures. It is strange how even today we still encounter the same types of problems we encountered years ago. You can deploy a new virtual machine in literally minutes, you can configure physical servers in about 10 minutes (when the installation is fully automated)… yet it takes 3 weeks to get a networking port configured, 2 weeks to get additional LUNs, 4 days to prepare that test/dev environment, or, even worse, the standard change process from start to finish takes 6 weeks.

What is probably most striking is that we live in an ever-changing world, the pace at which this happens is unbelievably fast, and we happen to work in the industry which enables this… Yet when you look at IT, in most cases we forget to review our processes (or design) and do not challenge the way we are doing things today. We (no, not you, I know, but that guy sitting next to you) take what was described 5 years ago and blindly automate that. We use the processes we developed for the physical world in a virtualized world, and we apply the same security policies and regulations to a virtual machine as to a physical machine. In many cases, unfortunately, from a people perspective things are even far worse… no communication whatsoever between the silos, besides perhaps through an ancient helpdesk ticketing tool, sadly enough.

In today’s world, if you want to stay relevant, it is important that you can respond as fast as possible to the (ever-changing) demands of your business / customers. IT has the power to enable this. This is what this so-called “Operational Readiness” engagement was there for: identify the operational and organizational pain points, solve them and break down those silos to cater for the business needs. In today’s world the expected level of operational maturity is a couple of levels higher even, and that level is what people (to a certain extent) refer to when they talk about DevOps, in my opinion.

So the question then remains: what can you do to ensure you stay relevant? Let’s make it clear that DevOps is not something you buy, it is not a role in your organization, and it is not a specific product; it is an IT mindset… hence the title: we are DevOps. Joe Baguley’s keynote at the UK VMUG was recorded, and although he did not drop the word DevOps, he does talk about staying relevant, what it is IT does (provide applications), how you can help your company beat the competition, and what your focus should be. (On top of that, he does look DevOps with his beard and t-shirt!) I highly recommend watching this thought-provoking keynote. Make sure to sit down afterwards, do nothing for 30 to 60 minutes besides reflecting on what you have done the last 12 months, and then think about what it is you can do to improve business development, whether in new or existing markets, for your company.

Project Fargo aka VMFork and TPS?

Duncan Epping · Nov 11, 2014 ·

I received some questions this week around how VMFork (aka Instant Clone) will work when TPS is disabled in the future. I already answered some of them in the comments, but figured this would be easier to google. First of all, I would like to point out that in future versions TPS will not be globally disabled, but rather it will be disabled for inter-VM page collapsing. Within a VM, pages will still be collapsed as normal. The way it works is that each virtual machine configuration will contain a salt, and all virtual machines with the same salt will share pages… However, by default each virtual machine will have a unique salt. Now this is where a VMFork’ed virtual machine will differ in the future.

VMFork’ed virtual machines in the future will share the salt, which means that “VMFork groups” can be considered a security domain and pages will be shared between all of these VMs. In other words, the parent and all of its children have the same salt and will share pages (see sched.mem.pshare.salt). If you have a different parent, then pages between those VMFork groups (both the parents and their children) will not be shared.
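To make the salting mechanism concrete, here is an illustrative pyVmomi snippet that puts two VMs in the same sharing domain by hand. For VMFork’ed VMs this will be taken care of automatically as described above; the snippet just shows what setting sched.mem.pshare.salt through the API looks like (the salt value itself is a placeholder).

```python
# Sketch: give two VMs the same pshare salt so their pages can be collapsed
# into one sharing domain even when inter-VM TPS is disabled.
from pyVmomi import vim

def set_pshare_salt(vm, salt="vmfork-group-1"):
    spec = vim.vm.ConfigSpec(
        extraConfig=[vim.option.OptionValue(key="sched.mem.pshare.salt",
                                            value=salt)])
    return vm.ReconfigVM_Task(spec=spec)

# set_pshare_salt(parent_vm)   # parent and children get the same salt,
# set_pshare_salt(child_vm)    # so pages are shared within the VMFork group
```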

Recommended Read: The Phoenix Project

Duncan Epping · Nov 3, 2014 ·

Last week, when traveling to China, I finally had the time to read a book that had been on my “to read” list for a long time: The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win.

I just posted a review on Amazon.com and figured I would share it with my readers as well, as I felt this book is worth promoting, although many of my fellow bloggers/tweeps have done this already. Let me copy the review for your convenience:

Reading the book, one thing that stands out is that it is all very recognizable if you have ever worked for a company which is moving into new spaces and has a business relying on IT. I have been there, and many of the situations sounded / felt very familiar to me. I found it a very enjoyable read and, to a certain degree, educational at the same time. Now here is the caveat: although it is a book about IT and DevOps, it is very much written as a novel. This is something you need to take into consideration when you buy it, when you read it, and ultimately when you review it. I felt that when you read it as a novel it is an excellent light and easy read with the right amount of detail needed to help you learn about what DevOps can bring to your business. After reading the book I am actually left wondering if DevOps is the right term, as it is more BizDevOps than anything else. All of IT enabling the development of business through operational efficiency / simplicity.

The book was written by Gene Kim, Kevin Behr and George Spafford, and it revolves around an IT manager (Bill) who is struggling to align IT agility / flexibility with the business needs for the Phoenix Project. As I mentioned in the review, many of the situations sounded very familiar to what I experienced in previous roles before joining VMware, so I could relate to a lot of the challenges described in the book, and I think that is why it was also very entertaining. At the same time, it is humorous but also fairly light reading, so before you know it you are a couple of chapters in.

In my Amazon review I mentioned that after reading the book I was left wondering whether “DevOps” was the right term, as to many sysadmins the connotation of DevOps seems to be a negative one. Reading the book, and looking back at my own experience, the goal is enabling the development of business for your company; whether that is new business, an increase in volume, or a full transformation is almost beside the point. The key is that you will only get there when all of IT is aligned and working towards that common goal.

I don’t read too many IT books, as typically they are dry and I struggle to get through them. The Phoenix Project was the opposite; if you are like me, then definitely give this one a try. Although it is not a deeply technical book (as I stated, it is more of a novel), I am sure everyone will get something out of it. I read the Kindle version, and it was definitely worth the $9.99, but if you prefer a paper copy you can find it on Amazon for less than 16 dollars, which is still a great buy! Recommended read for sure!

