
Yellow Bricks

by Duncan Epping


BC-DR

Sharing the #TechConfessions video

Duncan Epping · Nov 27, 2017 ·

At VMworld I sat down with Amy Lewis and had my Tech Confession: where, when, and how I entered the Software Defined world.

I would also highly recommend the tech confessions by William Lam and Alan Renouf; both are very interesting. Make sure to follow the channel and watch the other videos as well. I know there are a lot more interesting videos coming soon!

New whitepaper available: vSphere Metro Storage Cluster Recommended Practices (6.5 update)

Duncan Epping · Oct 24, 2017 ·

I had many requests for an updated version of this paper, so over the past couple of weeks I have been working hard on it. The paper was outdated, as it was last revised around the vSphere 6.0 timeframe, and that was only a minor update. I went through every single section and added new statements and guidance, for instance around vSphere HA Restart Priority. So for those running a vSphere Metro Storage Cluster / stretched cluster of some kind, please read the brand new vSphere Metro Storage Cluster Recommended Practices (6.5 update) white paper.

It is available on storagehub.vmware.com as a PDF and for reading within your browser. If you have any questions or comments, please do not hesitate to leave them here.

  • vSphere Metro Storage Cluster Recommended Practices online
  • vSphere Metro Storage Cluster Recommended Practices PDF

 

New white paper available: vSphere APIs for I/O Filtering (VAIO)

Duncan Epping · Oct 13, 2017 ·

Over the past couple of weeks Cormac Hogan and I have been updating various Core Storage white papers which had not been touched in a while for different reasons. We were starting to see more and more requests come in for updated content, and as both of us used to be responsible for this at some point in the past, we figured we would update the papers and then hand them over to technical marketing for “maintenance” updates in the future.

You can expect a whole series of papers in the upcoming weeks on storagehub.vmware.com and the first one was just published. It is on the topic of the vSphere APIs for I/O Filtering and provides an overview of what it is, where it sits in the I/O path and how you can benefit from it. I would suggest downloading the paper, or reading it online on storagehub:

  • vSphere APIs for I/O Filtering White Paper online
  • vSphere APIs for I/O Filtering White Paper download

#vmworld #STO1770BU – Tech Preview of Integrated Data Protection for vSAN

Duncan Epping · Aug 30, 2017 ·

This session was hosted by Michael Ng and Shobhan Lakkapragada and is all about Data Protection in the world of vSAN. Note that this was a tech preview, so features may or may not ever make it into a future release. The session started with Shobhan explaining the basics of vSAN and the current solutions available for vSAN data resiliency. I am not going to rehash that, as I am going to assume you have read most of my articles on those topics already.

The vision: native data protection for vSAN. Provide the ability to specify in policy how many snapshots you would like per VM, how often they should be taken, and what the retention should be. These snapshots will be stored locally. However, it will also be possible to specify in policy whether data needs to be moved outside of the primary datacenter, for instance moving data once every 4 hours to the DR site or the archival site. These two options are also referred to as “local protection” and “remote protection”. Remote protection, by the way, is not limited to vSAN targets; NFS, Data Domain and even S3-based storage are supported as well. This is the overall vision of what we are trying to achieve with the native data protection feature.
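To make the policy concept a bit more tangible, here is a purely hypothetical Python sketch. None of these names are a real vSAN API (this was a tech preview after all); it simply models a per-VM protection policy with a local snapshot schedule, a retention count and an optional remote target.

```python
# Hypothetical model only; not a real vSAN Data Protection API.
from dataclasses import dataclass
from datetime import timedelta
from typing import Optional

@dataclass
class ProtectionPolicy:
    snapshot_interval: timedelta          # how often a snapshot is taken per VM
    retention_count: int                  # how many local snapshots to keep
    remote_target: Optional[str] = None   # e.g. DR site, NFS, Data Domain, S3
    remote_interval: Optional[timedelta] = None  # e.g. ship data every 4 hours

# Illustrative example: snapshot locally every hour, keep 24 snapshots,
# and move data to the DR site once every 4 hours (as described above).
policy = ProtectionPolicy(
    snapshot_interval=timedelta(hours=1),
    retention_count=24,
    remote_target="dr-site",
    remote_interval=timedelta(hours=4),
)
```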

The first problem that needs to be solved is snapshotting. The current vSphere/vSAN snapshotting mechanism will not scale to the extent it will need to. A new snapshotting mechanism is being worked on which will give far better performance and scale; the design goal is to support up to 100 snapshots per VM with a low (minimal) performance impact. The technology is developed on vSAN but not tied to vSAN, so it may be expanded to vSphere overall.

Michael then took over and started diving deeper into the functionality that we are aiming to provide. First of all, “native local data protection”: this is where the snapshots, created through a schedule in a policy, are stored locally on the datastore. It is a “first line of defense” mechanism that lets us recover VMs really fast by simply going back to a previous snapshot. Snapshots can be created in an application-consistent state, even leveraging VSS providers. What is critical, if you ask me, is that all of this uses the familiar SPBM policies. If you know how to create a policy then you can configure data protection!

Next, Michael demoed the H5 interface for vSAN Data Protection. A policy is created with the new capabilities that are part of vSAN Data Protection, showing how you can specify RPO, RTO, application consistency, etc. The policy is then attached to VMs. After that the snapshot catalog view was demoed. The H5 UI shows the catalog on a per-VM basis, but of course there are various views. The per-VM view shows all the snapshots, whether they are stored locally or remotely, and it provides the option to move back and forth in time, which is very useful when you need to restore an older snapshot. When you click a snapshot you will see all the details of that snapshot.

In the next demo Michael showed how to restore a snapshot. Not the most spectacular demo, and why not? Well, because it is dead simple. First he simulated a data file corruption, then went to the H5 UI, right-clicked the VM and went to the restore option. Next he selected the snapshot he wanted to restore and restored it as a “new VM”, which is a linked clone, but it can also be restored as a fully independent VM. In the case where you want a fully independent restore, a linked clone (of sorts) is created first, and in the back-end the instance is then migrated to a fully independent VM. The recovery is instant, and over time the task of making the VM independent completes. During the recovery, by the way, there is even the option to have the VM recovered without networking, or you can customize the VM to avoid conflicts.

When the recovery finished, Michael showed how the “corrupted file” was successfully restored. Or actually I should say the VM was restored to the ‘last known good state’, as this is not a file-level restore but a VM-level restore.

Besides snapshotting and restoring, it is of course also possible to closely monitor the state of your protected VMs. Creating snapshots is important, but being able to restore them is even more important. Custom health checks are being developed for vSAN Data Protection which show you the current state of data protection in your environment: is the service running, are VM snapshots being created, are they crash consistent?

And with that the session ended. Very impressive demos and an interesting feature; I cannot wait to see this being released! Again, when the session is published, I will share the link. Thanks Michael and Shobhan.

Can you use the management IPs as the isolation address for HA?

Duncan Epping · Aug 11, 2017 ·

There was a question on VMTN this week about using the management IPs of the hosts in a “smaller” cluster as the isolation addresses for vSphere HA. The plan was to disable the default isolation address (the default gateway) and then add every management IP as an isolation address; in this case 5 or 6 IPs would be added. I had to think this through and went through the steps of what happens in the case of an isolation event:

  1. there is no traffic between the secondary and primary host, or between the primary and secondary hosts (depending on whether the primary or one of the secondary hosts is isolated)
  2. if it is a secondary host that is potentially isolated, then that secondary will start a “primary election process”
  3. if it is the primary that is potentially isolated, then the primary will try to ping the isolation addresses
  4. if it is a secondary and there is no response to the election process, then the secondary host will ping the isolation addresses after it has elected itself as primary host
  5. if there is no response to any of the pings (which happen in parallel), then isolation is declared and the isolation response is triggered

Now the question is: will there be a response when the host tries to ping itself while it is isolated? (For this scheme to make sense you need to add all management IP addresses to the “isolation address” options.) And that is what I tested. The host will ping all isolation addresses. All but one will fail; the one that succeeds is the management IP address of the isolated host itself. (You can still ping your own IP even when the NICs are disconnected.) As a result, the VMs are left running, because one of the isolation addresses responded.
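To make this concrete, below is a rough Python sketch of the decision logic described above. It is purely illustrative and not the actual vSphere HA (FDM) implementation: all isolation addresses are pinged in parallel, and isolation is only declared when none of them respond, which is exactly why including the host’s own management IP defeats the purpose.

```python
# Illustrative sketch only, not the actual vSphere HA (FDM) code.
import subprocess
from concurrent.futures import ThreadPoolExecutor

def ping(address: str) -> bool:
    """Return True if a single ICMP echo to the address succeeds."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", address],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

def is_isolated(isolation_addresses: list[str]) -> bool:
    """Isolation is declared only if *none* of the addresses respond."""
    with ThreadPoolExecutor() as pool:
        return not any(pool.map(ping, isolation_addresses))

# If the host's own management IP is in this list, the ping to itself still
# succeeds while the host is isolated, so isolation is never declared and
# the isolation response never triggers.
print(is_isolated(["192.168.1.11", "192.168.1.12", "192.168.1.13"]))
```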

In other words, don’t do this. The isolation address should be a reliable address outside of the ESXi hosts, preferably on the same network as the management interfaces.
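If you want to set this up programmatically, here is a minimal pyVmomi sketch, assuming placeholder names (“vcenter.example.com”, “Cluster01”) and a placeholder address (192.168.1.253) for the reliable isolation address, showing how the das.usedefaultisolationaddress and das.isolationaddress0 advanced options could be applied to a cluster:

```python
# Minimal sketch: set vSphere HA isolation address advanced options via pyVmomi.
# Hostname, credentials, cluster name and IP below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Find the cluster by name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Cluster01")
view.DestroyView()

# Ignore the default gateway and use a reliable address on the management
# network, outside of the ESXi hosts, as the isolation address.
das_options = [
    vim.option.OptionValue(key="das.usedefaultisolationaddress", value="false"),
    vim.option.OptionValue(key="das.isolationaddress0", value="192.168.1.253"),
]
spec = vim.cluster.ConfigSpecEx(
    dasConfig=vim.cluster.DasConfigInfo(option=das_options))

# modify=True merges these settings with the existing cluster configuration.
task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)

Disconnect(si)
```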

