
Yellow Bricks

by Duncan Epping


drs

Black Friday Gift: Free copy of the vSphere 6.7 Clustering Deep Dive, thanks Rubrik (ebook)

Duncan Epping · Nov 23, 2018 ·

Many asked us if the ebook would be made available for free again. Today I have the pleasure of announcing that Frank, Niels and I have worked once again with Rubrik and the VMUG organization to make the vSphere 6.7 Clustering Deep Dive book available for free! Yes, that is 0 USD / EURO, or whatever your currency is. The book signing at VMworld was wildly popular, and it resulted in the follow-up discussion about the ebook.

Ready to up your vSphere game? Join us at #VMworld booth #P305 for a complimentary copy of @ClusterDeepDive + the chance to meet authors @DuncanYB @FrankDenneman @NHagoort! More info: https://t.co/0DQ7nI1wzX pic.twitter.com/7nIGEvjdBF

— Rubrik (@rubrikInc) November 2, 2018

Want a copy? All we expect you to do is register on Rubrik’s website using your own email address. So register, start your download engines, and pick up a fresh copy of the vSphere Clustering Deep Dive here!

HA Futures: Per VM Admission Control – Part 4 of 4 – (Please comment!)

Duncan Epping · Nov 2, 2018 ·

As admission control hasn’t evolved in the past years, we figured we would include another potential Admission Control change. Right now you define admission control cluster-wide. You can define that you want to tolerate 1 failure, for instance, but some VMs simply may be more important than other VMs. What do you do in that case?

Well, if that is the case, then with today’s implementation you are stuck. This became very clear when customers started using vSAN policies and defined different “failures to tolerate” values for different workloads; it just makes sense. But as mentioned, HA does not allow you to do this. So our proposal is the following: Per VM FTT Admission Control.

In this case you would be able to define Host Failures To Tolerate on a per-VM basis (a toy model of such a check is sketched after the list below). This would provide a couple of benefits in my opinion:

  • You can set a higher Host Failures To Tolerate for critical workloads, increasing the chances of being able to restart them when a failure has occurred
  • Aligning the HA Host Failures To Tolerate with the vSAN Host Failures To Tolerate, resulting in similar availability from a compute and storage point of view
  • Lower resource fragmentation by providing Admission Control on a per-VM basis, even when using the “slot based algorithm”
  • Of course, you can combine this with the new admission control types mentioned in my earlier post.
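
To make the idea a bit more concrete, here is a toy Python model of how such a per-VM FTT admission check could reason about capacity. To be clear, this is just my sketch of the concept with made-up numbers; it is not how HA implements (or would implement) admission control.

```python
# Toy model of the proposed per-VM FTT admission check; a sketch of the
# idea only, not how vSphere HA is implemented. All numbers are made up.
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    reservation: int  # MHz
    ftt: int          # proposed per-VM Host Failures To Tolerate

def admits(vms, host_capacities):
    """For every failure count k, losing the k largest hosts must still
    leave room for the reservations of all VMs with ftt >= k."""
    caps = sorted(host_capacities, reverse=True)
    max_ftt = max((vm.ftt for vm in vms), default=0)
    for k in range(1, max_ftt + 1):
        surviving = sum(caps[k:])  # worst case: the k biggest hosts fail
        protected = sum(vm.reservation for vm in vms if vm.ftt >= k)
        if protected > surviving:
            return False
    return True

# One critical VM protected against 2 host failures, six app VMs against 1,
# on four identical 10000 MHz hosts.
vms = [VM("critical-db", 4000, 2)] + [VM(f"app-{i}", 2000, 1) for i in range(6)]
print(admits(vms, [10000] * 4))  # True: every failure scenario still fits
```

Note how the critical VM is the only one that needs to survive a second host failure, so the two-failure scenario only has to guarantee capacity for its reservation; that is exactly the lower-fragmentation benefit mentioned above.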

Hopefully that is clear, and hopefully it is a proposal you appreciate. Please leave a comment whether you find this useful or not. Please help shape the future of HA!

HA Admission Control: How can I check how much reserved capacity is used?

Duncan Epping · Sep 14, 2018 ·

I had this question twice in the past three months, so somehow this is something that isn’t clear to everyone. With HA Admission Control you set aside capacity for failover scenarios; this capacity is reserved. But of course VMs also use reservations, so how can you see the total combined used reserved capacity, including Admission Control?

Well, it appears that is actually pretty simple. Open the Web Client or the H5 client and go to your cluster. Under the “Monitor” tab there’s an option called “Resource Reservation”, which has graphs for both CPU and Memory, and these actually include admission control. To verify this, I took a screenshot in the H5 client before and after enabling admission control. As you can see, there is a big increase in “reserved resources”, which indicates that admission control is taken into consideration.

Before: [screenshot of the Resource Reservation graphs with admission control disabled]

After: [screenshot of the Resource Reservation graphs with admission control enabled]
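
If you would rather pull these numbers from the API than from the client, the sketch below uses pyVmomi to print the raw ingredients: the admission control policy that is configured and the sum of the VM-level reservations. The hostname, credentials and cluster name are placeholders, and the Resource Reservation graph in the client remains the easiest combined view.

```python
# Minimal pyVmomi sketch: print the configured HA admission control policy
# plus the total of the per-VM reservations in a cluster. Hostname,
# credentials and the cluster name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",          # placeholder
                  user="administrator@vsphere.local",  # placeholder
                  pwd="secret",                        # placeholder
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Locate the cluster by name
cv = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in cv.view if c.name == "Cluster-01")  # placeholder

# The HA (das) configuration holds the admission control settings
das = cluster.configurationEx.dasConfig
print("Admission control enabled:", das.admissionControlEnabled)
policy = das.admissionControlPolicy
if isinstance(policy, vim.cluster.FailoverLevelAdmissionControlPolicy):
    print("Host failures to tolerate:", policy.failoverLevel)
elif isinstance(policy, vim.cluster.FailoverResourcesAdmissionControlPolicy):
    print("Reserved CPU %:", policy.cpuFailoverResourcesPercent)
    print("Reserved memory %:", policy.memoryFailoverResourcesPercent)

# Sum the explicit per-VM reservations (CPU in MHz, memory in MB)
vms = content.viewManager.CreateContainerView(
    cluster, [vim.VirtualMachine], True).view
cpu = sum(vm.config.cpuAllocation.reservation or 0 for vm in vms if vm.config)
mem = sum(vm.config.memoryAllocation.reservation or 0 for vm in vms if vm.config)
print(f"VM-level reservations: {cpu} MHz CPU, {mem} MB memory")

Disconnect(si)
```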

This is also covered in our Deep Dive book, by the way; if you want to know more, pick it up.

VMworld Video: vSphere 6.7 Clustering Deep Dive

Duncan Epping · Sep 3, 2018 ·

As all the VMworld videos have been posted (and nicely listed by William), I figured I would share the session Frank Denneman and I presented. It ended up in the Top 10 Sessions on Monday, which is always a great honor. We had a lot of positive feedback and comments, thanks for that! Most importantly, it was a lot of fun to be up on stage at VMworld again after roughly 6 years of absence, talking about this content. For those who missed it, watch it here:

https://s3-us-west-1.amazonaws.com/vmworld-usa-2018/VIN1249BU.mp4

I also very much enjoyed the book signing session at the Rubrik booth with Niels and Frank. I believe Rubrik gave away around 1000 copies of the book. Hoping we can repeat this huge success in EMEA, but more on that later. If you haven’t picked up the book yet and won’t be at VMworld Europe, consider picking it up through Amazon; the e-book is only 14.95 USD.


All-Flash Stretched vSAN Cluster and VM/Host Rules

Duncan Epping · Aug 20, 2018 ·

Last week I had a question about the need for DRS Rules, also known as VM/Host Rules, in an All-Flash Stretched vSAN infrastructure. vSAN Stretched Clusters implement read locality, which in this case ensures that reads will always come from the local fault domain.

This means that in a stretched environment, reads do not have to traverse the inter-site network. As the maximum supported latency is 5ms, this avoids a potential performance hit of 5ms for reads. An additional benefit is that in a hybrid configuration we avoid the need to re-warm the cache. Because of this (read) cache re-warming issue, we recommend our customers implement VM/Host Rules. These rules ensure that, in a normal healthy situation, VMs always run on the same set of hosts. (This is explained on storagehub here.)

What about an All-Flash cluster, do you still need to implement these rules? The answer is: it depends. You don’t need them for the “read cache” issue, as in an all-flash cluster there is no read cache. Could you run without those rules? Yes, you can, but if you have DRS enabled it will freely move VMs around, potentially every 5 minutes. This also means that you will have vMotion traffic consuming the inter-site links, and considering how resource-hungry vMotion can be, you need to ask yourself what cross-site load balancing adds, what the risk is, and what the reward is. Personally, I would prefer to load balance within a site and only go across the link when doing site maintenance, but you may have a different view or set of requirements. If so, it is good to know that vSAN and vSphere support this.
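
If you do want such rules and prefer to script them, here is a minimal pyVmomi sketch that creates the non-mandatory “should run on” style rule described above: a VM group, a host group for one site, and the rule tying them together. The group and rule names (“site-a-vms” and so on) are placeholder assumptions, and cluster, vms and hosts are managed objects you have already looked up.

```python
# Sketch: create a soft VM/Host affinity rule with pyVmomi. Group and
# rule names are placeholders; adapt them to your own stretched cluster.
from pyVmomi import vim

def add_site_affinity_rule(cluster, vms, hosts):
    """Keep `vms` on `hosts` with a non-mandatory (should-run) rule."""
    spec = vim.cluster.ConfigSpecEx(
        groupSpec=[
            vim.cluster.GroupSpec(
                operation="add",
                info=vim.cluster.VmGroup(name="site-a-vms", vm=vms)),
            vim.cluster.GroupSpec(
                operation="add",
                info=vim.cluster.HostGroup(name="site-a-hosts", host=hosts)),
        ],
        rulesSpec=[
            vim.cluster.RuleSpec(
                operation="add",
                info=vim.cluster.VmHostRuleInfo(
                    name="site-a-affinity",
                    enabled=True,
                    mandatory=False,  # "should" rule, not "must"
                    vmGroupName="site-a-vms",
                    affineHostGroupName="site-a-hosts")),
        ])
    # modify=True merges this change with the existing cluster configuration
    return cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
```

The mandatory=False part is the important bit: a “should” rule keeps DRS from load balancing across the inter-site link in normal operation, while still allowing HA to restart VMs in the surviving site when an entire site fails.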

