
Yellow Bricks

by Duncan Epping


vSphere

VMworld Video: vSphere 6.7 Clustering Deep Dive

Duncan Epping · Sep 3, 2018 ·

As all videos from VMworld are posted (and nicely listed by William), I figured I would share the session Frank Denneman and I presented. It ended up in the Top 10 Sessions on Monday, which is always a great honor. We had a lot of positive feedback and comments, thanks for that! Most importantly, it was a lot of fun to be up on stage at VMworld talking about this content again after roughly six years away. For those who missed it, watch it here:

https://s3-us-west-1.amazonaws.com/vmworld-usa-2018/VIN1249BU.mp4

I also very much enjoyed the book signing session at the Rubrik booth with Niels and Frank. I believe Rubrik gave away around 1000 copies of the book. Hoping we can repeat this huge success in EMEA, but more on that later. If you haven’t picked up the book yet and won’t be at VMworld Europe, consider picking it up through Amazon; the e-book is only 14.95 USD.


UI Confusion: VM Dependency Restart Condition Timeout

Duncan Epping · Sep 3, 2018 ·

Various people have asked me about this, and while I have written about it before, it was always buried in a longer article, which makes it difficult to find. When specifying the restart priority or restart dependency, you can specify when the next batch of VMs should be powered on. Is that when the VMs are scheduled for power-on, when VMware Tools reports them as running, or when the application heartbeat reports the application as up?

In most cases, customers appear to go for either “powered on” or the “VMware Tools heartbeat”. But what happens when one of the VMs in the batch is not successfully restarted? Well, HA waits… For how long? That depends:

In the UI you can specify how long HA needs to wait by using the option called “VM Dependency Restart Condition Timeout”. This is the time-out in seconds used when one (or more) VMs can’t be restarted. So we initiate the restart of the group, and we start the next batch either when the first batch is successfully restarted or when the time-out has been exceeded. By default, the time-out is 600 seconds, and you can override this in the UI.

What is confusing about this setting is the name: “VM Dependency Restart Condition Timeout”. Does this time-out apply to “Restart Priority”, to “Restart Dependency”, or maybe both? The answer is simple: it only applies to “Restart Priority”. Restart Dependency is a hard rule, a must rule, which means there’s no time-out; when you use restart dependency, we wait until all VMs are restarted. Yes, the UI is confusing, as the option mentions “dependency” where it should really say “priority”. I have reported this to engineering and PM, and hopefully it will be fixed in one of the upcoming releases.
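To make the semantics concrete, here is a small Python sketch (illustrative only, not HA's actual implementation) of how restart-priority batches behave: each batch is held until its VMs meet the chosen restart condition or the 600-second default time-out expires, whichever comes first. The VM names and the `ready` check are hypothetical.

```python
def restart_in_priority_order(batches, ready, timeout=600):
    """Sketch of HA restart-priority semantics (illustrative, not HA code).

    Each batch is powered on, then held until every VM in it satisfies the
    chosen restart condition -- ready(vm) -- or until `timeout` seconds of
    (simulated) waiting elapse, whichever comes first. Only then does the
    next batch start.
    """
    started, total_wait = [], 0.0
    for batch in batches:
        started.extend(batch)            # initiate restart of this batch
        waited = 0.0
        # poll once per simulated second until ready or timed out
        while waited < timeout and not all(ready(vm) for vm in batch):
            waited += 1.0
        total_wait += waited             # a stuck batch delays us at most `timeout`
    return started, total_wait

# "app2" never reports ready, so the second batch is held for the full
# 600-second default before the restart sequence would move on.
started, total_wait = restart_in_priority_order(
    [["db"], ["app1", "app2"]], ready=lambda vm: vm != "app2")
print(started, total_wait)   # ['db', 'app1', 'app2'] 600.0
```

With Restart Dependency there would be no `timeout` at all: the loop would simply block until every VM in the batch is ready.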

Must-read white paper: Persistent Memory performance with vSphere 6.7

Duncan Epping · Aug 14, 2018 ·

Today I noticed this whitepaper titled “Persistent Memory Performance on vSphere 6.7”. An intriguing topic for sure, as it is relatively new and something I haven’t encountered too much in the field. Yes, I usually talk about Persistent Memory, aka NVDIMMs, in my talks, but then it typically relates to vSAN. I have not seen too many publications from VMware on this topic, so I figured I would share this publication with you:

  • Persistent Memory Performance in vSphere 6.7 – https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/performance/pmem-vsphere67-perf.pdf
    Persistent memory (PMEM) is a new technology that has the characteristics of memory but retains data through power cycles. PMEM bridges the gap between DRAM and flash storage. PMEM offers several advantages over current technologies like:

    • DRAM-like latency and bandwidth
    • CPU can use regular load/store byte-addressable instructions
    • Persistence of data across reboots and crashes

The paper starts with a brief intro and then explains the different modes in which PMEM can be used, either as a “disk” (vPMEMDisk) or surfaced up to the guest OS as an NVDIMM (vPMEM). With the latter option, there’s also the ability to have some form of application awareness, which is referred to as the 3rd mode (vPMEM-aware).
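To get a feel for what “byte-addressable load/store access” means for an application in vPMEM mode, here is a minimal Python sketch. It uses an ordinary file as a stand-in for a DAX-mapped PMEM region; on real hardware the mapping would target the NVDIMM directly and flushing would use CPU cache-flush instructions rather than msync. The file name is just a placeholder.

```python
import mmap
import os
import struct

PATH = "pmem_demo.bin"   # stand-in for a file on a DAX-mounted PMEM volume
SIZE = 4096

with open(PATH, "wb") as f:          # create and size the backing "device"
    f.truncate(SIZE)

# Store phase: plain byte-addressable memory writes, no write() syscall
# on the data path -- this is the vPMEM access model.
with open(PATH, "r+b") as f:
    mm = mmap.mmap(f.fileno(), SIZE)
    struct.pack_into("<Q", mm, 0, 42)    # store a 64-bit value at offset 0
    mm.flush()                           # msync here; real PMEM uses CPU flushes
    mm.close()

# Load phase: a fresh mapping still sees the data. Persistence across
# power cycles is exactly the property PMEM adds over DRAM.
with open(PATH, "r+b") as f:
    mm = mmap.mmap(f.fileno(), SIZE)
    value = struct.unpack_from("<Q", mm, 0)[0]
    mm.close()

os.remove(PATH)
print(value)
```

A vPMEM-aware application would add its own crash-consistency logic (ordering stores and flushes) on top of this basic model.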

I am not going to copy and paste the findings, as the paper has a lot of interesting data and you should go through it. One thing I found most interesting is the huge decrease in latency. Anyway, read the paper and get familiar with persistent memory / NVDIMMs, as this technology will start changing the way we design HCI platforms in the future and cater for low latency / high throughput applications in traditional environments.

What happened to MaxCostPerEsx41DS? It doesn’t seem to work in vSphere 6.x?

Duncan Epping · Aug 13, 2018 ·

Today I received a question which caught me by surprise: someone who had upgraded from vSphere 5.0 noticed that during an SDRS Maintenance Mode the setting MaxCostPerEsx41DS did not work. This setting limits the number of active Storage vMotions on a single datastore, which you can imagine is desirable when you are “limited” in terms of performance. I was a bit surprised, as I had not heard that these settings had changed at all. A quick search on internal pages and externally did not deliver any results either. After a discussion with some support folks and some more digging, I found a reference to a naming change. Not surprising I guess, but as of vSphere 6.0 the setting is called MaxCostPerEsx6xDS. So if you would like to limit the number of Storage vMotions active at the same time, please note the name change.
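For a sense of how the setting throttles concurrency, here is a back-of-the-envelope sketch of the cost model it participates in. The figures are the commonly cited defaults (a Storage vMotion adds a cost of 16 against a per-datastore limit of 128); treat them as assumptions and verify against your own build.

```python
# Commonly cited defaults for the vMotion cost model -- assumptions to verify.
SVMOTION_DATASTORE_COST = 16   # cost one Storage vMotion adds to a datastore
DEFAULT_MAX_COST_PER_DS = 128  # default value of MaxCostPerEsx6xDS

def max_concurrent_svmotions(max_cost_per_ds):
    """Operations are admitted while their summed cost stays under the limit."""
    return max_cost_per_ds // SVMOTION_DATASTORE_COST

print(max_concurrent_svmotions(DEFAULT_MAX_COST_PER_DS))  # 8 with the default
print(max_concurrent_svmotions(32))                       # lowering the limit
                                                          # throttles to 2
```

In other words, reducing MaxCostPerEsx6xDS is how you tell the scheduler that a datastore can sustain fewer simultaneous Storage vMotions.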

For more background on this topic I would like to refer to Frank’s excellent blog on this topic here.

You asked for it: vSphere 6.7 Clustering Deep Dive ebook, now available!

Duncan Epping · Aug 10, 2018 ·

We knew when we released the paper version of the book that many would yell: what about an e-book? Although sales numbers of the Host Deep Dive and previous Clustering Deep Dive books have shown that by far most people prefer a printed copy, we decided to go ahead and create an ebook as well. Unfortunately, it is not as simple as uploading a PDF or an MS Word file. We had to spend evenings reformatting the book in an e-book authoring tool, compiling it, reviewing it, fixing issues, compiling again, etc. Nevertheless, it is done!

So what we did is we uploaded it to Amazon and made it available for 14.95 USD, or whatever that roughly converts to in your local currency in your local store. We also noticed there is a bundling option, so as soon as the ebook and the paper copy are linked, you can buy the ebook alongside the paper copy for only 2.99 USD. (Linking the book may still take a couple of days; we’ve initiated the process with Amazon and are waiting for them to complete it.)

You wanted it, so go out and pick it up, right before the weekend! Note that both the ebook and the paper version are available right now; we are working on linking the books so you can get a nice deal on both versions. I would also highly recommend picking up the Host Deep Dive books, and while you are at it, pick up the VDI guide as well; it is an excellent read! Amazon links are on the right side for your convenience.

About the Author

Duncan Epping is a Chief Technologist and Distinguished Engineering Architect at Broadcom. Besides writing on Yellow-Bricks, Duncan is the co-author of the vSAN Deep Dive and the vSphere Clustering Deep Dive book series. Duncan is also the host of the Unexplored Territory Podcast.
