
Yellow Bricks

by Duncan Epping


How do I configure an HA vpxd.das advanced setting?

Duncan Epping · Nov 7, 2012 ·

On the community forums someone asked a question about how to set “config.vpxd.das.electionWaitTimeSec”. I was looking at the documentation and it is indeed not really clear on what, where, or how to set an HA vpxd.das advanced setting. This KB article kind of explains it, but let me summarize and simplify it.

There are various sorts of advanced settings, but three in particular are relevant for HA:

  • das.* –> Cluster level advanced setting.
  • fdm.* –> FDM host level advanced setting (FDM = Fault Domain Manager = vSphere HA)
  • vpxd.* –> vCenter level advanced setting.

How do you configure these?

  • Cluster Level
    • In the vSphere Client: Right click your cluster object, click “edit settings”, click “vSphere HA” and hit the “Advanced Options” button.
    • In the Web Client: Click “Hosts and Clusters”, click your cluster object, click the “Manage” tab, click “Settings” and “vSphere HA”, hit the “Edit” button
  • FDM Host Level
    • Open up an SSH session to your host and edit “/etc/opt/vmware/fdm/fdm.cfg”
  • vCenter Level
    • In the vSphere Client: Click “Administration” and “vCenter Server Settings”, click “Advanced Settings”
    • In the Web Client: Click “vCenter”, click “vCenter Servers”, select the appropriate vCenter Server and click the “Manage” tab, click “Settings” and “Advanced Settings”
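To make the das.* / fdm.* / vpxd.* split above a bit more tangible, here is a small illustrative sketch (not a VMware tool, just my own helper for this post) that classifies an HA advanced setting by its prefix and tells you where it needs to be configured:

```python
# Illustrative helper: map an HA advanced setting to the place where
# it must be configured, following the das.* / fdm.* / vpxd.* split.

SCOPES = {
    "das.": "Cluster level (cluster 'Advanced Options' in the vSphere/Web Client)",
    "fdm.": "FDM host level (edit /etc/opt/vmware/fdm/fdm.cfg on the host)",
    "vpxd.": "vCenter level (vCenter Server 'Advanced Settings')",
}

def ha_setting_scope(name: str) -> str:
    """Return where the given HA advanced setting should be set."""
    for prefix, scope in SCOPES.items():
        if name.startswith(prefix):
            return scope
    raise ValueError(f"not a recognised HA advanced setting: {name}")

print(ha_setting_scope("das.isolationaddress0"))
print(ha_setting_scope("vpxd.das.electionWaitTimeSec"))
```

So “vpxd.das.electionWaitTimeSec” is a vCenter-level setting, despite the “das” in its name.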

By the way, this KB article also lists all the relevant HA advanced settings… it might be worth reading as well. I hope this helps you configure your HA vpxd.das advanced setting.

VMFS File Sharing Limits increased to 32

Duncan Epping · Nov 6, 2012 ·

I was reading this white paper about VMware View 5.1 and VMFS File Locking today. It mentions the 8-host cluster limitation for VMware View with regard to linked clones and points to VMFS file sharing limits as the cause. While this is true in a way (VMware View 5.1 is indeed limited to 8-host clusters for linked clones on VMFS datastores), the explanation doesn’t cover all the details or reflect the current state of vSphere / VMFS. (Although there is a fair bit of detail in there about VMFS prior to vSphere 5.1.)

What the paper doesn’t mention is that in vSphere 5.1 this “file sharing limit” has been increased from 8 to 32 for VMFS datastores. Cormac Hogan wrote about this a while ago. So to be clear, VMFS today is fully capable of sharing a file with 32 hosts in a cluster. VMware View unfortunately doesn’t support that yet, but VMware vCloud Director 5.1, for instance, does support it today.

I still suggest reading the white paper, as it does help you get a better understanding of VMFS and View internals!

VXLAN basics and use cases (when / when not to use it)

Duncan Epping · Nov 2, 2012 ·

I have been getting so many hits on my blog for VXLAN that I figured it was time to expand a bit on what I have written about so far. My first blog post was about Configuring VXLAN, the steps required to set it up in your vSphere environment. As I had many questions about the physical requirements, I followed up with an article about exactly that: VXLAN Requirements. Now I am seeing more and more questions around where and when VXLAN would be a great fit, so let’s start with some VXLAN basics.

The first question I would like to answer is: what does VXLAN enable you to do?

In short, and I am trying to make it as simple as I possibly can here… VXLAN allows you to create a logical network for your virtual machines across different networks. More technically speaking, you can create a layer 2 network on top of layer 3. VXLAN does this through encapsulation. Kamau Wanguhu wrote some excellent articles about how this works, and I suggest you read those if you are interested. (VXLAN Primer Part 1, VXLAN Primer Part 2) On top of that I would also highly recommend Massimo’s Use Case article, some really useful info in there! Before we continue, I want to emphasize that you could potentially create 16 million networks using VXLAN; compare this to the ~4000 VLANs and you understand why this technology is important for the software-defined datacenter.
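To give you a feel for what the encapsulation actually looks like, here is a minimal sketch based on the VXLAN header layout in RFC 7348: the original layer 2 frame gets an 8-byte VXLAN header carrying a 24-bit VNI, which then rides inside UDP/IP across the layer 3 network. (This is purely illustrative; the real work is of course done by the VTEPs, not by Python.) The 24-bit VNI is also exactly where the 16 million number comes from.

```python
import struct

VXLAN_FLAGS = 0x08  # "I" flag set: the VNI field is valid (RFC 7348)

def vxlan_encapsulate(vni: int, inner_frame: bytes) -> bytes:
    """Prepend an 8-byte VXLAN header to an inner Ethernet frame."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # flags (1B) + reserved (3B) + VNI (3B) + reserved (1B)
    header = struct.pack("!B3s3sB", VXLAN_FLAGS, b"\x00" * 3,
                         vni.to_bytes(3, "big"), 0)
    return header + inner_frame

def vxlan_decapsulate(packet: bytes) -> tuple[int, bytes]:
    """Strip the VXLAN header; return (vni, original L2 frame)."""
    flags, _, vni_bytes, _ = struct.unpack("!B3s3sB", packet[:8])
    assert flags & VXLAN_FLAGS, "VNI flag not set"
    return int.from_bytes(vni_bytes, "big"), packet[8:]

frame = b"\xff" * 12 + b"\x08\x00" + b"payload"  # fake Ethernet frame
vni, recovered = vxlan_decapsulate(vxlan_encapsulate(5001, frame))
print(vni, recovered == frame)  # 5001 True
print(2**24)                    # 16777216 segments vs ~4000 VLANs
```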

Where does VXLAN fit in and where doesn’t it (yet)?

First of all, let’s start with a diagram.

[Figure: vxlan basics - 01]

In order for the VM in Cluster A, which has “VLAN 1” for the virtual machine network, to talk to the VM in Cluster B (using VLAN 2), a router is required. This by itself is not overly exciting, and typically everyone will be able to implement it using a router or layer 3 switching device. In my example I have 2 hosts per cluster just to simplify the picture, but imagine this being a huge environment, which is exactly why many VLANs are created: to restrict the failure domain / broadcast domain. But what if I want VMs in Cluster A to be in the same domain as the VMs in Cluster B? Would I go around and start plumbing all my VLANs to all my hosts? Just imagine how complex that would get, and fairly quickly. So how would VXLAN solve this?

Again, diagram first…

[Figure: vxlan basics - 02]

Now you can see a new component in there, in this case labeled “VTEP”. This stands for VXLAN Tunnel End Point. As Kamau explained in his post, and I am going to quote him here as it is spot on…

The VTEPs are responsible for encapsulating the virtual machine traffic in a VXLAN header as well as stripping it off and presenting the destination virtual machine with the original L2 packet.

This allows you to create a new network segment: a layer 2 network on top of layer 3. But what if you have multiple VXLAN wires? How does a VM on VXLAN Wire A communicate with a VM on VXLAN Wire B? Traffic will flow through an Edge device, vShield Edge in this case, as you can see in the diagram below.

[Figure: vxlan basics - 03]

So how about applying this cool new VXLAN technology to an SRM infrastructure or a stretched cluster infrastructure? Well, there are some caveats and constraints (right now) that you will need to know about; some of you might have already spotted one in the previous diagram. I have had these questions come up multiple times, which is why I want to get this out in the open.

  1. In the current version you cannot “stitch” VXLAN wires together across multiple vCenter Servers, or at least this is not supported.
  2. In a stretched cluster environment a VXLAN implementation could lead to traffic tromboning.

So what do I mean by traffic tromboning? (Also explained in this article by Omar Sultan.) Traffic tromboning means that you could potentially have traffic flowing between sites because of the placement of a vShield Edge device. Let’s depict it to make it clear; I stripped this down to the bare minimum, leaving VTEPs, VLANs, etc. out of the picture as it is complicated enough.

In this scenario we have two VMs, both sitting in Site A, and in Cluster A to be more specific… even on the same host! Now when these VMs want to communicate with each other, they will need to go through their Edge device as they are on different wires, represented by different colors in this diagram. However, the Edge device sits in Site B. So for these VMs to talk to each other, traffic will flow through the Edge device in Site B and then come back to Site A, to the exact same host. Yes indeed, there is an overhead associated with that. With two VMs that is probably minor; with thousands of VMs it could be substantial. Hence I wouldn’t recommend it in a stretched environment.

[Figure: vxlan basics - 04]
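Just to put some hypothetical numbers on the tromboning overhead (these are my own illustrative figures, not measurements): with the Edge in the remote site, a packet between two VMs on the same host crosses the inter-site link twice, once to reach the Edge and once to come back.

```python
# Illustrative sketch: latency penalty of a remote Edge device for
# two VMs on the SAME host, whose wires are joined by that Edge.

def one_way_latency_ms(inter_site_ms: float, edge_is_remote: bool) -> float:
    """One VM-to-VM packet: with a local Edge the detour is negligible;
    with a remote Edge the packet crosses the inter-site link twice."""
    return 2 * inter_site_ms if edge_is_remote else 0.0

print(one_way_latency_ms(5.0, edge_is_remote=False))  # 0.0
print(one_way_latency_ms(5.0, edge_is_remote=True))   # 10.0
```

So with a hypothetical 5 ms inter-site link, every packet between these neighbors picks up roughly 10 ms, and that adds up quickly across thousands of VMs.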

Before anyone asks though: yes, VMware is fully aware of these constraints and caveats and is working very hard to solve them, but for now… I personally would not recommend using VXLAN for SRM or stretched infrastructures. So where does it fit?

A few use cases have already been mentioned in this post, but let’s recap. First and foremost, the software-defined datacenter: being able to create new networks on the fly (for instance through vCloud Director, or vCenter Server) adds a level of flexibility which is unheard of. Then there are environments which are closing in on the 4000 VLAN limitation. (On some platforms this limit is even lower.) Another fit is sites where each cluster has a given set of VLANs assigned which are not shared across clusters, while there is a requirement to place VMs across clusters in the same segment.

I hope this helps…

Bandwidth requirements for long distance vMotion

Duncan Epping · Oct 31, 2012 ·

I received a question a while back about the bandwidth requirements for long distance vMotion, aka live migration across distance. I was digging through some of the KBs around stretched clusters and must say they weren’t really clear, or at least not consistently clear…

Thanks everyone. Is Long Distance vMotion still requiring a minimum of 622 (1Gb) in current versions? /cc @duncanyb

— Kurt Bales (@networkjanitor) October 3, 2012

I contacted support and asked for a statement, but have had no clear response yet. The following is what I have been able to validate when it comes to “long distance vMotion”. This is not a VMware support statement, just my observations:

  • Maximum latency of 5 milliseconds (ms) RTT (round trip time) between hosts participating in vMotion, or 10 ms RTT between hosts when using Enterprise Plus (Metro vMotion feature).
  • <update>As of 2013 the official required bandwidth is 250Mbps per concurrent vMotion</update>
  • Source and destination vSphere hosts must have a network interface on the same IP subnet and broadcast domain.
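Back-of-the-envelope arithmetic (my own, not a VMware formula) shows what that 250 Mbps per concurrent vMotion means in practice, ignoring memory dirtying, pre-copy iterations, and any compression:

```python
# Rough estimate: seconds to push a VM's vRAM over the link at the
# stated 250 Mbps per concurrent vMotion. Ignores page dirtying,
# pre-copy convergence and compression, so treat it as a lower bound.

def vmotion_transfer_seconds(vram_gib: float, link_mbps: float = 250.0) -> float:
    bits = vram_gib * 1024**3 * 8          # vRAM size in bits
    return bits / (link_mbps * 1_000_000)  # Mbps = 10^6 bits per second

print(round(vmotion_transfer_seconds(8), 1))   # ~274.9 s for an 8 GiB VM
```

In other words, at 250 Mbps an 8 GiB VM takes well over four minutes for the initial copy alone, which is why bandwidth still matters even without a hard requirement.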

There no longer appear to be any direct bandwidth requirements as far as I have been able to validate. The only requirements VMware seems to have are the ones mentioned above around maximum tolerated latency and layer 2 adjacency. If this statement changes I will update this blog post accordingly.

PS: There are various KBs that mention 622Mbps, but there are also various that don’t list it. I have requested our KB team to clear this up.

Working with CA signed certificates in your vSphere environment?

Duncan Epping · Oct 30, 2012 ·

Are you working with CA signed certificates in your vSphere environment? You might want to check out these recently published KB articles. They will definitely help you understand the whole process of installing and configuring them. (Thanks Simon for pointing these out!)

  • Configuring CA signed certificates for VMware vCenter Server 5.0.x
    http://kb.vmware.com/kb/2015421
  • Configuring certificates signed by a Certificate Authority (CA) for vCenter Server Appliance 5.1
    http://kb.vmware.com/kb/2036744
  • Configuring CA signed SSL certificates for vSphere Update Manager in vCenter 5.1
    http://kb.vmware.com/kb/2037581
  • Creating certificate requests and certificates for the vCenter 5.1 components
    http://kb.vmware.com/kb/2037432
  • Configuring CA signed SSL certificates for vCenter SSO in vCenter 5.1
    http://kb.vmware.com/kb/2035011
  • Configuring CA signed SSL certificates for the Web Client and Log Browser in vCenter 5.1
    http://kb.vmware.com/kb/2035010
  • Configuring CA signed SSL certificates for the Inventory service in vCenter 5.1
    http://kb.vmware.com/kb/2035009
  • Configuring OpenSSL for installation and configuration of CA signed certificates in the vSphere environment
    http://kb.vmware.com/kb/2015387
  • Configuring CA signed certificates for ESXi 5.x hosts
    http://kb.vmware.com/kb/2015499
  • Configuring CA signed certificates for vCenter 5.1
    http://kb.vmware.com/kb/2035005
  • Implementing CA signed SSL certificates with vSphere 5.0
    http://kb.vmware.com/kb/2015383
  • Implementing CA signed SSL certificates with vSphere 5.1
    http://kb.vmware.com/kb/2034833


About the Author

Duncan Epping is a Chief Technologist and Distinguished Engineering Architect at Broadcom. Besides writing on Yellow-Bricks, Duncan is the co-author of the vSAN Deep Dive and the vSphere Clustering Deep Dive book series. Duncan is also the host of the Unexplored Territory Podcast.

Copyright Yellow-Bricks.com © 2025