
Yellow Bricks

by Duncan Epping


distributed vswitch

Configuring VXLAN…

Duncan Epping · Oct 3, 2012 ·

Yesterday I got an email about configuring VXLAN. I was in the middle of re-doing my lab anyway, so I figured this would be a nice exercise. First I downloaded vShield Manager and migrated from regular virtual switches to a Distributed Switch environment. I am not going to go into any depth on how to do this, as it is fairly straightforward: just right-click the Distributed Switch, select “Add and Manage Hosts” and follow the steps. If you are wondering what the use case for VXLAN would be, I recommend reading Massimo’s post.

VXLAN is an overlay technique that encapsulates layer 2 in layer 3. If you want to know how this works technically you can find the specs here. I wanted to create a virtual wire in my cluster. Just assume this is a large environment with many clusters and many virtual machines. In order to provide some form of isolation I would need to create a lot of VLANs and make sure these are all plumbed to the respective hosts… As you can imagine, that is not as flexible as one would hope. To solve this problem VMware (and partners) introduced VXLAN. VXLAN enables you to create a virtual network, aka a virtual wire. This virtual wire is a layer 2 segment, and while the hosts might be in different networks the VMs can still belong to the same layer 2 segment.
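
For the curious, the encapsulation itself is straightforward. Below is a minimal sketch of the VXLAN framing in Python (based on the public VXLAN draft, not on VMware’s implementation): an 8-byte header carrying the 24-bit segment ID (VNI) is prepended to the inner Ethernet frame, and the VTEP (on ESXi, the host’s vmknic) then wraps the result in UDP/IP.

```python
import struct

VXLAN_FLAG_VALID_VNI = 0x08  # the "I" bit: the VNI field is valid

def vxlan_encap(vni: int, inner_frame: bytes) -> bytes:
    """Prepend the 8-byte VXLAN header to an inner layer 2 frame.

    Layout: 1 byte flags, 3 bytes reserved, 3 bytes VNI, 1 byte reserved.
    A real VTEP would then wrap this in UDP/IP before putting it on the wire.
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("the VNI is a 24-bit value")
    header = struct.pack("!B3s3sB", VXLAN_FLAG_VALID_VNI, b"\x00" * 3,
                         vni.to_bytes(3, "big"), 0)
    return header + inner_frame

def vxlan_decap(packet: bytes):
    """Strip the VXLAN header and return (vni, inner_frame)."""
    if not packet[0] & VXLAN_FLAG_VALID_VNI:
        raise ValueError("VNI flag not set")
    return int.from_bytes(packet[4:7], "big"), packet[8:]

# A fake inner frame: dst MAC, src MAC, ethertype, payload
frame = b"\xff" * 6 + b"\xaa" * 6 + b"\x08\x00" + b"hello"
vni, inner = vxlan_decap(vxlan_encap(5001, frame))
print(vni)  # → 5001
```

The important bit is the 24-bit VNI: that is the segment ID you will assign a pool of later, and it is what keeps the traffic of different virtual wires apart even though they all travel over the same layer 3 transport network.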

I deployed the vShield virtual appliance as this is a requirement for using VXLAN. After deploying it you will need to configure the network. This is fairly simple:

  • Login to the console of the vShield Manager (admin / default)
  • type “enable” (password is “default”)
  • type “setup” and provide all the required details
  • log out

Now the vShield Manager virtual appliance is configured and you can go to “https://<ip address of vsm>/”. You can log in using admin / default. Now you will need to link this vShield Manager to vCenter Server:

  • Click “Settings & Reports” in the left pane
  • Now you should be on the “Configuration” tab in the “General” section
  • Click edit on the “vCenter Server” section and fill out the details (ip or hostname / username / password)

Now you should see some shiny new objects in the left pane when you start unfolding the inventory tree:

Now let’s get VXLAN’ing

  • Click your “datacenter object” (in my case that is “Cork”)
  • Click the “Network virtualization” tab
  • Click “Preparation” -> “Connectivity“
  • Click “Edit” and tick your “cluster(s)” and click “Next“
  • I changed the teaming policy to “failover” as I have no port channels configured on my physical switches; depending on your infrastructure, make the required changes and click “Finish“

An agent will now be installed on the hosts in your cluster. This is a “vib” package that handles VXLAN traffic, and a new vmknic is created. This vmknic is created with DHCP enabled; if needed in your environment you can change this to a static address. Let’s continue with finalizing the preparation.

  • Click “Segment ID“
  • Enter a pool of Segment IDs. Note that if you have multiple vShield Managers this pool needs to be unique, as a segment ID will be assigned to each virtual wire and you don’t want virtual wires with the same ID. I used “5000 – 5900”
  • Fill out the “Multicast address range“, I used 225.1.1.1-225.1.4.254
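
As a quick sanity check on the numbers above (a hypothetical helper, not part of the vShield UI or API), you can verify that a candidate pool fits in the 24-bit VNI space and count how many multicast groups your range provides:

```python
import ipaddress

def check_vxlan_pools(seg_start, seg_end, mc_first, mc_last):
    """Sanity-check a VXLAN segment ID pool and multicast address range."""
    if not 0 < seg_start <= seg_end < 2 ** 24:
        raise ValueError("segment IDs (VNIs) are 24-bit values")
    if seg_start <= 4094:
        # keep the pool clear of the VLAN ID range; vShield pools
        # typically start at 5000
        raise ValueError("keep the pool above the VLAN ID range")
    first = ipaddress.IPv4Address(mc_first)
    last = ipaddress.IPv4Address(mc_last)
    if not (first.is_multicast and last.is_multicast and first <= last):
        raise ValueError("the range must fall within 224.0.0.0/4")
    return seg_end - seg_start + 1, int(last) - int(first) + 1

# The pools used in this post:
print(check_vxlan_pools(5000, 5900, "225.1.1.1", "225.1.4.254"))  # → (901, 1022)
```

Here there are more multicast groups (1022) than possible wires (901), so every virtual wire should be able to get its own group for flooded traffic; with fewer groups than wires, groups would simply be shared between wires.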

Now that we have prepared the hosts we can begin creating a virtual wire. First we will create a network scope; the scope is the boundary of your virtual network. If you have 5 clusters and want them to have access to the same virtual wires you will need to make them part of the same network scope.

  • Click “network scopes“
  • Click the “green plus” symbol to “add a network scope“
  • Give the scope a name and select the clusters you want to add to this network scope
  • Click “OK“

Now that we have defined our virtual network boundary, aka the “network scope”, we can create a virtual wire. The virtual wire is what it is all about: a layer 2 segment.

  • Click “networks“
  • Click the “green plus” symbol to “create a VXLAN network“
  • Give it a name
  • Select the “network scope“

In the example below you see two virtual wires…

Now you have created a new virtual wire aka VXLAN network. You can add virtual machines to it by simply selecting the network in the NIC config section. The question of course remains: how do you get in and out of the network? You will need a vShield Edge device, so let’s add one…

  • Click “Edges“
  • Click the “green plus” symbol to “add an Edge“
  • Give it a name
  • If you have this functionality, I would suggest ticking the “HA” tickbox so that the Edge is deployed in an “active/passive” fashion
  • Provide credentials for the Edge device
  • Select the uplink interface for this Edge
  • Specify the default gateway
  • Add the HA options, I would leave this set to the default
  • And finish the config

If you have a virtual wire that needs to be connected to an Edge (more than likely), make sure to connect the virtual wire to the Edge by going back to “Networks”. Select the wire, click the “actions dial”, click “Connect to Edge” and select the correct Edge device.

Now that you have a couple of wires you can start provisioning VMs or migrating VMs to them. Simply add them to the right network during the provisioning process.

Migrating to a VDS switch when using etherchannels

Duncan Epping · Feb 21, 2012 ·

Last week at PEX I had a discussion around migrating to a Distributed Switch (VDS) and some of the challenges one of our partners faced. During their migration they ran into a lot of network problems, which made them decide to change back to a regular vSwitch. They were eager to start using the VDS but could not take the risk of running into those problems again.

I decided to grab a piece of paper and we quickly drew out the currently implemented architecture at this customer site and discussed the steps the customer took to migrate. The steps described were exactly the steps documented here and there was absolutely nothing wrong with them. At least not at first sight… When we dove into their architecture a bit more a crucial keyword popped up: etherchannels. Why is this a problem? Well, look at this process for a minute:

  • Create Distributed vSwitch
  • Create dvPortgroups
  • Remove vmnic from vSwitch0
  • Add vmnic to dvSwitch0
  • Move virtual machines to dvSwitch0 port group

Just imagine you are using an etherchannel and traffic is being load balanced, but now you have one “leg” of the etherchannel ending up in dvSwitch0 and one in vSwitch0. Not a pretty sight indeed. In this scenario the migration path would need to be:

  1. Create Distributed vSwitch
  2. Create dvPortgroups
  3. Remove all the ports from the etherchannel configuration on the physical switch
  4. Change vSwitch load balancing from “IP Hash” to “Virtual Port ID”
  5. Remove vmnic from vSwitch0
  6. Add vmnic to dvSwitch0
  7. Move virtual machines to dvSwitch0 port group

For the second vmnic (and any subsequent NICs) only steps 5 and 6 need to be repeated. After this the dvPortgroup can be configured to use “IP-hash” load balancing and the physical switch ports can be added to the etherchannel configuration again. You can repeat this for additional portgroups and VMkernel NICs.
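
To see why a half-migrated etherchannel hurts, remember what IP-hash does: each flow is pinned to an uplink by hashing the source and destination IP (the commonly documented scheme XORs the two addresses and takes the result modulo the number of active uplinks; the function below is a simplified illustration, not VMware’s actual code). Different flows deliberately land on different legs, so with one leg in vSwitch0 and one in dvSwitch0 the physical switch still treats both ports as one channel while the host no longer does:

```python
import ipaddress

def ip_hash_uplink(src_ip: str, dst_ip: str, n_uplinks: int) -> int:
    """Simplified IP-hash teaming: XOR the source and destination IP,
    then take the result modulo the number of active uplinks."""
    src = int(ipaddress.IPv4Address(src_ip))
    dst = int(ipaddress.IPv4Address(dst_ip))
    return (src ^ dst) % n_uplinks

# Two flows from the same VM end up on different etherchannel legs:
print(ip_hash_uplink("10.0.0.10", "10.0.0.1", 2))  # → 1 (vmnic1)
print(ip_hash_uplink("10.0.0.10", "10.0.0.2", 2))  # → 0 (vmnic0)
```

This is also why steps 3 and 4 come first: once the physical ports are out of the channel and the vSwitch is back on “Virtual Port ID”, each vmnic is an independent link again and can safely be moved one at a time.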

I do want to point out that I am personally not a huge fan of etherchannel configurations in virtual environments. One of the reasons is the complexity, which often leads to problems when things are misconfigured; the migration scenario above is an example of when things can go wrong. If you don’t have any direct requirement to use IP-hash, use Load Based Teaming on your VDS instead; believe me, it will make your life easier in the long run!

Distributed vSwitches, go Hybrid or go Distributed?

Duncan Epping · Apr 21, 2011 ·

Yesterday I was answering some questions in the VMTN Forums when I noticed that someone referred to my article about Hybrid vs full Distributed vSwitch architectures. This article is almost two years old and definitely in desperate need of a revision. Back in 2009, when Distributed vSwitches were just introduced, my conclusion in this discussion was:

If vCenter fails there’s no way to manage your vDS. For me personally this is the main reason why I would most likely not recommend running your Service Console/VMkernel portgroups on a dvSwitch. In other words: Hybrid is the way to go…

As with many things, my conclusion/opinion was based on my experience with the Distributed vSwitch, and I guess you could say it was based on how comfortable I was with the Distributed vSwitch and the hardware we used at that point in time. Since then much has changed, and as such it is time to revise my conclusion.

In many environments today Converged Networks are a reality, which basically means fewer physical NICs but more bandwidth. Fewer physical NICs results in having fewer options with regards to the way you architect your environment. Do you really need your management network on a vSwitch? What is the impact of not having it on a vSwitch? I guess it all comes down to what kind of risks you are willing to take, but also how much risk is actually involved. I started rethinking this strategy and came to the conclusion that the amount of risk you are taking isn’t as big as we all once thought it was.

What is the actual issue when running vCenter virtually, connected to a Distributed Switch? I can hear many of you repeat the quote from above, “there’s no way to manage your vDS”, but think about it for a second… Do you really need to manage your vDS in a scenario where vCenter is down? And if so, wouldn’t you normally want to get your management platform up and running first before you start making changes? I know I would. But what if you really, really need to make changes to your management network and vCenter isn’t available? (That would be a major corner-case scenario by itself, but anyway…) Couldn’t you just remove one NIC port from your dvSwitch and temporarily create a vSwitch with your management network? Yes you can! So what is the impact of that?

I guess it all comes down to what you are comfortable with and having proper operational procedures! But why? Why not just stick to Hybrid? I guess you could, but then again, why not benefit from what dvSwitches have to offer? Especially in a converged network environment, being able to use dvSwitches will make your life a bit easier from an operational perspective. On top of that you will have that great dvSwitch-only Load Based Teaming at your disposal: load balancing without the need to resort to IP-hash. I guess my conclusion is: go Distributed… There is no need to be afraid if you understand the impact and risks and mitigate these with solid operational procedures.
