
Yellow Bricks

by Duncan Epping


vswitch

How do I change the name of a vSwitch with vSphere 7.0 U2 and higher?

Duncan Epping · Jun 14, 2021 ·

Some of you may have noticed it already, and some may not, but a lot of the configuration details that were traditionally stored in “esx.conf” have now moved elsewhere. The question is: where did it go? Well, it went into the “configstore”, and with the configstore also comes a command-line interface called “configstorecli”. I briefly mentioned this in a previous post a few weeks ago. Today I noticed a question on VMTN about renaming a vSwitch on a host, and how you can do this now that the vSwitch details have disappeared from esx.conf.

I figured I should be able to test this in my lab and write a short how-to. So here we go.

You can look at the current network configuration for your vSwitch using the following command:

configstorecli config current get -c esx -g network_vss -k switches

Then what you can do is dump the info into a JSON file, which you can then edit:

configstorecli config current get -c esx -g network_vss -k switches > vswitch.json

The file will look something like this:

[Screenshot: the vswitch.json file opened in an editor]
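Purely as an illustration (the exact structure of the file may differ per build, so always work from the file you just dumped rather than from this sketch), the part you are after is the name of the switch, something along these lines:

{
   "name": "vSwitch0",
   ...
}

Change the value of the name field to the new vSwitch name, leave everything else untouched, and save the file.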

After you have made the required changes, you load the configuration back using the JSON file:

configstorecli config current set -c esx -g network_vss -k switches -i vswitch.json --overwrite

I changed the name of my vSwitch0 to “vSwitchDuncan” and the change worked! Do note though, you will need to reboot the host before you see the change!
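After the reboot, a quick way to verify the rename from the ESXi shell is something like this (grep is only used here to narrow the output down to the name fields):

esxcli network vswitch standard list | grep -i name
configstorecli config current get -c esx -g network_vss -k switches | grep -i name

Both should now report the new name, “vSwitchDuncan” in my case, instead of vSwitch0.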

For those who prefer video content, I also created a quick demo which shows the above process.

Virtualization networking strategies…

Duncan Epping · Dec 18, 2014 ·

I was asked a question on LinkedIn about the different virtualization networking strategies from a host point of view. The question came from someone who recently had 10GbE infrastructure introduced into his data center, where the network was originally architected with 6 x 1Gbps NICs carved up into three bundles of 2 x 1Gbps. Three types of traffic each used their own pair of NICs: Management, vMotion and VM. 10GbE was added to the current infrastructure, and the question which came up was: should I use 10GbE while keeping my 1Gbps links for things like management, for instance? The classic model has a nice separation of network traffic, right?

Well, I guess from a visual point of view the classic model is nice, as it provides a lot of clarity around which type of traffic uses which NIC and which physical switch port. However, in the end you typically still end up leveraging VLANs, so on top of the physical separation you also provide a logical separation. This logical separation is the most important part if you ask me. Especially when you leverage Distributed Switches and Network IO Control, you can create a great, simple architecture which is fairly easy to implement and maintain, both from a physical and a virtual point of view. Yes, from a visual perspective it may be a bit more complex, but I think the flexibility and simplicity you get in return definitely outweigh that. I definitely would recommend, in almost all cases, to keep it simple. Converge physically, separate logically.

vSwitch Traffic Shaping, what is what?

Duncan Epping · Jun 30, 2014 ·

I was troubleshooting an issue where vMotion would constantly time out, and I had no clue where it was coming from, so I started digging. In this case the environment was using a regular vSwitch and 10GbE networking. When I took a closer look I noticed that some form of traffic shaping was applied, as unfortunately the Distributed vSwitch was not an option for this environment. Traffic shaping was enabled, the peak value was specified, and the rest was left at the default values… and unfortunately this is exactly what caused the problem.

So when it comes to vSwitch Traffic Shaping, what is what? There are 3 settings you can set per portgroup:

  • Average Bandwidth – specified in Kbps
  • Peak Bandwidth – specified in Kbps
  • Burst Size – specified in KB

So if you have a 10Gbps NIC port for your traffic, this means you have a total of 10,485,760 Kbps. When you enable vSwitch Traffic Shaping, by default “Average Bandwidth” is set to 100,000 Kbps, “Peak Bandwidth” to 100,000 Kbps and “Burst Size” to 102,400 KB. So what does that mean? Well, it means that if you enable it and do not change the values, the traffic is limited to 100,000 Kbps. 100,000 Kbps is… yes, roughly 100Mbps, even less to be more precise: 97.6Mbps. Which is not a lot indeed, and not even a supported configuration for vMotion.

So what if I simply bump up the Peak Bandwidth to, let’s say, 5Gbps, as I do not want vMotion to ever consume more than half of the NIC port? (Note: vSwitch traffic shaping only applies to egress, aka outbound, traffic.) Well, setting the peak bandwidth sounds like it may do something, but probably not what you would hope for, as this is how the settings are applied:

By default the traffic stream will get what is specified by “Average Bandwidth”. However, it is possible to exceed this when needed by specifying a higher “Peak Bandwidth” value. Your traffic will be allowed to burst until the value of “Burst Size” has been exceeded. In other words, in the above example where only Peak Bandwidth is increased, this would lead to the following: by default the traffic is limited to 100Mbps, however it can peak to 5Gbps, but only for 100MB worth of data traffic. As you can imagine, in the case of vMotion, where the full memory content of a VM is transferred, that 100MB is hit within a second, after which the vMotion process is throttled back to 100Mbps, the remainder of the VM memory takes ages to copy, and the vMotion eventually times out.

So if you apply traffic shaping using your vSwitch, make sure to think through the numbers. In the above scenario for instance, specifying a 5Gbps Average and Peak would have been the desired configuration.
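For completeness, if you prefer to script this rather than click through the UI, something along these lines should do the trick with esxcli. Note this is just a sketch: the portgroup name “vMotion” is an example, the bandwidth values are specified in Kbps and the burst size in KB, and it is worth double-checking the exact option names with --help on your build:

esxcli network vswitch standard portgroup policy shaping set --portgroup-name=vMotion --enabled=true --avg-bandwidth=5242880 --peak-bandwidth=5242880 --burst-size=102400

Here 5,242,880 Kbps corresponds to the 5Gbps mentioned above for both Average and Peak, and the Burst Size is simply left at the 102,400 KB default.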

Back to Basics: Using the vSphere 5.1 Web Client to configure a vSwitch

Duncan Epping · Sep 13, 2012 ·

In the previous articles we created a Datacenter, a cluster, and added hosts to it. Now that we have done that, we can start finalizing the configuration. This is just one example out of the many ways to configure networking for an ESXi host, and I kept it really, really simple. This is not following any best practices, I just wanted to show some of the steps. In this scenario I have 4 network cards per host and I have VLANs for each network segment. Separating traffic through the use of VLANs is highly recommended and is a best practice.

Let’s configure the virtual switch first. I will use a “standard vSwitch” for now. In this case we will set all vmnics to active on the vSwitch and control NIC usage on a portgroup level. [Read more…]
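As a side note, the article uses the Web Client, but the same kind of standard vSwitch with a VLAN-tagged portgroup can also be created from the ESXi shell. A rough sketch (vSwitch1, vmnic2, the portgroup name and the VLAN ID are just example values, not the ones used in the article):

esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup add --portgroup-name=VM-VLAN20 --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup set --portgroup-name=VM-VLAN20 --vlan-id=20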

Distributed vSwitches and vCenter outage, what’s the deal?

Duncan Epping · Feb 8, 2012 ·

Recently my colleague Venky Deshpande released a whitepaper around VDS Best Practices. This white paper describes various architectural options when adopting a VDS-only strategy, a strategy of which I can see the benefits. On Facebook multiple people made comments around why this would be a bad practice instead of a best practice. Here are some of the comments:

“An ESX/ESXi host requires connectivity to vCenter Server to make vDS operations, such as powering on a VM to attach that VM’s network interface.”

“The issue is that if vCenter is a VM and changes hosts during a disaster (like a total power outage) and then is unable to grant itself a port to come back online.”

I figured the best way to debunk all these myths was to test it myself. I am confident that it is no problem, but I wanted to make sure that I could convince you. So what will I be testing?

  • Network connectivity after Powering-on a VM which is connected to a VDS while vCenter is down.
  • Network connectivity restore of vCenter attached to a VDS after a host failure.
  • Network connectivity restore of vCenter attached to a VDS after HA has moved the VM to a different host and restarted it.

Before we start I think it is useful to rehash something: the different types of port binding for dvPortgroups, which are described in more depth in this KB:

  • Static binding – Port is immediately assigned and reserved for the VM when it is connected to the dvPortgroup through vCenter. This happens during the provisioning of the virtual machine!
  • Dynamic binding – Port is assigned to a virtual machine only when the virtual machine is powered on and its NIC is in a connected state. The Port is disconnected when the virtual machine is powered off or the virtual machine’s NIC is disconnected. (Deprecated in 5.0)
  • Ephemeral binding – Port is created and assigned to a virtual machine when the virtual machine is powered on and its NIC is in a connected state. The Port is deleted when the virtual machine is powered off or the virtual machine’s NIC is disconnected. Ephemeral Port assignments can be made through ESX/ESXi as well as vCenter.

Hopefully this makes it clear straight away that there should be no problem at all: “Static binding” is the default, and even when vCenter is down, a VM which has been provisioned before vCenter went down can easily be powered on and will have network access. I don’t mind spending some lab hours on this, so let’s put this to the test. Let’s use the defaults and see what the results are.

First I made sure all VMs were connected to a dvSwitch. I powered off a VM and checked its network settings, and this is what it revealed… a port was already assigned, even though the VM was powered off.

That is not the only place where you can see port assignments; you can also verify it on the VDS’s “Ports” tab.

Now let’s test this, as that is ultimately what it is all about. First test: network connectivity after powering on a VM which is connected to a VDS while vCenter is down.

  • Connected VM to dvPortgroup with static binding (this is the default and best practice)
  • Power off VM
  • Power off vCenter VM
  • Connect vSphere Client to host
  • Power on VM
  • Ping VM –> Positive result
  • You can even see on the command line that this VM uses its assigned port:
    esxcli network vswitch dvs vmware list
    Client: w2k8-001.eth0
    DVPortgroup ID: dvportgroup-516
    In Use: true
    Port ID: 137

Second test: network connectivity restore of vCenter attached to a VDS after a host failure.

  • Connected vCenter VM to dvPortgroup with static binding (this is the default and best practice)
  • Power off vCenter VM
  • Connect vSphere Client to host
  • Power on vCenter VM
  • Ping vCenter VM –> Positive result

Third test: network connectivity restore of vCenter attached to a VDS after HA has moved the VM to a different host and restarted it.

  • Connected vCenter VM to dvPortgroup with static binding (this is the default and best practice)
  • Yanked the cable out of the ESXi host on which vCenter was running
  • Opened a ping to the vCenter VM
  • HA re-registered the vCenter VM on a different host and powered it on
    • The re-register / power-on took roughly 45 – 60 seconds
  • Ping vCenter VM –> Positive result

I hope this debunks some of those myths floating around. I am the first to admit that there are still challenges out there, and these will hopefully be addressed soon, but I can assure you that your virtual machines will regain their connection as soon as they are powered on, through HA or manually… yes, even when your vCenter Server is down.

 

