I had a question last week about multi-NIC vMotion. The question was whether multi-NIC vMotion is a multi-initiator / multi-target solution, meaning that, if available, multiple NICs are used on both the source and the destination for the vMotion / migration of a VM. Yes it is!
It is a complex process, as vMotion needs to be able to handle mixes of 10GbE and 1GbE NICs.
When we start the process we will check, from the vCenter side, each host and determine the total combined pool of bandwidth available for vMotion. In other words, if you have 2x1GbE NICs and 1x10GbE NIC, then that host has a pool of 12Gbps worth of bandwidth. We will do the same for both the source and the destination host. Then, we will walk down each host’s list of vMotion vmknics, pairing off NICs until we’ve exhausted the bandwidth pool.
There are many combinations possible, but let’s discuss a few just to provide a better idea of how this works:
- If the source host has 1x1GbE NIC and the dest 1x1GbE NIC, we’ll open one connection between these two hosts.
- If the source has 3x1GbE NICs and the destination 1x10GbE NIC, then we’ll open one connection from each source-side 1GbE NIC to the destination’s 10GbE NIC – so a total of three socket connections all to the dest’s single 10GbE NIC.
- If the source has 15x1GbE NICs and the destination 1x10GbE NIC and 5x1GbE NICs, then we’ll direct the first 10 source-side 1GbE NICs to connect to the dest’s 10GbE NIC, then the remaining five source-side 1GbE vmknics will pair off with the dest’s five 1GbE vmknics – 15 connections in all.
Keep in mind that if the hosts are mismatched, we will create connections between vmknics until one of the sides is “depleted”. In other words, if the source has 2x1GbE NICs and the destination 1x1GbE NIC, only one connection would be opened.
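The pairing behaviour described above can be sketched with a small greedy model. To be clear, this is an illustrative assumption about the behaviour the post describes, not VMware’s actual implementation:

```python
def plan_connections(src_nics, dst_nics):
    """Pair source vMotion vmknics with destination NIC capacity.

    src_nics / dst_nics are lists of NIC speeds in Gbps. Each source NIC
    opens at most one connection, consuming its full speed from whichever
    destination NIC still has spare capacity (fastest first). Pairing
    stops once one side of the bandwidth pool is depleted.
    """
    remaining = sorted(dst_nics, reverse=True)  # fill fastest dest NICs first
    connections = []
    for speed in src_nics:
        for i, capacity in enumerate(remaining):
            if capacity >= speed:               # dest NIC still has room
                remaining[i] -= speed
                connections.append((speed, i))
                break                           # one connection per source NIC
    return connections

# The scenarios from the post:
len(plan_connections([1], [1]))                  # 1x1GbE -> 1x1GbE: 1 connection
len(plan_connections([1, 1, 1], [10]))           # 3x1GbE -> 1x10GbE: 3 connections
len(plan_connections([1] * 15, [10] + [1] * 5))  # 15 connections in all
len(plan_connections([1, 1], [1]))               # dest depleted after 1 connection
```

Note how the mismatched 2x1GbE-to-1x1GbE case yields a single connection: once the destination’s 1Gbps of capacity is consumed, the second source NIC has nowhere to connect.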
Martin says
Once again, thanks for yet another great post. Credit to the person who asked the question for thinking outside the box.
So, in essence, from a design point of view it becomes crucial to determine which vmnics to assign to vMotion.
What I’d like to know is how the mixed config affects the actual speed. For example, will the migration be faster if the source has 2x10GbE and the destination has 10x1GbE? Or vice versa?
Koen Warson says
Hi,
Nice topic. Can you also confirm that if you use combinations like 2x10GbE on both source and destination, the number of simultaneous vMotions goes from 8 to 16? Or is this not the case?
Kind regards,
Koen Warson
Stefan Jagger (@StefanJagger) says
It’s good to have it clarified, although I kind of expected this to happen; it is VMware we’re talking about here. 🙂
What happens if you have a 10Gb NIC at each host and are using NIOC to determine the speed? Would vMotion auto-negotiate the speed between the two endpoints and transfer at the maximum available?
Also, what happens if one host has a 10Gb NIC with NIOC and the other host 5Gb NICs… transfer at max available up until the 5Gb limit?
Thanks
Stefan
Duncan Epping says
@Martin: I have not seen any tests around performance difference between 10×1 vs 1×10 to be honest. I would expect that the outcome would be the same.
@Koen: 8 per host max
@Stefan: Not sure I understand your question. But one side would never be able to push more than 5Gbps anyway?
Paul says
Do you need to set up a VMkernel port per 1Gb uplink on a host? So if I have a vMotion switch with 3x1Gb uplinks, do I need three VMkernel ports with the failover order applied as such:
vSwitch2
vMotion1 – vmnic1 active / vmnic2 unused / vmnic3 unused
vMotion2 – vmnic1 unused / vmnic2 active / vmnic3 unused
vMotion3 – vmnic1 unused / vmnic2 unused / vmnic3 active
Or is the setup completely different? Or do I only need a single VMkernel port with vMotion enabled for the entire switch to use all uplinks?
Duncan Epping says
@Paul: http://www.yellow-bricks.com/2011/09/17/multiple-nic-vmotion-in-vsphere-5/
Dwayne Lessner says
Hi Duncan
If you are creating a vMotion network on a blade chassis and want to keep the virtual machine traffic isolated to the chassis, does each vmk port on Host A need to be able to talk to both vmkernel ports on Host B? I was hoping that it could still work with vmk1 on A talking to vmk1 on B and vmk2 on A talking to vmk2 on B. This doesn’t appear to be the behaviour though.
I have an HP chassis with two isolated networks.
Dwayne
@dlink7
Erik Bussink says
Hiya Duncan,
Just like Dwayne, I’m interested in multi-NIC vMotion in a blade chassis. Does it make sense to spread the vMotion vmk interfaces across two separate VLANs, so that Host A Port 1 speaks to Host B Port 1 and Host A Port 2 speaks to Host B Port 2?
Would we still get the benefits of Multi NIC vMotion ?
Thanks Duncan
Duncan Epping says
I am not sure I understand the question.
Bilal Hashmi says
@Erik I think I sort of understand what you are asking. Basically, what you are saying is that on your chassis you obviously don’t have the same number of physical connections as the number of NICs each host has in that chassis.
For example, let’s assume you have 16 blades in a chassis with 4 NICs on each blade/host. Your chassis only has two 10Gb interfaces, and those get shared across all NICs on all blades/hosts. You want to know if you stay with 4 NICs per blade, using two for VM traffic and two for mgmt and vMotion (of course with vMotion on a separate network), whether you can still benefit from multi-NIC vMotion, and whether putting vMotion on two different VLANs would help. That’s your question? Correct me if I am wrong.
The way I understand multi-NIC vMotion is that you leverage different NICs (hence “multi NIC”), so why do you want two VLANs for your vMotion? Assuming each NIC on every blade is 1GbE, what you could do is have three port groups: one for mgmt, one for vmk1 and one for vmk2. You can configure mgmt with whatever teaming you prefer, configure vmk1 with nic1 active and nic2 standby, and finally configure vmk2 with nic2 active and nic1 standby.
If both host A and host B are set up that way, when you vMotion a machine from host A to B, vmk1 and vmk2 on both hosts get engaged, giving you a total 2Gbps pipe, all on one VLAN. Keep in mind the key is to set different NICs as active on each vmknic so you can utilize all of them. If you have the same NIC team setting on all your vmk interfaces, you will only use one NIC. So make sure nic1 is active on vmk1 but standby on vmk2, nic2 is active on vmk2 but standby on vmk1, and so on…
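For what it’s worth, the inverted active/standby layout described above can be sketched with esxcli on a standard vSwitch. The vSwitch, portgroup and vmnic names below are assumptions for illustration, and vMotion still has to be enabled on each vmknic afterwards:

```shell
# Two vMotion portgroups on an existing standard vSwitch (names assumed)
esxcli network vswitch standard portgroup add --portgroup-name=vMotion-1 --vswitch-name=vSwitch1
esxcli network vswitch standard portgroup add --portgroup-name=vMotion-2 --vswitch-name=vSwitch1

# Invert the active/standby order per portgroup so each vmknic uses a different NIC
esxcli network vswitch standard portgroup policy failover set \
  --portgroup-name=vMotion-1 --active-uplinks=vmnic1 --standby-uplinks=vmnic2
esxcli network vswitch standard portgroup policy failover set \
  --portgroup-name=vMotion-2 --active-uplinks=vmnic2 --standby-uplinks=vmnic1

# One VMkernel interface per portgroup
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion-1
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=vMotion-2
```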
Now if your vMotion network is so loud that you need it to be on a separate network because stuff keeps moving around very often, then I guess DRS is going nuts for a reason; it must be time to look at your DRS aggressiveness and whether you need more or beefier hosts, etc.
Hope this helps; of course Duncan knows way more about this than I do, so if he thinks what I stated is not correct, go with what he says 🙂
Bilal Hashmi says
EDIT from previous reply:
Now if your vMotion network is so loud that you need it to be on a separate network because stuff keeps moving around very often, …
What I meant to say is:
Now if your vMotion network is so loud that you need two separate VLANs just for vMotion because stuff keeps moving around…..
Erik Bussink says
Hiya Duncan & Bilal,
I did my testing of multi-NIC over multi-VLAN today and here are the results… http://www.bussink.ch/?p=262
I wanted two different VLANs so that the vMotion traffic on a Cisco UCS chassis with two Fabric Interconnects does not flow from one Fabric Interconnect over the network switches to the other Fabric Interconnect.
Shady El-malatawey says
Dear Duncan,
First of all, I’d like to thank you so much for this marvelous post.
Second, in your post:
“If the source has 15x1GbE NICs and the destination 1x10GbE NIC and 5x1GbE NICs, then we’ll direct the first 10 source-side 1GbE NICs to connect to the dest’s 10GbE NIC, then the remaining five source-side 1GbE vmknics will pair off with the dest’s five 1GbE vmknics – 15 connections in all”
I need to understand this in more detail. What I know is that we create several port groups on the DVS, each with only one dvUplink active and the others standby, and this will work normally with the 5x1GbE NICs on both hosts (hosts A & B). For the rest of the NICs on host A (10x1GbE NICs) and the 1x10GbE NIC on host B, how can I do the configuration to make all ten NICs on A direct their traffic to the 10GbE NIC on host B? If I use the normal method with a vMotion port group, I’ll use only one dvUplink group, which will contain only one NIC from host A and the 10GbE NIC on host B.