About a week ago I wrote this article about VSAN and Network IO Control. I originally wrote a longer article that contained more options for configuring the network part, but decided to leave a section out for simplicity's sake. I figured that as more questions came in I would publish the rest of the content I had developed. I guess now is the time to do so.
In the configuration described below we will have two 10GbE uplinks teamed (often referred to as "etherchannel" or "link aggregation"). Thanks to the capabilities of the physical switch, the configuration of the virtual layer will be extremely simple. We will take the following recommended minimum bandwidth requirements into consideration for this scenario:
- Management Network –> 1GbE
- vMotion VMkernel –> 5GbE
- Virtual Machine PG –> 2GbE
- Virtual SAN VMkernel interface –> 10GbE
When the physical uplinks are teamed (Multi-Chassis Link Aggregation), the Distributed Switch load balancing mechanism must be configured as one of:
- IP-Hash
- LACP
All portgroups and VMkernel interfaces on the same Distributed Switch must be configured with either LACP or IP-Hash, depending on the type of physical switch used. Please note that all uplinks should be part of the same etherchannel / LAG. Do not try to create anything fancy here, as an incorrectly configured team, whether physical or virtual, can and probably will lead to more downtime!
- Management Network VMkernel interface = LACP / IP-Hash
- vMotion VMkernel interface = LACP / IP-Hash
- Virtual Machine Portgroup = LACP / IP-Hash
- Virtual SAN VMkernel interface = LACP / IP-Hash
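One property of both IP-Hash and LACP worth keeping in mind is that each source/destination IP pair is pinned to a single uplink, so an individual flow never exceeds the bandwidth of one physical port. A rough sketch of the idea in Python (the exact hash ESXi uses may differ; this is an illustration of the concept, not the actual algorithm):

```python
import ipaddress

def ip_hash_uplink(src: str, dst: str, num_uplinks: int) -> int:
    """Pick an uplink index by hashing the source/destination IP pair.
    Illustrative only; the real ESXi hash may differ."""
    a = int(ipaddress.ip_address(src))
    b = int(ipaddress.ip_address(dst))
    return (a ^ b) % num_uplinks

# The same IP pair always lands on the same uplink, so a single
# VMkernel flow is limited to one physical port's bandwidth.
uplink = ip_hash_uplink("10.0.0.11", "10.0.0.42", 2)
```

This is exactly why the worst-case planning below assumes a single port: even with two uplinks in the team, any given flow only ever uses one of them.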
As various traffic types will share the same uplinks, we also want to make sure that no traffic type can crowd out the others during times of contention. For that we will use the Network IO Control shares mechanism.
For this exercise we will work under the assumption that only 1 physical port is available and that all traffic types share that same physical port. Taking this worst-case scenario into consideration guarantees performance even in a failure scenario. By taking this approach we can ensure that Virtual SAN always has 50% of the bandwidth at its disposal, while leaving the remaining traffic types sufficient bandwidth to avoid a potential self-inflicted DoS. When both uplinks are available this equates to 10GbE; when only one uplink is available, the bandwidth is cut in half to 5GbE. It is recommended to configure shares for the traffic types as follows:
| Traffic Type | Shares | Limit |
|---|---|---|
| Management Network | 20 | n/a |
| vMotion VMkernel Interface | 50 | n/a |
| Virtual Machine Portgroup | 30 | n/a |
| Virtual SAN VMkernel Interface | 100 | n/a |
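During contention, NIOC divides the uplink's bandwidth proportionally to the configured share values. A minimal sketch of that arithmetic (the function and dictionary here are illustrative, not a vSphere API):

```python
def bandwidth_per_type(shares: dict, link_gbe: float) -> dict:
    """Divide a link's bandwidth proportionally to NIOC share values.
    Only relevant under contention; idle bandwidth is not reserved."""
    total = sum(shares.values())
    return {name: link_gbe * value / total for name, value in shares.items()}

# Share values from the table above.
shares = {
    "Management": 20,
    "vMotion": 50,
    "Virtual Machine": 30,
    "Virtual SAN": 100,
}

# Worst case: one surviving 10GbE uplink, all traffic types contending.
alloc = bandwidth_per_type(shares, link_gbe=10)
# Virtual SAN receives 100 out of 200 total shares = 50% = 5GbE
```

Note that shares only kick in when the link is saturated; when there is no contention, any traffic type can burst beyond its proportional slice.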
The following diagram depicts this configuration scenario.
Taj says
Why not use LBT ?
Duncan Epping says
Because this blogpost is about etherchannels 🙂
Joe says
Duncan,
I have been playing around with NIOC and I cannot get it to go over 100 shares for any individual setting. Am I missing something?
Thanks
Duncan Epping says
Argh, I wrote this article up as a draft and was planning on changing that… but completely forgot. Will fix it asap.
Chris says
For production, should I use dedicated physical ports for:
- Management Network –> 1GbE
- vMotion VMkernel –> 5GbE
- Virtual Machine PG –> 2GbE
- Virtual SAN VMkernel interface –> 10GbE
Or is it OK to have a 1GbE physical management port and then put everything else on two physical 10GbE ports?
David says
Hi Duncan,
do you know if it's required to use a switch like with the VSA, or can I use direct connections for the vSAN links? If I have just 3 hosts, it might make sense to use 2x 10G ports on each host and save the cost of an expensive switch. Because the VSA requires switches, the cost is often a problem for SMBs planning to use 10G…
Greetings from Hamburg,
David
Duncan Epping says
Not sure to be honest, I would suspect so.
chris says
Our SE said a switch was required if we wanted support….