
Yellow Bricks

by Duncan Epping


VSAN and Network IO Control / VDS part 2

Duncan Epping · Nov 12, 2013

About a week ago I wrote this article about VSAN and Network IO Control. I originally wrote a longer article that contained more options for configuring the network part, but decided to leave a section out for simplicity's sake. I figured that as more questions came in I would publish the rest of the content I had developed. I guess now is the time to do so.

In the configuration described below we will have two 10GbE uplinks teamed (often referred to as “etherchannel” or “link aggregation”). Due to the capabilities of the physical switch, the configuration of the virtual layer will be extremely simple. We will take the following recommended minimum bandwidth requirements into consideration for this scenario:

  • Management Network –> 1GbE
  • vMotion VMkernel –> 5GbE
  • Virtual Machine PG –> 2GbE
  • Virtual SAN VMkernel interface –> 10GbE

When the physical uplinks are teamed (Multi-Chassis Link Aggregation), the Distributed Switch load balancing mechanism is required to be configured as:

  1. IP-Hash
    or
  2. LACP

It is required to configure all portgroups and VMkernel interfaces on the same Distributed Switch with either LACP or IP-Hash, depending on the type of physical switch used. Please note that all uplinks should be part of the same etherchannel / LAG. Do not try to create anything fancy here, as a team that is incorrectly configured, physically or virtually, can and probably will lead to more downtime! A scripted example follows the list below.

  • Management Network VMkernel interface = LACP / IP-Hash
  • vMotion VMkernel interface = LACP / IP-Hash
  • Virtual Machine Portgroup = LACP / IP-Hash
  • Virtual SAN VMkernel interface = LACP / IP-Hash
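
For those who would rather script this than click through the Web Client, here is a minimal pyVmomi (Python) sketch of how the IP-Hash option could be applied to a single distributed portgroup. Treat it as a sketch under assumptions, not the definitive way: the helper name set_ip_hash_teaming is mine, the dvpg argument is assumed to be a portgroup object you already looked up over an existing vCenter connection, and the LACP/LAG variant is not shown.

```python
from pyVmomi import vim


def set_ip_hash_teaming(dvpg):
    """Reconfigure an existing distributed portgroup (hypothetical helper) to use
    'Route based on IP hash' for its uplink teaming policy."""
    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
    # The configVersion has to match the current config or vCenter rejects the call.
    spec.configVersion = dvpg.config.configVersion

    port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
    teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy()
    teaming.policy = vim.StringPolicy(value="loadbalance_ip")  # IP-Hash
    port_config.uplinkTeamingPolicy = teaming
    spec.defaultPortConfig = port_config

    # Returns a Task object; wait on it (or check the Web Client) to confirm.
    return dvpg.ReconfigureDVPortgroup_Task(spec)
```

The same reconfiguration would then be applied to every portgroup and VMkernel portgroup on the Distributed Switch, so that all traffic types end up with the identical teaming policy as listed above.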

As various traffic types will share the same uplinks, we also want to make sure that no traffic type can push out other types of traffic during times of contention; for that we will use the Network IO Control shares mechanism.

We will work under the assumption that only 1 physical port is available and that all traffic types share that same physical port for this exercise. Taking a worst-case scenario into consideration will guarantee performance even in a failure scenario. By taking this approach we can ensure that Virtual SAN always has 50% of the bandwidth at its disposal (100 out of 200 total shares), while leaving the remaining traffic types with sufficient bandwidth to avoid a potential self-inflicted DoS. When both uplinks are available this equates to 10GbE; when only one uplink is available the bandwidth is cut in half: 5GbE. It is recommended to configure shares for the traffic types as follows (a quick sanity check of the math follows the table):

 

Traffic Type                      Shares   Limit
Management Network                20       n/a
vMotion VMkernel Interface        50       n/a
Virtual Machine Portgroup         30       n/a
Virtual SAN VMkernel Interface    100      n/a
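
To make the share math concrete, here is a small back-of-the-envelope calculation in plain Python (not VMware tooling, just arithmetic, and the helper name is mine): shares only come into play under contention, and each traffic type then receives its own shares divided by the sum of all shares active on the uplink.

```python
# Hypothetical helper purely to illustrate the arithmetic behind the table above.
shares = {
    "Management Network": 20,
    "vMotion VMkernel Interface": 50,
    "Virtual Machine Portgroup": 30,
    "Virtual SAN VMkernel Interface": 100,
}


def contended_bandwidth(shares, uplink_gbe):
    """Bandwidth per traffic type when all types contend on a single uplink."""
    total = sum(shares.values())  # 200 shares in this scenario
    return {name: uplink_gbe * value / total for name, value in shares.items()}


for name, gbe in contended_bandwidth(shares, uplink_gbe=10).items():
    print(f"{name}: {gbe:.1f} GbE")
# Virtual SAN gets 100/200 = 50%: 5 GbE on a single surviving 10GbE uplink,
# and effectively 10 GbE while both uplinks in the team are healthy.
```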

 

The following diagram depicts this configuration scenario.


Server, Software Defined, Storage, vSAN 5.5, network, network io control, virtual san, VMware, vsan, vSphere


Comments

  1. Taj says

    12 November, 2013 at 16:22

    Why not use LBT ?

    • Duncan Epping says

      12 November, 2013 at 16:36

      Because this blogpost is about etherchannels 🙂

  2. Joe says

    16 January, 2014 at 15:54

    Duncan,

I have been playing around with NIOC and I cannot get it to go over 100 shares for any individual setting. Am I missing something?

    Thanks

    • Duncan Epping says

      16 January, 2014 at 17:19

      Argh, I wrote this article up as a draft and was planning on changing that… but completely forgot. Will fix it asap.

  3. Chris says

    31 January, 2014 at 21:14

    For production, should I use dedicated physical ports for:

    ◾Management Network –> 1GbE
    ◾vMotion VMkernel –> 5GbE
    ◾Virtual Machine PG –> 2GbE
    ◾Virtual SAN VMkernel interface –> 10GbE

    Or is it OK to have a 1GbE physical management port and then put everything else on two physical 10GbE ports?

  4. David says

    12 March, 2014 at 16:32

    Hi Duncan,
do you know if it's required to use a switch like with the VSA, or can I use direct connections for the vSAN links? If I have just 3 hosts, it might make sense to use 2x 10G ports on each host and save the expensive switch. Because the VSA requires switches, the cost is often a problem for SMBs when planning to use 10G…

    Greetings from Hamburg,
    David

    • Duncan Epping says

      12 March, 2014 at 18:09

      Not sure to be honest, I would suspect so.

      • chris says

        12 March, 2014 at 22:23

        Our SE said a switch was required if we wanted support….
