Using a vSphere custom TCP/IP Stack for iSCSI

Duncan Epping · Jan 31, 2018

For the latest guidance on iSCSI and routed traffic/custom TCP/IP stacks I would like to refer you to storagehub, where all iSCSI best practices can be found.

I noticed a question today on an internal Slack channel about the use of custom TCP/IP stacks for iSCSI storage environments. Cormac and I recently updated a number of Core Storage white papers, one of them being the iSCSI Best Practices white paper. It appears that the little section about routing and custom TCP/IP stacks is difficult to find, so I figured I would share it here as well. The executive summary is simple: using custom TCP/IP stacks for iSCSI storage in vSphere is not supported.

What is nice, though, is that with vSphere 6.5 you can now set a gateway per VMkernel interface. Anyway, here is the blurb from the paper:

As mentioned before, for vSphere hosts, the management network is on a VMkernel port and therefore uses the default VMkernel gateway. Only one VMkernel default gateway can be configured on a vSphere host per TCP/IP Stack. You can, however, add static routes from the command line or configure a gateway for each individual VMkernel port.
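
To illustrate the static route option, here is a minimal sketch using esxcli; the subnet and gateway addresses are placeholders for your own environment:

    # Add a static route for a remote subnet via a specific gateway
    esxcli network ip route ipv4 add --network 10.10.20.0/24 --gateway 192.168.1.1

    # List the configured IPv4 routes to verify
    esxcli network ip route ipv4 list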

Setting a gateway at the individual VMkernel port level was introduced in vSphere 6.5 and allows for a bit more flexibility. The gateway for a VMkernel port can simply be defined using the vSphere Web Client during the creation of the VMkernel interface. It is also possible to configure it using esxcli, as sketched below. Note: at the time of writing, the use of a custom TCP/IP stack is not supported for iSCSI!
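
As a sketch of the esxcli approach on vSphere 6.5 or later (the interface name vmk1 and the addressing below are placeholders), the per-port gateway can be set as part of the VMkernel interface's IPv4 configuration:

    # Configure a static IPv4 address with an override gateway on an
    # existing VMkernel interface (vmk1 and the addresses are examples)
    esxcli network ip interface ipv4 set --interface-name=vmk1 --type=static \
        --ipv4=192.168.2.10 --netmask=255.255.255.0 --gateway=192.168.2.1

    # Verify the interface settings, including the per-port gateway
    esxcli network ip interface ipv4 get --interface-name=vmk1

With this in place, traffic from vmk1 uses 192.168.2.1 rather than the default gateway of the TCP/IP stack the interface belongs to.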

I hope that clarifies things, and makes this support statement easier to find.


Comments

  1. kamruddin chowdhury says

    31 January, 2018 at 20:38

    First of all, I beg your pardon for posting my query here, since it is not related to the custom TCP/IP stack.

    We are implementing hybrid vSAN 6.6 on Dell PowerEdge 730 servers. All the caching and capacity disks are attached to the same controller (PERC H730 Mini). Both the storage controller and the disks have a cache. I have disabled the controller cache. Should I also disable the disk cache? This decision is an urgent requirement, please.

    Thanks in advance.

  2. James Hess says

    31 January, 2018 at 23:23

    Even if it were ever supported… custom TCP/IP stacks, while a nice idea, have a startling limitation:
    you cannot reconfigure, edit, or remove them.

    Basic alterations are not supported, so when using custom TCP/IP stacks the host configuration has to be made in a certain order, and there are “settings” that cannot be changed through the lifecycle of the host.

    This is significant when configuring an environment where you need uniformity for ease of troubleshooting; for example, VMkernel port 8 (vmk8) must be on the exact SAME vMotion network for all hosts in the cluster. You have to fully plan and configure all your custom TCP/IP stacks in advance, and you can’t easily revise these settings later without deleting and re-creating all your VMkernel ports, which is kind of ridiculous…

    Temporarily re-assigning a vmk port to a different TCP/IP stack for troubleshooting purposes is also not an option.

    Thus, while custom TCP/IP stacks SHOULD be a useful feature, the implementation steals back the advantages.
