
Yellow Bricks

by Duncan Epping


iscsi

iSCSI IQN may have changed after upgrade to 7.0 U2

Duncan Epping · Jun 29, 2021 · 1 Comment

Last week I noticed some folks reporting an issue after upgrading from 7.0 U1 to 7.0 U2: they could no longer access their iSCSI datastores. I did some digging internally and found out there was a change in how we store the iSCSI IQN when the IQN is randomly generated. Note that this problem only exists for randomly generated IQNs, so if you have a custom-named iSCSI IQN you can stop reading here. If you have a random IQN and also have access control defined for your initiators, then you will want to read this before you upgrade.

Basically, what happens when you upgrade from 7.0 U1 to 7.0 U2 and use a randomly generated IQN is that the IQN is regenerated after a reboot. What does that look like? On VMTN, user Nebb2k8 posted this:

7U1d = iqn.1998-01.com.vmware:labesx06-4ff17c83

7U2a = iqn.1998-01.com.vmware:labesx07:38717532:64

As you can see, the format also changed.

So if you lose (or lost) access after the upgrade, you simply copy the newly generated IQN and add it to the access control list of your storage system for the LUNs it applies to. Make sure to remove the old IQNs. Another option, of course, is to configure the randomly generated IQN as a custom IQN; this is pretty straightforward, as shown below for “vmhba67”. You could create new IQNs, or you could re-use the old randomly generated IQNs if you want to keep them the same.

$ esxcli iscsi adapter get -A vmhba67

 vmhba67
   Name: iqn.1998-01.com.vmware:w1-hs3-n2503.eng.vmware.com:452738760:67

$ esxcli iscsi adapter set -A vmhba67 -n iqn.1998-01.com.vmware:w1-hs3-n2503.eng.vmware.com:452738760:67
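Once the array’s access list has been updated (or a custom IQN has been set), a rescan of the adapter should make the datastores reappear. A minimal sketch, again using “vmhba67”:

# confirm the adapter reports the IQN the array now expects
$ esxcli iscsi adapter get -A vmhba67 | grep "Name:"

# rescan the adapter so the LUNs and datastores show up again
$ esxcli storage core adapter rescan -A vmhba67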

If you would like to know more about this issue, make sure to read this KB article, or read this article by Jason Massae which also provides some PowerCLI code to get/set the IQN.

Using a vSphere custom TCP/IP Stack for iSCSI

Duncan Epping · Jan 31, 2018

For continued, up-to-date guidance on iSCSI and routed traffic/custom TCP/IP stacks I would like to refer you to storagehub; all iSCSI best practices can be found there.

I noticed a question today on an internal Slack channel about the use of custom TCP/IP stacks for iSCSI storage environments. Cormac and I updated a bunch of Core Storage white papers recently, one of them being the iSCSI Best Practices white paper. It appears that this little section about routing and custom TCP/IP stacks is difficult to find, so I figured I would share it here as well. The executive summary is simple: using custom TCP/IP stacks for iSCSI storage in vSphere is not supported.

What is nice, though, is that with vSphere 6.5 you can now set a gateway per VMkernel interface. Anyway, here’s the blurb from the paper:

As mentioned before, for vSphere hosts, the management network is on a VMkernel port and therefore uses the default VMkernel gateway. Only one VMkernel default gateway can be configured on a vSphere host per TCP/IP Stack. You can, however, add static routes from the command line or configure a gateway for each individual VMkernel port.

Setting a gateway at the granularity of an individual VMkernel port was introduced in vSphere 6.5 and allows for a bit more flexibility. The gateway for a VMkernel port can simply be defined using the vSphere Web Client during the creation of the VMkernel interface. It is also possible to configure it using esxcli. Note: at the time of writing, the use of a custom TCP/IP Stack is not supported for iSCSI!
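For those who prefer the command line, here is a minimal sketch of both options mentioned above: a static route and a per-VMkernel-port gateway. The interface name vmk2 and all addresses are placeholders, and the per-interface gateway option requires vSphere 6.5 or later.

# show the current IPv4 address, netmask and gateway per VMkernel interface
$ esxcli network ip interface ipv4 get

# add a static route to a remote iSCSI subnet
$ esxcli network ip route ipv4 add --gateway 192.168.10.1 --network 192.168.20.0/24

# set an interface-specific gateway on vmk2 (vSphere 6.5 and later)
$ esxcli network ip interface ipv4 set --interface-name=vmk2 --type=static --ipv4=192.168.10.11 --netmask=255.255.255.0 --gateway=192.168.10.1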

I hope that clarifies things, and makes this support statement easier to find.

vSphere and iSCSI storage best practices

Duncan Epping · Nov 1, 2017

And here’s the next paper I updated. This time it is the iSCSI storage best practices for vSphere. It seems that we have now overhauled most of the core storage white papers. You can find them all under core storage on storagehub.vmware.com, but for your convenience I will post the iSCSI links below as well:

  • Best Practices For Running VMware vSphere On iSCSI (web based reading)
  • Best Practices For Running VMware vSphere On iSCSI (pdf download)

One thing I want to point out, as it is a significant change compared to the last version of the paper, is the following: in the past vSphere did not support IPSec, so for the longest time it was not supported for iSCSI either. When reviewing all available material I noticed that, although vSphere now supports IPSec for IPv6 only, there was no statement around iSCSI.

So what does that mean for iSCSI? Well, as of vSphere 6.0 it is supported to have IPSec enabled on the vSphere Software iSCSI implementation, but only for IPv6 implementations and not for IPv4! Note, however, that there’s no data on the potential performance impact, and enabling IPSec could (I should probably say “will” instead of “could”) introduce latency / overhead. In other words, if you want to enable this, make sure to test the impact it has on your workload.
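If you do enable it, you can inspect what is configured from the ESXi shell. A minimal sketch (remember that IPsec on ESXi applies to IPv6 traffic only):

# list the configured IPsec security associations
$ esxcli network ip ipsec sa list

# list the configured IPsec security policies
$ esxcli network ip ipsec sp list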

Back to Basics: Using the vSphere 5.1 Web Client to configure iSCSI

Duncan Epping · Sep 14, 2012

In this article I will take you through the steps required to set up iSCSI using the vSphere 5.1 Web Client. In most iSCSI environments the VMware software iSCSI adapter is used, so that is what I will use. I already set up a storage VMkernel NIC in one of my previous posts, so read that first if you haven’t yet. Adding a software adapter can be done in a couple of simple steps (an esxcli equivalent is sketched after the list):

  • On the “Manage” section of your host click on “Storage”
  • Click the green “plus” and select “Software iSCSI adapter”
  • Click “OK”
  • Now a new adapter will be added to the “Storage Adapters” list

[Read more…] about Back to Basics: Using the vSphere 5.1 Web Client to configure iSCSI

Resolved: Slow booting of ESXi 5.0 when iSCSI is configured

Duncan Epping · Nov 6, 2011

My colleague Cormac posted an article about this already, but I figured it was important enough to rehash some of the content. As many of you have experienced, there was an issue with ESXi 5.0 in iSCSI environments: booting would take a fair amount of time due to an increased number of retries whenever creating a connection to the array failed.

This is what the log file would typically look like:

iscsid: cannot make a connection to 192.168.1.20:3260 (101,Network is unreachable)
iscsid: Notice: Reclaimed Channel (H34 T0 C1 oid=3)
iscsid: session login failed with error 4,retryCount=3
iscsid: Login Target Failed: iqn.1984-05.com.dell:powervault.md3000i.6002219000a14a2b00000000495e2886 [email protected] addr=192.168.1.20:3260 (TPGT:1 ISID:0xf) err=4
iscsid: Login Failed: iqn.1984-05.com.dell:powervault.md3000i.6002219000a14a2b00000000495e2886 [email protected] addr=192.168.1.20:3260 (TPGT:1 ISID:0xf) Reason: 00040000 (Initiator Connection Failure)

This is explained in KB 2007108, which also contains the download link. Make sure to download it and update your environment if you are running iSCSI.
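To check whether a host is already running the patched build, you can look at the version and installed VIBs from the shell; a minimal sketch:

# show the ESXi version and build number of this host
$ esxcli system version get

# list the installed VIBs (the patch will show up here once applied)
$ esxcli software vib list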

