A question we receive a lot is: what kind of zoning should be implemented for our storage solution? The answer is usually short and simple: at least single initiator zoning.
Single initiator zoning is something we have always recommended in the field (VMware PSO Consultants/Architects) and something that is clearly mentioned in our documentation… at least that’s what I thought.
On page 31 of the SAN Design and Deploy guide we clearly state the following:
When a SAN is configured using zoning, the devices outside a zone are not visible to the devices inside the zone. When there is one HBA or initiator to a single storage processor port or target zone, it is commonly referred to as single zone. This type of single zoning protects devices within a zone from fabric notifications, such as Registered State Change Notification (RSCN) changes from other zones. In addition, SAN traffic within each zone is isolated from the other zones. Thus, using single zone is a common industry practice.
That’s crystal clear, isn’t it? Unfortunately, there’s another document floating around, called “Fibre Channel SAN Configuration Guide”, which states the following on page 36:
- ESX Server hosts that use shared storage for virtual machine failover or load balancing must be in one zone.
- If you have a very large deployment, you might need to create separate zones for different areas of functionality. For example, you can separate accounting from human resources.
So which one is correct and which one isn’t? I don’t want any confusion around this. The first document, the SAN Design and Deploy guide, is correct. VMware recommends single initiator zoning. Of course, “single initiator / single target” would be even better, but single initiator is the bare minimum. Now let’s hope the VMware Tech Writers can get that document fixed…
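To make the recommendation concrete, here is a minimal sketch of what a single initiator / single target zone looks like on a Brocade Fabric OS switch. All WWNs, alias names, and the zone/config names below are hypothetical, and exact syntax may vary by FOS version:

```
# Create aliases for one host HBA port and one array target port
# (WWNs below are made up for illustration)
alicreate "esx01_hba0", "10:00:00:00:c9:aa:bb:01"
alicreate "arrayA_spa0", "50:06:01:60:41:e0:00:01"

# Single initiator / single target zone: exactly one HBA, one target port
zonecreate "z_esx01_hba0__arrayA_spa0", "esx01_hba0; arrayA_spa0"

# Add the zone to the configuration and enable it
cfgadd "prod_cfg", "z_esx01_hba0__arrayA_spa0"
cfgenable "prod_cfg"
```

The key property is that each zone contains exactly one initiator, so host HBAs never see each other; a single initiator / multiple target zone would simply list more target aliases in the same zone.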
Lode says
Thanks for clarifying, Duncan.
I also refer to the Brocade best practices (http://www.brocade.com/downloads/documents/white_papers/Zoning_Best_Practices_WP-00.pdf) when explaining zoning. They also recommend single initiator (and single target) zoning. For ports that carry both disk and tape traffic, they recommend creating two zones rather than overlapping the targets.
Jason Boche says
Single Initiator Zoning for sure.
That second example is just a poor choice of words which could (and should) be cleaned up.
I understand the point they were trying to get across – all hosts in a cluster should be zoned such that they can all see the same shared storage, but not necessarily using a single zone to accomplish it!
Paul says
Or, to put it another way, trust VMware PSO.
AFidel says
When you guys say single target are you talking about a single port on a controller or one zone per controller? Typically I have 4 zones per host, HostPortA->controllerA, HostPortA->controllerB, HostPortB->ControllerA, and HostPortB->ControllerB where ControllerA and ControllerB are aliases to the ports from that controller that are visible to the appropriate fabric.
Jason Boche says
@AFidel Single host initiator port to many storage targets is fine. The main thing is you don’t want host initiators talking to each other, thus you isolate them to their own zones.
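AFidel’s layout above might look roughly like this in Brocade FOS syntax for one fabric (the other fabric mirrors it with HostPortB); all alias names and WWNs are hypothetical:

```
# Aliases grouping each controller's ports visible on this fabric
# (WWNs are made up for illustration)
alicreate "ControllerA", "50:06:01:60:41:e0:00:01; 50:06:01:61:41:e0:00:01"
alicreate "ControllerB", "50:06:01:68:41:e0:00:01; 50:06:01:69:41:e0:00:01"
alicreate "HostPortA", "10:00:00:00:c9:aa:bb:01"

# One single-initiator zone per host port / controller pair
zonecreate "z_HostPortA__ControllerA", "HostPortA; ControllerA"
zonecreate "z_HostPortA__ControllerB", "HostPortA; ControllerB"
```

Each zone still has exactly one initiator, which is the property Jason describes; grouping a controller’s ports behind one alias just keeps the zone count manageable.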
AFidel says
Thanks Jason, that was my understanding of best practices from my training with various vendors and the sessions I attended at SNW, but I wanted to make sure I wasn’t missing something VMware-specific.
dconvery says
I remember the resounding “HUH?!?” that I let out when I originally read those docs. Any storage vendor worth its salt will have a best practices guide for vSphere implementations. I usually refer to those as well.
Most of the time, I see a “zoning by initiator” configuration. Each initiator will be in a zone that includes all of the target ports from a single array. I have always recommended separate zones for separate devices. I have also seen a “one-for-one” zoning configuration, where each zone has only one initiator and one target. But that is rare.
russellcorey says
Big proponent of single initiator/multiple target based zoning. One big zone is potentially fraught with peril.
One question I have, where is the practice that all systems in a cluster together should be in a zone together coming from? Is it just a misunderstanding people have with how FCP zoning works?
Doug says
I’ll go with option 2 here. There is a LOT of misunderstanding about how FC zoning works. Single initiator has always been the best practice from the beginning.
Brian says
So is it overkill to do 1:1 (initiator to target)? If I had 2 fabrics and 2 storage arrays, I would end up with two zones per fabric per server (4 zones total).
That’s what I have always done just to be safe.
David Francis says
Single-initiator zones can lead to fewer config mistakes than one big zone with all your hosts.
It’s also good to balance your HBAs across all the front-end controller ports on your array.
Doug says
Brian, four zones is what I’d have in your scenario — maybe more. Overkill, IMHO, is to have a zone per frontend port on the same storage array. Some people do it, but I’m not convinced that there is a benefit there that justifies the number of zones to manage.
Guido says
@Paul, trust Duncan 😉
dguyadeen says
The best practices for SAN zoning have always been single-initiator zoning. As per my FC Troubleshooting course (8 yrs ago).
Single-initiator zoning reduces RSCN, GPN_ID (Get Port Name), GID_FT (Get Port Identifiers), and PLOGI (Port Login) traffic.
I will shoot over a doc to you.
Cheers,
Denis
Aran says
The storage vendor documentation should supersede the VMware documentation. For instance, in some EMC configurations single-initiator / single-target zoning is required. This is mainly for deployments where an array port can act as an initiator as well as a target (some array-based replication software does this). Yes, this can mean a lot of zones (for one of my 8-host VM clusters this is four zones per host), but doing it right up front will prevent problems in the future.
que says
I think I’m finally starting to understand single initiator zoning, although I’m still a bit confused about the difference between hard and soft zoning… Initially, it seemed as if hard zoning and single initiator zoning were the same thing, but the comments on this post seem to indicate otherwise.
Now I’m curious about how other people decide which LUNs to make accessible by which ESX hosts. We have it set up per cluster so DRS works, but that makes it a bit more difficult to move VMs between clusters.
Also, any storage-related recommendations on the number of ESX hosts in an HA/DRS cluster? Our storage team wants to limit them to 8-12 hosts, but I can’t find a good resource that explains why that would be a preferred practice. I’d prefer to make them as large as possible so there are fewer clusters to manage and to reduce the amount of resources reserved for HA.
Phoenix says
Hi there,
just wanted to give my two cents to the discussion.
EMC recommends single/single zoning, first because MirrorView (the EMC mirroring functionality) sets the “highest” front-end port as both target AND initiator, and second because the storage processors run a Windows OS.
That means that under certain conditions each port can actually come up as an initiator. You can see this in the connectivity view when the SPs register themselves with each other, and that can cause a severe impact on performance.
Therefore I always use single/single on EMC storage systems. 🙂
In fact, once you have created a single/single zoning config and copied the config file, all you need to do for the next configuration is enter the aliases for the WWNs (I never use single/multi zoning in port-zoning environments), download the config, open WordPad or Notepad, and copy/paste the settings you want. Then clear the config, upload it to the switch, check it, and enable it. That’s it. Of course, you can only do this on a redundant fabric, because you have to stop one fabric to make the config change.
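For reference, this workflow maps roughly onto the following Fabric OS commands (a sketch only; prompts and file-transfer parameters vary by FOS version, and “prod_cfg” is a hypothetical config name):

```
configupload             # copy the current switch config off to an FTP/SCP host
# ...edit the zoning section (aliases, zones, cfg) offline in a text editor...
switchdisable            # take this switch offline; the redundant fabric carries I/O
cfgclear                 # clear the existing zoning configuration
configdownload           # push the edited config file back to the switch
switchenable             # bring the switch back online
cfgshow                  # check the result before activating
cfgenable "prod_cfg"     # enable the zone configuration
```

The `switchdisable`/`switchenable` pair is why this only works on a redundant fabric, as noted above.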
Ted says
Stumbled across this article as this is something I’m struggling with at the moment.
I understand the single initiator/multiple target zoning approach and appreciate that sometimes there will be variance to this, as per EMC quoted above.
My question is: how many target ports? For example, if you have a clustered storage array with a total of 16 target ports, would you include all the target ports within the same zone? How many target ports is too many, and what would you base the maximum on?
Thanks
Guido Gariup says
Hi Duncan, one question. What is the best practice for zoning when you use NPIV inside an ESXi host? In that situation you have the physical ESXi HBAs’ WWNs and also the VMs’ WWNs. How do you have to configure the zones? Do you have to put the physical ESXi HBA WWN into the same zone as the VM WWN?