Every once in a while I go through some articles and see if they need to be revised or not. As there are over 1400 articles on yellow-bricks.com, that is not an easy task, I can tell you. Today I stumbled on an article I wrote in early 2010. It discussed the use of a “swing LUN” to limit the number of LUNs masked to a single host. Let me copy/paste the part that I want to revise:
In my design I usually propose a so-called “Transfer Volume”. This volume (NFS or VMFS) can be used to transfer VMs to a different cluster. Yes, there’s a slight operational overhead here, but it also reduces overhead in terms of traffic to a LUN and decreases the chance of SCSI reservation conflicts, etc.
Here’s the process:
- Storage vMotion the VM from the LUN on Array 1 to the Transfer LUN
- vMotion the VM from Cluster A to Cluster B
- Storage vMotion the VM from the Transfer LUN to the LUN on Array 2
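The three-step flow above can be sketched as a toy model. To be clear, this is not real vSphere automation (that would be PowerCLI or pyVmomi); the `VM`, `storage_vmotion`, and `vmotion` names below are hypothetical and exist only to illustrate how the swing-LUN steps alternate between moving storage and moving compute:

```python
# Illustrative model of the swing-LUN migration flow.
# All names here are hypothetical, not a real vSphere API.
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    cluster: str
    datastore: str

def storage_vmotion(vm: VM, target_datastore: str) -> None:
    # Storage vMotion: move the VM's disks; compute stays where it is.
    vm.datastore = target_datastore

def vmotion(vm: VM, target_cluster: str) -> None:
    # vMotion: move the running VM's compute; disks stay where they are.
    # Requires the current datastore to be visible to the target cluster,
    # which is exactly what the transfer LUN provides.
    vm.cluster = target_cluster

vm = VM("web01", cluster="ClusterA", datastore="Array1-LUN05")

storage_vmotion(vm, "Transfer-LUN")   # step 1: onto the transfer LUN
vmotion(vm, "ClusterB")               # step 2: compute to the new cluster
storage_vmotion(vm, "Array2-LUN01")   # step 3: off the transfer LUN

print(vm)  # VM(name='web01', cluster='ClusterB', datastore='Array2-LUN01')
```

The point the model makes is that only the transfer LUN ever needs to be masked to both clusters; the array-specific LUNs stay private to their own cluster.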
Of course these don’t necessarily need to be two separate arrays, it could just as easily be a single array with a group of LUNs masked to a particular cluster. For the people who have a hard time visualizing it:
I guess this is a great example of why you need to revise your design with every release… This used to be a valid workaround to limit the number of LUNs attached to a cluster while maintaining the flexibility to move between clusters using Storage vMotion and vMotion. With vSphere 5.1 that is no longer needed, now that enhanced vMotion can migrate both compute and storage in a single operation without shared storage. (Frank has an awesome vMotion deep dive… read it!) Make sure to update your design and make the needed changes to your infrastructure if and when required…
It does seem to eliminate the need for a swing LUN if you’re going between arrays. But it’s still going to be much faster to have the swing LUN if you stay on one array, isn’t it? VAAI offload only happens if both clusters can see the destination datastore. I guess it depends on your environment: do two Storage vMotions using VAAI complete faster than one migration over the vMotion network?
So you would need to weigh the “elapsed time of the migration” against the “cost” of an extra volume and the operational overhead of having to do multiple steps…
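That trade-off can be put into rough numbers. The throughput figures below are pure assumptions for illustration (your array and network will differ); the point is only that two offloaded hops can still beat one network hop when the offloaded copy rate is high enough:

```python
# Back-of-the-envelope comparison: two VAAI-offloaded Storage vMotions
# (via the swing LUN) versus one migration over the vMotion network.
# All rates are ASSUMED example values, not measurements.

vm_size_gb = 500
vaai_rate_mb_s = 1000     # assumed array-offloaded copy rate (MB/s)
network_rate_mb_s = 300   # assumed vMotion-network copy rate (MB/s)

def migration_minutes(size_gb: float, rate_mb_s: float, hops: int = 1) -> float:
    """Total minutes to copy the VM's data `hops` times at the given rate."""
    return hops * size_gb * 1024 / rate_mb_s / 60

swing_lun = migration_minutes(vm_size_gb, vaai_rate_mb_s, hops=2)
direct = migration_minutes(vm_size_gb, network_rate_mb_s, hops=1)

print(f"swing LUN (2x VAAI):  {swing_lun:.1f} min")
print(f"direct over network:  {direct:.1f} min")
```

With these particular assumptions the two-hop swing-LUN path still wins on elapsed time, which is exactly why the extra volume and the extra operational steps have to be weighed case by case.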
I have proposed this design pattern to a customer in the past and chose NFS for the swing volume. I was afraid to propose a block LUN (VMFS) for fear that an All Paths Down condition would take down two or more clusters instead of just one. Although I haven’t worked in a large VMware-on-NFS shop, so I don’t know whether NFS APD conditions wreck whole ESX clusters in the same way or not.
Can a transfer LUN be used to move VMs from one ESXi 5.1 cluster to another ESXi 5.1 cluster when they are managed by separate vCenter Servers? We have new hardware and would like to migrate across without adding the old hosts to the new vCenter Server, as this would break our backups. The LUN would be presented to both the old and the new hardware. VMs would be moved across to the LUN, powered down, removed from inventory, added to the inventory on the new hardware, and powered back on. At that stage a Storage vMotion would be done to move the VM to the destination LUN.