Using a CNAME (DNS alias) to mount an NFS datastore

I was playing around in my lab with NFS datastores today. I wanted to fail over a replicated NFS datastore without needing to re-register the virtual machines running on it. I had mounted the NFS datastore using the IP address, and as the IP address is used to create the datastore UUID it was obvious that this wouldn't work. I figured there should be a way around it, but a quick search on the internet turned up nothing.

I figured it should be possible to achieve this using a CNAME, but I also recalled something about vCenter getting in the way here. I tested it anyway, with success. This is what I did:

  • Added both NFS servers to DNS
  • Created a CNAME (DNS alias) and pointed it to the “active” NFS server
    • I used the name “nasdr” to make it obvious what it is used for
  • Created an NFS share (drtest) on the NFS server
  • Mounted the NFS export using vCenter or through the CLI
    • esxcfg-nas -a -o nasdr -s /drtest drtest
  • Checked the UUID using vCenter or through the CLI
    • ls -lah /vmfs/volumes
    • example output:
      lrwxr-xr-x    1 root     root           17 Feb  6 10:56 drtest -> e9f77a89-7b01e9fd
  • Created a virtual machine on the NFS datastore
  • Enabled replication to my “standby” NFS server
  • I killed my “active” NFS server environment (after validating it had completed replication)
  • Changed the CNAME to point to the secondary NFS server
  • Unmounted the old volume
    • esxcfg-nas -d drtest
  • I did a vmkping to “nasdr” just to validate the destination IP had changed
  • Rescanned my storage using “esxcfg-rescan -A”
  • Mounted the new volume
    • esxcfg-nas -a -o nasdr -s /drtest drtest
  • Checked the UUID using the CLI
    • ls -lah /vmfs/volumes
    • example output:
      lrwxr-xr-x    1 root     root           17 Feb  6 13:09 drtest -> e9f77a89-7b01e9fd
  • Powered on the virtual machine now running on the secondary NFS server
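For reference, the failover half of the procedure can be condensed into a small script. This is only a sketch: it reuses the “nasdr” alias and “drtest” export/datastore names from the steps above, assumes replication has completed and the CNAME has already been repointed, and wraps the commands in a hypothetical nfs_failover function.

```shell
# Sketch of the CNAME-based NFS failover steps above. Run on the ESXi
# host after repointing the "nasdr" CNAME to the standby NFS server.

ALIAS=nasdr        # CNAME that follows the active NFS server
EXPORT=/drtest     # NFS export path on the filer
DATASTORE=drtest   # datastore name on the host

nfs_failover() {
    # Unmount the datastore that still points at the failed server
    esxcfg-nas -d "$DATASTORE"

    # Validate the alias now resolves to the standby server's IP
    vmkping "$ALIAS"

    # Rescan storage, then remount with the exact same alias/export/name;
    # identical parameters yield the identical datastore UUID
    esxcfg-rescan -A
    esxcfg-nas -a -o "$ALIAS" -s "$EXPORT" "$DATASTORE"

    # Verify the UUID is unchanged
    ls -lah /vmfs/volumes
}

# Uncomment to run on the host:
# nfs_failover
```

Because every parameter of the mount is identical to the original, the host derives the same UUID and the registered virtual machines keep working.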

As you can see, both volumes had the exact same UUID, and after the fail-over I could power on the virtual machine without re-registering it in vCenter first. Before sharing this with the world I reached out to my friends at NetApp. Vaughn Stewart connected me with Peter Learmonth, who validated my findings and pointed me to a blog article he wrote on this topic. I suggest heading over to Peter’s article for more details.



    1. Brian M says

      Great Article as always, Duncan!

      Question Regarding your testing environment – Are you using VSAs for this testing? If so, which?

    2. says

      Hey, great article Duncan, although I’m personally a little hesitant about relying on DNS as a single point of failure/attack for consolidated workloads in a mission-critical environment.

    3. Kris De Coen says

      @jason: for mission-critical environments we are using NetApp MetroCluster for a stretched, highly available DR solution based on NFS.

    4. says

      Duncan, I’d be very careful here. Things are drastically different between how vSphere 5 and previous versions handle this. I’ve seen tons of problems with DNS mounted datastores and was told that 5 was completely re-written (regarding vpx and vCenter db changes) to properly handle this kind of stuff. I’ve never had a problem with mismatched UUIDs on the hosts, but rather how vCenter sees the datastores. I’m talking issues with some VMs not vMotioning if datastores are mounted with or without a trailing “/” on different hosts, etc. For more info check out 284085.
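      The trailing-slash issue described above is easy to reproduce: the share string is part of what identifies the datastore, so two hosts mounting the same export with byte-different paths end up with mismatched datastores. A sketch, reusing the post’s “nasdr”/“drtest” names:

```shell
# Two hosts mounting the "same" export with different share strings
# (note the trailing "/"); vCenter treats these as different datastores:
mount_on_host_a() { esxcfg-nas -a -o nasdr -s /drtest  drtest; }
mount_on_host_b() { esxcfg-nas -a -o nasdr -s /drtest/ drtest; }
```

      Keeping the mount parameters byte-identical across all hosts avoids this class of vMotion failure.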

    5. says

      @Brian: I have an NFS Filer in my lab and use the Celerra VSA for testing also.

      @Jason: I have been hesitant as well, and you could always use hosts files to achieve the same of course… although that seems to be a hassle.

      @Dave: that is great to hear. would be interested in knowing what the solution will look like.

    6. James Hess says

      There’s another way to not change the IP address.

      Use a virtual IP address for each target. When you are failing over NFS targets, move the IP address on the network.
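      On a Linux-based filer pair, that IP move might look like the sketch below; the address, interface, and function names are placeholders, not anything from the comment:

```shell
# Sketch: move a floating IP from the failed NFS server to the standby.
VIP=10.0.0.20      # placeholder floating IP the ESXi hosts mount
IFACE=eth0         # placeholder interface

takeover() {
    # On the standby server: claim the floating IP...
    ip addr add "$VIP/24" dev "$IFACE"
    # ...and send gratuitous ARP so switches and hosts learn the move
    arping -c 3 -U -I "$IFACE" "$VIP"
}

release() {
    # On the old server (if still reachable): drop the floating IP
    ip addr del "$VIP/24" dev "$IFACE"
}
```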

      Or you could also edit /etc/hosts on each ESXi host,
      at least then you don’t need to be concerned about DNS caching.
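      For instance, an entry along these lines would do it (the IP is a placeholder for the active NFS server’s address):

```
# /etc/hosts on each ESXi host; repoint during failover
10.0.0.20   nasdr
```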

    7. says

      Great info as always Duncan!

      Have you looked at SmartConnect from Isilon? With SmartConnect you don’t have to modify your CNAME.

      Granted, Isilon is a slightly different use case: rather than multiple (independent/replicated) NFS datastores, it is a multi-node NFS target, and each node can answer independently. That’s where SmartConnect brings value, as it can effectively load-balance traffic across nodes.

    8. says

      @JASON BOCHE Jason, I’d be a little hesitant running a mission critical load at all, if DNS wasn’t trustworthy. If it hasn’t got a userbase/application that needs to connect to it, then you have a point.

      Fundamentally, DNS is there for a reason, to abstract the need to manage IP addresses and other paraphernalia that applications and users shouldn’t need to worry about. This almost harks back to “is storage level replication equal to DR”? No – as application data and THE TRANSACTION are not guaranteed consistent just because the storage array completed all the writes asked of it.

      For critical apps that aren’t time sensitive to failovers (e.g. DR MTTR of 4+ hours) then a DNS CNAME for finding the service is right on the ticket, after you’ve sorted the transactional consistency of the data. For stuff that needs to be “available” quicker than that, MetroCluster and similar virtual IP failover techniques are needed i.e. finding isn’t needed, just the same “key” or IP address to get at it.

      Oh – great article Duncan!

    9. says

      @Andy: Did you check the link at the bottom, and did you read the full article? I know it used to cause problems, but that has indeed changed, and if you want to avoid re-registering, this is the only option for now.

      @Jase: I don’t have an Isilon cluster in my lab yet. But we are working on that.

    10. says

      @Duncan –
      I will look for you at PEX. We can talk then, but basically we are thinking MetroCluster, though this could be a nice alternative if there is a budget constraint.


    11. James Hess says

      “@JASON BOCHE Jason, I’d be a little hesitant running a mission critical load at all, if DNS wasn’t trustworthy.”

      Most of us would like to virtualize our DNS servers, I think; DNS is a key application service with significant administrative burdens of its own and a light workload, a very suitable candidate for virtualization. Not virtualizing DNS servers adds administrative burden and cost, due to the increased number of physical servers to manage (an especially large increase for small deployments) and the loss of management features.

      I would say that implementing something simpler on the ESXi hosts that does not rely on DNS (don’t even enable DNS on the ESXi hosts) results in a more reliable infrastructure, because the number of external dependencies is reduced.

      If you do not use DNS for finding NFS share locations, and you also virtualize DNS on the very same hosts, the vSphere hosts can start up and attach the storage before the virtualized DNS service has started.

      On the other hand, if virtualized DNS servers must be available for the ESXi hosts to connect all datastores, the environment has a problem whenever all servers need to be powered on from a cold stop (a total datacenter power outage).

      As the host boots, the NFS datastores will be “all paths down”, resulting in VMs getting marked inaccessible, and sometimes getting labelled “Unknown”, which cannot be remedied without a host reboot.

      Listing even one network datastore by hostname causes some problems in the case of virtualized DNS, even if all the other datastores were added by IP address.

      So you are basically left putting virtualized DNS servers on local storage, which restricts DRS/vMotion, and HA capabilities are lost.

      DNS servers are highly critical, and these are among the worst servers to lose HA capability for.

      Since you can only list two DNS servers in the ESXi configuration, and cannot specify the timeout period, this strongly favors DNS clustering inside the guest OS as a possible solution, but that adds a very large administrative burden compared to a simple /etc/hosts approach.

      Should you need to cold-boot in this situation, with DNS identifying your NFS storage, you need a minimum of two hosts that can boot guests running DNS: make sure a host with DNS1 on it comes up first, then start the host containing DNS2, and once DNS2 is up, reboot the host containing DNS1.

      There is no chance of cleanly automating recovery from total cold start, in this case.