3.6. Testing the Resource Configuration


You can validate your system configuration with the following procedure. You should be able to mount the exported file system with either NFSv3 or NFSv4.
  1. On a node outside of the cluster, residing in the same network as the deployment, verify that the NFS share can be seen by displaying the server's export list with the showmount command. For this example, we are using the 192.168.122.0/24 network.
    # showmount -e 192.168.122.200
    Export list for 192.168.122.200:
    /nfsshare/exports/export1 192.168.122.0/255.255.255.0
    /nfsshare/exports         192.168.122.0/255.255.255.0
    /nfsshare/exports/export2 192.168.122.0/255.255.255.0
    
  2. To verify that you can mount the NFS share with NFSv4, mount the NFS share to a directory on the client node. After mounting, verify that the contents of the export directories are visible. Unmount the share after testing.
    # mkdir nfsshare
    # mount -o "vers=4" 192.168.122.200:export1 nfsshare
    # ls nfsshare
    clientdatafile1
    # umount nfsshare
  3. Verify that you can mount the NFS share with NFSv3. After mounting, verify that the test file clientdatafile2 is visible. Unlike NFSv4, NFSv3 does not use the virtual file system, so you must mount a specific export. Unmount the share after testing.
    # mkdir nfsshare
    # mount -o "vers=3" 192.168.122.200:/nfsshare/exports/export2 nfsshare
    # ls nfsshare
    clientdatafile2
    # umount nfsshare
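    If you want to confirm which protocol version a client actually negotiated, you can check the active mount options while the share is still mounted, for example with the nfsstat utility (an optional check; exact output varies by client). The vers= value in the reported mount options shows the negotiated NFS version.
    # mount -o "vers=3" 192.168.122.200:/nfsshare/exports/export2 nfsshare
    # nfsstat -m
    # umount nfsshare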
  4. To test for failover, perform the following steps.
    1. On a node outside of the cluster, mount the NFS share and verify access to the clientdatafile1 file we created in Section 3.3, “NFS Share Setup”.
      # mkdir nfsshare
      # mount -o "vers=4" 192.168.122.200:export1 nfsshare
      # ls nfsshare
      clientdatafile1
      
    2. From a node within the cluster, determine which node in the cluster is running nfsgroup. In this example, nfsgroup is running on z1.example.com.
      [root@z1 ~]# pcs status
      ...
      Full list of resources:
       myapc  (stonith:fence_apc_snmp):       Started z1.example.com
       Resource Group: nfsgroup
           my_lvm     (ocf::heartbeat:LVM):   Started z1.example.com
           nfsshare   (ocf::heartbeat:Filesystem):    Started z1.example.com
           nfs-daemon (ocf::heartbeat:nfsserver):     Started z1.example.com 
           nfs-root   (ocf::heartbeat:exportfs):      Started z1.example.com
           nfs-export1        (ocf::heartbeat:exportfs):      Started z1.example.com
           nfs-export2        (ocf::heartbeat:exportfs):      Started z1.example.com
           nfs_ip     (ocf::heartbeat:IPaddr2):       Started  z1.example.com
           nfs-notify (ocf::heartbeat:nfsnotify):     Started z1.example.com
      ...
      
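      If the full cluster status is more output than you need, you can limit the display to the resources section, for example with pcs status resources (assuming the pcs version in use supports this subcommand).
      [root@z1 ~]# pcs status resources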
    3. From a node within the cluster, put the node that is running nfsgroup in standby mode.
      [root@z1 ~]# pcs node standby z1.example.com
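      Optionally, confirm the node state before checking where the resources moved. The pcs status nodes command lists the cluster nodes grouped by state, and the node you placed in standby should appear in the standby list (assuming the pcs version in use supports this subcommand).
      [root@z1 ~]# pcs status nodes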
    4. Verify that nfsgroup successfully starts on the other cluster node.
      [root@z1 ~]# pcs status
      ...
      Full list of resources:
       Resource Group: nfsgroup
           my_lvm     (ocf::heartbeat:LVM):   Started z2.example.com
           nfsshare   (ocf::heartbeat:Filesystem):    Started z2.example.com
           nfs-daemon (ocf::heartbeat:nfsserver):     Started z2.example.com 
           nfs-root   (ocf::heartbeat:exportfs):      Started z2.example.com
           nfs-export1        (ocf::heartbeat:exportfs):      Started z2.example.com
           nfs-export2        (ocf::heartbeat:exportfs):      Started z2.example.com
           nfs_ip     (ocf::heartbeat:IPaddr2):       Started  z2.example.com
           nfs-notify (ocf::heartbeat:nfsnotify):     Started z2.example.com
      ...
      
    5. From the node outside the cluster on which you have mounted the NFS share, verify that this outside node still has access to the test file within the NFS mount.
      # ls nfsshare
      clientdatafile1
      
      Service will be lost briefly for the client during the failover, but the client should recover with no user intervention. By default, clients using NFSv4 may take up to 90 seconds to recover the mount; this 90 seconds represents the NFSv4 file lease grace period observed by the server on startup. NFSv3 clients should recover access to the mount in a matter of a few seconds.
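      To get a rough measure of the client-side recovery window, you can time a directory listing issued from the client while the resources move; with the default hard mount behavior, the command blocks until the server is available again and then completes (an illustrative check only; the delay depends on the NFS version and the server's grace period).
      # time ls nfsshare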
    6. From a node within the cluster, remove the node that was initially running nfsgroup from standby mode. This will not in itself move the cluster resources back to this node.
      [root@z1 ~]# pcs node unstandby z1.example.com

      Note

      Removing a node from standby mode does not in itself cause the resources to fail back over to that node. This will depend on the resource-stickiness value for the resources. For information on the resource-stickiness meta attribute, see Configuring a Resource to Prefer its Current Node in the Red Hat High Availability Add-On Reference.
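      If you do want to move the resource group back to the original node, you can do so explicitly with pcs resource move, and then remove the location constraint that the move creates with pcs resource clear (a sketch only; the exact subcommands available depend on your pcs version).
      [root@z1 ~]# pcs resource move nfsgroup z1.example.com
      [root@z1 ~]# pcs resource clear nfsgroup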