
6.3. NFS


Red Hat Gluster Storage has two NFS server implementations, Gluster NFS and NFS-Ganesha. Gluster NFS supports only the NFSv3 protocol, whereas NFS-Ganesha supports both the NFSv3 and NFSv4 protocols.

6.3.1. Support Matrix

The following table contains the feature matrix of the NFS support on Red Hat Gluster Storage 3.1 and later:
Table 6.5. NFS Support Matrix
Features | glusterFS NFS (NFSv3) | NFS-Ganesha (NFSv3) | NFS-Ganesha (NFSv4)
Root-squash | Yes | Yes | Yes
All-squash | No | Yes | Yes
Sub-directory exports | Yes | Yes | Yes
Locking | Yes | Yes | Yes
Client based export permissions | Yes | Yes | Yes
Netgroups | Yes | Yes | Yes
Mount protocols | UDP, TCP | UDP, TCP | Only TCP
NFS transport protocols | TCP | UDP, TCP | TCP
AUTH_UNIX | Yes | Yes | Yes
AUTH_NONE | Yes | Yes | Yes
AUTH_KRB | No | Yes | Yes
ACLs | Yes | No | Yes
Delegations | N/A | N/A | No
High availability | Yes (with certain limitations; for more information see "Setting up CTDB for NFS") | Yes | Yes
Multi-head | Yes | Yes | Yes
Gluster RDMA volumes | Yes | Not supported | Not supported
DRC | Not supported | Yes | Yes
Dynamic exports | No | Yes | Yes
pseudofs | N/A | N/A | Yes
NFSv4.1 | N/A | N/A | Yes

Note

  • Red Hat does not recommend running NFS-Ganesha alongside any other NFS server, such as the kernel-NFS or Gluster NFS servers.
  • Only one of the NFS-Ganesha, gluster-NFS, or kernel-NFS servers can be enabled on a given machine/host, because all NFS implementations use port 2049 and only one can be active at a given time. Hence you must disable kernel-NFS before NFS-Ganesha is started, as shown below.
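
For example, on Red Hat Enterprise Linux 7 the kernel NFS server can typically be stopped and disabled with systemd before NFS-Ganesha is started (the same commands are repeated in the NFS-Ganesha prerequisites later in this section):
# systemctl stop nfs-server
# systemctl disable nfs-server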

6.3.2. Gluster NFS (Deprecated)

Warning

Gluster-NFS is considered deprecated as of Red Hat Gluster Storage 3.5. Red Hat no longer recommends the use of Gluster-NFS, and does not support its use in new deployments and existing deployments that upgrade to Red Hat Gluster Storage 3.5.3.
Linux and other operating systems that support the NFSv3 standard can use NFS to access Red Hat Gluster Storage volumes.

Note

From the Red Hat Gluster Storage 3.2 release onwards, the Gluster NFS server is disabled by default for any new volumes that are created. If needed, you can explicitly enable the Gluster NFS server on new volumes by running the following command on any one of the server nodes:
# gluster volume set VOLNAME nfs.disable off
The volume can then be mounted on the client using the “mount -t nfs” command:
# mount -t nfs HOSTNAME:VOLNAME MOUNTPATH
However, existing volumes (using the Gluster NFS server) are not impacted even after an upgrade to Red Hat Gluster Storage 3.2, and the Gluster NFS server remains implicitly enabled on them.
Differences in implementation of the NFSv3 standard in operating systems may result in some operational issues. If issues are encountered when using NFSv3, contact Red Hat support to receive more information on Red Hat Gluster Storage client operating system compatibility, and information about known issues affecting NFSv3.
NFS ACL v3 is supported, which allows getfacl and setfacl operations on NFS clients. Access Control Lists (ACLs) are configured in the glusterFS NFS server with the nfs.acl volume option, as follows:
  • To set nfs.acl ON, run the following command:
    # gluster volume set VOLNAME nfs.acl on
  • To set nfs.acl OFF, run the following command:
    # gluster volume set VOLNAME nfs.acl off

Note

ACL is ON by default.
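For example, with nfs.acl enabled and the volume mounted over NFSv3, ACLs can be read and modified directly from the NFS client; the user name and file path below are illustrative placeholders:
# mount -t nfs -o vers=3 server1:/test-volume /mnt/glusterfs
# setfacl -m u:testuser:rw /mnt/glusterfs/file1
# getfacl /mnt/glusterfs/file1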
Red Hat Gluster Storage includes Network Lock Manager (NLM) v4. The NLM protocol allows NFSv3 clients to lock files across the network. NLM is required to allow applications running on top of NFSv3 mount points to use the standard fcntl() (POSIX) and flock() (BSD) lock system calls to synchronize access across clients.
This section describes how to use NFS to mount Red Hat Gluster Storage volumes (both manually and automatically) and how to verify that the volume has been mounted successfully.

Important

On Red Hat Enterprise Linux 7, enable the firewall service in the active zones for runtime and permanent mode using the following commands:
To get a list of active zones, run the following command:
# firewall-cmd --get-active-zones
To allow the firewall service in the active zones, run the following commands:
# firewall-cmd --zone=zone_name --add-service=nfs --add-service=rpc-bind
# firewall-cmd --zone=zone_name --add-service=nfs --add-service=rpc-bind --permanent

6.3.2.1. Setting up CTDB for Gluster NFS (Deprecated)

In a replicated volume environment, the CTDB software (Cluster Trivial Database) has to be configured to provide high availability and lock synchronization for Gluster NFS. CTDB provides high availability by adding virtual IP addresses (VIPs) and a heartbeat service.
When a node in the trusted storage pool fails, CTDB enables a different node to take over the virtual IP addresses that the failed node was hosting. This ensures the IP addresses for the services provided are always available. However, locks are not migrated as part of failover.

Important

On Red Hat Enterprise Linux 7, enable the CTDB firewall service in the active zones for runtime and permanent mode using the below commands:
To get a list of active zones, run the following command:
# firewall-cmd --get-active-zones
To add ports to the active zones, run the following commands:
# firewall-cmd --zone=zone_name --add-port=4379/tcp
# firewall-cmd --zone=zone_name --add-port=4379/tcp  --permanent

Note

Amazon Elastic Compute Cloud (EC2) does not support VIPs and is hence not compatible with this solution.
6.3.2.1.1. Prerequisites
Follow these steps before configuring CTDB on a Red Hat Gluster Storage Server:
  • If you already have an older version of CTDB (version <= ctdb1.x), then remove CTDB by executing the following command:
    # yum remove ctdb
    After removing the older version, proceed with installing the latest CTDB.

    Note

    Ensure that the system is subscribed to the samba channel to get the latest CTDB packages.
  • Install CTDB on all the nodes that are used as NFS servers to the latest version using the following command:
    # yum install ctdb
  • CTDB uses TCP port 4379 by default. Ensure that this port is accessible between the Red Hat Gluster Storage servers.
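
As a quick check, after adding port 4379 as shown in the Important box above, you can list the ports allowed in the active firewall zone on each server (zone_name is a placeholder):
# firewall-cmd --zone=zone_name --list-ports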
6.3.2.1.2. Port and Firewall Information for Gluster NFS
On the GNFS-Client machine, configure firewalld to add ports used by statd, nlm and portmapper services by executing the following commands:
# firewall-cmd --zone=public --add-port=662/tcp --add-port=662/udp \
--add-port=32803/tcp --add-port=32769/udp \
--add-port=111/tcp --add-port=111/udp
# firewall-cmd --zone=public --add-port=662/tcp --add-port=662/udp \
--add-port=32803/tcp --add-port=32769/udp \
--add-port=111/tcp --add-port=111/udp --permanent
Execute the following step on the client and server machines:
  • On Red Hat Enterprise Linux 7, edit /etc/sysconfig/nfs file as mentioned below:
    # sed -i '/STATD_PORT/s/^#//' /etc/sysconfig/nfs

    Note

    This step is not applicable for Red Hat Enterprise Linux 8.
  • Restart the services:
    • For Red Hat Enterprise Linux 6:
      # service nfslock restart
      # service nfs restart

      Important

      Red Hat Gluster Storage is not supported on Red Hat Enterprise Linux 6 (RHEL 6) from 3.5 Batch Update 1 onwards. See the Version Details table in section Red Hat Gluster Storage Software Components and Versions of the Installation Guide.
    • For Red Hat Enterprise Linux 7:
      # systemctl restart nfs-config
      # systemctl restart rpc-statd
      # systemctl restart nfs-mountd
      # systemctl restart nfslock

      Note

      This step is not applicable for Red Hat Enterprise Linux 8.
6.3.2.1.3. Configuring CTDB on Red Hat Gluster Storage Server
To configure CTDB on Red Hat Gluster Storage server, execute the following steps:
  1. Create a replicate volume. This volume will host only a zero byte lock file, hence choose minimal sized bricks. To create a replicate volume run the following command:
    # gluster volume create VOLNAME replica N IPADDRESS:/BRICK_PATH ... (repeated N times)
    where,
    N: The number of nodes that are used as Gluster NFS servers. Each node must host one brick.
    For example:
    # gluster volume create ctdb replica 3 10.16.157.75:/rhgs/brick1/ctdb/b1 10.16.157.78:/rhgs/brick1/ctdb/b2 10.16.157.81:/rhgs/brick1/ctdb/b3
  2. In the following files, replace "all" in the statement META="all" with the newly created volume name:
    /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
    /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
    For example:
    META="all"
      to
    META="ctdb"
  3. Start the volume.
    # gluster volume start ctdb
    As part of the start process, the S29CTDBsetup.sh script runs on all Red Hat Gluster Storage servers, adds an entry in /etc/fstab for the mount, and mounts the volume at /gluster/lock on all the nodes with Gluster NFS server. It also enables automatic start of CTDB service on reboot.

    Note

    When you stop the special CTDB volume, the S29CTDB-teardown.sh script runs on all Red Hat Gluster Storage servers and removes an entry in /etc/fstab for the mount and unmounts the volume at /gluster/lock.
  4. Verify that the file /etc/sysconfig/ctdb exists on all the nodes that are used as Gluster NFS servers. This file contains the CTDB configuration recommended by Red Hat Gluster Storage.
  5. Create the /etc/ctdb/nodes file on all the nodes that are used as Gluster NFS servers and add the IPs of these nodes to the file.
    10.16.157.0
    10.16.157.3
    10.16.157.6
    The IPs listed here are the private IPs of NFS servers.
  6. On all the nodes that are used as Gluster NFS servers and that require IP failover, create the /etc/ctdb/public_addresses file and add the virtual IPs that CTDB should create to this file. Add these IP addresses in the following format:
    <Virtual IP>/<routing prefix> <node interface>
    For example:
    192.168.1.20/24 eth0
    192.168.1.21/24 eth0
  7. Start the CTDB service on all the nodes by executing the following command:
    # service ctdb start
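
Once CTDB is running, the cluster state and the virtual IP assignments can be verified on any node, for example:
# ctdb status
# ctdb ip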

Note

CTDB with gNFS only provides node level high availability and is not capable of detecting NFS service failure. Therefore, CTDB does not provide high availability if the NFS service goes down while the node is still up and running.

6.3.2.2. Using Gluster NFS to Mount Red Hat Gluster Storage Volumes (Deprecated)

You can use either of the following methods to mount Red Hat Gluster Storage volumes:

Note

Currently, the GlusterFS NFS server only supports version 3 of the NFS protocol. As a preferred option, always configure version 3 as the default version in the /etc/nfsmount.conf file by adding the following line to the file:
Defaultvers=3
If the file is not modified, ensure that you add vers=3 manually to all mount commands:
# mount nfsserver:export -o vers=3 /MOUNTPOINT
The RDMA support in GlusterFS mentioned in the previous sections applies to communication between bricks and the FUSE mount/GFAPI/NFS server. The NFS kernel client still communicates with the GlusterFS NFS server over TCP.
For volumes created with only one type of transport, communication between the GlusterFS NFS server and the bricks uses that transport type. For a tcp,rdma volume, this can be changed using the nfs.transport-type volume set option.
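For example, on a volume created with both tcp and rdma transports, the following illustrative command makes the Gluster NFS server communicate with the bricks over TCP (VOLNAME is a placeholder):
# gluster volume set VOLNAME nfs.transport-type tcp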
After mounting a volume, you can test the mounted volume using the procedure described in Section 6.3.2.2.4, “Testing Volumes Mounted Using Gluster NFS (Deprecated)”.
6.3.2.2.1. Manually Mounting Volumes Using Gluster NFS (Deprecated)
Create a mount point and run the mount command to manually mount a Red Hat Gluster Storage volume using Gluster NFS.
  1. If a mount point has not yet been created for the volume, run the mkdir command to create a mount point.
    # mkdir /mnt/glusterfs
  2. Run the correct mount command for the system.
    For Linux
    # mount -t nfs -o vers=3 server1:/test-volume /mnt/glusterfs
    For Solaris
    # mount -o vers=3 nfs://server1:38467/test-volume /mnt/glusterfs
Manually Mount a Red Hat Gluster Storage Volume using Gluster NFS over TCP
Create a mount point and run the mount command to manually mount a Red Hat Gluster Storage volume using Gluster NFS over TCP.

Note

The glusterFS NFS server does not support UDP. If an NFS client, such as a Solaris client, connects by default using UDP, the following message appears:
requested NFS version or transport protocol is not supported
The nfs.mount-udp option is supported for mounting a volume; it is disabled by default. The following are its limitations:
  • If nfs.mount-udp is enabled, the MOUNT protocol needed for NFSv3 can handle requests from NFS-clients that require MOUNT over UDP. This is useful for at least some versions of Solaris, IBM AIX and HP-UX.
  • Currently, MOUNT over UDP does not have support for mounting subdirectories on a volume. Mounting server:/volume/subdir exports is only functional when MOUNT over TCP is used.
  • MOUNT over UDP does not currently have support for different authentication options that MOUNT over TCP honors. Enabling nfs.mount-udp may give more permissions to NFS clients than intended via various authentication options like nfs.rpc-auth-allow, nfs.rpc-auth-reject and nfs.export-dir.
  1. If a mount point has not yet been created for the volume, run the mkdir command to create a mount point.
    # mkdir /mnt/glusterfs
  2. Run the correct mount command for the system, specifying the TCP protocol option for the system.
    For Linux
    # mount -t nfs -o vers=3,mountproto=tcp server1:/test-volume /mnt/glusterfs
    For Solaris
    # mount -o proto=tcp nfs://server1:38467/test-volume /mnt/glusterfs
6.3.2.2.2. Automatically Mounting Volumes Using Gluster NFS (Deprecated)
Red Hat Gluster Storage volumes can be mounted automatically using Gluster NFS, each time the system starts.

Note

In addition to the tasks described below, Red Hat Gluster Storage supports the standard method of auto-mounting Gluster NFS mounts on Linux, UNIX, and similar operating systems.
Update the /etc/auto.master and /etc/auto.misc files, and restart the autofs service. Whenever a user or process attempts to access the directory, it is mounted in the background on demand.
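A minimal autofs sketch, assuming an indirect map and the example server and volume names used elsewhere in this section (the mount then appears on demand under /misc/glusterfs):
# cat /etc/auto.master
/misc   /etc/auto.misc
# cat /etc/auto.misc
glusterfs    -fstype=nfs,vers=3    server1:/test-volume
# systemctl restart autofs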
Mounting a Volume Automatically using NFS
Mount a Red Hat Gluster Storage Volume automatically using NFS at server start.
  1. Open the /etc/fstab file in a text editor.
  2. Append the following configuration to the fstab file.
    HOSTNAME|IPADDRESS:/VOLNAME /MOUNTDIR nfs defaults,_netdev 0 0
    Using the example server names, the entry contains the following replaced values.
    server1:/test-volume /mnt/glusterfs nfs defaults,_netdev 0 0
Mounting a Volume Automatically using NFS over TCP
Mount a Red Hat Gluster Storage Volume automatically using NFS over TCP at server start.
  1. Open the /etc/fstab file in a text editor.
  2. Append the following configuration to the fstab file.
    HOSTNAME|IPADDRESS:/VOLNAME /MOUNTDIR nfs defaults,_netdev,mountproto=tcp 0 0
    Using the example server names, the entry contains the following replaced values.
    server1:/test-volume /mnt/glusterfs nfs defaults,_netdev,mountproto=tcp 0 0
6.3.2.2.3. Automatically Mounting Subdirectories Using NFS (Deprecated)
The nfs.export-dir and nfs.export-dirs options provide granular control to restrict or allow specific clients to mount a sub-directory. These clients can be authenticated during sub-directory mount with either an IP, host name or a Classless Inter-Domain Routing (CIDR) range.
nfs.export-dirs
This option is enabled by default. It allows the sub-directories of exported volumes to be mounted by clients without needing to export individual sub-directories. When enabled, all sub-directories of all volumes are exported. When disabled, sub-directories must be exported individually in order to mount them on clients.
To disable this option for all volumes, run the following command:
# gluster volume set VOLNAME nfs.export-dirs off
nfs.export-dir
When nfs.export-dirs is set to on, the nfs.export-dir option allows you to specify one or more sub-directories to export, rather than exporting all subdirectories (nfs.export-dirs on), or only exporting individually exported subdirectories (nfs.export-dirs off).
To export certain subdirectories, run the following command:
# gluster volume set VOLNAME nfs.export-dir subdirectory
The subdirectory path should be the path from the root of the volume. For example, in a volume with six subdirectories, to export the first three subdirectories, the command would be the following:
# gluster volume set myvolume nfs.export-dir /dir1,/dir2,/dir3
Subdirectories can also be exported based on the IP address, hostname, or a Classless Inter-Domain Routing (CIDR) range by adding these details in parentheses after the directory path:
# gluster volume set VOLNAME nfs.export-dir subdirectory(IPADDRESS),subdirectory(HOSTNAME),subdirectory(CIDR)
# gluster volume set myvolume nfs.export-dir /dir1(192.168.10.101),/dir2(storage.example.com),/dir3(192.168.98.0/24)
6.3.2.2.4. Testing Volumes Mounted Using Gluster NFS (Deprecated)
You can confirm that Red Hat Gluster Storage directories are mounting successfully.
To test mounted volumes

Testing Mounted Red Hat Gluster Storage Volumes

Using the command-line, verify the Red Hat Gluster Storage volumes have been successfully mounted. All three commands can be run in the order listed, or used independently to verify a volume has been successfully mounted.
  1. Run the mount command to check whether the volume was successfully mounted.
    # mount
    server1:/test-volume on /mnt/glusterfs type nfs (rw,addr=server1)
  2. Run the df command to display the aggregated storage space from all the bricks in a volume.
    # df -h /mnt/glusterfs
    Filesystem              Size Used Avail Use% Mounted on
    server1:/test-volume    28T  22T  5.4T  82%  /mnt/glusterfs
  3. Move to the mount directory using the cd command, and list the contents.
    # cd /mnt/glusterfs
    # ls

    Note

    The LOCK functionality in the NFS protocol is advisory. It is recommended to use locks if the same volume is accessed by multiple clients.
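    For example, advisory locking can be exercised from an NFS client with the flock utility; a second client attempting to take the same lock blocks until the first client releases it (the lock file name below is a placeholder):
    # flock /mnt/glusterfs/app.lock -c "sleep 30"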

6.3.2.3. Troubleshooting Gluster NFS (Deprecated)

Q: The mount command on the NFS client fails with RPC Error: Program not registered. This error is encountered due to one of the following reasons:
Q: The rpcbind service is not running on the NFS client. This could be due to the following reasons:
Q: The NFS server glusterfsd starts but the initialization fails with nfsrpc- service: portmap registration of program failed error message in the log.
Q: The NFS server start-up fails with the message Port is already in use in the log file.
Q: The mount command fails with NFS server failed error:
Q: The showmount command fails with clnt_create: RPC: Unable to receive error. This error is encountered due to the following reasons:
Q: The application fails with Invalid argument or Value too large for defined data type
Q: After the machine that is running NFS server is restarted the client fails to reclaim the locks held earlier.
Q: The rpc actor failed to complete successfully error is displayed in the nfs.log, even after the volume is mounted successfully.
Q: The mount command fails with No such file or directory.
Q:
The mount command on the NFS client fails with RPC Error: Program not registered. This error is encountered due to one of the following reasons:
  • The NFS server is not running. You can check the status using the following command:
    # gluster volume status
  • The volume is not started. You can check the status using the following command:
    # gluster volume info
  • rpcbind is restarted. To check if rpcbind is running, execute the following command:
    # ps ax | grep rpcbind
A:
  • If the NFS server is not running, then restart the NFS server using the following command:
    # gluster volume start VOLNAME
  • If the volume is not started, then start the volume using the following command:
    # gluster volume start VOLNAME
  • If both rpcbind and the NFS server are running, then restart the NFS server using the following commands:
    # gluster volume stop VOLNAME
    # gluster volume start VOLNAME
Q:
The rpcbind service is not running on the NFS client. This could be due to the following reasons:
  • The portmap is not running.
  • Another instance of kernel NFS server or glusterNFS server is running.
A:
Start the rpcbind service by running the following command:
# service rpcbind start
Q:
The NFS server glusterfsd starts but the initialization fails with nfsrpc- service: portmap registration of program failed error message in the log.
A:
NFS start-up succeeds but the initialization of the NFS service can still fail preventing clients from accessing the mount points. Such a situation can be confirmed from the following error messages in the log file:
[2010-05-26 23:33:47] E [rpcsvc.c:2598:rpcsvc_program_register_portmap] rpc-service: Could not register with portmap
[2010-05-26 23:33:47] E [rpcsvc.c:2682:rpcsvc_program_register] rpc-service: portmap registration of program failed
[2010-05-26 23:33:47] E [rpcsvc.c:2695:rpcsvc_program_register] rpc-service: Program registration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465
[2010-05-26 23:33:47] E [nfs.c:125:nfs_init_versions] nfs: Program init failed
[2010-05-26 23:33:47] C [nfs.c:531:notify] nfs: Failed to initialize protocols
[2010-05-26 23:33:49] E [rpcsvc.c:2614:rpcsvc_program_unregister_portmap] rpc-service: Could not unregister with portmap
[2010-05-26 23:33:49] E [rpcsvc.c:2731:rpcsvc_program_unregister] rpc-service: portmap unregistration of program failed
[2010-05-26 23:33:49] E [rpcsvc.c:2744:rpcsvc_program_unregister] rpc-service: Program unregistration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465
  1. Start the rpcbind service on the NFS server by running the following command:
    # service rpcbind start
    After starting rpcbind service, glusterFS NFS server needs to be restarted.
  2. Stop another NFS server running on the same machine.
    Such an error is also seen when there is another NFS server running on the same machine but it is not the glusterFS NFS server. On Linux systems, this could be the kernel NFS server. Resolution involves stopping the other NFS server or not running the glusterFS NFS server on the machine. Before stopping the kernel NFS server, ensure that no critical service depends on access to that NFS server's exports.
    On Linux, kernel NFS servers can be stopped by using either of the following commands depending on the distribution in use:
    # service nfs-kernel-server stop
    # service nfs stop
  3. Restart glusterFS NFS server.
Q:
The NFS server start-up fails with the message Port is already in use in the log file.
A:
This error can arise in case there is already a glusterFS NFS server running on the same machine. This situation can be confirmed from the log file, if the following error lines exist:
[2010-05-26 23:40:49] E [rpc-socket.c:126:rpcsvc_socket_listen] rpc-socket: binding socket failed:Address already in use
[2010-05-26 23:40:49] E [rpc-socket.c:129:rpcsvc_socket_listen] rpc-socket: Port is already in use
[2010-05-26 23:40:49] E [rpcsvc.c:2636:rpcsvc_stage_program_register] rpc-service: could not create listening connection
[2010-05-26 23:40:49] E [rpcsvc.c:2675:rpcsvc_program_register] rpc-service: stage registration of program failed
[2010-05-26 23:40:49] E [rpcsvc.c:2695:rpcsvc_program_register] rpc-service: Program registration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465
[2010-05-26 23:40:49] E [nfs.c:125:nfs_init_versions] nfs: Program init failed
[2010-05-26 23:40:49] C [nfs.c:531:notify] nfs: Failed to initialize protocols
In this release, the glusterFS NFS server does not support running multiple NFS servers on the same machine. To resolve the issue, one of the glusterFS NFS servers must be shutdown.
Q:
The mount command fails with NFS server failed error:
A:
mount: mount to NFS server '10.1.10.11' failed: timed out (retrying).
Review and apply the suggested solutions to correct the issue.
  • Disable name lookup requests from NFS server to a DNS server.
    The NFS server attempts to authenticate NFS clients by performing a reverse DNS lookup to match host names in the volume file with the client IP addresses. There can be a situation where the NFS server either is not able to connect to the DNS server or the DNS server is taking too long to respond to DNS request. These delays can result in delayed replies from the NFS server to the NFS client resulting in the timeout error.
    NFS server provides a work-around that disables DNS requests, instead relying only on the client IP addresses for authentication. The following option can be added for successful mounting in such situations:
    option nfs.addr.namelookup off

    Note

    Remember that disabling name lookups forces the NFS server to authenticate clients using only IP addresses. If the authentication rules in the volume file use host names, those authentication rules will fail and client mounting will fail.
  • NFS version used by the NFS client is other than version 3 by default.
    glusterFS NFS server supports version 3 of NFS protocol by default. In recent Linux kernels, the default NFS version has been changed from 3 to 4. It is possible that the client machine is unable to connect to the glusterFS NFS server because it is using version 4 messages which are not understood by glusterFS NFS server. The timeout can be resolved by forcing the NFS client to use version 3. The vers option to mount command is used for this purpose:
    # mount nfsserver:export -o vers=3 /MOUNTPOINT
Q:
The showmount command fails with clnt_create: RPC: Unable to receive error. This error is encountered due to the following reasons:
  • The firewall might have blocked the port.
  • rpcbind might not be running.
A:
Check the firewall settings, and open port 111 for portmap requests/replies as well as the ports used by the glusterFS NFS server for requests/replies. The glusterFS NFS server operates over the following port numbers: 38465, 38466, and 38467.
Q:
The application fails with Invalid argument or Value too large for defined data type
A:
These two errors generally happen for 32-bit NFS clients, or applications that do not support 64-bit inode numbers or large files.
Use the following option from the command-line interface to make glusterFS NFS return 32-bit inode numbers instead:
nfs.enable-ino32 <on | off>
This option is off by default, which permits NFS to return 64-bit inode numbers by default.
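As an illustrative sketch, the option can be set per volume from any server node (VOLNAME is a placeholder):
# gluster volume set VOLNAME nfs.enable-ino32 on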
Applications that will benefit from this option include those that are:
  • built and run on 32-bit machines, which do not support large files by default,
  • built to 32-bit standards on 64-bit systems.
If applications can be rebuilt from source, it is recommended to rebuild them using the following gcc flag:
-D_FILE_OFFSET_BITS=64
Q:
After the machine that is running NFS server is restarted the client fails to reclaim the locks held earlier.
A:
The Network Status Monitor (NSM) service daemon (rpc.statd) is started before gluster NFS server. Hence, NSM sends a notification to the client to reclaim the locks. When the clients send the reclaim request, the NFS server does not respond as it is not started yet. Hence the client request fails.
Solution: To resolve the issue, prevent the NSM daemon from starting when the server starts.
Run chkconfig --list nfslock to check if NSM is configured during OS boot.
If any of the entries are on, run chkconfig nfslock off to disable NSM clients during boot, which resolves the issue.
Q:
The rpc actor failed to complete successfully error is displayed in the nfs.log, even after the volume is mounted successfully.
A:
Gluster NFS supports only NFS version 3. When nfs-utils mounts a share without the version specified, it tries to negotiate using version 4 before falling back to version 3. This is the cause of the messages in both the server log and the nfs.log file.
[2013-06-25 00:03:38.160547] W [rpcsvc.c:180:rpcsvc_program_actor] 0-rpc-service: RPC program version not available (req 100003 4)
[2013-06-25 00:03:38.160669] E [rpcsvc.c:448:rpcsvc_check_and_reply_error] 0-rpcsvc: rpc actor failed to complete successfully
To resolve the issue, declare NFS version 3 and the noacl option in the mount command as follows:
# mount -t nfs -o vers=3,noacl server1:/test-volume /mnt/glusterfs
Q:
The mount command fails with No such file or directory.
A:
This problem is encountered because the volume is not present.

6.3.3. NFS Ganesha

NFS-Ganesha is a user space file server for the NFS protocol with support for NFSv3, NFSv4.0, and NFSv4.1.
Red Hat Gluster Storage 3.5 is supported with the community’s V2.7 stable release of NFS-Ganesha on Red Hat Enterprise Linux 7. To understand the various supported features of NFS-Ganesha, see Supported Features of NFS-Ganesha.

Note

To install NFS-Ganesha, refer to Deploying NFS-Ganesha on Red Hat Gluster Storage in the Red Hat Gluster Storage 3.5 Installation Guide.

6.3.3.1. Supported Features of NFS-Ganesha

The following list briefly describes the supported features of NFS-Ganesha:
Highly Available Active-Active NFS-Ganesha

In a highly available active-active environment, if an NFS-Ganesha server that is connected to an NFS client running a particular application goes down, the application/NFS client is seamlessly connected to another NFS-Ganesha server without any administrative intervention.

Data coherency across the multi-head NFS-Ganesha servers in the cluster is achieved using the Gluster’s Upcall infrastructure. Gluster’s Upcall infrastructure is a generic and extensible framework that sends notifications to the respective glusterfs clients (in this case NFS-Ganesha server) when changes are detected in the back-end file system.
Dynamic Export of Volumes

NFS-Ganesha supports addition and removal of exports dynamically. Dynamic exports are managed by the DBus interface. DBus is a system local IPC mechanism for system management and peer-to-peer application communication.

Exporting Multiple Entries

In NFS-Ganesha, multiple Red Hat Gluster Storage volumes or sub-directories can be exported simultaneously.

Pseudo File System

NFS-Ganesha creates and maintains a NFSv4 pseudo-file system, which provides clients with seamless access to all exported objects on the server.

Access Control List

The NFS-Ganesha NFSv4 protocol includes integrated support for Access Control Lists (ACLs), which are similar to those used by Windows. These ACLs can be used to identify a trustee and specify the access rights allowed or denied for that trustee. This feature is disabled by default.

Note

AUDIT and ALARM ACE types are not currently supported.

6.3.3.2. Setting up NFS Ganesha

To set up NFS Ganesha, follow the steps mentioned in the further sections.

Note

You can also set up NFS-Ganesha using gdeploy, which automates the steps mentioned below. For more information, see "Deploying NFS-Ganesha".
6.3.3.2.1. Port and Firewall Information for NFS-Ganesha
Ensure that you open the following ports and firewall services:
The following table lists the port details for NFS-Ganesha cluster setup:
Table 6.6. NFS Port Details
Service | Port Number | Protocol
sshd | 22 | TCP
rpcbind/portmapper | 111 | TCP/UDP
NFS | 2049 | TCP/UDP
mountd | 20048 | TCP/UDP
NLM | 32803 | TCP/UDP
RQuota | 875 | TCP/UDP
statd | 662 | TCP/UDP
pcsd | 2224 | TCP
pacemaker_remote | 3121 | TCP
corosync | 5404 and 5405 | UDP
dlm | 21064 | TCP

Note

The port details for the Red Hat Gluster Storage services are listed under section 3. Verifying Port Access.
Defining Service Ports

Ensure the statd service is configured to use the ports mentioned above by executing the following commands on every node in the nfs-ganesha cluster:

  1. On Red Hat Enterprise Linux 7, edit /etc/sysconfig/nfs file as mentioned below:
    # sed -i '/STATD_PORT/s/^#//' /etc/sysconfig/nfs

    Note

    This step is not applicable for Red Hat Enterprise Linux 8.
  2. Restart the statd service:
    For Red Hat Enterprise Linux 7:
    # systemctl restart nfs-config
    # systemctl restart rpc-statd

    Note

    This step is not applicable for Red Hat Enterprise Linux 8.

Note

For the NFS client to use the LOCK functionality, the ports used by the LOCKD and STATD daemons have to be configured and opened via firewalld on the client machine:
  1. Edit '/etc/sysconfig/nfs' using following commands:
    # sed -i '/STATD_PORT/s/^#//' /etc/sysconfig/nfs
    # sed -i '/LOCKD_TCPPORT/s/^#//' /etc/sysconfig/nfs
    # sed -i '/LOCKD_UDPPORT/s/^#//' /etc/sysconfig/nfs
  2. Restart the services:
    For Red Hat Enterprise Linux 7:
    # systemctl restart nfs-config
    # systemctl restart rpc-statd
    # systemctl restart nfslock
  3. Open the ports that are configured in the first step using the following command:
    # firewall-cmd --zone=public --add-port=662/tcp --add-port=662/udp \
    --add-port=32803/tcp --add-port=32769/udp \
    --add-port=111/tcp --add-port=111/udp
    # firewall-cmd --zone=public --add-port=662/tcp --add-port=662/udp \
    --add-port=32803/tcp --add-port=32769/udp \
    --add-port=111/tcp --add-port=111/udp --permanent
  4. To ensure that the NFS client UDP mount does not fail, open port 2049 by executing the following commands:
    # firewall-cmd --zone=zone_name --add-port=2049/udp
    # firewall-cmd --zone=zone_name --add-port=2049/udp --permanent
  • Firewall Settings

    On Red Hat Enterprise Linux 7, enable the firewall services mentioned below.
    1. Get a list of active zones using the following command:
      # firewall-cmd --get-active-zones
    2. To allow the firewall services in the active zones, run the following commands:
      # firewall-cmd --zone=zone_name --add-service=nlm  --add-service=nfs  --add-service=rpc-bind  --add-service=high-availability --add-service=mountd --add-service=rquota
      
      # firewall-cmd --zone=zone_name  --add-service=nlm  --add-service=nfs  --add-service=rpc-bind  --add-service=high-availability --add-service=mountd --add-service=rquota --permanent
      
      # firewall-cmd --zone=zone_name --add-port=662/tcp --add-port=662/udp
      
      # firewall-cmd --zone=zone_name --add-port=662/tcp --add-port=662/udp --permanent
6.3.3.2.2. Prerequisites to run NFS-Ganesha
Ensure that the following prerequisites are taken into consideration before you run NFS-Ganesha in your environment:
  • A Red Hat Gluster Storage volume must be available for export, and the NFS-Ganesha RPMs must be installed.
  • Ensure that the fencing agents are configured. For more information on configuring fencing agents, refer to the following documentation:

    Note

    The required minimum number of nodes for a highly available installation/configuration of NFS Ganesha is 3, and the maximum number of supported nodes is 8.
  • Only one of the NFS-Ganesha, gluster-NFS, or kernel-NFS servers can be enabled on a given machine/host, because all NFS implementations use port 2049 and only one can be active at a given time. Hence you must disable kernel-NFS before NFS-Ganesha is started.
    Disable the kernel-nfs using the following commands:
    For Red Hat Enterprise Linux 7

    # systemctl stop nfs-server
    # systemctl disable nfs-server
    To verify if kernel-nfs is disabled, execute the following command:
    # systemctl status nfs-server
    The service should be in stopped state.

    Note

    Gluster NFS will be stopped automatically when NFS-Ganesha is enabled.
    Ensure that none of the volumes have the variable nfs.disable set to 'off'.
  • Ensure to configure the ports as mentioned in Port/Firewall Information for NFS-Ganesha.
  • Edit the ganesha-ha.conf file based on your environment.
  • Reserve virtual IPs on the network for each of the servers configured in the ganesha.conf file. Ensure that these IPs are different than the hosts' static IPs and are not used anywhere else in the trusted storage pool or in the subnet.
  • Ensure that all the nodes in the cluster are DNS resolvable. For example, you can populate the /etc/hosts with the details of all the nodes in the cluster.
  • Make sure that SELinux is in Enforcing mode.
  • Start network service on all machines using the following command:
    For Red Hat Enterprise Linux 7:
    # systemctl start network
  • Create and mount a gluster shared volume by executing the following command:
    # gluster volume set all cluster.enable-shared-storage enable
    volume set: success
    
  • Create a directory named nfs-ganesha under /var/run/gluster/shared_storage

    Note

    With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/ .
  • Copy the ganesha.conf and ganesha-ha.conf files from /etc/ganesha to /var/run/gluster/shared_storage/nfs-ganesha, as shown in the combined example after this list.
  • Enable the glusterfssharedstorage.service service using the following command:
    # systemctl enable glusterfssharedstorage.service
  • Enable the nfs-ganesha service using the following command:
    # systemctl enable nfs-ganesha
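
A minimal combined sketch of the directory creation and configuration file copy steps above, assuming the shared volume is mounted at the default /var/run/gluster/shared_storage location:
# mkdir /var/run/gluster/shared_storage/nfs-ganesha
# cp /etc/ganesha/ganesha.conf /etc/ganesha/ganesha-ha.conf /var/run/gluster/shared_storage/nfs-ganesha/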
6.3.3.2.3. Configuring the Cluster Services
The HA cluster is maintained using Pacemaker and Corosync. Pacemaker acts as a resource manager and Corosync provides the communication layer of the cluster. For more information about Pacemaker/Corosync see the documentation under the Clustering section of the Red Hat Enterprise Linux 7 documentation: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/

Note

It is recommended to use 3 or more nodes to configure NFS Ganesha HA cluster, in order to maintain cluster quorum.
  1. Enable the pacemaker service using the following command:
    For Red Hat Enterprise Linux 7:
    # systemctl enable pacemaker.service
  2. Start the pcsd service using the following command.
    For Red Hat Enterprise Linux 7:
    # systemctl start pcsd

    Note

    • To start pcsd by default after the system is rebooted, execute the following command:
      For Red Hat Enterprise Linux 7:
      # systemctl enable pcsd
  3. Set a password for the user ‘hacluster’ on all the nodes using the following command. Use the same password for all the nodes:
    # echo <password> | passwd --stdin hacluster
  4. Perform cluster authentication between the nodes, where the username is ‘hacluster’ and the password is the one you used in the previous step. Ensure that you execute the following command on every node:
    For Red Hat Enterprise Linux 7:
    # pcs cluster auth <hostname1> <hostname2> ...
    For Red Hat Enterprise Linux 8:
    # pcs host auth <hostname1> <hostname2> ...

    Note

    The hostname of all the nodes in the Ganesha-HA cluster must be included in the command when executing it on every node.
    For example, in a four-node cluster with nodes nfs1, nfs2, nfs3, and nfs4, execute the following command on every node:
    For Red Hat Enterprise Linux 7:
    # pcs cluster auth nfs1 nfs2 nfs3 nfs4
    Username: hacluster
    Password:
    nfs1: Authorized
    nfs2: Authorized
    nfs3: Authorized
    nfs4: Authorized
    For Red Hat Enterprise Linux 8:
    # pcs host auth nfs1 nfs2 nfs3 nfs4
    Username: hacluster
    Password:
    nfs1: Authorized
    nfs2: Authorized
    nfs3: Authorized
    nfs4: Authorized
  5. Passwordless key-based SSH authentication for the root user has to be enabled on all the HA nodes. Follow these steps:
    1. On one of the nodes (node1) in the cluster, run:
      # ssh-keygen -f /var/lib/glusterd/nfs/secret.pem -t rsa -N ''
    2. Deploy the generated public key from node1 to all the nodes (including node1) by executing the following command for every node:
      # ssh-copy-id -i /var/lib/glusterd/nfs/secret.pem.pub root@<node-ip/hostname>
    3. Copy the ssh keypair from node1 to all the nodes in the Ganesha-HA cluster by executing the following command for every node:
      # scp -i /var/lib/glusterd/nfs/secret.pem /var/lib/glusterd/nfs/secret.* root@<node-ip/hostname>:/var/lib/glusterd/nfs/
  6. As part of cluster setup, port 875 is used to bind to the Rquota service. If this port is already in use, assign a different port to this service by modifying the following line in the ‘/etc/ganesha/ganesha.conf’ file on all the nodes.
    # Use a non-privileged port for RQuota
    Rquota_Port = 875;
6.3.3.2.4. Creating the ganesha-ha.conf file
The ganesha-ha.conf.sample file is created in /etc/ganesha when Red Hat Gluster Storage is installed. Rename the file to ganesha-ha.conf and make changes based on your environment.
  1. Create a directory named nfs-ganesha under /var/run/gluster/shared_storage

    Note

    With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/ .
  2. Copy the ganesha.conf and ganesha-ha.conf files from /etc/ganesha to /var/run/gluster/shared_storage/nfs-ganesha.
Sample ganesha-ha.conf file:
# Name of the HA cluster created.
# must be unique within the subnet
HA_NAME="ganesha-ha-360"
#
#
# You may use short names or long names; you may not use IP addresses.
# Once you select one, stay with it as it will be mildly unpleasant to clean
# up if you switch later on. Ensure that all names - short and/or long - are in
# DNS or /etc/hosts on all machines in the cluster.
#
# The subset of nodes of the Gluster Trusted Pool that form the ganesha HA
# cluster. Hostname is specified.
HA_CLUSTER_NODES="server1.lab.redhat.com,server2.lab.redhat.com,..."
#
# Virtual IPs for each of the nodes specified above.
VIP_server1="10.0.2.1"
VIP_server2="10.0.2.2"
#VIP_server1_lab_redhat_com="10.0.2.1"
#VIP_server2_lab_redhat_com="10.0.2.2"
....
....

Note

  • Pacemaker handles the creation of the VIP and assigning an interface.
  • Ensure that the VIP is in the same network range.
  • Ensure that the HA_CLUSTER_NODES are specified as hostnames. Using IP addresses will cause clustering to fail.
6.3.3.2.5. Configuring NFS-Ganesha using Gluster CLI
Setting up the HA cluster

To set up the HA cluster, enable NFS-Ganesha by executing the following command:

  1. Enable NFS-Ganesha by executing the following command
    # gluster nfs-ganesha enable

    Note

    Before enabling or disabling NFS-Ganesha, ensure that all the nodes that are part of the NFS-Ganesha cluster are up.
    For example,
    # gluster nfs-ganesha enable
    Enabling NFS-Ganesha requires Gluster-NFS to be disabled across the trusted pool. Do you still want to continue?
     (y/n) y
    This will take a few minutes to complete. Please wait ..
    nfs-ganesha : success

    Note

    After enabling NFS-Ganesha, if rpcinfo -p shows a statd port different from 662, restart the statd service:
    For Red Hat Enterprise Linux 7:
    # systemctl restart rpc-statd
    Tearing down the HA cluster

    To tear down the HA cluster, execute the following command:

    # gluster nfs-ganesha disable
    For example,
    # gluster nfs-ganesha disable
    Disabling NFS-Ganesha will tear down entire ganesha cluster across the trusted pool. Do you still want to continue?
    (y/n) y
    This will take a few minutes to complete. Please wait ..
    nfs-ganesha : success
    Verifying the status of the HA cluster

    To verify the status of the HA cluster, execute the following script:

    # /usr/libexec/ganesha/ganesha-ha.sh --status /var/run/gluster/shared_storage/nfs-ganesha

    Note

    With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/ .
    For example:
    # /usr/libexec/ganesha/ganesha-ha.sh --status /var/run/gluster/shared_storage/nfs-ganesha
     Online: [ server1 server2 server3 server4 ]
    server1-cluster_ip-1 server1
    server2-cluster_ip-1 server2
    server3-cluster_ip-1 server3
    server4-cluster_ip-1 server4
    Cluster HA Status: HEALTHY
    

    Note

    • It is recommended to manually restart the ganesha.nfsd service after the node is rebooted, to fail back the VIPs.
    • Disabling NFS Ganesha does not enable Gluster NFS by default. If required, Gluster NFS must be enabled manually.

Note

Ensure that you disable the RQUOTA port to avoid the following issues, which are described in Section 6.3.3.9, “Troubleshooting NFS Ganesha”:
  1. NFS-Ganesha fails to start.
  2. NFS-Ganesha port 875 is unavailable.
To disable the RQUOTA port, perform the following steps:
  1. The ganesha.conf file is available at /etc/ganesha/ganesha.conf.
  2. Uncomment the line #Enable_RQUOTA = false; to disable RQUOTA.
  3. Restart the nfs-ganesha service on all nodes.
    # systemctl restart nfs-ganesha
6.3.3.2.6. Exporting and Unexporting Volumes through NFS-Ganesha

Note

Start Red Hat Gluster Storage Volume before enabling NFS-Ganesha.
Exporting Volumes through NFS-Ganesha

To export a Red Hat Gluster Storage volume, execute the following command:

# gluster volume set <volname> ganesha.enable on
For example:
# gluster vol set testvol ganesha.enable on
volume set: success
Unexporting Volumes through NFS-Ganesha

To unexport a Red Hat Gluster Storage volume, execute the following command:

# gluster volume set <volname> ganesha.enable off
This command unexports the Red Hat Gluster Storage volume without affecting other exports.
For example:
# gluster vol set testvol ganesha.enable off
volume set: success
6.3.3.2.7. Verifying the NFS-Ganesha Status
To verify the status of the volume set options, follow the guidelines mentioned below:
  • Check if NFS-Ganesha is started by executing the following commands:
    On Red Hat Enterprise Linux-7
    # systemctl status nfs-ganesha
    For example:
    # systemctl  status nfs-ganesha
       nfs-ganesha.service - NFS-Ganesha file server
       Loaded: loaded (/usr/lib/systemd/system/nfs-ganesha.service; disabled)
       Active: active (running) since Tue 2015-07-21 05:08:22 IST; 19h ago
       Docs: http://github.com/nfs-ganesha/nfs-ganesha/wiki
       Main PID: 15440 (ganesha.nfsd)
       CGroup: /system.slice/nfs-ganesha.service
                   └─15440 /usr/bin/ganesha.nfsd -L /var/log/ganesha/ganesha.log -f /etc/ganesha/ganesha.conf -N NIV_EVENT
       Jul 21 05:08:22 server1 systemd[1]: Started NFS-Ganesha file server.]
    
    
  • Check if the volume is exported.
    # showmount -e localhost
    For example:
    # showmount -e localhost
    Export list for localhost:
    /volname (everyone)
  • The logs of ganesha.nfsd daemon are written to /var/log/ganesha/ganesha.log. Check the log file on noticing any unexpected behavior.

6.3.3.3. Accessing NFS-Ganesha Exports

NFS-Ganesha exports can be accessed by mounting them in either NFSv3 or NFSv4 mode. Since this is an active-active HA configuration, the mount operation can be performed from the VIP of any node.
For better large-file performance on all workloads generated on Red Hat Enterprise Linux 7 clients, it is recommended to set the following tunables before mounting the volume:
  1. Execute the following commands to set the tunable:
    # sysctl -w sunrpc.tcp_slot_table_entries=128
    # echo 128 > /proc/sys/sunrpc/tcp_slot_table_entries
    # echo 128 > /proc/sys/sunrpc/tcp_max_slot_table_entries
  2. To make the tunable persistent on reboot, execute the following commands:
    # echo "options sunrpc tcp_slot_table_entries=128" >> /etc/modprobe.d/sunrpc.conf
    # echo "options sunrpc tcp_max_slot_table_entries=128" >>  /etc/modprobe.d/sunrpc.conf

Note

Ensure that NFS clients and NFS-Ganesha servers in the cluster are DNS resolvable with unique host-names to use file locking through Network Lock Manager (NLM) protocol.
6.3.3.3.1. Mounting exports in NFSv3 Mode
To mount an export in NFSv3 mode, execute the following command:
# mount -t nfs -o vers=3 virtual_ip:/volname /mountpoint
For example:
# mount -t nfs -o vers=3 10.70.0.0:/testvol /mnt
6.3.3.3.2. Mounting exports in NFSv4 Mode
To mount an export in NFSv4 mode on RHEL 7 client(s), execute the following command:
# mount -t nfs -o vers=4 virtual_ip:/volname /mountpoint
For example:
# mount -t nfs -o vers=4 10.70.0.0:/testvol /mnt

Important

The default version for RHEL 8 is NFSv4.2
To mount an export in a specific NFS version on RHEL 8 client(s), execute the following command:
# mount -t nfs -o vers=4.0 or 4.1 virtual_ip:/volname /mountpoint
For example:
# mount -t nfs -o vers=4.1 10.70.0.0:/testvol /mnt
6.3.3.3.3. Finding clients of an NFS server using dbus
To display the IP addresses of clients that have mounted the NFS exports, execute the following command:
# dbus-send --type=method_call --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ClientMgr org.ganesha.nfsd.clientmgr.ShowClients

Note

If the NFS export is unmounted or if a client is disconnected from the server, it may take a few minutes for this to be updated in the command output.
6.3.3.3.4.  Finding authorized client list and other information from an NFS server using dbus
To display the authorized client access list and other export options configured from an NFS server, execute the following command:
# dbus-send --type=method_call --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.DisplayExport uint16:Export_Id
This command, along with the ACLs, fetches other information like the fullpath, pseudopath, and tag of the export volume. The fullpath and the pseudopath are used for mounting the export volume.
The dbus DisplayExport command will give clients details of the export volume. The output syntax is as follows:
uint16 export_id
string fullpath
string pseudopath
string tag
array[
  struct {
    string client_type
    int32 CIDR_version
    byte CIDR_address
    byte CIDR_mask
    int32 CIDR_proto
    uint32 anonymous_uid
    uint32 anonymous_gid
    uint32 expire_time_attr
    uint32 options
    uint32 set
    }
    struct {
      .
      .
      .
      }
      .
      .
      .
    ]
In the above output, client_type is the client’s IP address; CIDR_version, CIDR_address, CIDR_mask, and CIDR_proto are the CIDR representation details of the client; and anonymous_uid, anonymous_gid, expire_time_attr, options, and set are the client permissions.
For example:
# dbus-send --type=method_call --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.DisplayExport uint16:2

method return time=1559209192.642525 sender=:1.5491 -> destination=:1.5510 serial=370 reply_serial=2
uint16 2
string "/mani1"
string "/mani1"
string ""
array [
  struct {
    string "10.70.46.107/32"
    int32 0
    byte 0
    byte 255
    int32 1
    uint32 1440
    uint32 72
    uint32 0
    uint32 52441250
    uint32 7340536
    }
  struct {
    string "10.70.47.152/32"
    int32 0
    byte 0
    byte 255
    int32 1
    uint32 1440
    uint32 72
    uint32 0
    uint32 51392994
    uint32 7340536
    }
  ]

6.3.3.4. Modifying the NFS-Ganesha HA Setup

To modify the existing HA cluster and to change the default values of the exports, use the ganesha-ha.sh script located at /usr/libexec/ganesha/.
6.3.3.4.1. Adding a Node to the Cluster
Before adding a node to the cluster, ensure that the firewall services are enabled as mentioned in Port and Firewall Information for NFS-Ganesha and that the prerequisites mentioned in section Prerequisites to run NFS-Ganesha are met.

Note

Since shared storage and /var/lib/glusterd/nfs/secret.pem SSH key are already generated, those steps should not be repeated.
To add a node to the cluster, execute the following command on any of the nodes in the existing NFS-Ganesha cluster:
# /usr/libexec/ganesha/ganesha-ha.sh --add <HA_CONF_DIR> <HOSTNAME> <NODE-VIP>
where,
HA_CONF_DIR: The directory path containing the ganesha-ha.conf file. By default it is /run/gluster/shared_storage/nfs-ganesha.
HOSTNAME: Hostname of the new node to be added
NODE-VIP: Virtual IP of the new node to be added.
For example:
# /usr/libexec/ganesha/ganesha-ha.sh --add /var/run/gluster/shared_storage/nfs-ganesha server16 10.00.00.01

Note

With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/ .
6.3.3.4.2. Deleting a Node in the Cluster
To delete a node from the cluster, execute the following command on any of the nodes in the existing NFS-Ganesha cluster:
# /usr/libexec/ganesha/ganesha-ha.sh --delete <HA_CONF_DIR> <HOSTNAME>
where,
HA_CONF_DIR: The directory path containing the ganesha-ha.conf file. By default it is located at /run/gluster/shared_storage/nfs-ganesha.
HOSTNAME: Hostname of the node to be deleted
For example:
# /usr/libexec/ganesha/ganesha-ha.sh --delete /var/run/gluster/shared_storage/nfs-ganesha  server16

Note

With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/ .
6.3.3.4.3. Replacing a Node in the Cluster
To replace a node in the existing NFS-Ganesha cluster:
  1. Delete the node from the cluster. Refer to Section 6.3.3.4.2, “Deleting a Node in the Cluster”.
  2. Add the new node to the cluster. Refer to Section 6.3.3.4.1, “Adding a Node to the Cluster”.

    Note

    It is not required for the new node to have the same name as that of the old node.

6.3.3.5. Modifying the Default Export Configurations

It is recommended to use gluster CLI options to export or unexport volumes through NFS-Ganesha. However, this section provides some information on changing configurable parameters in NFS-Ganesha. Such parameter changes require NFS-Ganesha to be started manually.
For the various supported export options, see the ganesha-export-config(8) man page.
To modify the default export configurations perform the following steps on any of the nodes in the existing ganesha cluster:
  1. Edit/add the required fields in the corresponding export file located at /run/gluster/shared_storage/nfs-ganesha/exports/.
  2. Execute the following command
    # /usr/libexec/ganesha/ganesha-ha.sh --refresh-config <HA_CONF_DIR> <volname>
where:
  • HA_CONF_DIR: The directory path containing the ganesha-ha.conf file. By default it is located at /run/gluster/shared_storage/nfs-ganesha.
  • volname: The name of the volume whose export configuration has to be changed.
Sample export configuration file:
The following are the default set of parameters required to export any entry. The values given here are the default values used by the CLI options to start or stop NFS-Ganesha.
# cat export.conf

EXPORT{
    Export_Id = 1 ;   # Export ID unique to each export
    Path = "volume_path";  # Path of the volume to be exported. Eg: "/test_volume"

    FSAL {
        name = GLUSTER;
        hostname = "10.xx.xx.xx";  # IP of one of the nodes in the trusted pool
        volume = "volume_name";     # Volume name. Eg: "test_volume"
    }

    Access_type = RW;     # Access permissions
    Squash = No_root_squash; # To enable/disable root squashing
    Disable_ACL = true;     # To enable/disable ACL
    Pseudo = "pseudo_path";     # NFSv4 pseudo path for this export. Eg: "/test_volume_pseudo"
    Protocols = "3", "4";     # NFS protocols supported
    Transports = "UDP", "TCP"; # Transport protocols supported
    SecType = "sys";     # Security flavors supported
}
The following sections describe various configurations possible via NFS-Ganesha. Minor changes have to be made to the export.conf file to see the expected behavior.
  • Providing Permissions for Specific Clients
  • Enabling and Disabling NFSv4 ACLs
  • Providing Pseudo Path for NFSv4 Mount
  • Exporting Subdirectories
6.3.3.5.1. Providing Permissions for Specific Clients
The parameter values and permission values given in the EXPORT block apply to any client that mounts the exported volume. To provide specific permissions to specific clients, introduce a client block inside the EXPORT block.
For example, to assign specific permissions for client 10.00.00.01, add the following block in the EXPORT block.
client {
        clients = 10.00.00.01;  # IP of the client.
        access_type = "RO"; # Read-only permissions
        Protocols = "3"; # Allow only NFSv3 protocol.
        anonymous_uid = 1440;
        anonymous_gid = 72;
  }
All the other clients inherit the permissions that are declared outside the client block.
6.3.3.5.2. Enabling and Disabling NFSv4 ACLs
To enable NFSv4 ACLs, edit the following parameter:
Disable_ACL = false;

Note

NFS clients should remount their share after enabling/disabling ACLs on the NFS-Ganesha server.
6.3.3.5.3. Providing Pseudo Path for NFSv4 Mount
To set the NFSv4 pseudo path, edit the following parameter:
Pseudo = "pseudo_path"; # NFSv4 pseudo path for this export. Eg: "/test_volume_pseudo"
This path has to be used while mounting the export entry in NFSv4 mode.
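For example, assuming the pseudo path is set to "/test_volume_pseudo" and using a placeholder virtual IP, the export would be mounted in NFSv4 mode as follows:
# mount -t nfs -o vers=4 10.70.0.0:/test_volume_pseudo /mnt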
6.3.3.5.4. Exporting Subdirectories
You can follow either of the following two methods to export a subdirectory in NFS-Ganesha:
Method 1: Creating a separate export file. This will export the sub-directory shares without disrupting existing clients connected to other shares.
  • Create a separate export file for the sub-directory.
    # cat export.ganesha-dir.conf
            # WARNING : Using Gluster CLI will overwrite manual
            # changes made to this file. To avoid it, edit the
            # file and run ganesha-ha.sh --refresh-config.
            EXPORT{
                  Export_Id = 3;
                  Path = "/ganesha/dir";
                  FSAL {
                       name = GLUSTER;
                       hostname = "localhost";
                       volume = "ganesha";
                       volpath = "/dir";
                       }
                  Access_type = RW;
                  Disable_ACL = true;
                  Squash="No_root_squash";
                  Pseudo="/ganesha/dir";
                  Protocols = "3", "4";
                  Transports = "UDP","TCP";
                  SecType = "sys";
                 }
  • Change the Export_Id to any unique unused ID. Edit the Path and Pseudo parameters and add the volpath entry to the export file.
  • If a new export file is created for the sub-directory, you must add its entry to the ganesha.conf file.
     %include "/var/run/gluster/shared_storage/nfs-ganesha/exports/export.<share-name>.conf"

    Note

    With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/ .
    For example:
                  %include "/var/run/gluster/shared_storage/nfs-ganesha/exports/export.ganesha.conf" --> Volume entry
                  %include "/var/run/gluster/shared_storage/nfs-ganesha/exports/export.ganesha-dir.conf" --> Subdir entry
  • Execute the following script to export the sub-directory shares without disrupting existing clients connected to other shares:
    # /usr/libexec/ganesha/ganesha-ha.sh --refresh-config <HA_CONF_DIR> <share-name>
    For example:
     # /usr/libexec/ganesha/ganesha-ha.sh --refresh-config /run/gluster/shared_storage/nfs-ganesha/ ganesha-dir
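To optionally verify that the sub-directory is now exported, you can list the exported entries (NFSv3 exports) on one of the servers:
  # showmount -e localhost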
Method 2: Editing the volume export file with sub-directory entries in it. This method exports only the sub-directory and not the parent volume.
  • Edit the volume export file with subdir entry.
    For Example:
    # cat export.ganesha.conf
                  # WARNING : Using Gluster CLI will overwrite manual
                  # changes made to this file. To avoid it, edit the
                  # file and run ganesha-ha.sh --refresh-config.
                  EXPORT{
                        Export_Id = 4;
                        Path = "/ganesha/dir1";
                        FSAL {
                             name = GLUSTER;
                             hostname = "localhost";
                             volume = "ganesha";
                             volpath = "/dir1";
                             }
                        Access_type = RW;
                        Disable_ACL = true;
                        Squash="No_root_squash";
                        Pseudo="/ganesha/dir1";
                        Protocols = "3", "4";
                        Transports = "UDP","TCP";
                        SecType = "sys";
                       }
  • Change the Export_Id to any unique unused ID. Edit the Path and Pseudo parameters and add the volpath entry to the export file.
  • Execute the following script to export the sub-directory shares without disrupting existing clients connected to other shares:
    # /usr/libexec/ganesha/ganesha-ha.sh --refresh-config <HA_CONF_DIR> <share-name>
    For example:
     # /usr/libexec/ganesha/ganesha-ha.sh --refresh-config /run/gluster/shared_storage/nfs-ganesha/ ganesha

    Note

    If the same export file contains multiple EXPORT{} entries, then a volume restart or nfs-ganesha service restart is required.
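Once the sub-directory is exported, a client can mount it directly, for example (the server host name and mount point are placeholders):
# mount -t nfs -o vers=4.0 <server_hostname>:/ganesha/dir1 /mnt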
6.3.3.5.4.1. Enabling all_squash option
To enable all_squash, edit the following parameter:
Squash = all_squash ; # Squash all users, including root, to the anonymous user
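A minimal sketch of an EXPORT entry fragment using all_squash, optionally combined with the anonymous_uid and anonymous_gid parameters shown in the client block example earlier (the UID/GID values are placeholders):
Squash = all_squash;      # Squash all users to the anonymous user
anonymous_uid = 1440;     # Optional: UID that squashed users are mapped to
anonymous_gid = 72;       # Optional: GID that squashed users are mapped to
After editing the export file, apply the change with the ganesha-ha.sh --refresh-config command described earlier.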
6.3.3.5.5. Unexporting Subdirectories
A sub-directory can be unexported from NFS-Ganesha by performing the following steps:
  1. Note the Export ID of the share that you want to unexport from the configuration file (/var/run/gluster/shared_storage/nfs-ganesha/exports/file-name.conf)
  2. Delete the configuration:
    • Delete the configuration file (if there is a separate configuration file):
      # rm -rf /var/run/gluster/shared_storage/nfs-ganesha/exports/file-name.conf
    • Delete the entry of the conf file from /etc/ganesha/ganesha.conf
      Remove the line:
      %include "/var/run/gluster/shared_storage/nfs-ganesha/export/export.conf
    • Run the below command:
      # dbus-send --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.RemoveExport uint16:export_id
      The export_id in the above command must be the Export ID of the entry obtained in step 1.
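      For example, to remove the sub-directory export created earlier with Export_Id = 3, after deleting its configuration entry:
      # dbus-send --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.RemoveExport uint16:3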

6.3.3.6. Configuring Kerberized NFS-Ganesha

Note

NTP is no longer supported on Red Hat Enterprise Linux 8.
Execute the following steps on all the machines:
  1. Install the krb5-workstation and the ntpdate (RHEL 7) or the chrony (RHEL 8) packages on all the machines:
    # yum install krb5-workstation
    For Red Hat Enterprise Linux 7:
    # yum install ntpdate
    For Red Hat Enterprise Linux 8:
    # dnf install chrony

    Note

    • The krb5-libs package will be updated as a dependent package.
  2. For RHEL 7, configure ntpdate with a valid time server according to your environment:
    # echo <valid_time_server> >> /etc/ntp/step-tickers
    # systemctl enable ntpdate
    # systemctl start ntpdate
    For RHEL 8, configure chrony with a valid time server according to your environment:
      # vi /etc/chrony.conf
      # systemctl enable chronyd
      # systemctl start chronyd
    On both RHEL 7 and RHEL 8, perform the following steps:
  3. Ensure that all systems can resolve each other by FQDN in DNS.
  4. Configure the /etc/krb5.conf file and add relevant changes accordingly. For example:
    [logging]
     default = FILE:/var/log/krb5libs.log
     kdc = FILE:/var/log/krb5kdc.log
     admin_server = FILE:/var/log/kadmind.log

    [libdefaults]
     dns_lookup_realm = false
     ticket_lifetime = 24h
     renew_lifetime = 7d
     forwardable = true
     rdns = false
     default_realm = EXAMPLE.COM
     default_ccache_name = KEYRING:persistent:%{uid}

    [realms]
     EXAMPLE.COM = {
      kdc = kerberos.example.com
      admin_server = kerberos.example.com
     }

    [domain_realm]
     .example.com = EXAMPLE.COM
     example.com = EXAMPLE.COM

    Note

    For further details regarding the file configuration, refer to man krb5.conf.
  5. On the NFS-server and client, update the /etc/idmapd.conf file by making the required change. For example:
    Domain = example.com
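Optionally, to confirm that the Kerberos configuration on a machine is usable, obtain and list a ticket for an existing principal (here root/admin@EXAMPLE.COM, the admin principal used in the kadmin examples later in this section):
    # kinit root/admin@EXAMPLE.COM
    # klist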
6.3.3.6.1. Setting up the NFS-Ganesha Server
Execute the following steps to set up the NFS-Ganesha server:

Note

Before setting up the NFS-Ganesha server, make sure to set up the KDC based on the requirements.
  1. Install the following packages:
    # yum install nfs-utils
    # yum install rpcbind
  2. Install the relevant gluster and NFS-Ganesha rpms. For more information, see the Red Hat Gluster Storage 3.5 Installation Guide.
  3. Create a Kerberos principal and add it to krb5.keytab on the NFS-Ganesha server:
    $ kadmin
    $ kadmin: addprinc -randkey nfs/<host_name>@EXAMPLE.COM
    $ kadmin: ktadd nfs/<host_name>@EXAMPLE.COM
    For example:
    # kadmin
    Authenticating as principal root/admin@EXAMPLE.COM with password.
    Password for root/admin@EXAMPLE.COM:
    
    kadmin:  addprinc -randkey nfs/<host_name>@EXAMPLE.COM
    WARNING: no policy specified for nfs/<host_name>@EXAMPLE.COM; defaulting to no policy
    Principal "nfs/<host_name>@EXAMPLE.COM" created.
    
    
    kadmin:  ktadd nfs/<host_name>@EXAMPLE.COM
    Entry for principal nfs/<host_name>@EXAMPLE.COM with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab FILE:/etc/krb5.keytab.
    Entry for principal nfs/<host_name>@EXAMPLE.COM with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab FILE:/etc/krb5.keytab.
    Entry for principal nfs/<host_name>@EXAMPLE.COM with kvno 2, encryption type des3-cbc-sha1 added to keytab FILE:/etc/krb5.keytab.
    Entry for principal nfs/<host_name>@EXAMPLE.COM with kvno 2, encryption type arcfour-hmac added to keytab FILE:/etc/krb5.keytab.
    Entry for principal nfs/<host_name>@EXAMPLE.COM with kvno 2, encryption type camellia256-cts-cmac added to keytab FILE:/etc/krb5.keytab.
    Entry for principal nfs/<host_name>@EXAMPLE.COM with kvno 2, encryption type camellia128-cts-cmac added to keytab FILE:/etc/krb5.keytab.
    Entry for principal nfs/<host_name>@EXAMPLE.COM with kvno 2, encryption type des-hmac-sha1 added to keytab FILE:/etc/krb5.keytab.
    Entry for principal nfs/<host_name>@EXAMPLE.COM with kvno 2, encryption type des-cbc-md5 added to keytab FILE:/etc/krb5.keytab.
  4. Update /etc/ganesha/ganesha.conf file as mentioned below:
    NFS_KRB5
    {
            PrincipalName = nfs ;
            KeytabPath = /etc/krb5.keytab ;
            Active_krb5 = true ;
    }
  5. Based on the different Kerberos security flavors (krb5, krb5i and krb5p) supported by NFS-Ganesha, configure the SecType parameter in the volume export file (/var/run/gluster/shared_storage/nfs-ganesha/exports) with the appropriate security flavor; see the example after this procedure.

    Note

    With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/ .
  6. Create an unprivileged user and ensure that the users that are created are resolvable to the UIDs through the central user database. For example:
    # useradd guest

    Note

    The username of this user has to be the same as the one on the NFS-client.
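For step 5 of the above procedure, a minimal sketch of the SecType setting in the export file, using krb5p (Kerberos authentication with integrity and privacy) as an assumed choice of flavor:
    SecType = "krb5p";     # Kerberos security flavor for this export
After editing the export file, apply the change with the ganesha-ha.sh --refresh-config command described earlier.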
6.3.3.6.2. Setting up the NFS Client
Execute the following steps to set up the NFS client:

Note

For detailed information on setting up NFS clients for security on Red Hat Enterprise Linux, see Section 8.8.2 NFS Security, in the Red Hat Enterprise Linux 7 Storage Administration Guide.
  1. Install the following packages:
    # yum install nfs-utils
    # yum install rpcbind
  2. Create a Kerberos principal and add it to krb5.keytab on the client side. For example:
    # kadmin
    # kadmin: addprinc -randkey host/<host_name>@EXAMPLE.COM
    # kadmin: ktadd host/<host_name>@EXAMPLE.COM
    # kadmin
    Authenticating as principal root/admin@EXAMPLE.COM with password.
    Password for root/admin@EXAMPLE.COM:
    
    kadmin:  addprinc -randkey host/<host_name>@EXAMPLE.COM
    WARNING: no policy specified for host/<host_name>@EXAMPLE.COM; defaulting to no policy
    Principal "host/<host_name>@EXAMPLE.COM" created.
    
    kadmin:  ktadd host/<host_name>@EXAMPLE.COM
    Entry for principal host/<host_name>@EXAMPLE.COM with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab FILE:/etc/krb5.keytab.
    Entry for principal host/<host_name>@EXAMPLE.COM with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab FILE:/etc/krb5.keytab.
    Entry for principal host/<host_name>@EXAMPLE.COM with kvno 2, encryption type des3-cbc-sha1 added to keytab FILE:/etc/krb5.keytab.
    Entry for principal host/<host_name>@EXAMPLE.COM with kvno 2, encryption type arcfour-hmac added to keytab FILE:/etc/krb5.keytab.
    Entry for principal host/<host_name>@EXAMPLE.COM with kvno 2, encryption type camellia256-cts-cmac added to keytab FILE:/etc/krb5.keytab.
    Entry for principal host/<host_name>@EXAMPLE.COM with kvno 2, encryption type camellia128-cts-cmac added to keytab FILE:/etc/krb5.keytab.
    Entry for principal host/<host_name>@EXAMPLE.COM with kvno 2, encryption type des-hmac-sha1 added to keytab FILE:/etc/krb5.keytab.
    Entry for principal host/<host_name>@EXAMPLE.COM with kvno 2, encryption type des-cbc-md5 added to keytab FILE:/etc/krb5.keytab.
  3. Check the status of nfs-client.target service and start it, if not already started:
    # systemctl status nfs-client.target
    # systemctl start nfs-client.target
    # systemctl enable nfs-client.target
  4. Create an unprivileged user and ensure that the users that are created are resolvable to the UIDs through the central user database. For example:
    # useradd guest

    Note

    The username of this user has to be the same as the one on the NFS-server.
  5. Mount the volume specifying kerberos security type:
    # mount -t nfs -o sec=krb5 <host_name>:/testvolume /mnt
    As root, all access should be granted.
    For example:
    Creation of a directory on the mount point and all other operations as root should be successful.
    # mkdir <directory name>
  6. Log in as the guest user:
    # su - guest
    Without a kerberos ticket, all access to /mnt should be denied. For example:
    # su guest
    # ls
    ls: cannot open directory .: Permission denied
  7. Get the kerberos ticket for the guest and access /mnt:
    # kinit
    Password for guest@EXAMPLE.COM:
    
    # ls
    <directory created>

    Important

    With this ticket, some access to /mnt is now allowed. If there are directories on the NFS-server to which "guest" does not have access, that restriction continues to be enforced correctly.

6.3.3.7. NFS-Ganesha Service Downtime

In a highly available active-active environment, if an NFS-Ganesha server that is connected to an NFS client running a particular application goes down, the application/NFS client is seamlessly connected to another NFS-Ganesha server without any administrative intervention. However, there is a delay or fail-over time in connecting to another NFS-Ganesha server. This delay can also be experienced during fail-back, that is, when the connection is reset to the original node/server.
The following list describes how the fail-over time, that is, the time taken for an NFS client to detect a server reboot or resume, is calculated.
  • If the ganesha.nfsd process dies (crashes, is OOM-killed, or is killed by the administrator), the maximum time to detect it and put the ganesha cluster into grace is 20 seconds, plus whatever time pacemaker needs to effect the fail-over.

    Note

    The time taken to detect that the service is down can be adjusted using the following commands on all the nodes (a concrete example follows this list):
    # pcs resource op remove nfs-mon monitor
    # pcs resource op add nfs-mon monitor interval=<interval_period_value>
  • If the whole node dies (including network failure) then this down time is the total of whatever time pacemaker needs to detect that the node is gone, the time to put the cluster into grace, and the time to effect the fail-over. This is ~20 seconds.
  • So the max-fail-over time is approximately 20-22 seconds, and the average time is typically less. In other words, the time taken for NFS clients to detect server reboot or resume I/O is 20 - 22 seconds.
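For example, to set the nfs-mon monitor interval to 10 seconds (an illustrative value, not a recommendation) on all the nodes:
# pcs resource op remove nfs-mon monitor
# pcs resource op add nfs-mon monitor interval=10s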
6.3.3.7.1. Modifying the Fail-over Time
After failover, there is a short period of time during which clients try to reclaim their lost OPEN/LOCK state. Servers block certain file operations during this period, as per the NFS specification. The file operations blocked are as follows:
Table 6.7. File Operations Blocked During the Grace Period
Protocols File Operations
NFSV3
  • SETATTR
NLM
  • LOCK
  • UNLOCK
  • SHARE
  • UNSHARE
  • CANCEL
  • LOCKT
NFSV4
  • LOCK
  • LOCKT
  • OPEN
  • REMOVE
  • RENAME
  • SETATTR

Note

LOCK, SHARE, and UNSHARE will be blocked only if it is requested with reclaim set to FALSE.
OPEN will be blocked if requested with claim type other than CLAIM_PREVIOUS or CLAIM_DELEGATE_PREV.
The default value for the grace period is 90 seconds. This value can be changed by adding the following lines in the /etc/ganesha/ganesha.conf file.
NFSv4 {
Grace_Period=<grace_period_value_in_sec>;
}
After editing the /etc/ganesha/ganesha.conf file, restart the NFS-Ganesha service using the following command on all the nodes:
On Red Hat Enterprise Linux 7

# systemctl restart nfs-ganesha
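For example, to reduce the grace period to 60 seconds (an illustrative value), add the following to the /etc/ganesha/ganesha.conf file and then restart the service on all the nodes:
NFSv4 {
    Grace_Period = 60;
}
# systemctl restart nfs-ganesha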

6.3.3.8. Tuning Readdir Performance for NFS-Ganesha

The NFS-Ganesha process reads the entire content of a directory in a single operation. Any parallel operations on that directory are paused until the readdir operation is complete. With Red Hat Gluster Storage 3.5, the Dir_Chunk parameter enables the directory content to be read in chunks. This parameter is enabled by default, with a default value of 128. The valid range for this parameter is 1 to UINT32_MAX. To disable chunked reads, set the value to 0.

Procedure 6.1. Configuring readdir performance for NFS-Ganesha

  1. Edit the /etc/ganesha/ganesha.conf file.
  2. Locate the CACHEINODE block.
  3. Add the Dir_Chunk parameter inside the block:
    CACHEINODE {
            Entries_HWMark = 125000;
            Chunks_HWMark = 1000;
            Dir_Chunk = 128; # Range: 1 to UINT32_MAX, 0 to disable
    }
  4. Save the ganesha.conf file and restart the NFS-Ganesha service on all nodes:
    # systemctl restart nfs-ganesha

6.3.3.9. Troubleshooting NFS Ganesha

Mandatory checks

Ensure you execute the following commands for all the issues/failures that are encountered:

  • Make sure all the prerequisites are met.
  • Execute the following commands to check the status of the services:
    # service nfs-ganesha status
    # service pcsd status
    # service pacemaker status
    # pcs status
  • Review the following logs to understand the cause of failure.
    /var/log/ganesha/ganesha.log
    /var/log/ganesha/ganesha-gfapi.log
    /var/log/messages
    /var/log/pcsd.log
    
  • Situation

    NFS-Ganesha fails to start.

    Solution

    Ensure you execute all the mandatory checks to understand the root cause before proceeding with the following steps. Follow the listed steps to fix the issue:

    1. Ensure the kernel and gluster nfs services are inactive.
    2. Ensure that the port 875 is free to connect to the RQUOTA service.
    3. Ensure that the shared storage volume mount exists on the server after node reboot/shutdown. If it does not, then mount the shared storage volume manually using the following command:
      # mount -t glusterfs <local_node's_hostname>:gluster_shared_storage /var/run/gluster/shared_storage

      Note

      With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/ .
    For more information, see the section Exporting and Unexporting Volumes through NFS-Ganesha.
  • Situation

    NFS-Ganesha port 875 is unavailable.

    Solution

    Ensure you execute all the mandatory checks to understand the root cause before proceeding with the following steps. Follow the listed steps to fix the issue:

    1. Run the following command to extract the PID of the process using port 875:
      # netstat -anlp | grep 875
    2. Determine if the process using port 875 is an important system or user process.
    3. Perform one of the following depending upon the importance of the process:
      • If the process using port 875 is an important system or user process:
        1. Assign a different port to this service by modifying the following line in the /etc/ganesha/ganesha.conf file on all the nodes:
          # Use a non-privileged port for RQuota
          Rquota_Port = port_number;
        2. Run the following commands after modifying the port number:
          # semanage port -a -t mountd_port_t -p tcp port_number
          # semanage port -a -t mountd_port_t -p udp port_number
        3. Run the following command to restart NFS-Ganesha:
          # systemctl restart nfs-ganesha
      • If the process using port 875 is not an important system or user process:
        1. Run the following command to kill the process using port 875:
          # kill pid;
          Use the process ID extracted from the previous step.
        2. Run the following command to ensure that the process is killed and port 875 is free to use:
          # ps aux | grep pid;
        3. Run the following command to restart NFS-Ganesha:
          # systemctl restart nfs-ganesha
        4. If required, restart the killed process.
  • Situation

    NFS-Ganesha Cluster setup fails.

    Solution

    Ensure you execute all the mandatory checks to understand the root cause before proceeding with the following steps.

    1. Ensure the kernel and gluster nfs services are inactive.
    2. Ensure that the pcs cluster auth command is executed on all the nodes with the same password for the user hacluster.
    3. Ensure that the shared storage volume is mounted on all the nodes.
    4. Ensure that the name of the HA Cluster does not exceed 15 characters.
    5. Ensure UDP multicast packets are pingable using OMPING.
    6. Ensure that Virtual IPs are not assigned to any NIC.
  • Situation

    NFS-Ganesha has started and fails to export a volume.

    Solution

    Ensure you execute all the mandatory checks to understand the root cause before proceeding with the following steps. Follow the listed steps to fix the issue:

    1. Ensure that volume is in Started state using the following command:
      # gluster volume status <volname>
      
    2. Execute the following commands to check the status of the services:
      # service nfs-ganesha status
      # showmount -e localhost
    3. Review the following logs to understand the cause of failure.
      /var/log/ganesha/ganesha.log
      /var/log/ganesha/ganesha-gfapi.log
      /var/log/messages
    4. Ensure that the dbus service is running using the following command:
      # service messagebus status
    5. If the volume is not in a started state, run the following command to start the volume.
      # gluster volume start <volname>
      If the volume is not exported as part of volume start, run the following command to re-export the volume:
      # /usr/libexec/ganesha/dbus-send.sh /var/run/gluster/shared_storage on <volname>

      Note

      With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/ .
  • Situation

    Adding a new node to the HA cluster fails.

    Solution

    Ensure you execute all the mandatory checks to understand the root cause before proceeding with the following steps. Follow the listed steps to fix the issue:

    1. Ensure to run the following command from one of the nodes that is already part of the cluster:
      # ganesha-ha.sh --add <HA_CONF_DIR>  <NODE-HOSTNAME>  <NODE-VIP>
    2. Ensure that gluster_shared_storage volume is mounted on the node that needs to be added.
    3. Make sure that all the nodes of the cluster are DNS resolvable from the node that needs to be added.
    4. Execute the following command for each of the hosts in the HA cluster on the node that needs to be added:
      For Red Hat Enterprise Linux 7:
      # pcs cluster auth <hostname>
      For Red Hat Enterprise Linux 8:
      # pcs host auth <hostname>
  • Situation

    Cleanup required when nfs-ganesha HA cluster setup fails.

    Solution

    To restore the machines to their original state, execute the following commands on each node forming the cluster:

    # /usr/libexec/ganesha/ganesha-ha.sh --teardown /var/run/gluster/shared_storage/nfs-ganesha
    # /usr/libexec/ganesha/ganesha-ha.sh --cleanup /var/run/gluster/shared_storage/nfs-ganesha
    # systemctl stop nfs-ganesha

    Note

    With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/ .
  • Situation

    Permission issues.

    Solution

    By default, the root squash option is disabled when you start NFS-Ganesha using the CLI. If you encounter any permission issues, check the Unix permissions of the exported entry.
