6.2. NFS
Note
The glusterFS NFS server supports getfacl and setfacl operations on NFS clients. Access Control Lists (ACLs) are configured on the glusterFS NFS server with the nfs.acl option. For example:
- To set nfs.acl ON, run the following command:
# gluster volume set VOLNAME nfs.acl on
- To set nfs.acl OFF, run the following command:
# gluster volume set VOLNAME nfs.acl off
Note
The nfs.acl option is set to ON by default.
Important
On Red Hat Enterprise Linux 7, enable the NFS firewall services in the active zones for runtime and permanent mode. To get a list of active zones, run the following command:
# firewall-cmd --get-active-zones
To allow the firewall services in the active zones, run the following commands:
# firewall-cmd --zone=zone_name --add-service=nfs --add-service=rpc-bind
# firewall-cmd --zone=zone_name --add-service=nfs --add-service=rpc-bind --permanent
6.2.1. Setting up CTDB for NFS
Important
On Red Hat Enterprise Linux 7, open port 4379 in the active zones for runtime and permanent mode. To get a list of active zones, run the following command:
# firewall-cmd --get-active-zones
To open the port in the active zones, run the following commands:
# firewall-cmd --zone=zone_name --add-port=4379/tcp
# firewall-cmd --zone=zone_name --add-port=4379/tcp --permanent
6.2.1.1. Prerequisites
- If you already have an older version of CTDB (version <= ctdb1.x), then remove CTDB by executing the following command:
# yum remove ctdb
After removing the older version, proceed with installing the latest CTDB.
Note
Ensure that the system is subscribed to the samba channel to get the latest CTDB packages.
- Install the latest version of CTDB on all the nodes that are used as NFS servers using the following command:
# yum install ctdb
- In a CTDB-based high availability environment of Samba/NFS, the locks will not be migrated on failover.
- Ensure that TCP port 4379 is open between the Red Hat Gluster Storage servers; this is the internode communication port of CTDB.
6.2.1.2. Configuring CTDB on Red Hat Gluster Storage Server
- Create a replicate volume. This volume will host only a zero-byte lock file, so choose minimally sized bricks. To create a replicate volume, run the following command:
# gluster volume create volname replica n ipaddress:/brick path.......N times
where N is the number of nodes that are used as NFS servers. Each node must host one brick.
For example:
# gluster volume create ctdb replica 4 10.16.157.75:/rhgs/brick1/ctdb/b1 10.16.157.78:/rhgs/brick1/ctdb/b2 10.16.157.81:/rhgs/brick1/ctdb/b3 10.16.157.84:/rhgs/brick1/ctdb/b4
- In the following files, replace "all" in the statement META="all" with the newly created volume name:
/var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
/var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
For example:
META="all" to META="ctdb"
- Start the volume. The S29CTDBsetup.sh script runs on all Red Hat Gluster Storage servers, adds an entry in /etc/fstab for the mount, and mounts the volume at /gluster/lock on all the nodes that run the NFS server. It also enables automatic start of the CTDB service on reboot.
Note
When you stop the special CTDB volume, the S29CTDB-teardown.sh script runs on all Red Hat Gluster Storage servers, removes the entry from /etc/fstab for the mount, and unmounts the volume at /gluster/lock.
- Verify that the file /etc/sysconfig/ctdb exists on all the nodes that are used as NFS servers. This file contains the Red Hat Gluster Storage recommended CTDB configurations.
- Create the /etc/ctdb/nodes file on all the nodes that are used as NFS servers and add the IPs of these nodes to the file.
10.16.157.0
10.16.157.3
10.16.157.6
10.16.157.9
The IPs listed here are the private IPs of the NFS servers.
- On all the nodes that are used as NFS servers and that require IP failover, create the /etc/ctdb/public_addresses file and add the virtual IPs that CTDB should create to this file. Add these IP addresses in the following format:
<Virtual IP>/<routing prefix> <node interface>
For example:
192.168.1.20/24 eth0
192.168.1.21/24 eth0
- Start the CTDB service on all the nodes by executing the following command:
# service ctdb start
6.2.2. Using NFS to Mount Red Hat Gluster Storage Volumes
Note
The glusterFS NFS server supports only NFS version 3. If the NFS client defaults to a newer version, set the default version to 3 in the nfsmount.conf file at /etc/nfsmount.conf by adding the following text in the file:
Defaultvers=3
If the file is not modified, specify vers=3 manually in all the mount commands.
# mount nfsserver:export -o vers=3 /MOUNTPOINT
For a volume created with the transport type tcp,rdma, the transport used by the NFS server can be changed using the volume set option nfs.transport-type.
6.2.2.1. Manually Mounting Volumes Using NFS
Use the mount command to manually mount a Red Hat Gluster Storage volume using NFS.
- If a mount point has not yet been created for the volume, run the mkdir command to create a mount point.
# mkdir /mnt/glusterfs
- Run the correct mount command for the system.
- For Linux:
# mount -t nfs -o vers=3 server1:/test-volume /mnt/glusterfs
- For Solaris:
# mount -o vers=3 nfs://server1:38467/test-volume /mnt/glusterfs
Use the mount command to manually mount a Red Hat Gluster Storage volume using NFS over TCP.
Note
The glusterFS NFS server does not support UDP. If the NFS client you are using defaults to connecting over UDP, the following message appears: requested NFS version or transport protocol is not supported.
The nfs.mount-udp option is supported for mounting a volume; by default it is disabled. The following are the limitations:
- If nfs.mount-udp is enabled, the MOUNT protocol needed for NFSv3 can handle requests from NFS clients that require MOUNT over UDP. This is useful for at least some versions of Solaris, IBM AIX, and HP-UX.
- Currently, MOUNT over UDP does not support mounting subdirectories on a volume. Mounting server:/volume/subdir exports is only functional when MOUNT over TCP is used.
- MOUNT over UDP does not currently support the different authentication options that MOUNT over TCP honors. Enabling nfs.mount-udp may give more permissions to NFS clients than intended via various authentication options like nfs.rpc-auth-allow, nfs.rpc-auth-reject, and nfs.export-dir.
- If a mount point has not yet been created for the volume, run the mkdir command to create a mount point.
# mkdir /mnt/glusterfs
- Run the correct mount command for the system, specifying the TCP protocol option for the system.
- For Linux:
# mount -t nfs -o vers=3,mountproto=tcp server1:/test-volume /mnt/glusterfs
- For Solaris:
# mount -o proto=tcp nfs://server1:38467/test-volume /mnt/glusterfs
6.2.2.2. Automatically Mounting Volumes Using NFS
Note
To automount an NFS export with autofs, update the /etc/auto.master and /etc/auto.misc files, and restart the autofs service. Whenever a user or process attempts to access the directory, it will be mounted in the background on demand.
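A minimal autofs sketch (the /misc mount point and the /etc/auto.misc map are assumptions based on the stock autofs configuration; adjust the names to your layout):
In /etc/auto.master:
/misc /etc/auto.misc
In /etc/auto.misc:
glusterfs -fstype=nfs,vers=3 server1:/test-volume
Then restart autofs (service autofs restart or systemctl restart autofs). Accessing /misc/glusterfs triggers the NFS mount on demand.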
- Open the /etc/fstab file in a text editor.
- Append the following configuration to the fstab file.
HOSTNAME|IPADDRESS:/VOLNAME /MOUNTDIR nfs defaults,_netdev 0 0
Using the example server names, the entry contains the following replaced values.
server1:/test-volume /mnt/glusterfs nfs defaults,_netdev 0 0
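To activate the new entry without rebooting, assuming the mount point shown above, run:
# mount /mnt/glusterfs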
- Open the /etc/fstab file in a text editor.
- Append the following configuration to the fstab file.
HOSTNAME|IPADDRESS:/VOLNAME /MOUNTDIR nfs defaults,_netdev,mountproto=tcp 0 0
Using the example server names, the entry contains the following replaced values.
server1:/test-volume /mnt/glusterfs nfs defaults,_netdev,mountproto=tcp 0 0
6.2.2.3. Authentication Support for Subdirectory Mount
You can use the nfs.export-dir option to provide client authentication during sub-directory mount. The nfs.export-dir and nfs.export-dirs options provide granular control to restrict or allow specific clients to mount a sub-directory. These clients can be authenticated with either an IP address, a host name, or a Classless Inter-Domain Routing (CIDR) range.
- nfs.export-dirs: By default, all NFS sub-volumes are exported as individual exports. This option allows you to manage this behavior. When this option is turned off, none of the sub-volumes are exported and hence the sub-directories cannot be mounted. This option is on by default.
To set this option to off, run the following command:
# gluster volume set VOLNAME nfs.export-dirs off
To set this option to on, run the following command:
# gluster volume set VOLNAME nfs.export-dirs on
- nfs.export-dir: This option allows you to export specified subdirectories on the volume. You can export a particular subdirectory, for example:
# gluster volume set VOLNAME nfs.export-dir /d1,/d2/d3/d4,/d6
where d1, d2, d3, d4, and d6 are the sub-directories.
You can also control access to mount these subdirectories based on the IP address, host name, or a CIDR. For example:
# gluster volume set VOLNAME nfs.export-dir "/d1(<ip address>),/d2/d3/d4(<host name>|<ip address>),/d6(<CIDR>)"
The directories /d1, /d2/d3/d4, and /d6 are paths inside the volume; the volume name must not be added to the path. For example, if the volume vol1 has directories d1 and d2, then to export these directories use the following command:
# gluster volume set vol1 nfs.export-dir "/d1(192.0.2.2),/d2(192.0.2.34)"
6.2.2.4. Testing Volumes Mounted Using NFS
Testing Mounted Red Hat Gluster Storage Volumes
Prerequisites
- Run the mount command to check whether the volume was successfully mounted.
# mount
server1:/test-volume on /mnt/glusterfs type nfs (rw,addr=server1)
- Run the df command to display the aggregated storage space from all the bricks in a volume.
# df -h /mnt/glusterfs
Filesystem            Size  Used  Avail  Use%  Mounted on
server1:/test-volume   28T   22T   5.4T   82%  /mnt/glusterfs
- Move to the mount directory using the cd command, and list the contents.
# cd /mnt/glusterfs
# ls
6.2.3. Troubleshooting NFS
- Q: The mount command on the NFS client fails with RPC Error: Program not registered. This error is encountered due to one of the following reasons:
- Q: The rpcbind service is not running on the NFS client. This could be due to the following reasons:
- Q: The NFS server glusterfsd starts but the initialization fails with nfsrpc- service: portmap registration of program failed error message in the log.
- Q: The NFS server start-up fails with the message Port is already in use in the log file.
- Q: The mount command fails with NFS server failed error:
- Q: The showmount command fails with clnt_create: RPC: Unable to receive error. This error is encountered due to the following reasons:
- Q: The application fails with Invalid argument or Value too large for defined data type
- Q: After the machine that is running NFS server is restarted the client fails to reclaim the locks held earlier.
- Q: The rpc actor failed to complete successfully error is displayed in the nfs.log, even after the volume is mounted successfully.
- Q: The mount command fails with No such file or directory.
Q: The mount command on the NFS client fails with RPC Error: Program not registered. This error is encountered due to one of the following reasons:
- The NFS server is not running. You can check the status using the following command:
# gluster volume status
- The volume is not started. You can check the status using the following command:
# gluster volume info
- rpcbind is restarted. To check if rpcbind is running, execute the following command:
# ps ax | grep rpcbind
- If the NFS server is not running, then restart the NFS server using the following command:
# gluster volume start VOLNAME
- If the volume is not started, then start the volume using the following command:
# gluster volume start VOLNAME
- If both rpcbind and the NFS server are running, then restart the NFS server using the following commands:
# gluster volume stop VOLNAME
# gluster volume start VOLNAME
Q: The rpcbind service is not running on the NFS client. This could be due to the following reasons:
- The portmap is not running.
- Another instance of kernel NFS server or glusterNFS server is running.
A: Start the rpcbind service by running the following command:
# service rpcbind start
Q: The NFS server glusterfsd starts but the initialization fails with nfsrpc-service: portmap registration of program failed error message in the log.
[2010-05-26 23:33:47] E [rpcsvc.c:2598:rpcsvc_program_register_portmap] rpc-service: Could notregister with portmap
[2010-05-26 23:33:47] E [rpcsvc.c:2682:rpcsvc_program_register] rpc-service: portmap registration of program failed
[2010-05-26 23:33:47] E [rpcsvc.c:2695:rpcsvc_program_register] rpc-service: Program registration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465
[2010-05-26 23:33:47] E [nfs.c:125:nfs_init_versions] nfs: Program init failed
[2010-05-26 23:33:47] C [nfs.c:531:notify] nfs: Failed to initialize protocols
[2010-05-26 23:33:49] E [rpcsvc.c:2614:rpcsvc_program_unregister_portmap] rpc-service: Could not unregister with portmap
[2010-05-26 23:33:49] E [rpcsvc.c:2731:rpcsvc_program_unregister] rpc-service: portmap unregistration of program failed
[2010-05-26 23:33:49] E [rpcsvc.c:2744:rpcsvc_program_unregister] rpc-service: Program unregistration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465
- Start the rpcbind service on the NFS server by running the following command:
# service rpcbind start
After starting the rpcbind service, the glusterFS NFS server needs to be restarted.
- Stop another NFS server running on the same machine. Such an error is also seen when there is another NFS server running on the same machine, but it is not the glusterFS NFS server. On Linux systems, this could be the kernel NFS server. Resolution involves stopping the other NFS server or not running the glusterFS NFS server on the machine. Before stopping the kernel NFS server, ensure that no critical service depends on access to that NFS server's exports.
On Linux, kernel NFS servers can be stopped by using either of the following commands depending on the distribution in use:
# service nfs-kernel-server stop
# service nfs stop
- Restart the glusterFS NFS server.
Q: The NFS server start-up fails with the message Port is already in use in the log file.
[2010-05-26 23:40:49] E [rpc-socket.c:126:rpcsvc_socket_listen] rpc-socket: binding socket failed:Address already in use
[2010-05-26 23:40:49] E [rpc-socket.c:129:rpcsvc_socket_listen] rpc-socket: Port is already in use
[2010-05-26 23:40:49] E [rpcsvc.c:2636:rpcsvc_stage_program_register] rpc-service: could not create listening connection
[2010-05-26 23:40:49] E [rpcsvc.c:2675:rpcsvc_program_register] rpc-service: stage registration of program failed
[2010-05-26 23:40:49] E [rpcsvc.c:2695:rpcsvc_program_register] rpc-service: Program registration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465
[2010-05-26 23:40:49] E [nfs.c:125:nfs_init_versions] nfs: Program init failed
[2010-05-26 23:40:49] C [nfs.c:531:notify] nfs: Failed to initialize protocols
Q: The mount command fails with NFS server failed error:
mount: mount to NFS server '10.1.10.11' failed: timed out (retrying).
A: Perform one of the following to resolve this issue:
- Disable name lookup requests from the NFS server to a DNS server. The NFS server attempts to authenticate NFS clients by performing a reverse DNS lookup to match host names in the volume file with the client IP addresses. There can be a situation where the NFS server either is not able to connect to the DNS server or the DNS server is taking too long to respond to DNS requests. These delays can result in delayed replies from the NFS server to the NFS client, resulting in the timeout error.
The NFS server provides a workaround that disables DNS requests, instead relying only on the client IP addresses for authentication. The following option can be added to the volume file for successful mounting in such situations:
option nfs.addr.namelookup off
Note
Remember that disabling name lookups forces authentication of clients to use only IP addresses. If the authentication rules in the volume file use host names, those authentication rules will fail and client mounting will fail.
- The NFS version used by the NFS client is other than version 3. The glusterFS NFS server supports version 3 of the NFS protocol by default. In recent Linux kernels, the default NFS version has been changed from 3 to 4. It is possible that the client machine is unable to connect to the glusterFS NFS server because it is using version 4 messages which are not understood by the glusterFS NFS server. The timeout can be resolved by forcing the NFS client to use version 3. The vers option to the mount command is used for this purpose:
# mount nfsserver:export -o vers=3 /MOUNTPOINT
Q: The showmount command fails with clnt_create: RPC: Unable to receive error. This error is encountered due to the following reasons:
- The firewall might have blocked the port.
- rpcbind might not be running.
Q: The application fails with Invalid argument or Value too large for defined data type.
A: These errors generally occur for 32-bit NFS clients, or applications that do not support 64-bit inode numbers or large files. Use the following option to make the glusterFS NFS server return 32-bit inode numbers instead:
NFS.enable-ino32 <on | off>
This option is off by default, which permits NFS to return 64-bit inode numbers by default.
Applications that will benefit from this option include those that are:
- built and run on 32-bit machines, which do not support large files by default,
- built to 32-bit standards on 64-bit systems.
Applications which can be rebuilt from source are recommended to be rebuilt using the following flag with gcc:
-D_FILE_OFFSET_BITS=64
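Assuming the option is applied like the other NFS volume options shown in this chapter (a sketch; confirm the exact option name with gluster volume set help on your version), enabling it would look like:
# gluster volume set VOLNAME nfs.enable-ino32 on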
Q: After the machine that is running the NFS server is restarted, the client fails to reclaim the locks held earlier.
A: Run chkconfig --list nfslock to check if the Network Status Monitor (NSM) is configured to start during OS boot. If any of the entries are on, run chkconfig nfslock off to disable NSM clients during boot, which resolves the issue.
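For reference, the two commands named in the answer above, in the form used elsewhere in this guide:
# chkconfig --list nfslock
# chkconfig nfslock off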
Q: The rpc actor failed to complete successfully error is displayed in the nfs.log, even after the volume is mounted successfully.
A: The glusterFS NFS server supports only NFS version 3. When a client attempts to negotiate a newer NFS version first, messages such as the following are logged in the nfs.log file:
[2013-06-25 00:03:38.160547] W [rpcsvc.c:180:rpcsvc_program_actor] 0-rpc-service: RPC program version not available (req 100003 4)
[2013-06-25 00:03:38.160669] E [rpcsvc.c:448:rpcsvc_check_and_reply_error] 0-rpcsvc: rpc actor failed to complete successfully
To resolve the issue, declare NFS version 3 and the noacl option in the mount command as follows:
# mount -t nfs -o vers=3,noacl server1:/test-volume /mnt/glusterfs
Q: The mount command fails with No such file or directory.
A: This error is encountered when the volume specified in the mount command does not exist.
6.2.4. NFS-Ganesha
Features | glusterFS NFS (NFSv3) | NFS-Ganesha (NFSv3) | NFS-Ganesha (NFSv4) |
---|---|---|---|
Root-squash | Yes | Yes | Yes |
Sub-directory exports | Yes | Yes | Yes |
Locking | Yes | Yes | Yes |
Client based export permissions | Yes | Yes | Yes |
Netgroups | Tech Preview | Tech Preview | Tech Preview |
Mount protocols | UDP, TCP | UDP, TCP | Only TCP |
NFS transport protocols | TCP | UDP, TCP | TCP |
AUTH_UNIX | Yes | Yes | Yes |
AUTH_NONE | Yes | Yes | Yes |
AUTH_KRB | No | Yes | Yes |
ACLs | Yes | No | Yes |
Delegations | N/A | N/A | No |
High availability | Yes (but no lock-recovery) | Yes | Yes |
High availability (fail-back) | Yes (but no lock-recovery) | Yes | Yes |
Multi-head | Yes | Yes | Yes |
Gluster RDMA volumes | Yes | Available but not supported | Available but not supported |
DRC | Available but not supported | No | No |
Dynamic exports | No | Yes | Yes |
pseudofs | N/A | N/A | Yes |
NFSv4.1 | N/A | N/A | Not Supported |
NFSv4.1/pNFS | N/A | N/A | Tech Preview |
Note
- Red Hat does not recommend running NFS-Ganesha in mixed-mode and/or hybrid environments. This includes multi-protocol environments where NFS and CIFS shares are used simultaneously, or running NFS-Ganesha together with gluster-nfs, kernel-nfs, or gluster-fuse clients.
- Only one of NFS-Ganesha, gluster-NFS or kernel-NFS servers can be enabled on a given machine/host as all NFS implementations use the port 2049 and only one can be active at a given time. Hence you must disable kernel-NFS before NFS-Ganesha is started.
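For example, on Red Hat Enterprise Linux 7 the kernel NFS server can be stopped and disabled with the same commands used in the prerequisites section below:
# systemctl stop nfs-server
# systemctl disable nfs-server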
6.2.4.1. Port Information for NFS-Ganesha
- On Red Hat Enterprise Linux 7, enable the firewall services for nfs, rpc-bind, mountd, nlm, rquota, and high-availability (HA) in the active zones for runtime and permanent mode using the following commands. In addition, configure firewalld to add port 662, which will be used by the statd service.
- Get a list of active zones using the following command:
# firewall-cmd --get-active-zones
- To allow the firewall services in the active zones, run the following commands:
# firewall-cmd --zone=zone_name --add-service=nlm --add-service=nfs --add-service=rpc-bind --add-service=high-availability --add-service=mountd --add-service=rquota
# firewall-cmd --zone=zone_name --add-service=nlm --add-service=nfs --add-service=rpc-bind --add-service=high-availability --add-service=mountd --add-service=rquota --permanent
# firewall-cmd --zone=zone_name --add-port=662/tcp --add-port=662/udp
# firewall-cmd --zone=zone_name --add-port=662/tcp --add-port=662/udp --permanent
- On the NFS-client machine, configure firewalld to add ports used by statd and nlm services by executing the following commands:
# firewall-cmd --zone=zone_name --add-port=662/tcp --add-port=662/udp \
  --add-port=32803/tcp --add-port=32769/udp
# firewall-cmd --zone=zone_name --add-port=662/tcp --add-port=662/udp \
  --add-port=32803/tcp --add-port=32769/udp --permanent
- Ensure that the ports mentioned above are configured. For more information, see Defining Service Ports in Section 6.2.4.4.1, Prerequisites to run NFS-Ganesha.
The following table lists the ports used by NFS-Ganesha and its supporting services:
Service | Port Number | Protocol |
---|---|---|
sshd | 22 | TCP |
rpcbind/portmapper | 111 | TCP/UDP |
NFS | 2049 | TCP/UDP |
mountd | 20048 | TCP/UDP |
NLM | 32803 | TCP/UDP |
Rquota | 875 | TCP/UDP |
statd | 662 | TCP/UDP |
pcsd | 2224 | TCP |
pacemaker_remote | 3121 | TCP |
corosync | 5404 and 5405 | UDP |
dlm | 21064 | TCP |
6.2.4.2. Supported Features of NFS-Ganesha
In a highly available active-active environment, if an NFS-Ganesha server that is connected to an NFS client running a particular application goes down, the application/NFS client is seamlessly connected to another NFS-Ganesha server without any administrative intervention.
The Parallel Network File System (pNFS) is part of the NFS v4.1 protocol that allows compute clients to access storage devices directly and in parallel.
Previous versions of NFS-Ganesha required a restart of the server whenever the administrator had to add or remove exports. NFS-Ganesha now supports addition and removal of exports dynamically. Dynamic exports are managed by the DBus interface. DBus is a system local IPC mechanism for system management and peer-to-peer application communication.
With this version of NFS-Ganesha, multiple Red Hat Gluster Storage volumes or sub-directories can now be exported simultaneously.
This version of NFS-Ganesha creates and maintains an NFSv4 pseudo-file system, which provides clients with seamless access to all exported objects on the server.
The NFS-Ganesha NFSv4 protocol includes integrated support for Access Control Lists (ACLs), which are similar to those used by Windows. These ACLs can be used to identify a trustee and specify the access rights allowed or denied for that trustee. This feature is disabled by default.
6.2.4.3. Highly Available Active-Active NFS-Ganesha
- Creating the ganesha-ha.conf file
The ganesha-ha.conf.sample file is created in /etc/ganesha when Red Hat Gluster Storage is installed. Rename the file to ganesha-ha.conf and make the changes based on your environment.
Following is an example:
Sample ganesha-ha.conf file:
# Name of the HA cluster created.
# must be unique within the subnet
HA_NAME="ganesha-ha-360"
#
# You may use short names or long names; you may not use IP addresses.
# Once you select one, stay with it as it will be mildly unpleasant to
# clean up if you switch later on. Ensure that all names - short and/or
# long - are in DNS or /etc/hosts on all machines in the cluster.
#
# The subset of nodes of the Gluster Trusted Pool that form the ganesha
# HA cluster. Hostname is specified.
HA_CLUSTER_NODES="server1.lab.redhat.com,server2.lab.redhat.com,..."
#
# Virtual IPs for each of the nodes specified above.
VIP.server1.lab.redhat.com="10.0.2.1"
VIP.server2.lab.redhat.com="10.0.2.2"
....
....
Note
- Pacemaker handles the creation of the VIP and assigning an interface.
- Ensure that the VIP is in the same network range.
- Configuring NFS-Ganesha using gluster CLI
The HA cluster can be set up or torn down using gluster CLI. In addition, it can export and unexport specific volumes. For more information, see section Configuring NFS-Ganesha using gluster CLI.
- Modifying the HA cluster using the ganesha-ha.sh script
After creating the cluster, any further modification can be done using the ganesha-ha.sh script. For more information, see Modifying the HA cluster using the ganesha-ha.sh script.
6.2.4.4. Configuring NFS-Ganesha using Gluster CLI
6.2.4.4.1. Prerequisites to run NFS-Ganesha
- A Red Hat Gluster Storage volume must be available for export and the NFS-Ganesha RPMs must be installed.
- Disable kernel-nfs using the following commands:
For Red Hat Enterprise Linux 7:
# systemctl stop nfs-server
# systemctl disable nfs-server
To verify that kernel-nfs is disabled, execute the following command:
# systemctl status nfs-server
The service should be in stopped state.
For Red Hat Enterprise Linux 6:
# service nfs stop
# chkconfig nfs off
To verify that kernel-nfs is disabled, execute the following command:
# service nfs status
The service should be in stopped state.
- Edit the ganesha-ha.conf file based on your environment.
- Reserve virtual IPs on the network for each of the servers configured in the ganesha.conf file. Ensure that these IPs are different than the hosts' static IPs and are not used anywhere else in the trusted storage pool or in the subnet.
- Ensure that all the nodes in the cluster are DNS resolvable. For example, you can populate the /etc/hosts with the details of all the nodes in the cluster.
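For example, the /etc/hosts entries on each node could look like the following (illustrative addresses and hostnames, not taken from the original guide):
10.70.0.1 server1.lab.redhat.com server1
10.70.0.2 server2.lab.redhat.com server2
10.70.0.3 server3.lab.redhat.com server3
10.70.0.4 server4.lab.redhat.com server4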
- Make sure that SELinux is in Enforcing mode.
- On Red Hat Enterprise Linux 7, execute the following commands to disable and stop the NetworkManager service and to enable the network service.
# systemctl disable NetworkManager
# systemctl stop NetworkManager
# systemctl enable network
- Start the network service on all machines using the following command:
For Red Hat Enterprise Linux 6:
# service network start
For Red Hat Enterprise Linux 7:
# systemctl start network
- Create and mount a gluster shared volume by executing the following command:
# gluster volume set all cluster.enable-shared-storage enable
volume set: success
For more information, see Section 11.8, “Setting up Shared Storage Volume”.
- Create a directory named nfs-ganesha under /var/run/gluster/shared_storage.
- Copy the ganesha.conf and ganesha-ha.conf files from /etc/ganesha to /var/run/gluster/shared_storage/nfs-ganesha.
- Enable the pacemaker service using the following command:
For Red Hat Enterprise Linux 6:
# chkconfig --add pacemaker
# chkconfig pacemaker on
For Red Hat Enterprise Linux 7:
# systemctl enable pacemaker.service
- Start the pcsd service using the following command.
For Red Hat Enterprise Linux 6:
# service pcsd start
For Red Hat Enterprise Linux 7:
# systemctl start pcsd
Note
- To start pcsd by default after the system is rebooted, execute the following command:
For Red Hat Enterprise Linux 6:
# chkconfig --add pcsd
# chkconfig pcsd on
For Red Hat Enterprise Linux 7:
# systemctl enable pcsd
- Set a password for the user ‘hacluster’ on all the nodes using the following command. Use the same password for all the nodes:
# echo <password> | passwd --stdin hacluster
- Perform cluster authentication between the nodes, where, username is ‘hacluster’, and password is the one you used in the previous step. Ensure to execute the following command on every node:
# pcs cluster auth <hostname1> <hostname2> ...
Note
The hostnames of all the nodes in the Ganesha-HA cluster must be included in the command when executing it on every node.
For example, in a four node cluster with nodes nfs1, nfs2, nfs3, and nfs4, execute the following command on every node:
# pcs cluster auth nfs1 nfs2 nfs3 nfs4
Username: hacluster
Password:
nfs1: Authorized
nfs2: Authorized
nfs3: Authorized
nfs4: Authorized
- Passwordless ssh for the root user has to be enabled on all the HA nodes. Follow these steps:
- On one of the nodes (node1) in the cluster, run:
# ssh-keygen -f /var/lib/glusterd/nfs/secret.pem -t rsa -N ''
- Deploy the generated public key from node1 to all the nodes (including node1) by executing the following command for every node:
# ssh-copy-id -i /var/lib/glusterd/nfs/secret.pem.pub root@<node-ip/hostname>
- Copy the ssh keypair from node1 to all the nodes in the Ganesha-HA cluster by executing the following command for every node:
# scp -i /var/lib/glusterd/nfs/secret.pem /var/lib/glusterd/nfs/secret.* root@<node-ip/hostname>:/var/lib/glusterd/nfs/
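To confirm that passwordless access works (a quick check, with server2 as a placeholder hostname), the following should print the remote hostname without prompting for a password:
# ssh -i /var/lib/glusterd/nfs/secret.pem root@server2 hostname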
- As part of cluster setup, port 875 is used to bind to the Rquota service. If this port is already in use, assign a different port to this service by modifying the following line in the /etc/ganesha/ganesha.conf file on all the nodes.
# Use a non-privileged port for RQuota
Rquota_Port = 875;
- Defining Service Ports
To define the service ports, execute the following steps on every node in the nfs-ganesha cluster:
- Edit the /etc/sysconfig/nfs file as mentioned below:
# sed -i '/STATD_PORT/s/^#//' /etc/sysconfig/nfs
- Restart the statd service:
For Red Hat Enterprise Linux 6:
# service nfslock restart
For Red Hat Enterprise Linux 7:
# systemctl restart nfs-config
# systemctl restart rpc-statd
Execute the following steps on the client machine:
- Edit /etc/sysconfig/nfs using the following commands:
# sed -i '/STATD_PORT/s/^#//' /etc/sysconfig/nfs
# sed -i '/LOCKD_TCPPORT/s/^#//' /etc/sysconfig/nfs
# sed -i '/LOCKD_UDPPORT/s/^#//' /etc/sysconfig/nfs
- Restart the services:
For Red Hat Enterprise Linux 6:
# service nfslock restart
# service nfs restart
For Red Hat Enterprise Linux 7:
# systemctl restart nfs-config
# systemctl restart rpc-statd
# systemctl restart nfslock
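After uncommenting these settings, the relevant lines in /etc/sysconfig/nfs are expected to look similar to the following (the values shown are the distribution defaults, which match the port table above; verify them on your systems):
STATD_PORT=662
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769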
6.2.4.4.2. Configuring the HA Cluster
To set up the HA cluster, enable NFS-Ganesha as follows:
- If you have upgraded to Red Hat Enterprise Linux 7.4, then enable the gluster_use_execmem boolean by executing the following command:
# setsebool -P gluster_use_execmem on
- Enable NFS-Ganesha by executing the following command:
# gluster nfs-ganesha enable
Note
Before enabling or disabling NFS-Ganesha, ensure that all the nodes that are part of the NFS-Ganesha cluster are up.
For example:
# gluster nfs-ganesha enable
Enabling NFS-Ganesha requires Gluster-NFS to be disabled across the trusted pool. Do you still want to continue?
(y/n) y
This will take a few minutes to complete. Please wait ..
nfs-ganesha : success
Note
After enabling NFS-Ganesha, if rpcinfo -p shows the statd port as something other than 662, restart the statd service:
For Red Hat Enterprise Linux 6:
# service nfslock restart
For Red Hat Enterprise Linux 7:
# systemctl restart rpc-statd
To tear down the HA cluster, execute the following command:
# gluster nfs-ganesha disable
For example:
# gluster nfs-ganesha disable
Disabling NFS-Ganesha will tear down entire ganesha cluster across the trusted pool. Do you still want to continue?
(y/n) y
This will take a few minutes to complete. Please wait ..
nfs-ganesha : success
To verify the status of the HA cluster, execute the following script:
# /usr/libexec/ganesha/ganesha-ha.sh --status /var/run/gluster/shared_storage/nfs-ganesha
For example:
# /usr/libexec/ganesha/ganesha-ha.sh --status /var/run/gluster/shared_storage/nfs-ganesha
Online: [ server1 server2 server3 server4 ]
server1-cluster_ip-1 server1
server2-cluster_ip-1 server2
server3-cluster_ip-1 server3
server4-cluster_ip-1 server4
Cluster HA Status: HEALTHY
Note
- It is recommended to manually restart the ganesha.nfsd service after the node is rebooted, to fail back the VIPs.
- Disabling NFS-Ganesha does not enable Gluster NFS by default. If required, Gluster NFS must be enabled manually.
6.2.4.4.3. Exporting and Unexporting Volumes through NFS-Ganesha
To export a Red Hat Gluster Storage volume, execute the following command:
# gluster volume set <volname> ganesha.enable on
For example:
# gluster vol set testvol ganesha.enable on
volume set: success
To unexport a Red Hat Gluster Storage volume, execute the following command:
# gluster volume set <volname> ganesha.enable off
For example:
# gluster vol set testvol ganesha.enable off
volume set: success
To verify the status of the volume set options, follow the guidelines mentioned below:
- Check if NFS-Ganesha is started by executing the following commands:
On Red Hat Enterprise Linux 6:
# service nfs-ganesha status
For example:
# service nfs-ganesha status
ganesha.nfsd (pid 4136) is running...
On Red Hat Enterprise Linux 7:
# systemctl status nfs-ganesha
For example:
# systemctl status nfs-ganesha
nfs-ganesha.service - NFS-Ganesha file server
   Loaded: loaded (/usr/lib/systemd/system/nfs-ganesha.service; disabled)
   Active: active (running) since Tue 2015-07-21 05:08:22 IST; 19h ago
     Docs: http://github.com/nfs-ganesha/nfs-ganesha/wiki
 Main PID: 15440 (ganesha.nfsd)
   CGroup: /system.slice/nfs-ganesha.service
           └─15440 /usr/bin/ganesha.nfsd -L /var/log/ganesha.log -f /etc/ganesha/ganesha.conf -N NIV_EVENT
Jul 21 05:08:22 server1 systemd[1]: Started NFS-Ganesha file server.
- Check if the volume is exported.
# showmount -e localhost
For example:
# showmount -e localhost
Export list for localhost:
/volname (everyone)
- The logs of the ganesha.nfsd daemon are written to /var/log/ganesha.log. Check the log file if you notice any unexpected behavior.
6.2.4.5. Modifying the HA cluster using the ganesha-ha.sh script
- Adding a node to the cluster
Before adding a node to the cluster, ensure that all the prerequisites mentioned in the section Prerequisites to run NFS-Ganesha are met. To add a node to the cluster, execute the following command on any of the nodes in the existing NFS-Ganesha cluster:
# /usr/libexec/ganesha/ganesha-ha.sh --add <HA_CONF_DIR> <HOSTNAME> <NODE-VIP>
where,
HA_CONF_DIR: The directory path containing the ganesha-ha.conf file. By default it is /etc/ganesha.
HOSTNAME: Hostname of the new node to be added.
NODE-VIP: Virtual IP of the new node to be added.
For example:
# /usr/libexec/ganesha/ganesha-ha.sh --add /etc/ganesha server16 10.00.00.01
- Deleting a node in the cluster
To delete a node from the cluster, execute the following command on any of the nodes in the existing NFS-Ganesha cluster:
# /usr/libexec/ganesha/ganesha-ha.sh --delete <HA_CONF_DIR> <HOSTNAME>
where,
HA_CONF_DIR: The directory path containing the ganesha-ha.conf file. By default it is located at /etc/ganesha.
HOSTNAME: Hostname of the node to be deleted.
For example:
# /usr/libexec/ganesha/ganesha-ha.sh --delete /etc/ganesha server16
- Modifying the default export configuration
To modify the default export configurations perform the following steps on any of the nodes in the existing ganesha cluster:
- Edit/add the required fields in the corresponding export file located at /etc/ganesha/exports/.
- Execute the following command:
# /usr/libexec/ganesha/ganesha-ha.sh --refresh-config <HA_CONF_DIR> <volname>
where,
HA_CONF_DIR: The directory path containing the ganesha-ha.conf file. By default it is located at /etc/ganesha.
volname: The name of the volume whose export configuration has to be changed.
For example:
# /usr/libexec/ganesha/ganesha-ha.sh --refresh-config /etc/ganesha testvol
Note
- The export ID must not be changed.
- Ensure that there are no active I/Os on the volume when this command is executed.
6.2.4.6. Accessing NFS-Ganesha Exports
To mount an export in NFSv3 mode, execute the following command:
# mount -t nfs -o vers=3 virtual_ip:/volname /mountpoint
For example:
# mount -t nfs -o vers=3 10.70.0.0:/testvol /mnt
To mount an export in NFSv4 mode, execute the following command:
# mount -t nfs -o vers=4.0 virtual_ip:/volname /mountpoint
For example:
# mount -t nfs -o vers=4.0 10.70.0.0:/testvol /mnt
6.2.4.7. NFS-Ganesha Service Downtime
- If the ganesha.nfsd process dies (crashes, oomkill, admin kill), the maximum time to detect it and put the ganesha cluster into grace is 20 seconds, plus whatever time pacemaker needs to effect the fail-over.
Note
The time taken to detect whether the service is down can be adjusted using the following commands on all the nodes (a concrete example follows this list):
# pcs resource op remove nfs-mon monitor
# pcs resource op add nfs-mon monitor interval=<interval_period_value>
- If the whole node dies (including network failure) then this down time is the total of whatever time pacemaker needs to detect that the node is gone, the time to put the cluster into grace, and the time to effect the fail-over. This is ~20 seconds.
- So the max-fail-over time is approximately 20-22 seconds, and the average time is typically less. In other words, the time taken for NFS clients to detect server reboot or resume I/O is 20 - 22 seconds.
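For instance, to set a 10-second monitor interval (an illustrative value; a shorter interval detects failures faster at the cost of more monitoring overhead), the commands from the note above would be:
# pcs resource op remove nfs-mon monitor
# pcs resource op add nfs-mon monitor interval=10s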
6.2.4.7.1. Modifying the Fail-over Time
Protocols | FOPs |
---|---|
NFSV3 | |
NLM | |
NFSV4 | |
Note
The grace period can be modified by adding or editing the following block in the /etc/ganesha/ganesha.conf file:
NFSv4 {
Grace_Period=<grace_period_value_in_sec>;
}
After editing the /etc/ganesha/ganesha.conf file, restart the NFS-Ganesha service using the following command on all the nodes:
For Red Hat Enterprise Linux 6:
# service nfs-ganesha restart
For Red Hat Enterprise Linux 7:
# systemctl restart nfs-ganesha
6.2.4.8. Configuring Kerberized NFS-Ganesha
- Install the krb5-workstation and the ntpdate packages on all the machines:
# yum install krb5-workstation
# yum install ntpdate
Note
- The krb5-libs package will be updated as a dependent package.
- Configure ntpdate based on a valid time server according to the environment:
# echo <valid_time_server> >> /etc/ntp/step-tickers
# systemctl enable ntpdate
# systemctl start ntpdate
- Ensure that all systems can resolve each other by FQDN in DNS.
- Configure the /etc/krb5.conf file and add relevant changes accordingly. For example:
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log

[libdefaults]
dns_lookup_realm = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
rdns = false
default_realm = EXAMPLE.COM
default_ccache_name = KEYRING:persistent:%{uid}

[realms]
EXAMPLE.COM = {
kdc = kerberos.example.com
admin_server = kerberos.example.com
}

[domain_realm]
.example.com = EXAMPLE.COM
example.com = EXAMPLE.COM
Note
For further details regarding the file configuration, refer to man krb5.conf.
- On the NFS server and client, update the /etc/idmapd.conf file by making the required change. For example:
Domain = example.com
6.2.4.8.1. Setting up the NFS-Ganesha Server:
- Install the following packages:
# yum install nfs-utils
# yum install rpcbind
- Install the relevant gluster and NFS-Ganesha rpms. For more information see, Red Hat Gluster Storage 3.2 Installation Guide.
- Create a Kerberos principal and add it to krb5.keytab on the NFS-Ganesha server:
# kadmin
kadmin: addprinc -randkey nfs/<host_name>@EXAMPLE.COM
kadmin: ktadd nfs/<host_name>@EXAMPLE.COM
For example:
# kadmin
Authenticating as principal root/admin@EXAMPLE.COM with password.
Password for root/admin@EXAMPLE.COM:
kadmin: addprinc -randkey nfs/<host_name>@EXAMPLE.COM
WARNING: no policy specified for nfs/<host_name>@EXAMPLE.COM; defaulting to no policy
Principal "nfs/<host_name>@EXAMPLE.COM" created.
kadmin: ktadd nfs/<host_name>@EXAMPLE.COM
Entry for principal nfs/<host_name>@EXAMPLE.COM with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab FILE:/etc/krb5.keytab.
Entry for principal nfs/<host_name>@EXAMPLE.COM with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab FILE:/etc/krb5.keytab.
Entry for principal nfs/<host_name>@EXAMPLE.COM with kvno 2, encryption type des3-cbc-sha1 added to keytab FILE:/etc/krb5.keytab.
Entry for principal nfs/<host_name>@EXAMPLE.COM with kvno 2, encryption type arcfour-hmac added to keytab FILE:/etc/krb5.keytab.
Entry for principal nfs/<host_name>@EXAMPLE.COM with kvno 2, encryption type camellia256-cts-cmac added to keytab FILE:/etc/krb5.keytab.
Entry for principal nfs/<host_name>@EXAMPLE.COM with kvno 2, encryption type camellia128-cts-cmac added to keytab FILE:/etc/krb5.keytab.
Entry for principal nfs/<host_name>@EXAMPLE.COM with kvno 2, encryption type des-hmac-sha1 added to keytab FILE:/etc/krb5.keytab.
Entry for principal nfs/<host_name>@EXAMPLE.COM with kvno 2, encryption type des-cbc-md5 added to keytab FILE:/etc/krb5.keytab.
- Update the /etc/ganesha/ganesha.conf file as mentioned below:
NFS_KRB5
{
    PrincipalName = nfs ;
    KeytabPath = /etc/krb5.keytab ;
    Active_krb5 = true ;
    DomainName = example.com;
}
- Based on the different Kerberos security flavours (krb5, krb5i and krb5p) supported by nfs-ganesha, configure the 'SecType' parameter in the volume export file (/etc/ganesha/exports/export.vol.conf) with the appropriate security flavour.
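For example, to require Kerberos with integrity protection, the SecType line in the export file would look like the following (a sketch; the full export block format is shown in Section 6.2.4.10):
SecType = "krb5i";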
- Create an unprivileged user and ensure that the users that are created are resolvable to the UIDs through the central user database. For example:
# useradd guest
Note
The username of this user has to be the same as the one on the NFS-client.
6.2.4.8.2. Setting up the NFS Client
- Install the following packages:
# yum install nfs-utils
# yum install rpcbind
- Create a Kerberos principal and add it to krb5.keytab on the client side. For example:
# kadmin
kadmin: addprinc -randkey host/<host_name>@EXAMPLE.COM
kadmin: ktadd host/<host_name>@EXAMPLE.COM
# kadmin
Authenticating as principal root/admin@EXAMPLE.COM with password.
Password for root/admin@EXAMPLE.COM:
kadmin: addprinc -randkey host/<host_name>@EXAMPLE.COM
WARNING: no policy specified for host/<host_name>@EXAMPLE.COM; defaulting to no policy
Principal "host/<host_name>@EXAMPLE.COM" created.
kadmin: ktadd host/<host_name>@EXAMPLE.COM
Entry for principal host/<host_name>@EXAMPLE.COM with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab FILE:/etc/krb5.keytab.
Entry for principal host/<host_name>@EXAMPLE.COM with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab FILE:/etc/krb5.keytab.
Entry for principal host/<host_name>@EXAMPLE.COM with kvno 2, encryption type des3-cbc-sha1 added to keytab FILE:/etc/krb5.keytab.
Entry for principal host/<host_name>@EXAMPLE.COM with kvno 2, encryption type arcfour-hmac added to keytab FILE:/etc/krb5.keytab.
Entry for principal host/<host_name>@EXAMPLE.COM with kvno 2, encryption type camellia256-cts-cmac added to keytab FILE:/etc/krb5.keytab.
Entry for principal host/<host_name>@EXAMPLE.COM with kvno 2, encryption type camellia128-cts-cmac added to keytab FILE:/etc/krb5.keytab.
Entry for principal host/<host_name>@EXAMPLE.COM with kvno 2, encryption type des-hmac-sha1 added to keytab FILE:/etc/krb5.keytab.
Entry for principal host/<host_name>@EXAMPLE.COM with kvno 2, encryption type des-cbc-md5 added to keytab FILE:/etc/krb5.keytab.
- Check the status of nfs-client.target service and start it, if not already started:
# systemctl status nfs-client.target
# systemctl start nfs-client.target
# systemctl enable nfs-client.target
- Create an unprivileged user and ensure that the users that are created are resolvable to the UIDs through the central user database. For example:
# useradd guest
Note
The username of this user has to be the same as the one on the NFS server.
- Mount the volume specifying the Kerberos security type:
# mount -t nfs -o sec=krb5 <host_name>:/testvolume /mnt
As root, all access should be granted. For example, creation of a directory on the mount point and all other operations as root should be successful.
# mkdir <directory name>
- Log in as the guest user:
# su - guest
Without a kerberos ticket, all access to /mnt should be denied. For example:Copy to Clipboard Copied! Toggle word wrap Toggle overflow su guest ls
# su guest
# ls
ls: cannot open directory .: Permission denied
- Get the Kerberos ticket for the guest user and access /mnt:
# kinit
Password for guest@EXAMPLE.COM:
# ls
<directory created>
Important
With this ticket, some access to /mnt must be allowed. Directories on the NFS server to which "guest" does not have access remain inaccessible, which is the expected behavior.
6.2.4.9. pNFS
Important
6.2.4.9.1. Prerequisites
- Disable the kernel NFS and glusterFS NFS servers on the system using the following commands:
# service nfs stop
# gluster volume set <volname> nfs.disable ON
- Disable nfs-ganesha and tear down the HA cluster via the gluster CLI (only if an nfs-ganesha HA cluster is already created) by executing the following command:
# gluster features.ganesha disable
- Turn on features.cache-invalidation for the volume by executing the following command:
# gluster volume set <volname> features.cache-invalidation on
6.2.4.9.2. Configuring NFS-Ganesha for pNFS
- Configure the MDS by adding the following block to the ganesha.conf file located at /etc/ganesha:
GLUSTER {
    PNFS_MDS = true;
}
- For pNFS to work optimally, NFS-Ganesha servers should run on every node in the trusted pool. Start the service using the following command:
On RHEL 6
# service nfs-ganesha start
On RHEL 7
# systemctl start nfs-ganesha
- Verify that the volume is exported via NFS-Ganesha on all the nodes by executing the following command:
# showmount -e localhost
6.2.4.9.2.1. Mounting Volume using pNFS
To mount the volume using pNFS, execute the following command:
# mount -t nfs4 -o minorversion=1 <IP-or-hostname-of-MDS-server>:/<volname> /mount-point
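To confirm that the client negotiated NFS version 4.1, which pNFS requires, one option is to inspect the mount details on the client with nfsstat from the nfs-utils package (a verification sketch, not a mandated step):
# nfsstat -m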
6.2.4.10. Manually Configuring NFS-Ganesha Exports
- Edit/add the required fields in the corresponding export file located at /etc/ganesha/exports/.
- Execute the following command:
# /usr/libexec/ganesha/ganesha-ha.sh --refresh-config <HA_CONF_DIR> <volname>
- HA_CONF_DIR: The directory path containing the ganesha-ha.conf file. By default it is located at /etc/ganesha.
- volname: The name of the volume whose export configuration has to be changed.
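For example, assuming the default configuration directory and a volume named testvolume (a name used here only for illustration), the refresh command would be run as shown below. A sample export configuration file follows.
# /usr/libexec/ganesha/ganesha-ha.sh --refresh-config /etc/ganesha testvolume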
# cat export.conf
EXPORT{
Export_Id = 1 ; # Export ID unique to each export
Path = "volume_path"; # Path of the volume to be exported. Eg: "/test_volume"
FSAL {
name = GLUSTER;
hostname = "10.xx.xx.xx"; # IP of one of the nodes in the trusted pool
volume = "volume_name"; # Volume name. Eg: "test_volume"
}
Access_type = RW; # Access permissions
Squash = No_root_squash; # To enable/disable root squashing
Disable_ACL = TRUE; # To enable/disable ACL
Pseudo = "pseudo_path"; # NFSv4 pseudo path for this export. Eg: "/test_volume_pseudo"
Protocols = "3”, “4" ; # NFS protocols supported
Transports = "UDP”, “TCP" ; # Transport protocols supported
SecType = "sys"; # Security flavors supported
}
Minor changes have to be made to the export.conf file to achieve the expected behavior for the following configurations:
- Exporting Subdirectories
- Providing Permissions for Specific Clients
- Enabling and Disabling NFSv4 ACLs
- Providing Pseudo Path for NFSv4 Mount
- Providing pNFS support
To export subdirectories within a volume, edit the following parameters in the export.conf
file.
Path = "path_to_subdirectory"; # Path of the volume to be exported. Eg: "/test_volume/test_subdir" FSAL { name = GLUSTER; hostname = "10.xx.xx.xx"; # IP of one of the nodes in the trusted pool volume = "volume_name"; # Volume name. Eg: "test_volume" volpath = "path_to_subdirectory_with_respect_to_volume"; #Subdirectory path from the root of the volume. Eg: "/test_subdir" }
Path = "path_to_subdirectory"; # Path of the volume to be exported. Eg: "/test_volume/test_subdir"
FSAL {
name = GLUSTER;
hostname = "10.xx.xx.xx"; # IP of one of the nodes in the trusted pool
volume = "volume_name"; # Volume name. Eg: "test_volume"
volpath = "path_to_subdirectory_with_respect_to_volume"; #Subdirectory path from the root of the volume. Eg: "/test_subdir"
}
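As a sketch of how such an export could be consumed from a client, assuming the example Path "/test_volume/test_subdir" above and an NFSv3 mount (the host name is a placeholder):
# mount -t nfs -o vers=3 <host_name>:/test_volume/test_subdir /mnt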
The parameter values and permission values given in the EXPORT block apply to any client that mounts the exported volume. To provide specific permissions to specific clients, introduce a client block inside the EXPORT block.
For example, to assign specific permissions to the client 10.00.00.01, add the following client block within the EXPORT block:
client {
clients = 10.00.00.01; # IP of the client.
allow_root_access = true;
access_type = "RO"; # Read-only permissions
Protocols = "3"; # Allow only NFSv3 protocol.
anonymous_uid = 1440;
anonymous_gid = 72;
}
All the other clients inherit the permissions that are declared outside the client block.
To enable NFSv4 ACLs, edit the following parameter:
Disable_ACL = FALSE;
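Once ACLs are enabled and the client has mounted the export over NFSv4, one way to exercise them is with the nfs4-acl-tools utilities, assuming they are installed on the client (the file path is illustrative):
# nfs4_getfacl /mnt/<file_name>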
To set the NFSv4 pseudo path, edit the following parameter:
Pseudo = "pseudo_path"; # NFSv4 pseudo path for this export. Eg: "/test_volume_pseudo"
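An NFSv4 client then mounts the export using this pseudo path; a minimal sketch, assuming the example pseudo path "/test_volume_pseudo" above and a placeholder host name:
# mount -t nfs -o vers=4 <host_name>:/test_volume_pseudo /mnt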
6.2.4.11. Troubleshooting
Ensure you execute the following commands for all the issues/failures that are encountered:
- Make sure all the prerequisites are met.
- Execute the following commands to check the status of the services:
# service nfs-ganesha status
# service pcsd status
# service pacemaker status
# pcs status
- Review the following logs to understand the cause of failure.
/var/log/ganesha.log
/var/log/ganesha-gfapi.log
/var/log/messages
/var/log/pcsd.log
- Situation
NFS-Ganesha fails to start.
Solution
Ensure you execute all the mandatory checks to understand the root cause before proceeding with the following steps. Follow the listed steps to fix the issue:
- Ensure the kernel and gluster nfs services are inactive.
- Ensure that port 875 is free to connect to the RQUOTA service.
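One possible way to check whether anything is already listening on port 875 (either command works; empty output means the port is free):
# ss -tulnp | grep 875
# netstat -anp | grep 875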
- Ensure that the shared storage volume mount exists on the server after node reboot/shutdown. If it does not, then mount the shared storage volume manually using the following command:
# mount -t glusterfs <local_node's_hostname>:gluster_shared_storage /var/run/gluster/shared_storage
For more information, see the section Manually Configuring NFS-Ganesha Exports.
- Situation
NFS-Ganesha Cluster setup fails.
Solution
Ensure you execute all the mandatory checks to understand the root cause before proceeding with the following steps.
- Ensure the kernel and gluster nfs services are inactive.
- Ensure that the pcs cluster auth command is executed on all the nodes with the same password for the user hacluster.
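A minimal sketch of authenticating all cluster nodes in one step, assuming the pcs version in use supports the -u and -p options (node names are placeholders):
# pcs cluster auth <node1> <node2> <node3> <node4> -u hacluster -p <password>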
- Ensure that the shared storage volume is mounted on all the nodes.
- Ensure that the name of the HA Cluster does not exceed 15 characters.
- Ensure UDP multicast packets are pingable using OMPING.
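For example, one way to verify multicast connectivity is to run omping on every node with the full list of cluster node names (host names are placeholders; press Ctrl+C to stop):
# omping <node1> <node2> <node3> <node4>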
- Ensure that Virtual IPs are not assigned to any NIC.
- Situation
NFS-Ganesha has started and fails to export a volume.
Solution
Ensure you execute all the mandatory checks to understand the root cause before proceeding with the following steps. Follow the listed steps to fix the issue:
- Ensure that the volume is in the Started state using the following command:
# gluster volume status <volname>
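If the volume is not in the Started state, start it before retrying the export:
# gluster volume start <volname>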
- Execute the following commands to check the status of the services:
# service nfs-ganesha status
# showmount -e localhost
- Review the following logs to understand the cause of failure.
/var/log/ganesha.log
/var/log/ganesha-gfapi.log
/var/log/messages
- Ensure that the dbus service is running using the following command:
# service messagebus status
- Situation
Adding a new node to the HA cluster fails.
Solution
Ensure you execute all the mandatory checks to understand the root cause before proceeding with the following steps. Follow the listed steps to fix the issue:
- Ensure that you run the following command from one of the nodes that is already part of the cluster:
# ganesha-ha.sh --add <HA_CONF_DIR> <NODE-HOSTNAME> <NODE-VIP>
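For example, with the default configuration directory and an illustrative host name and virtual IP (both values are hypothetical):
# ganesha-ha.sh --add /etc/ganesha new-node.example.com 192.0.2.10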
- Ensure that the gluster_shared_storage volume is mounted on the node that needs to be added.
- Make sure that all the nodes of the cluster are DNS resolvable from the node that needs to be added.
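One simple way to check name resolution from the node being added is to query each existing cluster host (the host name is a placeholder):
# getent hosts <existing_node_hostname>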
- Execute the following command for each of the hosts in the HA cluster on the node that needs to be added:
# pcs cluster auth <hostname>
- Situation
Cleanup required when nfs-ganesha HA cluster setup fails.
Solution
To restore the machines to their original state, execute the following commands on each node forming the cluster:
# /usr/libexec/ganesha/ganesha-ha.sh --teardown /var/run/gluster/shared_storage/nfs-ganesha
# /usr/libexec/ganesha/ganesha-ha.sh --cleanup /var/run/gluster/shared_storage/nfs-ganesha
# systemctl stop nfs-ganesha
- Situation
Permission issues.
Solution
By default, the root squash option is disabled when you start NFS-Ganesha using the CLI. If you encounter any permission issues, check the unix permissions of the exported entry.
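For example, a quick way to inspect the ownership and mode of the exported entry (the path is illustrative only):
# stat <path_to_exported_entry>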