6.3. NFS
6.3.1. Support Matrix
Features | glusterFS NFS (NFSv3) | NFS-Ganesha (NFSv3) | NFS-Ganesha (NFSv4) |
---|---|---|---|
Root-squash | Yes | Yes | Yes |
All-squash | No | Yes | Yes |
Sub-directory exports | Yes | Yes | Yes |
Locking | Yes | Yes | Yes |
Client based export permissions | Yes | Yes | Yes |
Netgroups | Yes | Yes | Yes |
Mount protocols | UDP, TCP | UDP, TCP | Only TCP |
NFS transport protocols | TCP | UDP, TCP | TCP |
AUTH_UNIX | Yes | Yes | Yes |
AUTH_NONE | Yes | Yes | Yes |
AUTH_KRB | No | Yes | Yes |
ACLs | Yes | No | Yes |
Delegations | N/A | N/A | No |
High availability | Yes (with certain limitations; for more information, see "Setting up CTDB for NFS") | Yes | Yes |
Multi-head | Yes | Yes | Yes |
Gluster RDMA volumes | Yes | Not supported | Not supported |
DRC | Not supported | Yes | Yes |
Dynamic exports | No | Yes | Yes |
pseudofs | N/A | N/A | Yes |
NFSv4.1 | N/A | N/A | Yes |
Note
- Red Hat does not recommend running NFS-Ganesha alongside any other NFS server, such as kernel NFS or Gluster NFS.
- Only one of NFS-Ganesha, Gluster NFS, or kernel NFS can be enabled on a given machine/host, because all NFS implementations use port 2049 and only one can be active at a given time. Hence, you must disable kernel NFS before NFS-Ganesha is started (a quick check is sketched below).
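To see which NFS implementation currently owns port 2049 on a host, the following generic checks can be used. This is a minimal sketch using standard utilities (rpcbind's rpcinfo and iproute2's ss); the output format depends on which server is running:
# rpcinfo -p | grep 2049
# ss -tlnp | grep 2049
If a kernel NFS server is listening, stop and disable it (for example, with systemctl stop nfs-server) before starting NFS-Ganesha.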
6.3.2. Gluster NFS (Deprecated)
Warning
Gluster NFS is considered deprecated as of Red Hat Gluster Storage 3.5. Red Hat recommends using NFS-Ganesha to export Red Hat Gluster Storage volumes over NFS.
Note
To mount a Red Hat Gluster Storage volume using Gluster NFS, use the "mount -t nfs" command on the client as below:
# mount -t nfs HOSTNAME:VOLNAME MOUNTPATH
If Gluster NFS is disabled for the volume, enable it on the server by running the following command:
# gluster volume set VOLNAME nfs.disable off
- To set nfs.acl ON, run the following command:
# gluster volume set VOLNAME nfs.acl on
- To set nfs.acl OFF, run the following command:
# gluster volume set VOLNAME nfs.acl off
Important
On Red Hat Enterprise Linux 7, enable the NFS firewall services in the active zones:
- Get a list of active zones using the following command:
# firewall-cmd --get-active-zones
- Allow the firewall services in the active zones by running the following commands:
# firewall-cmd --zone=zone_name --add-service=nfs --add-service=rpc-bind
# firewall-cmd --zone=zone_name --add-service=nfs --add-service=rpc-bind --permanent
6.3.2.1. Setting up CTDB for Gluster NFS (Deprecated)
Important
On Red Hat Enterprise Linux 7, open the CTDB port in the active zones:
- Get a list of active zones using the following command:
# firewall-cmd --get-active-zones
- Open TCP port 4379 in the active zones by running the following commands:
# firewall-cmd --zone=zone_name --add-port=4379/tcp
# firewall-cmd --zone=zone_name --add-port=4379/tcp --permanent
6.3.2.1.1. Prerequisites
- If you already have an older version of CTDB (version <= ctdb1.x), then remove CTDB by executing the following command:
# yum remove ctdb
After removing the older version, proceed with installing the latest CTDB.
Note
Ensure that the system is subscribed to the samba channel to get the latest CTDB packages.
- Install the latest version of CTDB on all the nodes that are used as NFS servers using the following command:
# yum install ctdb
- CTDB uses TCP port 4379 by default. Ensure that this port is accessible between the Red Hat Gluster Storage servers.
6.3.2.1.2. Port and Firewall Information for Gluster NFS
# firewall-cmd --zone=public --add-port=662/tcp --add-port=662/udp \
--add-port=32803/tcp --add-port=32769/udp \
--add-port=111/tcp --add-port=111/udp
# firewall-cmd --zone=public --add-port=662/tcp --add-port=662/udp \
--add-port=32803/tcp --add-port=32769/udp \
--add-port=111/tcp --add-port=111/udp --permanent
- On Red Hat Enterprise Linux 7, edit the /etc/sysconfig/nfs file as mentioned below:
# sed -i '/STATD_PORT/s/^#//' /etc/sysconfig/nfs
Note
This step is not applicable for Red Hat Enterprise Linux 8.
- Restart the services:
- For Red Hat Enterprise Linux 6:
# service nfslock restart
# service nfs restart
Important
Red Hat Gluster Storage is not supported on Red Hat Enterprise Linux 6 (RHEL 6) from 3.5 Batch Update 1 onwards. See the Version Details table in the Red Hat Gluster Storage Software Components and Versions section of the Installation Guide.
- For Red Hat Enterprise Linux 7:
# systemctl restart nfs-config
# systemctl restart rpc-statd
# systemctl restart nfs-mountd
# systemctl restart nfslock
Note
This step is not applicable for Red Hat Enterprise Linux 8.
6.3.2.1.3. Configuring CTDB on Red Hat Gluster Storage Server
- Create a replicate volume. This volume will host only a zero byte lock file, hence choose minimal sized bricks. To create a replicate volume run the following command:
# gluster volume create volname replica n ipaddress:/brick path.......N times
where,
N: The number of nodes that are used as Gluster NFS servers. Each node must host one brick.
For example:
# gluster volume create ctdb replica 3 10.16.157.75:/rhgs/brick1/ctdb/b1 10.16.157.78:/rhgs/brick1/ctdb/b2 10.16.157.81:/rhgs/brick1/ctdb/b3
- In the following files, replace "all" in the statement META="all" with the newly created volume name:
/var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
For example:
META="all" to META="ctdb"
- Start the volume.
# gluster volume start ctdb
As part of the start process, the S29CTDBsetup.sh script runs on all Red Hat Gluster Storage servers, adds an entry in /etc/fstab for the mount, and mounts the volume at /gluster/lock on all the nodes with the Gluster NFS server. It also enables automatic start of the CTDB service on reboot.
Note
When you stop the special CTDB volume, the S29CTDB-teardown.sh script runs on all Red Hat Gluster Storage servers, removes the entry in /etc/fstab for the mount, and unmounts the volume at /gluster/lock.
- Verify that the file /etc/sysconfig/ctdb exists on all the nodes that are used as Gluster NFS servers. This file contains the Red Hat Gluster Storage recommended CTDB configurations.
- Create the /etc/ctdb/nodes file on all the nodes that are used as Gluster NFS servers and add the IPs of these nodes to the file.
10.16.157.0
10.16.157.3
10.16.157.6
The IPs listed here are the private IPs of the NFS servers.
- On all the nodes that are used as Gluster NFS servers and require IP failover, create the /etc/ctdb/public_addresses file and add the virtual IPs that CTDB should create to this file. Add these IP addresses in the following format:
<Virtual IP>/<routing prefix> <node interface>
For example:
192.168.1.20/24 eth0
192.168.1.21/24 eth0
- Start the CTDB service on all the nodes by executing the following command:
# service ctdb start
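Once the CTDB service is running on all the nodes, you can verify cluster health and virtual IP assignment. The following is a minimal verification sketch using standard CTDB utilities; run it on any CTDB node:
# ctdb status
# ctdb ip
The ctdb status command should report all nodes as OK, and ctdb ip lists which node currently hosts each public (virtual) IP address.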
6.3.2.2. Using Gluster NFS to Mount Red Hat Gluster Storage Volumes (Deprecated)
Note
Gluster NFS supports only NFS version 3. As a preferred option, configure version 3 as the default version in the nfsmount.conf file at /etc/nfsmount.conf by adding the following text in the file:
Defaultvers=3
If the file is not modified, ensure to add vers=3 manually in all the mount commands:
# mount nfsserver:export -o vers=3 /MOUNTPOINT
In case of a tcp,rdma volume, the transport type used by the Gluster NFS server can be changed using the volume set option nfs.transport-type.
6.3.2.2.1. Manually Mounting Volumes Using Gluster NFS (Deprecated)
Use the mount command to manually mount a Red Hat Gluster Storage volume using Gluster NFS.
- If a mount point has not yet been created for the volume, run the mkdir command to create a mount point.
# mkdir /mnt/glusterfs
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Run the correct
mount
command for the system.- For Linux
mount -t nfs -o vers=3 server1:/test-volume /mnt/glusterfs
# mount -t nfs -o vers=3 server1:/test-volume /mnt/glusterfs
- For Solaris
# mount -o vers=3 nfs://server1:38467/test-volume /mnt/glusterfs
Use the mount command to manually mount a Red Hat Gluster Storage volume using Gluster NFS over TCP.
Note
The Gluster NFS server does not support UDP as the NFS transport. If the NFS client defaults to connecting using UDP, the following message appears: "requested NFS version or transport protocol is not supported".
The option nfs.mount-udp is supported for mounting a volume; by default it is disabled. The following are the limitations:
- If nfs.mount-udp is enabled, the MOUNT protocol needed for NFSv3 can handle requests from NFS clients that require MOUNT over UDP. This is useful for at least some versions of Solaris, IBM AIX, and HP-UX.
- Currently, MOUNT over UDP does not have support for mounting subdirectories on a volume. Mounting server:/volume/subdir exports is only functional when MOUNT over TCP is used.
- MOUNT over UDP does not currently have support for the different authentication options that MOUNT over TCP honors. Enabling nfs.mount-udp may give more permissions to NFS clients than intended via various authentication options like nfs.rpc-auth-allow, nfs.rpc-auth-reject, and nfs.export-dir.
- If a mount point has not yet been created for the volume, run the mkdir command to create a mount point.
# mkdir /mnt/glusterfs
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Run the correct
mount
command for the system, specifying the TCP protocol option for the system.- For Linux
mount -t nfs -o vers=3,mountproto=tcp server1:/test-volume /mnt/glusterfs
# mount -t nfs -o vers=3,mountproto=tcp server1:/test-volume /mnt/glusterfs
- For Solaris
# mount -o proto=tcp, nfs://server1:38467/test-volume /mnt/glusterfs
6.3.2.2.2. Automatically Mounting Volumes Using Gluster NFS (Deprecated)
Note
You can also mount volumes on demand using the autofs service. Update the /etc/auto.master and /etc/auto.misc files, and restart the autofs service. Whenever a user or process attempts to access the directory, it will be mounted in the background on demand.
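As an illustration, a minimal autofs configuration for such an on-demand mount might look like the following sketch; the map file name and mount point are hypothetical, and the NFS options mirror the manual mount examples above:
Entry in /etc/auto.master:
/mnt/auto /etc/auto.gluster --timeout=60
Entry in /etc/auto.gluster:
glusterfs -fstype=nfs,vers=3,mountproto=tcp server1:/test-volume
After editing the files, restart the service with "# systemctl restart autofs"; accessing /mnt/auto/glusterfs then triggers the mount on demand.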
- Open the /etc/fstab file in a text editor.
- Append the following configuration to the fstab file:
HOSTNAME|IPADDRESS:/VOLNAME /MOUNTDIR nfs defaults,_netdev 0 0
Using the example server names, the entry contains the following replaced values:
server1:/test-volume /mnt/glusterfs nfs defaults,_netdev 0 0
- Open the /etc/fstab file in a text editor.
- Append the following configuration to the fstab file:
HOSTNAME|IPADDRESS:/VOLNAME /MOUNTDIR nfs defaults,_netdev,mountproto=tcp 0 0
Using the example server names, the entry contains the following replaced values:
server1:/test-volume /mnt/glusterfs nfs defaults,_netdev,mountproto=tcp 0 0
6.3.2.2.3. Automatically Mounting Subdirectories Using NFS (Deprecated)
The nfs.export-dir and nfs.export-dirs options provide granular control to restrict or allow specific clients to mount a sub-directory. These clients can be authenticated during sub-directory mount with either an IP address, host name, or a Classless Inter-Domain Routing (CIDR) range.
- nfs.export-dirs
This option is enabled by default. It allows the sub-directories of exported volumes to be mounted by clients without needing to export individual sub-directories. When enabled, all sub-directories of all volumes are exported. When disabled, sub-directories must be exported individually in order to mount them on clients.
To disable this option for all volumes, run the following command:
# gluster volume set VOLNAME nfs.export-dirs off
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - nfs.export-dir
- When
nfs.export-dirs
is set toon
, thenfs.export-dir
option allows you to specify one or more sub-directories to export, rather than exporting all subdirectories (nfs.export-dirs on
), or only exporting individually exported subdirectories (nfs.export-dirs off
).To export certain subdirectories, run the following command:gluster volume set VOLNAME nfs.export-dir subdirectory
# gluster volume set VOLNAME nfs.export-dir subdirectory
The subdirectory path should be the path from the root of the volume. For example, in a volume with six subdirectories, to export the first three subdirectories, the command would be the following:
# gluster volume set myvolume nfs.export-dir /dir1,/dir2,/dir3
Subdirectories can also be exported based on the IP address, hostname, or a Classless Inter-Domain Routing (CIDR) range by adding these details in parentheses after the directory path:
# gluster volume set VOLNAME nfs.export-dir subdirectory(IPADDRESS),subdirectory(HOSTNAME),subdirectory(CIDR)
For example:
# gluster volume set myvolume nfs.export-dir /dir1(192.168.10.101),/dir2(storage.example.com),/dir3(192.168.98.0/24)
6.3.2.2.4. Testing Volumes Mounted Using Gluster NFS (Deprecated)
Testing Mounted Red Hat Gluster Storage Volumes
Prerequisites
- Run the mount command to check whether the volume was successfully mounted.
# mount
server1:/test-volume on /mnt/glusterfs type nfs (rw,addr=server1)
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Run the
df
command to display the aggregated storage space from all the bricks in a volume.df -h /mnt/glusterfs
# df -h /mnt/glusterfs Filesystem Size Used Avail Use% Mounted on server1:/test-volume 28T 22T 5.4T 82% /mnt/glusterfs
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Move to the mount directory using the
cd
command, and list the contents.cd /mnt/glusterfs ls
# cd /mnt/glusterfs # ls
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Note
The LOCK functionality in the NFS protocol is advisory. It is recommended to use locks if the same volume is accessed by multiple clients.
6.3.2.3. Troubleshooting Gluster NFS (Deprecated)
- Q: The mount command on the NFS client fails with RPC Error: Program not registered. This error is encountered due to one of the following reasons:
- Q: The rpcbind service is not running on the NFS client. This could be due to the following reasons:
- Q: The NFS server glusterfsd starts but the initialization fails with nfsrpc- service: portmap registration of program failed error message in the log.
- Q: The NFS server start-up fails with the message Port is already in use in the log file.
- Q: The mount command fails with NFS server failed error:
- Q: The showmount command fails with clnt_create: RPC: Unable to receive error. This error is encountered due to the following reasons:
- Q: The application fails with Invalid argument or Value too large for defined data type
- Q: After the machine that is running NFS server is restarted the client fails to reclaim the locks held earlier.
- Q: The rpc actor failed to complete successfully error is displayed in the nfs.log, even after the volume is mounted successfully.
- Q: The mount command fails with No such file or directory.
Q: The mount command on the NFS client fails with RPC Error: Program not registered. This error is encountered due to one of the following reasons:
- The NFS server is not running. You can check the status using the following command:
# gluster volume status
- The volume is not started. You can check the status using the following command:
# gluster volume info
- rpcbind is restarted. To check if rpcbind is running, execute the following command:
# ps ax| grep rpcbind
- If the NFS server is not running, then start the NFS server using the following command:
# gluster volume start VOLNAME
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - If the volume is not started, then start the volume using the following command:
# gluster volume start VOLNAME
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - If both rpcbind and NFS server is running then restart the NFS server using the following commands:
# gluster volume stop VOLNAME
# gluster volume start VOLNAME
Q: The rpcbind service is not running on the NFS client. This could be due to the following reasons:
- The portmap is not running.
- Another instance of kernel NFS server or glusterNFS server is running.
Start the rpcbind service by running the following command:
# service rpcbind start
- Start the rpcbind service on the NFS server by running the following command:
# service rpcbind start
After starting the rpcbind service, the glusterFS NFS server needs to be restarted.
- Stop another NFS server running on the same machine.
Such an error is also seen when there is another NFS server running on the same machine, but it is not the glusterFS NFS server. On Linux systems, this could be the kernel NFS server. Resolution involves stopping the other NFS server or not running the glusterFS NFS server on the machine. Before stopping the kernel NFS server, ensure that no critical service depends on access to that NFS server's exports.
On Linux, kernel NFS servers can be stopped by using either of the following commands depending on the distribution in use:
# service nfs-kernel-server stop
# service nfs stop
- Restart the glusterFS NFS server.
Q: The mount command fails with NFS server failed error:
mount: mount to NFS server '10.1.10.11' failed: timed out (retrying).
- Disable name lookup requests from the NFS server to a DNS server.
The NFS server attempts to authenticate NFS clients by performing a reverse DNS lookup to match host names in the volume file with the client IP addresses. There can be a situation where the NFS server either is not able to connect to the DNS server or the DNS server is taking too long to respond to a DNS request. These delays can result in delayed replies from the NFS server to the NFS client, resulting in the timeout error.
The NFS server provides a workaround that disables DNS requests, instead relying only on the client IP addresses for authentication. The following option can be added for successful mounting in such situations:
option nfs.addr.namelookup off
Note
Remember that disabling name lookup forces authentication of clients to use only IP addresses. If the authentication rules in the volume file use host names, those authentication rules will fail and client mounting will fail.
- The NFS version used by the NFS client is other than version 3 by default.
The glusterFS NFS server supports version 3 of the NFS protocol by default. In recent Linux kernels, the default NFS version has been changed from 3 to 4. It is possible that the client machine is unable to connect to the glusterFS NFS server because it is using version 4 messages, which are not understood by the glusterFS NFS server. The timeout can be resolved by forcing the NFS client to use version 3. The vers option to the mount command is used for this purpose:
# mount nfsserver:export -o vers=3 /MOUNTPOINT
Q: The showmount command fails with clnt_create: RPC: Unable to receive error. This error is encountered due to the following reasons:
- The firewall might have blocked the port.
- rpcbind might not be running.
Q: The application fails with Invalid argument or Value too large for defined data type.
These two errors generally happen for 32-bit NFS clients, or applications that do not support 64-bit inode numbers or large files. Use the following option to make the glusterFS NFS server return 32-bit inode numbers instead:
NFS.enable-ino32 <on | off>
This option is off by default, which permits NFS to return 64-bit inode numbers by default.
Applications that will benefit from this option include those that are:
- built and run on 32-bit machines, which do not support large files by default,
- built to 32-bit standards on 64-bit systems.
Applications which can be rebuilt from source are recommended to be rebuilt using the following flag with gcc:
-D_FILE_OFFSET_BITS=64
Q: After the machine that is running the NFS server is restarted, the client fails to reclaim the locks held earlier.
Run chkconfig --list nfslock to check if NSM is configured during OS boot.
If any of the entries are on, run chkconfig nfslock off to disable NSM clients during boot, which resolves the issue.
Q: The rpc actor failed to complete successfully error is displayed in the nfs.log, even after the volume is mounted successfully.
Gluster NFS supports only NFS version 3. When the NFS version is not specified, the client attempts to negotiate version 4 before falling back to version 3, which results in the following messages in the nfs.log file:
[2013-06-25 00:03:38.160547] W [rpcsvc.c:180:rpcsvc_program_actor] 0-rpc-service: RPC program version not available (req 100003 4)
[2013-06-25 00:03:38.160669] E [rpcsvc.c:448:rpcsvc_check_and_reply_error] 0-rpcsvc: rpc actor failed to complete successfully
To resolve the issue, declare NFS version 3 and the noacl option in the mount command as follows:
# mount -t nfs -o vers=3,noacl server1:/test-volume /mnt/glusterfs
Q: The mount command fails with No such file or directory.
This error is encountered when the volume does not exist.
6.3.3. NFS Ganesha
6.3.3.1. Supported Features of NFS-Ganesha
In a highly available active-active environment, if an NFS-Ganesha server that is connected to an NFS client running a particular application goes down, the application/NFS client is seamlessly connected to another NFS-Ganesha server without any administrative intervention.
NFS-Ganesha supports addition and removal of exports dynamically. Dynamic exports are managed by the DBus interface. DBus is a system local IPC mechanism for system management and peer-to-peer application communication.
In NFS-Ganesha, multiple Red Hat Gluster Storage volumes or sub-directories can be exported simultaneously.
NFS-Ganesha creates and maintains an NFSv4 pseudo-file system, which provides clients with seamless access to all exported objects on the server.
The NFS-Ganesha NFSv4 protocol includes integrated support for Access Control Lists (ACLs), which are similar to those used by Windows. These ACLs can be used to identify a trustee and specify the access rights allowed or denied for that trustee. This feature is disabled by default.
6.3.3.2. Setting up NFS Ganesha
6.3.3.2.1. Port and Firewall Information for NFS-Ganesha
Service | Port Number | Protocol |
sshd | 22 | TCP |
rpcbind/portmapper | 111 | TCP/UDP |
NFS | 2049 | TCP/UDP |
mountd | 20048 | TCP/UDP |
NLM | 32803 | TCP/UDP |
RQuota | 875 | TCP/UDP |
statd | 662 | TCP/UDP |
pcsd | 2224 | TCP |
pacemaker_remote | 3121 | TCP |
corosync | 5404 and 5405 | UDP |
dlm | 21064 | TCP |
Note
Ensure the statd service is configured to use the ports mentioned above by executing the following commands on every node in the nfs-ganesha cluster:
- On Red Hat Enterprise Linux 7, edit the /etc/sysconfig/nfs file as mentioned below:
# sed -i '/STATD_PORT/s/^#//' /etc/sysconfig/nfs
Note
This step is not applicable for Red Hat Enterprise Linux 8.
- Restart the statd service:
For Red Hat Enterprise Linux 7:
# systemctl restart nfs-config
# systemctl restart rpc-statd
Note
This step is not applicable for Red Hat Enterprise Linux 8.
Note
- Edit the /etc/sysconfig/nfs file using the following commands:
# sed -i '/STATD_PORT/s/^#//' /etc/sysconfig/nfs
# sed -i '/LOCKD_TCPPORT/s/^#//' /etc/sysconfig/nfs
# sed -i '/LOCKD_UDPPORT/s/^#//' /etc/sysconfig/nfs
- Restart the services:
For Red Hat Enterprise Linux 7:
# systemctl restart nfs-config
# systemctl restart rpc-statd
# systemctl restart nfslock
- Open the ports that are configured in the first step using the following commands:
# firewall-cmd --zone=public --add-port=662/tcp --add-port=662/udp \
--add-port=32803/tcp --add-port=32769/udp \
--add-port=111/tcp --add-port=111/udp
# firewall-cmd --zone=public --add-port=662/tcp --add-port=662/udp \
--add-port=32803/tcp --add-port=32769/udp \
--add-port=111/tcp --add-port=111/udp --permanent
- To ensure that an NFS client UDP mount does not fail, open port 2049 by executing the following commands:
# firewall-cmd --zone=zone_name --add-port=2049/udp
# firewall-cmd --zone=zone_name --add-port=2049/udp --permanent
- Firewall Settings
On Red Hat Enterprise Linux 7, enable the firewall services mentioned below.
- Get a list of active zones using the following command:
# firewall-cmd --get-active-zones
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Allow the firewall service in the active zones, run the following commands:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
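A sketch derived from the port table above (the firewalld zone name is a placeholder; repeat the command with --permanent to persist the rules) opens the required ports directly:
# firewall-cmd --zone=zone_name --add-port=111/tcp --add-port=111/udp \
--add-port=2049/tcp --add-port=2049/udp \
--add-port=20048/tcp --add-port=20048/udp \
--add-port=32803/tcp --add-port=32803/udp \
--add-port=875/tcp --add-port=875/udp \
--add-port=662/tcp --add-port=662/udp \
--add-port=2224/tcp --add-port=3121/tcp \
--add-port=5404/udp --add-port=5405/udp \
--add-port=21064/tcp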
6.3.3.2.2. Prerequisites to run NFS-Ganesha
- A Red Hat Gluster Storage volume must be available for export, and the NFS-Ganesha RPMs must be installed.
- Ensure that the fencing agents are configured. For more information on configuring fencing agents, refer to the following documentation:
- Fencing Configuration section in the High Availability Add-On Administration guide: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/s1-fenceconfig-haaa
- Fence Devices section in the High Availability Add-On Reference guide: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-guiclustcomponents-haar#s2-guifencedevices-HAAR
Note
The required minimum number of nodes for a highly available installation/configuration of NFS-Ganesha is 3, and the maximum number of supported nodes is 8.
- Only one of NFS-Ganesha, Gluster NFS, or kernel NFS can be enabled on a given machine/host, because all NFS implementations use port 2049 and only one can be active at a given time. Hence, you must disable kernel NFS before NFS-Ganesha is started.
Disable the kernel NFS using the following commands:
For Red Hat Enterprise Linux 7
# systemctl stop nfs-server
# systemctl disable nfs-server
To verify that kernel NFS is disabled, execute the following command:
# systemctl status nfs-server
The service should be in the stopped state.
Note
Gluster NFS will be stopped automatically when NFS-Ganesha is enabled.
Ensure that none of the volumes have the variable nfs.disable set to 'off'.
- Ensure to configure the ports as mentioned in Port/Firewall Information for NFS-Ganesha.
- Edit the ganesha-ha.conf file based on your environment.
- Reserve virtual IPs on the network for each of the servers configured in the ganesha.conf file. Ensure that these IPs are different than the hosts' static IPs and are not used anywhere else in the trusted storage pool or in the subnet.
- Ensure that all the nodes in the cluster are DNS resolvable. For example, you can populate the /etc/hosts with the details of all the nodes in the cluster.
- Make sure that SELinux is in Enforcing mode.
- Start the network service on all machines using the following command:
For Red Hat Enterprise Linux 7:
# systemctl start network
- Create and mount a gluster shared volume by executing the following command:
# gluster volume set all cluster.enable-shared-storage enable
volume set: success
For more information, see Section 11.12, “Setting up Shared Storage Volume”.
- Create a directory named nfs-ganesha under /var/run/gluster/shared_storage
Note
With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/.
- Copy the ganesha.conf and ganesha-ha.conf files from /etc/ganesha to /var/run/gluster/shared_storage/nfs-ganesha.
- Enable the glusterfssharedstorage.service service using the following command:
# systemctl enable glusterfssharedstorage.service
- Enable the nfs-ganesha service using the following command:
# systemctl enable nfs-ganesha
6.3.3.2.3. Configuring the Cluster Services
Note
- Enable the pacemaker service using the following command:
For Red Hat Enterprise Linux 7:
# systemctl enable pacemaker.service
- Start the pcsd service using the following command.
For Red Hat Enterprise Linux 7:
# systemctl start pcsd
Note
- To start pcsd by default after the system is rebooted, execute the following command:
For Red Hat Enterprise Linux 7:
# systemctl enable pcsd
- Set a password for the user ‘hacluster’ on all the nodes using the following command. Use the same password for all the nodes:
# echo <password> | passwd --stdin hacluster
- Perform cluster authentication between the nodes, where the username is ‘hacluster’ and the password is the one you used in the previous step. Ensure to execute the following command on every node:
For Red Hat Enterprise Linux 7:
# pcs cluster auth <hostname1> <hostname2> ...
For Red Hat Enterprise Linux 8:
# pcs host auth <hostname1> <hostname2> ...
Note
The hostname of all the nodes in the Ganesha-HA cluster must be included in the command when executing it on every node.
For example, in a four node cluster with nodes nfs1, nfs2, nfs3, and nfs4, execute the following command on every node:
For Red Hat Enterprise Linux 7:
# pcs cluster auth nfs1 nfs2 nfs3 nfs4
For Red Hat Enterprise Linux 8:
# pcs host auth nfs1 nfs2 nfs3 nfs4
- Key-based SSH authentication without a password for the root user has to be enabled on all the HA nodes. Follow these steps:
- On one of the nodes (node1) in the cluster, run:
ssh-keygen -f /var/lib/glusterd/nfs/secret.pem -t rsa -N ''
# ssh-keygen -f /var/lib/glusterd/nfs/secret.pem -t rsa -N ''
- Deploy the generated public key from node1 to all the nodes (including node1) by executing the following command for every node:
ssh-copy-id -i /var/lib/glusterd/nfs/secret.pem.pub root@<node-ip/hostname>
# ssh-copy-id -i /var/lib/glusterd/nfs/secret.pem.pub root@<node-ip/hostname>
- Copy the ssh keypair from node1 to all the nodes in the Ganesha-HA cluster by executing the following command for every node:
scp -i /var/lib/glusterd/nfs/secret.pem /var/lib/glusterd/nfs/secret.* root@<node-ip/hostname>:/var/lib/glusterd/nfs/
# scp -i /var/lib/glusterd/nfs/secret.pem /var/lib/glusterd/nfs/secret.* root@<node-ip/hostname>:/var/lib/glusterd/nfs/
- As part of cluster setup, port 875 is used to bind to the Rquota service. If this port is already in use, assign a different port to this service by modifying the following line in the ‘/etc/ganesha/ganesha.conf’ file on all the nodes.
# Use a non-privileged port for RQuota
Rquota_Port = 875;
6.3.3.2.4. Creating the ganesha-ha.conf file
- Create a directory named nfs-ganesha under /var/run/gluster/shared_storage
Note
With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/.
- Copy the ganesha.conf and ganesha-ha.conf files from /etc/ganesha to /var/run/gluster/shared_storage/nfs-ganesha.
Note
- Pacemaker handles the creation of the VIP and assigning an interface.
- Ensure that the VIP is in the same network range.
- Ensure that the HA_CLUSTER_NODES are specified as hostnames. Using IP addresses will cause clustering to fail.
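For reference, a minimal ganesha-ha.conf might look roughly like the following sketch. The cluster name, hostnames, and virtual IPs are placeholders, and the exact parameter set should be checked against the sample file shipped in /etc/ganesha; the VIP_* keys are commonly written as the node hostname with dots replaced by underscores:
HA_NAME="ganesha-ha-cluster"
HA_CLUSTER_NODES="server1.example.com,server2.example.com,server3.example.com"
VIP_server1_example_com="192.168.1.20"
VIP_server2_example_com="192.168.1.21"
VIP_server3_example_com="192.168.1.22"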
6.3.3.2.5. Configuring NFS-Ganesha using Gluster CLI
To set up the HA cluster, enable NFS-Ganesha by executing the following command:
- Enable NFS-Ganesha by executing the following command:
# gluster nfs-ganesha enable
Note
Before enabling or disabling NFS-Ganesha, ensure that all the nodes that are part of the NFS-Ganesha cluster are up.
For example,
# gluster nfs-ganesha enable
Enabling NFS-Ganesha requires Gluster-NFS to be disabled across the trusted pool. Do you still want to continue?
(y/n) y
This will take a few minutes to complete. Please wait ..
nfs-ganesha : success
Note
After enabling NFS-Ganesha, if rpcinfo -p shows the statd port different from 662, then restart the statd service:
For Red Hat Enterprise Linux 7:
# systemctl restart rpc-statd
Tearing down the HA cluster
To tear down the HA cluster, execute the following command:
# gluster nfs-ganesha disable
For example,
# gluster nfs-ganesha disable
Disabling NFS-Ganesha will tear down entire ganesha cluster across the trusted pool. Do you still want to continue?
(y/n) y
This will take a few minutes to complete. Please wait ..
nfs-ganesha : success
Verifying the status of the HA cluster
To verify the status of the HA cluster, execute the following script:
# /usr/libexec/ganesha/ganesha-ha.sh --status /var/run/gluster/shared_storage/nfs-ganesha
Note
With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/.
For example:
# /usr/libexec/ganesha/ganesha-ha.sh --status /var/run/gluster/shared_storage/nfs-ganesha
Note
- It is recommended to manually restart the ganesha.nfsd service after the node is rebooted, to fail back the VIPs.
- Disabling NFS-Ganesha does not enable Gluster NFS by default. If required, Gluster NFS must be enabled manually.
Note
- NFS-Ganesha fails to start.
- NFS-Ganesha port 875 is unavailable.
- The ganesha.conf file is available at /etc/ganesha/ganesha.conf.
- Uncomment the line #Enable_RQUOTA = false; to disable RQUOTA.
- Restart the nfs-ganesha service on all nodes.
# systemctl restart nfs-ganesha
6.3.3.2.6. Exporting and Unexporting Volumes through NFS-Ganesha
Note
To export a Red Hat Gluster Storage volume, execute the following command:
# gluster volume set <volname> ganesha.enable on
For example:
# gluster vol set testvol ganesha.enable on
volume set: success
To unexport a Red Hat Gluster Storage volume, execute the following command:
# gluster volume set <volname> ganesha.enable off
For example:
# gluster vol set testvol ganesha.enable off
volume set: success
6.3.3.2.7. Verifying the NFS-Ganesha Status
- Check if NFS-Ganesha is started by executing the following commands:
On Red Hat Enterprise Linux 7:
# systemctl status nfs-ganesha
- Check if the volume is exported.
# showmount -e localhost
For example:
# showmount -e localhost
Export list for localhost:
/volname (everyone)
- The logs of the ganesha.nfsd daemon are written to /var/log/ganesha/ganesha.log. Check the log file if you notice any unexpected behavior.
6.3.3.3. Accessing NFS-Ganesha Exports
- Execute the following commands to set the tunable:
# sysctl -w sunrpc.tcp_slot_table_entries=128
# echo 128 > /proc/sys/sunrpc/tcp_slot_table_entries
# echo 128 > /proc/sys/sunrpc/tcp_max_slot_table_entries
- To make the tunable persistent on reboot, execute the following commands:
echo "options sunrpc tcp_slot_table_entries=128" >> /etc/modprobe.d/sunrpc.conf echo "options sunrpc tcp_max_slot_table_entries=128" >> /etc/modprobe.d/sunrpc.conf
# echo "options sunrpc tcp_slot_table_entries=128" >> /etc/modprobe.d/sunrpc.conf # echo "options sunrpc tcp_max_slot_table_entries=128" >> /etc/modprobe.d/sunrpc.conf
Note
6.3.3.3.1. Mounting exports in NFSv3 Mode
# mount -t nfs -o vers=3 virtual_ip:/volname /mountpoint
For example:
# mount -t nfs -o vers=3 10.70.0.0:/testvol /mnt
6.3.3.3.2. Mounting exports in NFSv4 Mode
# mount -t nfs -o vers=4 virtual_ip:/volname /mountpoint
For example:
# mount -t nfs -o vers=4 10.70.0.0:/testvol /mnt
Important
# mount -t nfs -o vers=4.0 or 4.1 virtual_ip:/volname /mountpoint
For example:
# mount -t nfs -o vers=4.1 10.70.0.0:/testvol /mnt
6.3.3.3.3. Finding clients of an NFS server using dbus
# dbus-send --type=method_call --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ClientMgr org.ganesha.nfsd.clientmgr.ShowClients
Note
# dbus-send --type=method_call --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.DisplayExport uint16:Export_Id
Where client_type is the client's IP address; CIDR_version, CIDR_address, CIDR_mask, and CIDR_proto are the CIDR representation details of the client; and uint32 anonymous_uid, uint32 anonymous_gid, uint32 expire_time_attr, uint32 options, and uint32 set are the client permissions.
6.3.3.4. Modifying the NFS-Ganesha HA Setup
6.3.3.4.1. Adding a Node to the Cluster
Note
If the /var/lib/glusterd/nfs/secret.pem SSH key is already generated, those steps should not be repeated.
To add a node to the cluster, execute the following command on one of the nodes in the existing NFS-Ganesha cluster:
# /usr/libexec/ganesha/ganesha-ha.sh --add <HA_CONF_DIR> <HOSTNAME> <NODE-VIP>
where,
HA_CONF_DIR: The directory path containing the ganesha-ha.conf file. By default it is located at /run/gluster/shared_storage/nfs-ganesha.
HOSTNAME: Hostname of the new node to be added.
NODE-VIP: Virtual IP of the new node to be added.
For example:
# /usr/libexec/ganesha/ganesha-ha.sh --add /var/run/gluster/shared_storage/nfs-ganesha server16 10.00.00.01
6.3.3.4.2. Deleting a Node in the Cluster
# /usr/libexec/ganesha/ganesha-ha.sh --delete <HA_CONF_DIR> <HOSTNAME>
where,
HA_CONF_DIR: The directory path containing the ganesha-ha.conf file. By default it is located at /run/gluster/shared_storage/nfs-ganesha.
HOSTNAME: Hostname of the node to be deleted.
For example:
# /usr/libexec/ganesha/ganesha-ha.sh --delete /var/run/gluster/shared_storage/nfs-ganesha server16
6.3.3.4.3. Replacing a Node in the Cluster
- Delete the node from the cluster. Refer to Section 6.3.3.4.2, “Deleting a Node in the Cluster”.
- Create a node with the same hostname. Refer to Section 11.10.2, “Replacing a Host Machine with the Same Hostname”.
Note
It is not required for the new node to have the same name as that of the old node.
- Add the node to the cluster. Refer to Section 6.3.3.4.1, “Adding a Node to the Cluster”.
Note
Ensure that the firewall services are enabled as mentioned in Section 6.3.3.2.1, “Port and Firewall Information for NFS-Ganesha”, and that the prerequisites in Section 6.3.3.2.2, “Prerequisites to run NFS-Ganesha” are met.
6.3.3.5. Modifying the Default Export Configurations
For more information on the export configuration options, see the ganesha-export-config 8 man page.
- Edit/add the required fields in the corresponding export file located at /run/gluster/shared_storage/nfs-ganesha/exports/.
- Execute the following command:
# /usr/libexec/ganesha/ganesha-ha.sh --refresh-config <HA_CONF_DIR> <volname>
where,
- HA_CONF_DIR: The directory path containing the ganesha-ha.conf file. By default it is located at /run/gluster/shared_storage/nfs-ganesha.
- volname: The name of the volume whose export configuration has to be changed.
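For reference, the export file that gluster generates for a volume contains an EXPORT block roughly of the following shape. This is an illustrative sketch rather than the exact shipped defaults; the export ID, IP address, and volume name are placeholders:
EXPORT{
    Export_Id = 1 ;    # Export ID unique to each export
    Path = "/testvol";    # Path of the volume to be exported
    FSAL {
        name = GLUSTER;
        hostname = "10.70.0.1";    # IP of one of the nodes in the trusted pool
        volume = "testvol";    # Volume name
    }
    Access_type = RW;    # Access permissions
    Squash = No_root_squash;    # To enable/disable root squashing
    Disable_ACL = true;    # To enable/disable ACL
    Pseudo = "/testvol";    # NFSv4 pseudo path for this export
    Protocols = "3","4" ;    # NFS protocols supported
    Transports = "UDP","TCP" ;    # Transport protocols supported
    SecType = "sys";    # Security flavors supported
}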
The following sections describe various configurations possible via NFS-Ganesha. Minor changes have to be made to the export.conf file to see the expected behavior:
- Providing Permissions for Specific Clients
- Enabling and Disabling NFSv4 ACLs
- Providing Pseudo Path for NFSv4 Mount
- Exporting Subdirectories
6.3.3.5.1. Providing Permissions for Specific Clients
The parameter values and permission values given in the EXPORT block apply to any client that mounts the exported volume. To provide specific permissions to specific clients, introduce a client block inside the EXPORT block. Declare the client-specific permissions inside the client block; for those clients they override the corresponding values given in the EXPORT block.
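As an illustration, a client block of roughly the following shape can be placed inside the EXPORT block; the IP address and the permission values are placeholders, and the accepted parameters are described in the ganesha-export-config man page:
client {
        clients = 10.0.0.1;    # IP of the client
        access_type = "RO";    # Read-only permissions for this client
        Protocols = "3";    # Allow only NFSv3 protocol for this client
        anonymous_uid = 1440;
        anonymous_gid = 72;
}
After editing the export file, run the ganesha-ha.sh --refresh-config script described above so that the change takes effect without restarting the service.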
6.3.3.5.2. Enabling and Disabling NFSv4 ACLs
To enable NFSv4 ACLs, edit the following parameter in the export file:
Disable_ACL = false;
Note
6.3.3.5.3. Providing Pseudo Path for NFSv4 Mount
Pseudo = "pseudo_path"; # NFSv4 pseudo path for this export. Eg: "/test_volume_pseudo"
Pseudo = "pseudo_path"; # NFSv4 pseudo path for this export. Eg: "/test_volume_pseudo"
6.3.3.5.4. Exporting Subdirectories
- Create a separate export file for the sub-directory.
- Change the Export_ID to any unique unused ID. Edit the Path and Pseudo parameters and add the volpath entry to the export file.
- If a new export file is created for the sub-directory, you must add its entry in the ganesha.conf file:
%include "/var/run/gluster/shared_storage/nfs-ganesha/exports/export.<share-name>.conf"
Note
With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/.
For example:
%include "/var/run/gluster/shared_storage/nfs-ganesha/exports/export.ganesha.conf" --> Volume entry
%include "/var/run/gluster/shared_storage/nfs-ganesha/exports/export.ganesha-dir.conf" --> Subdir entry
- Execute the following script to export the sub-directory shares without disrupting existing clients connected to other shares:
# /usr/libexec/ganesha/ganesha-ha.sh --refresh-config <HA_CONF_DIR> <share-name>
For example:
# /usr/libexec/ganesha/ganesha-ha.sh --refresh-config /run/gluster/shared_storage/nfs-ganesha/ ganesha-dir
- Edit the volume export file with the subdir entry. For example:
- Change the Export_ID to any unique unused ID. Edit the Path and Pseudo parameters and add the volpath entry to the export file.
# /usr/libexec/ganesha/ganesha-ha.sh --refresh-config <HA_CONF_DIR> <share-name>
For example:
# /usr/libexec/ganesha/ganesha-ha.sh --refresh-config /run/gluster/shared_storage/nfs-ganesha/ ganesha
Note
If the same export file contains multiple EXPORT{} entries, then a volume restart or nfs-ganesha service restart is required.
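For illustration, the EXPORT block for a sub-directory could look roughly like the following sketch; the export ID, IP address, volume name, and sub-directory name are placeholders, and volpath is the entry referred to in the steps above:
EXPORT{
    Export_Id = 2;    # Unique, unused export ID
    Path = "/testvol/dir1";    # Path of the sub-directory being exported
    FSAL {
        name = GLUSTER;
        hostname = "10.70.0.1";    # IP of one of the nodes in the trusted pool
        volume = "testvol";    # Volume name
        volpath = "/dir1";    # Sub-directory path from the root of the volume
    }
    Access_type = RW;
    Pseudo = "/testvol/dir1";    # NFSv4 pseudo path for this export
    Protocols = "3","4";
    Transports = "UDP","TCP";
    SecType = "sys";
}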
6.3.3.5.4.1. Enabling all_squash option
To enable all_squash, edit the following parameter:
Squash = all_squash ; # To enable/disable root squashing
6.3.3.5.5. Unexporting Subdirectories
- Note the export ID of the share which you want to unexport from the configuration file (/var/run/gluster/shared_storage/nfs-ganesha/exports/file-name.conf).
- Deleting the configuration:
- Delete the configuration file (if there is a separate configuration file):
# rm -rf /var/run/gluster/shared_storage/nfs-ganesha/exports/file-name.conf
- Delete the entry of the conf file from /etc/ganesha/ganesha.conf.
Remove the line:
%include "/var/run/gluster/shared_storage/nfs-ganesha/exports/export.conf"
- Run the below command:
# dbus-send --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.RemoveExport uint16:export_id
The export_id in the above command should be the export ID obtained in the first step.
6.3.3.6. Configuring Kerberized NFS-Ganesha
Note
- Install the krb5-workstation and the ntpdate (RHEL 7) or the chrony (RHEL 8) packages on all the machines:
# yum install krb5-workstation
For Red Hat Enterprise Linux 7:
# yum install ntpdate
For Red Hat Enterprise Linux 8:
# dnf install chrony
Note
- The krb5-libs package will be updated as a dependent package.
- For RHEL 7, configure the ntpdate based on the valid time server according to the environment:
# echo <valid_time_server> >> /etc/ntp/step-tickers
# systemctl enable ntpdate
# systemctl start ntpdate
For RHEL 8, configure chrony based on the valid time server according to the environment:
# vi /etc/chrony.conf
# systemctl enable chronyd
# systemctl start chronyd
For both RHEL 7 and RHEL 8, perform the following steps:
- Ensure that all systems can resolve each other by FQDN in DNS.
- Configure the /etc/krb5.conf file and add the relevant changes accordingly (a sample sketch follows this list).
Note
For further details regarding the file configuration, refer to man krb5.conf.
- On the NFS-server and client, update the /etc/idmapd.conf file by making the required change. For example:
Domain = example.com
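For the /etc/krb5.conf step referenced above, a minimal configuration might look like the following sketch; the realm name and the KDC/admin server hostnames are placeholders for your Kerberos environment:
[logging]
 default = FILE:/var/log/krb5libs.log

[libdefaults]
 default_realm = EXAMPLE.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false

[realms]
 EXAMPLE.COM = {
  kdc = kdc.example.com
  admin_server = kdc.example.com
 }

[domain_realm]
 .example.com = EXAMPLE.COM
 example.com = EXAMPLE.COM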
6.3.3.6.1. Setting up the NFS-Ganesha Server
Note
- Install the following packages:
# yum install nfs-utils
# yum install rpcbind
- Install the relevant gluster and NFS-Ganesha rpms. For more information, see the Red Hat Gluster Storage 3.5 Installation Guide.
- Create a Kerberos principal and add it to krb5.keytab on the NFS-Ganesha server:
$ kadmin
$ kadmin: addprinc -randkey nfs/<host_name>@EXAMPLE.COM
$ kadmin: ktadd nfs/<host_name>@EXAMPLE.COM
- Update the /etc/ganesha/ganesha.conf file as mentioned below (a sketch of the NFS_KRB5 block follows this list).
- Based on the different Kerberos security flavours (krb5, krb5i and krb5p) supported by nfs-ganesha, configure the 'SecType' parameter in the volume export file (/var/run/gluster/shared_storage/nfs-ganesha/exports) with the appropriate security flavour.
Note
With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/.
- Create an unprivileged user and ensure that the users that are created are resolvable to the UIDs through the central user database. For example:
# useradd guest
Note
The username of this user has to be the same as the one on the NFS-client.
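For the ganesha.conf update referenced above, the Kerberos settings typically live in an NFS_KRB5 block. The following is an illustrative sketch; the principal name and keytab path are the commonly used defaults rather than values taken from this guide:
NFS_KRB5
{
        PrincipalName = nfs ;
        KeytabPath = /etc/krb5.keytab ;
        Active_krb5 = true ;
}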
6.3.3.6.2. Setting up the NFS Client
Note
- Install the following packages:
# yum install nfs-utils
# yum install rpcbind
- Create a Kerberos principal and add it to krb5.keytab on the client side. For example:
# kadmin
# kadmin: addprinc -randkey host/<host_name>@EXAMPLE.COM
# kadmin: ktadd host/<host_name>@EXAMPLE.COM
- Check the status of the nfs-client.target service and start it, if not already started:
# systemctl status nfs-client.target
# systemctl start nfs-client.target
# systemctl enable nfs-client.target
- Create an unprivileged user and ensure that the users that are created are resolvable to the UIDs through the central user database. For example:
# useradd guest
Note
The username of this user has to be the same as the one on the NFS-server.
- Mount the volume specifying the Kerberos security type:
# mount -t nfs -o sec=krb5 <host_name>:/testvolume /mnt
As root, all access should be granted.
For example, creation of a directory on the mount point and all other operations as root should be successful:
# mkdir <directory name>
- Log in as the guest user:
# su - guest
Without a Kerberos ticket, all access to /mnt should be denied. For example:
# su guest
# ls
ls: cannot open directory .: Permission denied
- Get the Kerberos ticket for the guest user and access /mnt:
# kinit
Password for guest@EXAMPLE.COM:
# ls
<directory created>
Important
With this ticket, some access to /mnt must be allowed. If there are directories on the NFS-server that "guest" does not have access to, permissions should be enforced correctly for those directories.
6.3.3.7. NFS-Ganesha Service Downtime
- If the ganesha.nfsd process dies (crashes, is killed by the OOM killer, or is killed by the administrator), the maximum time to detect it and put the ganesha cluster into grace is 20 seconds, plus whatever time pacemaker needs to effect the fail-over.
Note
The time taken to detect that the service is down can be changed by running the following commands on all the nodes (a concrete example follows this list):
# pcs resource op remove nfs-mon monitor
# pcs resource op add nfs-mon monitor interval=<interval_period_value>
Copy to Clipboard Copied! Toggle word wrap Toggle overflow - If the whole node dies (including network failure) then this down time is the total of whatever time pacemaker needs to detect that the node is gone, the time to put the cluster into grace, and the time to effect the fail-over. This is ~20 seconds.
- So the max-fail-over time is approximately 20-22 seconds, and the average time is typically less. In other words, the time taken for NFS clients to detect server reboot or resume I/O is 20 - 22 seconds.
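For example, the following sketch sets the nfs-mon monitor interval to 30 seconds; the interval value is an assumption chosen only for illustration, so pick one that matches how quickly you need failures detected:
# pcs resource op remove nfs-mon monitor
# pcs resource op add nfs-mon monitor interval=30s
# pcs resource show nfs-mon    # on Red Hat Enterprise Linux 7; use 'pcs resource config nfs-mon' on 8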
6.3.3.7.1. Modifying the Fail-over Time
Protocols | File Operations
---|---
NFSV3 |
NLM |
NFSV4 |
Note
To modify the grace period, add the following block to the /etc/ganesha/ganesha.conf file:
NFSv4 {
Grace_Period=<grace_period_value_in_sec>;
}
After modifying the /etc/ganesha/ganesha.conf file, restart the NFS-Ganesha service using the following command on all the nodes:
# systemctl restart nfs-ganesha
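Putting the two steps together, a minimal worked example is shown below; the 60-second grace period is an assumed value used only for illustration:
NFSv4 {
    Grace_Period = 60;    # assumed value; tune to your environment
}
After adding this block to /etc/ganesha/ganesha.conf on every node, restart nfs-ganesha on all the nodes as shown above.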
6.3.3.8. Tuning Readdir Performance for NFS-Ganesha
The Dir_Chunk parameter enables directory content to be read in chunks rather than fetching the whole directory at once. This parameter is enabled by default. The default value of this parameter is 128. The range for this parameter is 1 to UINT32_MAX. To disable this parameter, set the value to 0 (see the example after the procedure below).
Procedure 6.1. Configuring readdir performance for NFS-Ganesha
- Edit the /etc/ganesha/ganesha.conf file.
- Locate the CACHEINODE block.
- Add the Dir_Chunk parameter inside the block:
CACHEINODE {
    Entries_HWMark = 125000;
    Chunks_HWMark = 1000;
    Dir_Chunk = 128; # Range: 1 to UINT32_MAX, 0 to disable
}
- Save the ganesha.conf file and restart the NFS-Ganesha service on all nodes:
# systemctl restart nfs-ganesha
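For example, if you decide to disable chunked readdir entirely, only the Dir_Chunk value in the same block changes, as in the sketch below; whether disabling helps is workload-dependent:
CACHEINODE {
    Entries_HWMark = 125000;
    Chunks_HWMark = 1000;
    Dir_Chunk = 0;    # 0 disables chunked readdir
}
Save the file and restart the NFS-Ganesha service on all nodes, as in the procedure above.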
6.3.3.9. Troubleshooting NFS Ganesha
Ensure you execute the following commands for any issue or failure that is encountered:
- Make sure all the prerequisites are met.
- Execute the following commands to check the status of the services:
# service nfs-ganesha status
# service pcsd status
# service pacemaker status
# pcs status
- Review the following logs to understand the cause of failure (a helper sketch follows this list):
/var/log/ganesha/ganesha.log
/var/log/ganesha/ganesha-gfapi.log
/var/log/messages
/var/log/pcsd.log
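If you need to repeat these checks often, a small helper sketch such as the one below can gather everything in one pass; the script name and the number of log lines shown are arbitrary choices, not part of the product:
#!/bin/bash
# ganesha-health.sh (hypothetical helper): collect service status and recent log lines
for svc in nfs-ganesha pcsd pacemaker; do
    echo "=== service $svc status ==="
    service "$svc" status
done
echo "=== pcs status ==="
pcs status
for log in /var/log/ganesha/ganesha.log /var/log/ganesha/ganesha-gfapi.log /var/log/messages /var/log/pcsd.log; do
    echo "=== last 20 lines of $log ==="
    tail -n 20 "$log"
done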
- Situation
NFS-Ganesha fails to start.
Solution
Ensure you execute all the mandatory checks to understand the root cause before proceeding with the following steps. Follow the listed steps to fix the issue:
- Ensure the kernel and gluster nfs services are inactive.
- Ensure that port 875 is free to connect to the RQUOTA service.
- Ensure that the shared storage volume mount exists on the server after a node reboot/shutdown. If it does not, mount the shared storage volume manually using the following command:
# mount -t glusterfs <local_node's_hostname>:gluster_shared_storage /var/run/gluster/shared_storage
Note
With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/.
For more information, see the section Exporting and Unexporting Volumes through NFS-Ganesha.
- Situation
NFS-Ganesha port 875 is unavailable.
Solution
Ensure you execute all the mandatory checks to understand the root cause before proceeding with the following steps. Follow the listed steps to fix the issue:
- Run the following command to extract the PID of the process using port 875:
# netstat -anlp | grep 875
- Determine if the process using port 875 is an important system or user process.
- Perform one of the following, depending upon the importance of the process:
- If the process using port 875 is an important system or user process:
- Assign a different port to this service by modifying the following line in the /etc/ganesha/ganesha.conf file on all the nodes (a worked example follows this situation):
# Use a non-privileged port for RQuota
Rquota_Port = port_number;
- Run the following commands after modifying the port number:
# semanage port -a -t mountd_port_t -p tcp port_number
# semanage port -a -t mountd_port_t -p udp port_number
- Run the following command to restart NFS-Ganesha:
# systemctl restart nfs-ganesha
- If the process using port 875 is not an important system or user process:
- Run the following command to kill the process using port 875:
# kill pid;
Use the process ID extracted from the previous step.
- Run the following command to ensure that the process is killed and port 875 is free to use:
# ps aux | grep pid;
- Run the following command to restart NFS-Ganesha:
# systemctl restart nfs-ganesha
- If required, restart the killed process.
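As a worked sketch of the 'important process' branch above, assume you picked the non-privileged port 4501 for RQuota (the port number is purely illustrative). After editing Rquota_Port in /etc/ganesha/ganesha.conf, run the following on every node:
# grep -i rquota /etc/ganesha/ganesha.conf    # should now show: Rquota_Port = 4501;
# semanage port -a -t mountd_port_t -p tcp 4501
# semanage port -a -t mountd_port_t -p udp 4501
# systemctl restart nfs-ganesha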
- Situation
NFS-Ganesha Cluster setup fails.
Solution
Ensure you execute all the mandatory checks to understand the root cause before proceeding with the following steps.
- Ensure the kernel and gluster nfs services are inactive.
- Ensure that the pcs cluster auth command is executed on all the nodes with the same password for the user hacluster.
- Ensure that the shared storage volume is mounted on all the nodes.
- Ensure that the name of the HA Cluster does not exceed 15 characters.
- Ensure UDP multicast packets are pingable using OMPING (a sketch follows this list).
- Ensure that Virtual IPs are not assigned to any NIC.
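For the multicast check, a minimal omping sketch is shown below; the hostnames are placeholders for your own HA cluster nodes, and omping must be run on each listed node at roughly the same time so that the nodes answer each other:
# omping -c 5 server1.example.com server2.example.com server3.example.com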
- Situation
NFS-Ganesha has started and fails to export a volume.
Solution
Ensure you execute all the mandatory checks to understand the root cause before proceeding with the following steps. Follow the listed steps to fix the issue:
- Ensure that the volume is in the Started state using the following command:
# gluster volume status <volname>
- Execute the following commands to check the status of the services:
# service nfs-ganesha status
# showmount -e localhost
- Review the following logs to understand the cause of failure:
/var/log/ganesha/ganesha.log
/var/log/ganesha/ganesha-gfapi.log
/var/log/messages
- Ensure that the dbus service is running using the following command:
# service messagebus status
- If the volume is not in the Started state, run the following command to start the volume:
# gluster volume start <volname>
If the volume is not exported as part of volume start, run the following command to re-export the volume:
# /usr/libexec/ganesha/dbus-send.sh /var/run/gluster/shared_storage on <volname>
Note
With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/ .
- Situation
Adding a new node to the HA cluster fails.
Solution
Ensure you execute all the mandatory checks to understand the root cause before proceeding with the following steps. Follow the listed steps to fix the issue:
- Ensure that you run the following command from one of the nodes that is already part of the cluster (a worked example follows this situation):
# ganesha-ha.sh --add <HA_CONF_DIR> <NODE-HOSTNAME> <NODE-VIP>
- Ensure that the gluster_shared_storage volume is mounted on the node that needs to be added.
- Make sure that all the nodes of the cluster are DNS resolvable from the node that needs to be added.
- Execute the following command for each of the hosts in the HA cluster on the node that needs to be added:
For Red Hat Enterprise Linux 7:
# pcs cluster auth <hostname>
For Red Hat Enterprise Linux 8:
# pcs host auth <hostname>
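A worked sketch of the add command, using placeholder values: the HA configuration directory below is the pre-3.5 Batch Update 3 shared-storage default, and the hostname and virtual IP are illustrative only:
# /usr/libexec/ganesha/ganesha-ha.sh --add /var/run/gluster/shared_storage/nfs-ganesha new-node.example.com 192.0.2.50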
- Situation
Cleanup required when nfs-ganesha HA cluster setup fails.
Solution
To restore the machines to their original state, execute the following commands on each node forming the cluster:
# /usr/libexec/ganesha/ganesha-ha.sh --teardown /var/run/gluster/shared_storage/nfs-ganesha
# /usr/libexec/ganesha/ganesha-ha.sh --cleanup /var/run/gluster/shared_storage/nfs-ganesha
# systemctl stop nfs-ganesha
Note
With the release of 3.5 Batch Update 3, the mount point of shared storage is changed from /var/run/gluster/ to /run/gluster/.
- Situation
Permission issues.
Solution
By default, the root squash option is disabled when you start NFS-Ganesha using the CLI. If you encounter any permission issues, check the unix permissions of the exported entry (a sketch follows).
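A minimal sketch of that check, assuming a volume named testvolume mounted at /mnt and exported through the default shared-storage export file (all names are illustrative):
# stat -c '%A %U:%G %n' /mnt /mnt/*
# grep -i squash /var/run/gluster/shared_storage/nfs-ganesha/exports/export.testvolume.conf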