
7.3. NFS

Linux and other operating systems that support the NFSv3 standard can use NFS to access Red Hat Storage volumes.
Differences in how operating systems implement the NFSv3 standard may result in operational issues. If you encounter issues when using NFSv3, contact Red Hat support for more information on Red Hat Storage Server client operating system compatibility and on known issues affecting NFSv3.
NFS ACL v3 is supported, which allows getfacl and setfacl operations on NFS clients. Access Control Lists (ACLs) can be configured in the glusterFS NFS server with the nfs.acl option. For example:
  • To set nfs.acl ON, run the following command:
    # gluster volume set VOLNAME nfs.acl on
  • To set nfs.acl OFF, run the following command:
    # gluster volume set VOLNAME nfs.acl off

Note

ACL is ON by default.
Red Hat Storage includes Network Lock Manager (NLM) v4. The NLM protocol allows NFSv3 clients to lock files across the network. NLM is required so that applications running on top of NFSv3 mount points can use the standard fcntl() (POSIX) and flock() (BSD) lock system calls to synchronize access across clients.
This section describes how to use NFS to mount Red Hat Storage volumes (both manually and automatically) and how to verify that the volume has been mounted successfully.

7.3.1. Using NFS to Mount Red Hat Storage Volumes

You can use either of the following methods to mount Red Hat Storage volumes:

Note

Currently, the GlusterFS NFS server supports only version 3 of the NFS protocol. As the preferred option, configure version 3 as the default version in the /etc/nfsmount.conf file by adding the following line to the file:
Defaultvers=3
If the file is not modified, ensure that vers=3 is added manually to all mount commands.
# mount nfsserver:export -o vers=3 /MOUNTPOINT
The RDMA support in GlusterFS mentioned in the previous sections applies to communication between bricks and the FUSE mount/GFAPI/NFS server. The NFS kernel client still communicates with the GlusterFS NFS server over TCP.
For volumes created with only one transport type, communication between the GlusterFS NFS server and the bricks uses that transport type. For a tcp,rdma volume, the transport can be changed using the nfs.transport-type volume set option.
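For example, a minimal sketch assuming the standard volume set syntax and a volume created with both transports:
# gluster volume set VOLNAME nfs.transport-type rdma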
After mounting a volume, you can test the mounted volume using the procedure described in Section 7.3.1.4, “Testing Volumes Mounted Using NFS”.

7.3.1.1. Manually Mounting Volumes Using NFS

Create a mount point and run the mount command to manually mount a Red Hat Storage volume using NFS.
  1. If a mount point has not yet been created for the volume, run the mkdir command to create a mount point.
    # mkdir /mnt/glusterfs
  2. Run the correct mount command for the system.
    For Linux
    # mount -t nfs -o vers=3 server1:/test-volume /mnt/glusterfs
    For Solaris
    # mount -o vers=3 nfs://server1:38467/test-volume /mnt/glusterfs
Manually Mount a Red Hat Storage Volume using NFS over TCP
Create a mount point and run the mount command to manually mount a Red Hat Storage volume using NFS over TCP.

Note

The glusterFS NFS server does not support UDP. If an NFS client, such as a Solaris client, connects using UDP by default, the following message appears:
requested NFS version or transport protocol is not supported
The nfs.mount-udp option is supported for mounting a volume; it is disabled by default (see the example after this list). The following are the limitations:
  • If nfs.mount-udp is enabled, the MOUNT protocol needed for NFSv3 can handle requests from NFS-clients that require MOUNT over UDP. This is useful for at least some versions of Solaris, IBM AIX and HP-UX.
  • Currently, MOUNT over UDP does not have support for mounting subdirectories on a volume. Mounting server:/volume/subdir exports is only functional when MOUNT over TCP is used.
  • MOUNT over UDP does not currently have support for different authentication options that MOUNT over TCP honors. Enabling nfs.mount-udp may give more permissions to NFS clients than intended via various authentication options like nfs.rpc-auth-allow, nfs.rpc-auth-reject and nfs.export-dir.
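A minimal sketch of enabling this option, following the volume set syntax used elsewhere in this section:
# gluster volume set VOLNAME nfs.mount-udp on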
  1. If a mount point has not yet been created for the volume, run the mkdir command to create a mount point.
    # mkdir /mnt/glusterfs
  2. Run the correct mount command for the system, specifying the TCP protocol option for the system.
    For Linux
    # mount -t nfs -o vers=3,mountproto=tcp server1:/test-volume /mnt/glusterfs
    For Solaris
    # mount -o proto=tcp nfs://server1:38467/test-volume /mnt/glusterfs

7.3.1.2. Automatically Mounting Volumes Using NFS

Red Hat Storage volumes can be mounted automatically using NFS, each time the system starts.

Note

In addition to the tasks described below, Red Hat Storage supports the standard autofs method of automatically mounting NFS exports that Linux, UNIX, and similar operating systems provide.
Update the /etc/auto.master and /etc/auto.misc files, and restart the autofs service. Whenever a user or process attempts to access the directory, it is mounted in the background on demand.
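A minimal sketch of such an autofs configuration, assuming an automount base directory of /mnt and the example server and volume names used in this section:
/etc/auto.master:
/mnt /etc/auto.misc

/etc/auto.misc:
glusterfs -fstype=nfs,vers=3 server1:/test-volume

Accessing /mnt/glusterfs then triggers the mount on demand.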
Mounting a Volume Automatically using NFS
Mount a Red Hat Storage Volume automatically using NFS at server start.
  1. Open the /etc/fstab file in a text editor.
  2. Append the following configuration to the fstab file.
    HOSTNAME|IPADDRESS:/VOLNAME /MOUNTDIR nfs defaults,_netdev 0 0
    Using the example server names, the entry contains the following replaced values.
    server1:/test-volume /mnt/glusterfs nfs defaults,_netdev 0 0
Mounting a Volume Automatically using NFS over TCP
Mount a Red Hat Storage Volume automatically using NFS over TCP at server start.
  1. Open the /etc/fstab file in a text editor.
  2. Append the following configuration to the fstab file.
    HOSTNAME|IPADDRESS:/VOLNAME /MOUNTDIR nfs defaults,_netdev,mountproto=tcp 0 0
    Using the example server names, the entry contains the following replaced values.
    server1:/test-volume /mnt/glusterfs nfs defaults,_netdev,mountproto=tcp 0 0

7.3.1.3. Authentication Support for Subdirectory Mount

This update extends the nfs.export-dir option to provide client authentication during sub-directory mounts. The nfs.export-dir and nfs.export-dirs options provide granular control to restrict or allow specific clients to mount a sub-directory. These clients can be authenticated with an IP address, a host name, or a Classless Inter-Domain Routing (CIDR) range.
  • nfs.export-dirs: By default, all NFS sub-volumes are exported as individual exports. This option allows you to manage this behavior. When this option is turned off, none of the sub-volumes are exported and hence the sub-directories cannot be mounted. This option is on by default.
    To set this option to off, run the following command:
    # gluster volume set VOLNAME nfs.export-dirs off
    To set this option to on, run the following command:
    # gluster volume set VOLNAME nfs.export-dirs on
  • nfs.export-dir: This option allows you to export specified subdirectories on the volume. You can export a particular subdirectory, for example:
    # gluster volume set VOLNAME nfs.export-dir /d1,/d2/d3/d4,/d6
    where d1, d2, d3, d4, d6 are the sub-directories.
    You can also control the access to mount these subdirectories based on the IP address, host name or a CIDR. For example:
    # gluster volume set VOLNAME nfs.export-dir "/d1(<ip address>),/d2/d3/d4(<host name>|<ip address>),/d6(<CIDR>)"
    The directories /d1, /d2, and /d6 are directories inside the volume. The volume name must not be added to the path. For example, if the volume vol1 has directories d1 and d2, use the following command to export these directories:
    # gluster volume set vol1 nfs.export-dir "/d1(192.0.2.2),/d2(192.0.2.34)"

7.3.1.4. Testing Volumes Mounted Using NFS

You can confirm that Red Hat Storage volumes have been mounted successfully.

Testing Mounted Red Hat Storage Volumes

Using the command line, verify that the Red Hat Storage volumes have been mounted successfully. The following three commands can be run in the order listed, or independently, to verify that a volume has been mounted.
  1. Run the mount command to check whether the volume was successfully mounted.
    # mount
    server1:/test-volume on /mnt/glusterfs type nfs (rw,addr=server1)
  2. Run the df command to display the aggregated storage space from all the bricks in a volume.
    # df -h /mnt/glusterfs 
    Filesystem              Size Used Avail Use% Mounted on 
    server1:/test-volume    28T  22T  5.4T  82%  /mnt/glusterfs
  3. Move to the mount directory using the cd command, and list the contents.
    # cd /mnt/glusterfs 
    # ls

7.3.2. Troubleshooting NFS

Q:
The mount command on the NFS client fails with RPC Error: Program not registered. This error is encountered due to one of the following reasons:
  • The NFS server is not running. You can check the status using the following command:
    # gluster volume status
  • The volume is not started. You can check the status using the following command:
    # gluster volume info
  • rpcbind is restarted. To check if rpcbind is running, execute the following command:
    # ps ax | grep rpcbind
A:
  • If the NFS server is not running, then restart the NFS server using the following command:
    # gluster volume start VOLNAME
  • If the volume is not started, then start the volume using the following command:
    # gluster volume start VOLNAME
  • If both rpcbind and the NFS server are running, then restart the NFS server using the following commands:
    # gluster volume stop VOLNAME
    # gluster volume start VOLNAME
Q:
The rpcbind service is not running on the NFS client. This could be due to the following reasons:
  • The portmap is not running.
  • Another instance of the kernel NFS server or the gluster NFS server is running.
A:
Start the rpcbind service by running the following command:
# service rpcbind start
Q:
The NFS server glusterfsd starts but the initialization fails with nfsrpc- service: portmap registration of program failed error message in the log.
A:
NFS start-up succeeds, but the initialization of the NFS service can still fail, preventing clients from accessing the mount points. Such a situation can be confirmed from the following error messages in the log file:
[2010-05-26 23:33:47] E [rpcsvc.c:2598:rpcsvc_program_register_portmap] rpc-service: Could not register with portmap 
[2010-05-26 23:33:47] E [rpcsvc.c:2682:rpcsvc_program_register] rpc-service: portmap registration of program failed
[2010-05-26 23:33:47] E [rpcsvc.c:2695:rpcsvc_program_register] rpc-service: Program registration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465
[2010-05-26 23:33:47] E [nfs.c:125:nfs_init_versions] nfs: Program init failed
[2010-05-26 23:33:47] C [nfs.c:531:notify] nfs: Failed to initialize protocols
[2010-05-26 23:33:49] E [rpcsvc.c:2614:rpcsvc_program_unregister_portmap] rpc-service: Could not unregister with portmap
[2010-05-26 23:33:49] E [rpcsvc.c:2731:rpcsvc_program_unregister] rpc-service: portmap unregistration of program failed
[2010-05-26 23:33:49] E [rpcsvc.c:2744:rpcsvc_program_unregister] rpc-service: Program unregistration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465
  1. Start the rpcbind service on the NFS server by running the following command:
    # service rpcbind start
    After starting the rpcbind service, the glusterFS NFS server needs to be restarted.
  2. Stop another NFS server running on the same machine.
    Such an error is also seen when there is another NFS server running on the same machine but it is not the glusterFS NFS server. On Linux systems, this could be the kernel NFS server. Resolution involves stopping the other NFS server or not running the glusterFS NFS server on the machine. Before stopping the kernel NFS server, ensure that no critical service depends on access to that NFS server's exports.
    On Linux, kernel NFS servers can be stopped by using either of the following commands depending on the distribution in use:
    # service nfs-kernel-server stop
    # service nfs stop
  3. Restart glusterFS NFS server.
Q:
The NFS server start-up fails with the message Port is already in use in the log file.
A:
This error can arise if a glusterFS NFS server is already running on the same machine. This situation can be confirmed from the log file, if the following error lines exist:
[2010-05-26 23:40:49] E [rpc-socket.c:126:rpcsvc_socket_listen] rpc-socket: binding socket failed: Address already in use
[2010-05-26 23:40:49] E [rpc-socket.c:129:rpcsvc_socket_listen] rpc-socket: Port is already in use 
[2010-05-26 23:40:49] E [rpcsvc.c:2636:rpcsvc_stage_program_register] rpc-service: could not create listening connection 
[2010-05-26 23:40:49] E [rpcsvc.c:2675:rpcsvc_program_register] rpc-service: stage registration of program failed 
[2010-05-26 23:40:49] E [rpcsvc.c:2695:rpcsvc_program_register] rpc-service: Program registration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465 
[2010-05-26 23:40:49] E [nfs.c:125:nfs_init_versions] nfs: Program init failed 
[2010-05-26 23:40:49] C [nfs.c:531:notify] nfs: Failed to initialize protocols
In this release, the glusterFS NFS server does not support running multiple NFS servers on the same machine. To resolve the issue, one of the glusterFS NFS servers must be shut down.
Q:
The mount command fails with NFS server failed error:
A:
mount: mount to NFS server '10.1.10.11' failed: timed out (retrying).
Review and apply the suggested solutions to correct the issue.
  • Disable name lookup requests from NFS server to a DNS server.
    The NFS server attempts to authenticate NFS clients by performing a reverse DNS lookup to match host names in the volume file with the client IP addresses. There can be a situation where the NFS server is either unable to connect to the DNS server or the DNS server is taking too long to respond to DNS requests. These delays can result in delayed replies from the NFS server to the NFS client, resulting in the timeout error.
    The NFS server provides a workaround that disables DNS requests and instead relies only on the client IP addresses for authentication. The following option can be added for successful mounting in such situations:
    option nfs.addr.namelookup off

    Note

    Remember that disabling name lookup forces the NFS server to authenticate clients using only their IP addresses. If the authentication rules in the volume file use host names, those authentication rules will fail and client mounting will fail.
  • The NFS client is using an NFS version other than version 3.
    The glusterFS NFS server supports version 3 of the NFS protocol by default. In recent Linux kernels, the default NFS version has been changed from 3 to 4. The client machine may be unable to connect to the glusterFS NFS server because it is using version 4 messages, which the glusterFS NFS server does not understand. The timeout can be resolved by forcing the NFS client to use version 3, using the vers option of the mount command:
    # mount nfsserver:export -o vers=3 /MOUNTPOINT
Q:
The showmount command fails with clnt_create: RPC: Unable to receive error. This error is encountered due to the following reasons:
  • The firewall might have blocked the port.
  • rpcbind might not be running.
A:
Check the firewall settings and open port 111 for portmap requests/replies, as well as the glusterFS NFS server ports. The glusterFS NFS server operates over the following port numbers: 38465, 38466, and 38467.
Q:
The application fails with Invalid argument or Value too large for defined data type
A:
These two errors generally happen for 32-bit NFS clients, or applications that do not support 64-bit inode numbers or large files.
Use the following option from the command-line interface to make glusterFS NFS return 32-bit inode numbers instead:
nfs.enable-ino32 <on | off>
This option is off by default, which permits NFS to return 64-bit inode numbers by default.
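A minimal sketch of setting this option through the gluster CLI, assuming the standard volume set syntax:
# gluster volume set VOLNAME nfs.enable-ino32 on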
Applications that will benefit from this option include those that are:
  • built and run on 32-bit machines, which do not support large files by default,
  • built to 32-bit standards on 64-bit systems.
Applications that can be rebuilt from source are recommended to be rebuilt using the following gcc flag:
-D_FILE_OFFSET_BITS=64
Q:
After the machine that is running NFS server is restarted the client fails to reclaim the locks held earlier.
A:
The Network Status Monitor (NSM) service daemon (rpc.statd) is started before the gluster NFS server. NSM therefore sends a notification to the clients to reclaim their locks. When the clients send the reclaim request, the NFS server does not respond because it has not started yet, so the client request fails.
Solution: To resolve the issue, prevent the NSM daemon from starting when the server starts.
Run chkconfig --list nfslock to check if NSM is configured to start during OS boot.
If any of the entries are on, run chkconfig nfslock off to disable NSM clients during boot, which resolves the issue.
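For reference, the two commands mentioned above:
# chkconfig --list nfslock
# chkconfig nfslock off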
Q:
The rpc actor failed to complete successfully error is displayed in the nfs.log, even after the volume is mounted successfully.
A:
Gluster NFS supports only NFS version 3. When the version is not specified in the mount request, nfs-utils attempts to negotiate using version 4 before falling back to version 3. This is the cause of the messages in both the server log and the nfs.log file.
[2013-06-25 00:03:38.160547] W [rpcsvc.c:180:rpcsvc_program_actor] 0-rpc-service: RPC program version not available (req 100003 4)
[2013-06-25 00:03:38.160669] E [rpcsvc.c:448:rpcsvc_check_and_reply_error] 0-rpcsvc: rpc actor failed to complete successfully
To resolve the issue, declare NFS version 3 and the noacl option in the mount command as follows:
mount -t nfs -o vers=3,noacl server1:/test-volume /mnt/glusterfs
Q:
The mount command fails with No such file or directory.
A:
This problem is encountered when the volume being mounted does not exist. Verify that the volume name used in the mount command is correct.
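A quick way to confirm which volumes the server exports, using the showmount command referenced elsewhere in this chapter (server1 is the example server used above):
# showmount -e server1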

7.3.3. NFS Ganesha

Important

  • nfs-ganesha is a technology preview feature. Technology preview features are not fully supported under Red Hat subscription level agreements (SLAs), may not be functionally complete, and are not intended for production use. However, these features provide early access to upcoming product innovations, enabling customers to test functionality and provide feedback during the development process. As Red Hat considers making future iterations of technology preview features generally available, we will provide commercially reasonable support to resolve any reported issues that customers experience when using these features.
  • Red Hat Storage currently does not support NFSv4 delegations, Multi-head NFS and High Availability. These will be added in the upcoming releases of Red Hat Storage nfs-ganesha. It is not a feature recommended for production deployment in its current form. However, Red Hat Storage volumes can be exported via nfs-ganesha for consumption by both NFSv3 and NFSv4 clients.
nfs-ganesha is a user-space file server for the NFS protocol with support for NFSv3, v4, v4.1, and pNFS.
Red Hat Storage volumes are supported with the community's V2.1-RC1 release of nfs-ganesha. The current release of Red Hat Storage offers nfs-ganesha for use with Red Hat Storage volumes as an early beta technology preview feature. This community release of nfs-ganesha has improved NFSv4 protocol support and stability. With this technology preview feature, Red Hat Storage volumes can be exported via nfs-ganesha for consumption by both NFSv3 and NFSv4 clients.

7.3.3.1. Installing nfs-ganesha

nfs-ganesha can be installed using any of the following methods:
  • Installing nfs-ganesha using yum
  • Installing nfs-ganesha during an ISO Installation
  • Installing nfs-ganesha using RHN / Red Hat Satellite
7.3.3.1.1. Installing using yum
The nfs-ganesha package can be installed using the following command:
# yum install nfs-ganesha
The package installs the following:
# rpm -qlp nfs-ganesha-2.1.0.2-4.el6rhs.x86_64.rpm
/etc/glusterfs-ganesha/README
/etc/glusterfs-ganesha/nfs-ganesha.conf
/etc/glusterfs-ganesha/org.ganesha.nfsd.conf
/usr/bin/ganesha.nfsd
/usr/lib64/ganesha
/usr/lib64/ganesha/libfsalgluster.so
/usr/lib64/ganesha/libfsalgluster.so.4
/usr/lib64/ganesha/libfsalgluster.so.4.2.0
/usr/lib64/ganesha/libfsalgpfs.so
/usr/lib64/ganesha/libfsalgpfs.so.4
/usr/lib64/ganesha/libfsalgpfs.so.4.2.0
/usr/lib64/ganesha/libfsalnull.so
/usr/lib64/ganesha/libfsalnull.so.4
/usr/lib64/ganesha/libfsalnull.so.4.2.0
/usr/lib64/ganesha/libfsalproxy.so
/usr/lib64/ganesha/libfsalproxy.so.4
/usr/lib64/ganesha/libfsalproxy.so.4.2.0
/usr/lib64/ganesha/libfsalvfs.so
/usr/lib64/ganesha/libfsalvfs.so.4
/usr/lib64/ganesha/libfsalvfs.so.4.2.0
/usr/share/doc/nfs-ganesha
/usr/share/doc/nfs-ganesha/ChangeLog
/usr/share/doc/nfs-ganesha/LICENSE.txt
/usr/bin/ganesha.nfsd is the nfs-ganesha daemon.
7.3.3.1.2. Installing nfs-ganesha during an ISO Installation
For more information about installing Red Hat Storage using an ISO image, see Installing from an ISO Image section in the Red Hat Storage 3 Installation Guide.
  1. While installing Red Hat Storage using an ISO, in the Customizing the Software Selection screen, select Red Hat Storage Tools Group and click Optional Packages.
  2. From the list of packages, select nfs-ganesha and click Close.
    Figure 7.1. Installing nfs-ganesha

  3. Proceed with the remaining installation steps for installing Red Hat Storage. For more information on how to install Red Hat Storage using an ISO, see Installing from an ISO Image section of the Red Hat Storage 3 Installation Guide.
7.3.3.1.3. Installing from Red Hat Satellite Server or Red Hat Network
Ensure that your system is subscribed to the required channels. For more information refer to Subscribing to the Red Hat Storage Server Channels in the Red Hat Storage 3.0 Installation Guide.
  1. Install nfs-ganesha by executing the following command:
    # yum install nfs-ganesha
  2. Verify the installation by running the following command:
    # yum list nfs-ganesha
    
    Installed Packages
    nfs-ganesha.x86_64      2.1.0.2-4.el6rhs      rhs-3-for-rhel-6-server-rpms

7.3.3.2. Pre-requisites to run nfs-ganesha

Note

  • Red Hat does not recommend running nfs-ganesha in mixed-mode and/or hybrid environments. This includes multi-protocol environments where NFS and CIFS shares are used simultaneously, or running nfs-ganesha together with gluster-nfs, kernel-nfs or gluster-fuse clients.
  • Only one of nfs-ganesha, the gluster-nfs server, or kernel-nfs can be enabled on a given machine/host, because all NFS implementations use port 2049 and only one can be active at a given time. You must therefore disable gluster-nfs (it is enabled by default on a volume) and kernel-nfs before nfs-ganesha is started.
Ensure that the following pre-requisites are taken into consideration before you run nfs-ganesha in your environment:
  • A Red Hat Storage volume must be available for export and the nfs-ganesha RPMs must be installed.
  • IPv6 must be enabled on the host interface which is used by the nfs-ganesha daemon. To enable IPv6 support, perform the following steps (a sketch of the file change follows the steps):
    1. Comment or remove the line options ipv6 disable=1 in the /etc/modprobe.d/ipv6.conf file.
    2. Reboot the system.
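    A minimal sketch of the change in /etc/modprobe.d/ipv6.conf, assuming the file contains the disable entry referenced in step 1:
    # The line below is shown commented out; alternatively, remove it entirely, then reboot.
    #options ipv6 disable=1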

7.3.3.3. Exporting and Unexporting Volumes through nfs-ganesha

This release supports gluster CLI commands to export or unexport Red Hat Storage volumes via nfs-ganesha. These commands use the DBus interface to add or remove exports dynamically.
Before using the CLI options for nfs-ganesha, execute the following steps:
  1. Copy the org.ganesha.nfsd.conf file into the /etc/dbus-1/system.d/ directory. The org.ganesha.nfsd.conf file can be found in /etc/glusterfs-ganesha/ on installation of nfs-ganesha rpms.
  2. Execute the following command:
    service messagebus restart
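For example, the two steps above can be performed as follows, using the paths given in step 1:
# cp /etc/glusterfs-ganesha/org.ganesha.nfsd.conf /etc/dbus-1/system.d/
# service messagebus restart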

Note

The connection to the DBus server is only made once at server initialization. nfs-ganesha must be restarted if the file was not copied prior to starting nfs-ganesha.
Exporting Volumes through nfs-ganesha

Volume set options can be used to export or unexport a Red Hat Storage volume via nfs-ganesha. Use these volume options to export a Red Hat Storage volume.

  1. Disable gluster-nfs on all Red Hat Storage volumes.
    # gluster volume set volname nfs.disable on
    gluster-nfs and nfs-ganesha cannot run simultaneously. Hence, gluster-nfs must be disabled on all Red Hat Storage volumes before exporting them via nfs-ganesha.
  2. To set the host IP, execute the following command:
    # gluster vol set volname nfs-ganesha.host IP
    This command sets the host IP to start nfs-ganesha. In a multi-node volume environment, it is recommended that all nfs-ganesha related commands/operations are run on only one of the nodes, so the IP address provided must be the IP of that node. If a Red Hat Storage volume is already exported, setting a different host IP takes immediate effect.
  3. To start nfs-ganesha, execute the following command:
    # gluster volume set volname nfs-ganesha.enable on
Unexporting Volumes through nfs-ganesha

To unexport a Red Hat Storage volume, execute the following command:

# gluster vol set volname nfs-ganesha.enable off

This command unexports the Red Hat Storage volume without affecting other exports.
Restarting nfs-ganesha

Before restarting nfs-ganesha, unexport all Red Hat Storage volumes by executing the following command:

# gluster vol set volname nfs-ganesha.enable off

Execute each of the following steps on all the volumes to be exported.
  1. To set the host IP, execute the following command:
    # gluster vol set volname nfs-ganesha.host IP
  2. To restart nfs-ganesha, execute the following command:
    # gluster volume set volname nfs-ganesha.enable on
Verifying the Status

To verify the status of the volume set options, follow the guidelines mentioned below:

  • Check if nfs-ganesha is started by executing the following command:
    ps aux | grep ganesha
  • Check if the volume is exported.
    showmount -e localhost
  • The logs of the ganesha.nfsd daemon are written to /tmp/ganesha.log. Check the log file if you notice any unexpected behavior. This file is lost in case of a system reboot.

7.3.3.4. Supported Features of nfs-ganesha

Dynamic Exports of Volumes

Previous versions of nfs-ganesha required a restart of the server whenever the administrator had to add or remove exports. nfs-ganesha now supports adding and removing exports dynamically. Dynamic exports are managed by the DBus interface. DBus is a system-local IPC mechanism for system management and peer-to-peer application communication.

Note

Modifying an export in place is currently not supported.
Exporting Multiple Entries

With this version of nfs-ganesha, multiple Red Hat Storage volumes or sub-directories can now be exported simultaneously.

Pseudo File System

This version of nfs-ganesha creates and maintains an NFSv4 pseudo-file system, which provides clients with seamless access to all exported objects on the server.

Access Control List

The nfs-ganesha NFSv4 protocol includes integrated support for Access Control Lists (ACLs), which are similar to those used by Windows. These ACLs can be used to identify a trustee and specify the access rights allowed or denied for that trustee. This feature is disabled by default.

Note

AUDIT and ALARM ACE types are not currently supported.

7.3.3.5. Manually Configuring nfs-ganesha Exports

It is recommended to use gluster CLI options to export or unexport volumes through nfs-ganesha. However, this section provides some information on changing configurable parameters in nfs-ganesha. Such parameter changes require nfs-ganesha to be started manually.
To start nfs-ganesha manually, execute the following command:
# /usr/bin/ganesha.nfsd -f <location of nfs-ganesha.conf file> -L <location of log file> -N <log level> -d
For example:
/usr/bin/ganesha.nfsd -f nfs-ganesha.conf -L nfs-ganesha.log -N NIV_DEBUG -d
where:
  • nfs-ganesha.conf is the configuration file that is available by default on installation of nfs-ganesha rpms. This file is located at /etc/glusterfs-ganesha.
  • nfs-ganesha.log is the log file for the ganesha.nfsd process.
  • NIV_DEBUG is the log level.
Sample export configuration file:
To export any Red Hat Storage volume or directory, copy the EXPORT block into a .conf file, for example export.conf. Edit the parameters appropriately and include the export.conf file in nfs-ganesha.conf. This can be done by adding the line below at the end of nfs-ganesha.conf.
%include "export.conf"
The following are the minimal set of parameters required to export any entry. The values given here are the default values used by the CLI options to start or stop nfs-ganesha.
# cat export.conf 

EXPORT{    
	Export_Id = 1 ;   # Export ID unique to each export
	Path = "volume_path";  # Path of the volume to be exported. Eg: "/test_volume"

	FSAL { 
		name = GLUSTER;
		hostname = "10.xx.xx.xx";  # IP of one of the nodes in the trusted pool
		volume = "volume_name";	 # Volume name. Eg: "test_volume"
	}

	Access_type = RW;	 # Access permissions
	Squash = No_root_squash; # To enable/disable root squashing
	Disable_ACL = TRUE;	 # To enable/disable ACL
	Pseudo = "pseudo_path";	 # NFSv4 pseudo path for this export. Eg: "/test_volume_pseudo"
	Protocols = "3,4" ;	 # NFS protocols supported
	Transports = "UDP,TCP" ; # Transport protocols supported
	SecType = "sys";	 # Security flavors supported
}
The following section describes various configurations possible via nfs-ganesha. Minor changes have to be made to the export.conf file to see the expected behavior.
Exporting Subdirectories

To export subdirectories within a volume, edit the following parameters in the export.conf file.

Path = "path_to_subdirectory";  # Path of the volume to be exported. Eg: "/test_volume/test_subdir"

	FSAL { 
		name = GLUSTER;
		hostname = "10.xx.xx.xx";  # IP of one of the nodes in the trusted pool
		volume = "volume_name";	 # Volume name. Eg: "test_volume"
		volpath = "path_to_subdirectory_with_respect_to_volume"; #Subdirectory path from the root of the volume. Eg: "/test_subdir"
	}
Exporting Multiple Entries

To export multiple entries, define a separate EXPORT block in the export.conf file for each entry, each with a unique export ID.

For example:
# cat export.conf 
EXPORT{    
	Export_Id = 1 ;   # Export ID unique to each export
	Path = "test_volume";  # Path of the volume to be exported. Eg: "/test_volume"

	FSAL { 
		name = GLUSTER;
		hostname = "10.xx.xx.xx";  # IP of one of the nodes in the trusted pool
		volume = "test_volume";	 # Volume name. Eg: "test_volume"
	}

	Access_type = RW;	 # Access permissions
	Squash = No_root_squash; # To enable/disable root squashing
	Disable_ACL = TRUE;	 # To enable/disable ACL
	Pseudo = "/test_volume";	 # NFSv4 pseudo path for this export. Eg: "/test_volume_pseudo"
	Protocols = "3,4" ;	 # NFS protocols supported
	Transports = "UDP,TCP" ; # Transport protocols supported
	SecType = "sys";	 # Security flavors supported
}

EXPORT{    
	Export_Id = 2 ;   # Export ID unique to each export
	Path = "test_volume/test_subdir";  # Path of the volume to be exported. Eg: "/test_volume"

	FSAL { 
		name = GLUSTER;
		hostname = "10.xx.xx.xx";  # IP of one of the nodes in the trusted pool
		volume = "test_volume";	 # Volume name. Eg: "test_volume"
		volpath = "/test_subdir"
	}

	Access_type = RW;	 # Access permissions
	Squash = No_root_squash; # To enable/disable root squashing
	Disable_ACL = "FALSE;	 # To enable/disable ACL
	Pseudo = "/test_subdir";	 # NFSv4 pseudo path for this export. Eg: "/test_volume_pseudo"
	Protocols = "3,4" ;	 # NFS protocols supported
	Transports = "UDP,TCP" ; # Transport protocols supported
	SecType = "sys";	 # Security flavors supported
}


# showmount -e localhost
Export list for localhost:
/test_volume (everyone)
/test_volume/test_subdir (everyone)
/        (everyone)
Providing Permissions for Specific Clients

The parameter values and permission values given in the EXPORT block apply to any client that mounts the exported volume. To provide specific permissions to specific clients, introduce a client block inside the EXPORT block.

For example, to assign specific permissions to a particular client, add the following block inside the EXPORT block:
client {
        clients = "10.xx.xx.xx";  # IP of the client.
        allow_root_access = true;
        access_type = "RO"; # Read-only permissions
        Protocols = "3"; # Allow only NFSv3 protocol.
        anonymous_uid = 1440;
        anonymous_gid = 72;
  }
All the other clients inherit the permissions that are declared outside the client block.
Enabling and Disabling NFSv4 ACLs

To enable NFSv4 ACLs, edit the following parameter:

Disable_ACL = FALSE;
Providing Pseudo Path for NFSv4 Mount

To set the NFSv4 pseudo path, edit the following parameter:

Pseudo = "pseudo_path"; # NFSv4 pseudo path for this export. Eg: "/test_volume_pseudo"
This path has to be used while mounting the export entry in NFSv4 mode.
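For example, assuming the pseudo path "/test_volume_pseudo" from the sample configuration above, the export would be mounted in NFSv4 mode as follows:
# mount -t nfs -o vers=4 10.xx.xx.xx:/test_volume_pseudo /mnt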
Adding and Removing Export Entries Dynamically

File org.ganesha.nfsd.conf is installed in /etc/glusterfs-ganesha/ as part of the nfs-ganesha rpms. To export entries dynamically without restarting nfs-ganesha, execute the following steps:

  1. Copy the file org.ganesha.nfsd.conf into the directory /etc/dbus-1/system.d/.
  2. Execute the following command:
    service messagebus restart
  • Adding an export dynamically

    To add an export dynamically, add an export block as explained in section Exporting Multiple Entries, and execute the following command:

    dbus-send --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.AddExport string:/path-to-export.conf string:'EXPORT(Path=/path-in-export-block)'
    For example, to add testvol1 dynamically:
    dbus-send --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.AddExport string:/home/nfs-ganesha/export.conf string:'EXPORT(Path=/testvol1)'
    			
    method return sender=:1.35 -> dest=:1.37 reply_serial=2
  • Removing an export dynamically

    To remove an export dynamically, execute the following command:

    dbus-send --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.RemoveExport int32:export-id-in-the-export-block
    For example:
    dbus-send --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.RemoveExport int32:79
    			
    method return sender=:1.35 -> dest=:1.37 reply_serial=2

7.3.3.6. Accessing nfs-ganesha Exports

nfs-ganesha exports can be accessed by mounting them in either NFSv3 or NFSv4 mode.
Mounting exports in NFSv3 mode

To mount an export in NFSv3 mode, execute the following command:

mount -t nfs -o vers=3 ip:/volname /mountpoint
For example:
mount -t nfs -o vers=3 10.70.0.0:/testvol /mnt
Mounting exports in NFSv4 mode

To mount an export in NFSv4 mode, execute the following command:

mount -t nfs -o vers=4 ip:/volname /mountpoint
For example:
mount -t nfs -o vers=4 10.70.0.0:/testvol /mnt

7.3.3.7. Troubleshooting

  • Situation

    nfs-ganesha fails to start.

    Solution

    Follow the listed steps to fix the issue:

    1. Review the /tmp/ganesha.log to understand the cause of failure.
    2. Ensure the kernel and gluster nfs services are inactive.
    3. Ensure you set both the nfs-ganesha.host and nfs-ganesha.enable volume options.
    For more information, see Section 7.3.3.5, Manually Configuring nfs-ganesha Exports.
  • Situation

    nfs-ganesha has started and fails to export a volume.

    Solution

    Follow the listed steps to fix the issue:

    1. Ensure the file org.ganesha.nfsd.conf is copied into /etc/dbus-1/system.d/ before starting nfs-ganesha.
    2. If you had not copied the file, copy it and restart nfs-ganesha. For more information, see Section 7.3.3.3, Exporting and Unexporting Volumes through nfs-ganesha.
  • Situation

    nfs-ganesha fails to stop

    Solution

    Execute the following steps

    1. Check for the status of the nfs-ganesha process.
    2. If it is still running, issue a kill -9 signal on its PID.
    3. Run the following command to check if the nfs, mountd, nlockmgr, and rquotad services are unregistered cleanly.
      rpcinfo -p
      • If the services are not unregistered, then delete these entries using the following command (see the example after these steps):
        rpcinfo -d

        Note

        You can also restart the rpcbind service instead of using rpcinfo -d on individual entries.
    4. Force start the volume by using the following command:
      # gluster volume start volname force
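    A hedged example of deleting a stale registration: rpcinfo -d takes a program number and version, so for the MOUNT service (program 100005, version 3, as seen in the log messages earlier in this section) the command would be:
      # rpcinfo -d 100005 3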
  • Situation

    Permission issues.

    Solution

    By default, the root squash option is disabled when you start nfs-ganesha using the CLI. If you encounter any permission issues, check the Unix permissions of the exported entry.
