7.3. NFS
The glusterFS NFS server supports Access Control Lists (ACLs), which allows getfacl and setfacl operations on NFS clients. ACL support is configured on the glusterFS NFS server with the nfs.acl option. For example:
- To set nfs.acl ON, run the following command:
  # gluster volume set VOLNAME nfs.acl on
- To set nfs.acl OFF, run the following command:
  # gluster volume set VOLNAME nfs.acl off
Note
The nfs.acl option is ON by default.
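One hedged way to confirm the current setting is to check the Options Reconfigured section of the volume information; VOLNAME is a placeholder and the option appears there only if it has been explicitly set:
# gluster volume info VOLNAME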
7.3.1. Using NFS to Mount Red Hat Storage Volumes
Note
The glusterFS NFS server supports only version 3 of the NFS protocol. As a preferred option, configure version 3 as the default version in the nfsmount.conf file at /etc/nfsmount.conf by adding the following text to the file:
Defaultvers=3
If the file is not modified, ensure that vers=3 is added manually in all the mount commands.
# mount nfsserver:export -o vers=3 /MOUNTPOINT
For volumes created with the tcp,rdma transport type, the transport used by the glusterFS NFS server can be changed using the nfs.transport-type volume set option.
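For example, a minimal sketch of changing the NFS transport on such a volume; VOLNAME is a placeholder and rdma is shown only as an assumed target value:
# gluster volume set VOLNAME nfs.transport-type rdma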
7.3.1.1. Manually Mounting Volumes Using NFS
Use the mount command to manually mount a Red Hat Storage volume using NFS.
- If a mount point has not yet been created for the volume, run the mkdir command to create a mount point.
  # mkdir /mnt/glusterfs
- Run the correct mount command for the system.
  - For Linux:
    # mount -t nfs -o vers=3 server1:/test-volume /mnt/glusterfs
  - For Solaris:
    # mount -o vers=3 nfs://server1:38467/test-volume /mnt/glusterfs
Use the mount command to manually mount a Red Hat Storage volume using NFS over TCP.
Note
The glusterFS NFS server does not support UDP. If an NFS client connects using UDP, the mount fails with the error requested NFS version or transport protocol is not supported.
The nfs.mount-udp option is supported for mounting a volume; by default it is disabled. The following are the limitations:
- If nfs.mount-udp is enabled, the MOUNT protocol needed for NFSv3 can handle requests from NFS clients that require MOUNT over UDP. This is useful for at least some versions of Solaris, IBM AIX, and HP-UX.
- Currently, MOUNT over UDP does not have support for mounting subdirectories on a volume. Mounting server:/volume/subdir exports is only functional when MOUNT over TCP is used.
- MOUNT over UDP does not currently have support for the different authentication options that MOUNT over TCP honors. Enabling nfs.mount-udp may give more permissions to NFS clients than intended via various authentication options like nfs.rpc-auth-allow, nfs.rpc-auth-reject, and nfs.export-dir.
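A minimal sketch of enabling MOUNT over UDP on a volume, assuming the limitations above are acceptable; VOLNAME is a placeholder:
# gluster volume set VOLNAME nfs.mount-udp on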
- If a mount point has not yet been created for the volume, run the mkdir command to create a mount point.
  # mkdir /mnt/glusterfs
- Run the correct mount command for the system, specifying the TCP protocol option for the system.
  - For Linux:
    # mount -t nfs -o vers=3,mountproto=tcp server1:/test-volume /mnt/glusterfs
  - For Solaris:
    # mount -o proto=tcp nfs://server1:38467/test-volume /mnt/glusterfs
7.3.1.2. Automatically Mounting Volumes Using NFS
Note
Red Hat Storage volumes can also be mounted automatically using the standard autofs method. Update the /etc/auto.master and /etc/auto.misc files, and restart the autofs service. Whenever a user or process attempts to access the directory, it will be mounted in the background on demand.
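As an illustration only, a minimal autofs sketch for the files named above; the /misc mount point and the map key are assumptions, not values from this guide:
/etc/auto.master entry:
/misc /etc/auto.misc
/etc/auto.misc entry:
test-volume -fstype=nfs,vers=3 server1:/test-volume
With this configuration, accessing /misc/test-volume mounts the volume on demand.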
- Open the /etc/fstab file in a text editor.
- Append the following configuration to the fstab file.
  HOSTNAME|IPADDRESS:/VOLNAME /MOUNTDIR nfs defaults,_netdev 0 0
  Using the example server names, the entry contains the following replaced values.
  server1:/test-volume /mnt/glusterfs nfs defaults,_netdev 0 0
To automatically mount a Red Hat Storage volume using NFS over TCP:
- Open the /etc/fstab file in a text editor.
- Append the following configuration to the fstab file.
  HOSTNAME|IPADDRESS:/VOLNAME /MOUNTDIR nfs defaults,_netdev,mountproto=tcp 0 0
  Using the example server names, the entry contains the following replaced values.
  server1:/test-volume /mnt/glusterfs nfs defaults,_netdev,mountproto=tcp 0 0
7.3.1.3. Authentication Support for Subdirectory Mount
You can use the nfs.export-dir option to provide client authentication during sub-directory mount. The nfs.export-dir and nfs.export-dirs options provide granular control to restrict or allow specific clients to mount a sub-directory. These clients can be authenticated with an IP address, a host name, or a Classless Inter-Domain Routing (CIDR) range.
- nfs.export-dirs: By default, all NFS sub-volumes are exported as individual exports. This option allows you to manage this behavior. When this option is turned off, none of the sub-volumes are exported and hence the sub-directories cannot be mounted. This option is on by default.
  To set this option to off, run the following command:
  # gluster volume set VOLNAME nfs.export-dirs off
  To set this option to on, run the following command:
  # gluster volume set VOLNAME nfs.export-dirs on
- nfs.export-dir: This option allows you to export specified subdirectories on the volume. You can export a particular subdirectory, for example:
  # gluster volume set VOLNAME nfs.export-dir /d1,/d2/d3/d4,/d6
  where d1, d2, d3, d4, and d6 are the sub-directories.
  You can also control access to mount these subdirectories based on the IP address, host name, or a CIDR range. For example:
  # gluster volume set VOLNAME nfs.export-dir "/d1(<ip address>),/d2/d3/d4(<host name>|<ip address>),/d6(<CIDR>)"
  The directories /d1, /d2, and /d6 are directories inside the volume. The volume name must not be added to the path. For example, if the volume vol1 has directories d1 and d2, then to export these directories use the following command:
  # gluster volume set vol1 nfs.export-dir "/d1(192.0.2.2),/d2(192.0.2.34)"
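One hedged way to confirm which directories are currently exported is to query the server from a client; the server name is a placeholder and the output format may vary:
# showmount -e server1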
7.3.1.4. Testing Volumes Mounted Using NFS
Testing Mounted Red Hat Storage Volumes
Prerequisites
The volume must already be mounted as described in Section 7.3.1.1, Manually Mounting Volumes Using NFS, or Section 7.3.1.2, Automatically Mounting Volumes Using NFS.
- Run the mount command to check whether the volume was successfully mounted.
  # mount
  server1:/test-volume on /mnt/glusterfs type nfs (rw,addr=server1)
- Run the df command to display the aggregated storage space from all the bricks in a volume.
  # df -h /mnt/glusterfs
  Filesystem            Size  Used  Avail  Use%  Mounted on
  server1:/test-volume   28T   22T   5.4T   82%  /mnt/glusterfs
- Move to the mount directory using the cd command, and list the contents.
  # cd /mnt/glusterfs
  # ls
7.3.2. Troubleshooting NFS
- Q: The mount command on the NFS client fails with RPC Error: Program not registered. This error is encountered due to one of the following reasons:
- Q: The rpcbind service is not running on the NFS client. This could be due to the following reasons:
- Q: The NFS server glusterfsd starts but the initialization fails with rpc-service: portmap registration of program failed error message in the log.
- Q: The NFS server start-up fails with the message Port is already in use in the log file.
- Q: The mount command fails with NFS server failed error:
- Q: The showmount command fails with clnt_create: RPC: Unable to receive error. This error is encountered due to the following reasons:
- Q: The application fails with Invalid argument or Value too large for defined data type
- Q: After the machine that is running NFS server is restarted the client fails to reclaim the locks held earlier.
- Q: The rpc actor failed to complete successfully error is displayed in the nfs.log, even after the volume is mounted successfully.
- Q: The mount command fails with No such file or directory.
Q: The mount command on the NFS client fails with RPC Error: Program not registered. This error is encountered due to one of the following reasons:
- The NFS server is not running. You can check the status using the following command:
# gluster volume status
- The volume is not started. You can check the status using the following command:
# gluster volume info
- rpcbind is restarted. To check if rpcbind is running, execute the following command:
# ps ax| grep rpcbind
- If the NFS server is not running, then restart the NFS server using the following command:
# gluster volume start VOLNAME
- If the volume is not started, then start the volume using the following command:
# gluster volume start VOLNAME
- If both rpcbind and the NFS server are running, then restart the NFS server using the following commands:
# gluster volume stop VOLNAME
# gluster volume start VOLNAME
Q: The rpcbind service is not running on the NFS client. This could be due to the following reasons:
- The portmap is not running.
- Another instance of the kernel NFS server or glusterNFS server is running.
A: Start the rpcbind service by running the following command:
# service rpcbind start
Q: The NFS server glusterfsd starts but the initialization fails with rpc-service: portmap registration of program failed error message in the log:
[2010-05-26 23:33:47] E [rpcsvc.c:2598:rpcsvc_program_register_portmap] rpc-service: Could not register with portmap
[2010-05-26 23:33:47] E [rpcsvc.c:2682:rpcsvc_program_register] rpc-service: portmap registration of program failed
[2010-05-26 23:33:47] E [rpcsvc.c:2695:rpcsvc_program_register] rpc-service: Program registration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465
[2010-05-26 23:33:47] E [nfs.c:125:nfs_init_versions] nfs: Program init failed
[2010-05-26 23:33:47] C [nfs.c:531:notify] nfs: Failed to initialize protocols
[2010-05-26 23:33:49] E [rpcsvc.c:2614:rpcsvc_program_unregister_portmap] rpc-service: Could not unregister with portmap
[2010-05-26 23:33:49] E [rpcsvc.c:2731:rpcsvc_program_unregister] rpc-service: portmap unregistration of program failed
[2010-05-26 23:33:49] E [rpcsvc.c:2744:rpcsvc_program_unregister] rpc-service: Program unregistration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465
- Start the rpcbind service on the NFS server by running the following command:
# service rpcbind start
  After starting the rpcbind service, the glusterFS NFS server needs to be restarted.
- Stop another NFS server running on the same machine.
  Such an error is also seen when there is another NFS server running on the same machine, but it is not the glusterFS NFS server. On Linux systems, this could be the kernel NFS server. Resolution involves stopping the other NFS server or not running the glusterFS NFS server on the machine. Before stopping the kernel NFS server, ensure that no critical service depends on access to that NFS server's exports.
  On Linux, kernel NFS servers can be stopped by using either of the following commands depending on the distribution in use:
# service nfs-kernel-server stop
# service nfs stop
- Restart glusterFS NFS server.
Q: The NFS server start-up fails with the message Port is already in use in the log file:
[2010-05-26 23:40:49] E [rpc-socket.c:126:rpcsvc_socket_listen] rpc-socket: binding socket failed: Address already in use
[2010-05-26 23:40:49] E [rpc-socket.c:129:rpcsvc_socket_listen] rpc-socket: Port is already in use
[2010-05-26 23:40:49] E [rpcsvc.c:2636:rpcsvc_stage_program_register] rpc-service: could not create listening connection
[2010-05-26 23:40:49] E [rpcsvc.c:2675:rpcsvc_program_register] rpc-service: stage registration of program failed
[2010-05-26 23:40:49] E [rpcsvc.c:2695:rpcsvc_program_register] rpc-service: Program registration failed: MOUNT3, Num: 100005, Ver: 3, Port: 38465
[2010-05-26 23:40:49] E [nfs.c:125:nfs_init_versions] nfs: Program init failed
[2010-05-26 23:40:49] C [nfs.c:531:notify] nfs: Failed to initialize protocols
Q: The mount command fails with NFS server failed error:
mount: mount to NFS server '10.1.10.11' failed: timed out (retrying).
A: Perform one of the following to resolve this issue:
- Disable name lookup requests from the NFS server to a DNS server.
  The NFS server attempts to authenticate NFS clients by performing a reverse DNS lookup to match host names in the volume file with the client IP addresses. There can be a situation where the NFS server either is not able to connect to the DNS server or the DNS server is taking too long to respond to the DNS request. These delays can result in delayed replies from the NFS server to the NFS client, resulting in the timeout error.
  The NFS server provides a workaround that disables DNS requests, instead relying only on the client IP addresses for authentication. The following option can be added for successful mounting in such situations:
  option nfs.addr.namelookup off
  Note
  Remember that disabling name lookup forces authentication of clients to use only IP addresses. If the authentication rules in the volume file use host names, those authentication rules will fail and client mounting will fail.
- NFS version used by the NFS client is other than version 3 by default.
  The glusterFS NFS server supports version 3 of the NFS protocol by default. In recent Linux kernels, the default NFS version has been changed from 3 to 4. It is possible that the client machine is unable to connect to the glusterFS NFS server because it is using version 4 messages which are not understood by the glusterFS NFS server. The timeout can be resolved by forcing the NFS client to use version 3. The vers option to the mount command is used for this purpose:
  # mount nfsserver:export -o vers=3 /MOUNTPOINT
Q: The showmount command fails with clnt_create: RPC: Unable to receive error. This error is encountered due to the following reasons:
- The firewall might have blocked the port.
- rpcbind might not be running.
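As a hedged first check, you can query the rpcbind service on the server from the client; if the firewall blocks the port or rpcbind is down, this call fails. The server name is a placeholder:
# rpcinfo -p server1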
Q: The application fails with Invalid argument or Value too large for defined data type.
A: These errors generally occur for 32-bit NFS clients, or applications that do not support 64-bit inode numbers or large files. Use the following option to make the glusterFS NFS server return 32-bit inode numbers instead:
nfs.enable-ino32 <on | off>
This option is off by default, which permits NFS to return 64-bit inode numbers by default. Applications that will benefit from this option are those that are either:
- built and run on 32-bit machines, which do not support large files by default, or
- built to 32-bit standards on 64-bit systems.
Applications that can be rebuilt from source are recommended to be rebuilt using the following flag with gcc:
-D_FILE_OFFSET_BITS=64
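For example, a minimal sketch of turning on 32-bit inode numbers for a volume; VOLNAME is a placeholder:
# gluster volume set VOLNAME nfs.enable-ino32 on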
Q: After the machine that is running the NFS server is restarted, the client fails to reclaim the locks held earlier.
A: Run chkconfig --list nfslock to check if the Network Status Monitor (NSM) is configured during OS boot. If any of the entries are on, run chkconfig nfslock off to disable NSM clients during boot, which resolves the issue.
Q: The rpc actor failed to complete successfully error is displayed in the nfs.log, even after the volume is mounted successfully.
A: The following error messages are seen in the nfs.log file:
[2013-06-25 00:03:38.160547] W [rpcsvc.c:180:rpcsvc_program_actor] 0-rpc-service: RPC program version not available (req 100003 4)
[2013-06-25 00:03:38.160669] E [rpcsvc.c:448:rpcsvc_check_and_reply_error] 0-rpcsvc: rpc actor failed to complete successfully
To resolve the issue, declare NFS version 3 and the noacl option in the mount command as follows:
# mount -t nfs -o vers=3,noacl server1:/test-volume /mnt/glusterfs
Q: The mount command fails with No such file or directory.
A: This error is encountered when the volume specified in the mount command does not exist.
7.3.3. NFS Ganesha
Important
- nfs-ganesha is a technology preview feature. Technology preview features are not fully supported under Red Hat subscription level agreements (SLAs), may not be functionally complete, and are not intended for production use. However, these features provide early access to upcoming product innovations, enabling customers to test functionality and provide feedback during the development process. As Red Hat considers making future iterations of technology preview features generally available, we will provide commercially reasonable support to resolve any reported issues that customers experience when using these features.
- Red Hat Storage currently does not support NFSv4 delegations, Multi-head NFS and High Availability. These will be added in the upcoming releases of Red Hat Storage nfs-ganesha. It is not a feature recommended for production deployment in its current form. However, Red Hat Storage volumes can be exported via nfs-ganesha for consumption by both NFSv3 and NFSv4 clients.
7.3.3.1. Installing nfs-ganesha
nfs-ganesha can be installed using one of the following methods:
- Installing nfs-ganesha using yum
- Installing nfs-ganesha during an ISO Installation
- Installing nfs-ganesha using RHN / Red Hat Satellite
7.3.3.1.1. Installing using yum
Execute the following command to install nfs-ganesha:
# yum install nfs-ganesha
# rpm -qlp nfs-ganesha-2.1.0.2-4.el6rhs.x86_64.rpm
/etc/glusterfs-ganesha/README
/etc/glusterfs-ganesha/nfs-ganesha.conf
/etc/glusterfs-ganesha/org.ganesha.nfsd.conf
/usr/bin/ganesha.nfsd
/usr/lib64/ganesha
/usr/lib64/ganesha/libfsalgluster.so
/usr/lib64/ganesha/libfsalgluster.so.4
/usr/lib64/ganesha/libfsalgluster.so.4.2.0
/usr/lib64/ganesha/libfsalgpfs.so
/usr/lib64/ganesha/libfsalgpfs.so.4
/usr/lib64/ganesha/libfsalgpfs.so.4.2.0
/usr/lib64/ganesha/libfsalnull.so
/usr/lib64/ganesha/libfsalnull.so.4
/usr/lib64/ganesha/libfsalnull.so.4.2.0
/usr/lib64/ganesha/libfsalproxy.so
/usr/lib64/ganesha/libfsalproxy.so.4
/usr/lib64/ganesha/libfsalproxy.so.4.2.0
/usr/lib64/ganesha/libfsalvfs.so
/usr/lib64/ganesha/libfsalvfs.so.4
/usr/lib64/ganesha/libfsalvfs.so.4.2.0
/usr/share/doc/nfs-ganesha
/usr/share/doc/nfs-ganesha/ChangeLog
/usr/share/doc/nfs-ganesha/LICENSE.txt
Here, /usr/bin/ganesha.nfsd is the nfs-ganesha daemon.
7.3.3.1.2. Installing nfs-ganesha during an ISO Installation
- While installing Red Hat Storage using an ISO, in the Customizing the Software Selection screen, select Red Hat Storage Tools Group and click Optional Packages.
- From the list of packages, select nfs-ganesha and click Close.
  Figure 7.1. Installing nfs-ganesha
- Proceed with the remaining installation steps for installing Red Hat Storage. For more information on how to install Red Hat Storage using an ISO, see Installing from an ISO Image section of the Red Hat Storage 3 Installation Guide.
7.3.3.1.3. Installing from Red Hat Satellite Server or Red Hat Network
- Install nfs-ganesha by executing the following command:
# yum install nfs-ganesha
- Verify the installation by running the following command:
# yum list nfs-ganesha
Installed Packages
nfs-ganesha.x86_64    2.1.0.2-4.el6rhs    rhs-3-for-rhel-6-server-rpms
7.3.3.2. Pre-requisites to run nfs-ganesha
Note
- Red Hat does not recommend running nfs-ganesha in mixed-mode and/or hybrid environments. This includes multi-protocol environments where NFS and CIFS shares are used simultaneously, or running nfs-ganesha together with gluster-nfs, kernel-nfs or gluster-fuse clients.
- Only one of nfs-ganesha, gluster-nfs server or kernel-nfs can be enabled on a given machine/host as all NFS implementations use the port 2049 and only one can be active at a given time. Hence you must disable gluster-nfs (it is enabled by default on a volume) and kernel-nfs before nfs-ganesha is started.
- A Red Hat Storage volume must be available for export, and the nfs-ganesha rpms must be installed.
- IPv6 must be enabled on the host interface which is used by the nfs-ganesha daemon. To enable IPv6 support, perform the following steps:
  - Comment out or remove the line options ipv6 disable=1 in the /etc/modprobe.d/ipv6.conf file.
  - Reboot the system.
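As an illustration only, a minimal sketch of what the edited /etc/modprobe.d/ipv6.conf line might look like once commented out with a leading # character:
# options ipv6 disable=1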
7.3.3.3. Exporting and Unexporting Volumes through nfs-ganesha
- Copy the org.ganesha.nfsd.conf file into the /etc/dbus-1/system.d/ directory. The org.ganesha.nfsd.conf file can be found in /etc/glusterfs-ganesha/ on installation of the nfs-ganesha rpms.
- Execute the following command:
service messagebus restart
Note
Volume set options can be used to export or unexport a Red Hat Storage volume via nfs-ganesha. Use these volume options to export a Red Hat Storage volume.
- Disable gluster-nfs on all Red Hat Storage volumes.
# gluster volume set volname nfs.disable on
gluster-nfs and nfs-ganesha cannot run simultaneously. Hence, gluster-nfs must be disabled on all Red Hat Storage volumes before exporting them via nfs-ganesha. - To set the host IP, execute the following command:
# gluster vol set volname nfs-ganesha.host IP
This command sets the host IP to start nfs-ganesha. In a multi-node volume environment, it is recommended that all nfs-ganesha related commands/operations are run on one of the nodes only. Hence, the IP address provided must be the IP of that node. If a Red Hat Storage volume is already exported, setting a different host IP will take immediate effect.
- To start nfs-ganesha, execute the following command:
# gluster volume set volname nfs-ganesha.enable on
To unexport a Red Hat Storage volume, execute the following command:
# gluster vol set volname nfs-ganesha.enable off
Before restarting nfs-ganesha, unexport all Red Hat Storage volumes by executing the following command:
# gluster vol set volname nfs-ganesha.enable off
- To set the host IP, execute the following command:
# gluster vol set volname nfs-ganesha.host IP
- To restart nfs-ganesha, execute the following command:
# gluster volume set volname nfs-ganesha.enable on
To verify the status of the volume set options, follow the guidelines mentioned below:
- Check if nfs-ganesha is started by executing the following command:
ps aux | grep ganesha
- Check if the volume is exported.
showmount -e localhost
- The logs of the ganesha.nfsd daemon are written to /tmp/ganesha.log. Check the log file if you notice any unexpected behavior. This file will be lost in case of a system reboot.
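For example, one way to follow the log while exporting or mounting is to tail it; the path is the default location named above:
# tail -f /tmp/ganesha.log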
7.3.3.4. Supported Features of nfs-ganesha
Previous versions of nfs-ganesha required a restart of the server whenever the administrator had to add or remove exports. nfs-ganesha now supports addition and removal of exports dynamically. Dynamic exports are managed by the DBus interface. DBus is a system-local IPC mechanism for system management and peer-to-peer application communication.
Note
With this version of nfs-ganesha, multiple Red Hat Storage volumes or sub-directories can now be exported simultaneously.
This version of nfs-ganesha now creates and maintains a NFSv4 pseudo-file system, which provides clients with seamless access to all exported objects on the server.
The nfs-ganesha NFSv4 protocol includes integrated support for Access Control Lists (ACLs), which are similar to those used by Windows. These ACLs can be used to identify a trustee and specify the access rights allowed or denied for that trustee. This feature is disabled by default.
7.3.3.5. Manually Configuring nfs-ganesha Exports
To start nfs-ganesha manually, execute the following command:
# /usr/bin/ganesha.nfsd -f <location of nfs-ganesha.conf file> -L <location of log file> -N <log level> -d
For example:
# /usr/bin/ganesha.nfsd -f nfs-ganesha.conf -L nfs-ganesha.log -N NIV_DEBUG -d
where:
- nfs-ganesha.conf is the configuration file that is available by default on installation of the nfs-ganesha rpms. It is located at /etc/glusterfs-ganesha.
- nfs-ganesha.log is the log file for the ganesha.nfsd process.
- NIV_DEBUG is the log level.
To configure an export, copy the EXPORT block into a .conf file, for example export.conf. Edit the parameters appropriately and include the export.conf file in nfs-ganesha.conf. This can be done by adding the line below at the end of nfs-ganesha.conf.
%include "export.conf"
# cat export.conf
EXPORT{
    Export_Id = 1 ;            # Export ID unique to each export
    Path = "volume_path";      # Path of the volume to be exported. Eg: "/test_volume"
    FSAL {
        name = GLUSTER;
        hostname = "10.xx.xx.xx";  # IP of one of the nodes in the trusted pool
        volume = "volume_name";    # Volume name. Eg: "test_volume"
    }
    Access_type = RW;          # Access permissions
    Squash = No_root_squash;   # To enable/disable root squashing
    Disable_ACL = TRUE;        # To enable/disable ACL
    Pseudo = "pseudo_path";    # NFSv4 pseudo path for this export. Eg: "/test_volume_pseudo"
    Protocols = "3,4" ;        # NFS protocols supported
    Transports = "UDP,TCP" ;   # Transport protocols supported
    SecType = "sys";           # Security flavors supported
}
The following configurations can be made by editing the export.conf file to see the expected behavior.
To export subdirectories within a volume, edit the following parameters in the export.conf file.
Path = "path_to_subdirectory";  # Path of the volume to be exported. Eg: "/test_volume/test_subdir"
FSAL {
    name = GLUSTER;
    hostname = "10.xx.xx.xx";  # IP of one of the nodes in the trusted pool
    volume = "volume_name";    # Volume name. Eg: "test_volume"
    volpath = "path_to_subdirectory_with_respect_to_volume";  # Subdirectory path from the root of the volume. Eg: "/test_subdir"
}
To export multiple entries, define a separate EXPORT block in the export.conf file for each of the entries, each with a unique export ID.
# cat export.conf
EXPORT{
    Export_Id = 1 ;                    # Export ID unique to each export
    Path = "test_volume";              # Path of the volume to be exported. Eg: "/test_volume"
    FSAL {
        name = GLUSTER;
        hostname = "10.xx.xx.xx";      # IP of one of the nodes in the trusted pool
        volume = "test_volume";        # Volume name. Eg: "test_volume"
    }
    Access_type = RW;                  # Access permissions
    Squash = No_root_squash;           # To enable/disable root squashing
    Disable_ACL = TRUE;                # To enable/disable ACL
    Pseudo = "/test_volume";           # NFSv4 pseudo path for this export. Eg: "/test_volume_pseudo"
    Protocols = "3,4" ;                # NFS protocols supported
    Transports = "UDP,TCP" ;           # Transport protocols supported
    SecType = "sys";                   # Security flavors supported
}
EXPORT{
    Export_Id = 2 ;                    # Export ID unique to each export
    Path = "test_volume/test_subdir";  # Path of the volume to be exported. Eg: "/test_volume"
    FSAL {
        name = GLUSTER;
        hostname = "10.xx.xx.xx";      # IP of one of the nodes in the trusted pool
        volume = "test_volume";        # Volume name. Eg: "test_volume"
        volpath = "/test_subdir";
    }
    Access_type = RW;                  # Access permissions
    Squash = No_root_squash;           # To enable/disable root squashing
    Disable_ACL = FALSE;               # To enable/disable ACL
    Pseudo = "/test_subdir";           # NFSv4 pseudo path for this export. Eg: "/test_volume_pseudo"
    Protocols = "3,4" ;                # NFS protocols supported
    Transports = "UDP,TCP" ;           # Transport protocols supported
    SecType = "sys";                   # Security flavors supported
}

# showmount -e localhost
Export list for localhost:
/test_volume (everyone)
/test_volume/test_subdir (everyone)
/ (everyone)
The parameter values and permission values given in the EXPORT block apply to any client that mounts the exported volume. To provide specific permissions to specific clients, introduce a client block inside the EXPORT block.
For example, add the following client block inside the EXPORT block.
client {
    clients = "10.xx.xx.xx";  # IP of the client.
    allow_root_access = true;
    access_type = "RO";       # Read-only permissions
    Protocols = "3";          # Allow only NFSv3 protocol.
    anonymous_uid = 1440;
    anonymous_gid = 72;
}
All the other clients inherit the permissions that are declared outside the client block.
To enable NFSv4 ACLs, edit the following parameter:
Disable_ACL = FALSE;
To set the NFSv4 pseudo path, edit the following parameter:
Pseudo = "pseudo_path"; # NFSv4 pseudo path for this export. Eg: "/test_volume_pseudo"
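The pseudo path is what NFSv4 clients mount. A hedged sketch of such a mount, assuming the example pseudo path above and a placeholder server IP:
# mount -t nfs -o vers=4 10.xx.xx.xx:/test_volume_pseudo /mnt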
The file org.ganesha.nfsd.conf is installed in /etc/glusterfs-ganesha/ as part of the nfs-ganesha rpms. To export entries dynamically without restarting nfs-ganesha, execute the following steps:
- Copy the file org.ganesha.nfsd.conf into the directory /etc/dbus-1/system.d/.
- Execute the following command:
  service messagebus restart
- Adding an export dynamically
To add an export dynamically, add an export block as explained in section Exporting Multiple Entries, and execute the following command:
dbus-send --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.AddExport string:/path-to-export.conf string:'EXPORT(Path=/path-in-export-block)'
For example, to add testvol1 dynamically:
dbus-send --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.AddExport string:/home/nfs-ganesha/export.conf string:'EXPORT(Path=/testvol1)'
method return sender=:1.35 -> dest=:1.37 reply_serial=2
- Removing an export dynamically
To remove an export dynamically, execute the following command:
dbus-send --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.RemoveExport int32:export-id-in-the-export-block
For example:
dbus-send --print-reply --system --dest=org.ganesha.nfsd /org/ganesha/nfsd/ExportMgr org.ganesha.nfsd.exportmgr.RemoveExport int32:79
method return sender=:1.35 -> dest=:1.37 reply_serial=2
7.3.3.6. Accessing nfs-ganesha Exports
To mount an export in NFSv3 mode, execute the following command:
mount -t nfs -o vers=3 ip:/volname /mountpoint
For example:
mount -t nfs -o vers=3 10.70.0.0:/testvol /mnt
To mount an export in NFSv4 mode, execute the following command:
mount -t nfs -o vers=4 ip:/volname /mountpoint
For example:
mount -t nfs -o vers=4 10.70.0.0:/testvol /mnt
7.3.3.7. Troubleshooting
- Situation
nfs-ganesha fails to start.
Solution
Follow the listed steps to fix the issue:
- Review the /tmp/ganesha.log file to understand the cause of failure.
to understand the cause of failure. - Ensure the kernel and gluster nfs services are inactive.
- Ensure that you execute both the nfs-ganesha.host and nfs-ganesha.enable volume set options.
  For more information, see Section 7.3.3.5, Manually Configuring nfs-ganesha Exports.
- Situation
nfs-ganesha has started and fails to export a volume.
Solution
Follow the listed steps to fix the issue:
- Ensure that the file org.ganesha.nfsd.conf is copied into /etc/dbus-1/system.d/ before starting nfs-ganesha.
- If you had not copied the file, copy it and restart nfs-ganesha. For more information, see Section 7.3.3.3, Exporting and Unexporting Volumes through nfs-ganesha.
- Situation
nfs-ganesha fails to stop
Solution
Execute the following steps:
- Check for the status of the nfs-ganesha process.
- If it is still running, issue a kill -9 signal on its PID.
- Run the following command to check if the nfs, mountd, nlockmgr, and rquotad services are unregistered cleanly.
  rpcinfo -p
- If the services are not unregistered, then delete these entries using the following command:
  rpcinfo -d
  Note
  You can also restart the rpcbind service instead of using rpcinfo -d on individual entries; a hedged sketch of both approaches follows this procedure.
- Force start the volume by using the following command:
# gluster volume start volname force
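As an illustration only, assuming some of the services named above are still registered, either of the following might be used; the program names and versions passed to rpcinfo -d may differ on a given system:
rpcinfo -d nfs 3
rpcinfo -d mountd 3
service rpcbind restart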
- Situation
Permission issues.
Solution
By default, the root squash option is disabled when you start nfs-ganesha using the CLI. If you encounter any permission issues, check the UNIX permissions of the exported entry.