Chapter 24. Exporting NFS shares

As a system administrator, you can use the NFS server to share a directory on your system over the network.

24.1. Introduction to NFS

This section explains the basic concepts of the NFS service.

A Network File System (NFS) allows remote hosts to mount file systems over a network and interact with those file systems as though they are mounted locally. This enables you to consolidate resources onto centralized servers on the network.

The NFS server refers to the /etc/exports configuration file to determine whether the client is allowed to access any exported file systems. Once verified, all file and directory operations are available to the user.

24.2. Supported NFS versions

This section lists versions of NFS supported in Red Hat Enterprise Linux and their features.

Currently, Red Hat Enterprise Linux 8 supports the following major versions of NFS:

  • NFS version 3 (NFSv3) supports safe asynchronous writes and is more robust at error handling than the previous NFSv2; it also supports 64-bit file sizes and offsets, allowing clients to access more than 2 GB of file data.
  • NFS version 4 (NFSv4) works through firewalls and on the Internet, no longer requires an rpcbind service, supports Access Control Lists (ACLs), and utilizes stateful operations.

NFS version 2 (NFSv2) is no longer supported by Red Hat.

Default NFS version

The default NFS version in Red Hat Enterprise Linux 8 is 4.2. NFS clients attempt to mount using NFSv4.2 by default, and fall back to NFSv4.1 when the server does not support NFSv4.2. The mount later falls back to NFSv4.0 and then to NFSv3.
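
For example, if you want to bypass this negotiation and request a specific NFS version on the client, you can pass the vers mount option. The server name and export path below are placeholders:

# mount -t nfs -o vers=4.2 server.example.com:/export /mnt
# mount -t nfs -o vers=3 server.example.com:/export /mnt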

Features of minor NFS versions

Following are the features of NFSv4.2 in Red Hat Enterprise Linux 8:

Server-side copy
Enables the NFS client to copy data efficiently, without wasting network resources, by using the copy_file_range() system call.
Sparse files
Enables files to have one or more holes, which are unallocated or uninitialized data blocks consisting only of zeroes. NFSv4.2 supports the SEEK_HOLE and SEEK_DATA flags of the lseek() operation, which enables applications to map out the location of holes in a sparse file.
Space reservation
Permits storage servers to reserve free space, which prevents them from running out of space. NFSv4.2 supports the allocate() operation to reserve space, the deallocate() operation to unreserve space, and the fallocate() operation to preallocate or deallocate space in a file. For a local illustration of sparse files and preallocation, see the example after this list.
Labeled NFS
Enforces data access rights and enables SELinux labels between a client and a server for individual files on an NFS file system.
Layout enhancements
Provides the layoutstats() operation, which enables some Parallel NFS (pNFS) servers to collect better performance statistics.
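
As a local illustration of sparse files and space preallocation (not specific to NFS, and using example file names), the following commands create a sparse file with truncate, preallocate space with fallocate, and compare apparent size with allocated size:

# truncate -s 1G sparse.img
# fallocate -l 100M prealloc.img
# du -h --apparent-size sparse.img prealloc.img
# du -h sparse.img prealloc.img

The first du call reports the apparent file sizes (1 GiB and 100 MiB), while the second reports the space actually allocated on disk: close to zero for the sparse file and the full 100 MiB for the preallocated one.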

Following are the features of NFSv4.1:

  • Enhances network performance and security, and includes client-side support for pNFS.
  • No longer requires a separate TCP connection for callbacks, which allows an NFS server to grant delegations even when it cannot contact the client: for example, when NAT or a firewall interferes.
  • Provides exactly once semantics (except for reboot operations), preventing a previous issue whereby certain operations sometimes returned an inaccurate result if a reply was lost and the operation was sent twice.

24.3. The TCP and UDP protocols in NFSv3 and NFSv4

NFSv4 requires the Transmission Control Protocol (TCP) running over an IP network.

NFSv3 could also use the User Datagram Protocol (UDP) in earlier Red Hat Enterprise Linux versions. In Red Hat Enterprise Linux 8, NFS over UDP is no longer supported. By default, UDP is disabled in the NFS server.

24.4. Services required by NFS

This section lists system services that are required for running an NFS server or mounting NFS shares. Red Hat Enterprise Linux starts these services automatically.

Red Hat Enterprise Linux uses a combination of kernel-level support and service processes to provide NFS file sharing. All NFS versions rely on Remote Procedure Calls (RPC) between clients and servers. To share or mount NFS file systems, the following services work together depending on which version of NFS is implemented:

nfsd
The NFS server kernel module that services requests for shared NFS file systems.
rpcbind
Accepts port reservations from local RPC services. These ports are then made available (or advertised) so the corresponding remote RPC services can access them. The rpcbind service responds to requests for RPC services and sets up connections to the requested RPC service. This is not used with NFSv4.
rpc.mountd
This process is used by an NFS server to process MOUNT requests from NFSv3 clients. It checks that the requested NFS share is currently exported by the NFS server, and that the client is allowed to access it. If the mount request is allowed, the nfs-mountd service replies with a Success status and provides the File-Handle for this NFS share back to the NFS client.
rpc.nfsd
This process enables you to define the explicit NFS versions and protocols that the server advertises. It works with the Linux kernel to meet the dynamic demands of NFS clients, such as providing server threads each time an NFS client connects. This process corresponds to the nfs-server service.
lockd
This is a kernel thread that runs on both clients and servers. It implements the Network Lock Manager (NLM) protocol, which enables NFSv3 clients to lock files on the server. It is started automatically whenever the NFS server is run and whenever an NFS file system is mounted.
rpc.statd
This process implements the Network Status Monitor (NSM) RPC protocol, which notifies NFS clients when an NFS server is restarted without being gracefully brought down. The rpc-statd service is started automatically by the nfs-server service, and does not require user configuration. This is not used with NFSv4.
rpc.rquotad
This process provides user quota information for remote users. The rpc-rquotad service, which is provided by the quota-rpc package, has to be started by the user when the nfs-server service is started.
rpc.idmapd

This process provides NFSv4 client and server upcalls, which map between on-the-wire NFSv4 names (strings in the form of user@domain) and local UIDs and GIDs. For idmapd to function with NFSv4, the /etc/idmapd.conf file must be configured. At a minimum, the Domain parameter should be specified, which defines the NFSv4 mapping domain. If the NFSv4 mapping domain is the same as the DNS domain name, this parameter can be skipped. The client and server must agree on the NFSv4 mapping domain for ID mapping to function properly.

Only the NFSv4 server uses rpc.idmapd, which is started by the nfs-idmapd service. The NFSv4 client uses the keyring-based nfsidmap utility, which is called by the kernel on-demand to perform ID mapping. If there is a problem with nfsidmap, the client falls back to using rpc.idmapd.
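
A minimal /etc/idmapd.conf, assuming a mapping domain of example.com, could look like this:

[General]
Domain = example.com

The same Domain value must be configured on the NFSv4 clients and the server for ID mapping to work.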

The RPC services with NFSv4

The mounting and locking protocols have been incorporated into the NFSv4 protocol. The server also listens on the well-known TCP port 2049. As such, NFSv4 does not need to interact with rpcbind, lockd, and rpc-statd services. The nfs-mountd service is still required on the NFS server to set up the exports, but is not involved in any over-the-wire operations.
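
For example, on a running NFS server you can check that the kernel is listening on TCP port 2049 and which NFS versions are enabled; the exact output depends on your configuration:

# ss -tln | grep 2049
# cat /proc/fs/nfsd/versions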

24.5. NFS host name formats

This section describes different formats that you can use to specify a host when mounting or exporting an NFS share.

You can specify the host in the following formats:

Single machine

Any of the following:

  • A fully-qualified domain name (that can be resolved by the server)
  • A host name (that can be resolved by the server)
  • An IP address
IP networks

Either of the following formats is valid:

  • a.b.c.d/z, where a.b.c.d is the network and z is the number of bits in the netmask; for example 192.168.0.0/24.
  • a.b.c.d/netmask, where a.b.c.d is the network and netmask is the netmask; for example, 192.168.100.8/255.255.255.0.
Netgroups
The @group-name format, where group-name is the NIS netgroup name.
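
For illustration, the following /etc/exports entries, with hypothetical paths and names, use each of these host formats:

/srv/share1 client.example.com(rw)
/srv/share2 192.0.2.15(rw)
/srv/share3 192.168.0.0/24(ro)
/srv/share4 @trusted-hosts(rw)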

24.6. NFS server configuration

This section describes the syntax and options of two ways to configure exports on an NFS server:

  • Manually editing the /etc/exports configuration file
  • Using the exportfs utility on the command line

24.6.1. The /etc/exports configuration file

The /etc/exports file controls which file systems are exported to remote hosts and specifies options. It uses the following syntax rules:

  • Blank lines are ignored.
  • To add a comment, start a line with the hash mark (#).
  • You can wrap long lines with a backslash (\).
  • Each exported file system should be on its own individual line.
  • Any lists of authorized hosts placed after an exported file system must be separated by space characters.
  • Options for each of the hosts must be placed in parentheses directly after the host identifier, without any spaces separating the host and the first parenthesis.
Export entry

Each entry for an exported file system has the following structure:

export host(options)

It is also possible to specify multiple hosts, along with specific options for each host. To do so, list them on the same line as a space-delimited list, with each host name followed by its respective options (in parentheses), as in:

export host1(options1) host2(options2) host3(options3)

In this structure:

export
The directory being exported
host
The host or network to which the export is being shared
options
The options to be used for host

Example 24.1. A simple /etc/exports file

In its simplest form, the /etc/exports file only specifies the exported directory and the hosts permitted to access it:

/exported/directory bob.example.com

Here, bob.example.com can mount /exported/directory/ from the NFS server. Because no options are specified in this example, NFS uses default options.

Important

The format of the /etc/exports file is very precise, particularly in regard to the use of the space character. Remember to always separate exported file systems from hosts, and hosts from one another, with a space character. However, there should be no other space characters in the file except on comment lines.

For example, the following two lines do not mean the same thing:

/home bob.example.com(rw)
/home bob.example.com (rw)

The first line allows only users from bob.example.com read and write access to the /home directory. The second line allows users from bob.example.com to mount the directory as read-only (the default), while the rest of the world can mount it read/write.

Default options

The default options for an export entry are:

ro
The exported file system is read-only. Remote hosts cannot change the data shared on the file system. To allow hosts to make changes to the file system (that is, read and write), specify the rw option.
sync
The NFS server will not reply to requests before changes made by previous requests are written to disk. To enable asynchronous writes instead, specify the option async.
wdelay
The NFS server will delay writing to the disk if it suspects another write request is imminent. This can improve performance as it reduces the number of times the disk must be accessed by separate write commands, thereby reducing write overhead. To disable this, specify the no_wdelay option, which is available only if the default sync option is also specified.
root_squash

This prevents root users connected remotely (as opposed to locally) from having root privileges; instead, the NFS server assigns them the user ID nobody. This effectively "squashes" the power of the remote root user to the lowest local user, preventing possible unauthorized writes on the remote server. To disable root squashing, specify the no_root_squash option.

To squash every remote user (including root), use the all_squash option. To specify the user and group IDs that the NFS server should assign to remote users from a particular host, use the anonuid and anongid options, respectively, as in:

export host(anonuid=uid,anongid=gid)

Here, uid and gid are user ID number and group ID number, respectively. The anonuid and anongid options enable you to create a special user and group account for remote NFS users to share.

By default, access control lists (ACLs) are supported by NFS under Red Hat Enterprise Linux. To disable this feature, specify the no_acl option when exporting the file system.
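
For example, the following hypothetical entry squashes all remote users from the 192.168.0.0/24 network to UID and GID 5000 and disables ACL support:

/srv/shared 192.168.0.0/24(rw,all_squash,anonuid=5000,anongid=5000,no_acl)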

Default and overridden options

Defaults must be explicitly overridden for each exported file system where different behavior is required. For example, if the rw option is not specified, the exported file system is shared as read-only. The following is a sample line from /etc/exports which overrides two default options:

/another/exported/directory 192.168.0.3(rw,async)

In this example, 192.168.0.3 can mount /another/exported/directory/ read and write, and all writes to disk are asynchronous.

24.6.2. The exportfs utility

The exportfs utility enables the root user to selectively export or unexport directories without restarting the NFS service. When given the proper options, the exportfs utility writes the exported file systems to /var/lib/nfs/xtab. Because the nfs-mountd service refers to the xtab file when deciding access privileges to a file system, changes to the list of exported file systems take effect immediately.

Common exportfs options

The following is a list of commonly-used options available for exportfs:

-r
Causes all directories listed in /etc/exports to be exported by constructing a new export list in /var/lib/nfs/etab. This option effectively refreshes the export list with any changes made to /etc/exports.
-a
Causes all directories to be exported or unexported, depending on what other options are passed to exportfs. If no other options are specified, exportfs exports all file systems specified in /etc/exports.
-o file-systems
Specifies directories to be exported that are not listed in /etc/exports. Replace file-systems with additional file systems to be exported. These file systems must be formatted in the same way they are specified in /etc/exports. This option is often used to test an exported file system before adding it permanently to the list of exported file systems.
-i
Ignores /etc/exports; only options given from the command line are used to define exported file systems.
-u
Unexports all shared directories. The command exportfs -ua suspends NFS file sharing while keeping all NFS services up. To re-enable NFS sharing, use exportfs -r.
-v
Verbose operation, where the file systems being exported or unexported are displayed in greater detail when the exportfs command is executed.

If no options are passed to the exportfs utility, it displays a list of currently exported file systems.
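
For example, assuming a directory /srv/test that is not listed in /etc/exports, you could temporarily export it to a test network, list the current exports in detail, and later refresh the export list from /etc/exports:

# exportfs -o rw,sync 192.0.2.0/24:/srv/test
# exportfs -v
# exportfs -r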

24.7. NFS and rpcbind

The rpcbind service maps Remote Procedure Call (RPC) services to the ports on which they listen. RPC processes notify rpcbind when they start, registering the ports they are listening on and the RPC program numbers they expect to serve. The client system then contacts rpcbind on the server with a particular RPC program number. The rpcbind service redirects the client to the proper port number so it can communicate with the requested service.

The Network File System Version 3 (NFSv3) requires the rpcbind service.

Because RPC-based services rely on rpcbind to make all connections with incoming client requests, rpcbind must be available before any of these services start.

Access control rules for rpcbind affect all RPC-based services. Alternatively, it is possible to specify access control rules for each of the NFS RPC daemons.

Additional resources

  • rpc.mountd(8) man page
  • rpc.statd(8) man page

24.8. Installing NFS

This procedure installs all packages necessary to mount or export NFS shares.

Procedure

  • Install the nfs-utils package:

    # yum install nfs-utils

24.9. Starting the NFS server

This procedure describes how to start the NFS server, which is required to export NFS shares.

Prerequisites

  • For servers that support NFSv3 connections, the rpcbind service must be running. To verify that rpcbind is active, use the following command:

    $ systemctl status rpcbind

    If the service is stopped, start and enable it:

    $ systemctl enable --now rpcbind

Procedure

  • To start the NFS server and enable it to start automatically at boot, use the following command:

    # systemctl enable --now nfs-server
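
Optionally, you can verify that the server is running and list the directories it currently exports, for example:

# systemctl status nfs-server
# exportfs -v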

24.10. Troubleshooting NFS and rpcbind

Because the rpcbind service provides coordination between RPC services and the port numbers used to communicate with them, it is useful to view the status of current RPC services using rpcbind when troubleshooting. The rpcinfo utility shows each RPC-based service with port numbers, an RPC program number, a version number, and an IP protocol type (TCP or UDP).

Procedure

  1. To make sure the proper NFS RPC-based services are enabled for rpcbind, use the following command:

    # rpcinfo -p

    Example 24.2. rpcinfo -p command output

    The following is sample output from this command:

       program vers proto   port  service
        100000    4   tcp    111  portmapper
        100000    3   tcp    111  portmapper
        100000    2   tcp    111  portmapper
        100000    4   udp    111  portmapper
        100000    3   udp    111  portmapper
        100000    2   udp    111  portmapper
        100005    1   udp  20048  mountd
        100005    1   tcp  20048  mountd
        100005    2   udp  20048  mountd
        100005    2   tcp  20048  mountd
        100005    3   udp  20048  mountd
        100005    3   tcp  20048  mountd
        100024    1   udp  37769  status
        100024    1   tcp  49349  status
        100003    3   tcp   2049  nfs
        100003    4   tcp   2049  nfs
        100227    3   tcp   2049  nfs_acl
        100021    1   udp  56691  nlockmgr
        100021    3   udp  56691  nlockmgr
        100021    4   udp  56691  nlockmgr
        100021    1   tcp  46193  nlockmgr
        100021    3   tcp  46193  nlockmgr
        100021    4   tcp  46193  nlockmgr

    If one of the NFS services does not start up correctly, rpcbind will be unable to map RPC requests from clients for that service to the correct port.

  2. In many cases, if NFS is not present in rpcinfo output, restarting NFS causes the service to correctly register with rpcbind and begin working:

    # systemctl restart nfs-server

24.11. Configuring the NFS server to run behind a firewall

NFS requires the rpcbind service, which dynamically assigns ports for RPC services and can cause issues for configuring firewall rules. The following sections describe how to configure NFS versions to work behind a firewall if you want to support:

  • NFSv3

    This includes any servers that support NFSv3:

    • NFSv3-only servers
    • Servers that support both NFSv3 and NFSv4
  • NFSv4-only

24.11.1. Configuring the NFSv3-enabled server to run behind a firewall

The following procedure describes how to configure servers that support NFSv3 to run behind a firewall. This includes NFSv3-only servers and servers that support both NFSv3 and NFSv4.

Procedure

  1. To allow clients to access NFS shares behind a firewall, configure the firewall by running the following commands on the NFS server:

    firewall-cmd --permanent --add-service mountd
    firewall-cmd --permanent --add-service rpc-bind
    firewall-cmd --permanent --add-service nfs
  2. Specify the ports to be used by the RPC service nlockmgr in the /etc/nfs.conf file as follows:

    [lockd]
    
    port=tcp-port-number
    udp-port=udp-port-number

    Alternatively, you can specify nlm_tcpport and nlm_udpport in the /etc/modprobe.d/lockd.conf file, as shown in the example after this procedure.

  3. Open the specified ports in the firewall by running the following commands on the NFS server:

    firewall-cmd --permanent --add-port=<lockd-tcp-port>/tcp
    firewall-cmd --permanent --add-port=<lockd-udp-port>/udp
  4. Add static ports for rpc.statd by editing the [statd] section of the /etc/nfs.conf file as follows:

    [statd]
    
    port=port-number
  5. Open the added ports in the firewall by running the following commands on the NFS server:

    firewall-cmd --permanent --add-port=<statd-tcp-port>/tcp
    firewall-cmd --permanent --add-port=<statd-udp-port>/udp
  6. Reload the firewall configuration:

    firewall-cmd --reload
  7. Restart the rpc-statd service first, and then restart the nfs-server service:

    # systemctl restart rpc-statd.service
    # systemctl restart nfs-server.service

    Alternatively, if you specified the lockd ports in the /etc/modprobe.d/lockd.conf file:

    1. Update the current values of /proc/sys/fs/nfs/nlm_tcpport and /proc/sys/fs/nfs/nlm_udpport:

      # sysctl -w fs.nfs.nlm_tcpport=<tcp-port>
      # sysctl -w fs.nfs.nlm_udpport=<udp-port>
    2. Restart the rpc-statd and nfs-server services:

      # systemctl restart rpc-statd.service
      # systemctl restart nfs-server.service
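
If you use the /etc/modprobe.d/lockd.conf alternative mentioned in step 2, the file could contain, for example, the following line; the port numbers shown are only examples:

options lockd nlm_tcpport=32803 nlm_udpport=32769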

24.11.2. Configuring the NFSv4-only server to run behind a firewall

The following procedure describes how to configure the NFSv4-only server to run behind a firewall.

Procedure

  1. To allow clients to access NFS shares behind a firewall, configure the firewall by running the following command on the NFS server:

    firewall-cmd --permanent --add-service nfs
  2. Reload the firewall configuration:

    firewall-cmd --reload
  3. Restart the nfs-server:

    # systemctl restart nfs-server

24.11.3. Configuring an NFSv3 client to run behind a firewall

The procedure to configure an NFSv3 client to run behind a firewall is similar to the procedure to configure an NFSv3 server to run behind a firewall.

If the machine you are configuring is both an NFS client and an NFS server, follow the procedure described in Configuring the NFSv3-enabled server to run behind a firewall.

The following procedure describes how to configure a machine that is an NFS client only to run behind a firewall.

Procedure

  1. To allow the NFS server to perform callbacks to the NFS client when the client is behind a firewall, add the rpc-bind service to the firewall by running the following command on the NFS client:

    firewall-cmd --permanent --add-service rpc-bind
  2. Specify the ports to be used by the RPC service nlockmgr in the /etc/nfs.conf file as follows:

    [lockd]
    
    port=tcp-port-number
    udp-port=udp-port-number

    Alternatively, you can specify nlm_tcpport and nlm_udpport in the /etc/modprobe.d/lockd.conf file.

  3. Open the specified ports in the firewall by running the following commands on the NFS client:

    firewall-cmd --permanent --add-port=<lockd-tcp-port>/tcp
    firewall-cmd --permanent --add-port=<lockd-udp-port>/udp
  4. Add static ports for rpc.statd by editing the [statd] section of the /etc/nfs.conf file as follows:

    [statd]
    
    port=port-number
  5. Open the added ports in the firewall by running the following commands on the NFS client:

    firewall-cmd --permanent --add-port=<statd-tcp-port>/tcp
    firewall-cmd --permanent --add-port=<statd-udp-port>/udp
  6. Reload the firewall configuration:

    firewall-cmd --reload
  7. Restart the rpc-statd service:

    # systemctl restart rpc-statd.service

    Alternatively, if you specified the lockd ports in the /etc/modprobe.d/lockd.conf file:

    1. Update the current values of /proc/sys/fs/nfs/nlm_tcpport and /proc/sys/fs/nfs/nlm_udpport:

      # sysctl -w fs.nfs.nlm_tcpport=<tcp-port>
      # sysctl -w fs.nfs.nlm_udpport=<udp-port>
    2. Restart the rpc-statd service:

      # systemctl restart rpc-statd.service

24.11.4. Configuring an NFSv4 client to run behind a firewall

Perform this procedure only if the client is using NFSv4.0. In that case, it is necessary to open a port for NFSv4.0 callbacks.

This procedure is not needed for NFSv4.1 or higher because in the later protocol versions the server performs callbacks on the same connection that was initiated by the client.

Procedure

  1. To allow NFSv4.0 callbacks to pass through firewalls, set /proc/sys/fs/nfs/nfs_callback_tcpport and allow the server to connect to that port on the client as follows:

    # echo "fs.nfs.nfs_callback_tcpport = <callback-port>" >/etc/sysctl.d/90-nfs-callback-port.conf
    # sysctl -p /etc/sysctl.d/90-nfs-callback-port.conf
  2. Open the specified port in the firewall by running the following command on the NFS client:

    firewall-cmd --permanent --add-port=<callback-port>/tcp
  3. Reload the firewall configuration:

    firewall-cmd --reload

24.12. Exporting RPC quota through a firewall

If you export a file system that uses disk quotas, you can use the quota Remote Procedure Call (RPC) service to provide disk quota data to NFS clients.

Procedure

  1. Enable and start the rpc-rquotad service:

    # systemctl enable --now rpc-rquotad
    Note

    If enabled, the rpc-rquotad service starts automatically after the nfs-server service starts.

  2. To make the quota RPC service accessible behind a firewall, TCP port 875 (or UDP port 875, if UDP is enabled) needs to be open. The default port number is defined in the /etc/services file.

    You can override the default port number by appending -p port-number to the RPCRQUOTADOPTS variable in the /etc/sysconfig/rpc-rquotad file, as shown in the example after this procedure.

  3. By default, remote hosts can only read quotas. If you want to allow clients to set quotas, append the -S option to the RPCRQUOTADOPTS variable in the /etc/sysconfig/rpc-rquotad file.
  4. Restart rpc-rquotad for the changes in the /etc/sysconfig/rpc-rquotad file to take effect:

    # systemctl restart rpc-rquotad
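
For example, a /etc/sysconfig/rpc-rquotad file that explicitly sets the default port, together with the firewall commands to open that port, could look like this; adjust the port number if you use a different one:

# cat /etc/sysconfig/rpc-rquotad
RPCRQUOTADOPTS="-p 875"
# firewall-cmd --permanent --add-port=875/tcp
# firewall-cmd --reload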

24.13. Enabling NFS over RDMA (NFSoRDMA)

In Red Hat Enterprise Linux 8, the Remote Direct Memory Access (RDMA) service on RDMA-capable hardware provides Network File System (NFS) protocol support for high-speed file transfer over the network.

Procedure

  1. Install the rdma-core package:

    # yum install rdma-core
  2. Verify that the lines with xprtrdma and svcrdma are not commented out in the /etc/rdma/modules/rdma.conf file:

    # NFS over RDMA client support
    xprtrdma
    # NFS over RDMA server support
    svcrdma
  3. On the NFS server, create the /mnt/nfsordma directory and export it by adding an entry to the /etc/exports file:

    # mkdir /mnt/nfsordma
    # echo "/mnt/nfsordma *(fsid=0,rw,async,insecure,no_root_squash)" >> /etc/exports
  4. Restart the nfs-server service:

    # systemctl restart nfs-server
  5. On the NFS client, mount the exported share by using the server IP address, for example, 172.31.0.186. Because the export uses fsid=0, it is the NFSv4 pseudo-root and is mounted as /:

    # mount -o rdma,port=20049 172.31.0.186:/ /mnt/nfs
