Chapter 2. Deploying an NFS server


By using the Network File System (NFS) protocol, remote users can mount shared directories over a network and use them as if they were mounted locally. This enables you to consolidate resources onto centralized servers on the network.

2.1. Key features of minor NFSv4 versions

Each minor NFSv4 version brings enhancements aimed at improving performance and security. Use these improvements to take full advantage of NFSv4 and to ensure efficient and reliable file sharing across networks.

Key features of NFSv4.2

Server-side copy
Server-side copy is a capability of the NFS server to copy files without transferring the data back and forth over the network.
Sparse files
Enables files to have one or more empty spaces, or gaps, which are unallocated or uninitialized data blocks consisting only of zeros. This enables applications to map out the location of holes in the sparse file, as shown in the example after this list.
Space reservation
Clients can reserve or allocate space on the storage server before writing data. This prevents the server from running out of space.
Labeled NFS
Enforces data access rights and enables SELinux labels between a client and server for individual files on an NFS file system.
Layout enhancements
Provides functionality to enable Parallel NFS (pNFS) servers to collect better performance statistics.
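
The following is a minimal sketch that demonstrates sparse files and space reservation from a client, assuming an NFSv4.2 share is mounted at /mnt/; the file names are examples:

# truncate -s 1G /mnt/sparse.img       # create a file that is one large hole
# ls -lh /mnt/sparse.img               # reports the apparent size: 1G
# du -h /mnt/sparse.img                # reports the allocated size, close to 0
# fallocate -l 100M /mnt/reserved.img  # reserve 100 MiB on the server before writing any data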

Key features of NFSv4.1

Client-side support for pNFS
This high-speed I/O support for clustered servers enables you to store data on multiple machines, provides direct access to the data, and synchronizes updates to metadata.
Sessions

Sessions maintain the state of the server relative to the connections belonging to a client. They provide two key features:

  • Exactly-once semantics (EOS), which help to distinguish between the responses of old and new operations.
  • The ability to bind multiple network connections to a single session, which improves performance.

Key features of NFSv4.0

RPC and security
The RPCSEC_GSS framework enhances Remote Procedure Call (RPC) security. The NFSv4 protocol introduces a new operation for in-band security negotiation. This enables clients to query server policies for accessing file system resources securely.
Procedure and operation structure
NFSv4.0 introduces the COMPOUND procedure, which enables clients to merge multiple operations into a single request to reduce the number of RPCs.
File system model

NFSv4.0 retains the hierarchical file system model, treating files as byte streams and encoding names with UTF-8 for internationalization.

  • File handle types

    With volatile file handles, servers can adjust to file system changes and enable clients to adapt as needed without requiring permanent handles.

  • Attribute types

    The file attribute structure includes required, recommended, and named attributes, each serving distinct purposes. Required attributes, derived from NFSv3, are essential for distinguishing file types. Recommended attributes, such as Access Control Lists (ACLs), provide enhanced access control.

  • Multi-server namespace

Namespaces can span multiple servers. They simplify file system transfers based on attributes and support referrals, redundancy, and seamless server migration.

OPEN and CLOSE operations
These operations combine file lookup, creation, and share semantics at a single point, ensuring correct file sharing behavior.
File locking
File locking is part of the protocol, eliminating the need for RPC callbacks. File lock state is managed by the server under a lease-based model. Failure to renew the lease may result in state release by the server.
Client caching and delegation
Caching resembles previous versions, with client-determined timeouts for attribute and directory caching. Delegations in NFSv4.0 allow the server to assign certain responsibilities to the client. This guarantees specific file sharing semantics and enables local file operations without immediate server interaction.

2.2. The AUTH_SYS authentication method

The AUTH_SYS method, which is also known as AUTH_UNIX, is a client authentication mechanism. With AUTH_SYS, the client sends the User ID (UID) and Group ID (GID) of the user to the server to verify its identity and permissions when accessing files.

AUTH_SYS is considered less secure because it relies on client-provided information, which makes it susceptible to unauthorized access if a client is misconfigured or compromised.

Mapping mechanisms ensure that NFS clients can access files with the appropriate permissions on the server, even if the UID and GID assignments differ between systems. UIDs and GIDs are mapped between NFS client and server by the following mechanisms:

Direct mapping

UIDs and GIDs are directly mapped by NFS servers and clients between local and remote systems. This requires consistent UID and GID assignments across all systems participating in NFS file sharing. For example, a user with UID 1000 on a client can only access the files on a share that a user with UID 1000 on the server has access to.

For simplified ID management in an NFS environment, administrators often rely on centralized services, such as LDAP or Network Information Service (NIS), to manage UID and GID mappings across multiple systems.

User and Group ID mapping
NFS servers and clients can use the idmapd service to translate UIDs and GIDs between different systems for consistent identification and permission assignment.
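
For the idmapd service to map names, the client and the server must use the same NFSv4 ID mapping domain. The following is a minimal sketch of the relevant setting in the /etc/idmapd.conf file; the domain name is an example:

[General]
Domain = example.com

After changing the domain on a client, you can clear the kernel's ID mapping cache with the nfsidmap -c command.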

2.3. The AUTH_GSS authentication method

Kerberos is a network authentication protocol that allows secure authentication for clients and servers over a non-secure network. It uses symmetric key cryptography and requires a trusted Key Distribution Center (KDC) to authenticate users and services.

Unlike AUTH_SYS, RPCSEC_GSS Kerberos mechanism ensures the server does not depend on the client to correctly represent which user accesses the file. Instead, the system uses cryptography to authenticate users to the server. This prevents a malicious client from impersonating a user without having the Kerberos credentials of that user.

In the /etc/exports file, the sec option defines one or multiple methods of Kerberos security that the share should provide, and clients can mount the share with one of these methods. The sec option supports the following values:

  • sys: no cryptographic protection (default)
  • krb5: authentication only
  • krb5i: authentication and integrity protection
  • krb5p: authentication, integrity checking, and traffic encryption

Note that the more cryptographic functionality a method provides, the lower the performance.
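
For example, the following hypothetical /etc/exports entry offers all three Kerberos methods for a share, and the mount command on a client selects integrity protection:

/projects	client.example.com(rw,sec=krb5:krb5i:krb5p)

# mount -o sec=krb5i server.example.com:/projects /mnt/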

2.4. File permissions on exported file systems

File permissions on exported file systems determine access rights to files and directories for clients accessing them over NFS.

Once a remote host mounts the NFS file system, the only protection each shared file has is its file system permissions. If two users share the same User ID (UID) value, they can mount the same NFS file system on different client systems. In this case, they can modify each other’s files.

NFS treats the root user on the client as equivalent to the root user on the server. However, by default, the NFS server maps root to the nobody account when accessing an NFS share. The root_squash option controls this behavior.
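
For example, the following hypothetical /etc/exports entries show the related options: root_squash is the default, no_root_squash disables the mapping for trusted hosts, and all_squash maps every client user to the anonymous UID and GID:

/projects	client.example.com(rw,root_squash)
/trusted	admin.example.com(rw,no_root_squash)
/public	*(ro,all_squash,anonuid=65534,anongid=65534)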

For more information about this option, see the exports(5) man page on your system.

2.5. Services required on an NFS server

Red Hat Enterprise Linux (RHEL) uses a combination of kernel modules and user-space processes to provide NFS file shares.

The following tables include details of their functions and configuration.

Table 2.1. Services required on an NFS server

Each entry lists the service name, the NFS versions that use it, and its function:

rpcbind (NFSv3)
This process accepts port reservations from local remote procedure call (RPC) services and makes them available, or advertises them, so that the corresponding remote RPC services can access them. The rpcbind service responds to requests for RPC services and sets up connections to the specified RPC service.

rpc.mountd (NFSv3, NFSv4)
This service processes MOUNT requests from NFSv3 clients; NFSv4 servers use internal functions of this service. It checks that the requested NFS share is currently exported by the NFS server and that the client has access permissions.

rpc.nfsd (NFSv3, NFSv4)
This process advertises the explicit NFS versions and protocols the server defines. It works with the kernel to meet the dynamic demands of NFS clients, for example, by providing server threads each time an NFS client connects. The nfs-server service starts this process.

rpc.rquotad (NFSv3, NFSv4)
This service provides user quota information for remote users.

rpc.idmapd (NFSv4)
This process provides NFSv4 client and server upcalls, which map between NFSv4 names (strings in the user@domain form) and local user and group IDs.

gssproxy (NFSv3, NFSv4)
This service handles krb5 authentication on behalf of rpc.nfsd.

nfsdcld (NFSv4)
This NFSv4 client tracking daemon prevents the server from granting lock reclaims when other clients have taken conflicting locks during a network partition combined with a server reboot.

rpc.statd (NFSv3)
This service notifies other NFSv3 clients when the local host reboots, and notifies the kernel when a remote NFSv3 host reboots.

Table 2.2. Modules required on an NFS server

nfsd (NFSv3, NFSv4)
The NFS kernel module that services requests for shared NFS file systems.

lockd (NFSv3)
This kernel module implements the Network Lock Manager (NLM) protocol, which enables clients to lock files on the server. RHEL loads the module automatically when the NFS server runs.

For more information, see the following man pages on your system:

  • rpcbind(8)
  • rpc.mountd(8)
  • rpc.nfsd(8)
  • rpc.statd(8)
  • rpc.rquotad(8)
  • rpc.idmapd(8)
  • gssproxy(8)
  • nfsdcld(8)
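
To see which of these RPC services are currently registered with rpcbind, you can query the server. The following output is abbreviated and illustrative; port numbers other than 111 and 2049 vary between systems:

# rpcinfo -p server.example.com
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100005    3   tcp  20048  mountd
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs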

2.6. The /etc/exports configuration file

The /etc/exports file controls which directories the server exports. Each line contains an export point, a whitespace-separated list of clients allowed to mount the directory, and options for each client.

The following is the format for an /etc/exports entry:

<directory> <host_or_network_1>(<options_1>) <host_or_network_n>(<options_n>)...

The following are the individual parts of an /etc/exports entry:

<directory>
The directory that is being exported.
<host_or_network>
The host or network to which the export is being shared. For example, you can specify a hostname, an IP address, or an IP network.
<options>
The options for the host or network.

Adding a space between a client and its options changes the behavior. For example, the following lines do not have the same meaning:

/projects	client.example.com(rw)
/projects	client.example.com (rw)

In the first line, the server allows only client.example.com to mount the /projects directory in read-write mode. No other hosts can mount the share. However, the space between client.example.com and (rw) in the second line changes the behavior. The server exports the directory to client.example.com in read-only mode (default setting). All other hosts can mount the share in read-write mode.

The NFS server uses the following default settings for each exported directory:

Table 2.3. Default options of entries in /etc/exports

ro
Exports the directory in read-only mode.

sync
The NFS server does not reply to requests before changes made by previous requests are written to disk.

wdelay
The server delays writing to the disk if it suspects another write request is pending.

root_squash
Prevents the root user on clients from having root permissions on an exported directory. With root_squash enabled, the NFS server maps access from root to the user nobody.

You can view and manage exported file systems by using the exportfs command. For details, see the exportfs(8) man page on your system.
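
For example, the following sketch lists the current exports with their effective options and then re-exports all entries after you edit /etc/exports; the output is illustrative:

# exportfs -v
/nfs/projects	192.0.2.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,root_squash,no_all_squash)
# exportfs -r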

2.7. Configuring an NFSv4-only server

If you do not have any NFSv3 clients in your network, you can configure the NFS server to support only NFSv4. You can also support specific minor protocol versions. Using only NFSv4 on the server reduces the number of ports that are open to the network.

Procedure

  1. Install the nfs-utils package:

    # dnf install nfs-utils
  2. Edit the /etc/nfs.conf file, and make the following changes:

    1. Disable the vers3 parameter in the [nfsd] section to disable NFSv3:

      [nfsd]
      vers3=n
    2. Optional: If you require only specific NFSv4 minor versions, uncomment all vers4.<minor_version> parameters and set them accordingly, for example:

      [nfsd]
      vers3=n
      # vers4=y
      vers4.0=n
      vers4.1=n
      vers4.2=y

      With this configuration, the server provides only NFS version 4.2.

      Important

      If you require only a specific NFSv4 minor version, set only the parameters for the minor versions. Do not uncomment the vers4 parameter to avoid an unpredictable activation or deactivation of minor versions. By default, the vers4 parameter enables or disables all NFSv4 minor versions. However, this behavior changes if you set vers4 in conjunction with other vers parameters.

  3. Disable all NFSv3-related services:

    # systemctl mask --now rpc-statd.service rpcbind.service rpcbind.socket
  4. Configure the rpc.mountd daemon to not listen for NFSv3 mount requests. Create a /etc/systemd/system/nfs-mountd.service.d/v4only.conf file with the following content:

    [Service]
    ExecStart=
    ExecStart=/usr/sbin/rpc.mountd --no-tcp --no-udp
  5. Reload the systemd manager configuration and restart the nfs-mountd service:

    # systemctl daemon-reload
    # systemctl restart nfs-mountd
  6. Optional: Create a directory that you want to share, for example:

    # mkdir -p /nfs/projects/

    If you want to share an existing directory, skip this step.

  7. Set the permissions you require on the /nfs/projects/ directory:

    # chmod 2770 /nfs/projects/
    # chgrp users /nfs/projects/

    These commands set write permissions for the users group on the /nfs/projects/ directory. They ensure that the same group is automatically set on new entries created in this directory.

  8. Add an export point to the /etc/exports file for each directory that you want to share:

    /nfs/projects/     192.0.2.0/24(rw) 2001:db8::/32(rw)

    This entry shares the /nfs/projects/ directory with read and write access to clients in the 192.0.2.0/24 and 2001:db8::/32 subnets.

  9. Open the relevant ports in firewalld:

    # firewall-cmd --permanent --add-service nfs
    # firewall-cmd --reload
  10. Enable and start the NFS server:

    # systemctl enable --now nfs-server

Verification

  • On the server, verify that the server provides only the NFS versions that you have configured:

    # cat /proc/fs/nfsd/versions
    -3 +4 -4.0 -4.1 +4.2
  • On a client, perform the following steps:

    1. Install the nfs-utils package:

      # dnf install nfs-utils
    2. Mount an exported NFS share:

      # mount server.example.com:/nfs/projects/ /mnt/
    3. As a user who is a member of the users group, create a file in /mnt/:

      # touch /mnt/file
    4. List the directory to verify that the file was created:

      # ls -l /mnt/
      total 0
      -rw-r--r--. 1 demo users 0 Jan 16 14:18 file

2.8. Configuring an NFSv3 server with optional NFSv4 support

In a network which still uses NFSv3 clients, configure the server to provide shares by using the NFSv3 protocol. If you also have newer clients in your network, you can additionally enable NFSv4. By default, Red Hat Enterprise Linux NFS clients use the latest NFS version that the server provides.

Procedure

  1. Install the nfs-utils package:

    # dnf install nfs-utils
  2. Optional: By default, NFSv3 and NFSv4 are enabled. If you do not require NFSv4, or require only specific minor versions, edit the /etc/nfs.conf file, uncomment all vers4.<minor_version> parameters, and set them accordingly:

    [nfsd]
    # vers3=y
    # vers4=y
    vers4.0=n
    vers4.1=n
    vers4.2=y

    With this configuration, the server provides only NFS versions 3 and 4.2.

    Important

    If you require only a specific NFSv4 minor version, set only the parameters for the minor versions. Do not uncomment the vers4 parameter to avoid an unpredictable activation or deactivation of minor versions. By default, the vers4 parameter enables or disables all NFSv4 minor versions. However, this behavior changes if you set vers4 in conjunction with other vers parameters.

  3. By default, NFSv3 RPC services use random ports. To enable a firewall configuration, configure fixed port numbers in the /etc/nfs.conf file:

    1. In the [lockd] section, set a fixed port number for the nlockmgr RPC service, for example:

      [lockd]
      port=5555

      With this setting, the service automatically uses this port number for both the UDP and TCP protocols.

    2. In the [statd] section, set a fixed port number for the rpc.statd service, for example:

      [statd]
      port=6666

      With this setting, the service automatically uses this port number for both the UDP and TCP protocols.

  4. Optional: Create a directory that you want to share, for example:

    # mkdir -p /nfs/projects/

    If you want to share an existing directory, skip this step.

  5. Set the permissions you require on the /nfs/projects/ directory:

    # chmod 2770 /nfs/projects/
    # chgrp users /nfs/projects/

    These commands set write permissions for the users group on the /nfs/projects/ directory. They ensure that the same group is automatically set on new entries created in this directory.

  6. Add an export point to the /etc/exports file for each directory that you want to share:

    /nfs/projects/     192.0.2.0/24(rw) 2001:db8::/32(rw)

    This entry shares the /nfs/projects/ directory with read and write access to clients in the 192.0.2.0/24 and 2001:db8::/32 subnets.

  7. Open the relevant ports in firewalld:

    # firewall-cmd --permanent --add-service={nfs,rpc-bind,mountd}
    # firewall-cmd --permanent --add-port={5555/tcp,5555/udp,6666/tcp,6666/udp}
    # firewall-cmd --reload
  8. Enable and start the NFS server:

    # systemctl enable --now rpc-statd nfs-server
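
    Optionally, you can confirm that the nlockmgr and status RPC services listen on the fixed ports that you configured in /etc/nfs.conf. The following output is illustrative and assumes the example ports 5555 and 6666:

    # rpcinfo -p | grep -E 'nlockmgr|status'
        100021    4   udp   5555  nlockmgr
        100021    4   tcp   5555  nlockmgr
        100024    1   udp   6666  status
        100024    1   tcp   6666  status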

Verification

  • On the server, verify that the server provides only the NFS versions that you have configured:

    # cat /proc/fs/nfsd/versions
    +3 +4 -4.0 -4.1 +4.2
  • On a client, perform the following steps:

    1. Install the nfs-utils package:

      # dnf install nfs-utils
    2. Mount an exported NFS share:

      # mount -o vers=<version> server.example.com:/nfs/projects/ /mnt/
    3. Verify that the share was mounted with the specified NFS version:

      # mount | grep "/mnt"
      server.example.com:/nfs/projects/ on /mnt type nfs (rw,relatime,vers=3,...
    4. As a user who is a member of the users group, create a file in /mnt/:

      # touch /mnt/file
    5. List the directory to verify that the file was created:

      # ls -l /mnt/
      total 0
      -rw-r--r--. 1 demo users 0 Jan 16 14:18 file

2.9. Enabling quota support on an NFS server

You can restrict the amount of data a user or a group can store by configuring quotas on the file system. On an NFS server, the rpc-rquotad service ensures that the quota is also applied to users on NFS clients. For more information, see the quota(1) and xfs_quota(8) man pages on your system.

Prerequisites

  • The NFS server is running and configured.
  • Quotas have been configured on the ext or XFS file system.

Procedure

  1. Verify that quotas are enabled on the directories that you export:

    • For an ext file system, enter:

      # quotaon -p /nfs/projects/
      group quota on /nfs/projects (/dev/sdb1) is on
      user quota on /nfs/projects (/dev/sdb1) is on
      project quota on /nfs/projects (/dev/sdb1) is off
    • For an XFS file system, enter:

      # findmnt /nfs/projects
      TARGET        SOURCE    FSTYPE OPTIONS
      /nfs/projects /dev/sdb1 xfs    rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,usrquota,grpquota
  2. Install the quota-rpc package:

    # dnf install quota-rpc
  3. Optional: By default, the quota RPC service runs on port 875. To run the service on a different port, append -p <port_number> to the RPCRQUOTADOPTS variable in the /etc/sysconfig/rpc-rquotad file:

    RPCRQUOTADOPTS="-p <port_number>"
  4. Optional: By default, remote hosts can only read quotas. To allow clients to set quotas, append the -S option to the RPCRQUOTADOPTS variable in /etc/sysconfig/rpc-rquotad:

    RPCRQUOTADOPTS="-S"
  5. Open the port in firewalld:

    # firewall-cmd --permanent --add-port=875/udp
    # firewall-cmd --reload
  6. Enable and start the rpc-rquotad service:

    # systemctl enable --now rpc-rquotad
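
    Optionally, verify on the server that the rquota service is registered on its port. The following output is illustrative and assumes the default port 875:

    # rpcinfo -p | grep rquota
        100011    1   udp    875  rquotad
        100011    2   udp    875  rquotad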

Verification

  1. On the client:

    1. Mount the exported share:

      # mount server.example.com:/nfs/projects/ /mnt/
    2. Display the quota. The command depends on the file system of the exported directory. For example:

      • To display the quota of a specific user on all mounted ext file systems, enter:

        # quota -u <user_name>
        Disk quotas for user demo (uid 1000):
             Filesystem     space     quota     limit     grace     files     quota      limit     grace
        server.example.com:/nfs/projects
                     0K       100M      200M                  0         0         0
      • To display the user and group quota on an XFS file system, enter:

        # xfs_quota -x -c "report -h" /mnt/
        User quota on /nfs/projects (/dev/vdb1)
                    Blocks
        User ID     Used     Soft     Hard     Warn/Grace
        ---------- ---------------------------------
        root        0        0        0        00 [------]
        demo        0        100M     200M     00 [------]

2.10. Enabling NFS over RDMA on an NFS server

Remote Direct Memory Access (RDMA) is a protocol that enables a client system to transfer data directly from a storage server's memory into its own memory. This enhances storage throughput, decreases latency in data transfer between the server and client, and reduces CPU load on both ends. If both the NFS server and clients are connected over RDMA, clients can use NFSoRDMA to mount an exported directory.

Prerequisites

  • The NFS service is running and configured.
  • An InfiniBand or RDMA over Converged Ethernet (RoCE) device is installed on the server.
  • IP over InfiniBand (IPoIB) is configured on the server, and the InfiniBand device has an IP address assigned.

Procedure

  1. Install the rdma-core package:

    # dnf install rdma-core
  2. If the package was already installed, verify that the xprtrdma and svcrdma modules in the /etc/rdma/modules/rdma.conf file are uncommented:

    # NFS over RDMA client support
    xprtrdma
    # NFS over RDMA server support
    svcrdma
  3. Optional: By default, NFS over RDMA uses port 20049. If you want to use a different port, set the rdma-port setting in the [nfsd] section of the /etc/nfs.conf file:

    rdma-port=<port>
  4. Open the NFSoRDMA port in firewalld:

    # firewall-cmd --permanent --add-port={20049/tcp,20049/udp}
    # firewall-cmd --reload

    Adjust the port numbers if you set a different port than 20049.

  5. Restart the nfs-server service:

    # systemctl restart nfs-server

Verification

  1. On a client with InfiniBand hardware, perform the following steps:

    1. Install the following packages:

      # dnf install nfs-utils rdma-core
    2. Mount an exported NFS share over RDMA:

      # mount -o rdma server.example.com:/nfs/projects/ /mnt/

      If you set a port number other than the default (20049), pass port=<port_number> to the command:

      # mount -o rdma,port=<port_number> server.example.com:/nfs/projects/ /mnt/
    3. Verify that the share was mounted with the rdma option:

      # mount | grep "/mnt"
      server.example.com:/nfs/projects/ on /mnt type nfs (...,proto=rdma,...)

2.11. Setting up an NFS server with Kerberos in a Red Hat Enterprise Linux Identity Management domain

If you use Red Hat Enterprise Linux Identity Management (IdM), you can join your NFS server to the IdM domain. This enables you to centrally manage users and groups and to use Kerberos for authentication, integrity protection, and traffic encryption.

Prerequisites

  • The NFS server is enrolled in a Red Hat Enterprise Linux Identity Management (IdM) domain.
  • The NFS server is running and configured.

Procedure

  1. Obtain a Kerberos ticket as an IdM administrator:

    # kinit admin
  2. Create an nfs/<FQDN> service principal:

    # ipa service-add nfs/nfs_server.idm.example.com
  3. Retrieve the nfs service principal from IdM, and store it in the /etc/krb5.keytab file:

    # ipa-getkeytab -s idm_server.idm.example.com -p nfs/nfs_server.idm.example.com -k /etc/krb5.keytab
  4. Optional: Display the principals in the /etc/krb5.keytab file:

    # klist -k /etc/krb5.keytab
    Keytab name: FILE:/etc/krb5.keytab
    KVNO Principal
    ---- --------------------------------------------------------------------------
       1 nfs/nfs_server.idm.example.com@IDM.EXAMPLE.COM
       1 nfs/nfs_server.idm.example.com@IDM.EXAMPLE.COM
       1 nfs/nfs_server.idm.example.com@IDM.EXAMPLE.COM
       1 nfs/nfs_server.idm.example.com@IDM.EXAMPLE.COM
       7 host/nfs_server.idm.example.com@IDM.EXAMPLE.COM
       7 host/nfs_server.idm.example.com@IDM.EXAMPLE.COM
       7 host/nfs_server.idm.example.com@IDM.EXAMPLE.COM
       7 host/nfs_server.idm.example.com@IDM.EXAMPLE.COM

    By default, the IdM client adds the host principal to the /etc/krb5.keytab file when you join the host to IdM. If the host principal is missing, use the ipa-getkeytab -s idm_server.idm.example.com -p host/nfs_server.idm.example.com -k /etc/krb5.keytab command to add it.

  5. Use the ipa-client-automount utility to configure mapping of IdM IDs.

    1. If the client is not in the IdM DNS domain, use the --domain option to specify the primary DNS domain of the IdM deployment. Alternatively, use the --server option to specify the IdM server to connect to:

      # ipa-client-automount --domain idm.example.com

      The --domain option triggers DNS discovery to determine the IdM servers to use.

    2. If the client is already in the IdM DNS domain, run the command without the --domain option:

      # ipa-client-automount
      Searching for IPA server...
      IPA server: DNS discovery
      Location: default
      Continue to configure the system with these values? [no]: yes
      Configured /etc/idmapd.conf
      Restarting sssd, waiting for it to become available.
      Started autofs
  6. Update your /etc/exports file, and add the Kerberos security method to the client options. For example:

    /nfs/projects/      	192.0.2.0/24(rw,sec=krb5i)

    To enable clients to select from multiple security methods, specify them separated by colons:

    /nfs/projects/      	192.0.2.0/24(rw,sec=krb5:krb5i:krb5p)
  7. Reload the exported file systems:

    # exportfs -r
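
    On a client that is enrolled in the same IdM domain and has a valid Kerberos ticket, users can then mount the share with one of the configured methods, for example:

    # mount -o sec=krb5i nfs_server.idm.example.com:/nfs/projects/ /mnt/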

2.12. Configuring an NFS server with TLS support

Without the RPCSEC_GSS protocol, Network File System (NFS) traffic is unencrypted by default. Starting with Red Hat Enterprise Linux 10, you can configure NFS with Transport Layer Security (TLS) to encrypt NFS traffic.

Prerequisites

  • You have configured an NFSv4 server. For instructions, see Configuring an NFSv4-only server.
  • You have a Certificate Authority (CA) certificate.
  • You have installed the ktls-utils package.

Procedure

  1. Create a private key and a certificate signing request (CSR):

    # openssl req -new -newkey rsa:4096 -noenc \
    -keyout /etc/pki/tls/private/server.example.com.key \
    -out /etc/pki/tls/private/server.example.com.csr \
    -subj "/C=US/ST=State/L=City/O=Organization/CN=server.example.com" \
    -addext "subjectAltName=DNS:server.example.com,IP:192.0.2.1"
    Important

    The Common Name (CN) and the DNS entry of the subject alternative name must match the hostname of the server. The IP entry must match the IP address of the host.

  2. Send the /etc/pki/tls/private/server.example.com.csr file to a CA and request a server certificate. Store the received CA certificate and the server certificate on the host.
  3. Import the CA certificate to the system's truststore:

    # cp ca.crt /etc/pki/ca-trust/source/anchors
    # update-ca-trust
  4. Move the server certificate to the /etc/pki/tls/certs/ directory:

    # mv server.example.com.crt /etc/pki/tls/certs/
  5. Ensure the SELinux context is correct on the private key and certificates:

    # restorecon -Rv /etc/pki/tls/certs/
  6. Add the server certificate and private key to the [authenticate.server] section in the /etc/tlshd.conf file:

    x509.certificate= /etc/pki/tls/certs/server.example.com.crt
    x509.private_key= /etc/pki/tls/private/server.example.com.key

    Leave the x509.truststore parameter unset.

  7. Enable and start the tlshd service:

    # systemctl enable --now tlshd.service
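
Optionally, confirm that the TLS handshake daemon is running:

# systemctl is-active tlshd.service
active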

2.13. Configuring an NFS client with TLS support

If the server supports Network File System (NFS) with Transport Layer Security (TLS) encryption, configure the client and mount the share by using the xprtsec=tls option. This includes importing the Certificate Authority (CA) certificate, enabling the TLS daemon, and mounting the share with encryption. Using TLS helps protect data in transit between the client and server.

Prerequisites

  • You have configured an NFS server with TLS support. For instructions, see Configuring an NFS server with TLS support.
  • You have the Certificate Authority (CA) certificate that signed the server certificate.
  • You have installed the ktls-utils package on the client.

Procedure

  1. Import the Certificate Authority (CA) certificate to the system's truststore:

    # cp ca.crt /etc/pki/ca-trust/source/anchors
    # update-ca-trust
  2. Enable and start the tlshd service:

    # systemctl enable --now tlshd.service
  3. Mount an NFS share by using TLS encryption:

    # mount -o xprtsec=tls server.example.com:/nfs/projects/ /mnt/

Verification

  • Verify that the client successfully mounted the NFS share with TLS support:

    # journalctl -u tlshd
    …
    Apr 01 08:37:56 client.example.com tlshd[10688]: Handshake with server.example.com (192.0.2.1) was successful

2.14. Configuring an NFS client with mutual TLS authentication

If the server supports Network File System (NFS) with Transport Layer Security (TLS) encryption, you can configure the NFS server and client to authenticate each other by using the TLS protocol (mutual TLS). The configuration includes creating and installing certificates, configuring the TLS daemon, and mounting the NFS share with encryption.

Prerequisites

  • You have configured an NFS server with TLS support. For instructions, see Configuring an NFS server with TLS support.
  • You have installed the ktls-utils package on the client.

Procedure

  1. Create a private key and a certificate signing request (CSR):

    # openssl req -new -newkey rsa:4096 -noenc \
    -keyout /etc/pki/tls/private/client.example.com.key \
    -out /etc/pki/tls/private/client.example.com.csr \
    -subj "/C=US/ST=State/L=City/O=Organization/CN=client.example.com" \
    -addext "subjectAltName=DNS:client.example.com,IP:192.0.2.2"
    Important

    The Common Name (CN) and the DNS entry of the subject alternative name must match the hostname of the client. The IP entry must match the IP address of the host.

  2. Send the /etc/pki/tls/private/client.example.com.csr file to a Certificate Authority (CA) and request a client certificate. Store the received CA certificate and the client certificate on the host.
  3. Import the CA certificate to the system's truststore:

    # cp ca.crt /etc/pki/ca-trust/source/anchors
    # update-ca-trust
  4. Move the client certificate to the /etc/pki/tls/certs/ directory:

    # mv client.example.com.crt /etc/pki/tls/certs/
  5. Ensure the SELinux context is correct on the private key and certificates:

    # restorecon -Rv /etc/pki/tls/certs/
  6. Add the client certificate and private key to the [authenticate.client] section in the /etc/tlshd.conf file:

    x509.certificate= /etc/pki/tls/certs/client.example.com.crt
    x509.private_key= /etc/pki/tls/private/client.example.com.key

    Leave the x509.truststore parameter unset.

  7. Enable and start the tlshd service:

    # systemctl enable --now tlshd.service
  8. Mount an NFS share by using TLS encryption:

    # mount -o xprtsec=mtls server.example.com:/nfs/projects/ /mnt/

Verification

  • Verify that the client successfully mounted the NFS share with TLS support:

    # journalctl -u tlshd
    …
    Apr 01 08:37:56 client.example.com tlshd[10688]: Handshake with server.example.com (192.0.2.1) was successful