Chapter 2. Deploying an NFS server


By using the Network File System (NFS) protocol, remote users can mount shared directories over a network and use them as if they were mounted locally. This enables you to consolidate resources onto centralized servers on the network.

2.1. Key features of minor NFSv4 versions

Each minor NFSv4 version brings enhancements aimed at improving performance and security. Use these improvements to take full advantage of NFSv4 and to ensure efficient and reliable file sharing across networks.

Key features of NFSv4.2

Server-side copy
Server-side copy is a capability of the NFS server to copy files on the server without transferring the data back and forth over the network.
Sparse files
Enables files to have one or more empty spaces, or gaps, which are unallocated or uninitialized data blocks consisting only of zeros. This enables applications to map out the location of holes in the sparse file.
Space reservation
Clients can reserve or allocate space on the storage server before writing data. This prevents the server from running out of space.
Labeled NFS
Enforces data access rights and enables SELinux labels between a client and a server for individual files on an NFS file system.
Layout enhancements
Provides functionality to enable Parallel NFS (pNFS) servers to collect better performance statistics.

Key features of NFSv4.1

Client-side support for pNFS
Support for high-speed I/O to clustered servers enables you to store data on multiple machines, provides direct access to data, and synchronizes updates to metadata.
Sessions

Sessions maintain the state of the server relative to the connections belonging to a client. They provide two key features:

  • Exactly-once semantics (EOS), which helps to distinguish the response of an old operation from that of a new one.
  • Binding of multiple network connections for NFS operations, which improves performance.

Key features of NFSv4.0

RPC and security
The RPCSEC_GSS framework enhances RPC security. The NFSv4 protocol introduces a new operation for in-band security negotiation. This enables clients to query server policies for accessing file system resources securely.
Procedure and operation structure
NFSv4.0 introduces the COMPOUND procedure, which enables clients to merge multiple operations into a single request to reduce the number of RPCs.
File system model

NFSv4.0 retains the hierarchical file system model, treating files as byte streams and encoding names with UTF-8 for internationalization.

  • File handle types

    With volatile file handles, servers can adjust to file system changes and enable clients to adapt as needed without requiring permanent file handles.

  • Attribute types

    The file attribute structure includes required, recommended, and named attributes, each serving distinct purposes. Required attributes, derived from NFSv3, are essential for distinguishing file types, while recommended attributes, such as ACLs, provide enhanced access control.

  • Multi-server namespace

Namespaces span multiple servers, simplify file system transfers based on attributes, and support referrals, redundancy, and seamless server migration.

OPEN and CLOSE operations
These operations combine file lookup, creation, and share reservation at a single point, ensuring correct file sharing semantics.
File locking
File locking is part of the protocol, eliminating the need for RPC callbacks. File lock state is managed by the server under a lease-based model, where failure to renew the lease may result in state release by the server.
Client caching and delegation
Caching resembles previous versions, with client-determined timeouts for attribute and directory caching. Delegations in NFSv4.0 allow the server to assign certain responsibilities to the client, guaranteeing specific file sharing semantics and enabling local file operations without immediate server interaction.

2.2. The AUTH_SYS authentication method

The AUTH_SYS method, which is also known as AUTH_UNIX, is a client authentication mechanism. With AUTH_SYS, the client sends the User ID (UID) and Group ID (GID) of the user to the server to verify the user's identity and permissions when accessing files. It is considered less secure because it relies on client-provided information, making it susceptible to unauthorized access if misconfigured.

Mapping mechanisms ensure that NFS clients can access files with the appropriate permissions on the server, even if the UID and GID assignments differ between systems. UIDs and GIDs are mapped between NFS client and server by the following mechanisms:

Direct mapping

UIDs and GIDs are directly mapped by NFS servers and clients between local and remote systems. This requires consistent UID and GID assignments across all systems participating in NFS file sharing. For example, a user with UID 1000 on a client can only access the files on a share that a user with UID 1000 on the server has access to.

For simplified ID management in an NFS environment, administrators often rely on centralized services, such as LDAP or Network Information Service (NIS), to manage UID and GID mappings across multiple systems.

User and Group ID mapping
NFS servers and clients can use the idmapd service to translate UIDs and GIDs between different systems for consistent identification and permission assignment.
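
For example, the idmapd service reads the NFSv4 ID mapping domain from the /etc/idmapd.conf file. The following is a minimal sketch that assumes example.com as the domain name; the domain must be identical on the NFS server and on all clients:

[General]
Domain = example.com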

2.3. The AUTH_GSS authentication method

Kerberos is a network authentication protocol that allows secure authentication for clients and servers over a non-secure network. It uses symmetric key cryptography and requires a trusted Key Distribution Center (KDC) to authenticate users and services.

Unlike AUTH_SYS, with the RPCSEC_GSS Kerberos mechanism, the server does not depend on the client to correctly represent which user is accessing the file. Instead, cryptography is used to authenticate users to the server, which prevents a malicious client from impersonating a user without having that user’s Kerberos credentials.

In the /etc/exports file, the sec option defines one or multiple methods of Kerberos security that the share should provide, and clients can mount the share with one of these methods. The sec option supports the following values:

  • sys: no cryptographic protection (default)
  • krb5: authentication only
  • krb5i: authentication and integrity protection
  • krb5p: authentication, integrity checking, and traffic encryption

Note that the more cryptographic functionality a method provides, the lower is the performance.
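
For example, the following sketch of an /etc/exports entry and the matching client mount command reuses the /nfs/projects share and the example hostnames from later sections, and provides Kerberos authentication with integrity protection:

/nfs/projects     client.example.com(rw,sec=krb5i)

# mount -o sec=krb5i server.example.com:/nfs/projects/ /mnt/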

2.4. File permissions on exported file systems

File permissions on exported file systems determine access rights to files and directories for clients accessing them over NFS.

Once the NFS file system is mounted by a remote host, the only protection each shared file has is its file system permissions. If two users that share the same User ID (UID) value mount the same NFS file system on different client systems, they can modify each other’s files.

NFS treats the root user on the client as equivalent to the root user on the server. However, by default, the NFS server maps root to the nobody account when accessing an NFS share. The root_squash option controls this behavior.
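
For example, the following sketch of /etc/exports entries keeps the default root_squash mapping for the first share and disables it for a hypothetical /nfs/admin share that only a trusted administration host is allowed to mount:

/nfs/projects     192.0.2.0/24(rw)
/nfs/admin        admin.example.com(rw,no_root_squash)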

2.5. Services required on an NFS server

Red Hat Enterprise Linux (RHEL) uses a combination of a kernel module and user-space processes to provide NFS file shares:

Table 2.1. Services required on an NFS server
Service name (NFS versions): Description

rpcbind (3)
This process accepts port reservations from local remote procedure call (RPC) services and makes them available, or advertises them, so that the corresponding remote RPC services can access them. The rpcbind service responds to requests for RPC services and sets up connections to the requested RPC service.

rpc.mountd (3, 4)
This service processes MOUNT requests from NFSv3 clients, and NFSv4 servers use internal functions of this service. It checks that the requested NFS share is currently exported by the NFS server and that the client is allowed to access it.

rpc.nfsd (3, 4)
This process advertises the explicit NFS versions and protocols the server defines. It works with the kernel to meet the dynamic demands of NFS clients, such as providing server threads each time an NFS client connects. The nfs-server service starts this process.

rpc.rquotad (3, 4)
This service provides user quota information for remote users.

rpc.idmapd (4)
This process provides NFSv4 client and server upcalls, which map between NFSv4 names (strings in the form of user@domain) and local user and group IDs.

gssproxy (3, 4)
This service handles krb5 authentication on behalf of rpc.nfsd.

nfsdcld (4)
This service is the NFSv4 client tracking daemon that prevents the server from granting lock reclaims when other clients have taken conflicting locks during a network partition combined with a server reboot.

rpc.statd (3)
This service provides notification to other NFSv3 clients when the local host reboots, and to the kernel when a remote NFSv3 host reboots.

Table 2.2. Modules required on an NFS server
Module name (NFS versions): Description

nfsd (3, 4)
The NFS kernel module that services requests for shared NFS file systems.

lockd (3)
This kernel module implements the Network Lock Manager (NLM) protocol, which enables clients to lock files on the server. RHEL loads the module automatically when the NFS server runs.
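
To check which of these RPC services are currently registered on a running server, you can query rpcbind with the rpcinfo utility. The following output is only an illustrative sketch; the registered programs and ports depend on the enabled NFS versions and your configuration:

# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100003    4   tcp   2049  nfs
    100005    3   tcp  20048  mountd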

2.6. The /etc/exports configuration file

The /etc/exports file controls which directories the server exports. Each line contains an export point, a whitespace-separated list of clients that are allowed to mount the directory, and options for each of the clients:

<directory> <host_or_network_1>(<options_1>) <host_or_network_n>(<options_n>)...

The following are the individual parts of an /etc/exports entry:

<directory>
The directory that is being exported.
<host_or_network>
The host or network to which the export is being shared. For example, you can specify a hostname, an IP address, or an IP network.
<options>
The options for the host or network.

Adding a space between a client and its options changes the behavior. For example, the following lines do not have the same meaning:

/projects	client.example.com(rw)
/projects	client.example.com (rw)

In the first line, the server allows only client.example.com to mount the /projects directory in read-write mode, and no other hosts can mount the share. However, due to the space between client.example.com and (rw) in the second line, the server exports the directory to client.example.com in read-only mode (default setting), but all other hosts can mount the share in read-write mode.

The NFS server uses the following default settings for each exported directory:

Table 2.3. Default options of entries in /etc/exports
Default setting: Description

ro
Exports the directory in read-only mode.

sync
The NFS server does not reply to requests before changes made by previous requests are written to disk.

wdelay
The server delays writing to the disk if it suspects another write request is pending.

root_squash
Prevents the root user on clients from having root permissions on an exported directory. With root_squash enabled, the NFS server maps access from root to the user nobody.
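
You can override these defaults per client in the /etc/exports file. For example, the following sketch, which reuses the example /nfs/projects share, exports the directory in read-write mode and disables the write delay while keeping the other defaults:

/nfs/projects     client.example.com(rw,no_wdelay)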

2.7. Configuring an NFSv4-only server

If you do not have any NFSv3 clients in your network, you can configure the NFS server to support only NFSv4 or specific minor protocol versions of it. Using only NFSv4 on the server reduces the number of ports that are open to the network.

Procedure

  1. Install the nfs-utils package:

    # dnf install nfs-utils
  2. Edit the /etc/nfs.conf file, and make the following changes:

    1. Set the vers3 parameter in the [nfsd] section to n to disable NFSv3:

      [nfsd]
      vers3=n
    2. Optional: If you require only specific NFSv4 minor versions, uncomment all vers4.<minor_version> parameters and set them accordingly, for example:

      [nfsd]
      vers3=n
      # vers4=y
      vers4.0=n
      vers4.1=n
      vers4.2=y

      With this configuration, the server provides only NFS version 4.2.

      Important

      If you require only a specific NFSv4 minor version, set only the parameters for the minor versions. Do not uncomment the vers4 parameter to avoid an unpredictable activation or deactivation of minor versions. By default, the vers4 parameter enables or disables all NFSv4 minor versions. However, this behavior changes if you set vers4 in conjunction with other vers parameters.

  3. Disable all NFSv3-related services:

    # systemctl mask --now rpc-statd.service rpcbind.service rpcbind.socket
  4. Configure the rpc.mountd daemon to not listen for NFSv3 mount requests. Create a /etc/systemd/system/nfs-mountd.service.d/v4only.conf file with the following content:

    [Service]
    ExecStart=
    ExecStart=/usr/sbin/rpc.mountd --no-tcp --no-udp
  5. Reload the systemd manager configuration and restart the nfs-mountd service:

    # systemctl daemon-reload
    # systemctl restart nfs-mountd
  6. Optional: Create a directory that you want to share, for example:

    # mkdir -p /nfs/projects/

    If you want to share an existing directory, skip this step.

  7. Set the permissions you require on the /nfs/projects/ directory:

    # chmod 2770 /nfs/projects/
    # chgrp users /nfs/projects/

    These commands set write permissions for the users group on the /nfs/projects/ directory and ensure that the same group is automatically set on new entries created in this directory.

  8. Add an export point to the /etc/exports file for each directory that you want to share:

    /nfs/projects/     192.0.2.0/24(rw) 2001:db8::/32(rw)

    This entry shares the /nfs/projects/ directory with read and write access for clients in the 192.0.2.0/24 and 2001:db8::/32 subnets.

  9. Open the relevant ports in firewalld:

    # firewall-cmd --permanent --add-service nfs
    # firewall-cmd --reload
  10. Enable and start the NFS server:

    # systemctl enable --now nfs-server

Verification

  • On the server, verify that the server provides only the NFS versions that you have configured:

    # cat /proc/fs/nfsd/versions
    -3 +4 -4.0 -4.1 +4.2
  • On a client, perform the following steps:

    1. Install the nfs-utils package:

      # dnf install nfs-utils
    2. Mount an exported NFS share:

      # mount server.example.com:/nfs/projects/ /mnt/
    3. As a user who is a member of the users group, create a file in /mnt/:

      # touch /mnt/file
    4. List the directory to verify that the file was created:

      # ls -l /mnt/
      total 0
      -rw-r--r--. 1 demo users 0 Jan 16 14:18 file

2.8. Configuring an NFSv3 server with optional NFSv4 support

In a network that still uses NFSv3 clients, configure the server to provide shares by using the NFSv3 protocol. If you also have newer clients in your network, you can additionally enable NFSv4. By default, Red Hat Enterprise Linux NFS clients use the latest NFS version that the server provides.

Procedure

  1. Install the nfs-utils package:

    # dnf install nfs-utils
  2. Optional: By default, NFSv3 and NFSv4 are enabled. If you do not require NFSv4, or require only specific minor versions, edit the /etc/nfs.conf file, uncomment all vers4.<minor_version> parameters, and set them accordingly:

    [nfsd]
    # vers3=y
    # vers4=y
    vers4.0=n
    vers4.1=n
    vers4.2=y

    With this configuration, the server provides only NFS versions 3 and 4.2.

    Important

    If you require only a specific NFSv4 minor version, set only the parameters for the minor versions. Do not uncomment the vers4 parameter to avoid an unpredictable activation or deactivation of minor versions. By default, the vers4 parameter enables or disables all NFSv4 minor versions. However, this behavior changes if you set vers4 in conjunction with other vers parameters.

  3. By default, NFSv3 RPC services use random ports. To be able to configure the firewall, set fixed port numbers in the /etc/nfs.conf file:

    1. In the [lockd] section, set a fixed port number for the nlockmgr RPC service, for example:

      [lockd]
      port=5555

      With this setting, the service automatically uses this port number for both the UDP and TCP protocols.

    2. In the [statd] section, set a fixed port number for the rpc.statd service, for example:

      [statd]
      port=6666

      With this setting, the service automatically uses this port number for both the UDP and TCP protocols.

  4. Optional: Create a directory that you want to share, for example:

    # mkdir -p /nfs/projects/

    If you want to share an existing directory, skip this step.

  5. Set the permissions you require on the /nfs/projects/ directory:

    # chmod 2770 /nfs/projects/
    # chgrp users /nfs/projects/

    These commands set write permissions for the users group on the /nfs/projects/ directory and ensure that the same group is automatically set on new entries created in this directory.

  6. Add an export point to the /etc/exports file for each directory that you want to share:

    /nfs/projects/     192.0.2.0/24(rw) 2001:db8::/32(rw)

    This entry shares the /nfs/projects/ directory with read and write access for clients in the 192.0.2.0/24 and 2001:db8::/32 subnets.

  7. Open the relevant ports in firewalld:

    # firewall-cmd --permanent --add-service={nfs,rpc-bind,mountd}
    # firewall-cmd --permanent --add-port={5555/tcp,5555/udp,6666/tcp,6666/udp}
    # firewall-cmd --reload
  8. Enable and start the NFS server:

    # systemctl enable --now rpc-statd nfs-server

Verification

  • On the server, verify that the server provides only the NFS versions that you have configured:

    # cat /proc/fs/nfsd/versions
    +3 +4 -4.0 -4.1 +4.2
  • On a client, perform the following steps:

    1. Install the nfs-utils package:

      # dnf install nfs-utils
    2. Mount an exported NFS share:

      # mount -o vers=<version> server.example.com:/nfs/projects/ /mnt/
    3. Verify that the share was mounted with the specified NFS version:

      # mount | grep "/mnt"
      server.example.com:/nfs/projects/ on /mnt type nfs (rw,relatime,vers=3,...
    4. As a user who is a member of the users group, create a file in /mnt/:

      # touch /mnt/file
    5. List the directory to verify that the file was created:

      # ls -l /mnt/
      total 0
      -rw-r--r--. 1 demo users 0 Jan 16 14:18 file

2.9. Enabling quota support on an NFS server

If you want to restrict the amount of data a user or a group can store, you can configure quotas on the file system. On an NFS server, the rpc-rquotad service ensures that the quota is also applied to users on NFS clients.
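
For example, on an ext file system you could set a 100 MiB soft and a 200 MiB hard block limit for a user with the setquota utility. This is only a sketch that assumes the demo user and the /nfs/projects/ directory used later in this chapter; the limits are specified in 1 KiB blocks:

# setquota -u demo 102400 204800 0 0 /nfs/projects/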

Prerequisites

  • The NFS server is running and configured.
  • Quotas have been configured on the ext or XFS file system.

Procedure

  1. Verify that quotas are enabled on the directories that you export:

    • For an ext file system, enter:

      # quotaon -p /nfs/projects/
      group quota on /nfs/projects (/dev/sdb1) is on
      user quota on /nfs/projects (/dev/sdb1) is on
      project quota on /nfs/projects (/dev/sdb1) is off
    • For an XFS file system, enter:

      # findmnt /nfs/projects
      TARGET        SOURCE    FSTYPE OPTIONS
      /nfs/projects /dev/sdb1 xfs    rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,usrquota,grpquota
  2. Install the quota-rpc package:

    # dnf install quota-rpc
  3. Optional: By default, the quota RPC service runs on port 875. If you want to run the service on a different port, append -p <port_number> to the RPCRQUOTADOPTS variable in the /etc/sysconfig/rpc-rquotad file:

    RPCRQUOTADOPTS="-p __<port_number>__"
  4. Optional: By default, remote hosts can only read quotas. To allow clients to set quotas, append the -S option to the RPCRQUOTADOPTS variable in the /etc/sysconfig/rpc-rquotad file:

    RPCRQUOTADOPTS="-S"
  5. Open the port in firewalld:

    # firewall-cmd --permanent --add-port=875/udp
    # firewall-cmd --reload
  6. Enable and start the rpc-rquotad service:

    # systemctl enable --now rpc-rquotad

Verification

  1. On the client:

    1. Mount the exported share:

      # mount server.example.com:/nfs/projects/ /mnt/
    2. Display the quota. The command depends on the file system of the exported directory. For example:

      • To display the quota of a specific user on all mounted ext file systems, enter:

        # quota -u <user_name>
        Disk quotas for user demo (uid 1000):
             Filesystem     space     quota     limit     grace     files     quota      limit     grace
        server.example.com:/nfs/projects
                     0K       100M      200M                  0         0         0
      • To display the user and group quota on an XFS file system, enter:

        # xfs_quota -x -c "report -h" /mnt/
        User quota on /nfs/projects (/dev/vdb1)
                    Blocks
        User ID     Used     Soft     Hard     Warn/Grace
        ---------- ---------------------------------
        root        0        0        0        00 [------]
        demo        0        100M     200M     00 [------]

2.10. Enabling NFS over RDMA on an NFS server

Remote Direct Memory Access (RDMA) is a protocol that enables a client system to directly transfer data from the memory of a storage server into its own memory. This enhances storage throughput, decreases latency in data transfer between the server and client, and reduces CPU load on both ends. If both the NFS server and clients are connected over RDMA, clients can use NFSoRDMA to mount an exported directory.

Prerequisites

  • The NFS server is running and configured.
  • An InfiniBand or RDMA over Converged Ethernet (RoCE) device is installed on the server.
  • IP over InfiniBand (IPoIB) is configured on the server, and the InfiniBand device has an IP address assigned.

Procedure

  1. Install the rdma-core package:

    # dnf install rdma-core
  2. If the package was already installed, verify that the xprtrdma and svcrdma modules in the /etc/rdma/modules/rdma.conf file are uncommented:

    # NFS over RDMA client support
    xprtrdma
    # NFS over RDMA server support
    svcrdma
  3. Optional: By default, NFS over RDMA uses port 20049. If you want to use a different port, set the rdma-port parameter in the [nfsd] section of the /etc/nfs.conf file:

    rdma-port=<port>
  4. Open the NFSoRDMA port in firewalld:

    # firewall-cmd --permanent --add-port={20049/tcp,20049/udp}
    # firewall-cmd --reload

    Adjust the port numbers if you set a different port than 20049.

  5. Restart the nfs-server service:

    # systemctl restart nfs-server

Verification

  1. On a client with InfiniBand hardware, perform the following steps:

    1. Install the following packages:

      # dnf install nfs-utils rdma-core
    2. Mount an exported NFS share over RDMA:

      # mount -o rdma server.example.com:/nfs/projects/ /mnt/

      If you set a port number other than the default (20049), pass port=<port_number> to the command:

      # mount -o rdma,port=<port_number> server.example.com:/nfs/projects/ /mnt/
    3. Verify that the share was mounted with the rdma option:

      # mount | grep "/mnt"
      server.example.com:/nfs/projects/ on /mnt type nfs (...,proto=rdma,...)

2.11. Setting up an NFS server with Kerberos in a Red Hat Enterprise Linux Identity Management domain

If you use Red Hat Enterprise Linux Identity Management (IdM), you can join your NFS server to the IdM domain. This enables you to centrally manage users and groups and to use Kerberos for authentication, integrity protection, and traffic encryption.

Prerequisites

  • The NFS server is enrolled in a Red Hat Enterprise Linux Identity Management (IdM) domain.
  • The NFS server is running and configured.

Procedure

  1. Obtain a Kerberos ticket as an IdM administrator:

    # kinit admin
  2. Create an nfs/<FQDN> service principal:

    # ipa service-add nfs/nfs_server.idm.example.com
  3. Retrieve the nfs service principal from IdM, and store it in the /etc/krb5.keytab file:

    # ipa-getkeytab -s idm_server.idm.example.com -p nfs/nfs_server.idm.example.com -k /etc/krb5.keytab
  4. Optional: Display the principals in the /etc/krb5.keytab file:

    # klist -k /etc/krb5.keytab
    Keytab name: FILE:/etc/krb5.keytab
    KVNO Principal
    ---- --------------------------------------------------------------------------
       1 nfs/nfs_server.idm.example.com@IDM.EXAMPLE.COM
       1 nfs/nfs_server.idm.example.com@IDM.EXAMPLE.COM
       1 nfs/nfs_server.idm.example.com@IDM.EXAMPLE.COM
       1 nfs/nfs_server.idm.example.com@IDM.EXAMPLE.COM
       7 host/nfs_server.idm.example.com@IDM.EXAMPLE.COM
       7 host/nfs_server.idm.example.com@IDM.EXAMPLE.COM
       7 host/nfs_server.idm.example.com@IDM.EXAMPLE.COM
       7 host/nfs_server.idm.example.com@IDM.EXAMPLE.COM

    By default, the IdM client adds the host principal to the /etc/krb5.keytab file when you join the host to the IdM domain. If the host principal is missing, use the ipa-getkeytab -s idm_server.idm.example.com -p host/nfs_server.idm.example.com -k /etc/krb5.keytab command to add it.

  5. Use the ipa-client-automount utility to configure mapping of IdM IDs:

    #  ipa-client-automount
    Searching for IPA server...
    IPA server: DNS discovery
    Location: default
    Continue to configure the system with these values? [no]: yes
    Configured /etc/idmapd.conf
    Restarting sssd, waiting for it to become available.
    Started autofs
  6. Update your /etc/exports file, and add the Kerberos security method to the client options. For example:

    /nfs/projects/      	192.0.2.0/24(rw,sec=krb5i)

    If you want your clients to be able to select from multiple security methods, specify them separated by colons:

    /nfs/projects/      	192.0.2.0/24(rw,sec=krb5:krb5i:krb5p)
  7. Reload the exported file systems:

    # exportfs -r

2.12. Configuring an NFS server with TLS support

Without the RPCSEC_GSS protocol, NFS traffic is unencrypted by default. Starting with Red Hat Enterprise Linux 10, you can configure NFS with TLS to encrypt NFS traffic.

Prerequisites

  • You have configured an NFSv4 server. For instructions, see Configuring an NFSv4-only server.
  • You have a Certificate Authority (CA) certificate.
  • You have installed the ktls-utils package.

Procedure

  1. Create a private key and a certificate signing request (CSR):

    # openssl req -new -newkey rsa:4096 -noenc \
    -keyout /etc/pki/tls/private/server.example.com.key \
    -out /etc/pki/tls/private/server.example.com.csr \
    -subj "/C=US/ST=State/L=City/O=Organization/CN=server.example.com" \
    -addext "subjectAltName=DNS:server.example.com,IP:192.0.2.1"
    Important

    The Common Name (CN) and the DNS entry in the subject alternative name must match the hostname of the server. The IP entry must match the IP address of the server.

  2. Send the /etc/pki/tls/private/server.example.com.csr file to a CA and request a server certificate. Store the received CA certificate and the server certificate on the host.
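
    For example, if you operate your own internal CA, you could sign the request with OpenSSL 3.x. This is only a sketch: ca.crt and ca.key are placeholder names for your CA certificate and key, and the -copy_extensions option preserves the subject alternative names from the CSR:

    # openssl x509 -req -in /etc/pki/tls/private/server.example.com.csr \
    -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 \
    -copy_extensions copyall -out server.example.com.crt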
  3. Import the CA certificate to the system's truststore:

    # cp ca.crt /etc/pki/ca-trust/source/anchors
    # update-ca-trust
  4. Move the server certificate to the /etc/pki/tls/certs/ directory:

    # mv server.example.com.crt /etc/pki/tls/certs/
  5. Ensure the SELinux context is correct on the private key and certificates:

    # restorecon -Rv /etc/pki/tls/certs/
  6. Add the server certificate and private key to the [authenticate.server] section in the /etc/tlshd.conf file:

    x509.certificate= /etc/pki/tls/certs/server.example.com.crt
    x509.private_key= /etc/pki/tls/private/server.example.com.key

    Leave the x509.truststore parameter unset.

  7. Enable and start the tlshd service:

    # systemctl enable --now tlshd.service

2.13. Configuring an NFS client with TLS support

If the server supports NFS with TLS encryption, you can configure the client accordingly and use the xprtsec=tls parameter to mount it with TLS support.

Prerequisites

  • The NFS server is configured with TLS support. For instructions, see Configuring an NFS server with TLS support.
  • You have the Certificate Authority (CA) certificate that was used for the server certificate.
  • You have installed the ktls-utils package.

Procedure

  1. Import the Certificate Authority (CA) certificate to the system's truststore:

    # cp ca.crt /etc/pki/ca-trust/source/anchors
    # update-ca-trust
  2. Enable and start the tlshd service:

    # systemctl enable --now tlshd.service
  3. Mount an NFS share by using TLS encryption:

    # mount -o xprtsec=tls server.example.com:/nfs/projects/ /mnt/

Verification

  • Verify that the client successfully mounted the NFS share with TLS support:

    # journalctl -u tlshd
    …
    Apr 01 08:37:56 client.example.com tlshd[10688]: Handshake with server.example.com (192.0.2.1) was successful

2.14. Configuring an NFS client with mutual TLS support

If the server supports NFS with TLS encryption, you can configure the NFS server and client to authenticate each other by using the TLS protocol.

Prerequisites

  • The NFS server is configured with TLS support. For instructions, see Configuring an NFS server with TLS support.
  • You have installed the ktls-utils package on the client.

Procedure

  1. Create a private key and a certificate signing request (CSR):

    # openssl req -new -newkey rsa:4096 -noenc \
    -keyout /etc/pki/tls/private/client.example.com.key \
    -out /etc/pki/tls/private/client.example.com.csr \
    -subj "/C=US/ST=State/L=City/O=Organization/CN=client.example.com" \
    -addext "subjectAltName=DNS:client.example.com,IP:192.0.2.2"
    Important

    The Common Name (CN) and the DNS entry in the subject alternative name must match the hostname of the client. The IP entry must match the IP address of the client.

  2. Send the /etc/pki/tls/private/client.example.com.csr file to a Certificate Authority (CA) and request a client certificate. Store the received CA certificate and the client certificate on the host.
  3. Import the CA certificate to the system's truststore:

    # cp ca.crt /etc/pki/ca-trust/source/anchors
    # update-ca-trust
  4. Move the client certificate to the /etc/pki/tls/certs/ directory:

    # mv client.example.com.crt /etc/pki/tls/certs/
  5. Ensure the SELinux context is correct on the private key and certificates:

    # restorecon -Rv /etc/pki/tls/certs/
  6. Add the client certificate and private key to the [authenticate.client] section in the /etc/tlshd.conf file:

    x509.certificate= /etc/pki/tls/certs/client.example.com.crt
    x509.private_key= /etc/pki/tls/private/client.example.com.key

    Leave the x509.truststore parameter unset.

  7. Enable and start the tlshd service:

    # systemctl enable --now tlshd.service
  8. Mount an NFS share by using TLS encryption:

    # mount -o xprtsec=mtls server.example.com:/nfs/projects/ /mnt/

Verification

  • Verify that the client successfully mounted the NFS share with TLS support:

    # journalctl -u tlshd
    …
    Apr 01 08:37:56 client.example.com tlshd[10688]: Handshake with server.example.com (192.0.2.1) was successful