Chapter 2. Deploying an NFS server
By using the Network File System (NFS) protocol, remote users can mount shared directories over a network and use them as if they were mounted locally. This enables you to consolidate resources onto centralized servers on the network.
2.1. Key features of minor NFSv4 versions
Each minor NFSv4 version brings enhancements aimed at improving performance and security. Use these improvements to take full advantage of NFSv4 and ensure efficient and reliable file sharing across networks.
Key features of NFSv4.2
- Server-side copy
- Server-side copy is a capability of the NFS server to copy files without transferring the data back and forth over the network.
- Sparse files
- Enables files to have one or more empty spaces, or gaps, which are unallocated or uninitialized data blocks consisting of zeros. Applications can then map out the location of holes in the sparse file.
- Space reservation
- Clients can reserve or allocate space on the storage server before writing data. This prevents the server from running out of space.
- Labeled NFS
- Enforces data access rights and enables SELinux labels between a client and server for individual files on an NFS file system.
- Layout enhancements
- Provides functionality to enable Parallel NFS (pNFS) servers to collect better performance statistics.
Key features of NFSv4.1
- Client-side support for pNFS
- High-speed I/O support to clustered servers enables you to store data on multiple machines and provide direct access to data. It also synchronizes updates to metadata.
- Sessions
- Sessions maintain the state of the server relative to the connections belonging to a client. They provide two key features:
  - Exactly-once semantics (EOS), which helps to distinguish between the response of an old and a new operation.
  - Binding of multiple network connections for NFS operations, which improves performance.
Key features of NFSv4.0
- RPC and security
- The `RPCSEC_GSS` framework enhances Remote Procedure Call (RPC) security. The NFSv4 protocol introduces a new operation for in-band security negotiation. This enables clients to query server policies for accessing file system resources securely.
- Procedure and operation structure
- NFSv4.0 introduces the `COMPOUND` procedure, which enables clients to merge multiple operations into a single request to reduce RPCs.
- File system model
- NFSv4.0 retains the hierarchical file system model, treating files as byte streams and encoding names with UTF-8 for internationalization.
- File handle types
- With volatile file handles, servers can adjust to file system changes, and clients can adapt as needed without requiring permanent handles.
- Attribute types
- The file attribute structure includes required, recommended, and named attributes, each serving distinct purposes. Required attributes, derived from NFSv3, are essential for distinguishing file types. Recommended attributes, such as Access Control Lists (ACLs), provide enhanced access control.
- Multi-server namespace
- Namespaces can span multiple servers. This simplifies file system transfers based on attributes and supports referrals, redundancy, and seamless server migration.
- OPEN and CLOSE operations
- These operations can combine file lookup, creation, and semantic sharing at a single point, ensuring correct file sharing semantics.
- File locking
- File locking is part of the protocol, eliminating the need for RPC callbacks. File lock state is managed by the server under a lease-based model. Failure to renew the lease may result in state release by the server.
- Client caching and delegation
- Caching resembles previous versions, with client-determined timeouts for attribute and directory caching. Delegations in NFSv4.0 allow the server to assign certain responsibilities to the client. This guarantees specific file sharing semantics and enables local file operations without immediate server interaction.
2.2. The AUTH_SYS authentication method
The AUTH_SYS method, which is also known as AUTH_UNIX, is a client authentication mechanism. With AUTH_SYS, the client sends the User ID (UID) and Group ID (GID) of the user to the server to verify its identity and permissions when accessing files.
AUTH_SYS is considered less secure because it relies on client-provided information, which makes it susceptible to unauthorized access if it is misconfigured.
Mapping mechanisms ensure that NFS clients can access files with the appropriate permissions on the server, even if the UID and GID assignments differ between systems. UIDs and GIDs are mapped between NFS client and server by the following mechanisms:
- Direct mapping
UIDs and GIDs are directly mapped by NFS servers and clients between local and remote systems. This requires consistent UID and GID assignments across all systems participating in NFS file sharing. For example, a user with UID 1000 on a client can only access the files on a share that a user with UID 1000 on the server has access to.
For simplified ID management in an NFS environment, administrators often rely on centralized services, such as LDAP or Network Information Service (NIS), to manage UID and GID mappings across multiple systems.
- User and Group ID mapping
- NFS servers and clients can use the `idmapd` service to translate UIDs and GIDs between different systems for consistent identification and permission assignment.
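For example, the `idmapd` service reads its NFSv4 mapping domain from the `/etc/idmapd.conf` file. The following is a minimal sketch that assumes the example domain `example.com`; the client and the server must agree on this value for name mapping to work:

```
# /etc/idmapd.conf (excerpt)
[General]
# NFSv4 domain; must be identical on the NFS client and server
Domain = example.com
```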
2.3. The AUTH_GSS authentication method
Kerberos is a network authentication protocol that allows secure authentication for clients and servers over a non-secure network. It uses symmetric key cryptography and requires a trusted Key Distribution Center (KDC) to authenticate users and services.
Unlike AUTH_SYS, the RPCSEC_GSS Kerberos mechanism does not depend on the client to correctly represent which user is accessing the file. Instead, the system uses cryptography to authenticate users to the server. This prevents a malicious client from impersonating a user without having that user's Kerberos credentials.
In the /etc/exports file, the sec option defines one or multiple methods of Kerberos security that the share should provide. Clients can mount the share with any of these methods. The sec option supports the following values:
- `sys`: no cryptographic protection (default)
- `krb5`: authentication only
- `krb5i`: authentication and integrity protection
- `krb5p`: authentication, integrity checking, and traffic encryption
Note that the more cryptographic functionality a method provides, the lower the performance.
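On the client side, the same values can be passed through the `sec` mount option. The following is a sketch that assumes the hypothetical share `server.example.com:/nfs/projects` and a working Kerberos configuration on the client:

```
# Mount the share with Kerberos authentication and integrity protection
mount -o sec=krb5i server.example.com:/nfs/projects /mnt
```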
2.4. File permissions on exported file systems
File permissions on exported file systems determine access rights to files and directories for clients accessing them over NFS.
Once a remote host mounts the NFS file system, the only protection each shared file has is its file system permissions. If two users share the same User ID (UID) value, they can mount the same NFS file system on different client systems. In this case, they can modify each other’s files.
NFS treats the root user on the client as equivalent to the root user on the server. However, by default, the NFS server maps root to the nobody account when accessing an NFS share. The root_squash option controls this behavior.
For more information about this option, see the exports(5) man page on your system.
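For illustration, the squashing behavior is controlled per export in `/etc/exports`. The entries below are a sketch with hypothetical directories and hosts; see the `exports(5)` man page for the authoritative option list:

```
# Default: root on the client is mapped to an unprivileged user
/srv/backup client.example.com(rw,root_squash)
# Disable squashing, for example for a trusted administrative host
/srv/admin admin.example.com(rw,no_root_squash)
# Map all client users to the anonymous UID and GID
/srv/public *(ro,all_squash)
```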
2.5. Services required on an NFS server
Red Hat Enterprise Linux (RHEL) uses a combination of kernel modules and user-space processes to provide NFS file shares.
The following tables describe these services, their functions, and their configuration.
| Service Name | NFS versions | Description |
|---|---|---|
| `rpcbind` | 3 | This process accepts port reservations from local remote procedure call (RPC) services and makes them available, or advertised, so that the corresponding remote RPC services can access them. The `rpcbind` service responds to requests for RPC services and sets up connections to the requested RPC service. It is not used with NFSv4. |
| `rpc.mountd` | 3, 4 | This service processes `MOUNT` requests from NFSv3 clients. It checks that the requested NFS share is currently exported by the NFS server and that the client has access permissions. |
| `rpc.nfsd` | 3, 4 | This process advertises explicit NFS versions and protocols the server defines. It works with the kernel to meet the dynamic demands of NFS clients, for example, providing server threads each time an NFS client connects. The `nfs-server` service starts this process. |
| `rpc.rquotad` | 3, 4 | This service provides user quota information for remote users. |
| `rpc.idmapd` | 4 | This process provides NFSv4 client and server upcalls, which map between NFSv4 names (strings in `user@domain` form) and local user and group IDs. |
| `gssproxy` | 3, 4 | This service handles `RPCSEC_GSS` security contexts on behalf of the kernel when you use Kerberos with NFS. |
| `nfsdcld` | 4 | This service provides the NFSv4 client tracking daemon that prevents the server from granting lock reclaims when other clients have taken conflicting locks during a network partition combined with a server reboot. |
| `rpc.statd` | 3 | This service notifies other NFSv3 clients when the local host reboots, and notifies the kernel when a remote NFSv3 host reboots. |
| Module Name | NFS versions | Description |
|---|---|---|
| `nfsd` | 3, 4 | The NFS kernel module that services requests for shared NFS file systems. |
| `lockd` | 3 | This kernel module implements the Network Lock Manager (NLM) protocol, which enables clients to lock files on the server. RHEL loads the module automatically when the NFS server runs. |
For more information, see the following man pages on your system:
- `rpcbind(8)`
- `rpc.mountd(8)`
- `rpc.nfsd(8)`
- `rpc.statd(8)`
- `rpc.rquotad(8)`
- `rpc.idmapd(8)`
- `gssproxy(8)`
- `nfsdcld(8)`
2.6. The /etc/exports configuration file
The /etc/exports file controls which directories the server exports. Each line contains an export point, a whitespace-separated list of clients allowed to mount the directory, and options for each client.
The following is the format for an /etc/exports entry:
<directory> <host_or_network_1>(<options_1>) <host_or_network_n>(<options_n>)...
The following are the individual parts of an /etc/exports entry:
- <directory>
- The directory that is being exported.
- <host_or_network>
- The host or network to which the export is being shared. For example, you can specify a hostname, an IP address, or an IP network.
- <options>
- The options for the host or network.
Adding a space between a client and options changes the behavior. For example, the following lines do not have the same meaning:
/projects client.example.com(rw)
/projects client.example.com (rw)
In the first line, the server allows only client.example.com to mount the /projects directory in read-write mode. No other hosts can mount the share. However, the space between client.example.com and (rw) in the second line changes the behavior. The server exports the directory to client.example.com in read-only mode (default setting). All other hosts can mount the share in read-write mode.
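As a further sketch, one export line can list several clients, each with its own option list and no space before the parentheses (the hosts are hypothetical):

```
/projects client.example.com(rw) 192.0.2.0/24(ro)
```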
The NFS server uses the following default settings for each exported directory:
| Default setting | Description |
|---|---|
| `ro` | Exports the directory in read-only mode. |
| `sync` | The NFS server does not reply to requests before changes made by previous requests are written to disk. |
| `wdelay` | The server delays writing to the disk if it suspects another write request is pending. |
| `root_squash` | Prevents the `root` user on clients from having root permissions on the share. Instead, the server maps requests from UID and GID `0` to the anonymous UID and GID, which is the `nobody` user and group by default. |
You can view and manage exported file systems by using the exportfs utility. For details, see the exportfs(8) man page on your system.
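For example, two commonly used `exportfs` invocations are shown below; the output depends on your configuration, so none is shown:

```
# List the current export table with the active options
exportfs -v
# Re-export /etc/exports after editing it, without restarting the NFS server
exportfs -r
```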
2.7. Configuring an NFSv4-only server
If you do not have any NFSv3 clients in your network, you can configure the NFS server to support only NFSv4. You can also support specific minor protocol versions. Using only NFSv4 on the server reduces the number of ports that are open to the network.
Procedure
1. Install the `nfs-utils` package:

   ```
   # dnf install nfs-utils
   ```

2. Edit the `/etc/nfs.conf` file, and make the following changes:

   1. Disable the `vers3` parameter in the `[nfsd]` section to disable NFSv3:

      ```
      [nfsd]
      vers3=n
      ```

   2. Optional: If you require only specific NFSv4 minor versions, uncomment all `vers4.<minor_version>` parameters and set them accordingly, for example:

      ```
      [nfsd]
      vers3=n
      # vers4=y
      vers4.0=n
      vers4.1=n
      vers4.2=y
      ```

      With this configuration, the server provides only NFS version 4.2.

      Important: If you require only a specific NFSv4 minor version, set only the parameters for the minor versions. Do not uncomment the `vers4` parameter to avoid an unpredictable activation or deactivation of minor versions. By default, the `vers4` parameter enables or disables all NFSv4 minor versions. However, this behavior changes if you set `vers4` in conjunction with other `vers` parameters.

3. Disable all NFSv3-related services:

   ```
   # systemctl mask --now rpc-statd.service rpcbind.service rpcbind.socket
   ```

4. Configure the `rpc.mountd` daemon to not listen for NFSv3 mount requests. Create a `/etc/systemd/system/nfs-mountd.service.d/v4only.conf` file with the following content:

   ```
   [Service]
   ExecStart=
   ExecStart=/usr/sbin/rpc.mountd --no-tcp --no-udp
   ```

5. Reload the `systemd` manager configuration and restart the `nfs-mountd` service:

   ```
   # systemctl daemon-reload
   # systemctl restart nfs-mountd
   ```

6. Optional: Create a directory that you want to share, for example:

   ```
   # mkdir -p /nfs/projects/
   ```

   If you want to share an existing directory, skip this step.

7. Set the permissions you require on the `/nfs/projects/` directory:

   ```
   # chmod 2770 /nfs/projects/
   # chgrp users /nfs/projects/
   ```

   These commands set write permissions for the `users` group on the `/nfs/projects/` directory and ensure that the same group is automatically set on new entries created in this directory.

8. Add an export point to the `/etc/exports` file for each directory that you want to share:

   ```
   /nfs/projects/ 192.0.2.0/24(rw) 2001:db8::/32(rw)
   ```

   This entry shares the `/nfs/projects/` directory with read and write access to clients in the `192.0.2.0/24` and `2001:db8::/32` subnets.

9. Open the relevant ports in `firewalld`:

   ```
   # firewall-cmd --permanent --add-service nfs
   # firewall-cmd --reload
   ```

10. Enable and start the NFS server:

    ```
    # systemctl enable --now nfs-server
    ```
Verification
1. On the server, verify that the server provides only the NFS versions that you have configured:

   ```
   # cat /proc/fs/nfsd/versions
   -3 +4 -4.0 -4.1 +4.2
   ```

2. On a client, perform the following steps:

   1. Install the `nfs-utils` package:

      ```
      # dnf install nfs-utils
      ```

   2. Mount an exported NFS share:

      ```
      # mount server.example.com:/nfs/projects/ /mnt/
      ```

   3. As a user who is a member of the `users` group, create a file in `/mnt/`:

      ```
      # touch /mnt/file
      ```

   4. List the directory to verify that the file was created:

      ```
      # ls -l /mnt/
      total 0
      -rw-r--r--. 1 demo users 0 Jan 16 14:18 file
      ```
2.8. Configuring an NFSv3 server with optional NFSv4 support
In a network that still uses NFSv3 clients, configure the server to provide shares over the NFSv3 protocol. If you also have newer clients in your network, you can additionally enable NFSv4. By default, Red Hat Enterprise Linux NFS clients use the latest NFS version that the server provides.
Procedure
1. Install the `nfs-utils` package:

   ```
   # dnf install nfs-utils
   ```

2. Optional: By default, NFSv3 and NFSv4 are enabled. If you do not require NFSv4 or only specific minor versions, uncomment all `vers4.<minor_version>` parameters and set them accordingly:

   ```
   [nfsd]
   # vers3=y
   # vers4=y
   vers4.0=n
   vers4.1=n
   vers4.2=y
   ```

   With this configuration, the server provides only NFS versions 3 and 4.2.

   Important: If you require only a specific NFSv4 minor version, set only the parameters for the minor versions. Do not uncomment the `vers4` parameter to avoid an unpredictable activation or deactivation of minor versions. By default, the `vers4` parameter enables or disables all NFSv4 minor versions. However, this behavior changes if you set `vers4` in conjunction with other `vers` parameters.

3. By default, NFSv3 RPC services use random ports. To enable a firewall configuration, configure fixed port numbers in the `/etc/nfs.conf` file:

   1. In the `[lockd]` section, set a fixed port number for the `nlockmgr` RPC service, for example:

      ```
      [lockd]
      port=5555
      ```

      With this setting, the service automatically uses this port number for both the UDP and TCP protocol.

   2. In the `[statd]` section, set a fixed port number for the `rpc.statd` service, for example:

      ```
      [statd]
      port=6666
      ```

      With this setting, the service automatically uses this port number for both the UDP and TCP protocol.

4. Optional: Create a directory that you want to share, for example:

   ```
   # mkdir -p /nfs/projects/
   ```

   If you want to share an existing directory, skip this step.

5. Set the permissions you require on the `/nfs/projects/` directory:

   ```
   # chmod 2770 /nfs/projects/
   # chgrp users /nfs/projects/
   ```

   These commands set write permissions for the `users` group on the `/nfs/projects/` directory and ensure that the same group is automatically set on new entries created in this directory.

6. Add an export point to the `/etc/exports` file for each directory that you want to share:

   ```
   /nfs/projects/ 192.0.2.0/24(rw) 2001:db8::/32(rw)
   ```

   This entry shares the `/nfs/projects/` directory with read and write access to clients in the `192.0.2.0/24` and `2001:db8::/32` subnets.

7. Open the relevant ports in `firewalld`:

   ```
   # firewall-cmd --permanent --add-service={nfs,rpc-bind,mountd}
   # firewall-cmd --permanent --add-port={5555/tcp,5555/udp,6666/tcp,6666/udp}
   # firewall-cmd --reload
   ```

8. Enable and start the NFS server:

   ```
   # systemctl enable --now rpc-statd nfs-server
   ```
Verification
1. On the server, verify that the server provides only the NFS versions that you have configured:

   ```
   # cat /proc/fs/nfsd/versions
   +3 +4 -4.0 -4.1 +4.2
   ```

2. On a client, perform the following steps:

   1. Install the `nfs-utils` package:

      ```
      # dnf install nfs-utils
      ```

   2. Mount an exported NFS share:

      ```
      # mount -o vers=<version> server.example.com:/nfs/projects/ /mnt/
      ```

   3. Verify that the share was mounted with the specified NFS version:

      ```
      # mount | grep "/mnt"
      server.example.com:/nfs/projects/ on /mnt type nfs (rw,relatime,vers=3,...
      ```

   4. As a user who is a member of the `users` group, create a file in `/mnt/`:

      ```
      # touch /mnt/file
      ```

   5. List the directory to verify that the file was created:

      ```
      # ls -l /mnt/
      total 0
      -rw-r--r--. 1 demo users 0 Jan 16 14:18 file
      ```
2.9. Enabling quota support on an NFS server
You can restrict the amount of data a user or a group can store by configuring quotas on the file system. On an NFS server, the rpc-rquotad service ensures that the quota is also applied to users on NFS clients. For more information, see the quota(1) and xfs_quota(8) man pages on your system.
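For example, on an XFS-backed export you could set soft and hard block limits with `xfs_quota` before enabling the quota RPC service. The following is a sketch that assumes the `demo` user and the `/nfs/projects` mount point used in this chapter:

```
# Set a 100 MB soft and 200 MB hard block limit for user demo
xfs_quota -x -c 'limit bsoft=100m bhard=200m demo' /nfs/projects
```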
Prerequisites
Procedure
1. Verify that quotas are enabled on the directories that you export:

   - For an ext file system, enter:

     ```
     # quotaon -p /nfs/projects/
     group quota on /nfs/projects (/dev/sdb1) is on
     user quota on /nfs/projects (/dev/sdb1) is on
     project quota on /nfs/projects (/dev/sdb1) is off
     ```

   - For an XFS file system, enter:

     ```
     # findmnt /nfs/projects
     TARGET        SOURCE    FSTYPE OPTIONS
     /nfs/projects /dev/sdb1 xfs    rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,usrquota,grpquota
     ```

2. Install the `quota-rpc` package:

   ```
   # dnf install quota-rpc
   ```

3. Optional: By default, the quota RPC service runs on port 875. To run the service on a different port, append `-p <port_number>` to the `RPCRQUOTADOPTS` variable in the `/etc/sysconfig/rpc-rquotad` file:

   ```
   RPCRQUOTADOPTS="-p <port_number>"
   ```

4. Optional: By default, remote hosts can only read quotas. To allow clients to set quotas, append the `-S` option to the `RPCRQUOTADOPTS` variable in `/etc/sysconfig/rpc-rquotad`:

   ```
   RPCRQUOTADOPTS="-S"
   ```

5. Open the port in `firewalld`:

   ```
   # firewall-cmd --permanent --add-port=875/udp
   # firewall-cmd --reload
   ```

6. Enable and start the `rpc-rquotad` service:

   ```
   # systemctl enable --now rpc-rquotad
   ```
Verification
On the client:

1. Mount the exported share:

   ```
   # mount server.example.com:/nfs/projects/ /mnt/
   ```

2. Display the quota. The command depends on the file system of the exported directory. For example:

   - To display the quota of a specific user on all mounted ext file systems, enter:

     ```
     # quota -u <user_name>
     Disk quotas for user demo (uid 1000):
     Filesystem                        space  quota  limit  grace  files  quota  limit  grace
     server.example.com:/nfs/projects     0K   100M   200M             0      0      0
     ```

   - To display the user and group quota on an XFS file system, enter:

     ```
     # xfs_quota -x -c "report -h" /mnt/
     User quota on /nfs/projects (/dev/vdb1)
                  Blocks
     User ID      Used   Soft   Hard Warn/Grace
     ---------- ---------------------------------
     root            0      0      0  00 [------]
     demo            0   100M   200M  00 [------]
     ```
2.10. Enabling NFS over RDMA on an NFS server
Remote Direct Memory Access (RDMA) is a protocol that enables a client system to transfer data directly from a storage server's memory into its own memory. This enhances storage throughput, decreases latency in data transfer between the server and client, and reduces CPU load on both ends. If both the NFS server and clients are connected over RDMA, clients can use NFSoRDMA to mount an exported directory.
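As a quick check, you can list the RDMA-capable devices on the server, for example with the `rdma` utility from the iproute packages:

```
# Show RDMA links and their state; an active InfiniBand or RoCE
# link must be present before you configure NFSoRDMA
rdma link show
```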
Prerequisites
- The NFS service is running and configured.
- An InfiniBand or RDMA over Converged Ethernet (RoCE) device is installed on the server.
- IP over InfiniBand (IPoIB) is configured on the server, and the InfiniBand device has an IP address assigned.
Procedure
1. Install the `rdma-core` package:

   ```
   # dnf install rdma-core
   ```

2. If the package was already installed, verify that the `xprtrdma` and `svcrdma` modules in the `/etc/rdma/modules/rdma.conf` file are uncommented:

   ```
   # NFS over RDMA client support
   xprtrdma
   # NFS over RDMA server support
   svcrdma
   ```

3. Optional: By default, NFS over RDMA uses port 20049. If you want to use a different port, set the `rdma-port` setting in the `[nfsd]` section of the `/etc/nfs.conf` file:

   ```
   rdma-port=<port>
   ```

4. Open the NFSoRDMA port in `firewalld`:

   ```
   # firewall-cmd --permanent --add-port={20049/tcp,20049/udp}
   # firewall-cmd --reload
   ```

   Adjust the port numbers if you set a different port than 20049.

5. Restart the `nfs-server` service:

   ```
   # systemctl restart nfs-server
   ```
Verification
On a client with InfiniBand hardware, perform the following steps:

1. Install the following packages:

   ```
   # dnf install nfs-utils rdma-core
   ```

2. Mount an exported NFS share over RDMA:

   ```
   # mount -o rdma server.example.com:/nfs/projects/ /mnt/
   ```

   If you set a port number other than the default (20049), pass `port=<port_number>` to the command:

   ```
   # mount -o rdma,port=<port_number> server.example.com:/nfs/projects/ /mnt/
   ```

3. Verify that the share was mounted with the `rdma` option:

   ```
   # mount | grep "/mnt"
   server.example.com:/nfs/projects/ on /mnt type nfs (...,proto=rdma,...)
   ```
2.11. Setting up an NFS server with Kerberos in an Identity Management domain
If you use Red Hat Enterprise Linux Identity Management (IdM), you can join your NFS server to the IdM domain. This enables you to centrally manage users and groups and to use Kerberos for authentication, integrity protection, and traffic encryption.
Prerequisites
- The NFS server is enrolled in a Red Hat Enterprise Linux Identity Management (IdM) domain.
- The NFS server is running and configured.
Procedure
1. Obtain a Kerberos ticket as an IdM administrator:

   ```
   # kinit admin
   ```

2. Create an `nfs/<FQDN>` service principal:

   ```
   # ipa service-add nfs/nfs_server.idm.example.com
   ```

3. Retrieve the `nfs` service principal from IdM, and store it in the `/etc/krb5.keytab` file:

   ```
   # ipa-getkeytab -s idm_server.idm.example.com -p nfs/nfs_server.idm.example.com -k /etc/krb5.keytab
   ```

4. Optional: Display the principals in the `/etc/krb5.keytab` file:

   ```
   # klist -k /etc/krb5.keytab
   Keytab name: FILE:/etc/krb5.keytab
   KVNO Principal
   ---- --------------------------------------------------------------------------
      1 nfs/nfs_server.idm.example.com@IDM.EXAMPLE.COM
      1 nfs/nfs_server.idm.example.com@IDM.EXAMPLE.COM
      1 nfs/nfs_server.idm.example.com@IDM.EXAMPLE.COM
      1 nfs/nfs_server.idm.example.com@IDM.EXAMPLE.COM
      7 host/nfs_server.idm.example.com@IDM.EXAMPLE.COM
      7 host/nfs_server.idm.example.com@IDM.EXAMPLE.COM
      7 host/nfs_server.idm.example.com@IDM.EXAMPLE.COM
      7 host/nfs_server.idm.example.com@IDM.EXAMPLE.COM
   ```

   By default, the IdM client adds the host principal to the `/etc/krb5.keytab` file when you join the host to IdM. If the host principal is missing, use the `ipa-getkeytab -s idm_server.idm.example.com -p host/nfs_server.idm.example.com -k /etc/krb5.keytab` command to add it.

5. Use the `ipa-client-automount` utility to configure mapping of IdM IDs:

   - If the client is not in the IdM DNS domain, use the `--domain` option to specify the primary DNS domain of the IdM deployment. Alternatively, use the `--server` option to specify the IdM server to connect to:

     ```
     # ipa-client-automount --domain idm.example.com
     ```

     The `--domain` option triggers DNS discovery to determine the IdM servers to use.

   - If the client is already in the IdM DNS domain, run the command without the `--domain` option:

     ```
     # ipa-client-automount
     Searching for IPA server...
     IPA server: DNS discovery
     Location: default
     Continue to configure the system with these values? [no]: yes
     Configured /etc/idmapd.conf
     Restarting sssd, waiting for it to become available.
     Started autofs
     ```

6. Update your `/etc/exports` file, and add the Kerberos security method to the client options. For example:

   ```
   /nfs/projects/ 192.0.2.0/24(rw,sec=krb5i)
   ```

   If you want your clients to be able to select from multiple security methods, specify them separated by colons:

   ```
   /nfs/projects/ 192.0.2.0/24(rw,sec=krb5:krb5i:krb5p)
   ```

7. Reload the exported file systems:

   ```
   # exportfs -r
   ```
2.12. Configuring an NFS server with TLS support
Without the RPCSEC_GSS protocol, Network File System (NFS) traffic is unencrypted by default. Starting with Red Hat Enterprise Linux 10, you can configure NFS with Transport Layer Security (TLS) to encrypt the NFS traffic.
Prerequisites
- You have configured an NFSv4 server. For instructions, see Configuring an NFSv4-only server.
- You have a Certificate Authority (CA) certificate.
- You have installed the `ktls-utils` package.
Procedure
1. Create a private key and a certificate signing request (CSR):

   ```
   # openssl req -new -newkey rsa:4096 -noenc \
       -keyout /etc/pki/tls/private/server.example.com.key \
       -out /etc/pki/tls/private/server.example.com.csr \
       -subj "/C=US/ST=State/L=City/O=Organization/CN=server.example.com" \
       -addext "subjectAltName=DNS:server.example.com,IP:192.0.2.1"
   ```

   Important: The Common Name (CN) and the DNS value in the subject alternative name must match the hostname. The IP value must match the IP address of the host.

2. Send the `/etc/pki/tls/private/server.example.com.csr` file to a CA and request a server certificate. Store the received CA certificate and the server certificate on the host.

3. Import the CA certificate into the system's truststore:

   ```
   # cp ca.crt /etc/pki/ca-trust/source/anchors/
   # update-ca-trust
   ```

4. Move the server certificate to the `/etc/pki/tls/certs/` directory:

   ```
   # mv server.example.com.crt /etc/pki/tls/certs/
   ```

5. Ensure that the SELinux context is correct on the private key and certificates:

   ```
   # restorecon -Rv /etc/pki/tls/certs/
   ```

6. Add the server certificate and private key to the `[authenticate.server]` section in the `/etc/tlshd.conf` file:

   ```
   x509.certificate= /etc/pki/tls/certs/server.example.com.crt
   x509.private_key= /etc/pki/tls/private/server.example.com.key
   ```

   Leave the `x509.truststore` parameter unset.

7. Enable and start the `tlshd` service:

   ```
   # systemctl enable --now tlshd.service
   ```
2.13. Configuring an NFS client with TLS support
If the server supports Network File System (NFS) with Transport Layer Security (TLS) encryption, configure the client and mount the share by using the xprtsec=tls option. This includes importing the Certificate Authority (CA) certificate, enabling the TLS daemon, and mounting the share with encryption. Using TLS helps protect data in transit between the client and server.
Prerequisites
- You have configured the NFS server with TLS encryption. For details, see Configuring an NFS server with TLS support.
- You have installed the `ktls-utils` package.
Procedure
1. Import the Certificate Authority (CA) certificate into the system's truststore:

   ```
   # cp ca.crt /etc/pki/ca-trust/source/anchors/
   # update-ca-trust
   ```

2. Enable and start the `tlshd` service:

   ```
   # systemctl enable --now tlshd.service
   ```

3. Mount an NFS share by using TLS encryption:

   ```
   # mount -o xprtsec=tls server.example.com:/nfs/projects/ /mnt/
   ```
Verification
Verify that the client successfully mounted the NFS share with TLS support:

```
# journalctl -u tlshd
...
Apr 01 08:37:56 client.example.com tlshd[10688]: Handshake with server.example.com (192.0.2.1) was successful
```
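To make the TLS-protected mount persistent across reboots, you can add it to `/etc/fstab`. The following is a sketch that reuses the example share from this chapter:

```
# /etc/fstab (excerpt): mount the share with TLS encryption at boot
server.example.com:/nfs/projects  /mnt  nfs  xprtsec=tls  0 0
```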
2.14. Configuring an NFS client with mutual TLS support
If the server supports Network File System (NFS) with Transport Layer Security (TLS) encryption, you can configure the NFS server and client to authenticate each other by using the TLS protocol. The configuration includes creating and installing certificates, configuring the TLS daemon, and mounting the NFS share with encryption.
Prerequisites
- You have configured the NFS server with TLS encryption. For details, see Configuring an NFS server with TLS support.
- You have installed the `ktls-utils` package.
Procedure
1. Create a private key and a certificate signing request (CSR):

   ```
   # openssl req -new -newkey rsa:4096 -noenc \
       -keyout /etc/pki/tls/private/client.example.com.key \
       -out /etc/pki/tls/private/client.example.com.csr \
       -subj "/C=US/ST=State/L=City/O=Organization/CN=client.example.com" \
       -addext "subjectAltName=DNS:client.example.com,IP:192.0.2.2"
   ```

   Important: The Common Name (CN) and the DNS value in the subject alternative name must match the hostname. The IP value must match the IP address of the host.

2. Send the `/etc/pki/tls/private/client.example.com.csr` file to a Certificate Authority (CA) and request a client certificate. Store the received CA certificate and the client certificate on the host.

3. Import the CA certificate into the system's truststore:

   ```
   # cp ca.crt /etc/pki/ca-trust/source/anchors/
   # update-ca-trust
   ```

4. Move the client certificate to the `/etc/pki/tls/certs/` directory:

   ```
   # mv client.example.com.crt /etc/pki/tls/certs/
   ```

5. Ensure that the SELinux context is correct on the private key and certificates:

   ```
   # restorecon -Rv /etc/pki/tls/certs/
   ```

6. Add the client certificate and private key to the `[authenticate.client]` section in the `/etc/tlshd.conf` file:

   ```
   x509.certificate= /etc/pki/tls/certs/client.example.com.crt
   x509.private_key= /etc/pki/tls/private/client.example.com.key
   ```

   Leave the `x509.truststore` parameter unset.

7. Enable and start the `tlshd` service:

   ```
   # systemctl enable --now tlshd.service
   ```

8. Mount an NFS share by using mutual TLS:

   ```
   # mount -o xprtsec=mtls server.example.com:/nfs/projects/ /mnt/
   ```
Verification
Verify that the client successfully mounted the NFS share with TLS support:

```
# journalctl -u tlshd
...
Apr 01 08:37:56 client.example.com tlshd[10688]: Handshake with server.example.com (192.0.2.1) was successful
```