Chapter 2. Deploying an NFS server
By using the Network File System (NFS) protocol, remote users can mount shared directories over a network and use them as if they were mounted locally. This enables you to consolidate resources onto centralized servers on the network.
2.1. Key features of minor NFSv4 versions
Each minor NFSv4 version brings enhancements aimed at improving performance and security. Use these enhancements to take full advantage of NFSv4 and ensure efficient and reliable file sharing across networks.
Key features of NFSv4.2
- Server-side copy
- Server-side copy is a capability of the NFS server to copy files on the server without transferring the data back and forth over the network.
- Sparse files
- Enables files to have one or more empty spaces, or gaps, which are unallocated or uninitialized data blocks consisting only of zeros. This enables applications to map out the location of holes in the sparse file.
- Space reservation
- Clients can reserve or allocate space on the storage server before writing data. This prevents the server from running out of space.
- Labeled NFS
- Enforces data access rights and enables SELinux labels between a client and a server for individual files on an NFS file system.
- Layout enhancements
- Provides functionality to enable Parallel NFS (pNFS) servers to collect better performance statistics.
Key features of NFSv4.1
- Client-side support for pNFS
- Support for high-speed I/O to clustered servers enables you to store data on multiple machines, provides direct access to data, and synchronizes updates to metadata.
- Sessions
- Sessions maintain the server’s state relative to the connections belonging to a client. These sessions provide improved performance and efficiency by reducing the overhead associated with establishing and terminating connections for each Remote Procedure Call (RPC) operation.
Key features of NFSv4.0
- RPC and security
- The RPCSEC_GSS framework enhances RPC security. The NFSv4 protocol introduces a new operation for in-band security negotiation. This enables clients to query server policies for accessing file system resources securely.
- Procedure and operation structure
- NFSv4.0 introduces the COMPOUND procedure, which enables clients to merge multiple operations into a single request to reduce RPCs.
- File system model
- NFSv4.0 retains the hierarchical file system model, treats files as byte streams, and encodes names with UTF-8 for internationalization.
- File handle types
- With volatile file handles, servers can adjust to file system changes and enable clients to adapt as needed without requiring permanent file handles.
- Attribute types
- The file attribute structure includes required, recommended, and named attributes, each serving distinct purposes. Required attributes, derived from NFSv3, are essential for distinguishing file types, while recommended attributes, such as ACLs, provide enhanced access control.
- Multi-server namespace
- Namespaces can span multiple servers, simplify file system transfers based on attributes, and support referrals, redundancy, and seamless server migration.
- OPEN and CLOSE operations
- These operations can combine file lookup, creation, and semantic sharing at a single point and make the file access management more efficient.
- File locking
- File locking is part of the protocol, eliminating the need for RPC callbacks. File lock state is managed by the server under a lease-based model, where failure to renew the lease may result in state release by the server.
- Client caching and delegation
- Caching resembles previous versions, with client-determined timeouts for attribute and directory caching. Delegations in NFS 4.0 allow the server to assign certain responsibilities to the client, guaranteeing specific file sharing semantics and enabling local file operations without immediate server interaction.
2.2. The AUTH_SYS authentication method
The AUTH_SYS method, which is also known as AUTH_UNIX, is a client authentication mechanism. With AUTH_SYS, the client sends the User ID (UID) and Group ID (GID) of the user to the server to verify its identity and permissions when accessing files. It is considered less secure because it relies on client-provided information, making it susceptible to unauthorized access if a client is misconfigured.
Mapping mechanisms ensure that NFS clients can access files with the appropriate permissions on the server, even if the UID and GID assignments differ between systems. UIDs and GIDs are mapped between NFS client and server by the following mechanisms:
- Direct mapping
UIDs and GIDs are directly mapped by NFS servers and clients between local and remote systems. This requires consistent UID and GID assignments across all systems participating in NFS file sharing. For example, a user with UID 1000 on a client can only access the files on a share that a user with UID 1000 on the server has access to.
For simplified ID management in an NFS environment, administrators often rely on centralized services, such as LDAP or Network Information Service (NIS), to manage UID and GID mappings across multiple systems.
- User and Group ID mapping
- NFS servers and clients can use the idmapd service to translate UIDs and GIDs between different systems for consistent identification and permission assignment.
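For the idmapd-based mapping to work, the NFS server and its clients must use the same NFS domain name. The following is a minimal sketch of the relevant setting in the /etc/idmapd.conf file, assuming the example domain example.com:

[General]
# The same domain must be set on the server and on all clients (example value)
Domain = example.com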
2.3. The AUTH_GSS authentication method
Kerberos is a network authentication protocol that allows secure authentication for clients and servers over a non-secure network. It uses symmetric key cryptography and requires a trusted Key Distribution Center (KDC) to authenticate users and services.
Unlike AUTH_SYS, with the RPCSEC_GSS Kerberos mechanism, the server does not depend on the client to correctly represent which user is accessing the file. Instead, cryptography is used to authenticate users to the server, which prevents a malicious client from impersonating a user without having that user's Kerberos credentials.
In the /etc/exports file, the sec option defines one or multiple methods of Kerberos security that the share should provide, and clients can mount the share with one of these methods. The sec option supports the following values:
- sys: no cryptographic protection (default)
- krb5: authentication only
- krb5i: authentication and integrity protection
- krb5p: authentication, integrity checking, and traffic encryption
Note that the more cryptographic functionality a method provides, the lower the performance.
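For example, the following /etc/exports entry, using placeholder path and host names, offers a share with Kerberos authentication and integrity protection, and a client mounts it with the matching sec option:

/projects client.example.com(rw,sec=krb5i)

# mount -o sec=krb5i server.example.com:/projects /mnt/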
2.4. File permissions on exported file systems
File permissions on exported file systems determine access rights to files and directories for clients accessing them over NFS.
Once the NFS file system is mounted by a remote host, the only protection each shared file has is its file system permissions. If two users who share the same User ID (UID) value mount the same NFS file system on different client systems, they can modify each other's files.
NFS treats the root user on the client as equivalent to the root user on the server. However, by default, the NFS server maps root to the nobody account when accessing an NFS share. The root_squash option controls this behavior.
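For example, the following /etc/exports entries, with placeholder paths and hosts, keep the default root_squash behavior for one share and disable it with the no_root_squash option for a single trusted administration host:

/srv/share    192.0.2.0/24(rw)
/srv/backup   admin.example.com(rw,no_root_squash)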
Additional resources
- exports(5) man page
2.5. Services required on an NFS server
Red Hat Enterprise Linux (RHEL) uses a combination of a kernel module and user-space processes to provide NFS file shares:
Service name | NFS versions | Description |
---|---|---|
nfsd | 3, 4 | The NFS kernel module that services requests for shared NFS file systems. |
rpcbind | 3 | This process accepts port reservations from local remote procedure call (RPC) services and makes them available (advertises them) so that the corresponding remote RPC services can access them. The rpcbind service responds to requests for RPC services and sets up connections to the requested RPC service. It is not used with NFSv4. |
rpc.mountd | 3, 4 | This service processes MOUNT requests from NFSv3 clients. It checks that the requested NFS share is currently exported by the NFS server and that the client is allowed to access it. |
rpc.nfsd | 3, 4 | This process advertises explicit NFS versions and protocols the server defines. It works with the kernel to meet the dynamic demands of NFS clients, such as providing server threads each time an NFS client connects. This process corresponds to the nfs-server service. |
lockd | 3 | This kernel module implements the Network Lock Manager (NLM) protocol, which enables clients to lock files on the server. RHEL loads the module automatically when the NFS server runs. |
rpc.rquotad | 3, 4 | This service provides user quota information for remote users. |
rpc.idmapd | 4 | This process provides NFSv4 client and server upcalls, which map between NFSv4 names (strings in the form of `user@domain`) and local user and group IDs. |
gssproxy | 3, 4 | This service handles RPCSEC_GSS (Kerberos) authentication requests on behalf of the kernel NFS services. |
nfsdcld | 4 | This service provides an NFSv4 client tracking daemon that prevents the server from granting lock reclaims when other clients have taken conflicting locks during a network partition combined with a server reboot. |
rpc.statd | 3 | This service provides notification to other NFSv3 clients when the local host reboots, and to the kernel when a remote NFSv3 host reboots. |
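To check which of these RPC services are currently registered on a server, you can query rpcbind with the rpcinfo utility. The output below is illustrative, and the exact entries depend on the enabled NFS versions and services:

# rpcinfo -p server.example.com
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100003    4   tcp   2049  nfs
    100005    3   tcp  20048  mountd
    100024    1   tcp  38465  status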
Additional resources
- rpcbind(8), rpc.mountd(8), rpc.nfsd(8), rpc.statd(8), rpc.rquotad(8), rpc.idmapd(8), gssproxy(8), and nfsdcld(8) man pages
2.6. The /etc/exports configuration file
The /etc/exports file controls which directories the server exports. Each line contains an export point, a whitespace-separated list of clients that are allowed to mount the directory, and options for each of the clients:
<directory> <host_or_network_1>(<options_1>) <host_or_network_n>(<options_n>)...
The following are the individual parts of an /etc/exports entry:
- <directory>
- The directory that is being exported.
- <host_or_network>
- The host or network to which the export is being shared. For example, you can specify a hostname, an IP address, or an IP network.
- <options>
- The options for the host or network.
Adding a space between a client and its options changes the behavior. For example, the following lines do not have the same meaning:
/projects client.example.com(rw) /projects client.example.com (rw)
In the first line, the server allows only client.example.com to mount the /projects directory in read-write mode, and no other hosts can mount the share. However, due to the space between client.example.com and (rw) in the second line, the server exports the directory to client.example.com in read-only mode (the default setting), but all other hosts can mount the share in read-write mode.
The NFS server uses the following default settings for each exported directory:
Default setting | Description |
---|---|
ro | Exports the directory in read-only mode. |
sync | The NFS server does not reply to requests before changes made by previous requests are written to disk. |
wdelay | The server delays writing to the disk if it suspects another write request is pending. |
root_squash | Prevents the root user on clients from having root permissions on the exported directory. Instead, the NFS server maps requests from root to the nobody account. |
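Because these defaults apply unless you override them, an entry typically lists only the options that differ from them. The following sketch, with placeholder path and host names, exports a directory read-write to one host and read-only to a subnet; the exportfs command then applies changed /etc/exports entries without restarting the NFS server:

/srv/projects client.example.com(rw) 192.0.2.0/24(ro)

# exportfs -r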
2.7. Configuring an NFSv4-only server
If you do not have any NFSv3 clients in your network, you can configure the NFS server to support only NFSv4 or specific minor protocol versions of it. Using only NFSv4 on the server reduces the number of ports that are open to the network.
Procedure
Install the
nfs-utils
package:# dnf install nfs-utils
Edit the
/etc/nfs.conf
file, and make the following changes:Disable the
vers3
parameter in the[nfsd]
section to disable NFSv3:[nfsd] vers3=n
Optional: If you require only specific NFSv4 minor versions, uncomment all
vers4.<minor_version>
parameters and set them accordingly, for example:[nfsd] vers3=n # vers4=y vers4.0=n vers4.1=n vers4.2=y
With this configuration, the server provides only NFS version 4.2.
ImportantIf you require only a specific NFSv4 minor version, set only the parameters for the minor versions. Do not uncomment the
vers4
parameter to avoid an unpredictable activation or deactivation of minor versions. By default, thevers4
parameter enables or disables all NFSv4 minor versions. However, this behavior changes if you setvers4
in conjunction with othervers
parameters.
Disable all NFSv3-related services:
# systemctl mask --now rpc-statd.service rpcbind.service rpcbind.socket
Configure the
rpc.mountd
daemon to not listen for NFSv3 mount requests. Create a/etc/systemd/system/nfs-mountd.service.d/v4only.conf
file with the following content:[Service] ExecStart= ExecStart=/usr/sbin/rpc.mountd --no-tcp --no-udp
Reload the
systemd
manager configuration and restart thenfs-mountd
service:# systemctl daemon-reload # systemctl restart nfs-mountd
Optional: Create a directory that you want to share, for example:
# mkdir -p /nfs/projects/
If you want to share an existing directory, skip this step.
Set the permissions you require on the
/nfs/projects/
directory:# chmod 2770 /nfs/projects/ # chgrp users /nfs/projects/
These commands set write permissions for the
users
group on the/nfs/projects/
directory and ensure that the same group is automatically set on new entries created in this directory.Add an export point to the
/etc/exports
file for each directory that you want to share:/nfs/projects/ 192.0.2.0/24(rw) 2001:db8::/32(rw)
This entry shares the
/nfs/projects/
directory with read and write access to clients in the192.0.2.0/24
and2001:db8::/32
subnets.Open the relevant ports in
firewalld
:# firewall-cmd --permanent --add-service nfs # firewall-cmd --reload
Enable and start the NFS server:
# systemctl enable --now nfs-server
Verification
On the server, verify that the server provides only the NFS versions that you have configured:
# cat /proc/fs/nfsd/versions -3 +4 -4.0 -4.1 +4.2
On a client, perform the following steps:
Install the
nfs-utils
package:# dnf install nfs-utils
Mount an exported NFS share:
# mount server.example.com:/nfs/projects/ /mnt/
As a user who is a member of the
users
group, create a file in/mnt/
:# touch /mnt/file
List the directory to verify that the file was created:
# ls -l /mnt/ total 0 -rw-r--r--. 1 demo users 0 Jan 16 14:18 file
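If the client should mount the share persistently across reboots, you can add an entry to the client's /etc/fstab file instead of mounting manually. The following line is a sketch that uses the example names from this procedure:

server.example.com:/nfs/projects  /mnt  nfs  defaults  0 0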
2.8. Configuring an NFSv3 server with optional NFSv4 support
In a network that still uses NFSv3 clients, configure the server to provide shares by using the NFSv3 protocol. If you also have newer clients in your network, you can additionally enable NFSv4. By default, Red Hat Enterprise Linux NFS clients use the latest NFS version that the server provides.
Procedure
Install the
nfs-utils
package:# dnf install nfs-utils
Optional: By default, NFSv3 and NFSv4 are enabled. If you do not require NFSv4, or require only specific minor versions, uncomment all
vers4.<minor_version>
parameters in the /etc/nfs.conf file and set them accordingly:[nfsd] # vers3=y # vers4=y vers4.0=n vers4.1=n vers4.2=y
With this configuration, the server provides only NFS versions 3 and 4.2.
ImportantIf you require only a specific NFSv4 minor version, set only the parameters for the minor versions. Do not uncomment the
vers4
parameter to avoid an unpredictable activation or deactivation of minor versions. By default, thevers4
parameter enables or disables all NFSv4 minor versions. However, this behavior changes if you setvers4
in conjunction with othervers
parameters.By default, NFSv3 RPC services use random ports. To enable a firewall configuration, configure fixed port numbers in the
/etc/nfs.conf
file:In the
[lockd]
section, set a fixed port number for thenlockmgr
RPC service, for example:[lockd] port=5555
With this setting, the service automatically uses this port number for both the UDP and TCP protocol.
In the
[statd]
section, set a fixed port number for therpc.statd
service, for example:[statd] port=6666
With this setting, the service automatically uses this port number for both the UDP and TCP protocol.
Optional: Create a directory that you want to share, for example:
# mkdir -p /nfs/projects/
If you want to share an existing directory, skip this step.
Set the permissions you require on the
/nfs/projects/
directory:# chmod 2770 /nfs/projects/ # chgrp users /nfs/projects/
These commands set write permissions for the
users
group on the/nfs/projects/
directory and ensure that the same group is automatically set on new entries created in this directory.Add an export point to the
/etc/exports
file for each directory that you want to share:/nfs/projects/ 192.0.2.0/24(rw) 2001:db8::/32(rw)
This entry shares the
/nfs/projects/
directory with read and write access to clients in the192.0.2.0/24
and2001:db8::/32
subnets.Open the relevant ports in
firewalld
:# firewall-cmd --permanent --add-service={nfs,rpc-bind,mountd} # firewall-cmd --permanent --add-port={5555/tcp,5555/udp,6666/tcp,6666/udp} # firewall-cmd --reload
Enable and start the NFS server:
# systemctl enable --now rpc-statd nfs-server
Verification
On the server, verify that the server provides only the NFS versions that you have configured:
# cat /proc/fs/nfsd/versions +3 +4 -4.0 -4.1 +4.2
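You can also confirm that the NFSv3 helper services listen on the fixed ports that you configured (5555 and 6666 in this example) by querying the local RPC port mapper:

# rpcinfo -p | grep -E 'nlockmgr|status'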
On a client, perform the following steps:
Install the
nfs-utils
package:# dnf install nfs-utils
Mount an exported NFS share:
# mount -o vers=<version> server.example.com:/nfs/projects/ /mnt/
Verify that the share was mounted with the specified NFS version:
# mount | grep "/mnt" server.example.com:/nfs/projects/ on /mnt type nfs (rw,relatime,vers=3,...
As a user who is a member of the
users
group, create a file in/mnt/
:# touch /mnt/file
List the directory to verify that the file was created:
# ls -l /mnt/ total 0 -rw-r--r--. 1 demo users 0 Jan 16 14:18 file
2.9. Enabling quota support on an NFS server
If you want to restrict the amount of data a user or a group can store, you can configure quotas on the file system. On an NFS server, the rpc-rquotad
service ensures that the quota is also applied to users on NFS clients.
Prerequisites
Procedure
Verify that quotas are enabled on the directories that you export:
For an ext file system, enter:
# quotaon -p /nfs/projects/ group quota on /nfs/projects (/dev/sdb1) is on user quota on /nfs/projects (/dev/sdb1) is on project quota on /nfs/projects (/dev/sdb1) is off
For an XFS file system, enter:
# findmnt /nfs/projects TARGET SOURCE FSTYPE OPTIONS /nfs/projects /dev/sdb1 xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,usrquota,grpquota
Install the
quota-rpc
package:# dnf install quota-rpc
Optional: By default, the quota RPC service runs on port 875. If you want to run the service on a different port, append
-p <port_number>
to theRPCRQUOTADOPTS
variable in the/etc/sysconfig/rpc-rquotad
file:RPCRQUOTADOPTS="-p __<port_number>__"
Optional: By default, remote hosts can only read quotas. To allow clients to set quotas, append the
-S
option to theRPCRQUOTADOPTS
variable in the/etc/sysconfig/rpc-rquotad
file:RPCRQUOTADOPTS="-S"
Open the port in
firewalld
:# firewall-cmd --permanent --add-port=875/udp # firewall-cmd --reload
Enable and start the
rpc-rquotad
service:# systemctl enable --now rpc-rquotad
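If no quota limits exist yet for the users that you want to restrict, define them on the server first. For example, on an XFS file system, the following command sets soft and hard block limits for the hypothetical user demo, matching the values shown in the verification below; on ext file systems, use the edquota utility instead:

# xfs_quota -x -c 'limit bsoft=100m bhard=200m demo' /nfs/projects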
Verification
On the client:
Mount the exported share:
# mount server.example.com:/nfs/projects/ /mnt/
Display the quota. The command depends on the file system of the exported directory. For example:
To display the quota of a specific user on all mounted ext file systems, enter:
# quota -u <user_name> Disk quotas for user demo (uid 1000): Filesystem space quota limit grace files quota limit grace server.example.com:/nfs/projects 0K 100M 200M 0 0 0
To display the user and group quota on an XFS file system, enter:
# xfs_quota -x -c "report -h" /mnt/ User quota on /nfs/projects (/dev/vdb1) Blocks User ID Used Soft Hard Warn/Grace ---------- --------------------------------- root 0 0 0 00 [------] demo 0 100M 200M 00 [------]
Additional resources
-
quota(1)
man page -
xfs_quota(8)
man page
2.10. Enabling NFS over RDMA on an NFS server
Remote Direct Memory Access (RDMA) is a protocol that enables a client system to directly transfer data from the memory of a storage server into its own memory. This enhances storage throughput, decreases latency in data transfer between the server and client, and reduces CPU load on both ends. If both the NFS server and clients are connected over RDMA, clients can use NFSoRDMA to mount an exported directory.
Prerequisites
- The NFS service is running and configured.
- An InfiniBand or RDMA over Converged Ethernet (RoCE) device is installed on the server.
- IP over InfiniBand (IPoIB) is configured on the server, and the InfiniBand device has an IP address assigned.
Procedure
Install the
rdma-core
package:# dnf install rdma-core
If the package was already installed, verify that the
xprtrdma
andsvcrdma
modules in the/etc/rdma/modules/rdma.conf
file are uncommented:# NFS over RDMA client support xprtrdma # NFS over RDMA server support svcrdma
Optional: By default, NFS over RDMA uses port 20049. If you want to use a different port, set the
rdma-port
setting in the[nfsd]
section of the/etc/nfs.conf
file:rdma-port=<port>
Open the NFSoRDMA port in
firewalld
:# firewall-cmd --permanent --add-port={20049/tcp,20049/udp} # firewall-cmd --reload
Adjust the port numbers if you set a port other than 20049.
Restart the
nfs-server
service:# systemctl restart nfs-server
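To check on the server that the NFS service now listens on an RDMA transport, you can inspect the nfsd port list. The output is illustrative, and the rdma entry appears only if the RDMA modules and hardware are available:

# cat /proc/fs/nfsd/portlist
rdma 20049
tcp 2049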
Verification
On a client with InfiniBand hardware, perform the following steps:
Install the following packages:
# dnf install nfs-utils rdma-core
Mount an exported NFS share over RDMA:
# mount -o rdma server.example.com:/nfs/projects/ /mnt/
If you set a port number other than the default (20049), pass
port=<port_number>
to the command:# mount -o rdma,port=<port_number> server.example.com:/nfs/projects/ /mnt/
Verify that the share was mounted with the
rdma
option:# mount | grep "/mnt" server.example.com:/nfs/projects/ on /mnt type nfs (...,proto=rdma,...)
Additional resources
2.11. Setting up an NFS server with Kerberos in a Red Hat Identity Management domain
If you use Red Hat Identity Management (IdM), you can join your NFS server to the IdM domain. This enables you to centrally manage users and groups and to use Kerberos for authentication, integrity protection, and traffic encryption.
Prerequisites
- The NFS server is enrolled in a Red Hat Identity Management (IdM) domain.
- The NFS server is running and configured.
Procedure
Obtain a Kerberos ticket as an IdM administrator:
# kinit admin
Create an
nfs/<FQDN>
service principal:# ipa service-add nfs/nfs_server.idm.example.com
Retrieve the
nfs
service principal from IdM, and store it in the/etc/krb5.keytab
file:# ipa-getkeytab -s idm_server.idm.example.com -p nfs/nfs_server.idm.example.com -k /etc/krb5.keytab
Optional: Display the principals in the
/etc/krb5.keytab
file:# klist -k /etc/krb5.keytab Keytab name: FILE:/etc/krb5.keytab KVNO Principal ---- -------------------------------------------------------------------------- 1 nfs/nfs_server.idm.example.com@IDM.EXAMPLE.COM 1 nfs/nfs_server.idm.example.com@IDM.EXAMPLE.COM 1 nfs/nfs_server.idm.example.com@IDM.EXAMPLE.COM 1 nfs/nfs_server.idm.example.com@IDM.EXAMPLE.COM 7 host/nfs_server.idm.example.com@IDM.EXAMPLE.COM 7 host/nfs_server.idm.example.com@IDM.EXAMPLE.COM 7 host/nfs_server.idm.example.com@IDM.EXAMPLE.COM 7 host/nfs_server.idm.example.com@IDM.EXAMPLE.COM
By default, the IdM client adds the host principal to the
/etc/krb5.keytab
file when you join the host to the IdM domain. If the host principal is missing, use theipa-getkeytab -s idm_server.idm.example.com -p host/nfs_server.idm.example.com -k /etc/krb5.keytab
command to add it.Use the
ipa-client-automount
utility to configure mapping of IdM IDs:# ipa-client-automount Searching for IPA server... IPA server: DNS discovery Location: default Continue to configure the system with these values? [no]: yes Configured /etc/idmapd.conf Restarting sssd, waiting for it to become available. Started autofs
Update your
/etc/exports
file, and add the Kerberos security method to the client options. For example:/nfs/projects/ 192.0.2.0/24(rw,sec=krb5i)
If you want to allow clients to select from multiple security methods, specify them separated by colons:
/nfs/projects/ 192.0.2.0/24(rw,sec=krb5:krb5i:krb5p)
Reload the exported file systems:
# exportfs -r
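On an IdM-enrolled client, users then mount the share with one of the configured security methods and need a valid Kerberos ticket before they can access files. The following is a sketch with the example names from this chapter:

# mount -o sec=krb5i server.example.com:/nfs/projects/ /mnt/
$ kinit <user_name>
$ touch /mnt/file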