Chapter 9. File Systems
SELinux security labels are now supported on the OverlayFS file system
With this update, the OverlayFS file system now supports SELinux security labels. When using Docker containers with the OverlayFS storage driver, you no longer have to configure Docker to disable SELinux support for the containers. (BZ#1297929)
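In practice, this means the daemon can keep SELinux separation enabled while using the OverlayFS driver. A minimal sketch, assuming the upstream dockerd flags; the exact configuration file and option names vary by Docker packaging:

    # Run the Docker daemon with SELinux separation and the OverlayFS
    # storage driver enabled at the same time (previously these had to
    # be mutually exclusive)
    dockerd --selinux-enabled --storage-driver overlay2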
NFSoRDMA server is now fully supported
NFS over RDMA (NFSoRDMA) server, previously provided as a Technology Preview, is now fully supported when accessed by Red Hat Enterprise Linux clients. For more information on NFSoRDMA, see the following section in the Red Hat Enterprise Linux 7 Storage Administration Guide: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html-single/Storage_Administration_Guide/index.html#nfs-rdma (BZ#1400501)
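As an illustration, a client with working RDMA hardware might mount such an export as follows; the host name, export path, and mount point are placeholders, and 20049 is the port conventionally used for NFSoRDMA:

    # Mount an NFS export over RDMA instead of TCP
    mount -t nfs -o proto=rdma,port=20049 server.example.com:/export /mnt/rdma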
autofs now supports the browse options of amd format maps
The browse functionality of sun format maps, which makes automount points visible in directory listings of mounted automount-managed mounts, is now also available for autofs amd format maps.
You can now add mount point sections to the autofs configuration for amd format mounts, in the same way automount points are configured in amd, without the need to also add a corresponding entry to the master map. As a result, you can avoid having incompatible master map entries in the autofs master map within shared multi-vendor environments.
The browsable_dirs option can be used either in the autofs [ amd ] configuration section or following amd mount point sections. The browsable and utimeout map options of amd type auto map entries can also be used.
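As a sketch of the new configuration style, following autofs.conf(5); the [ /expl ] mount point and the map name are placeholders:

    # /etc/autofs.conf (illustrative excerpt)
    [ amd ]
    # make amd format automount points visible in directory listings
    browsable_dirs = yes

    # amd mount point section; no corresponding master map entry needed
    [ /expl ]
    map_name = amd.expl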
To make searching logs easier, autofs now provides identifiers for mount request log entries
For busy sites, it can be difficult to identify the log entries for a specific mount attempt when examining mount problems, because entries for concurrent mount requests and other activities are interleaved in a busy log. You can now quickly filter the entries for a specific mount request by enabling a mount request log identifier in the autofs configuration. The new logging is turned off by default and is controlled by the use_mount_request_log_id option, as described in the autofs.conf file. (BZ#1382093)
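For example, the identifier can be enabled in the [ autofs ] section of /etc/autofs.conf (a minimal sketch):

    # /etc/autofs.conf (illustrative excerpt)
    [ autofs ]
    # prefix mount request log entries with a per-request identifier
    use_mount_request_log_id = yes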
GFS2 on IBM z Systems is now supported in SSI environments
Starting with Red Hat Enterprise Linux 7.4, GFS2 on IBM z Systems (Resilient Storage on the s390x add-on) is supported in z/VM Single System Image (SSI) environments with multiple central electronics complexes (CECs). This allows the cluster to stay up even when logical partitions (LPARs) or CECs are restarted. Live migration is not supported due to the real-time requirements of High Availability (HA) clustering. The maximum limit of 4 nodes on IBM z Systems still applies. For information on configuring high availability and resilient storage for IBM z Systems, see https://access.redhat.com/articles/1543363. (BZ#1273401)
gfs2-utils rebased to version 3.1.10
The gfs2-utils packages have been upgraded to upstream version 3.1.10, which provides a number of bug fixes and enhancements over the previous version. Notably, this update provides:
- Various checking and performance improvements in the fsck.gfs2 command
- Better handling of odd block device geometry in the mkfs.gfs2 command
- Bug fixes for gfs2_edit savemeta leaf chain block handling
- Handling of UUIDs by the libuuid library instead of custom functions
- A new --enable-gprof configuration option for profiling (see the sketch after this list)
- Documentation improvements. (BZ#1413684)
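For instance, when building gfs2-utils from source with the standard autotools workflow, profiling support might be enabled like this (a sketch; run from the source tree):

    # Configure and build gfs2-utils with gprof instrumentation
    ./configure --enable-gprof
    make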
FUSE now supports SEEK_HOLE and SEEK_DATA in lseek calls
This update provides the SEEK_HOLE and SEEK_DATA features for the Filesystem in Userspace (FUSE) lseek system call. Now, you can use FUSE lseek to adjust the file offset to the next location in the file that contains data, with SEEK_DATA, or a hole, with SEEK_HOLE. (BZ#1306396)
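One way to exercise this from the command line is the seek command of the xfs_io utility, which wraps lseek and works on any file system; this sketch assumes xfsprogs is installed and that /mnt/fuse is a FUSE mount containing a sparse file:

    # Find the first data region and the first hole in a sparse file
    xfs_io -c "seek -d 0" /mnt/fuse/sparsefile   # lseek(..., SEEK_DATA)
    xfs_io -c "seek -h 0" /mnt/fuse/sparsefile   # lseek(..., SEEK_HOLE)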
NFS server now supports limited copy-offload
The NFS server-side copy feature now allows the NFS client to copy file data between two files that reside on the same file system on the same NFS server without the need to transmit data back and forth over the network through the NFS client. Note that the NFS protocol also allows copies between different file systems or servers, but the Red Hat Enterprise Linux implementation currently does not support such operations. (BZ#1356122)
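Server-side copy is negotiated as part of NFS version 4.2, so the mount must use that version. A sketch with placeholder host and paths, noting that an application only benefits if it copies through the copy_file_range(2) system call:

    # Mount with NFS 4.2, which carries the server-side copy operation
    mount -t nfs -o vers=4.2 server.example.com:/export /mnt/nfs
    # Copies made via copy_file_range(2) between two files under this
    # mount can then be performed on the server, not through the client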
SELinux is supported for use with GFS2 file systems
Security-Enhanced Linux (SELinux) is now supported for use with GFS2 file systems. Since use of SELinux with GFS2 incurs a small performance penalty, you may choose not to use SELinux with GFS2 even on a system with SELinux in enforcing mode. For information on how to configure this, see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Global_File_System_2/index.html. (BZ#437984)
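Where the overhead matters and a single label is sufficient for the whole file system, one common approach is a context mount, which pins one SELinux context for every file instead of looking labels up per file; the device path and context below are only examples:

    # Mount GFS2 with a single fixed SELinux context (illustrative)
    mount -t gfs2 -o context=system_u:object_r:public_content_t:s0 \
        /dev/clustervg/gfs2vol /mnt/gfs2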
NFSoRDMA client and server now support Kerberos authentication
This update adds Kerberos authentication support to the NFS over RDMA (NFSoRDMA) client and server, allowing you to use the krb5, krb5i, and krb5p security flavors with NFSoRDMA. You can now use Kerberos with NFSoRDMA for secure authentication of each Remote Procedure Call (RPC) transaction. Note that nfs-utils version 1.3.0-0.36 or later must be installed to use Kerberos with NFSoRDMA. (BZ#1401797)
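For example, a Kerberos-protected NFSoRDMA mount might combine the sec and proto options as follows; the host and export are placeholders, and a working Kerberos setup with rpc.gssd is assumed:

    # krb5p provides authentication, integrity, and privacy over RDMA
    mount -t nfs -o sec=krb5p,proto=rdma,port=20049 \
        server.example.com:/export /mnt/secure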
rpc.idmapd now supports obtaining NFSv4 ID domains from DNS
The NFS domain name used in ID mapping can now be retrieved from DNS. If the Domain variable is not set in the /etc/idmapd.conf file, DNS is queried for the _nfsv4idmapdomain TXT record. If a value is found, it is used as the NFS domain. (BZ#980925)
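As a sketch, the record published in the client's DNS domain can be checked with dig; the zone name and value are examples:

    # Query the NFSv4 ID mapping domain published in DNS; the TXT
    # record value becomes the NFS domain when no Domain is set in
    # /etc/idmapd.conf
    dig +short TXT _nfsv4idmapdomain.example.com
    # expected output (example): "example.com"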
NFSv4.1 is now the default NFS mount protocol
Prior to this update, NFSv4.0 was the default NFS mount protocol. NFSv4.1 provides significant feature improvements over NFSv4.0, such as sessions, pNFS, parallel OPENs, and session trunking. With this update, NFSv4.1 is the default NFS mount protocol.
If you have already specified the mount protocol minor version, this update causes no change in behavior. It does change behavior if you have specified NFSv4 without a minor version and the server supports NFSv4.1; if the server supports only NFSv4.0, the mount remains an NFSv4.0 mount. You can retain the original behavior by specifying 0 as the minor version:
- on the mount command line,
- in the /etc/fstab file, or
- in the /etc/nfsmount.conf file. (BZ#1375259)
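For example, each of the following pins a mount to NFSv4.0; the server and paths are placeholders, and the nfsmount.conf key follows the format described in nfsmount.conf(5):

    # On the mount command line:
    mount -t nfs -o vers=4.0 server.example.com:/export /mnt/nfs

    # In /etc/fstab:
    # server.example.com:/export  /mnt/nfs  nfs  vers=4.0  0 0

    # In /etc/nfsmount.conf:
    # [ NFSMount_Global_Options ]
    # Defaultvers=4.0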
Setting nfs-utils configuration options has been centralized in nfs.conf
With this update, nfs-utils uses centralized configuration in the nfs.conf file, which is structured into stanzas for each nfs-utils program. Each nfs-utils program reads its configuration directly from the file, so you no longer need to run the systemctl restart nfs-config.service command; instead, restart only the specific service. For more information, see the nfs.conf(5) manual page.
For compatibility with earlier releases, the older /etc/sysconfig/nfs configuration method is still available. However, it is recommended to avoid specifying configuration settings in both the /etc/sysconfig/nfs and /etc/nfs.conf files. (BZ#1418041)
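A minimal sketch of the new layout; the stanza and value are illustrative:

    # /etc/nfs.conf (illustrative excerpt)
    [nfsd]
    # number of NFS server threads, read directly by rpc.nfsd
    threads = 16

    # Apply by restarting only the affected service, for example:
    # systemctl restart nfs-server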
Locking performance for NFSv4.1 mounts has been improved for certain workloads
NFSv4 clients poll the server at an interval to obtain a lock under contention. As a result, locking performance for contended locks on NFSv4 is slower than on NFSv3.
The CB_NOTIFY_LOCK operation has been added to the NFS client and server, so NFSv4.1 and later allow servers to call back to clients waiting on a lock.
This update improves locking performance for contended locks on NFSv4.1 mounts for certain workloads. Note that performance might not improve for longer lock contention times. (BZ#1377710)
The CephFS kernel client is fully supported with Red Hat Ceph Storage 3
The Ceph File System (CephFS) kernel module enables Red Hat Enterprise Linux nodes to mount Ceph File Systems from Red Hat Ceph Storage clusters. The kernel client in Red Hat Enterprise Linux is a more efficient alternative to the Filesystem in Userspace (FUSE) client included with Red Hat Ceph Storage. Note that the kernel client currently lacks support for CephFS quotas.
The CephFS kernel client was introduced in Red Hat Enterprise Linux 7.3 as a Technology Preview, and since the release of Red Hat Ceph Storage 3, CephFS is fully supported.
For more information, see the Ceph File System Guide for Red Hat Ceph Storage 3: https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/ceph_file_system_guide/. (BZ#1626527)
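As an illustration, a node might mount a Ceph File System with the kernel client as follows; the monitor address, user name, and secret file path are placeholders:

    # Kernel CephFS mount (monitor, user, and keyring are examples)
    mount -t ceph mon1.example.com:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret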