Managing file systems
Creating, modifying, and administering file systems
Chapter 1. Overview of available file systems
Choosing the file system that is appropriate for your application is an important decision due to the large number of options available and the trade-offs involved.
The following sections describe the file systems that Red Hat Enterprise Linux 10 includes by default and provide recommendations on the most suitable file system for your application.
1.1. Types of file systems
Red Hat Enterprise Linux 10 supports a variety of file systems (FS). Different types of file systems solve different kinds of problems, and their usage is application specific.
At the most general level, available file systems can be grouped into the following major types:
| Type | File system | Attributes and use cases |
|---|---|---|
| Disk or local FS | XFS | XFS is the default file system in RHEL. Red Hat recommends deploying XFS as your local file system unless there are specific reasons to do otherwise: for example, compatibility or corner cases around performance. |
| Disk or local FS | ext4 | ext4 has the benefit of familiarity in Linux, having evolved from the older ext2 and ext3 file systems. In many cases, it rivals XFS on performance. Support limits for ext4 file system and file sizes are lower than those on XFS. |
| Network or client-and-server FS | NFS | Use NFS to share files between multiple systems on the same network. |
| Network or client-and-server FS | SMB | Use SMB for file sharing with Microsoft Windows systems. |
| Volume-managing FS | Stratis | Stratis is a volume manager built on a combination of XFS and LVM. The purpose of Stratis is to emulate capabilities offered by volume-managing file systems like Btrfs and ZFS. It is possible to build this stack manually, but Stratis reduces configuration complexity, implements best practices, and consolidates error information. |
1.2. Local file systems
Local file systems are file systems that run on a single, local server and are directly attached to storage.
For example, a local file system is the only choice for internal SATA or SAS disks, and is used when your server has internal hardware RAID controllers with local drives. Local file systems are also the most common file systems used on SAN attached storage when the device exported on the SAN is not shared.
All local file systems are POSIX-compliant and provide support for a well-defined set of system calls, such as read(), write(), and seek().
Choose a file system based on how large it needs to be, what unique features it requires, and how it performs under your workload.
- Available local file systems
- XFS
- ext4
1.3. The XFS file system
XFS is a highly scalable, high-performance, robust, and mature 64-bit journaling file system that supports very large files and file systems on a single host. It is the default file system in Red Hat Enterprise Linux 10. XFS was originally developed in the early 1990s by SGI and has a long history of running on extremely large servers and storage arrays.
The features of XFS include:
- Reliability
- Metadata journaling, which ensures file system integrity after a system crash by keeping a record of file system operations that can be replayed when the system is restarted and the file system remounted
- Extensive run-time metadata consistency checking
- Scalable and fast repair utilities
- Quota journaling. This avoids the need for lengthy quota consistency checks after a crash.
- Scalability and performance
- Supported file system size up to 1024 TiB
- Ability to support a large number of concurrent operations
- B-tree indexing for scalability of free space management
- Sophisticated metadata read-ahead algorithms
- Optimizations for streaming video workloads
- Allocation schemes
- Extent-based allocation
- Stripe-aware allocation policies
- Delayed allocation
- Space pre-allocation
- Dynamically allocated inodes
- Other features
- Reflink-based file copies
- Tightly integrated backup and restore utilities
- Online defragmentation
- Online file system growing
- Comprehensive diagnostics capabilities
- Extended attributes (xattr). This allows the system to associate several additional name/value pairs per file.
- Project or directory quotas. This allows quota restrictions over a directory tree.
- Subsecond timestamps
- Performance characteristics
XFS has a high performance on large systems with enterprise workloads. A large system is one with a relatively high number of CPUs, multiple HBAs, and connections to external disk arrays. XFS also performs well on smaller systems that have a multi-threaded, parallel I/O workload.
XFS performs comparably well on smaller systems, but is more focused on scalability and large data sets.
1.4. The ext4 file system
The ext4 file system is the fourth generation of the ext file system family. The ext4 driver can read and write to ext2 and ext3 file systems, but the ext4 file system format is not compatible with ext2 and ext3 drivers.
ext4 adds several new and improved features, such as:
- Supported file system size up to 50 TiB
- Extent-based metadata
- Delayed allocation
- Journal checksumming
- Large storage support
The extent-based metadata and the delayed allocation features provide a more compact and efficient way to track utilized space in a file system. These features improve file system performance and reduce the space consumed by metadata. Delayed allocation allows the file system to postpone selection of the permanent location for newly written user data until the data is flushed to disk. This enables higher performance since it can allow for larger, more contiguous allocations, allowing the file system to make decisions with much better information.
File system repair time using the fsck utility in ext4 is much faster than in ext2 and ext3. Some file system repairs have demonstrated up to a six-fold increase in performance.
1.5. Comparison of XFS and ext4
XFS is the default file system in RHEL. This section compares the usage and features of XFS and ext4.
- Metadata error behavior
In ext4, you can configure the behavior when the file system encounters metadata errors. The default behavior is to simply continue the operation. When XFS encounters an unrecoverable metadata error, it shuts down the file system and returns the EFSCORRUPTED error. XFS also supports configurable error handling. For more information, see configurable error handling in XFS.
- Quotas
In ext4, you can enable quotas when creating the file system or later on an existing file system. You can then configure the quota enforcement using a mount option.
XFS quotas are not a remountable option. You must activate quotas on the initial mount.
Running the quotacheck command on an XFS file system has no effect. The first time you turn on quota accounting, XFS checks quotas automatically.
- File system resize
XFS has no utility to reduce the size of a file system. You can only increase the size of an XFS file system. In comparison, ext4 supports both extending and reducing the size of a file system; however, shrinking is only an offline operation.
- Inode numbers
The ext4 file system does not support more than 2^32 inodes.
XFS supports dynamic inode allocation. The amount of space inodes can consume on an XFS file system is calculated as a percentage of the total file system space. To prevent the system from running out of inodes, an administrator can tune this percentage after the file system has been created, provided there is free space left on the file system.
Certain applications cannot properly handle inode numbers larger than 2^32 on an XFS file system. These applications might cause the failure of 32-bit stat calls with the EOVERFLOW return value. Inode numbers exceed 2^32 under the following conditions:
- The file system is larger than 1 TiB with 256-byte inodes.
- The file system is larger than 2 TiB with 512-byte inodes.
If your application fails with large inode numbers, mount the XFS file system with the -o inode32 option to enforce inode numbers below 2^32. Note that using inode32 does not affect inodes that are already allocated with 64-bit numbers.
Important: Do not use the inode32 option unless a specific environment requires it. The inode32 option changes allocation behavior. As a consequence, the ENOSPC error might occur if no space is available to allocate inodes in the lower disk blocks.
1.6. Choosing a local file system
To choose a file system that meets your application requirements, you must understand the target system on which you will deploy the file system. In general, use XFS unless you have a specific use case for ext4.
- XFS
- For large-scale deployments, use XFS, particularly when handling large files (hundreds of megabytes) and high I/O concurrency. XFS performs optimally in environments with high bandwidth (greater than 200MB/s) and more than 1000 IOPS. However, it consumes more CPU resources for metadata operations compared to ext4 and does not support file system shrinking.
- ext4
- For smaller systems or environments with limited I/O bandwidth, ext4 might be a better fit. It performs better in single-threaded, lower I/O workloads and environments with lower throughput requirements. ext4 also supports offline shrinking, which can be beneficial if resizing the file system is a requirement.
Benchmark your application’s performance on your target server and storage system to ensure the selected file system meets your performance and scalability requirements.
| Scenario | Recommended file system |
|---|---|
| No special use case | XFS |
| Large server | XFS |
| Large storage devices | XFS |
| Large files | XFS |
| Multi-threaded I/O | XFS |
| Single-threaded I/O | XFS, ext4 |
| Limited I/O capability (under 1000 IOPS) | XFS, ext4 |
| Limited bandwidth (under 200MB/s) | XFS, ext4 |
| CPU-bound workload | XFS, ext4 |
| Support for offline shrinking | ext4 |
1.7. Network file systems
Network file systems, also referred to as client/server file systems, enable client systems to access files that are stored on a shared server. This makes it possible for multiple users on multiple systems to share files and storage resources.
Such file systems are built from one or more servers that export a set of file systems to one or more clients. The client nodes do not have access to the underlying block storage, but rather interact with the storage using a protocol that allows for better access control.
- Available network file systems
- The most common client/server file system for RHEL customers is the NFS file system. RHEL provides both an NFS server component to export a local file system over the network and an NFS client to import these file systems.
- RHEL also includes a CIFS client that supports the popular Microsoft SMB file servers for Windows interoperability. The userspace Samba server provides Windows clients with a Microsoft SMB service from a RHEL server.
1.10. Volume-managing file systems
Volume-managing file systems integrate the entire storage stack for the purposes of simplicity and in-stack optimization.
- Available volume-managing file systems
- Red Hat Enterprise Linux 10 provides the Stratis volume manager. Stratis uses XFS for the file system layer and integrates it with LVM, Device Mapper, and other components.
Stratis was first released in Red Hat Enterprise Linux 8.0 and was conceived to fill the gap created when Red Hat deprecated Btrfs. Stratis 1.0 is an intuitive, command line-based volume manager that can perform significant storage management operations while hiding the complexity from the user:
- Volume management
- Pool creation
- Thin storage pools
- Snapshots
- Automated read cache
Stratis offers powerful features, but currently lacks certain capabilities of other offerings that it might be compared to, such as Btrfs or ZFS. Most notably, it does not support CRCs with self-healing.
Chapter 2. Managing local storage by using RHEL system roles
To manage LVM and local file systems (FS) by using Ansible, you can use the storage role.
Using the storage role enables you to automate administration of file systems on disks and logical volumes on multiple machines.
For more information about RHEL system roles and how to apply them, see Introduction to RHEL system roles.
2.1. Creating an XFS file system on a block device by using the storage RHEL system role
You can use the storage RHEL system role to automate the creation of an XFS file system on block devices.
The storage role can create a file system only on an unpartitioned, whole disk or a logical volume (LV). It cannot create the file system on a partition.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content:
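A minimal sketch of such a playbook, assuming an unused whole disk named sdb on the managed node (adjust the host and device names to your environment):

```yaml
---
- name: Manage local storage
  hosts: managed-node-01.example.com
  tasks:
    - name: Create an XFS file system on an unpartitioned disk
      ansible.builtin.include_role:
        name: rhel-system-roles.storage
      vars:
        storage_volumes:
          # The volume name is arbitrary; the role identifies the volume
          # by the device listed under the disks attribute.
          - name: barefs
            type: disk
            disks:
              - sdb        # assumed unused disk on the managed node
            fs_type: xfs   # optional; XFS is the default file system
```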
The settings specified in the example playbook include the following:

name: barefs
The volume name (barefs in the example) is currently arbitrary. The storage role identifies the volume by the disk device listed under the disks attribute.

fs_type: <file_system>
You can omit the fs_type parameter if you want to use the default file system, XFS.

disks: <list_of_disks_and_volumes>
A YAML list of disk and LV names.
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node.

Validate the playbook syntax:

$ ansible-playbook --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

Run the playbook:

$ ansible-playbook ~/playbook.yml
2.2. Persistently mounting a file system by using the storage RHEL system role
You can use the storage RHEL system role to persistently mount file systems to ensure they remain available across system reboots and are automatically mounted on startup. If the file system on the device you specified in the playbook does not exist, the role creates it.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content:
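A minimal sketch of such a playbook, assuming a disk named sdb and the mount point /mnt/data as placeholders:

```yaml
---
- name: Manage local storage
  hosts: managed-node-01.example.com
  tasks:
    - name: Mount a file system persistently
      ansible.builtin.include_role:
        name: rhel-system-roles.storage
      vars:
        storage_volumes:
          - name: barefs
            type: disk
            disks:
              - sdb                  # assumed disk on the managed node
            fs_type: xfs
            mount_point: /mnt/data   # assumed mount point; added to /etc/fstab by the role
```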
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node.

Validate the playbook syntax:

$ ansible-playbook --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

Run the playbook:

$ ansible-playbook ~/playbook.yml
2.3. Creating or resizing a logical volume by using the storage RHEL system role
You can use the storage RHEL system role to create and resize LVM logical volumes. The role automatically creates volume groups if they do not exist.
Use the storage role to perform the following tasks:
- To create an LVM logical volume in a volume group consisting of many disks
- To resize an existing file system on LVM
- To express an LVM volume size in percentage of the pool’s total size
If the volume group does not exist, the role creates it. If a logical volume exists in the volume group, it is resized if the size does not match what is specified in the playbook.
If you are reducing a logical volume, to prevent data loss you must ensure that the file system on that logical volume is not using the space in the logical volume that is being reduced.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content:
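A minimal sketch of such a playbook, assuming a volume group named myvg on three placeholder disks and a logical volume named mylv:

```yaml
---
- name: Manage local storage
  hosts: managed-node-01.example.com
  tasks:
    - name: Create or resize a logical volume in a volume group
      ansible.builtin.include_role:
        name: rhel-system-roles.storage
      vars:
        storage_pools:
          - name: myvg            # created if it does not exist
            disks:
              - sda
              - sdb
              - sdc
            volumes:
              - name: mylv
                size: 2 GiB       # or a percentage of the pool, for example "60%"
                fs_type: ext4
                mount_point: /mnt/data
```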
The settings specified in the example playbook include the following:

size: <size>
You must specify the size by using units (for example, GiB) or a percentage (for example, 60%).

For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node.

Validate the playbook syntax:

$ ansible-playbook --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

Run the playbook:

$ ansible-playbook ~/playbook.yml
Verification
Verify that the specified volume has been created or resized to the requested size:

# ansible managed-node-01.example.com -m command -a 'lvs myvg'
2.4. Enabling online block discard by using the storage RHEL system role
You can mount an XFS file system with the online block discard option to automatically discard unused blocks.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content:
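A minimal sketch of such a playbook; the disk name, mount point, and the mount_options variable used to pass the discard option are assumptions to verify against the role's README.md:

```yaml
---
- name: Manage local storage
  hosts: managed-node-01.example.com
  tasks:
    - name: Mount an XFS file system with online block discard enabled
      ansible.builtin.include_role:
        name: rhel-system-roles.storage
      vars:
        storage_volumes:
          - name: barefs
            type: disk
            disks:
              - sdb                  # assumed disk on the managed node
            fs_type: xfs
            mount_point: /mnt/data
            mount_options: discard   # assumed variable for the online discard mount option
```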
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node.

Validate the playbook syntax:

$ ansible-playbook --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

Run the playbook:

$ ansible-playbook ~/playbook.yml
Verification
Verify that the online block discard option is enabled:

# ansible managed-node-01.example.com -m command -a 'findmnt /mnt/data'
2.5. Creating and mounting a file system by using the storage RHEL system role
You can use the storage RHEL system role to create and mount file systems that persist across reboots. The role automatically adds entries to /etc/fstab to ensure persistent mounting.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content:
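A minimal sketch of such a playbook, assuming a disk named sdb, an ext4 file system, and the mount point /mnt/data:

```yaml
---
- name: Manage local storage
  hosts: managed-node-01.example.com
  tasks:
    - name: Create and persistently mount an ext4 file system
      ansible.builtin.include_role:
        name: rhel-system-roles.storage
      vars:
        storage_volumes:
          - name: barefs
            type: disk
            disks:
              - sdb               # assumed device on the managed node
            fs_type: ext4
            mount_point: /mnt/data
```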
The settings specified in the example playbook include the following:

disks: <list_of_devices>
A YAML list of device names that the role uses when it creates the volume.

fs_type: <file_system>
Specifies the file system the role should set on the volume. You can select xfs, ext3, ext4, swap, or unformatted.

label-name: <file_system_label>
Optional: sets the label of the file system.

mount_point: <directory>
Optional: if the volume should be automatically mounted, set the mount_point variable to the directory to which the volume should be mounted.

For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node.

Validate the playbook syntax:

$ ansible-playbook --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

Run the playbook:

$ ansible-playbook ~/playbook.yml
2.6. Configuring a RAID volume by using the storage RHEL system role
With the storage system role, you can configure a RAID volume on RHEL by using Red Hat Ansible Automation Platform and Ansible-Core. Create an Ansible Playbook with the parameters to configure a RAID volume to suit your requirements.
Device names might change in certain circumstances, for example, when you add a new disk to a system. Therefore, to prevent data loss, use persistent naming attributes in the playbook. For more information about persistent naming attributes, see Persistent naming attributes.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content:
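A minimal sketch of such a playbook; the RAID level, member disks, and mount point are assumptions, and the volume name data matches the /dev/md/data device checked in the verification step:

```yaml
---
- name: Manage local storage
  hosts: managed-node-01.example.com
  tasks:
    - name: Create a RAID volume
      ansible.builtin.include_role:
        name: rhel-system-roles.storage
      vars:
        storage_volumes:
          - name: data            # results in /dev/md/data
            type: raid
            raid_level: raid1     # assumed RAID level
            disks:                # prefer persistent naming attributes here
              - sdd
              - sde
            mount_point: /mnt/data
            state: present
```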
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node.

Validate the playbook syntax:

$ ansible-playbook --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

Run the playbook:

$ ansible-playbook ~/playbook.yml
Verification
Verify that the array was correctly created:
# ansible managed-node-01.example.com -m command -a 'mdadm --detail /dev/md/data'
2.7. Configuring an LVM volume group on RAID by using the storage RHEL system role
You can use the storage RHEL system role to configure LVM volume groups on RAID arrays.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content:
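A minimal sketch of such a playbook; the pool name, disks, RAID level, and volume settings are placeholder assumptions:

```yaml
---
- name: Manage local storage
  hosts: managed-node-01.example.com
  tasks:
    - name: Configure an LVM volume group on top of RAID
      ansible.builtin.include_role:
        name: rhel-system-roles.storage
      vars:
        storage_pools:
          - name: my_pool
            type: lvm
            disks:
              - sdd
              - sde
            raid_level: raid1    # at the pool level: MD RAID first, then the volume group on top
            volumes:
              - name: my_volume
                size: 3 GiB
                fs_type: xfs
                mount_point: /mnt/data
                state: present
```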
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node.

Note: Setting raid_level at the storage_pool level creates an MD RAID array first, and then builds an LVM volume group on top of it.

Validate the playbook syntax:

$ ansible-playbook --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

Run the playbook:

$ ansible-playbook ~/playbook.yml
Verification
Verify that your pool is on RAID:
# ansible managed-node-01.example.com -m command -a 'lsblk'
2.8. Configuring a stripe size for RAID LVM volumes by using the storage RHEL system role
You can use the storage RHEL system role to configure stripe sizes for RAID LVM volumes.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content:
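A minimal sketch of such a playbook; the raid_stripe_size variable, the RAID level, and the sizes are assumptions to verify against the role's README.md, while the pool and volume names match the device checked in the verification step:

```yaml
---
- name: Manage local storage
  hosts: managed-node-01.example.com
  tasks:
    - name: Configure a stripe size for a RAID LVM volume
      ansible.builtin.include_role:
        name: rhel-system-roles.storage
      vars:
        storage_pools:
          - name: my_pool
            type: lvm
            disks:
              - sdd
              - sde
            volumes:
              - name: my_volume
                size: 3 GiB
                raid_level: raid0            # at the volumes level: creates an LVM RAID logical volume
                raid_stripe_size: "256 KiB"  # assumed stripe-size variable and value
                fs_type: xfs
                mount_point: /mnt/data
                state: present
```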
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node.

Note: Setting raid_level at the volumes level creates LVM RAID logical volumes.

Validate the playbook syntax:

$ ansible-playbook --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

Run the playbook:

$ ansible-playbook ~/playbook.yml
Verification
Verify that the stripe size is set to the required size:

# ansible managed-node-01.example.com -m command -a 'lvs -o+stripesize /dev/my_pool/my_volume'
2.9. Configuring an LVM-VDO volume by using the storage RHEL system role
You can use the storage RHEL system role to create a VDO volume on LVM (LVM-VDO) with enabled compression and deduplication.
Because the storage system role uses LVM-VDO, only one volume can be created per pool.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content:
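A minimal sketch of such a playbook; the pool and volume names match the verification output below, while the disk, sizes, and mount point are placeholder assumptions:

```yaml
---
- name: Manage local storage
  hosts: managed-node-01.example.com
  tasks:
    - name: Create an LVM-VDO volume with compression and deduplication
      ansible.builtin.include_role:
        name: rhel-system-roles.storage
      vars:
        storage_pools:
          - name: myvg
            disks:
              - sdb                    # assumed disk on the managed node
            volumes:
              - name: mylv1
                compression: true      # enable VDO compression
                deduplication: true    # enable VDO deduplication
                vdo_pool_size: 10 GiB  # physical space used on the device
                size: 3 TiB            # virtual size presented to users
                mount_point: /mnt/app  # assumed mount point
```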
The settings specified in the example playbook include the following:

vdo_pool_size: <size>
The actual size that the volume takes on the device. You can specify the size in human-readable format, such as 10 GiB. If you do not specify a unit, it defaults to bytes.

size: <size>
The virtual size of the VDO volume.

For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node.

Validate the playbook syntax:

$ ansible-playbook --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

Run the playbook:

$ ansible-playbook ~/playbook.yml
Verification
View the current status of compression and deduplication:
$ ansible managed-node-01.example.com -m command -a 'lvs -o+vdo_compression,vdo_compression_state,vdo_deduplication,vdo_index_state'

  LV    VG   Attr       LSize Pool   Origin Data% Meta% Move Log Cpy%Sync Convert VDOCompression VDOCompressionState VDODeduplication VDOIndexState
  mylv1 myvg vwi-a-v--- 3.00t vpool0                                              enabled        online              enabled          online
2.10. Creating a LUKS2 encrypted volume by using the storage RHEL system role
You can use the storage role to create and configure a volume encrypted with LUKS by running an Ansible Playbook.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>

After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

luks_password: <password>

- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content:
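A minimal sketch of such a playbook; the disk, file system, and mount point are placeholder assumptions, and the vaulted luks_password variable comes from the ~/vault.yml file created above:

```yaml
---
- name: Manage local storage
  hosts: managed-node-01.example.com
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Create a LUKS2 encrypted volume
      ansible.builtin.include_role:
        name: rhel-system-roles.storage
      vars:
        storage_volumes:
          - name: barefs
            type: disk
            disks:
              - sdb                  # assumed disk on the managed node
            fs_type: xfs
            mount_point: /mnt/data
            encryption: true
            encryption_password: "{{ luks_password }}"
            encryption_luks_version: luks2
```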
The settings specified in the example playbook include the following:

encryption_cipher: <cipher>
Specifies the LUKS cipher. Possible values are: twofish-xts-plain64, serpent-xts-plain64, and aes-xts-plain64 (default).

encryption_key_size: <key_size>
Specifies the LUKS key size. The default is 512 bits.

encryption_luks_version: luks2
Specifies the LUKS version. The default is luks2.

For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node.

Validate the playbook syntax:

$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

Run the playbook:

$ ansible-playbook --ask-vault-pass ~/playbook.yml
Verification
Verify the created LUKS encrypted volume:
2.12. Resizing physical volumes by using the storage RHEL system role
With the storage system role, you can resize LVM physical volumes after resizing the underlying storage or disks from outside of the host. For example, you increased the size of a virtual disk and want to use the extra space in an existing LVM.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- The size of the underlying block storage has been changed.
Procedure
Create a playbook file, for example, ~/playbook.yml, with the following content:
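A minimal sketch of such a playbook; the volume group and disk names match the verification steps below and are otherwise placeholders:

```yaml
---
- name: Manage local storage
  hosts: managed-node-01.example.com
  tasks:
    - name: Resize LVM physical volumes after the underlying disks have grown
      ansible.builtin.include_role:
        name: rhel-system-roles.storage
      vars:
        storage_pools:
          - name: myvg
            type: lvm
            disks:
              - sdf              # assumed disk whose size was increased
            grow_to_fill: true   # expand the physical volumes to use the new capacity
```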
The settings specified in the example playbook include the following:

grow_to_fill
true: The role automatically expands the storage volume to use any new capacity on the disk.
false: The role leaves the storage volume at its current size, even if the underlying disk has grown.

For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node.

Validate the playbook syntax:

$ ansible-playbook --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.

Run the playbook:

$ ansible-playbook ~/playbook.yml
Verification
Verify that the grow_to_fill setting works as expected. Prepare a test PV and VG:

# pvcreate /dev/sdf
# vgcreate myvg /dev/sdf

Check and record the initial physical volume size:

# pvs

- Edit the playbook to set grow_to_fill: false and run the playbook.
- Check the volume size and verify that it remained unchanged.
- Edit the playbook to set grow_to_fill: true and re-run the playbook.
- Check the volume size and verify that it has expanded.
Chapter 3. Managing partitions using the web console
You can manage file systems on RHEL 10 using the web console.
3.1. Displaying partitions formatted with file systems in the web console
The Storage section in the web console displays all available file systems in the Filesystems table.
In addition to viewing the list of partitions formatted with file systems, you can also use the page to create new storage.
Prerequisites
- The cockpit-storaged package is installed on your system.
- You have installed the RHEL 10 web console. For instructions, see Installing and enabling the web console.
Procedure
- Log in to the RHEL 10 web console.
Click the Storage tab.
In the Storage table, you can see all available partitions formatted with file systems, their ID, types, locations, sizes, and how much space is available on each partition.
You can also use the drop-down menu in the top-right corner to create new local or networked storage.
3.2. Creating partitions in the web console
To create a new partition:
- Use an existing partition table
- Create a partition
Prerequisites
- The cockpit-storaged package is installed on your system.
- You have installed the RHEL 10 web console. For instructions, see Installing and enabling the web console.
- An unformatted volume connected to the system is visible in the Storage table of the Storage tab.
Procedure
- Log in to the RHEL 10 web console.
- Click the Storage tab.
- In the Storage table, click the device which you want to partition to open the page and options for that device.
- On the device page, click the menu button and select Create partition table.
In the Initialize disk dialog box, select the following:
Partitioning:
- Compatible with all systems and devices (MBR)
- Compatible with modern system and hard disks > 2TB (GPT)
- No partitioning
Overwrite:
Select the Overwrite existing data with zeros checkbox if you want the RHEL web console to rewrite the whole disk with zeros. This option is slower because the program has to go through the whole disk, but it is more secure. Use this option if the disk includes any data and you need to overwrite it.
If you do not select the Overwrite existing data with zeros checkbox, the RHEL web console rewrites only the disk header. This increases the speed of formatting.
- Click the button to confirm the initialization.
- Click the menu button next to the partition table you created. It is named Free space by default.
- Click Create partition.
- In the Create partition dialog box, enter a Name for the file system.
- Add a Mount point.
In the Type drop-down menu, select a file system:
- The XFS file system supports large logical volumes, switching physical drives online without outage, and growing an existing file system. Leave this file system selected if you do not have a different strong preference.
- The ext4 file system supports:
- Logical volumes
- Switching physical drives online without outage
- Growing a file system
- Shrinking a file system
An additional option is to enable encryption of the partition with LUKS (Linux Unified Key Setup), which allows you to encrypt the volume with a passphrase.
- Enter the Size of the volume you want to create.
Select the Overwrite existing data with zeros checkbox if you want the RHEL web console to rewrite the whole disk with zeros. This option is slower because the program has to go through the whole disk, but it is more secure. Use this option if the disk includes any data and you need to overwrite it.
If you do not select the Overwrite existing data with zeros checkbox, the RHEL web console rewrites only the disk header. This increases the speed of formatting.
If you want to encrypt the volume, select the type of encryption in the Encryption drop-down menu.
If you do not want to encrypt the volume, select No encryption.
- In the At boot drop-down menu, select when you want to mount the volume.
In the Mount options section:
- Select the Mount read only checkbox if you want to mount the volume as a read-only logical volume.
- Select the Custom mount options checkbox and add the mount options if you want to change the default mount options.
Create the partition:
- If you want to create and mount the partition, click the button.
If you want to only create the partition, click the button.
Formatting can take several minutes depending on the volume size and which formatting options are selected.
Verification
- To verify that the partition has been successfully added, switch to the Storage tab and check whether the new partition is listed in the Storage table.
3.3. Deleting partitions in the web console
You can remove partitions in the web console interface.
Prerequisites
- The cockpit-storaged package is installed on your system.
- You have installed the RHEL 10 web console. For instructions, see Installing and enabling the web console.
Procedure
- Log in to the RHEL 10 web console.
- Click the Storage tab.
- Click the device from which you want to delete a partition.
- On the device page and in the GPT partitions section, click the menu button next to the partition you want to delete.
From the drop-down menu, select the option to delete the partition.
The RHEL web console terminates all processes that are currently using the partition and unmounts the partition before deleting it.
Verification
- To verify that the partition has been successfully removed, switch to the Storage tab and check the Storage table.
Chapter 6. Overview of persistent naming attributes
As a system administrator, you need to refer to storage volumes using persistent naming attributes to build storage setups that are reliable over multiple system boots.
6.1. Disadvantages of non-persistent naming attributes
Red Hat Enterprise Linux provides a number of ways to identify storage devices. It is important to use the correct option to identify each device in order to avoid inadvertently accessing the wrong device, particularly when installing to or reformatting drives.
Traditionally, non-persistent names in the form of /dev/sd(major number)(minor number) are used on Linux to refer to storage devices. The major and minor number range and associated sd names are allocated for each device when it is detected. This means that the association between the major and minor number range and associated sd names can change if the order of device detection changes.
Such a change in the ordering might occur in the following situations:
- The parallelization of the system boot process detects storage devices in a different order with each system boot.
- A disk fails to power up or respond to the SCSI controller. This results in it not being detected by the normal device probe. The disk is not accessible to the system, and subsequent devices have their major and minor number range, including the associated sd names, shifted down. For example, if a disk normally referred to as sdb is not detected, a disk that is normally referred to as sdc would instead appear as sdb.
- A SCSI controller (host bus adapter, or HBA) fails to initialize, causing all disks connected to that HBA to not be detected. Any disks connected to subsequently probed HBAs are assigned different major and minor number ranges, and different associated sd names.
- The order of driver initialization changes if different types of HBAs are present in the system. This causes the disks connected to those HBAs to be detected in a different order. This might also occur if HBAs are moved to different PCI slots on the system.
- Disks connected to the system with Fibre Channel, iSCSI, or FCoE adapters might be inaccessible at the time the storage devices are probed, due to a storage array or intervening switch being powered off, for example. This might occur when a system reboots after a power failure, if the storage array takes longer to come online than the system takes to boot. Although some Fibre Channel drivers support a mechanism to specify a persistent SCSI target ID to WWPN mapping, this does not cause the major and minor number ranges, and the associated sd names, to be reserved; it only provides consistent SCSI target ID numbers.
These reasons make it undesirable to use the major and minor number range or the associated sd names when referring to devices, such as in the /etc/fstab file. There is the possibility that the wrong device will be mounted and data corruption might result.
Occasionally, however, it is still necessary to refer to the sd names even when another mechanism is used, such as when errors are reported by a device. This is because the Linux kernel uses sd names (and also SCSI host/channel/target/LUN tuples) in kernel messages regarding the device.
6.2. File system and device identifiers
File system identifiers are tied to the file system itself, while device identifiers are linked to the physical block device. Understanding the difference is important for proper storage management.
File system identifiers
File system identifiers are tied to a particular file system created on a block device. The identifier is also stored as part of the file system. If you copy the file system to a different device, it still carries the same file system identifier. However, if you rewrite the device, such as by formatting it with the mkfs utility, the device loses the attribute.
File system identifiers include:
- Unique identifier (UUID)
- Label
Device identifiers
Device identifiers are tied to a block device: for example, a disk or a partition. If you rewrite the device, such as by formatting it with the mkfs utility, the device keeps the attribute, because it is not stored in the file system.
Device identifiers include:
- World Wide Identifier (WWID)
- Partition UUID
- Serial number
Recommendations
- Some file systems, such as logical volumes, span multiple devices. Red Hat recommends accessing these file systems using file system identifiers rather than device identifiers.
6.3. Device names managed by the udev mechanism in /dev/disk/
The udev mechanism is used for all types of devices in Linux, and is not limited to storage devices. It provides different kinds of persistent naming attributes in the /dev/disk/ directory. In the case of storage devices, Red Hat Enterprise Linux contains udev rules that create symbolic links in the /dev/disk/ directory.
This mechanism enables you to refer to storage devices by:
- Their content
- A unique identifier
- Their serial number.
Although udev naming attributes are persistent, in that they do not change on their own across system reboots, some are also configurable.
6.3.1. File system identifiers
The UUID attribute in /dev/disk/by-uuid/
Entries in this directory provide a symbolic name that refers to the storage device by a unique identifier (UUID) in the content (that is, the data) stored on the device. For example:
/dev/disk/by-uuid/3e6be9de-8139-11d1-9106-a43f08d823a6
/dev/disk/by-uuid/3e6be9de-8139-11d1-9106-a43f08d823a6
You can use the UUID to refer to the device in the /etc/fstab file using the following syntax:
UUID=3e6be9de-8139-11d1-9106-a43f08d823a6
UUID=3e6be9de-8139-11d1-9106-a43f08d823a6
You can configure the UUID attribute when creating a file system, and you can also change it later on.
The Label attribute in /dev/disk/by-label/
Entries in this directory provide a symbolic name that refers to the storage device by a label in the content (that is, the data) stored on the device. For example:
/dev/disk/by-label/Boot
/dev/disk/by-label/Boot
You can use the label to refer to the device in the /etc/fstab file using the following syntax:
LABEL=Boot
LABEL=Boot
You can configure the Label attribute when creating a file system, and you can also change it later on.
6.3.2. Device identifiers
The WWID attribute in /dev/disk/by-id/
The World Wide Identifier (WWID) is a persistent, system-independent identifier that the SCSI Standard requires from all SCSI devices. The WWID identifier is guaranteed to be unique for every storage device, and independent of the path that is used to access the device. The identifier is a property of the device but is not stored in the content (that is, the data) on the devices.
This identifier can be obtained by issuing a SCSI Inquiry to retrieve the Device Identification Vital Product Data (page 0x83) or Unit Serial Number (page 0x80).
Red Hat Enterprise Linux automatically maintains the proper mapping from the WWID-based device name to a current /dev/sd name on that system. Applications can use the /dev/disk/by-id/ name to reference the data on the disk, even if the path to the device changes, and even when accessing the device from different systems.
If you are using an NVMe device, you might run into a disk by-id naming change for some vendors, if the serial number of your device has leading whitespace.
Example of WWID mappings: the /dev/disk/by-id/ directory contains WWID symlinks for devices with a page 0x83 identifier, devices with a page 0x80 identifier, and disk partitions, each mapped to its corresponding non-persistent device name.
In addition to these persistent names provided by the system, you can also use udev rules to implement persistent names of your own, mapped to the WWID of the storage.
The Partition UUID attribute in /dev/disk/by-partuuid
The Partition UUID (PARTUUID) attribute identifies partitions as defined in the GPT partition table. Entries in this directory are symbolic links named after the partition UUID that point to the corresponding non-persistent partition devices.
The Path attribute in /dev/disk/by-path/
This attribute provides a symbolic name that refers to the storage device by the hardware path used to access the device.
The Path attribute fails if any part of the hardware path (for example, the PCI ID, target port, or LUN number) changes. The Path attribute is therefore unreliable. However, the Path attribute may be useful in one of the following scenarios:
- You need to identify a disk that you are planning to replace later.
- You plan to install a storage service on a disk in a specific location.
6.4. The World Wide Identifier with DM Multipath
You can configure Device Mapper (DM) Multipath to map between the World Wide Identifier (WWID) and non-persistent device names.
If there are multiple paths from a system to a device, DM Multipath uses the WWID to detect this. DM Multipath then presents a single "pseudo-device" in the /dev/mapper/ directory, such as /dev/mapper/3600508b400105df70000e00000ac0000.
The command multipath -l shows the mapping to the non-persistent identifiers:
- Host:Channel:Target:LUN
- /dev/sd name
- major:minor number
Example 6.1. WWID mappings in a multipath configuration
An example output of the multipath -l command:
DM Multipath automatically maintains the proper mapping of each WWID-based device name to its corresponding /dev/sd name on the system. These names are persistent across path changes, and they are consistent when accessing the device from different systems.
When the user_friendly_names feature of DM Multipath is used, the WWID is mapped to a name of the form /dev/mapper/mpathN. By default, this mapping is maintained in the file /etc/multipath/bindings. These mpathN names are persistent as long as that file is maintained.
If you use user_friendly_names, then additional steps are required to obtain consistent names in a cluster.
6.5. Limitations of the udev device naming convention
There are some challenges and constraints involved with the udev device naming convention, including event timing, device accessibility, latency, naming stability, and potential conflicts with external processes in dynamic storage environments.
The following are some limitations of the udev naming convention:
- It is possible that the device might not be accessible at the time the query is performed because the udev mechanism might rely on the ability to query the storage device when the udev rules are processed for a udev event. This is more likely to occur with Fibre Channel, iSCSI, or FCoE storage devices when the device is not located in the server chassis.
- The kernel might send udev events at any time, causing the rules to be processed and possibly causing the /dev/disk/by-*/ links to be removed if the device is not accessible.
- There might be a delay between when the udev event is generated and when it is processed, such as when a large number of devices are detected and the user-space udevd service takes some amount of time to process the rules for each one. This might cause a delay between when the kernel detects the device and when the /dev/disk/by-*/ names are available.
- External programs such as blkid invoked by the rules might open the device for a brief period of time, making the device inaccessible for other uses.
- The device names managed by the udev mechanism in /dev/disk/ may change between major releases, requiring you to update the links.
6.6. Listing persistent naming attributes
You can find out the persistent naming attributes of non-persistent storage devices.
Procedure
To list the UUID and Label attributes, use the lsblk utility:

$ lsblk --fs storage-device

For example, view the UUID and label of a file system:

$ lsblk --fs /dev/sda1
NAME FSTYPE LABEL UUID                                 MOUNTPOINT
sda1 xfs    Boot  afa5d5e3-9050-48c3-acc1-bb30095f3dc4 /boot

To list the PARTUUID attribute, use the lsblk utility with the --output +PARTUUID option:

$ lsblk --output +PARTUUID

For example, view the PARTUUID attribute of a partition:

$ lsblk --output +PARTUUID /dev/sda1
NAME MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT PARTUUID
sda1   8:1    0  512M  0 part /boot      4cd1448a-01

To list the WWID attribute, examine the targets of symbolic links in the /dev/disk/by-id/ directory. For example, view the WWID of all storage devices on the system:
6.7. Modifying persistent naming attributes
You can change the UUID or Label persistent naming attribute of a file system.
Changing udev attributes happens in the background and might take a long time. The udevadm settle command waits until the change is fully registered, which ensures that your next command will be able to use the new attribute correctly.
In the following commands:
- Replace new-uuid with the UUID you want to set; for example, 1cdfbc07-1c90-4984-b5ec-f61943f5ea50. You can generate a UUID using the uuidgen command.
- Replace new-label with a label; for example, backup_data.
Prerequisites
- If you are modifying the attributes of an XFS file system, unmount it first.
Procedure
To change the UUID or Label attributes of an XFS file system, use the xfs_admin utility:

# xfs_admin -U new-uuid -L new-label storage-device
# udevadm settle

To change the UUID or Label attributes of an ext4, ext3, or ext2 file system, use the tune2fs utility:

# tune2fs -U new-uuid -L new-label storage-device
# udevadm settle

To change the UUID or Label attributes of a swap volume, use the swaplabel utility:

# swaplabel --uuid new-uuid --label new-label swap-device
# udevadm settle
Chapter 7. Partition operations with parted
parted is a program to manipulate disk partitions. It supports multiple partition table formats, including MS-DOS and GPT. It is useful for creating space for new operating systems, reorganizing disk usage, and copying data to new hard disks.
7.1. Viewing the partition table with parted
Display the partition table of a block device to see the partition layout and details about individual partitions. You can view the partition table on a block device by using the parted utility. For more information, see the parted(8) man page on your system.
Procedure
Start the parted utility. For example, the following output lists the device /dev/sda:

# parted /dev/sda

View the partition table:

(parted) print

Optional: Switch to the device you want to examine next:

(parted) select block-device

For a detailed description of the print command output, see the following:
Model: ATA SAMSUNG MZNLN256 (scsi)
The disk type, manufacturer, model number, and interface.

Disk /dev/sda: 256GB
The file path to the block device and the storage capacity.

Partition Table: msdos
The disk label type.

Number
The partition number. For example, the partition with minor number 1 corresponds to /dev/sda1.

Start and End
The location on the device where the partition starts and ends.

Type
Valid types are metadata, free, primary, extended, or logical.

File system
The file system type. If the File system field of a device shows no value, this means that its file system type is unknown. The parted utility cannot recognize the file system on encrypted devices.

Flags
Lists the flags set for the partition. The most commonly used flags are boot, root, swap, hidden, raid, lvm, or lba. For a complete list of flags, see the parted(8) man page on your system.
7.2. Creating a partition table on a disk with parted
Create a partition table on a disk to define the layout for organizing storage space into separate, manageable sections. This essential setup step enables you to create multiple partitions for different purposes and operating systems.
Formatting a block device with a partition table deletes all data stored on the device.
Procedure
Start the interactive
partedshell:parted block-device
# parted block-deviceCopy to Clipboard Copied! Toggle word wrap Toggle overflow Determine if there already is a partition table on the device:
(parted) print
(parted) printCopy to Clipboard Copied! Toggle word wrap Toggle overflow If the device already contains partitions, they will be deleted in the following steps.
Create the new partition table:
(parted) mklabel table-type
(parted) mklabel table-typeCopy to Clipboard Copied! Toggle word wrap Toggle overflow Replace table-type with with the intended partition table type:
-
msdosfor MBR gptfor GPTFor example to create a GPT table on the disk, use:
(parted) mklabel gpt
(parted) mklabel gptCopy to Clipboard Copied! Toggle word wrap Toggle overflow The changes start applying after you enter this command.
-
View the partition table to confirm that it is created:
(parted) print
(parted) printCopy to Clipboard Copied! Toggle word wrap Toggle overflow Exit the
partedshell:(parted) quit
(parted) quitCopy to Clipboard Copied! Toggle word wrap Toggle overflow
7.3. Creating a partition with parted Copy linkLink copied to clipboard!
Create new disk partitions to organize storage space efficiently and separate different types of data. This fundamental storage management task allows you to set up dedicated areas for system files, user data, and swap space.
Prerequisites
- A partition table on the disk.
- If the partition you want to create is larger than 2 TiB, format the disk with the GUID Partition Table (GPT).
The required partitions are swap, /boot/, and / (root).
Procedure
Start the
partedutility:parted block-device
# parted block-deviceCopy to Clipboard Copied! Toggle word wrap Toggle overflow View the current partition table to determine if there is enough free space:
(parted) print
(parted) printCopy to Clipboard Copied! Toggle word wrap Toggle overflow - Resize the partition in case there is not enough free space.
From the partition table, determine:
- The start and end points of the new partition.
- On MBR, what partition type it should be.
Create the new partition:
For MS-DOS:
(parted) mkpart part-type fs-type start end
(parted) mkpart part-type fs-type start endCopy to Clipboard Copied! Toggle word wrap Toggle overflow For GPT:
(parted) mkpart part-name fs-type start end
(parted) mkpart part-name fs-type start endCopy to Clipboard Copied! Toggle word wrap Toggle overflow -
Replace part-type with
primary,logical, orextended. This applies only to the MBR partition table. - Replace part-name with an arbitrary partition name. This is required for GPT partition tables.
-
Replace fs-type with
xfs,ext2,ext3,ext4,fat16,fat32,hfs,hfs+,linux-swap,ntfs, orreiserfs. The fs-type parameter is optional. Note that thepartedutility does not create the file system on the partition. Replace start and end with the sizes that determine the starting and ending points of the partition, counting from the beginning of the disk. You can use size suffixes, such as
512MiB,20GiB, or1.5TiB. The default size is in megabytes.For example, to create a primary partition from 1024 MiB until 2048 MiB on an MBR table, use:
(parted) mkpart primary 1024MiB 2048MiB
(parted) mkpart primary 1024MiB 2048MiBCopy to Clipboard Copied! Toggle word wrap Toggle overflow The changes start applying after you enter the command.
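On a GPT table, the first argument to mkpart is a partition name instead of a partition type. For example, to create a partition named data with an XFS file system type hint over the same range (the name here is only an illustration):
(parted) mkpart data xfs 1024MiB 2048MiB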
View the partition table to confirm that the created partition is in the partition table with the correct partition type, file system type, and size:
(parted) print
(parted) printCopy to Clipboard Copied! Toggle word wrap Toggle overflow Exit the
partedshell:(parted) quit
(parted) quitCopy to Clipboard Copied! Toggle word wrap Toggle overflow Verify that the kernel recognizes the new partition:
cat /proc/partitions
# cat /proc/partitionsCopy to Clipboard Copied! Toggle word wrap Toggle overflow
7.4. Removing a partition with parted Copy linkLink copied to clipboard!
Remove unnecessary disk partitions to reclaim storage space for other purposes. This operation helps you reorganize disk layout, eliminate unused partitions, and optimize storage utilization on your system.
Procedure
Start the interactive
partedshell:parted block-device
# parted block-deviceCopy to Clipboard Copied! Toggle word wrap Toggle overflow -
Replace block-device with the path to the device where you want to remove a partition: for example,
/dev/sda.
View the current partition table to determine the minor number of the partition to remove:
(parted) print
(parted) printCopy to Clipboard Copied! Toggle word wrap Toggle overflow Remove the partition:
(parted) rm partition-number
(parted) rm partition-numberCopy to Clipboard Copied! Toggle word wrap Toggle overflow - Replace partition-number with the partition number you want to remove.
The changes start applying as soon as you enter this command.
Verify that you have removed the partition from the partition table:
(parted) print
(parted) printCopy to Clipboard Copied! Toggle word wrap Toggle overflow Exit the
partedshell:(parted) quit
(parted) quitCopy to Clipboard Copied! Toggle word wrap Toggle overflow Verify that the kernel registers that the partition is removed:
cat /proc/partitions
# cat /proc/partitionsCopy to Clipboard Copied! Toggle word wrap Toggle overflow -
Remove the partition from the
/etc/fstabfile, if it is present. Find the line that declares the removed partition, and remove it from the file. Regenerate mount units so that your system registers the new
/etc/fstabconfiguration:systemctl daemon-reload
# systemctl daemon-reloadCopy to Clipboard Copied! Toggle word wrap Toggle overflow ImportantTo remove a partition mentioned in
/proc/cmdlineor that is part of an LVM, see Configuring and managing logical volumes, and thedracut(8)andgrubby(8)man pages on your system.
7.5. Resizing a partition with parted Copy linkLink copied to clipboard!
Using the parted utility, extend a partition to use unused disk space, or shrink a partition to use its capacity for different purposes. For more information, see the parted(8) man page on your system.
Prerequisites
- Back up the data before shrinking a partition.
- If the partition that you want to resize will be larger than 2 TiB, format the disk with the GUID Partition Table (GPT).
- If you want to shrink the partition, first shrink the file system so that it is not larger than the resized partition.
XFS does not support shrinking.
Procedure
Start the
partedutility:parted block-device
# parted block-deviceCopy to Clipboard Copied! Toggle word wrap Toggle overflow View the current partition table:
(parted) print
(parted) printCopy to Clipboard Copied! Toggle word wrap Toggle overflow From the partition table, determine:
- The minor number of the partition.
The location of the existing partition and its new ending point after resizing.
ImportantWhen resizing a partition, ensure there is enough unallocated space between the end of the partition being resized and either the beginning of the next partition, or the end of the disk if it is the last partition. If there is not sufficient space,
partedwill return an error. However, it is best to verify the available space before attempting to resize to avoid partition overlap.
Resize the partition:
(parted) resizepart 1 2GiB
(parted) resizepart 1 2GiBCopy to Clipboard Copied! Toggle word wrap Toggle overflow - Replace 1 with the minor number of the partition that you are resizing.
-
Replace 2 with the size that determines the new ending point of the resized partition, counting from the beginning of the disk. You can use size suffixes, such as
512MiB,20GiB, or1.5TiB. The default size is in megabytes.
View the partition table to confirm that the resized partition is in the partition table with the correct size:
(parted) print
(parted) printCopy to Clipboard Copied! Toggle word wrap Toggle overflow Exit the
partedshell:(parted) quit
(parted) quitCopy to Clipboard Copied! Toggle word wrap Toggle overflow Verify that the kernel registers the new partition:
cat /proc/partitions
# cat /proc/partitionsCopy to Clipboard Copied! Toggle word wrap Toggle overflow - Optional: If you extended the partition, extend the file system on it as well.
Chapter 8. Strategies for repartitioning a disk Copy linkLink copied to clipboard!
Most RHEL systems manage storage space by using LVM. However, manipulating the partition table remains a fundamental and low-level method of managing storage space occurring at the device level. You can use parted, fdisk, or other graphical tools to perform disk partitioning operations.
There are different approaches to repartitioning a disk. These include:
- Unpartitioned free space is available.
- An unused partition is available.
- Free space in an actively used partition is available.
The following examples provide a general overview of partitioning techniques. They are simplified for clarity and do not reflect the exact partition layout during a typical Red Hat Enterprise Linux installation.
8.1. Using unpartitioned free space Copy linkLink copied to clipboard!
Partitions that are already defined but do not span the entire hard disk leave unallocated space that is not part of any defined partition.
An unused hard disk also falls into this category. The only difference is that all of its space is not part of any defined partition.
On a new disk, you can create the necessary partitions from the unused space. Most preinstalled operating systems are configured to take up all available space on a disk drive.
8.2. Using space from an unused partition Copy linkLink copied to clipboard!
To use the space allocated to the unused partition, delete the partition and then create the appropriate Linux partition instead. Alternatively, during the installation process, delete the unused partition and manually create new partitions.
8.3. Using free space from an active partition Copy linkLink copied to clipboard!
Managing this process can be difficult if the required free space is on a partition that is already in use. Most computers with preinstalled software have a single large partition that holds both the operating system and user data.
If you attempt to resize or modify an active partition that contains an operating system (OS), there is a risk of losing data or making the OS unbootable. As a result, in some cases, you might need to reinstall the OS. Check whether your system includes recovery or installation media before proceeding.
To optimize the use of available free space, you can use the methods of destructive or non-destructive repartitioning.
8.3.1. Destructive repartitioning Copy linkLink copied to clipboard!
Destructive repartitioning destroys the partition on your hard drive and creates new partitions in its place. Backup any needed data from the original partition as this method deletes the entire contents.
After creating a new partition from your existing operating system, you can:
- Reinstall software.
- Restore your data.
This method deletes all data previously stored in the original partition.
8.3.2. Non-destructive repartitioning Copy linkLink copied to clipboard!
Non-destructive repartitioning resizes partitions without any data loss. This method is reliable, but it takes longer to process on large drives.
The following methods can help you initiate non-destructive repartitioning.
- Reorganize existing data
The storage location of some data cannot be changed, which can prevent resizing a partition to the required size and can ultimately force a destructive repartitioning process. Reorganizing the data within an existing partition can help you resize partitions as needed to create space for additional partitions or to maximize the available free space.
To avoid any possible data loss, create a backup before continuing with the data migration process.
- Resize the existing partition
By resizing an existing partition, you can free up unused space. The results depend on your resizing software. In most cases, you can create a new unformatted partition of the same type as the original partition.
The steps you take after resizing depend on the software you use. For example, a common practice is to delete the new DOS (Disk Operating System) partition and create a Linux partition instead. Verify what is most suitable for your disk before initiating the resizing process.
Resizing and creating partitions can vary depending on the tool you are using, such as parted or GParted. Refer to the documentation of the tool for specific instructions.
- Optional: Create new partitions
Some resizing software supports Linux-based systems. In such cases, there is no need to delete the newly created partition after resizing. Whether you create a new partition afterward depends on the software you use.
Chapter 9. Getting started with XFS Copy linkLink copied to clipboard!
This is an overview of how to create and maintain XFS file systems.
9.1. The XFS file system Copy linkLink copied to clipboard!
XFS is a highly scalable, high-performance, robust, and mature 64-bit journaling file system that supports very large files and file systems on a single host. It is the default file system in Red Hat Enterprise Linux 10. XFS was originally developed in the early 1990s by SGI and has a long history of running on extremely large servers and storage arrays.
The features of XFS include:
- Reliability
- Metadata journaling, which ensures file system integrity after a system crash by keeping a record of file system operations that can be replayed when the system is restarted and the file system remounted
- Extensive run-time metadata consistency checking
- Scalable and fast repair utilities
- Quota journaling. This avoids the need for lengthy quota consistency checks after a crash.
- Scalability and performance
- Supported file system size up to 1024 TiB
- Ability to support a large number of concurrent operations
- B-tree indexing for scalability of free space management
- Sophisticated metadata read-ahead algorithms
- Optimizations for streaming video workloads
- Allocation schemes
- Extent-based allocation
- Stripe-aware allocation policies
- Delayed allocation
- Space pre-allocation
- Dynamically allocated inodes
- Other features
- Reflink-based file copies
- Tightly integrated backup and restore utilities
- Online defragmentation
- Online file system growing
- Comprehensive diagnostics capabilities
-
Extended attributes (
xattr). This allows the system to associate several additional name/value pairs per file. - Project or directory quotas. This allows quota restrictions over a directory tree.
- Subsecond timestamps
- Performance characteristics
XFS has a high performance on large systems with enterprise workloads. A large system is one with a relatively high number of CPUs, multiple HBAs, and connections to external disk arrays. XFS also performs well on smaller systems that have a multi-threaded, parallel I/O workload.
XFS is focused on scalability and large data sets, but it also performs comparably well on smaller systems.
9.2. Comparison of tools used with ext4 and XFS Copy linkLink copied to clipboard!
Different tools and commands accomplish common file system tasks on ext4 and XFS, including creation, checking, resizing, and backup operations.
This section compares which tools to use to accomplish common tasks on the ext4 and XFS file systems.
| Task | ext4 | XFS |
|---|---|---|
| Create a file system | mkfs.ext4 | mkfs.xfs |
| File system check | e2fsck | xfs_repair |
| Resize a file system | resize2fs | xfs_growfs |
| Save an image of a file system | e2image | xfs_metadump and xfs_mdrestore |
| Label or tune a file system | tune2fs | xfs_admin |
| Back up a file system | dump and restore | xfsdump and xfsrestore |
| Quota management | quota | xfs_quota |
| File mapping | filefrag | xfs_bmap |
If you want a complete client-server solution for backups over a network, you can use the Bacula backup utility, which is available in RHEL. For more information about Bacula, see Bacula backup solution.
Chapter 10. Creating an XFS file system Copy linkLink copied to clipboard!
As a system administrator, you can create an XFS file system on a block device to enable it to store files and directories.
10.1. Creating an XFS file system with mkfs.xfs Copy linkLink copied to clipboard!
Create an XFS file system to take advantage of high performance, scalability, and advanced features for large-scale storage environments. XFS is particularly effective for applications requiring large files and high throughput.
Procedure
To create the file system:
If the device is a regular partition, an LVM volume, an MD volume, a disk, or a similar device, use the following command:
mkfs.xfs block-device
# mkfs.xfs block-deviceCopy to Clipboard Copied! Toggle word wrap Toggle overflow -
Replace block-device with the path to the block device. For example,
/dev/sdb1,/dev/disk/by-uuid/05e99ec8-def1-4a5e-8a9d-5945339ceb2a, or/dev/my-volgroup/my-lv. - In general, the default options are optimal for common use.
-
When using
mkfs.xfson a block device containing an existing file system, add the-foption to overwrite that file system.
To create the file system on a hardware RAID device, check if the system correctly detects the stripe geometry of the device:
If the stripe geometry information is correct, no additional options are needed. Create the file system:
mkfs.xfs block-device
# mkfs.xfs block-deviceCopy to Clipboard Copied! Toggle word wrap Toggle overflow If the information is incorrect, specify stripe geometry manually with the
suandswparameters of the-doption. Thesuparameter specifies the RAID chunk size, and theswparameter specifies the number of data disks in the RAID device.For example:
mkfs.xfs -d su=64k,sw=4 /dev/sda3
# mkfs.xfs -d su=64k,sw=4 /dev/sda3Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Use the following command to wait for the system to register the new device node:
udevadm settle
# udevadm settleCopy to Clipboard Copied! Toggle word wrap Toggle overflow For more information, see the
mkfs.xfs(8)man page on your system.
Chapter 11. Backing up an XFS file system Copy linkLink copied to clipboard!
As a system administrator, you can use the xfsdump utility to back up an XFS file system to a file or to a tape. This provides a simple backup mechanism.
11.1. Features of XFS backup Copy linkLink copied to clipboard!
This section describes key concepts and features of backing up an XFS file system with the xfsdump utility.
You can use the xfsdump utility to:
Perform backups to regular file images.
Only one backup can be written to a regular file.
Perform backups to tape drives.
The
xfsdumputility also enables you to write multiple backups to the same tape. A backup can span multiple tapes.To back up multiple file systems to a single tape device, simply write the backup to a tape that already contains an XFS backup. This appends the new backup to the previous one. By default,
xfsdumpnever overwrites existing backups.Create incremental backups.
The
xfsdumputility uses dump levels to determine a base backup to which other backups are relative. Numbers from 0 to 9 refer to increasing dump levels. An incremental backup only backs up files that have changed since the last dump of a lower level:- To perform a full backup, perform a level 0 dump on the file system.
- A level 1 dump is the first incremental backup after a full backup. The next incremental backup would be level 2, which only backs up files that have changed since the last level 1 dump; and so on, to a maximum of level 9.
- Exclude files from a backup using size, subtree, or inode flags to filter them.
For more information, see the xfsdump(8) man page on your system.
11.2. Backing up an XFS file system with xfsdump Copy linkLink copied to clipboard!
You can use the xfsdump utility to back up the content of an XFS file system into a file or a tape.
Prerequisites
- An XFS file system that you can back up.
- Another file system or a tape drive where you can store the backup.
Procedure
Use the following command to back up an XFS file system:
xfsdump -l level [-L label] \ -f backup-destination path-to-xfs-filesystem
# xfsdump -l level [-L label] \ -f backup-destination path-to-xfs-filesystemCopy to Clipboard Copied! Toggle word wrap Toggle overflow -
Replace level with the dump level of your backup. Use
0to perform a full backup or1to9to perform consequent incremental backups. -
Replace backup-destination with the path where you want to store your backup. The destination can be a regular file, a tape drive, or a remote tape device. For example,
/backup-files/Data.xfsdumpfor a file or/dev/st0for a tape drive. -
Replace path-to-xfs-filesystem with the mount point of the XFS file system you want to back up. For example,
/mnt/data/. The file system must be mounted. When backing up multiple file systems and saving them on a single tape device, add a session label to each backup using the
-L labeloption so that it is easier to identify them when restoring. Replace label with any name for your backup: for example,backup_data.For example, to back up the content of XFS file systems mounted on the
/boot/and/data/directories and save them as files in the/backup-files/directory:xfsdump -l 0 -f /backup-files/boot.xfsdump /boot xfsdump -l 0 -f /backup-files/data.xfsdump /data
# xfsdump -l 0 -f /backup-files/boot.xfsdump /boot # xfsdump -l 0 -f /backup-files/data.xfsdump /dataCopy to Clipboard Copied! Toggle word wrap Toggle overflow
To back up multiple file systems on a single tape device, add a session label to each backup using the
-L labeloption:xfsdump -l 0 -L "backup_boot" -f /dev/st0 /boot xfsdump -l 0 -L "backup_data" -f /dev/st0 /data
# xfsdump -l 0 -L "backup_boot" -f /dev/st0 /boot # xfsdump -l 0 -L "backup_data" -f /dev/st0 /dataCopy to Clipboard Copied! Toggle word wrap Toggle overflow For more information, see the
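As a hypothetical follow-up to the previous full backups, a later level 1 dump backs up only the files that changed since the level 0 dump; the file name is illustrative:
# xfsdump -l 1 -f /backup-files/data-level1.xfsdump /data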
xfsdump(8)man page on your system.
Chapter 12. Restoring an XFS file system from backup Copy linkLink copied to clipboard!
As a system administrator, you can use the xfsrestore utility to restore an XFS backup created with the xfsdump utility and stored in a file or on a tape.
12.1. Features of restoring XFS from backup Copy linkLink copied to clipboard!
You can restore XFS file systems from backups using xfsrestore. Discover available restore modes, session identification, and methods for selectively recovering files, helping ensure data integrity and flexible recovery solutions.
The xfsrestore utility restores file systems from backups produced by xfsdump. The xfsrestore utility has two modes:
- The simple mode enables users to restore an entire file system from a level 0 dump. This is the default mode.
- The cumulative mode enables file system restoration from an incremental backup: that is, level 1 to level 9.
A unique session ID or session label identifies each backup. Restoring a backup from a tape containing multiple backups requires its corresponding session ID or label.
To extract, add, or delete specific files from a backup, enter the xfsrestore interactive mode. The interactive mode provides a set of commands to manipulate the backup files.
For more information, see the xfsrestore(8) man page on your system.
12.2. Restoring an XFS file system from backup with xfsrestore Copy linkLink copied to clipboard!
This procedure describes how to restore the content of an XFS file system from a file or tape backup.
Prerequisites
- A file or tape backup of XFS file systems, as described in Backing up an XFS file system.
- A storage device where you can restore the backup.
Procedure
The command to restore the backup varies depending on whether you are restoring from a full backup or an incremental one, or are restoring multiple backups from a single tape device:
xfsrestore [-r] [-S session-id] [-L session-label] [-i] -f backup-location restoration-path
# xfsrestore [-r] [-S session-id] [-L session-label] [-i] -f backup-location restoration-pathCopy to Clipboard Copied! Toggle word wrap Toggle overflow -
Replace backup-location with the location of the backup. This can be a regular file, a tape drive, or a remote tape device. For example,
/backup-files/Data.xfsdumpfor a file or/dev/st0for a tape drive. -
Replace restoration-path with the path to the directory where you want to restore the file system. For example,
/mnt/data/. -
To restore a file system from an incremental (level 1 to level 9) backup, add the
-roption. To restore a backup from a tape device that contains multiple backups, specify the backup using the
-Sor-Loptions.The
-Soption lets you choose a backup by its session ID, while the-Loption lets you choose by the session label. To obtain the session ID and session labels, use thexfsrestore -Icommand.Replace session-id with the session ID of the backup. For example,
b74a3586-e52e-4a4a-8775-c3334fa8ea2c. Replace session-label with the session label of the backup. For example,my_backup_session_label.To use
xfsrestoreinteractively, use the-ioption.The interactive dialog begins after
xfsrestorefinishes reading the specified device. Available commands in the interactivexfsrestoreshell includecd,ls,add,delete, andextract; for a complete list of commands, use thehelpcommand. Below is an example for restoring multiple XFS file systems:To restore the XFS backup files and save their content into directories under
/mnt/:xfsrestore -f /backup-files/boot.xfsdump /mnt/boot/ xfsrestore -f /backup-files/data.xfsdump /mnt/data/
# xfsrestore -f /backup-files/boot.xfsdump /mnt/boot/ # xfsrestore -f /backup-files/data.xfsdump /mnt/data/Copy to Clipboard Copied! Toggle word wrap Toggle overflow To restore from a tape device containing multiple backups, specify each backup by its session label or session ID:
xfsrestore -L "backup_boot" -f /dev/st0 /mnt/boot/ xfsrestore -S "45e9af35-efd2-4244-87bc-4762e476cbab" \ -f /dev/st0 /mnt/data/
# xfsrestore -L "backup_boot" -f /dev/st0 /mnt/boot/ # xfsrestore -S "45e9af35-efd2-4244-87bc-4762e476cbab" \ -f /dev/st0 /mnt/data/Copy to Clipboard Copied! Toggle word wrap Toggle overflow
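To apply an incremental backup on top of a file system already restored from a level 0 dump, add the -r option; the file name below is illustrative and assumes a level 1 dump taken earlier:
# xfsrestore -r -f /backup-files/data-level1.xfsdump /mnt/data/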
12.3. Informational messages when restoring an XFS backup from a tape Copy linkLink copied to clipboard!
When restoring a backup from a tape with backups from multiple file systems, the xfsrestore utility might issue messages. The messages inform you whether a match of the requested backup has been found when xfsrestore examines each backup on the tape in sequential order.
For example, xfsrestore reports each backup on the tape that does not match the requested session and continues to the next one.
The informational messages keep appearing until the matching backup is found.
Chapter 13. Increasing the size of an XFS file system Copy linkLink copied to clipboard!
As a system administrator, you can increase the size of an XFS file system to make full use of a larger storage capacity. However, it is not currently possible to decrease the size of XFS file systems.
Increasing a file system from a very small size to a significantly larger size creates a high number of allocation groups, potentially leading to performance issues. As a best practice, limit the size increase to a maximum of 10 times the original size.
13.1. Increasing the size of an XFS file system with xfs_growfs Copy linkLink copied to clipboard!
This procedure describes how to grow an XFS file system using the xfs_growfs utility.
Prerequisites
- Ensure that the underlying block device is of an appropriate size to hold the resized file system later. Use the appropriate resizing methods for the affected block device.
- Mount the XFS file system.
Procedure
While the XFS file system is mounted, use the
xfs_growfsutility to increase its size:xfs_growfs file-system -D new-size
# xfs_growfs file-system -D new-sizeCopy to Clipboard Copied! Toggle word wrap Toggle overflow - Replace file-system with the mount point of the XFS file system.
With the
-Doption, replace new-size with the desired new size of the file system specified in the number of file system blocks.To find out the block size in kB of a given XFS file system, use the
xfs_infoutility:xfs_info block-device
# xfs_info block-device ... data = bsize=4096 ...Copy to Clipboard Copied! Toggle word wrap Toggle overflow Without the
-Doption,xfs_growfsgrows the file system to the maximum size supported by the underlying device.For more information, see the
xfs_growfs(8)man page on your system.
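As a worked example with illustrative values: to grow a file system mounted at /mnt/data to 20 GiB when xfs_info reports a 4096-byte block size, the new size is 20 GiB / 4 KiB = 5242880 blocks:
# xfs_growfs /mnt/data -D 5242880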
Chapter 14. Configuring XFS error behavior Copy linkLink copied to clipboard!
You can configure how an XFS file system behaves when it encounters different I/O errors.
14.1. Configurable error handling in XFS Copy linkLink copied to clipboard!
You can configure XFS file system error handling by setting retry limits and timeouts for I/O errors. Control how XFS responds to device errors, no-space conditions, and device loss, enhancing reliability and flexibility during operation and unmounting.
The XFS file system responds in one of the following ways when an error occurs during an I/O operation:
XFS repeatedly retries the I/O operation until the operation succeeds or XFS reaches a set limit.
The limit is based either on a maximum number of retries or a maximum time for retries.
- XFS considers the error permanent and stops the operation on the file system.
You can configure how XFS reacts to the following error conditions:
EIO- Error when reading or writing
ENOSPC- No space left on the device
ENODEV- Device cannot be found
You can set the maximum number of retries and the maximum time in seconds until XFS considers an error permanent. XFS stops retrying the operation when it reaches either of the limits.
You can also configure XFS so that when unmounting a file system, XFS immediately cancels the retries regardless of any other configuration. This configuration enables the unmount operation to succeed despite persistent errors.
Default behavior
The default behavior for each XFS error condition depends on the error context. Some XFS errors such as ENODEV are considered to be fatal and unrecoverable, regardless of the retry count. Their default retry limit is 0.
14.2. Configuration files for specific and undefined XFS error conditions Copy linkLink copied to clipboard!
Configure XFS error handling through files that set retry limits and timeouts for specific error conditions as well as defaults for undefined errors, ensuring robust and controlled filesystem behavior.
The following directories store configuration files that control XFS error behavior for different error conditions:
/sys/fs/xfs/device/error/metadata/EIO/-
For the
EIOerror condition /sys/fs/xfs/device/error/metadata/ENODEV/-
For the
ENODEVerror condition /sys/fs/xfs/device/error/metadata/ENOSPC/-
For the
ENOSPCerror condition /sys/fs/xfs/device/error/default/- Common configuration for all other, undefined error conditions
Each directory contains the following configuration files for configuring retry limits:
max_retries- Controls the maximum number of times that XFS retries the operation.
retry_timeout_seconds- Specifies the time limit in seconds after which XFS stops retrying the operation.
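You can inspect the current limits before changing them by reading these files. For example, for a hypothetical device named sda and the EIO condition:
# cat /sys/fs/xfs/sda/error/metadata/EIO/max_retries
# cat /sys/fs/xfs/sda/error/metadata/EIO/retry_timeout_seconds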
14.3. Setting XFS behavior for specific conditions Copy linkLink copied to clipboard!
This procedure configures how XFS reacts to specific error conditions.
Procedure
Set the maximum number of retries, the retry time limit, or both:
To set the maximum number of retries, write the desired number to the
max_retriesfile:echo value > /sys/fs/xfs/device/error/metadata/condition/max_retries
# echo value > /sys/fs/xfs/device/error/metadata/condition/max_retriesCopy to Clipboard Copied! Toggle word wrap Toggle overflow To set the time limit, write the desired number of seconds to the
retry_timeout_secondsfile:echo value > /sys/fs/xfs/device/error/metadata/condition/retry_timeout_seconds
# echo value > /sys/fs/xfs/device/error/metadata/condition/retry_timeout_secondsCopy to Clipboard Copied! Toggle word wrap Toggle overflow
value is a number between -1 and the maximum possible value of the C signed integer type. This is 2147483647 on 64-bit Linux.
In both limits, the value
-1is used for continuous retries and0to stop immediately.device is the name of the device, as found in the
/dev/directory; for example,sda.
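For example, to allow at most 5 retries within a 30 second window for EIO errors on a device named sda (the values and the device name are illustrative):
# echo 5 > /sys/fs/xfs/sda/error/metadata/EIO/max_retries
# echo 30 > /sys/fs/xfs/sda/error/metadata/EIO/retry_timeout_seconds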
14.4. Setting XFS behavior for undefined conditions Copy linkLink copied to clipboard!
This procedure configures how XFS reacts to all undefined error conditions, which share a common configuration.
Procedure
Set the maximum number of retries, the retry time limit, or both:
To set the maximum number of retries, write the desired number to the
max_retriesfile:echo value > /sys/fs/xfs/device/error/metadata/default/max_retries
# echo value > /sys/fs/xfs/device/error/metadata/default/max_retriesCopy to Clipboard Copied! Toggle word wrap Toggle overflow To set the time limit, write the desired number of seconds to the
retry_timeout_secondsfile:echo value > /sys/fs/xfs/device/error/metadata/default/retry_timeout_seconds
# echo value > /sys/fs/xfs/device/error/metadata/default/retry_timeout_secondsCopy to Clipboard Copied! Toggle word wrap Toggle overflow
value is a number between -1 and the maximum possible value of the C signed integer type. This is 2147483647 on 64-bit Linux.
In both limits, the value
-1is used for continuous retries and0to stop immediately.device is the name of the device, as found in the
/dev/directory; for example,sda.
14.5. Setting the XFS unmount behavior Copy linkLink copied to clipboard!
This procedure configures how XFS reacts to error conditions when unmounting the file system.
If you set the fail_at_unmount option in the file system, it overrides all other error configurations during unmount, and immediately unmounts the file system without retrying the I/O operation. This allows the unmount operation to succeed even in case of persistent errors.
You cannot change the fail_at_unmount value after the unmount process starts, because the unmount process removes the configuration files from the sysfs interface for the respective file system. You must configure the unmount behavior before the file system starts unmounting.
Procedure
Enable or disable the
fail_at_unmountoption:To cancel retrying all operations when the file system unmounts, enable the option:
echo 1 > /sys/fs/xfs/device/error/fail_at_unmount
# echo 1 > /sys/fs/xfs/device/error/fail_at_unmountCopy to Clipboard Copied! Toggle word wrap Toggle overflow To respect the
max_retriesandretry_timeout_secondsretry limits when the file system unmounts, disable the option:echo 0 > /sys/fs/xfs/device/error/fail_at_unmount
# echo 0 > /sys/fs/xfs/device/error/fail_at_unmountCopy to Clipboard Copied! Toggle word wrap Toggle overflow
device is the name of the device, as found in the
/dev/directory; for example,sda.
Chapter 15. Performance analysis of XFS with PCP Copy linkLink copied to clipboard!
The XFS PMDA ships as part of the pcp package and is enabled by default during the installation. It is used to gather performance metric data of XFS file systems in Performance Co-Pilot (PCP).
You can use PCP to analyze XFS file system’s performance.
15.1. Installing XFS PMDA manually Copy linkLink copied to clipboard!
If the XFS PMDA is not listed in the pcp configuration output, install the PMDA agent manually.
This procedure describes how to manually install the PMDA agent.
Prerequisites
- PCP is installed. For more information, see Installing and enabling PCP.
Procedure
Navigate to the xfs directory:
cd /var/lib/pcp/pmdas/xfs/
# cd /var/lib/pcp/pmdas/xfs/Copy to Clipboard Copied! Toggle word wrap Toggle overflow Install the XFS PMDA manually:
xfs]# ./Install Updating the Performance Metrics Name Space (PMNS) ... Terminate PMDA if already installed ... Updating the PMCD control file, and notifying PMCD ... Check xfs metrics have appeared ... 387 metrics and 387 values
xfs]# ./Install Updating the Performance Metrics Name Space (PMNS) ... Terminate PMDA if already installed ... Updating the PMCD control file, and notifying PMCD ... Check xfs metrics have appeared ... 387 metrics and 387 valuesCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
Verify that the
pmcdprocess is running on the host and the XFS PMDA is listed as enabled in the configuration:Copy to Clipboard Copied! Toggle word wrap Toggle overflow For more information, see the
pmcd(1)man page on your system.
15.2. Examining XFS performance metrics with pminfo Copy linkLink copied to clipboard!
In PCP, the XFS PMDA reports certain XFS metrics for each mounted XFS file system. This makes it easier to pinpoint issues on a specific mounted file system and evaluate performance.
The pminfo command provides per-device XFS metrics for each mounted XFS file system.
This procedure displays a list of all available metrics provided by the XFS PMDA.
Prerequisites
- PCP is installed. For more information, see Installing and enabling PCP.
Procedure
Display the list of all available metrics provided by the XFS PMDA:
pminfo xfs
# pminfo xfsCopy to Clipboard Copied! Toggle word wrap Toggle overflow Display information for the individual metrics. The following examples examine specific XFS
readandwritemetrics using thepminfotool:Display a short description of the
xfs.write_bytesmetric:pminfo --oneline xfs.write_bytes
# pminfo --oneline xfs.write_bytes xfs.write_bytes [number of bytes written in XFS file system write operations]Copy to Clipboard Copied! Toggle word wrap Toggle overflow Display a long description of the
xfs.read_bytesmetric:Copy to Clipboard Copied! Toggle word wrap Toggle overflow Obtain the current performance value of the
xfs.read_bytesmetric:pminfo --fetch xfs.read_bytes
# pminfo --fetch xfs.read_bytes xfs.read_bytes value 4891346238Copy to Clipboard Copied! Toggle word wrap Toggle overflow Obtain per-device XFS metrics with
pminfo:Copy to Clipboard Copied! Toggle word wrap Toggle overflow For more information, see the
pminfo(1)man page on your system.
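To watch how a metric changes over time instead of fetching a single value, you can also use the pmval tool from PCP; the one-second interval and five samples below are illustrative:
# pmval -t 1sec -s 5 xfs.write_bytes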
15.3. Resetting XFS performance metrics with pmstore Copy linkLink copied to clipboard!
With PCP, you can modify the values of certain metrics, especially if the metric acts as a control variable, such as the xfs.control.reset metric. To modify a metric value, use the pmstore tool.
This procedure describes how to reset XFS metrics using the pmstore tool.
Prerequisites
- PCP is installed. For more information, see Installing and enabling PCP.
Procedure
Display the value of a metric:
pminfo -f xfs.write
$ pminfo -f xfs.write xfs.write value 325262Copy to Clipboard Copied! Toggle word wrap Toggle overflow Reset all the XFS metrics:
pmstore xfs.control.reset 1
# pmstore xfs.control.reset 1 xfs.control.reset old value=0 new value=1Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Verification
View the information after resetting the metric:
pminfo --fetch xfs.write
$ pminfo --fetch xfs.write xfs.write value 0Copy to Clipboard Copied! Toggle word wrap Toggle overflow For more information, see the
pmstore(1)andpminfo(1)man pages on your system.
15.4. PCP metric groups for XFS Copy linkLink copied to clipboard!
Performance Co-Pilot offers comprehensive metric groups for monitoring XFS file system operations across all devices, including allocation, transactions, and buffer activities.
The following table describes the available PCP metric groups for XFS.
| Metric Group | Metrics provided |
|
| General XFS metrics including the read and write operation counts, read and write byte counts. Along with counters for the number of times inodes are flushed, clustered and number of failure to cluster. |
|
| Range of metrics regarding the allocation of objects in the file system, these include number of extent and block creations/frees. Allocation tree lookup and compares along with extend record creation and deletion from the btree. |
|
| Metrics include the number of block map read/write and block deletions, extent list operations for insertion, deletions and lookups. Also operations counters for compares, lookups, insertions and deletion operations from the blockmap. |
|
| Counters for directory operations on XFS file systems: creations, entry deletions, and counts of "getdent" operations. |
|
| Counters for the number of meta-data transactions, these include the count for the number of synchronous and asynchronous transactions along with the number of empty transactions. |
|
| Counters for the number of times that the operating system looked for an XFS inode in the inode cache with different outcomes. These count cache hits, cache misses, and so on. |
|
| Counters for the number of log buffer writes over XFS file systems, including the number of blocks written to disk. Also includes metrics for the number of log flushes and pinning. |
|
| Counts for the number of bytes of file data flushed out by the XFS flush daemon, along with counters for the number of buffers flushed to contiguous and non-contiguous space on disk. |
|
| Counts for the number of attribute get, set, remove and list operations over all XFS file systems. |
|
| Metrics for quota operation over XFS file systems, these include counters for number of quota reclaims, quota cache misses, cache hits and quota data reclaims. |
|
| Range of metrics regarding XFS buffer objects. Counters include the number of requested buffer calls, successful buffer locks, waited buffer locks, miss_locks, miss_retries and buffer hits when looking up pages. |
|
| Metrics regarding the operations of the XFS btree. |
|
| Configuration metrics which are used to reset the metric counters for the XFS stats. Control metrics are toggled by means of the pmstore tool. |
15.5. Per-device PCP metric groups for XFS Copy linkLink copied to clipboard!
Performance Co-Pilot provides metric groups for monitoring individual XFS devices, covering operations from allocation to transaction management.
The following table describes the available per-device PCP metric group for XFS.
| Metric Group | Metrics provided |
|
| General XFS metrics including the read and write operation counts, read and write byte counts. Along with counters for the number of times inodes are flushed, clustered and number of failure to cluster. |
|
| Range of metrics regarding the allocation of objects in the file system, these include number of extent and block creations/frees. Allocation tree lookup and compares along with extend record creation and deletion from the btree. |
|
| Metrics include the number of block map read/write and block deletions, extent list operations for insertion, deletions and lookups. Also operations counters for compares, lookups, insertions and deletion operations from the blockmap. |
|
| Counters for directory operations on XFS file systems: creations, entry deletions, and counts of "getdent" operations. |
|
| Counters for the number of meta-data transactions, these include the count for the number of synchronous and asynchronous transactions along with the number of empty transactions. |
|
| Counters for the number of times that the operating system looked for an XFS inode in the inode cache with different outcomes. These count cache hits, cache misses, and so on. |
|
| Counters for the number of log buffer writes over XFS file systems, including the number of blocks written to disk. Also includes metrics for the number of log flushes and pinning. |
|
| Counts for the number of bytes of file data flushed out by the XFS flush daemon, along with counters for the number of buffers flushed to contiguous and non-contiguous space on disk. |
|
| Counts for the number of attribute get, set, remove and list operations over all XFS file systems. |
|
| Metrics for quota operation over XFS file systems, these include counters for number of quota reclaims, quota cache misses, cache hits and quota data reclaims. |
|
| Range of metrics regarding XFS buffer objects. Counters include the number of requested buffer calls, successful buffer locks, waited buffer locks, miss_locks, miss_retries and buffer hits when looking up pages. |
|
| Metrics regarding the operations of the XFS btree. |
Chapter 16. Checking and repairing a file system Copy linkLink copied to clipboard!
RHEL offers file system administration tools known as fsck (file system check) to check and repair file systems. These tools might run automatically during boot if issues are detected, but you can also run them manually when needed.
File system checkers guarantee only metadata consistency across the file system. They have no awareness of the actual data contained within the file system and are not data recovery tools.
16.1. Scenarios that require a file system check Copy linkLink copied to clipboard!
Identify situations when a file system check is necessary, such as corruption, boot failures, or system errors. Understand appropriate actions and considerations for diagnosing and repairing file system issues using standard check utilities.
The relevant fsck tools can be used to check your system if any of the following occurs:
- System fails to boot
- Files on a specific disk become corrupt
- The file system shuts down or changes to read-only due to inconsistencies
- A file on the file system is inaccessible
File system inconsistencies can occur for various reasons, including but not limited to hardware errors, storage administration errors, and software bugs.
File system check tools cannot repair hardware problems. A file system must be fully readable and writable if repair is to operate successfully. If a file system was corrupted due to a hardware error, the file system must first be moved to a good disk, for example with the dd(8) utility.
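For example, a minimal sketch of copying a failing disk to a replacement disk before attempting repair; the device names are placeholders, and the conv=noerror,sync option keeps dd going past unreadable sectors:
# dd if=/dev/failing-disk of=/dev/replacement-disk bs=4M conv=noerror,sync status=progress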
For journaling file systems, all that is normally required at boot time is to replay the journal if required and this is usually a very short operation.
However, if a file system inconsistency or corruption occurs, even for journaling file systems, then the file system checker must be used to repair the file system.
It is possible to disable file system check at boot by setting the sixth field in /etc/fstab to 0. However, Red Hat does not recommend doing so unless you are having issues with fsck at boot time, for example with extremely large or remote file systems.
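For illustration, the sixth field is the last value on an /etc/fstab line. The following hypothetical entry disables the boot-time check for an ext4 file system mounted at /data; the UUID is an example value:
UUID=ea74bbec-536d-490c-b8d9-5b40bbd7545b /data ext4 defaults 0 0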
For more information, see the fstab(5), fsck(8), and dd(8) man pages on your system.
16.2. Potential side effects of running fsck Copy linkLink copied to clipboard!
Generally, running the file system check and repair tool can be expected to automatically repair at least some of the inconsistencies it finds. In some cases, the following issues can arise:
- Severely damaged inodes or directories may be discarded if they cannot be repaired.
- Significant changes to the file system may occur.
To ensure that unexpected or undesirable changes are not permanently made, ensure you follow any precautionary steps outlined in the procedure.
16.3. Error-handling mechanisms in XFS Copy linkLink copied to clipboard!
This section describes how XFS handles various kinds of errors in the file system.
Unclean unmounts
Journalling maintains a transactional record of metadata changes that happen on the file system.
In the event of a system crash, power failure, or other unclean unmount, XFS uses the journal (also called log) to recover the file system. The kernel performs journal recovery when mounting the XFS file system.
Corruption
In this context, corruption means errors on the file system caused by, for example:
- Hardware faults
- Bugs in storage firmware, device drivers, the software stack, or the file system itself
- Problems that cause parts of the file system to be overwritten by something outside of the file system
When XFS detects corruption in the file system or the file-system metadata, it may shut down the file system and report the incident in the system log. Note that if the corruption occurred on the file system hosting the /var directory, these logs will not be available after a reboot.
User-space utilities usually report the Input/output error message when trying to access a corrupted XFS file system. Mounting an XFS file system with a corrupted log results in a failed mount and the following error message:
mount: /mount-point: mount(2) system call failed: Structure needs cleaning.
mount: /mount-point: mount(2) system call failed: Structure needs cleaning.
You must manually use the xfs_repair utility to repair the corruption. For more information see the xfs_repair(8) man page on your system.
16.4. Checking an XFS file system with xfs_repair Copy linkLink copied to clipboard!
Perform a read-only check of an XFS file system by using the xfs_repair utility. Unlike other file system repair utilities, xfs_repair does not run at boot time, even when an XFS file system was not cleanly unmounted. In case of an unclean unmount, XFS simply replays the log at mount time, ensuring a consistent file system; xfs_repair cannot repair an XFS file system with a dirty log without remounting it first.
Although an fsck.xfs binary is present in the xfsprogs package, this is present only to satisfy initscripts that look for an fsck.file system binary at boot time. fsck.xfs immediately exits with an exit code of 0.
Procedure
Replay the log by mounting and unmounting the file system:
mount file-system umount file-system
# mount file-system # umount file-systemCopy to Clipboard Copied! Toggle word wrap Toggle overflow NoteIf the mount fails with a structure needs cleaning error, the log is corrupted and cannot be replayed. The dry run should discover and report more on-disk corruption as a result.
Use the
xfs_repairutility to perform a dry run to check the file system. Any errors are printed and an indication of the actions that would be taken, without modifying the file system.xfs_repair -n block-device
# xfs_repair -n block-deviceCopy to Clipboard Copied! Toggle word wrap Toggle overflow Mount the file system:
mount file-system
# mount file-systemCopy to Clipboard Copied! Toggle word wrap Toggle overflow For more information see the
xfs_repair(8)andxfs_metadump(8)man pages on your system.
16.5. Repairing an XFS file system with xfs_repair Copy linkLink copied to clipboard!
This procedure repairs a corrupted XFS file system using the xfs_repair utility.
Procedure
Create a metadata image prior to repair for diagnostic or testing purposes using the
xfs_metadumputility. A pre-repair file system metadata image can be useful for support investigations if the corruption is due to a software bug. Patterns of corruption present in the pre-repair image can aid in root-cause analysis.Use the
xfs_metadumpdebugging tool to copy the metadata from an XFS file system to a file. The resultingmetadumpfile can be compressed using standard compression utilities to reduce the file size if largemetadumpfiles need to be sent to support.xfs_metadump block-device metadump-file
# xfs_metadump block-device metadump-fileCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Replay the log by remounting the file system:
mount file-system umount file-system
# mount file-system # umount file-systemCopy to Clipboard Copied! Toggle word wrap Toggle overflow Use the
xfs_repairutility to repair the unmounted file system:If the mount succeeded, no additional options are required:
xfs_repair block-device
# xfs_repair block-deviceCopy to Clipboard Copied! Toggle word wrap Toggle overflow If the mount failed with the Structure needs cleaning error, the log is corrupted and cannot be replayed. Use the
-Loption (force log zeroing) to clear the log:WarningThis command causes all metadata updates in progress at the time of the crash to be lost, which might cause significant file system damage and data loss. This should be used only as a last resort if the log cannot be replayed.
xfs_repair -L block-device
# xfs_repair -L block-deviceCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Mount the file system:
mount file-system
# mount file-systemCopy to Clipboard Copied! Toggle word wrap Toggle overflow For more information, see the
xfs_repair(8)man page on your system.
16.6. Error handling mechanisms in ext2, ext3, and ext4 Copy linkLink copied to clipboard!
The ext2, ext3, and ext4 file systems use e2fsck for checks and repairs. The fsck.ext2, fsck.ext3, and fsck.ext4 binaries are hardlinks to e2fsck and run automatically at boot. Their behavior varies by file system type and its state.
A full file system check and repair is invoked for ext2, which is not a metadata journaling file system, and for ext4 file systems without a journal.
For ext3 and ext4 file systems with metadata journaling, the journal is replayed in userspace and the utility exits. This is the default action because journal replay ensures a consistent file system after a crash.
If these file systems encounter metadata inconsistencies while mounted, they record this fact in the file system superblock. If e2fsck finds that a file system is marked with such an error, e2fsck performs a full check after replaying the journal (if present).
For more information, see the fsck(8) and e2fsck(8) man pages on your system.
16.7. Checking an ext2, ext3, or ext4 file system with e2fsck Copy linkLink copied to clipboard!
This procedure checks an ext2, ext3, or ext4 file system using the e2fsck utility.
Procedure
Replay the log by remounting the file system:
mount file-system umount file-system
# mount file-system # umount file-systemCopy to Clipboard Copied! Toggle word wrap Toggle overflow Perform a dry run to check the file system.
e2fsck -n block-device
# e2fsck -n block-deviceCopy to Clipboard Copied! Toggle word wrap Toggle overflow NoteAny errors are printed and an indication of the actions that would be taken, without modifying the file system. Later phases of consistency checking may print extra errors as it discovers inconsistencies which would have been fixed in early phases if it were running in repair mode.
For more information, see the
e2image(8)ande2fsck(8)man pages on your system.
16.8. Repairing an ext2, ext3, or ext4 file system with e2fsck Copy linkLink copied to clipboard!
This procedure repairs a corrupted ext2, ext3, or ext4 file system using the e2fsck utility.
Procedure
Save a file system image for support investigations. A pre-repair file system metadata image can be useful for support investigations if the corruption is due to a software bug. Patterns of corruption present in the pre-repair image can aid in root-cause analysis.
NoteSeverely damaged file systems may cause problems with metadata image creation.
If you are creating the image for testing purposes, use the
-roption to create a sparse file of the same size as the file system itself.e2fsckcan then operate directly on the resulting file.e2image -r block-device image-file
# e2image -r block-device image-fileCopy to Clipboard Copied! Toggle word wrap Toggle overflow If you are creating the image to be archived or provided for diagnostics, use the
-Qoption, which creates a more compact file format suitable for transfer.e2image -Q block-device image-file
# e2image -Q block-device image-fileCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Replay the log by remounting the file system:
mount file-system umount file-system
# mount file-system # umount file-systemCopy to Clipboard Copied! Toggle word wrap Toggle overflow Automatically repair the file system. If user intervention is required,
e2fsckindicates the unfixed problem in its output and reflects this status in the exit code.e2fsck -p block-device
# e2fsck -p block-deviceCopy to Clipboard Copied! Toggle word wrap Toggle overflow
Chapter 17. Mounting file systems Copy linkLink copied to clipboard!
As a system administrator, you can mount file systems on your system to access data on them.
17.1. The Linux mount mechanism Copy linkLink copied to clipboard!
On Linux, UNIX, and similar operating systems, file systems on different partitions and removable devices (CDs, DVDs, or USB flash drives for example) can be attached to a certain point (the mount point) in the directory tree, and then detached again. While a file system is mounted on a directory, the original content of the directory is not accessible.
Note that Linux does not prevent you from mounting a file system to a directory with a file system already attached to it.
When mounting, you can identify the device by:
-
a universally unique identifier (UUID): for example,
UUID=34795a28-ca6d-4fd8-a347-73671d0c19cb -
a volume label: for example,
LABEL=home -
a full path to a non-persistent block device: for example,
/dev/sda3
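To find these identifiers for a block device, you can use the blkid utility; the device name and the values in the output are illustrative:
# blkid /dev/sda3
/dev/sda3: LABEL="home" UUID="34795a28-ca6d-4fd8-a347-73671d0c19cb" TYPE="xfs"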
When you mount a file system using the mount command without all required information, that is without the device name, the target directory, or the file system type, the mount utility reads the content of the /etc/fstab file to check if the given file system is listed there. The /etc/fstab file contains a list of device names and the directories in which the selected file systems are set to be mounted as well as the file system type and mount options. Therefore, when mounting a file system that is specified in /etc/fstab, the following command syntax is sufficient:
Mounting by the mount point:
mount directory
# mount directoryCopy to Clipboard Copied! Toggle word wrap Toggle overflow Mounting by the block device:
mount device
# mount deviceCopy to Clipboard Copied! Toggle word wrap Toggle overflow
17.2. Listing currently mounted file systems Copy linkLink copied to clipboard!
List all currently mounted file systems on the command line by using the findmnt utility.
Procedure
To list all mounted file systems, use the
findmntutility:findmnt
$ findmntCopy to Clipboard Copied! Toggle word wrap Toggle overflow To limit the listed file systems only to a certain file system type, add the
--typesoption:findmnt --types fs-type
$ findmnt --types fs-typeCopy to Clipboard Copied! Toggle word wrap Toggle overflow For example below is an example to list only XFS file systems:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow
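The following output is only an illustrative sketch; the targets, source devices, and mount options on your system will differ:

$ findmnt --types xfs
TARGET SOURCE    FSTYPE OPTIONS
/      /dev/sda1 xfs    rw,relatime,attr2,inode64,noquota
/boot  /dev/sda2 xfs    rw,relatime,attr2,inode64,noquota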
17.3. Mounting a file system with mount Copy linkLink copied to clipboard!
Mount a file system by using the mount utility.
Prerequisites
Verify that no file system is already mounted on your chosen mount point:
$ findmnt mount-point
Procedure
To attach a certain file system, use the mount utility:

# mount device mount-point

For example, to mount a local XFS file system identified by UUID:

# mount UUID=ea74bbec-536d-490c-b8d9-5b40bbd7545b /mnt/data

If mount cannot recognize the file system type automatically, specify it using the --types option:

# mount --types type device mount-point

For example, to mount a remote NFS file system:

# mount --types nfs4 host:/remote-export /mnt/nfs
17.4. Moving a mount point Copy linkLink copied to clipboard!
Change the mount point of a mounted file system to a different directory by using the mount utility.
Procedure
To change the directory in which a file system is mounted:
# mount --move old-directory new-directory

For example, to move the file system mounted in the /mnt/userdirs/ directory to the /home/ mount point:

# mount --move /mnt/userdirs /home

Verify that the file system has been moved as expected:

$ findmnt
$ ls old-directory
$ ls new-directory
17.5. Unmounting a file system with umount Copy linkLink copied to clipboard!
Unmount a file system by using the umount utility.
Procedure
Try unmounting the file system using either of the following commands:
By mount point:
# umount mount-point

By device:

# umount device

If the command fails with an error similar to the following, it means that the file system is in use because a process is using resources on it:

umount: /run/media/user/FlashDrive: target is busy.

If the file system is in use, use the fuser utility to determine which processes are accessing it. For example:

$ fuser --mount /run/media/user/FlashDrive
/run/media/user/FlashDrive: 18351

Afterwards, stop the processes using the file system and try unmounting it again.
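Continuing the example, PID 18351 is the illustrative process reported by fuser; review what it is before stopping it, then retry the unmount:

$ ps -p 18351 -o pid,user,comm
# kill 18351
# umount /run/media/user/FlashDrive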
17.6. Mounting and unmounting file systems in the web console Copy linkLink copied to clipboard!
To use a partition on a RHEL system, you must mount a file system on it.
You can also unmount a file system, after which the RHEL system stops using it. Unmounting a file system enables you to delete, remove, or reformat the device.
Prerequisites
- The cockpit-storaged package is installed on your system.
- You have installed the RHEL 10 web console. For instructions, see Installing and enabling the web console.
- If you want to unmount a file system, ensure that the system does not use any file, service, or application stored in the partition.
Procedure
- Log in to the RHEL 10 web console.
- Click the Storage tab.
- In the Storage table, select a volume from which you want to delete the partition.
- In the GPT partitions section, click the menu button next to the partition whose file system you want to mount or unmount.
- Click Mount or Unmount.
17.7. Common mount options Copy linkLink copied to clipboard!
The mount utility supports various options for controlling file system behavior, access permissions, and mounting preferences across different file system types.
The following table lists the most common options of the mount utility. You can apply these mount options using the following syntax:
# mount --options option1,option2,option3 device mount-point
| Option | Description |
|---|---|
| async | Enables asynchronous input and output operations on the file system. |
| auto | Enables the file system to be mounted automatically using the mount -a command. |
| defaults | Provides an alias for the async,auto,dev,exec,nouser,rw,suid options. |
| exec | Allows the execution of binary files on the particular file system. |
| loop | Mounts an image as a loop device. |
| noauto | Default behavior disables the automatic mount of the file system using the mount -a command. |
| noexec | Disallows the execution of binary files on the particular file system. |
| nouser | Disallows an ordinary user (that is, other than root) to mount and unmount the file system. |
| remount | Remounts the file system in case it is already mounted. |
| ro | Mounts the file system for reading only. |
| rw | Mounts the file system for both reading and writing. |
| user | Allows an ordinary user (that is, other than root) to mount and unmount the file system. |
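As a small illustration of combining these options (the image path and mount point below are placeholders, not taken from the original text), the following mounts an image file read-only as a loop device:

# mount --options loop,ro /tmp/example.img /mnt/image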
Chapter 18. Sharing a mount on multiple mount points Copy linkLink copied to clipboard!
As a system administrator, you can duplicate mount points to make the file systems accessible from multiple directories.
18.2. Creating a private mount point duplicate Copy linkLink copied to clipboard!
Duplicate a mount point as a private mount. File systems that you later mount under the duplicate or the original mount point are not reflected in the other.
Procedure
Create a virtual file system (VFS) node from the original mount point:
# mount --bind original-dir original-dir

Mark the original mount point as private:

# mount --make-private original-dir

Alternatively, to change the mount type for the selected mount point and all mount points under it, use the --make-rprivate option instead of --make-private.

Create the duplicate:

# mount --bind original-dir duplicate-dir
Example 18.1. Duplicating /media into /mnt as a private mount point
Create a VFS node from the /media directory:

# mount --bind /media /media

Mark the /media directory as private:

# mount --make-private /media

Create its duplicate in /mnt:

# mount --bind /media /mnt

It is now possible to verify that /media and /mnt share content, but none of the mounts within /media appear in /mnt. For example, if the CD-ROM drive contains non-empty media and the /media/cdrom/ directory exists, use:

# mount /dev/cdrom /media/cdrom
# ls /media/cdrom
EFI GPL isolinux LiveOS
# ls /mnt/cdrom
#

It is also possible to verify that file systems mounted in the /mnt directory are not reflected in /media. For example, if a non-empty USB flash drive that uses the /dev/sdc1 device is plugged in and the /mnt/flashdisk/ directory is present, use:

# mount /dev/sdc1 /mnt/flashdisk
# ls /media/flashdisk
# ls /mnt/flashdisk
en-US publican.cfg
18.4. Creating a slave mount point duplicate Copy linkLink copied to clipboard!
Duplicate a mount point as a slave mount type. File systems that you later mount under the original mount point are reflected in the duplicate but not the other way around.
Procedure
Create a virtual file system (VFS) node from the original mount point:
# mount --bind original-dir original-dir

Mark the original mount point as shared:

# mount --make-shared original-dir

Alternatively, to change the mount type for the selected mount point and all mount points under it, use the --make-rshared option instead of --make-shared.

Create the duplicate and mark it as the slave type:

# mount --bind original-dir duplicate-dir
# mount --make-slave duplicate-dir
Example 18.3. Duplicating /media into /mnt as a slave mount point
This example shows how to get the content of the /media directory to appear in /mnt as well, without any mounts in the /mnt directory being reflected in /media.

Create a VFS node from the /media directory:

# mount --bind /media /media

Mark the /media directory as shared:

# mount --make-shared /media

Create its duplicate in /mnt and mark it as slave:

# mount --bind /media /mnt
# mount --make-slave /mnt

Verify that a mount within /media also appears in /mnt. For example, if the CD-ROM drive contains non-empty media and the /media/cdrom/ directory exists, use:

# mount /dev/cdrom /media/cdrom
# ls /media/cdrom
EFI GPL isolinux LiveOS
# ls /mnt/cdrom
EFI GPL isolinux LiveOS

Also verify that file systems mounted in the /mnt directory are not reflected in /media. For example, if a non-empty USB flash drive that uses the /dev/sdc1 device is plugged in and the /mnt/flashdisk/ directory is present, use:

# mount /dev/sdc1 /mnt/flashdisk
# ls /media/flashdisk
# ls /mnt/flashdisk
en-US publican.cfg
18.5. Preventing a mount point from being duplicated Copy linkLink copied to clipboard!
Mark a mount point as unbindable so that it is not possible to duplicate it in another mount point.
Procedure
To change the type of a mount point to an unbindable mount, use:
# mount --bind mount-point mount-point
# mount --make-unbindable mount-point

Alternatively, to change the mount type for the selected mount point and all mount points under it, use the --make-runbindable option instead of --make-unbindable.

Any subsequent attempt to make a duplicate of this mount fails with an error.
Example 18.4. Preventing /media from being duplicated
To prevent the /media directory from being shared, use:

# mount --bind /media /media
# mount --make-unbindable /media
Chapter 19. Persistently mounting file systems Copy linkLink copied to clipboard!
As a system administrator, you can persistently mount file systems to configure non-removable storage.
19.1. The /etc/fstab file Copy linkLink copied to clipboard!
Use the /etc/fstab configuration file to control persistent mount points of file systems. Each line in the /etc/fstab file defines a mount point of a file system.
It includes the following fields separated by white space:
- The block device identified by a persistent attribute or a path in the /dev directory.
- The directory where the device will be mounted.
- The file system on the device.
- Mount options for the file system, which include the defaults option to mount the partition at boot time with default options. The mount option field also recognizes the systemd mount unit options in the x-systemd.option format.
- Backup option for the dump utility.
- Check order for the fsck utility.
The systemd-fstab-generator dynamically converts the entries from the /etc/fstab file to systemd mount units. systemd automatically mounts LVM volumes listed in /etc/fstab during manual activation unless the corresponding mount unit is masked.
Example 19.1. The /boot file system in /etc/fstab
| Block device | Mount point | File system | Options | Backup | Check |
|---|---|---|---|---|---|
| UUID=ea74bbec-536d-490c-b8d9-5b40bbd7545b | /boot | xfs | defaults | 0 | 0 |
The systemd service automatically generates mount units from entries in /etc/fstab.
For more information, see the fstab(5) and systemd.mount(5) man pages on your system.
19.2. Adding a file system to /etc/fstab Copy linkLink copied to clipboard!
Configure a persistent mount point for a file system in the /etc/fstab configuration file.
Procedure
Find out the UUID attribute of the file system:
$ lsblk --fs storage-device

For example, view the UUID of a partition:

$ lsblk --fs /dev/sda1
NAME FSTYPE LABEL UUID                                 MOUNTPOINT
sda1 xfs    Boot  ea74bbec-536d-490c-b8d9-5b40bbd7545b /boot

If the mount point directory does not exist, create it:

# mkdir --parents mount-point

As root, edit the /etc/fstab file and add a line for the file system, identified by the UUID. For example, the following is the /boot mount point in /etc/fstab:

UUID=ea74bbec-536d-490c-b8d9-5b40bbd7545b /boot xfs defaults 0 0

Regenerate mount units so that your system registers the new configuration:

# systemctl daemon-reload

Try mounting the file system to verify that the configuration works:

# mount mount-point
Chapter 20. Mounting file systems on demand Copy linkLink copied to clipboard!
As a system administrator, you can configure file systems, such as NFS, to mount automatically on demand.
20.1. The autofs service Copy linkLink copied to clipboard!
The autofs service can mount and unmount file systems automatically (on-demand), therefore saving system resources. It can be used to mount file systems such as NFS, AFS, SMBFS, CIFS, and local file systems.
One drawback of permanent mounting using the /etc/fstab configuration is that, regardless of how infrequently a user accesses the mounted file system, the system must dedicate resources to keep the mounted file system in place. This might affect system performance when, for example, the system is maintaining NFS mounts to many systems at one time.
An alternative to /etc/fstab is to use the kernel-based autofs service. It consists of:
- A kernel module that implements a file system.
- A user-space service that performs all of the other functions.
For more information, see the autofs(8) man page on your system.
20.2. The autofs configuration files Copy linkLink copied to clipboard!
This section describes the usage and syntax of configuration files used by the autofs service.
The master map file
The autofs service uses /etc/auto.master (master map) as its default primary configuration file. This can be changed to use another supported network source and name using the autofs configuration in the /etc/autofs.conf configuration file in conjunction with the Name Service Switch (NSS) mechanism.
All on-demand mount points must be configured in the master map. Mount point, host name, exported directory, and options can all be specified in a set of files (or other supported network sources) rather than configuring them manually for each host.
The master map file lists mount points controlled by autofs, and their corresponding configuration files or network sources known as automount maps. The format of the master map is:
mount-point map-name options
The variables used in this format are:
- mount-point
- The autofs mount point; for example, /mnt/data.
- map-name
- The map source file, which contains a list of mount points and the file system location from which those mount points should be mounted.
- options
- If supplied, these apply to all entries in the given map, if they do not themselves have options specified.
Example 20.1. The /etc/auto.master file
The following is a sample line from /etc/auto.master file:
/mnt/data /etc/auto.data
Map files
Map files configure the properties of individual on-demand mount points.
The automounter creates the directories if they do not exist. If the directories existed before the automounter was started, the automounter will not remove them when it exits. If a timeout is specified, the directory is automatically unmounted if it is not accessed for the timeout period.
The general format of maps is similar to the master map. However, the options field appears between the mount point and the location instead of at the end of the entry as in the master map:
mount-point options location
The variables used in this format are:
- mount-point
- This refers to the autofs mount point. This can be a single directory name for an indirect mount or the full path of the mount point for direct mounts. Each direct and indirect map entry key (mount-point) can be followed by a space-separated list of offset directories (subdirectory names each beginning with /), making them what is known as a multi-mount entry.
- options
- When supplied, these options are appended to the master map entry options, if any, or used instead of the master map options if the configuration entry append_options is set to no.
- location
- This refers to the file system location, such as a local file system path (preceded with the Sun map format escape character : for map names beginning with /), an NFS file system, or another valid file system location.
Example 20.2. A map file
The following is a sample from a map file; for example, /etc/auto.misc. To use this map file for mounting under /misc, add the following to the master map file /etc/auto.master:
/misc /etc/auto.misc
The /etc/auto.misc file contains:
payroll -fstype=nfs4 personnel:/exports/payroll
sales -fstype=xfs :/dev/hda4
The first column in the map file indicates the autofs mount point keys (payroll and sales). The second column indicates the options for the autofs mount. The third column indicates the source of the mount: payroll is an NFS export from the server called personnel, and sales is a local XFS device.
Following the given configuration, the autofs mount points will be /misc/payroll and /misc/sales. The -fstype= option is often omitted and is not needed if the file system is NFS, including mounts for NFSv4 if the system default is NFSv4 for NFS mounts.
Using the given configuration, if a process requires access to an autofs unmounted directory such as /misc/payroll/2006/July.sxc, the autofs service automatically mounts the directory.
The amd map format
The autofs service recognizes map configuration in the amd format as well. This is useful if you want to reuse existing automounter configuration written for the am-utils service, which has been removed from Red Hat Enterprise Linux.
However, Red Hat recommends using the simpler autofs format described in the previous sections.
For more information, see:
- The autofs(5), autofs.conf(5), and auto.master(5) man pages on your system
- The /usr/share/doc/autofs/README.amd-maps file
20.3. Configuring autofs mount points Copy linkLink copied to clipboard!
Configure on-demand mount points by using the autofs service.
Prerequisites
Install the autofs package:

# dnf install autofs

Start and enable the autofs service:

# systemctl enable --now autofs
Procedure
- Create a map file for the on-demand mount point, located at /etc/auto.identifier. Replace identifier with a name that identifies the mount point.
- In the map file, enter the mount point, options, and location fields as described in The autofs configuration files section.
- Register the map file in the master map file, as described in The autofs configuration files section.
- Allow the service to re-read the configuration, so it can manage the newly configured autofs mount:

# systemctl reload autofs.service

- Try accessing content in the on-demand directory:

# ls automounted-directory
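As a minimal sketch of this procedure, assume a hypothetical NFS export server.example.com:/exports/backup that should appear on demand under /mnt/data/backup; the map file name reuses the /etc/auto.data and /mnt/data values from Example 20.1, while the server name and export path are examples only:

# cat /etc/auto.data
backup -fstype=nfs4 server.example.com:/exports/backup

# grep auto.data /etc/auto.master
/mnt/data /etc/auto.data

# systemctl reload autofs.service
# ls /mnt/data/backup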
20.4. Configuring autofs to use ldap for storing and retrieving automount maps Copy linkLink copied to clipboard!
You can configure the autofs service to retrieve automount maps stored in an LDAP directory.
Prerequisites
- The autofs and openldap packages are installed.
- A Kerberos-capable service is running for secure authentication.
Procedure
- To configure LDAP access, modify the /etc/openldap/ldap.conf file. Ensure that the BASE and URI options are set to reflect the appropriate server and base for locating automount entries.
- In the /etc/autofs.conf file, configure the LDAP schema for automount maps. By default, autofs checks the commonly used schemas in the order given in the configuration file.

Note: You can also set these values explicitly to marginally reduce LDAP queries. You can write the attributes in both lower and upper case in the /etc/autofs.conf file.

The most recently established schema for storing automount maps in LDAP is described by the rfc2307bis draft. To use this schema, configure it in the /etc/autofs.conf file by uncommenting and setting the appropriate value for the ldap_schema option. For example, if you are using a non-standard schema or need to override the default behavior, you can specify the relevant schema attributes in the /etc/autofs.conf file. The following values correspond to commonly used schema settings such as those from the rfc2307bis draft:

default_map_object_class = automountMap
default_entry_object_class = automount
default_map_attribute = automountMapName
default_entry_attribute = automountKey
default_value_attribute = automountInformation

Note that autofs automatically detects standard schemas, and specifying these settings is typically only necessary in custom or mixed-schema environments. If you choose to specify a custom schema in the configuration, ensure that only one complete set of schema-related entries is active, and comment out any others to avoid conflicts. In the rfc2307bis schema, the automountKey attribute replaces the cn attribute used in the older rfc2307 schema.

- To allow authentication from an LDAP server, edit the /etc/autofs_ldap_auth.conf file:
  - Change authrequired to yes.
  - Set the principal to the Kerberos host principal for the LDAP server, host/FQDN@REALM. The principal name is used to connect to the directory as part of GSS client authentication. For more information about the host principal, see Using canonicalized DNS host names in IdM. You can also run klist -k to get the exact host principal information.
20.5. Automounting NFS server user home directories with autofs service Copy linkLink copied to clipboard!
Configure the autofs service to mount user home directories automatically.
Prerequisites
- The autofs package is installed.
- The autofs service is enabled and running.
Procedure
Define the mount point and the local automount map by editing the
/etc/auto.master file on the server where you want to mount user home directories, and add:

/home /etc/auto.home

Create the local automount map file /etc/auto.home on the server where you need to mount user home directories, and add:

* -fstype=nfs,rw,sync host.example.com:/home/&

You can skip the fstype parameter, as it is nfs by default. For more information, see the autofs(5) man page on your system.

Reload the autofs service:

# systemctl reload autofs
20.6. Overriding or augmenting autofs site configuration files Copy linkLink copied to clipboard!
It is sometimes useful to override site defaults for a specific mount point on a client system.
For example, consider the following initial conditions:
- nsswitch tells autofs which services to check for maps.
- The map you want to augment or add to is named auto.home.
- The auto.home map is stored in ldap, and the /etc/nsswitch.conf file has the following directive:

automount: files ldap

- The /etc/auto.master map file contains:

/home /etc/auto.home

- The local /etc/auto.home map file contains:

* fileserver.example.com:/export/home/&
Using plus map inclusion:
To read the centrally managed auto.home map through nsswitch, remove the wildcard map entry * fileserver.example.com:/export/home/& from the local /etc/auto.home file and replace it with +auto.home.
Plus map inclusion can only be used in local maps. When autofs encounters the files source through a plus map inclusion, it skips it if the included map name is identical to the map currently being read. In this case, since both are auto.home, autofs proceeds to the next source defined in nsswitch.conf, which is ldap. If a wildcard map entry is present in the map, it does not affect the directory listing, even when browse mode is enabled. This is because autofs does not know what the wildcard might match when a lookup is done. As a result, it cannot create mount point directories in advance.
Overriding or Adding Entries
To override or add specific entries locally, place them before the +auto.home line in /etc/auto.home. For example, the /etc/auto.home file would look like:
mydir someserver:/export/mydir
+auto.home
To display local entries like mydir when listing /home, enable browse mode by setting browse_mode = yes in /etc/autofs.conf. Wildcard entries (like *) do not appear in directory listings unless accessed.
20.7. Using systemd.automount to mount a file system on-demand with /etc/fstab Copy linkLink copied to clipboard!
Mount a file system on demand by using the systemd automount units when the mount point is defined in /etc/fstab. You have to add an automount unit for each mount and enable it.
Procedure
Add the desired fstab entry as documented in Persistently mounting file systems. For example:

/dev/disk/by-id/da875760-edb9-4b82-99dc-5f4b1ff2e5f4 /mount/point xfs defaults 0 0

Add x-systemd.automount to the options field of the entry created in the previous step.

Load the newly created units so that your system registers the new configuration:

# systemctl daemon-reload

Start the automount unit:

# systemctl start mount-point.automount
Verification
Check that mount-point.automount is running:

# systemctl status mount-point.automount

Check that the automounted directory has the desired content:

# ls /mount/point
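The automount and mount unit names must correspond to the escaped mount path, with slashes becoming dashes. If you are unsure of the correct unit name, systemd can generate it for you; a small sketch using the /mount/point path from the example above:

$ systemd-escape --path --suffix=automount /mount/point
mount-point.automount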
For more information, see Managing systemd
20.8. Using systemd.automount to mount a file system on-demand with a mount unit Copy linkLink copied to clipboard!
Mount a file system on demand by using the systemd automount units when the mount point is defined by a mount unit. You have to add an automount unit for each mount and enable it.
Procedure
Create a mount unit. For example:
mount-point.mount:

[Mount]
What=/dev/disk/by-uuid/f5755511-a714-44c1-a123-cfde0e4ac688
Where=/mount/point
Type=xfs

Create a unit file with the same name as the mount unit, but with the extension .automount. Open the file and create an [Automount] section. Set the Where= option to the mount path:

[Automount]
Where=/mount/point

[Install]
WantedBy=multi-user.target

Load the newly created units so that your system registers the new configuration:

# systemctl daemon-reload

Enable and start the automount unit instead of the mount unit:

# systemctl enable --now mount-point.automount
Verification
Check that mount-point.automount is running:

# systemctl status mount-point.automount

Check that the automounted directory has the desired content:

# ls /mount/point
For more information, see Managing systemd
Chapter 21. Using SSSD component from IdM to cache the autofs maps Copy linkLink copied to clipboard!
The System Security Services Daemon (SSSD) is a system service to access remote service directories and authentication mechanisms. Caching the data is useful when the network connection is slow.
To configure the SSSD service to cache the autofs maps, follow the procedure in this section.
21.1. Configuring SSSD to cache autofs maps Copy linkLink copied to clipboard!
The SSSD service can be used to cache autofs maps stored on an IdM server without having to configure autofs to use the IdM server at all.
Prerequisites
- The sssd package is installed.
Procedure
Open the SSSD configuration file:
# vim /etc/sssd/sssd.conf

Add the autofs service to the list of services handled by SSSD:

[sssd]
domains = ldap
services = nss,pam,autofs

Create a new [autofs] section. You can leave it blank, because the default settings for the autofs service work with most infrastructures:

[autofs]

For more information, see the sssd.conf man page on your system.

Optional: Set a search base for the autofs entries. By default, this is the LDAP search base, but a subtree can be specified in the ldap_autofs_search_base parameter:

[domain/EXAMPLE]
ldap_search_base = "dc=example,dc=com"
ldap_autofs_search_base = "ou=automount,dc=example,dc=com"

Restart the SSSD service:

# systemctl restart sssd.service

Check the /etc/nsswitch.conf file, so that SSSD is listed as a source for automount configuration:

automount: sss files

Restart the autofs service:

# systemctl restart autofs.service

Test the configuration by listing a user's /home directory, assuming there is a master map entry for /home:

# ls /home/userName

If this does not mount the remote file system, check the /var/log/messages file for errors. If necessary, increase the debug level in the /etc/sysconfig/autofs file by setting the logging parameter to debug.
Chapter 22. Setting read-only permissions for the root file system Copy linkLink copied to clipboard!
Sometimes, you need to mount the root file system (/) with read-only permissions. For example, to enhance security or to ensure data integrity after an unexpected system power-off.
22.1. Files and directories that always retain write permissions Copy linkLink copied to clipboard!
For the system to function properly, some files and directories need to retain write permissions. When the root file system is mounted in read-only mode, these files are mounted in RAM using the tmpfs temporary file system.
The default set of such files and directories is read from the /etc/rwtab file. Note that the readonly-root package is required to have this file present in your system.
Entries in the /etc/rwtab file follow this format:
copy-method path
In this syntax:
- Replace copy-method with one of the keywords specifying how the file or directory is copied to tmpfs.
- Replace path with the path to the file or directory.
The /etc/rwtab file recognizes the following ways in which a file or directory can be copied to tmpfs:
empty
An empty path is copied to tmpfs. For example:

empty /tmp

dirs
A directory tree is copied to tmpfs, empty. For example:

dirs /var/run

files
A file or a directory tree is copied to tmpfs intact. For example:

files /etc/resolv.conf
The same format applies when adding custom paths to /etc/rwtab.d/.
22.2. Configuring the root file system to mount with read-only permissions on boot Copy linkLink copied to clipboard!
With this procedure, the root file system is mounted read-only on all following boots.
Procedure
In the
/etc/sysconfig/readonly-root file, set the READONLY option to yes to mount the file systems as read-only:

READONLY=yes

Add the ro option in the root entry (/) in the /etc/fstab file:

/dev/mapper/luks-c376919e... / xfs x-systemd.device-timeout=0,ro 1 1

Enable the ro kernel option:

# grubby --update-kernel=ALL --args="ro"

Ensure that the rw kernel option is disabled:

# grubby --update-kernel=ALL --remove-args="rw"

If you need to add files and directories to be mounted with write permissions in the tmpfs file system, create a text file in the /etc/rwtab.d/ directory and put the configuration there. For example, to mount the /etc/example/file file with write permissions, add this line to the /etc/rwtab.d/example file:

files /etc/example/file

Important: Changes made to files and directories in tmpfs do not persist across boots.

- Reboot the system to apply the changes.
Troubleshooting
If you mount the root file system with read-only permissions by mistake, you can remount it with read-and-write permissions again using the following command:
# mount -o remount,rw /
Chapter 23. Limiting storage space usage on XFS with quotas Copy linkLink copied to clipboard!
You can restrict the amount of disk space available to users or groups by implementing disk quotas. You can also define a warning level at which system administrators are informed before a user consumes too much disk space or a partition becomes full.
The XFS quota subsystem manages limits on disk space (blocks) and file (inode) usage. XFS quotas control or report on usage of these items at the user, group, or directory (project) level. Group and project quotas are only mutually exclusive on older non-default XFS disk formats.
When managing on a per-directory or per-project basis, XFS manages the disk usage of directory hierarchies associated with a specific project.
23.1. Disk quotas Copy linkLink copied to clipboard!
Disk quotas are limits that control how much disk space users and groups can consume on a file system. They help prevent any single user from using all available storage space and provide fair resource allocation across multiple users.
In most computing environments, disk space is not infinite. The quota subsystem provides a mechanism to control usage of disk space.
You can configure disk quotas for individual users as well as user groups on the local file systems. This makes it possible to manage the space allocated for user-specific files (such as email) separately from the space allocated to the projects that a user works on. The quota subsystem warns users when they exceed their allotted limit, but allows some extra space for current work (hard limit/soft limit).
If quotas are implemented, you need to check if the quotas are exceeded and make sure the quotas are accurate. If users repeatedly exceed their quotas or consistently reach their soft limits, a system administrator can either help the user determine how to use less disk space or increase the user’s disk quota.
You can set quotas to control:
- The number of consumed disk blocks.
- The number of inodes, which are data structures that contain information about files in UNIX file systems. Because inodes store file-related information, this allows control over the number of files that can be created.
23.2. The xfs_quota tool Copy linkLink copied to clipboard!
You can use the xfs_quota tool to manage quotas on XFS file systems. In addition, you can use XFS file systems with limit enforcement turned off as an effective disk usage accounting system.
The XFS quota system differs from other file systems in a number of ways. Most importantly, XFS considers quota information as file system metadata and uses journaling to provide a higher level guarantee of consistency.
For more information, see the xfs_quota(8) man page on your system.
23.3. File system quota management in XFS Copy linkLink copied to clipboard!
The XFS quota subsystem manages limits on disk space (blocks) and file (inode) usage. XFS quotas control or report on usage of these items at the user, group, or directory (project) level. Group and project quotas are only mutually exclusive on older non-default XFS disk formats.
When managing on a per-directory or per-project basis, XFS manages the disk usage of directory hierarchies associated with a specific project.
23.4. Enabling disk quotas for XFS Copy linkLink copied to clipboard!
Enable disk quotas for users, groups, and projects on an XFS file system. Once quotas are enabled, the xfs_quota tool can be used to set limits and report on disk usage.
Procedure
Enable quotas for users:
# mount -o uquota /dev/xvdb1 /xfs

Replace uquota with uqnoenforce to allow usage reporting without enforcing any limits.

Enable quotas for groups:

# mount -o gquota /dev/xvdb1 /xfs

Replace gquota with gqnoenforce to allow usage reporting without enforcing any limits.

Enable quotas for projects:

# mount -o pquota /dev/xvdb1 /xfs

Replace pquota with pqnoenforce to allow usage reporting without enforcing any limits.

Alternatively, include the quota mount options in the /etc/fstab file. The following example shows entries in the /etc/fstab file to enable quotas for users, groups, and projects, respectively, on an XFS file system. These examples also mount the file system with read/write permissions:

# vim /etc/fstab
/dev/xvdb1 /xfs xfs rw,quota    0 0
/dev/xvdb1 /xfs xfs rw,gquota   0 0
/dev/xvdb1 /xfs xfs rw,prjquota 0 0

For more information, see the xfs(5) and xfs_quota(8) man pages on your system.
23.5. Reporting XFS usage Copy linkLink copied to clipboard!
Use the xfs_quota tool to set limits and report on disk usage. By default, xfs_quota is run interactively, and in basic mode. Basic mode subcommands simply report usage, and are available to all users.
Prerequisites
- Quotas have been enabled for the XFS file system. See Enabling disk quotas for XFS.
Procedure
Start the xfs_quota shell:

# xfs_quota

Show usage and limits for the given user:

xfs_quota> quota username

Show free and used counts for blocks and inodes:

xfs_quota> df

Run the help command to display the basic commands available with xfs_quota:

xfs_quota> help

Specify q to exit xfs_quota:

xfs_quota> q

For more information, see the xfs_quota(8) man page on your system.
23.6. Modifying XFS quota limits Copy linkLink copied to clipboard!
Start the xfs_quota tool with the -x option to enable expert mode and run the administrator commands, which allow modifications to the quota system. The subcommands of this mode allow actual configuration of limits, and are available only to users with elevated privileges.
Prerequisites
- Quotas have been enabled for the XFS file system. See Enabling disk quotas for XFS.
Procedure
Start the
xfs_quota shell with the -x option to enable expert mode:

# xfs_quota -x /path

Report quota information for a specific file system:

xfs_quota> report /path

For example, to display a sample quota report for /home (on /dev/blockdevice), use the command report -h /home.

Modify quota limits:

xfs_quota> limit isoft=500m ihard=700m user

For example, to set a soft and hard inode count limit of 500 and 700 respectively for user john, whose home directory is /home/john, use the following command:

# xfs_quota -x -c 'limit isoft=500 ihard=700 john' /home/

In this case, pass the mount point of the mounted XFS file system (/home/ in this example) as the last argument.

Display the expert commands available with xfs_quota -x:

xfs_quota> help
Verification
Verify that the quota limits have been modified:
For example, run the report subcommand again in expert mode and confirm that the new limits are listed. For more information, see the xfs_quota(8) man page on your system.
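The limit subcommand accepts block limits in the same way as inode limits; a brief sketch that reuses the user john and the /home/ mount point from the example above to set a 5 GiB soft and 6 GiB hard block limit:

# xfs_quota -x -c 'limit bsoft=5g bhard=6g john' /home/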
23.7. Setting project limits for XFS Copy linkLink copied to clipboard!
Configure limits for project-controlled directories.
Procedure
Add the project-controlled directories to
/etc/projects. For example, the following adds the /var/log path with a unique ID of 11 to /etc/projects. Your project ID can be any numerical value mapped to your project.

# echo 11:/var/log >> /etc/projects

Add project names to /etc/projid to map project IDs to project names. For example, the following associates a project called logfiles with the project ID of 11 as defined in the previous step.

# echo logfiles:11 >> /etc/projid

Initialize the project directory. For example, the following initializes the project directory /var:

# xfs_quota -x -c 'project -s logfiles' /var

Configure quotas for projects with initialized directories:

# xfs_quota -x -c 'limit -p bhard=1g logfiles' /var

For more information, see the xfs_quota(8), projid(5), and projects(5) man pages on your system.
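To confirm that the project limit is in effect, you can request a project-level report in expert mode; this sketch reuses the logfiles project and the /var mount point from the steps above:

# xfs_quota -x -c 'report -p' /var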
Chapter 24. Limiting storage space usage on ext4 with quotas Copy linkLink copied to clipboard!
You have to enable disk quotas on your system before you can assign them. You can assign disk quotas per user, per group, or per project. However, if there is a soft limit set, you can exceed these quotas for a configurable period of time, known as the grace period.
24.1. Installing the quota tool Copy linkLink copied to clipboard!
You must install the quota RPM package to implement disk quotas.
Procedure
Install the
quota package:

# dnf install quota
24.2. Enabling quota feature on file system creation Copy linkLink copied to clipboard!
Enable quotas on file system creation.
Procedure
Enable quotas on file system creation:
# mkfs.ext4 -O quota /dev/sda

Note: Only user and group quotas are enabled and initialized by default.

Change the defaults on file system creation:

# mkfs.ext4 -O quota -E quotatype=usrquota:grpquota:prjquota /dev/sda

Mount the file system:

# mount /dev/sda
24.3. Enabling quota feature on existing file systems Copy linkLink copied to clipboard!
Enable the quota feature on an existing file system by using the tune2fs command.
Procedure
Unmount the file system:
# umount /dev/sda

Enable quotas on the existing file system:

# tune2fs -O quota /dev/sda

Note: Only user and group quotas are initialized by default.

Change the defaults:

# tune2fs -Q usrquota,grpquota,prjquota /dev/sda

Mount the file system:

# mount /dev/sda
24.4. Enabling quota enforcement Copy linkLink copied to clipboard!
The quota accounting is enabled by default after mounting the file system without any additional options, but quota enforcement is not.
Prerequisites
- Quota feature is enabled and the default quotas are initialized.
Procedure
Enable quota enforcement by
quotaon for the user quota:

# mount /dev/sda /mnt
# quotaon /mnt

Note: The quota enforcement can be enabled at mount time by using the usrquota, grpquota, or prjquota mount options.

# mount -o usrquota,grpquota,prjquota /dev/sda /mnt

Enable user, group, and project quotas for all file systems:

# quotaon -vaugP

- If none of the -u, -g, or -P options are specified, only the user quotas are enabled.
- If only the -g option is specified, only group quotas are enabled.
- If only the -P option is specified, only project quotas are enabled.

Enable quotas for a specific file system, such as /home:

# quotaon -vugP /home
24.5. Assigning quotas per user Copy linkLink copied to clipboard!
The disk quotas are assigned to users with the edquota command.
The text editor defined by the EDITOR environment variable is used by edquota. To change the editor, set the EDITOR environment variable in your ~/.bash_profile file to the full path of the editor of your choice.
Prerequisites
- User must exist prior to setting the user quota.
Procedure
Assign the quota for a user:
# edquota username

Replace username with the user to which you want to assign the quotas.

For example, if you enable a quota for the /dev/sda partition and execute the command edquota testuser, the following is displayed in the default editor configured on the system:

Disk quotas for user testuser (uid 501):
Filesystem   blocks   soft   hard   inodes   soft   hard
/dev/sda      44043      0      0    37418      0      0

Change the desired limits. If any of the values are set to 0, that limit is not set. Change them in the text editor.

For example, the following shows that the soft and hard block limits for testuser have been set to 50000 and 55000, respectively:

Disk quotas for user testuser (uid 501):
Filesystem   blocks   soft   hard   inodes   soft   hard
/dev/sda      44043  50000  55000    37418      0      0

- The first column is the name of the file system that has a quota enabled for it.
- The second column shows how many blocks the user is currently using.
- The next two columns are used to set soft and hard block limits for the user on the file system.
- The inodes column shows how many inodes the user is currently using. The last two columns are used to set the soft and hard inode limits for the user on the file system.
- The hard block limit is the absolute maximum amount of disk space that a user or group can use. Once this limit is reached, no further disk space can be used.
- The soft block limit defines the maximum amount of disk space that can be used. However, unlike the hard limit, the soft limit can be exceeded for a certain amount of time. That time is known as the grace period. The grace period can be expressed in seconds, minutes, hours, days, weeks, or months.
Verification
Verify that the quota for the user has been set:
# quota -v testuser
Disk quotas for user testuser:
     Filesystem  blocks  quota  limit  grace  files  quota  limit  grace
       /dev/sda   1000*   1000   1000             0      0      0
24.6. Assigning quotas per group Copy linkLink copied to clipboard!
You can assign quotas on a per-group basis.
Prerequisites
- Group must exist prior to setting the group quota.
Procedure
Set a group quota:
# edquota -g groupname

For example, to set a group quota for the devel group:

# edquota -g devel

This command displays the existing quota for the group in the text editor:

Disk quotas for group devel (gid 505):
Filesystem   blocks   soft   hard   inodes   soft   hard
/dev/sda     440400      0      0    37418      0      0

- Modify the limits and save the file.
Verification
Verify that the group quota is set:
# quota -vg groupname
24.7. Assigning quotas per project Copy linkLink copied to clipboard!
You can assign quotas per project.
Prerequisites
- Project quota is enabled on your file system.
Procedure
Add the project-controlled directories to
/etc/projects. For example, the following adds the /var/log path with a unique ID of 11 to /etc/projects. Your project ID can be any numerical value mapped to your project.

# echo 11:/var/log >> /etc/projects

Add project names to /etc/projid to map project IDs to project names. For example, the following associates a project called Logs with the project ID of 11 as defined in the previous step.

# echo Logs:11 >> /etc/projid

Set the desired limits:

# edquota -P 11

Note: You can choose the project either by its project ID (11 in this case), or by its name (Logs in this case).

Using quotaon, enable quota enforcement, as described in Enabling quota enforcement.
Verification
Verify that the project quota is set:
# quota -vP 11

Note: You can verify either by the project ID, or by the project name.

For more information, see the edquota(8), projid(5), and projects(5) man pages on your system.
24.8. Setting the grace period for soft limits Copy linkLink copied to clipboard!
If a given quota has soft limits, you can edit the grace period, which is the amount of time for which a soft limit can be exceeded. You can set the grace period for users, groups, or projects.
Procedure
Edit the grace period:
# edquota -t

Important: While other edquota commands operate on quotas for a particular user, group, or project, the -t option operates on every file system with quotas enabled.
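As an illustrative sketch only (the exact layout can vary between quota versions), edquota -t opens an editor with one line per quota-enabled file system, where you adjust the block and inode grace periods:

Grace period before enforcing soft limits for users:
Time units may be: days, hours, minutes, or seconds
  Filesystem             Block grace period     Inode grace period
  /dev/sda                     7days                  7days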
24.9. Turning file system quotas off Copy linkLink copied to clipboard!
Use quotaoff to turn disk quota enforcement off on the specified file systems. Quota accounting stays enabled after executing this command.
Procedure
To turn all user and group quotas off:
# quotaoff -vaugP

- If none of the -u, -g, or -P options are specified, only the user quotas are disabled.
- If only the -g option is specified, only group quotas are disabled.
- If only the -P option is specified, only project quotas are disabled.
- The -v switch causes verbose status information to display as the command executes.

For more information, see the quotaoff(8) man page on your system.
24.10. Reporting on disk quotas Copy linkLink copied to clipboard!
Create a disk quota report by using the repquota utility.
Procedure
Run the
repquota command:

# repquota

For example, the command repquota /dev/sda reports on that specific device.

View the disk usage report for all quota-enabled file systems:

# repquota -augP

The -- symbol displayed after each user indicates whether the block or inode limits have been exceeded. If either soft limit is exceeded, a + character appears in place of the corresponding - character. The first - character represents the block limit, and the second represents the inode limit.

The grace columns are normally blank. If a soft limit has been exceeded, the column contains a time specification equal to the amount of time remaining in the grace period. If the grace period has expired, none appears in its place.

For more information, see the repquota(8) man page on your system.
Chapter 25. Discarding unused blocks
Discard operations improve storage performance and lifespan by informing the storage device which blocks are no longer in use. This allows SSDs to optimize wear leveling and enables thin-provisioned storage to reclaim space.
Requirements
The block device underlying the file system must support physical discard operations.
Physical discard operations are supported if the value in the /sys/block/<device>/queue/discard_max_bytes file is not zero.
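For example, assuming the device of interest is sda (a placeholder), check the value as follows; a non-zero result means physical discard is supported:

# cat /sys/block/sda/queue/discard_max_bytes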
25.1. Types of block discard operations
Block discard operations can be performed by using batch, online, or periodic methods, each with specific use cases and performance recommendations.
The following list describes the various discard operations:
- Batch discard
This type of discard is part of the fstrim command. It discards all unused blocks in a file system that match criteria specified by the administrator. Red Hat Enterprise Linux 10 supports batch discard on XFS and ext4 formatted devices that support physical discard operations.
- Online discard
This type of discard operation is configured at mount time with the discard option, and runs in real time without user intervention. However, it only discards blocks that are transitioning from used to free. Red Hat Enterprise Linux 10 supports online discard on XFS and ext4 formatted devices.
Use batch discard, except when online discard is required to maintain performance, or when batch discard is not feasible for the workload of the system.
- Periodic discard
Batch operations that are run regularly by a systemd service.
All types are supported by the XFS and ext4 file systems.
Recommendations
Use batch or periodic discard.
Use online discard only if:
- the system’s workload is such that batch discard is not feasible, or
- online discard operations are necessary to maintain performance.
25.2. Performing batch block discard
You can perform a batch block discard operation to discard unused blocks on a mounted file system.
Prerequisites
- The file system is mounted.
- The block device underlying the file system supports physical discard operations.
Procedure
Use the fstrim utility:

To perform discard only on a selected file system, use:

# fstrim mount-point

To perform discard on all mounted file systems, use:

# fstrim --all

If you run the fstrim command on:

- a device that does not support discard operations, or
- a logical device (LVM or MD) composed of multiple devices, where any one of the devices does not support discard operations,

the following message displays:

# fstrim /mnt/non_discard
fstrim: /mnt/non_discard: the discard operation is not supported
25.3. Enabling online block discard
You can perform online block discard operations to automatically discard unused blocks on all supported file systems. For more information, see the mount(8) and fstab(5) man pages on your system.
Procedure
Enable online discard at mount time:
When mounting a file system manually, add the -o discard mount option:

# mount -o discard device mount-point

When mounting a file system persistently, add the discard option to the mount entry in the /etc/fstab file.
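For example, an /etc/fstab entry of the following form enables online discard persistently; the device, mount point, and file system type are placeholders:

/dev/sdb1    /mnt/data    xfs    defaults,discard    0 0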
25.4. Enabling periodic block discard
You can enable a systemd timer to regularly discard unused blocks on all supported file systems.
Procedure
Enable and start the systemd timer:

# systemctl enable --now fstrim.timer
Created symlink /etc/systemd/system/timers.target.wants/fstrim.timer → /usr/lib/systemd/system/fstrim.timer.
Verification
Verify the status of the timer:
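For example, the following minimal check shows whether the timer is loaded and active:

# systemctl status fstrim.timer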
Chapter 26. Factors affecting I/O and file system performance
The appropriate settings for storage and file system performance are highly dependent on the storage purpose. I/O and file system performance can be affected by various factors.
Below is a list of factors that can affect I/O and file system performance:
- Data write or read patterns
- Sequential or random
- Buffered or Direct IO
- Data alignment with underlying geometry
- Block size
- File system size
- Journal size and location
- Recording access times
- Ensuring data reliability
- Pre-fetching data
- Pre-allocating disk space
- File fragmentation
- Resource contention
26.1. Tools for monitoring and diagnosing I/O and file system issues
Monitor and diagnose I/O and file system issues efficiently by using tools that track performance metrics, analyze device load and latency, and trace operations, helping you pinpoint bottlenecks and optimize system performance in Red Hat Enterprise Linux 10 environments.
The following tools are available in Red Hat Enterprise Linux 10 for monitoring system performance and diagnosing performance problems related to I/O, file systems, and their configuration:
- The vmstat tool reports on processes, memory, paging, block I/O, interrupts, and CPU activity across the entire system. It can help administrators determine whether the I/O subsystem is responsible for any performance issues. If analysis with vmstat shows that the I/O subsystem is responsible for reduced performance, administrators can use the iostat tool to determine the responsible I/O device.
- iostat reports on I/O device load in your system. It is provided by the sysstat package.
- blktrace provides detailed information about how time is spent in the I/O subsystem. The companion utility blkparse reads the raw output from blktrace and produces a human readable summary of input and output operations recorded by blktrace.
- btt analyzes blktrace output and displays the amount of time that data spends in each area of the I/O stack, making it easier to spot bottlenecks in the I/O subsystem. This utility is provided as part of the blktrace package. Some of the important events tracked by the blktrace mechanism and analyzed by btt are:
  - Queuing of the I/O event (Q)
  - Dispatch of the I/O to the driver event (D)
  - Completion of I/O event (C)
- iowatcher can use the blktrace output to graph I/O over time. It focuses on the Logical Block Address (LBA) of disk I/O, throughput in megabytes per second, the number of seeks per second, and I/O operations per second. This can help to identify when you are hitting the operations-per-second limit of a device.
- BPF Compiler Collection (BCC) is a library that facilitates the creation of extended Berkeley Packet Filter (eBPF) programs. The eBPF programs are triggered on events, such as disk I/O, TCP connections, and process creations. The BCC tools are installed in the /usr/share/bcc/tools/ directory. The following bcc-tools help to analyze performance:
  - biolatency summarizes the latency in block device I/O (disk I/O) in a histogram. This allows the distribution to be studied, including two modes for device cache hits and for cache misses, and latency outliers.
  - biosnoop is a basic block I/O tracing tool for displaying each I/O event along with the issuing process ID and the I/O latency. Using this tool, you can investigate disk I/O performance issues.
  - biotop is used for block I/O operations in the kernel.
  - The filelife tool traces the stat() syscalls.
  - fileslower traces slow synchronous file reads and writes.
  - filetop displays file reads and writes by process.
  - ext4slower, nfsslower, and xfsslower are tools that show file system operations slower than a certain threshold, which defaults to 10ms.

  For more information, see Analyzing system performance with eBPF.
- bpftrace is a tracing language for eBPF used for analyzing performance issues. It also provides trace utilities like BCC for system observation, which is useful for investigating I/O performance issues.
- The following SystemTap scripts may be useful in diagnosing storage or file system performance problems:
  - disktop.stp: Checks the status of reading or writing disk every 5 seconds and outputs the top ten entries during that period.
  - iotime.stp: Prints the amount of time spent on read and write operations, and the number of bytes read and written.
  - traceio.stp: Prints the top ten executables based on cumulative I/O traffic observed, every second.
  - traceio2.stp: Prints the executable name and process identifier as reads and writes to the specified device occur.
  - inodewatch.stp: Prints the executable name and process identifier each time a read or write occurs to the specified inode on the specified major or minor device.
  - inodewatch2.stp: Prints the executable name, process identifier, and attributes each time the attributes are changed on the specified inode on the specified major or minor device.

For more information, see:

- The vmstat(8), iostat(1), blktrace(8), blkparse(1), btt(1), bpftrace, and iowatcher(1) man pages on your system.
- Analyzing system performance with eBPF
26.2. Available tuning options for formatting a file system
Some file system configuration decisions cannot be changed after the device is formatted. These include the size, block size, geometry, and external journals.
The following are the details of the options that are available before formatting a storage device:
Size
- Create an appropriately-sized file system for your workload. Smaller file systems require less time and memory for file system checks. However, if a file system is too small, its performance suffers from high fragmentation.

Block size
The block is the unit of work for the file system. The block size determines how much data can be stored in a single block, and therefore the smallest amount of data that is written or read at one time.

The default block size is appropriate for most use cases. However, your file system performs better and stores data more efficiently if the block size or the size of multiple blocks is the same as or slightly larger than the amount of data that is typically read or written at one time. A small file still uses an entire block. Files can be spread across multiple blocks, but this can create additional runtime overhead.

Additionally, some file systems are limited to a certain number of blocks, which in turn limits the maximum size of the file system. Block size is specified as part of the file system options when formatting a device with the mkfs command. The parameter that specifies the block size varies with the file system.

Geometry
File system geometry is concerned with the distribution of data across a file system. If your system uses striped storage, like RAID, you can improve performance by aligning data and metadata with the underlying storage geometry when you format the device.

Many devices export recommended geometry, which is then set automatically when the devices are formatted with a particular file system. If your device does not export these recommendations, or you want to change the recommended settings, you must specify geometry manually when you format the device with the mkfs command.

The parameters that specify file system geometry vary with the file system.
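For example, with XFS both settings can be given to mkfs.xfs at format time. The device, block size, and stripe values below are illustrative placeholders for a hypothetical RAID set with a 64 KiB stripe unit across 4 data disks:

# mkfs.xfs -b size=4096 -d su=64k,sw=4 /dev/sdb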
External journals
- Journaling file systems document the changes that will be made during a write operation in a journal file prior to the operation being executed. This reduces the likelihood that a storage device will become corrupted in the event of a system crash or power failure, and speeds up the recovery process.
Red Hat does not recommend using the external journals option.
Metadata-intensive workloads involve very frequent updates to the journal. A larger journal uses more memory, but reduces the frequency of write operations. Additionally, you can improve the seek time of a device with a metadata-intensive workload by placing its journal on dedicated storage that is as fast as, or faster than, the primary storage.
Ensure that external journals are reliable. Losing an external journal device causes file system corruption. External journals must be created at format time, with journal devices being specified at mount time.
26.3. Available tuning options for mounting a file system
You can explore key tuning options for mounting file systems, including atime, noatime, and read-ahead settings, to select mount options that balance performance and functionality for different workloads.
The following are the options available to most file systems and can be specified as the device is mounted:
Access Time
Every time a file is read, its metadata is updated with the time at which access occurred (atime). This involves additional write I/O. The relatime option is the default atime setting for most file systems.

However, if updating this metadata is time consuming, and if accurate access time data is not required, you can mount the file system with the noatime mount option. This disables updates to metadata when a file is read. It also enables nodiratime behavior, which disables updates to metadata when a directory is read.
Disabling atime updates by using the noatime mount option can break applications that rely on them, for example, backup programs.
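For example, to mount a file system without access-time updates (the device and mount point are placeholders), or add noatime to the options field of the corresponding /etc/fstab entry:

# mount -o noatime /dev/sdb1 /mnt/data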
Read-ahead
Read-ahead behavior speeds up file access by pre-fetching data that is likely to be needed soon and loading it into the page cache, where it can be retrieved more quickly than if it were on disk. The higher the read-ahead value, the further ahead the system pre-fetches data.

Red Hat Enterprise Linux attempts to set an appropriate read-ahead value based on what it detects about your file system. However, accurate detection is not always possible. For example, if a storage array presents itself to the system as a single LUN, the system detects the single LUN, and does not set the appropriate read-ahead value for an array.
Workloads that involve heavy streaming of sequential I/O often benefit from high read-ahead values. The storage-related tuned profiles provided with Red Hat Enterprise Linux raise the read-ahead value, as does using LVM striping, but these adjustments are not always sufficient for all workloads.
26.4. Discarding blocks that are unused
Regularly discarding blocks that are not in use by the file system is a recommended practice for both solid-state disks and thinly-provisioned storage.
For more information on the types of block discard operations, see Types of block discard operations.
26.5. Solid-state disks tuning considerations
Solid-state disks (SSD) use NAND flash chips rather than rotating magnetic platters to store persistent data. SSDs provide a constant access time for data across their full Logical Block Address range, and do not incur measurable seek costs like their rotating counterparts.
They are more expensive per gigabyte of storage space and have a lower storage density, but they also have lower latency and greater throughput than HDDs.
Performance generally degrades as the used blocks on an SSD approach the capacity of the disk. The degree of degradation varies by vendor, but all devices experience degradation in this circumstance. Enabling discard behavior can help to alleviate this degradation. For more information, see Types of block discard operations.
The default I/O scheduler and virtual memory options are suitable for use with SSDs. Consider the following factors when configuring settings that can affect SSD performance:
I/O Scheduler
Any I/O scheduler is expected to perform well with most SSDs. However, as with any other storage type, Red Hat recommends benchmarking to determine the optimal configuration for a given workload. When using SSDs, Red Hat advises changing the I/O scheduler only for benchmarking particular workloads. For instructions on how to switch between I/O schedulers, see the /usr/share/doc/kernel-version/Documentation/block/switching-sched.txt file.

For single queue HBAs, the default I/O scheduler is deadline. For multiple queue HBAs, the default I/O scheduler is none.

Virtual Memory
Like the I/O scheduler, the virtual memory (VM) subsystem requires no special tuning. Given the fast nature of I/O on SSDs, try turning down the vm_dirty_background_ratio and vm_dirty_ratio settings, as increased write-out activity does not usually have a negative impact on the latency of other operations on the disk. However, this tuning can generate more overall I/O, and is therefore not generally recommended without workload-specific testing. A short sysctl sketch is shown after these definitions.

Swap
An SSD can also be used as a swap device, and is likely to produce good page-out and page-in performance.
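For the Virtual Memory settings described above, the corresponding sysctl keys are vm.dirty_background_ratio and vm.dirty_ratio. As an illustration only (the values shown are not recommendations and should be validated for your workload):

# sysctl -w vm.dirty_background_ratio=5
# sysctl -w vm.dirty_ratio=20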
26.6. Generic block device tuning parameters
The generic tuning parameters listed here are available in the /sys/block/sdX/queue/ directory.
The following listed tuning parameters are separate from I/O scheduler tuning, and are applicable to all I/O schedulers:
add_random
- Some I/O events contribute to the entropy pool for /dev/random. This parameter can be set to 0 if the overhead of these contributions becomes measurable.

iostats
By default, iostats is enabled and the default value is 1. Setting the iostats value to 0 disables the gathering of I/O statistics for the device, which removes a small amount of overhead with the I/O path. Setting iostats to 0 might slightly improve performance for very high performance devices, such as certain NVMe solid-state storage devices. It is recommended to leave iostats enabled unless otherwise specified for the given storage model by the vendor.

If you disable iostats, the I/O statistics for the device are no longer present within the /proc/diskstats file. The content of the /proc/diskstats file is the source of I/O information for monitoring I/O tools, such as sar or iostat. Therefore, if you disable the iostats parameter for a device, the device is no longer present in the output of I/O monitoring tools.

max_sectors_kb
Specifies the maximum size of an I/O request in kilobytes. The default value is 512 KB. The minimum value for this parameter is determined by the logical block size of the storage device. The maximum value for this parameter is determined by the value of max_hw_sectors_kb.

Red Hat recommends that max_sectors_kb always be a multiple of the optimal I/O size and the internal erase block size. Use a value of logical_block_size for either parameter if they are zero or not specified by the storage device.

nomerges
- Most workloads benefit from request merging. However, disabling merges can be useful for debugging purposes. By default, the nomerges parameter is set to 0, which enables merging. To disable simple one-hit merging, set nomerges to 1. To disable all types of merging, set nomerges to 2.

nr_requests
- It is the maximum allowed number of queued I/O requests. If the current I/O scheduler is none, this number can only be reduced; otherwise the number can be increased or reduced.

optimal_io_size
- Some storage devices report an optimal I/O size through this parameter. If this value is reported, Red Hat recommends that applications issue I/O aligned to and in multiples of the optimal I/O size wherever possible.

read_ahead_kb
Defines the maximum number of kilobytes that the operating system may read ahead during a sequential read operation. As a result, the necessary information is already present within the kernel page cache for the next sequential read, which improves read I/O performance.

Device mappers often benefit from a high read_ahead_kb value. 128 KB for each device to be mapped is a good starting point, but increasing the read_ahead_kb value up to the request queue's max_sectors_kb of the disk might improve performance in application environments where sequential reading of large files takes place.

rotational
- Some solid-state disks do not correctly advertise their solid-state status, and are mounted as traditional rotational disks. Manually set the rotational value to 0 to disable unnecessary seek-reducing logic in the scheduler.

rq_affinity
- The default value of rq_affinity is 1. It completes the I/O operations on one CPU core, which is in the same CPU group as the issuing CPU core. To perform completions only on the processor that issued the I/O request, set rq_affinity to 2. To disable the mentioned two abilities, set it to 0.

scheduler
- To set the scheduler or scheduler preference order for a particular storage device, edit the /sys/block/devname/queue/scheduler file, where devname is the name of the device you want to configure.
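For example, assuming the device is sda (a placeholder), the active scheduler and read-ahead value can be inspected and changed as follows; the set of available schedulers and the values shown are illustrative and depend on your kernel:

# cat /sys/block/sda/queue/scheduler
[none] mq-deadline kyber bfq
# echo mq-deadline > /sys/block/sda/queue/scheduler
# echo 4096 > /sys/block/sda/queue/read_ahead_kb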
Chapter 27. Setting up Stratis file systems
Stratis is a local storage-management solution for Red Hat Enterprise Linux. It is focused on simplicity and ease of use, and gives you access to advanced storage features.
Stratis runs as a service to manage pools of physical storage devices, simplifying local storage management with ease of use while helping you set up and manage complex storage configurations.
Stratis can help you with:
- Initial configuration of storage
- Making changes later
- Using advanced storage features
The central concept of Stratis is a storage pool. This pool is created from one or more local disks or partitions, and file systems are created from the pool. The pool enables features such as:
- File system snapshots
- Thin provisioning
- Caching
- Encryption
27.1. Components of a Stratis file system
Stratis consists of three main components that work together to provide advanced storage management: block devices, storage pools, and file systems. These components enable thin provisioning, snapshots, and automatic space management.
Externally, Stratis presents the following file system components on the command line and through the API:
blockdev
- Block devices, such as disks or disk partitions.

pool
Composed of one or more block devices.

A pool has a fixed total size, equal to the size of the block devices.

The pool contains most Stratis layers, such as the non-volatile data cache using the dm-cache target.

Stratis creates a /dev/stratis/my-pool/ directory for each pool. This directory contains links to devices that represent Stratis file systems in the pool.

filesystem
Each pool can contain zero or more file systems. A pool containing file systems can store any number of files.
File systems are thinly provisioned and do not have a fixed total size. The actual size of a file system grows with the data stored on it. If the size of the data approaches the virtual size of the file system, Stratis grows the thin volume and the file system automatically.
The file systems are formatted with the XFS file system. Stratis utilizes the XFS file system for its storage, and provisions a Stratis volume.
A Stratis volume will be referred to as a “Stratis filesystem” throughout the rest of the documentation to retain alignment with the command line interface.
Stratis tracks information about file systems that it created which XFS is not aware of, and changes made using XFS do not automatically create updates in Stratis. Users must not reformat or reconfigure XFS file systems that are managed by Stratis.
Stratis creates links to file systems at the /dev/stratis/my-pool/my-fs path.
Stratis uses many Device Mapper devices, which appear in dmsetup listings and the /proc/partitions file. Similarly, the lsblk command output reflects the internal workings and layers of Stratis.
27.2. Block devices compatible with Stratis
Stratis supports various block devices including physical drives, logical volumes, and networked storage. You can build Stratis pools from different storage types while maintaining advanced features like thin provisioning and snapshots.
The following types of storage devices can be used with Stratis.
Supported devices
Stratis pools have been tested to work on these types of block devices:
- LUKS
- LVM logical volumes
- MD RAID
- DM Multipath
- iSCSI
- HDDs and SSDs
- NVMe devices
27.3. Installing Stratis
Install the Stratis storage management system to enable advanced storage features such as thin provisioning, file system snapshots, and flexible pool-based storage management on your RHEL system.
Procedure
Install packages that provide the Stratis service and command-line utilities:
# dnf install stratisd stratis-cli

To start the stratisd service and enable it to launch at boot:

# systemctl enable --now stratisd
Verification
Verify that the stratisd service is enabled and is running:
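For example, one way to check both conditions:

# systemctl is-enabled stratisd
enabled
# systemctl is-active stratisd
active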
27.4. Creating an unencrypted Stratis pool
You can create an unencrypted Stratis pool from one or more block devices.
Prerequisites
- Stratis is installed and the stratisd service is running. For more information, see Installing Stratis.
- The block device on which you are creating a Stratis pool is not in use, unmounted, and is at least 1 GB in space.
On the IBM Z architecture, the /dev/dasd* block devices must be partitioned. Use the partition device for creating the Stratis pool.

For information about partitioning DASD devices, see Configuring a Linux instance on 64-bit IBM Z.
You can only encrypt a Stratis pool during creation, and not later.
Procedure
Erase any file system, partition table, or RAID signatures that exist on each block device that you want to use in the Stratis pool:
# wipefs --all block-device

The block-device value is the path to the block device; for example, /dev/sdb.

Create the new unencrypted Stratis pool on the selected block device:

# stratis pool create my-pool block-device

The block-device value is the path to an empty or wiped block device.

You can also specify multiple block devices on a single line by using the following command:

# stratis pool create my-pool block-device-1 block-device-2
Verification
Verify that the new Stratis pool was created:
# stratis pool list
27.5. Creating an unencrypted Stratis pool by using the web console
You can use the web console to create an unencrypted Stratis pool from one or more block devices.
Prerequisites
You have installed the RHEL 10 web console.
For instructions, see Installing and enabling the web console.
- Stratis is installed and the stratisd service is running. For more information, see Installing Stratis.
- The block device on which you are creating a Stratis pool is not in use, unmounted, and is at least 1 GB in space.
You cannot encrypt an unencrypted Stratis pool after it is created.
Procedure
- Log in to the RHEL 10 web console.
- Click .
- In the Storage table, click the menu button and select Create Stratis pool.
- In the Name field, enter a name for the Stratis pool.
- Select the Block devices from which you want to create the Stratis pool.
- Optional: If you want to specify the maximum size for each file system that is created in the pool, select Manage filesystem sizes.
- Click .
Verification
- Go to the Storage section and verify that you can see the new Stratis pool in the Devices table.
27.6. Creating an encrypted Stratis pool using a key in the kernel keyring
To secure your data, you can use the kernel keyring to create an encrypted Stratis pool from one or more block devices.
When you create an encrypted Stratis pool this way, the kernel keyring is used as the primary encryption mechanism. After subsequent system reboots this kernel keyring is used to unlock the encrypted Stratis pool.
When creating an encrypted Stratis pool from one or more block devices, note the following:
- Each block device is encrypted using the cryptsetup library and implements the LUKS2 format.
- The block devices that comprise a Stratis pool must be either all encrypted or all unencrypted. It is not possible to have both encrypted and unencrypted block devices in the same Stratis pool.
- Block devices added to the data cache of an encrypted Stratis pool are automatically encrypted.
Prerequisites
- Stratis v2.1.0 or later is installed and the stratisd service is running. For more information, see Installing Stratis.
- The block device on which you are creating a Stratis pool is not in use, unmounted, and is at least 1 GB in space.

On the IBM Z architecture, the /dev/dasd* block devices must be partitioned. Use the partition device for creating the Stratis pool.

For information about partitioning DASD devices, see Configuring a Linux instance on 64-bit IBM Z.
Procedure
Erase any file system, partition table, or RAID signatures that exist on each block device that you want to use in the Stratis pool:
# wipefs --all block-device

The block-device value is the path to the block device; for example, /dev/sdb.

If you have not set a key already, run the following command and follow the prompts to create a key set to use for the encryption:

# stratis key set --capture-key key-description

The key-description is a reference to the key that gets created in the kernel keyring. You will be prompted to enter a key value at the command line. You can also place the key value in a file and use the --keyfile-path option instead of the --capture-key option.

Create the encrypted Stratis pool and specify the key description to use for the encryption:

# stratis pool create --key-desc key-description my-pool block-device

key-description
- References the key that exists in the kernel keyring, which you created in the previous step.

my-pool
- Specifies the name of the new Stratis pool.

block-device
- Specifies the path to an empty or wiped block device.

You can also specify multiple block devices on a single line by using the following command:

# stratis pool create --key-desc key-description my-pool block-device-1 block-device-2
Verification
Verify that the new Stratis pool was created:
# stratis pool list
27.7. Creating an encrypted Stratis pool using Clevis
Starting with Stratis 2.4.0, you can create an encrypted pool using the Clevis mechanism by specifying Clevis options at the command line.
Prerequisites
- Stratis v2.3.0 or later is installed and the stratisd service is running. For more information, see Installing Stratis.
- An encrypted Stratis pool is created. For more information, see Creating an encrypted Stratis pool using a key in the kernel keyring.
- Your system supports TPM 2.0.
Procedure
Erase any file system, partition table, or RAID signatures that exist on each block device that you want to use in the Stratis pool:
# wipefs --all block-device

The block-device value is the path to the block device; for example, /dev/sdb.

Create the encrypted Stratis pool and specify the Clevis mechanism to use for the encryption:

# stratis pool create --clevis tpm2 my-pool block-device

tpm2
- Specifies the Clevis mechanism to use.

my-pool
- Specifies the name of the new Stratis pool.

block-device
- Specifies the path to an empty or wiped block device.

Alternatively, use the Clevis tang server mechanism by using the following command:

# stratis pool create --clevis tang --tang-url my-url --thumbprint thumbprint my-pool block-device

tang
- Specifies the Clevis mechanism to use.

my-url
- Specifies the URL of the tang server.

thumbprint
- References the thumbprint of the tang server.

You can also specify multiple block devices on a single line by using the following command:

# stratis pool create --clevis tpm2 my-pool block-device-1 block-device-2
Verification
Verify that the new Stratis pool was created:
# stratis pool list

Note
You can also create an encrypted pool using both Clevis and keyring mechanisms by specifying both Clevis and keyring options at the same time during pool creation.
27.8. Creating an encrypted Stratis pool by using the storage RHEL system role
To secure your data, you can create an encrypted Stratis pool with the storage RHEL system role. In addition to a passphrase, you can use Clevis and Tang or TPM protection as an encryption method.
You can configure Stratis encryption only on the entire pool.
Prerequisites
- You have prepared the control node and the managed nodes.
- You are logged in to the control node as a user who can run playbooks on the managed nodes.
- The account you use to connect to the managed nodes has sudo permissions on them.
- You can connect to the Tang server. For more information, see Deploying a Tang server with SELinux in enforcing mode.
Procedure
Store your sensitive variables in an encrypted file:
Create the vault:
$ ansible-vault create ~/vault.yml
New Vault password: <vault_password>
Confirm New Vault password: <vault_password>

After the ansible-vault create command opens an editor, enter the sensitive data in the <key>: <value> format:

luks_password: <password>

- Save the changes, and close the editor. Ansible encrypts the data in the vault.
Create a playbook file, for example, ~/playbook.yml, with the following content:
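The following is a minimal sketch only; the host name, pool name, disk, and Tang server URL are placeholders, and the exact variable layout should be verified against the role documentation on the control node:

---
- name: Manage local storage
  hosts: managed-node-01.example.com
  vars_files:
    - ~/vault.yml
  tasks:
    - name: Create an encrypted Stratis pool protected by Tang
      ansible.builtin.include_role:
        name: rhel-system-roles.storage
      vars:
        storage_pools:
          - name: my-pool
            type: stratis
            disks:
              - sdb
            encryption: true
            encryption_password: "{{ luks_password }}"
            encryption_clevis_pin: tang
            encryption_tang_url: tang-server.example.com:7500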
The settings specified in the example playbook include the following:

encryption_password
- Password or passphrase used to unlock the LUKS volumes.

encryption_clevis_pin
- Clevis method that you can use to encrypt the created pool. You can use tang and tpm2.

encryption_tang_url
- URL of the Tang server.

For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.storage/README.md file on the control node.

Validate the playbook syntax:
$ ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml

Note that this command only validates the syntax and does not protect against a wrong but valid configuration.
Run the playbook:
$ ansible-playbook --ask-vault-pass ~/playbook.yml
Verification
Verify that the pool was created with Clevis and Tang configured:
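For example, as a minimal check on the managed node, confirm that the pool exists and is encrypted; this sketch does not display the Clevis configuration itself:

# stratis pool list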
27.9. Creating an encrypted Stratis pool by using the web console
To secure your data, you can use the web console to create an encrypted Stratis pool from one or more block devices.
When creating an encrypted Stratis pool from one or more block devices, note the following:
- Each block device is encrypted using the cryptsetup library and implements the LUKS2 format.
- Each Stratis pool can either have a unique key or share the same key with other pools. These keys are stored in the kernel keyring.
- The block devices that comprise a Stratis pool must be either all encrypted or all unencrypted. It is not possible to have both encrypted and unencrypted block devices in the same Stratis pool.
- Block devices added to the data tier of an encrypted Stratis pool are automatically encrypted.
Prerequisites
You have installed the RHEL 10 web console.
For instructions, see Installing and enabling the web console.
- Stratis v2.1.0 or later is installed and the stratisd service is running.
- The block device on which you are creating a Stratis pool is not in use, unmounted, and is at least 1 GB in space.
Procedure
- Log in to the RHEL 10 web console.
- Click .
- In the Storage table, click the menu button and select Create Stratis pool.
- In the Name field, enter a name for the Stratis pool.
- Select the Block devices from which you want to create the Stratis pool.
Select the type of encryption. You can use a passphrase, a Tang keyserver, or both:
Passphrase:
- Enter a passphrase.
- Confirm the passphrase.
Tang keyserver:
- Enter the keyserver address. For more information, see Deploying a Tang server with SELinux in enforcing mode.
- Optional: If you want to specify the maximum size for each file system that is created in the pool, select Manage filesystem sizes.
- Click .
Verification
- Go to the Storage section and verify that you can see the new Stratis pool in the Devices table.
27.10. Renaming a Stratis pool by using the web console
You can use the web console to rename an existing Stratis pool.
Prerequisites
You have installed the RHEL 10 web console.
For instructions, see Installing and enabling the web console.
Stratis is installed and the stratisd service is running.

The web console detects and installs Stratis by default. However, for manually installing Stratis, see Installing Stratis.
- A Stratis pool is created.
Procedure
- Log in to the RHEL 10 web console.
- Click .
- In the Storage table, click the Stratis pool you want to rename.
- On the Stratis pool page, click next to the Name field.
- In the Rename Stratis pool dialog box, enter a new name.
- Click .
27.11. Setting overprovisioning mode in Stratis file system
By default, every Stratis pool is overprovisioned, meaning that the logical file system size can exceed the physically allocated space. Stratis monitors the file system usage, and automatically increases the allocation by using available space when needed. However, if all the available space is already allocated and the pool is full, no additional space can be assigned to the file system.
If the file system runs out of space, users might lose data. For applications where the risk of data loss outweighs the benefits of overprovisioning, this feature can be disabled.
Stratis continuously monitors the pool usage and reports the values using the D-Bus API. Storage administrators must monitor these values and add devices to the pool as needed to prevent it from reaching capacity.
Prerequisites
- Stratis is installed. For more information, see Installing Stratis.
Procedure
To set up the pool correctly, you have two possibilities:
Create a pool from one or more block devices to make the pool fully provisioned at the time of creation:
# stratis pool create --no-overprovision pool-name /dev/sdb

By using the --no-overprovision option, the pool cannot allocate more logical space than the actually available physical space.

Set overprovisioning mode in the existing pool:

# stratis pool overprovision pool-name <yes|no>

If set to "yes", you enable overprovisioning for the pool. This means that the sum of the logical sizes of the Stratis file systems, supported by the pool, can exceed the amount of available data space. If the pool is overprovisioned and the sum of the logical sizes of all the file systems exceeds the space available on the pool, then the system cannot turn off overprovisioning and returns an error.
Verification
View the full list of Stratis pools:
# stratis pool list
Name       Total Physical                    Properties    UUID                                    Alerts
pool-name  1.42 TiB / 23.96 MiB / 1.42 TiB   ~Ca,~Cr,~Op   cb7cb4d8-9322-4ac4-a6fd-eb7ae9e1e540

- Check if there is an indication of the pool overprovisioning mode flag in the stratis pool list output. The "~" is a math symbol for "NOT", so ~Op means no-overprovisioning.

Optional: Check overprovisioning on a specific pool:
27.12. Binding a Stratis pool to NBDE
Binding an encrypted Stratis pool to Network Bound Disk Encryption (NBDE) requires a Tang server. When a system containing the Stratis pool reboots, it connects with the Tang server to automatically unlock the encrypted pool without you having to provide the kernel keyring description.
Binding a Stratis pool to a supplementary Clevis encryption mechanism does not remove the primary kernel keyring encryption.
Prerequisites
- Stratis v2.3.0 or later is installed and the stratisd service is running. For more information, see Installing Stratis.
- An encrypted Stratis pool is created, and you have the key description of the key that was used for the encryption. For more information, see Creating an encrypted Stratis pool using a key in the kernel keyring.
- You can connect to the Tang server. For more information, see Deploying a Tang server with SELinux in enforcing mode.
Procedure
Bind an encrypted Stratis pool to NBDE:
# stratis pool bind nbde --trust-url my-pool tang-server

my-pool
- Specifies the name of the encrypted Stratis pool.

tang-server
- Specifies the IP address or URL of the Tang server.
27.13. Binding a Stratis pool to TPM
When you bind an encrypted Stratis pool to the Trusted Platform Module (TPM) 2.0, the pool is automatically unlocked when the system containing it reboots, without you having to provide the kernel keyring description.
Prerequisites
- Stratis v2.3.0 or later is installed and the stratisd service is running. For more information, see Installing Stratis.
- An encrypted Stratis pool is created, and you have the key description of the key that was used for the encryption. For more information, see Creating an encrypted Stratis pool using a key in the kernel keyring.
- Your system supports TPM 2.0.
Procedure
Bind an encrypted Stratis pool to TPM:
# stratis pool bind tpm my-pool

my-pool
- Specifies the name of the encrypted Stratis pool.

key-description
- References the key that exists in the kernel keyring, which was generated when you created the encrypted Stratis pool.
27.14. Unlocking an encrypted Stratis pool with kernel keyring
After a system reboot, your encrypted Stratis pool or the block devices that comprise it might not be visible. You can unlock the pool using the kernel keyring that was used to encrypt the pool.
Prerequisites
- Stratis v2.1.0 is installed and the stratisd service is running. For more information, see Installing Stratis.
- An encrypted Stratis pool is created. For more information, see Creating an encrypted Stratis pool using a key in the kernel keyring.
Procedure
Re-create the key set using the same key description that was used previously:
# stratis key set --capture-key key-description

The key-description references the key that exists in the kernel keyring, which was generated when you created the encrypted Stratis pool.

Verify that the Stratis pool is visible:

# stratis pool list
27.15. Unbinding a Stratis pool from supplementary encryption
When you unbind an encrypted Stratis pool from a supported supplementary encryption mechanism, the primary kernel keyring encryption remains in place. This is not true for pools that are created with Clevis encryption from the start.
Prerequisites
- Stratis v2.3.0 or later is installed on your system. For more information, see Installing Stratis.
- An encrypted Stratis pool is created. For more information, see Creating an encrypted Stratis pool using a key in the kernel keyring.
- The encrypted Stratis pool is bound to a supported supplementary encryption mechanism.
Procedure
Unbind an encrypted Stratis pool from a supplementary encryption mechanism:
# stratis pool unbind clevis my-pool

The my-pool value specifies the name of the Stratis pool you want to unbind.
27.16. Starting and stopping Stratis pool
You can start and stop Stratis pools. This gives you the option to disassemble or bring down all the objects that were used to construct the pool, such as file systems, cache devices, thin pool, and encrypted devices. Note that if the pool actively uses any device or file system, it might issue a warning and not be able to stop.
The stopped state is recorded in the pool’s metadata. These pools do not start on the following boot, until the pool receives a start command.
Prerequisites
- Stratis is installed and the stratisd service is running. For more information, see Installing Stratis.
- An unencrypted or an encrypted Stratis pool is created. For more information, see Creating an unencrypted Stratis pool or Creating an encrypted Stratis pool using a key in the kernel keyring.
Procedure
Use the following command to stop the Stratis pool. This tears down the storage stack but leaves all metadata intact:
# stratis pool stop --name pool-name

Use the following command to start the Stratis pool. The --unlock-method option specifies the method of unlocking the pool if it is encrypted:

# stratis pool start --unlock-method <keyring|clevis> --name pool-name

Note
You can start the pool by using either the pool name or the pool UUID.
Verification
Use the following command to list all active pools on the system:
# stratis pool list

Use the following command to list all the stopped pools:

# stratis pool list --stopped

Use the following command to view detailed information for a stopped pool. If the UUID is specified, the command prints detailed information about the pool corresponding to the UUID:

# stratis pool list --stopped --uuid UUID
27.17. Creating a Stratis file system
Create a Stratis file system to leverage advanced storage capabilities including thin provisioning, automatic file system growth, and integrated snapshot support within a storage pool.
Prerequisites
- Stratis is installed and the stratisd service is running. For more information, see Installing Stratis.
- A Stratis pool is created. For more information, see Creating an unencrypted Stratis pool or Creating an encrypted Stratis pool using a key in the kernel keyring.
Procedure
Create a Stratis file system on a pool:
# stratis filesystem create --size number-and-unit my-pool my-fs

number-and-unit
- Specifies the size of a file system. The specification format must follow the standard size specification format for input, that is B, KiB, MiB, GiB, TiB, or PiB.

my-pool
- Specifies the name of the Stratis pool.

my-fs
- Specifies an arbitrary name for the file system.

For example, create a Stratis file system:
For example, create a Stratis file system:
# stratis filesystem create --size 10GiB pool1 filesystem1

Set a size limit of a file system:

# stratis filesystem create --size number-and-unit --size-limit number-and-unit my-pool my-fs

Note
This option is available starting with Stratis 3.6.0.

You can also remove the size limit later, if needed:

# stratis filesystem unset-size-limit my-pool my-fs
Verification
List file systems within the pool to check if the Stratis file system is created:
# stratis fs list my-pool
27.18. Mounting a Stratis file system
Mount an existing Stratis file system to access the content.
Prerequisites
- Stratis is installed and the stratisd service is running. For more information, see Installing Stratis.
- A Stratis file system is created. For more information, see Creating a Stratis file system.
Procedure
To mount the file system, use the entries that Stratis maintains in the /dev/stratis/ directory:

# mount /dev/stratis/my-pool/my-fs mount-point

The file system is now mounted on the mount-point directory and ready to use.

Note
Unmount all file systems belonging to a pool before stopping it. The pool will not stop if any file system is still mounted.
27.19. Configuring mounting a Stratis file system at boot
You can configure a Stratis filesystem to mount at boot by setting up a mechanism that starts the pool of the file system correctly. Without this, the mount operation fails. Stratis provides two systemd services to support this process.
Stratis does not support managing the root filesystem currently. These instructions apply only to non-root filesystems.
Prerequisites
- A Stratis file system, for example <my-fs> is created on a Stratis pool, for example <my-pool>. For more information, see Creating a Stratis file system.
- A mountpoint directory is created, for example <mount-point>.
- The UUID of the filesystem’s pool is determined, for example <pool-uuid>.
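If you still need to look up the UUID, one way is to list the pools; the UUID column contains the value to use:

# stratis pool list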
Procedure
As root, edit the /etc/fstab file.

Add x-systemd.requires=stratis-fstab-setup@<pool-uuid>.service to the mount options of the filesystem's /etc/fstab entry:

/dev/stratis/<my-pool>/<my-fs> <mount-point> xfs defaults,x-systemd.requires=stratis-fstab-setup@<pool-uuid>.service

Add x-systemd.requires=stratis-fstab-setup-with-network@<pool-uuid>.service and _netdev if the pool is encrypted using NBDE (Network Bound Disk Encryption) via Clevis:

/dev/stratis/<my-pool>/<my-fs> <mount-point> xfs defaults,x-systemd.requires=stratis-fstab-setup-with-network@<pool-uuid>.service,_netdev

Replace:
- <my-pool> with the name of the Stratis pool that contains the file system.
- <my-fs> with the name of the Stratis file system created within the pool.
- <mount-point> with the name of the directory where you want to mount the file system.
<pool-uuid> with the UUID of the file system’s pool.
Important
Mounting a Stratis filesystem from an encrypted Stratis pool on boot can cause the boot process to stop until a password is provided. If the pool is encrypted using any unattended mechanism, for example, NBDE or TPM2, the Stratis pool will be unlocked automatically. If not, the user may need to enter a password in the console to unlock the filesystem's pool.
27.20. Creating and configuring a Stratis file system using the web console
You can use the web console to create a file system on an existing Stratis pool.
Prerequisites
You have installed the RHEL 10 web console.
For instructions, see Installing and enabling the web console.
- The stratisd service is running.
- A Stratis pool is created.
Procedure
- Log in to the RHEL 10 web console.
- Click .
- Click the Stratis pool on which you want to create a file system.
- On the Stratis pool page, scroll to the Stratis filesystems section and click .
- Enter a name for the file system.
- Enter a mount point for the file system.
- Select the mount option.
- In the At boot drop-down menu, select when you want to mount your file system.
Create the file system:
- If you want to create and mount the file system, click .
- If you want to only create the file system, click .
Verification
- The new file system is visible on the Stratis pool page under the Stratis filesystems tab.
Chapter 28. Extending a Stratis pool with additional block devices
You can attach additional block devices to a Stratis pool to provide more storage capacity for Stratis file systems. You can do it manually or by using the web console.
28.1. Adding block devices to a Stratis pool
You can add one or more block devices to a Stratis pool.
Prerequisites
- Stratis is installed and the stratisd service is running. For more information, see Installing Stratis.
- The block device on which you are creating a Stratis pool is not in use, unmounted, and is at least 1 GB in space.
Procedure
To add one or more block devices to the pool, use:
# stratis pool add-data my-pool device-1 device-2 device-n
28.2. Adding a block device to a Stratis pool by using the web console
You can use the web console to add a block device to an existing Stratis pool. You can also add caches as a block device.
Prerequisites
You have installed the RHEL 10 web console.
For instructions, see Installing and enabling the web console.
- The stratisd service is running.
- A Stratis pool is created.
- The block device on which you are creating a Stratis pool is not in use, unmounted, and is at least 1 GB in space.
Procedure
- Log in to the RHEL 10 web console.
- Click .
- In the Storage table, click the Stratis pool to which you want to add a block device.
- On the Stratis pool page, click and select the Tier where you want to add a block device as data or cache.
- If you are adding the block device to a Stratis pool that is encrypted with a passphrase, enter the passphrase.
- Under Block devices, select the devices you want to add to the pool.
- Click .
Chapter 29. Monitoring Stratis file systems
As a Stratis user, you can view information about Stratis file systems on your system to monitor their state and free space.
29.1. Displaying information about Stratis file systems
You can list statistics about your Stratis file systems, such as the total, used, and free size of the file systems and block devices belonging to a pool, by using the stratis utility.
The size of an XFS file system is the total amount of user data that it can manage. On a thinly provisioned Stratis pool, a Stratis file system can appear to have a size that is larger than the space allocated to it. The XFS file system is sized to match this apparent size, which means it is usually larger than the allocated space. Standard Linux utilities, such as df, report the size of the XFS file system. This value generally overestimates the space required by the XFS file system and hence the space allocated for it by Stratis.
Regularly monitor the usage of your overprovisioned Stratis pools. If file system usage approaches the allocated space, Stratis automatically increases the allocation by using available space in the pool. However, if all the available space is already allocated and the pool is full, no additional space can be assigned and the file system can run out of space. This can lead to data loss in the applications that use the Stratis file system.
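As a quick illustration of this difference, you can compare what df reports for a mounted Stratis file system with the usage that stratis reports. The mount point below is an example; the pool name matches the examples used elsewhere in this chapter:
# df -h /mnt/my-fs
# stratis filesystem list my-pool
df shows the apparent (thin) size of the XFS file system, while the stratis output shows how much space is actually used within the pool.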
Prerequisites
- Stratis is installed and the stratisd service is running. For more information, see Installing Stratis.
Procedure
To display information about all block devices used for Stratis on your system:
# stratis blockdev
Pool Name   Device Node   Physical Size   Tier   UUID
my-pool     /dev/sdb      9.10 TiB        Data   ec9fb718-f83c-11ef-861e-7446a09dccfb
To display information about all Stratis pools on your system:
# stratis pool
Name      Total/Used/Free                    Properties   UUID                                   Alerts
my-pool   8.00 GiB / 800.99 MiB / 7.22 GiB   -Ca,-Cr,Op   e22772c2-afe9-446c-9be5-2f78f682284e   WS001
To display information about all Stratis file systems on your system:
# stratis filesystem
Pool     Filesystem   Total/Used/Free/Limit                  Device                     UUID
Spool1   sfs1         1 TiB / 546 MiB / 1023.47 GiB / None   /dev/stratis/spool1/sfs1   223265f5-8f17-4cc2-bf12-c3e9e71ff7bf
You can also display detailed information about a specific Stratis file system by specifying the file system name or UUID.
29.2. Viewing a Stratis pool by using the web console
You can use the web console to view an existing Stratis pool and the file systems it contains.
Prerequisites
- You have installed the RHEL 10 web console. For instructions, see Installing and enabling the web console.
- The stratisd service is running.
- You have an existing Stratis pool.
Procedure
- Log in to the RHEL 10 web console.
- Click Storage.
In the Storage table, click the Stratis pool you want to view.
The Stratis pool page displays all the information about the pool and the file systems that you created in the pool.
Chapter 30. Using snapshots on Stratis file systems
You can use snapshots on Stratis file systems to capture file system state at arbitrary times and restore it in the future.
30.1. Characteristics of Stratis snapshots
Stratis snapshots are regular file systems created as point-in-time copies of other Stratis file systems. They operate independently of their source file system and can be used for backup, testing, or data recovery purposes.
The current snapshot implementation in Stratis is characterized by the following:
- A snapshot of a file system is another file system.
- A snapshot and its origin are not linked in lifetime. A snapshotted file system can live longer than the file system it was created from.
- A file system does not have to be mounted to create a snapshot from it.
- Each snapshot uses around half a gigabyte of actual backing storage, which is needed for the XFS log.
30.2. Creating a Stratis snapshot
You can create a Stratis file system as a snapshot of an existing Stratis file system.
Prerequisites
- Stratis is installed and the stratisd service is running. For more information, see Installing Stratis.
- You have created a Stratis file system. For more information, see Creating a Stratis file system.
Procedure
Create a Stratis snapshot:
# stratis fs snapshot my-pool my-fs my-fs-snapshot
A snapshot is a first-class Stratis file system. You can create multiple Stratis snapshots, including snapshots of a single origin file system and snapshots of another snapshot file system. If a file system is a snapshot, its origin field displays the UUID of its origin file system in the detailed file system listing.
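For example, because a snapshot is itself a file system, you can snapshot it again. The names follow the examples in this chapter; the new snapshot name is illustrative:
# stratis fs snapshot my-pool my-fs-snapshot my-fs-snapshot-2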
30.3. Accessing the content of a Stratis snapshot
You can mount a snapshot of a Stratis file system to make it accessible for read and write operations.
Prerequisites
- Stratis is installed and the stratisd service is running. For more information, see Installing Stratis.
- You have created a Stratis snapshot. For more information, see Creating a Stratis snapshot.
Procedure
To access the snapshot, mount it as a regular file system from the /dev/stratis/my-pool/ directory:
# mount /dev/stratis/my-pool/my-fs-snapshot mount-point
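For example, to make the snapshot available under a dedicated directory (the path is an example):
# mkdir -p /mnt/my-fs-snapshot
# mount /dev/stratis/my-pool/my-fs-snapshot /mnt/my-fs-snapshot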
30.4. Reverting a Stratis file system to a previous snapshot
You can revert the content of a Stratis file system to the state captured in a Stratis snapshot.
Prerequisites
- Stratis is installed and the stratisd service is running. For more information, see Installing Stratis.
- You have created a Stratis snapshot. For more information, see Creating a Stratis snapshot.
Procedure
Optional: Back up the current state of the file system to be able to access it later:
# stratis filesystem snapshot my-pool my-fs my-fs-backup
Schedule a revert of your file system to the previously taken snapshot:
# stratis filesystem schedule-revert my-pool my-fs-snapshot
Optional: Check that the revert is scheduled successfully.
Note: It is not possible to schedule more than one revert operation onto the same origin file system. Also, if you try to destroy either the origin file system or the snapshot to which the revert is scheduled, the destroy operation fails.
You can also cancel the revert operation any time before you restart the pool:
# stratis filesystem cancel-revert my-pool my-fs-snapshot
You can then check that the cancellation succeeded.
If not cancelled, the scheduled revert proceeds when you restart the pool:
# stratis pool stop --name my-pool
# stratis pool start --name my-pool
Verification
List the file systems belonging to the pool:
# stratis filesystem list my-pool
The snapshot my-fs-snapshot no longer appears in the list of file systems in the pool after the revert completes. The content of the file system my-fs is now identical to the state that was captured in my-fs-snapshot.
30.5. Removing a Stratis snapshot
You can remove a Stratis snapshot from a pool. Data on the snapshot are lost.
Prerequisites
- Stratis is installed and the stratisd service is running. For more information, see Installing Stratis.
- You have created a Stratis snapshot. For more information, see Creating a Stratis snapshot.
Procedure
Unmount the snapshot:
# umount /dev/stratis/my-pool/my-fs-snapshot
Destroy the snapshot:
# stratis filesystem destroy my-pool my-fs-snapshot
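Optionally, verify that the snapshot is gone by listing the remaining file systems in the pool:
# stratis filesystem list my-pool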
Chapter 31. Removing Stratis file systems
You can remove an existing Stratis file system or pool. Once a Stratis file system or pool is removed, it cannot be recovered.
31.1. Removing a Stratis file system
You can remove an existing Stratis file system. Data stored on it are lost.
Prerequisites
- Stratis is installed and the stratisd service is running. For more information, see Installing Stratis.
- You have created a Stratis file system. For more information, see Creating a Stratis file system.
Procedure
Unmount the file system:
# umount /dev/stratis/my-pool/my-fs
Destroy the file system:
# stratis filesystem destroy my-pool my-fs
Verification
Verify that the file system no longer exists:
# stratis filesystem list my-pool
31.2. Deleting a file system from a Stratis pool by using the web console
You can use the web console to delete a file system from an existing Stratis pool.
Deleting a Stratis pool file system erases all the data it contains.
Prerequisites
- You have installed the RHEL 10 web console. For instructions, see Installing and enabling the web console.
- Stratis is installed and the stratisd service is running. The web console detects and installs Stratis by default. However, to install Stratis manually, see Installing Stratis.
- You have an existing Stratis pool with a file system created on it.
Procedure
- Log in to the RHEL 10 web console.
- Click Storage.
- In the Storage table, click the Stratis pool from which you want to delete a file system.
- On the Stratis pool page, scroll to the Stratis filesystems section and click the menu button for the file system you want to delete.
- From the drop-down menu, select Delete.
- In the Confirm deletion dialog box, click Delete.
31.3. Removing a Stratis pool
You can remove an existing Stratis pool. Data stored on it are lost.
Prerequisites
- Stratis is installed and the stratisd service is running. For more information, see Installing Stratis.
- You have created a Stratis pool:
- To create an unencrypted pool, see Creating an unencrypted Stratis pool.
- To create an encrypted pool, see Creating an encrypted Stratis pool using a key in the kernel keyring.
Procedure
List file systems on the pool:
# stratis filesystem list my-pool
Unmount all file systems on the pool:
# umount /dev/stratis/my-pool/my-fs-1 \
    /dev/stratis/my-pool/my-fs-2 \
    /dev/stratis/my-pool/my-fs-n
Destroy the file systems:
# stratis filesystem destroy my-pool my-fs-1 my-fs-2
Destroy the pool:
# stratis pool destroy my-pool
Verification
Verify that the pool no longer exists:
# stratis pool list
31.4. Deleting a Stratis pool by using the web console
You can use the web console to delete an existing Stratis pool.
Deleting a Stratis pool erases all the data it contains.
Prerequisites
- You have installed the RHEL 10 web console. For instructions, see Installing and enabling the web console.
- The stratisd service is running.
- You have an existing Stratis pool.
Procedure
- Log in to the RHEL 10 web console.
- Click Storage.
- In the Storage table, click the menu button for the Stratis pool you want to delete.
- From the drop-down menu, select Delete.
- In the Permanently delete pool dialog box, click Delete.
Chapter 32. Getting started with an ext4 file system
As a system administrator, you can create, mount, resize, back up, and restore ext4 file systems. The ext4 file system is a scalable extension of ext3. In Red Hat Enterprise Linux 10, it supports individual files up to 16 TB and file systems up to 50 TB.
32.1. Features of an ext4 file system
Explore key ext4 file system features, including extents for efficient large file handling, improved file system check times, advanced allocation techniques, metadata integrity, extended attributes, quota journaling, and subsecond timestamps.
The ext4 file system provides the following features:
- Using extents: The ext4 file system uses extents, which improves performance with large files and reduces their metadata overhead.
- Quicker file system check: Ext4 labels unallocated block groups and inode table sections accordingly, which allows them to be skipped during a file system check. This makes checks faster, a benefit that grows with the size of the file system.
- Metadata checksum: By default, this feature is enabled in Red Hat Enterprise Linux 10.
Allocation features of an ext4 file system:
- Persistent pre-allocation
- Delayed allocation
- Multi-block allocation
- Stripe-aware allocation
- Extended attributes (xattr): This allows the system to associate several additional name and value pairs per file.
- Quota journaling: This avoids the need for lengthy quota consistency checks after a crash.
Note: The only supported journaling mode in ext4 is data=ordered (default). For more information, see the Red Hat Knowledgebase solution "Is the EXT journaling option data=writeback supported in RHEL?".
- Subsecond timestamps: This provides timestamps with subsecond precision.
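If you want to check which of these features are enabled on an existing ext4 file system, you can inspect its superblock. The device name is an example:
# dumpe2fs -h /dev/sdb1
The Filesystem features line typically lists flags such as extent and metadata_csum.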
For more information, see the ext4 man page on your system.
32.2. Creating an ext4 file system
As a system administrator, you can create an ext4 file system on a block device by using the mkfs.ext4 command.
Prerequisites
- A partition on your disk. For information about creating MBR or GPT partitions, see Creating a partition table on a disk with parted.
- Alternatively, use an LVM or MD volume.
Procedure
To create an ext4 file system:
For a regular-partition device, an LVM volume, an MD volume, or a similar device, use the following command:
# mkfs.ext4 /dev/block_device
Replace /dev/block_device with the path to a block device.
For example, /dev/sdb1, /dev/disk/by-uuid/05e99ec8-def1-4a5e-8a9d-5945339ceb2a, or /dev/my-volgroup/my-lv. In general, the default options are optimal for most usage scenarios.
For striped block devices (for example, RAID 5 arrays), you can specify the stripe geometry at the time of file system creation. Using proper stripe geometry enhances the performance of an ext4 file system. For example, to create a file system with a 64k stride (that is, 16 x 4096) on a 4k-block file system, use the following command (a worked example follows the option descriptions below):
# mkfs.ext4 -E stride=16,stripe-width=64 /dev/block_device
In the given example:
- stride=value: Specifies the RAID chunk size
- stripe-width=value: Specifies the number of data disks in a RAID device, or the number of stripe units in the stripe.
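As a worked example of this arithmetic, consider a RAID 5 array with a 256 KiB chunk size and 5 data disks, holding a file system with 4 KiB blocks. The array layout and the /dev/md0 device name are assumptions for illustration only. The stride is 256 KiB / 4 KiB = 64, and the stripe width is 64 x 5 = 320:
# mkfs.ext4 -E stride=64,stripe-width=320 /dev/md0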
Note: To specify a UUID when creating a file system:
# mkfs.ext4 -U UUID /dev/block_device
Replace UUID with the UUID you want to set: for example, 7cd65de3-e0be-41d9-b66d-96d749c02da7. Replace /dev/block_device with the path to the block device on which the ext4 file system with that UUID is created: for example, /dev/sda8.
To specify a label when creating a file system:
# mkfs.ext4 -L label-name /dev/block_device
To view the created ext4 file system:
# blkid
32.3. Mounting an ext4 file system
As a system administrator, you can mount an ext4 file system using the mount utility.
Prerequisites
- An ext4 file system. For information about creating an ext4 file system, see Creating an ext4 file system.
Procedure
To create a mount point to mount the file system:
# mkdir /mount/point
Replace /mount/point with the directory in which you want to create the mount point for the partition.
To mount an ext4 file system:
To mount an ext4 file system with no extra options:
# mount /dev/block_device /mount/point
- To mount the file system persistently, see Persistently mounting file systems; a sketch of such an entry follows below.
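A minimal sketch of a persistent mount entry in /etc/fstab, reusing the example UUID reported by blkid earlier and the example mount point; adjust both for your system:
UUID=05e99ec8-def1-4a5e-8a9d-5945339ceb2a  /mount/point  ext4  defaults  0  0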
To view the mounted file system:
# df -h
32.4. Resizing an ext4 file system
As a system administrator, you can resize an ext4 file system using the resize2fs utility. The resize2fs utility reads the size in units of file system block size, unless a suffix indicating a specific unit is used.
The following suffixes indicate specific units:
- s (sectors): 512-byte sectors
- K (kilobytes): 1,024 bytes
- M (megabytes): 1,048,576 bytes
- G (gigabytes): 1,073,741,824 bytes
- T (terabytes): 1,099,511,627,776 bytes
Prerequisites
- An ext4 file system. For information about creating an ext4 file system, see Creating an ext4 file system.
- An underlying block device of an appropriate size to hold the file system after resizing.
Procedure
To resize an ext4 file system, take the following steps:
To shrink or grow an unmounted ext4 file system:
# umount /dev/block_device
# e2fsck -f /dev/block_device
# resize2fs /dev/block_device size
Replace /dev/block_device with the path to the block device, for example /dev/sdb1. Replace size with the required resize value, using the s, K, M, G, or T suffixes.
An ext4 file system can be grown while mounted by using the resize2fs command:
# resize2fs /mount/device size
Note: The size parameter is optional (and often redundant) when expanding. resize2fs automatically expands the file system to fill the available space of the container, usually a logical volume or partition. See the example after this note.
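For example, when the ext4 file system sits on an LVM logical volume, you can grow the volume and then let resize2fs expand the mounted file system to fill it. The volume names are illustrative, and the volume group must have enough free space:
# lvextend --size +10G /dev/my-volgroup/my-lv
# resize2fs /dev/my-volgroup/my-lv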
To view the resized file system:
# df -h
32.5. Comparison of tools used with ext4 and XFS
Different tools and commands accomplish common file system tasks on ext4 and XFS, including creation, checking, resizing, and backup operations.
This section compares which tools to use to accomplish common tasks on the ext4 and XFS file systems.
| Task | ext4 | XFS |
|---|---|---|
| Create a file system | mkfs.ext4 | mkfs.xfs |
| File system check | e2fsck | xfs_repair |
| Resize a file system | resize2fs | xfs_growfs |
| Save an image of a file system | e2image | xfs_metadump and xfs_mdrestore |
| Label or tune a file system | tune2fs | xfs_admin |
| Back up a file system | tar | xfsdump and xfsrestore |
| Quota management | quota | xfs_quota |
| File mapping | filefrag | xfs_bmap |
If you want a complete client-server solution for backups over a network, you can use the Bacula backup utility, which is available in RHEL. For more information about Bacula, see Bacula backup solution.