Chapter 6. Asynchronous errata updates
This section describes the bug fixes, known issues, and enhancements of the z-stream releases.
6.1. Red Hat Ceph Storage 7.1z1
Red Hat Ceph Storage release 7.1z1 is now available. The bug fixes that are included in the update are listed in the RHBA-2024:5080 and RHBA-2024:5081 advisories.
6.1.1. Enhancements
This section lists enhancements introduced in this release of Red Hat Ceph Storage.
6.1.1.1. Ceph File System
New clone creation no longer slows down due to parallel clone limit
Previously, upon reaching the limit of parallel clones, the remaining clones would queue up, slowing down cloning.
With this enhancement, when the limit of parallel clones is reached, new clone creation requests are rejected. This feature is enabled by default but can be disabled.
Ceph File System names can now be swapped for enhanced disaster recovery
This enhancement provides the option for two file systems to swap their names by using the ceph fs swap command. The file system IDs can also optionally be swapped with this command.
The function of this API is to facilitate file system swaps for disaster recovery. In particular, it avoids situations where a named file system is temporarily missing, which could prompt a higher-level storage operator to recreate the missing file system.
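Example (a hedged sketch; the file system names and IDs are placeholders, and the exact flags can differ by release, so check ceph fs swap --help before use):
ceph fs swap fs_a 1 fs_b 2 --swap-fscids=no --yes_i_really_mean_it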
quota.max_bytes is now set in more understandable size values
Previously, the quota.max_bytes value was set in bytes, resulting in often very large size values, which were hard to set or change.
With this enhancement, the quota.max_bytes values can now be set with human-friendly values, such as M/Mi, G/Gi, or T/Ti. For example, 10GiB or 100K.
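Example (a hedged sketch; the quota is applied through the ceph.quota.max_bytes extended attribute on a CephFS directory, and the mount point and directory name are assumptions):
setfattr -n ceph.quota.max_bytes -v 10GiB /mnt/cephfs/projects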
Health warnings for a standby-replay MDS are no longer included
Previously, all inode and stray counter health warnings were displayed for a standby-replay MDS.
With this enhancement, the standby-replay MDS health warnings are no longer displayed as they are not relevant.
6.1.1.2. Ceph Object Gateway
S3 requests are no longer cut off in the middle of transmission during shutdown
Previously, some clients saw S3 requests cut off in the middle of transmission when the Ceph Object Gateway process shut down without waiting for them to complete.
With this enhancement, the Ceph Object Gateway can be configured (off by default) to wait for the duration defined in the rgw_exit_timeout_secs parameter for all outstanding requests to complete before the process exits unconditionally. The Ceph Object Gateway waits for up to 120 seconds (configurable) for all ongoing S3 requests to complete before exiting unconditionally. During this time, new S3 requests are not accepted.
In containerized deployments, an additional extra_container_args parameter configuration of --stop-timeout=120 (or the value of the rgw_exit_timeout_secs parameter, if not default) is also necessary.
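Example (a hedged sketch; the configuration section client.rgw, the service ID, and the host name are assumptions based on a typical cephadm deployment):
ceph config set client.rgw rgw_exit_timeout_secs 120

service_type: rgw
service_id: myrgw
placement:
  hosts:
    - host01
extra_container_args:
  - "--stop-timeout=120"
The service specification can then be re-applied with ceph orch apply -i <specification file>.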
6.1.2. Known issues
This section documents known issues that are found in this release of Red Hat Ceph Storage.
6.1.2.1. Ceph Object Gateway
Intel QAT Acceleration for Object Compression & Encryption
Intel QuickAssist Technology (QAT) is implemented to help reduce node CPU usage and improve the performance of Ceph Object Gateway when enabling compression and encryption. It’s a known issue that QAT can only be configured on new setups (Greenfield only). QAT Ceph Object Gateway daemons cannot be configured in the same cluster as non-QAT (regular) Ceph Object Gateway daemons.
6.1.2.2. Ceph Upgrade
Cluster keys and certain configuration directories are removed during RHEL 8 to RHEL 9 upgrade
Due to the RHEL 8 deprecation of the libunwind package, this package is removed when upgrading to RHEL 9. The ceph-common package depends on the libunwind package and is therefore removed as well. Removing the ceph-common package results in the removal of the cluster keys and certain configurations in the /etc/ceph and /var/log/ceph directories.
As a result, various node failures can occur. Ceph operations may not work on some nodes due to the removal of the /etc/ceph directory, and systemd and Podman cannot start Ceph services on the node due to the removal of the /var/log/ceph directory.
As a workaround, configure LEAPP to not remove the libunwind package. For full instructions, see Upgrading RHCS 5 hosts from RHEL 8 to RHEL 9 removes ceph-common package. Services fail to start on the Red Hat Customer Portal.
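Example (a hedged sketch of one way to keep the package; the to_keep file path is an assumption, so follow the knowledge base article above for the supported procedure):
echo 'libunwind' >> /etc/leapp/transaction/to_keep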
6.1.2.3. The Cephadm utility
Using the ceph orch ls command with the --export flag corrupts the cert/key file format
Long multi-line strings, such as the contents of cert/key files, are mangled when using ceph orch ls with the --export flag; specifically, some newlines are stripped. As a result, if users re-apply a specification with a cert/key as they got it from ceph orch ls with --export, the cert/key will be unusable by the daemon.
As a workaround, when using ceph orch ls with --export to get the current contents of a specification, modify the formatting of the cert/key field before re-applying the specification. It is recommended to use the format with a '|' and an indented string.
Example:
client_cert: |
  -----BEGIN CERTIFICATE-----
  MIIFCTCCAvGgAwIBAgIUO6yXXkNb1+1tJzxZDplvgKpwWkMwDQYJKoZIhvcNAQEL
  BQAwFDESMBAGA1UEAwwJbXkuY2xpZW50MB4XDTI0MDcyMzA3NDI1N1oXDTM0MDcy
  ...
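A hedged sketch of the export, edit, and re-apply cycle (the spec.yaml file name is an assumption):
ceph orch ls --export > spec.yaml
# edit spec.yaml and restore the cert/key block formatting as shown above
ceph orch apply -i spec.yaml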
6.2. Red Hat Ceph Storage 7.1z2
Red Hat Ceph Storage release 7.1z2 is now available. The bug fixes that are included in the update are listed in the RHBA-2024:9010 and RHBA-2024:9011 advisories.
6.2.1. Enhancements
This section lists enhancements introduced in this release of Red Hat Ceph Storage.
6.2.1.1. Ceph File System
Metrics support for the Replication Start/End notifications
With this enhancement, metrics support for the Replication Start/End notifications is provided. These metrics enable monitoring logic for data replication.
This enhancement provides the labeled metrics last_synced_start, last_synced_end, last_synced_duration, and last_synced_bytes.
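Example (a hedged sketch of inspecting labeled counters on the cephfs-mirror daemon through its admin socket; the socket path is an assumption and varies per deployment):
ceph daemon /var/run/ceph/ceph-client.cephfs-mirror.node01.asok counter dump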
6.2.1.2. RADOS
New mon_cluster_log_level command option to control the cluster log level verbosity for external entities
Previously, debug verbosity logs were sent to all external logging systems regardless of their level settings. As a result, the /var/ filesystem would rapidly fill up.
With this enhancement, the mon_cluster_log_file_level and mon_cluster_log_to_syslog_level command options have been removed. From this release, use only the new generic mon_cluster_log_level command option to control the cluster log level verbosity for the cluster log file and all external entities.
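Example (a minimal sketch, assuming the option is set cluster-wide on the monitors; info is only an example level):
ceph config set mon mon_cluster_log_level info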
6.2.2. Known issues
This section documents known issues that are found in this release of Red Hat Ceph Storage.
6.2.2.1. Build
Cluster keys and certain configuration directories are removed during RHEL 8 to RHEL 9 upgrade
Due to the RHEL 8 deprecation of the libunwind package, this package is removed when upgrading to RHEL 9. The ceph-common package depends on the libunwind package and is therefore removed as well. Removing the ceph-common package results in the removal of the cluster keys and certain configurations in the /etc/ceph and /var/log/ceph directories.
As a result, various node failures can occur. Ceph operations may not work on some nodes due to the removal of the /etc/ceph directory, and systemd and Podman cannot start Ceph services on the node due to the removal of the /var/log/ceph directory.
As a workaround, configure LEAPP to not remove the libunwind package. For full instructions, see Upgrading RHCS 5 hosts from RHEL 8 to RHEL 9 removes ceph-common package. Services fail to start on the Red Hat Customer Portal.
6.3. Red Hat Ceph Storage 7.1z3
Red Hat Ceph Storage release 7.1z3 is now available. The bug fixes that are included in the update are listed in the RHBA-2025:1770 and RHBA-2025:1772 advisories.
6.3.1. Enhancements
This section lists enhancements introduced in this release of Red Hat Ceph Storage.
6.3.1.1. Ceph Dashboard
Update rgw configurations using the UI and API
Previously, the UI and API depended on the is_updatable_at_runtime flag, which returned incorrect values. As a result, customers could not update the rgw configuration from the UI or by using the API.
With this enhancement, customers can now update the rgw configuration at runtime using the API and from the UI. The rgw configuration can also be updated using the CLI.
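Example (a hedged CLI illustration; rgw_enable_usage_log is only an example of a runtime-updatable rgw option):
ceph config set client.rgw rgw_enable_usage_log true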
6.3.1.2. Ceph Object Gateway
Increased efficiency for ordered bucket listings when namespaced bucket index entries exist
Previously, when ignoring namespaced bucket index entries, the code would still access the ignored entry. As a result, there was unnecessary latency in ordered bucket listings.
With this enhancement, ordered bucket index listing is faster when incomplete multipart uploads or other namespaced entries are in place.
Ceph Object Gateway (RGW) users can be created without a key using the radosgw-admin command-line interface
Previously, there was no provision to create RGW users without a key using the radosgw-admin command-line interface; the feature was available only for the admin ops API.
With this enhancement, RGW users can be created without a key on the command line if the --generate-key false flag is passed to radosgw-admin user create.
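Example (a hedged sketch; the uid and display name are placeholders):
radosgw-admin user create --uid=reporting-user --display-name="Reporting User" --generate-key false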
6.3.1.3. Ceph File System
quota.max_bytes is now set in more understandable size values
Previously, the quota.max_bytes value was set in bytes, resulting in often very large size values, which were hard to set or change.
With this enhancement, the quota.max_bytes values can now be set with human-friendly values, such as K/Ki, M/Mi, G/Gi, or T/Ti. For example, 10GiB or 100K.
6.3.1.4. RADOS
Inspection of disk allocator state through the admin socket command
This enhancement provides a middle point between the allocator score, which gives a single number, and the allocator dump, which lists all free chunks.
As a result, the fragmentation histogram groups free chunks by size, giving an approximation of the allocator state. This makes it possible to estimate the severity of the current fragmentation. The fragmentation histogram works for the block, bluefs-db, and bluefs-wal allocators. The extra parameter <disk_alloc> influences the calculation of how many free chunks are unaligned to the disk_alloc boundary. The extra parameter <num_buckets> determines the size of the histogram, but the granularity remains the same.
For example: bluestore allocator fragmentation histogram block 4096 12
The admin socket command now works with default parameters
Previously, listing the allocator histogram with the bluestore allocator fragmentation histogram admin socket command did not work for bluefs-db and bluefs-wal with default parameters. With this enhancement, the admin socket command works with the default parameters.
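Example (a hedged sketch of a full invocation through an OSD's admin socket; the OSD ID is only an illustration):
ceph daemon osd.0 bluestore allocator fragmentation histogram block 4096 12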
6.3.1.5. Build
Lower CPU usage on s390x for CRC32
Previously, the CPU usage was high due to the CRC32 software implementation.
With this enhancement, hardware-accelerated CRC32 is supported on the s390x architecture (z13 or later), which lowers the CPU usage for CRC32 on s390x.
6.3.2. Known issues
This section documents known issues that are found in this release of Red Hat Ceph Storage.
6.3.2.1. Build
Cluster keys and certain configuration directories are removed during RHEL 8 to RHEL 9 upgrade
Due to the RHEL 8 deprecation of the libunwind package, this package is removed when upgrading to RHEL 9. The ceph-common package depends on the libunwind package and is therefore removed as well. Removing the ceph-common package results in the removal of the cluster keys and certain configurations in the /etc/ceph and /var/log/ceph directories.
As a result, various node failures can occur. Ceph operations may not work on some nodes due to the removal of the /etc/ceph directory, and systemd and Podman cannot start Ceph services on the node due to the removal of the /var/log/ceph directory.
As a workaround, configure LEAPP to not remove the libunwind package. For full instructions, see Upgrading RHCS 5 hosts from RHEL 8 to RHEL 9 removes ceph-common package. Services fail to start on the Red Hat Customer Portal.