Chapter 6. Asynchronous errata updates
This section describes the bug fixes, known issues, and enhancements of the z-stream releases.
6.1. Red Hat Ceph Storage 7.1z1
Red Hat Ceph Storage release 7.1z1 is now available. The bug fixes that are included in the update are listed in the RHBA-2024:5080 and RHBA-2024:5081 advisories.
6.1.1. Enhancements
This section lists enhancements introduced in this release of Red Hat Ceph Storage.
6.1.1.1. Ceph File System
New clone creation no longer slows down due to parallel clone limit
Previously, upon reaching the limit of parallel clones, the rest of the clones would queue up, slowing down the cloning.
With this enhancement, when the limit of parallel clones is reached, new clone creation requests are rejected. This feature is enabled by default but can be disabled.
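As a sketch, the behavior might be toggled through the Ceph Manager volumes module; the option name (snapshot_clone_no_wait) is an assumption and may differ in your release:
# Check whether new clone requests are rejected once the parallel clone limit is reached (assumed option name)
ceph config get mgr mgr/volumes/snapshot_clone_no_wait
# Disable the rejection so that additional clone requests queue up instead
ceph config set mgr mgr/volumes/snapshot_clone_no_wait false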
Ceph File System names can now be swapped for enhanced disaster recovery
This enhancement provides the option for two file systems to swap their names by using the ceph fs swap command. The file system IDs can also optionally be swapped with this command.
The function of this API is to facilitate file system swaps for disaster recovery. In particular, it avoids situations where a named file system is temporarily missing, which could potentially prompt a higher-level storage operator to recreate the missing file system.
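A minimal sketch of the swap invocation; the file system names and FSCIDs are placeholders, and the exact flag spelling is an assumption that may differ in your release:
# Swap the names of the two file systems; optionally swap their FSCIDs as well
ceph fs swap cephfs_a <fscid_a> cephfs_b <fscid_b> --swap-fscids=yes --yes-i-really-mean-it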
quota.max_bytes is now set in more understandable size values
Previously, the quota.max_bytes value was set in bytes, resulting in often very large size values, which were hard to set or change.
With this enhancement, the quota.max_bytes values can now be set with human-friendly values, such as M/Mi, G/Gi, or T/Ti. For example, 10GiB or 100K.
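For example, assuming a CephFS client mount at /mnt/cephfs (an illustrative path), the quota can be set as an extended attribute with a human-friendly value:
# Set a 10 GiB quota on a CephFS directory
setfattr -n ceph.quota.max_bytes -v 10Gi /mnt/cephfs/projects
# Verify the configured quota
getfattr -n ceph.quota.max_bytes /mnt/cephfs/projects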
Health warnings for a standby-replay MDS are no longer included
Previously, all inode and stray counter health warnings were displayed for a standby-replay MDS.
With this enhancement, the standby-replay MDS health warnings are no longer displayed, as they are not relevant.
6.1.1.2. Ceph Object Gateway
S3 requests are no longer cut off in the middle of transmission during shutdown
Previously, some clients experienced S3 requests being cut off in the middle of transmission during shutdown, because the Ceph Object Gateway process exited without waiting for them to complete.
With this enhancement, the Ceph Object Gateway can be configured (off by default) to wait for the duration defined in the rgw_exit_timeout_secs parameter for all outstanding requests to complete before the process exits unconditionally. The Ceph Object Gateway waits for up to 120 seconds (configurable) for all ongoing S3 requests to complete before exiting. During this time, new S3 requests are not accepted.
In containerized deployments, an additional extra_container_args parameter configuration of --stop-timeout=120 (or the value of the rgw_exit_timeout_secs parameter, if not default) is also necessary.
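A minimal sketch of both settings, assuming a cephadm-deployed Ceph Object Gateway service; the service name and host are illustrative:
# Allow up to 120 seconds for in-flight S3 requests to complete on shutdown
ceph config set client.rgw rgw_exit_timeout_secs 120

# Matching container stop timeout in the cephadm service specification
service_type: rgw
service_id: myrgw
placement:
  hosts:
    - host01
extra_container_args:
  - "--stop-timeout=120"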
6.1.2. Known issues
This section documents known issues that are found in this release of Red Hat Ceph Storage.
6.1.2.1. Ceph Object Gateway
Intel QAT Acceleration for Object Compression & Encryption
Intel QuickAssist Technology (QAT) is implemented to help reduce node CPU usage and improve the performance of Ceph Object Gateway when enabling compression and encryption. It’s a known issue that QAT can only be configured on new setups (Greenfield only). QAT Ceph Object Gateway daemons cannot be configured in the same cluster as non-QAT (regular) Ceph Object Gateway daemons.
6.1.2.2. Ceph Upgrade
Cluster keys and certain configuration directories are removed during RHEL 8 to RHEL 9 upgrade
Due to the deprecation of the libunwind package in RHEL 8, the package is removed when upgrading to RHEL 9. The ceph-common package depends on the libunwind package and is therefore removed as well. Removing the ceph-common package results in the removal of the cluster keys and certain configurations in the /etc/ceph and /var/log/ceph directories.
As a result, various node failures can occur. Ceph operations may not work on some nodes due to the removal of the /etc/ceph directory, and systemd and Podman cannot start Ceph services on the node due to the removal of the /var/log/ceph directory.
As a workaround, configure LEAPP to not remove the libunwind package. For full instructions, see Upgrading RHCS 5 hosts from RHEL 8 to RHEL 9 removes ceph-common package. Services fail to start on the Red Hat Customer Portal.
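One possible way to configure this (an assumption based on the LEAPP transaction files; the Customer Portal article remains the authoritative procedure) is to mark the package to be kept before running the upgrade:
# Assumption: listing libunwind in the LEAPP "to_keep" transaction file prevents its removal,
# so ceph-common and the /etc/ceph and /var/log/ceph contents survive the upgrade
echo "libunwind" >> /etc/leapp/transaction/to_keep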
6.1.2.3. The Cephadm utility
Using the ceph orch ls command with the --export flag corrupts the cert/key file format
Currently, long multi-line strings, such as the contents of cert/key files, are mangled when using ceph orch ls with the --export flag. Specifically, some newlines are stripped. As a result, if users re-apply a specification with a cert/key as it was returned by ceph orch ls with --export, the cert/key is unusable by the daemon.
As a workaround, when using ceph orch ls with --export to get the current contents of a specification, modify the formatting of the cert/key file before re-applying the specification. It is recommended to use the format with a '|' and an indented string.
Example:
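The following sketch shows the recommended block-scalar ('|') formatting; the service type, name, and certificate contents are illustrative placeholders:
service_type: ingress
service_id: rgw.myrgw
spec:
  ssl_cert: |
    -----BEGIN CERTIFICATE-----
    <certificate content>
    -----END CERTIFICATE-----
    -----BEGIN PRIVATE KEY-----
    <key content>
    -----END PRIVATE KEY-----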
6.2. Red Hat Ceph Storage 7.1z2
Red Hat Ceph Storage release 7.1z2 is now available. The bug fixes that are included in the update are listed in the RHBA-2024:9010 and RHBA-2024:9011 advisories.
6.2.1. Enhancements
This section lists enhancements introduced in this release of Red Hat Ceph Storage.
6.2.1.1. Ceph File System
Metrics support for replication start/end notifications
With this enhancement, metrics support for replication start/end notifications is provided. These metrics enable monitoring logic for data replication.
This enhancement provides the following labeled metrics: last_synced_start, last_synced_end, last_synced_duration, and last_synced_bytes.
6.2.1.2. RADOS
New mon_cluster_log_level command option to control the cluster log level verbosity for external entities
Previously, debug verbosity logs were sent to all external logging systems regardless of their level settings. As a result, the /var/ filesystem would rapidly fill up.
With this enhancement, the mon_cluster_log_file_level and mon_cluster_log_to_syslog_level command options have been removed. From this release, use only the new generic mon_cluster_log_level command option to control the cluster log level verbosity for the cluster log file and all external entities.
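For example (a sketch; the chosen level is illustrative):
# Limit the cluster log verbosity written to the cluster log file and sent to external entities
ceph config set mon mon_cluster_log_level info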
6.2.2. Known issues
This section documents known issues that are found in this release of Red Hat Ceph Storage.
6.2.2.1. Build
Cluster keys and certain configuration directories are removed during RHEL 8 to RHEL 9 upgrade
Due to the deprecation of the libunwind package in RHEL 8, the package is removed when upgrading to RHEL 9. The ceph-common package depends on the libunwind package and is therefore removed as well. Removing the ceph-common package results in the removal of the cluster keys and certain configurations in the /etc/ceph and /var/log/ceph directories.
As a result, various node failures can occur. Ceph operations may not work on some nodes due to the removal of the /etc/ceph directory, and systemd and Podman cannot start Ceph services on the node due to the removal of the /var/log/ceph directory.
As a workaround, configure LEAPP to not remove the libunwind package. For full instructions, see Upgrading RHCS 5 hosts from RHEL 8 to RHEL 9 removes ceph-common package. Services fail to start on the Red Hat Customer Portal.
6.3. Red Hat Ceph Storage 7.1z3
Red Hat Ceph Storage release 7.1z3 is now available. The bug fixes that are included in the update are listed in the RHBA-2025:1770 and RHBA-2025:1772 advisories.
6.3.1. Enhancements
This section lists enhancements introduced in this release of Red Hat Ceph Storage.
6.3.1.1. Ceph Dashboard
Update rgw configurations using the UI and API
Previously, the UI and API were dependent on the is_updatable_at_runtime flag, which returned incorrect values. As a result, customers could not update the rgw configuration from the UI or by using the API.
With this enhancement, customers can now update the rgw configuration at runtime by using the API and from the UI. The rgw configuration can also be updated by using the CLI.
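A sketch of a runtime update from the CLI; the option name and value are illustrative:
# Update an rgw option at runtime and confirm the new value
ceph config set client.rgw rgw_max_concurrent_requests 2048
ceph config get client.rgw rgw_max_concurrent_requests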
6.3.1.2. Ceph Object Gateway
Increased efficiency for ordered bucket listings when namespaced bucket index entries exist
Previously, when ignoring namespaced bucket index entries, the code would still access the ignored entries. As a result, there was unnecessary latency in such listings.
With this enhancement, ordered bucket index listing is faster when incomplete multipart uploads or other namespaced entries are in place.
Ceph Object Gateway (RGW) users can now be created without a key by using the radosgw-admin command-line interface
Previously, there was no provision to create RGW users without a key by using the radosgw-admin command-line interface; this capability was available only through the admin ops API.
With this enhancement, RGW users can be created without a key from the command line when the --generate-key false flag is set on radosgw-admin user create.
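For example (the user ID and display name are illustrative):
# Create an RGW user without generating an access/secret key pair
radosgw-admin user create --uid=svc-reader --display-name="Service Reader" --generate-key false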
6.3.1.3. Ceph File System
quota.max_bytes is now set in more understandable size values
Previously, the quota.max_bytes value was set in bytes, resulting in often very large size values, which were hard to set or change.
With this enhancement, the quota.max_bytes values can now be set with human-friendly values, such as K/Ki, M/Mi, G/Gi, or T/Ti. For example, 10GiB or 100K.
6.3.1.4. RADOS
Inspection of disk allocator state through the admin socket command
This enhancement provides a middle point between the allocator score, which gives a single number, and the allocator dump, which lists all free chunks.
As a result, the fragmentation histogram groups free chunks by size, giving an approximation of the allocator state. This provides a way to estimate the severity of the current fragmentation. The fragmentation histogram works for the block, bluefs-db, and bluefs-wal allocators. The extra <disk_alloc> parameter influences the calculation of how many free chunks are unaligned to the disk_alloc boundary. The extra <num_buckets> parameter determines the size of the histogram, but the granularity remains the same.
For example: bluestore allocator fragmentation histogram block 4096 12
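Assuming the command is issued through an OSD admin socket, a full invocation might look like the following (the OSD ID is illustrative):
# 12-bucket fragmentation histogram for the block allocator of osd.0,
# counting free chunks unaligned to a 4096-byte boundary
ceph daemon osd.0 bluestore allocator fragmentation histogram block 4096 12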
The admin socket command now works with default parameters
Previously, the bluestore allocator fragmentation histogram admin socket command did not work for bluefs-db and bluefs-wal with default parameters. With this enhancement, the admin socket command works with the default parameters.
6.3.1.5. Build
Lower CPU usage on s390x for CRC32
Previously, the CPU usage was high due to the CRC32 software implementation.
With this enhancement, hardware-accelerated CRC32 is supported on the s390x architecture (z13 or later), lowering the CPU usage for CRC32 on s390x.
6.3.2. Known issues
This section documents known issues that are found in this release of Red Hat Ceph Storage.
6.3.2.1. Build
Cluster keys and certain configuration directories are removed during RHEL 8 to RHEL 9 upgrade
Due to the deprecation of the libunwind package in RHEL 8, the package is removed when upgrading to RHEL 9. The ceph-common package depends on the libunwind package and is therefore removed as well. Removing the ceph-common package results in the removal of the cluster keys and certain configurations in the /etc/ceph and /var/log/ceph directories.
As a result, various node failures can occur. Ceph operations may not work on some nodes due to the removal of the /etc/ceph directory, and systemd and Podman cannot start Ceph services on the node due to the removal of the /var/log/ceph directory.
As a workaround, configure LEAPP to not remove the libunwind package. For full instructions, see Upgrading RHCS 5 hosts from RHEL 8 to RHEL 9 removes ceph-common package. Services fail to start on the Red Hat Customer Portal.
6.4. Red Hat Ceph Storage 7.1z4
Red Hat Ceph Storage release 7.1z4 is now available. The security and bug fixes that are included in the update are listed in the RHSA-2025:4664 and RHSA-2025:4667 advisories.
6.4.1. Enhancements
This section lists enhancements introduced in this release of Red Hat Ceph Storage.
6.4.1.1. The Cephadm utility
Improved core dump handling in cephadm systemd units
Previously, core dumps were not generated or were truncated when services crashed, especially in hard-to-reproduce cases, resulting in the loss of valuable debugging information.
With this enhancement, cephadm now sets LimitCORE=infinity in its systemd unit file template and configures the ProcessSizeMax and ExternalSizeMax settings for coredumpctl, provided that the mgr/cephadm/set_coredump_overrides setting is enabled. The maximum size for core dumps is controlled by the mgr/cephadm/coredump_max_size setting. As a result, services now generate complete core dumps, improving the ability to debug crash issues.
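A minimal sketch of enabling the behavior; the size value is illustrative:
# Enable cephadm core dump overrides and cap the maximum core dump size
ceph config set mgr mgr/cephadm/set_coredump_overrides true
ceph config set mgr mgr/cephadm/coredump_max_size 32G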
6.4.1.2. Ceph Object Gateway
Bucket notifications can now be sent to a multi-node Kafka cluster
Previously, the Ceph Object Gateway could only send a message to a single-node Kafka cluster.
With this enhancement, bucket notifications can now be sent to a multi-node Kafka cluster. With multi-node Kafka cluster support, the cluster's high availability (HA) is properly utilized. In cases where one node is down while other Kafka nodes are up, messages can still be sent. In addition, because the Ceph Object Gateway is now connected to each node, bucket notification failures no longer occur due to topic partitions not being replicated to all the nodes.
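A hedged sketch of creating a topic that lists several brokers; the endpoint, broker names, and the kafka-brokers attribute are assumptions that may differ in your release:
# Assumption: additional brokers are supplied through the kafka-brokers attribute
aws --endpoint-url http://rgw.example.com:80 sns create-topic --name replication-events \
    --attributes '{"push-endpoint": "kafka://kafka01:9092", "kafka-brokers": "kafka02:9092,kafka03:9092"}'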
Sites can now configure RGW error handling for existing bucket creation
Previously, RGW returned a success response when creating a bucket that already existed in the same zone, even if no new bucket was created. This caused confusion in automated workflows.
With this enhancement, sites can now configure RGW to return an error instead of success when attempting to create a bucket that already exists in the zone. If the configuration option rgw_bucket_exist_override is set to true, RGW returns a 409 BucketAlreadyExists error for duplicate bucket creation requests. By default, this option is set to false.
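For example:
# Return 409 BucketAlreadyExists instead of success for duplicate bucket creation requests
ceph config set client.rgw rgw_bucket_exist_override true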
6.4.1.3. RADOS
PG scrub performance improved by removing unnecessary object ID repair check
Previously, every PG scrub invoked repair_oinfo_oid(), a function meant to fix mismatched object IDs in rare cases linked to a historical filesystem bug. This added unnecessary overhead, as the check applied only under very specific conditions.
With this enhancement, the check was removed, improving deep scrub performance by over 10%. Shallow scrubs are expected to benefit even more.
New ceph osd rm-pg-upmap-primary-all command for OSDMap cleanup
Previously, users had to remove pg_upmap_primary mappings individually by using ceph osd rm-pg-upmap-primary PGID, which was time-consuming and error-prone, especially when cleaning up invalid mappings after pool deletion.
With this enhancement, users can run the new ceph osd rm-pg-upmap-primary-all command to clear all pg_upmap_primary mappings from the OSDMap at once, simplifying management and cleanup.
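For example (the placement group ID is illustrative):
# Previously, mappings had to be removed one at a time
ceph osd rm-pg-upmap-primary 2.3
# With this enhancement, all pg_upmap_primary mappings can be cleared at once
ceph osd rm-pg-upmap-primary-all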
6.4.2. Known issues
This section documents known issues that are found in this release of Red Hat Ceph Storage.
6.4.2.1. The Cephadm utility
Exporter daemons report error state after receiving SIGTERM
Currently, the ceph-exporter and node-exporter daemons return a non-zero return code when they receive a SIGTERM signal. As a result, the system marks these daemons as being in an error state instead of the expected stopped state.
There is no workaround available at this time.
Daemons incorrectly marked as stopped after exiting maintenance mode
When a node is taken out of maintenance mode, cephadm may temporarily display the daemons on that node as being in a stopped state even if they have already started. This occurs because cephadm has not yet refreshed the daemon status on the host. As a result, users may mistakenly see daemons as stopped immediately after maintenance mode ends.
There is no workaround at this time.
6.5. Red Hat Ceph Storage 7.1z5
Red Hat Ceph Storage release 7.1z5 is now available. The security and bug fixes that are included in the update are listed in the RHBA-2025:9335 and RHSA-2025:9340 advisories.
6.6. Red Hat Ceph Storage 7.1z6
Red Hat Ceph Storage release 7.1z6 is now available. This is a container-only release and includes the updates and security bug fixes that are listed in the RHSA-2025:11889 advisory.
6.7. Red Hat Ceph Storage 7.1z7
Red Hat Ceph Storage release 7.1z7 is now available. This is a container-only release and includes the updates and security bug fixes that are listed in the RHSA-2025:13671 advisory.