Chapter 6. Asynchronous errata updates
This section describes the bug fixes, known issues, and enhancements of the z-stream releases.
6.1. Red Hat Ceph Storage 8.1z1
Red Hat Ceph Storage release 8.1z1 is now available. The security updates and bug fixes that are included in the update are listed in the RHSA-2025:11749 advisory.
6.2. Red Hat Ceph Storage 8.1z2
Red Hat Ceph Storage release 8.1z2 is now available. The bug fixes that are included in the update are listed in the RHBA-2025:14015 and RHBA-2025:13981 advisories.
6.3. Red Hat Ceph Storage 8.1z3
Red Hat Ceph Storage release 8.1z3 is now available. The bug fixes that are included in the update are listed in the RHBA-2025:17047 and RHBA-2025:17048 advisories.
6.3.1. Enhancements
This section lists all of the major updates and enhancements introduced in this release of Red Hat Ceph Storage.
6.3.1.1. Ceph Object Gateway
Enhanced conditional operations
This enhancement introduces support for conditional PUT and DELETE operations, including bulk and multi-delete requests. These conditional operations improve data consistency for some workloads.
The conditional InitMultipartUpload is not implemented in this release.
Bugzilla:2375001, Bugzilla:2383253
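The semantics of these conditional operations can be illustrated with a small in-memory toy model. This is not RGW's implementation; the class, method names, and status codes below are assumptions chosen to mirror the HTTP behavior (200/204 on success, 412 Precondition Failed):

```python
# Toy in-memory model of S3-style conditional PUT/DELETE semantics, for
# illustration only -- NOT RGW's implementation.
class ConditionalStore:
    def __init__(self):
        self.objects = {}  # key -> etag of the stored object

    def put(self, key, etag, if_none_match=None, if_match=None):
        current = self.objects.get(key)
        # If-None-Match: "*" -- only create if the key does not exist yet.
        if if_none_match == "*" and current is not None:
            return 412
        # If-Match: <etag> -- only overwrite if the stored etag matches.
        if if_match is not None and current != if_match:
            return 412
        self.objects[key] = etag
        return 200

    def delete(self, key, if_match=None):
        current = self.objects.get(key)
        # If-Match: <etag> -- only delete if the stored etag matches.
        if if_match is not None and current != if_match:
            return 412
        self.objects.pop(key, None)
        return 204
```

A second writer's conditional create fails with 412 once the object exists, which is what makes these operations useful for avoiding lost updates under concurrent writers.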
Rate limits now applied for LIST and DELETE requests
LIST and DELETE requests are sub-operations of GET and PUT, respectively, but are typically more resource-intensive.
With this enhancement, it is now possible to configure rate limits for LIST and DELETE requests independently or in conjunction with existing GET and PUT rate limits. This provides more flexible granularity in managing system performance and resource usage.
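As a sketch, the new limits might be applied alongside the existing per-user rate limits with radosgw-admin ratelimit set. The --max-list-ops and --max-delete-ops flag names below are assumptions for illustration (the existing --max-read-ops and --max-write-ops flags are unchanged); check radosgw-admin help in your build for the exact names:

```
# Hypothetical flag names (--max-list-ops, --max-delete-ops) shown for
# illustration; read/write flags are the existing ones.
radosgw-admin ratelimit set --ratelimit-scope=user --uid=testuser \
    --max-read-ops=1024 --max-write-ops=256 \
    --max-list-ops=64 --max-delete-ops=64
radosgw-admin ratelimit enable --ratelimit-scope=user --uid=testuser
```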
6.4. Red Hat Ceph Storage 8.1z4
Red Hat Ceph Storage release 8.1z4 is now available. The bug fixes that are included in the update are listed in the RHSA-2025:21068 and RHSA-2025:21203 advisories.
6.4.1. Known issues
This section documents known issues found in this release of Red Hat Ceph Storage.
6.4.1.1. The Cephadm utility
QAT cannot be used for TLS offload or acceleration mode together with SSL set
Enabling QAT on HAProxy with SSL enabled injects legacy OpenSSL engine directives. The legacy OpenSSL engine path breaks the TLS handshake, emitting a tlsv1 alert internal error, and TLS termination fails.
As a workaround, disable QAT at HAProxy to keep the TLS handshake working. Set the following in the configuration file specification:
- haproxy_qat_support: false
- ssl: true
As a result, QAT is disabled and the HAProxy TLS works as expected.
Under heavy connection rates, higher CPU usage may be seen compared to QAT-offloaded handshakes.
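In a cephadm ingress service specification, the workaround might look like the following. Only haproxy_qat_support and ssl come from the known issue above; the other field values are placeholder assumptions for illustration:

```yaml
service_type: ingress
service_id: rgw.example          # placeholder service ID
spec:
  backend_service: rgw.example   # placeholder backend
  virtual_ip: 192.0.2.10/24      # placeholder VIP
  frontend_port: 443
  monitor_port: 1967
  ssl: true                      # keep TLS termination enabled
  haproxy_qat_support: false     # disable QAT so the TLS handshake succeeds
```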
6.5. Red Hat Ceph Storage 8.1z5
Red Hat Ceph Storage release 8.1z5 is now available. This release includes numerous security updates, bug fixes, and a known issue.
6.5.1. Notable bug fixes
This section describes bugs with significant impact on users that were fixed in this release of Red Hat Ceph Storage. In addition, the section includes descriptions of fixed known issues found in previous versions.
For a full list of bug fixes in this release, see All bug fixes.
6.5.1.1. Ceph File System (CephFS)
The ceph tell command now displays proper error messages for the wrong MDS type.
Previously, the `ceph tell` command did not display a proper error message if the MDS type was incorrect. As a result, the command failed with no error message, and it was difficult to understand what was wrong with the command. With this fix, the ceph tell command returns an appropriate error message, stating "unknown <type_name>" when an incorrect MDS type is used.
(IBMCEPH-11012)
Updated subvolume removal workflow to prevent inconsistent states
Previously, removing a subvolume in a full-cluster condition could leave the subvolume in an invalid state.
With this fix, the subvolume removal workflow has been updated so that metadata is now updated before the UUID directory is moved to the .trash directory. This change ensures that any ENOSPC error is detected during the metadata update, allowing the operation to fail safely and preventing inconsistent state.
As a result, the system no longer leaves subvolumes in a partially removed or invalid state, and subsequent subvolume operations complete successfully.
(IBMCEPH-9439)
readdir requests now complete successfully with directory listings working as expected
Previously, on big-endian systems, a bug in the Ceph MDS caused incorrect encoding of directory fragments. As a result, the CephFS kernel driver received an invalid directory-fragment value, causing the driver to repeatedly send readdir requests without completing them, and user-initiated ls commands did not finish. With this fix, the directory-fragment encoding now uses a common endianness format, and the system automatically detects and handles fragments that were created with the previous incorrect encoding.
(IBMCEPH-12782)
Early return added to prevent NULL dereference in MDS
Previously, the MDS could crash when a NULL pointer was dereferenced.
With this fix, the logic now returns early when the MDRequestRef is NULL instead of dereferencing it. As a result, crashes caused by this condition are prevented, and MDS stability is improved.
(IBMCEPH-12892)
Improved xattr dump handling to prevent MDS crash
Previously, the MDS could crash when handling xattrs in CInode.cc due to empty bufptr values being dumped.
With this fix, the code now checks whether the buffer contains data before dumping it, and explicitly dumps an empty string when the buffer length is zero. This prevents spurious empty buffer entries and ensures safe handling of xattr values. As a result, xattr dumps are cleaner and more accurate, and the MDS no longer crashes in this scenario.
(IBMCEPH-12738)
6.5.1.2. Ceph Object Gateway (RGW)
Object unlink handling updated for zero-shard configuration
Previously, the system did not correctly handle object unlink operations in specific zero-shard configurations.
With this fix, the code has been updated to ensure proper handling when both bucket_index_max_shards and the bucket's num_shards are set to 0. As a result, object unlink operations now succeed in this scenario.
(IBMCEPH-12702)
Tenant user policy and role-based permissions now work as expected after upgrade
Previously, some policy or role-based permissions involving legacy tenant users behaved differently after upgrading to releases that support IAM accounts. As a result, expected access grants would fail.
With this fix, a configuration option has been introduced to allow backward compatibility with previous version behavior.
(IBMCEPH-12352)
6.5.2. All bug fixes
This section provides a complete listing of all bug fixes in this release of Red Hat Ceph Storage.
| Issue key | Severity | Summary |
|---|---|---|
| IBMCEPH-10423 | Critical | Multisite deployment using rgw module is failing with the timeout error on secondary site |
| IBMCEPH-12352 | Critical | Observing 403 error for multi-part request while other requests like ‘cp’ etc are working fine |
| IBMCEPH-12686 | Critical | Syncing stopped "daemon_health: UNKNOWN" post shutdown and recovery of managed cluster [8.1z] |
| IBMCEPH-12761 | Critical | Shallow Clone does not work as expected when an RWX clone is in progress. |
| IBMCEPH-12844 | Critical | pg_autoscaler is calculating correctly but not implementing PG counts and changes, due to high threshold |
| IBMCEPH-12877 | Critical | A few RBD images report error due to incomplete group snapshots on the secondary cluster after workload deployment [8.1z] |
| IBMCEPH-11812 | Important | MGR crashes during CephFS system test due to assertion failure in src/common/RefCountedObj.cc: 14 |
| IBMCEPH-12433 | Important | Accessing RGW ratelimit for user fails with error: failed to get a ratelimit for user id: 'UID', errno: (2) No such file or directory |
| IBMCEPH-12585 | Important | cephadm crashes and doesn’t recover with ganesha-rados-grace tool failed: Failure: -126 |
| IBMCEPH-12705 | Important | Observing slow_ops on a mon daemon post site down tests in a 3 AZ cluster |
| IBMCEPH-12732 | Important | OSD crashes with ceph_assert(diff <= bytes_per_au[pos]) |
| IBMCEPH-12738 | Important | MDS crashed executing asok_command: dump tree with assert ceph::__ceph_assert_fail(char const*, char const*, int, char const*) |
| IBMCEPH-12782 | Important | Application Pod stays in Init state as the CephFS VolumeAttachment doesn’t complete. |
| IBMCEPH-12803 | Important | unable to see old metrics using loki query after upgrade 6 to 7 |
| IBMCEPH-12816 | Important | Stale OLH/plain index entries with pending_removal=true in versioned buckets |
| IBMCEPH-12828 | Important | Unexpected error getting (earmark|encryption tag): error in getxattr: No data available [Errno 61] |
| IBMCEPH-12867 | Important | rbd-mirror daemon restart fails to resume partially synced demote snapshot synchronization on secondary [8.1z] |
| IBMCEPH-12892 | Important | ceph-mds crashed - mds-rank-fin |
| IBMCEPH-12904 | Important | ceph-crash not authenticating with cluster correctly |
| IBMCEPH-12998 | Important | Group replayer shutdown can hang in the face of active m_in_flight_op_tracker ops |
| IBMCEPH-13000 | Important | Bug 2411968 : Group replayer shutdown can hang in the face of active m_in_flight_op_tracker ops [8.1z] |
| IBMCEPH-13002 | Important | RPMInspect fails on executable stack |
| IBMCEPH-13123 | Important | CLONE - radosgw-admin 'bucket rm --bypass-gc' ignores refcount (can lead to DL) |
| IBMCEPH-7933 | Important | mon_memory_target is ignored at startup when set without mon_memory_autotune in the config database |
| IBMCEPH-9439 | Important | Clone In-Progress operations start and error out in a loop. |
| IBMCEPH-11011 | Moderate | Error message is not descriptive for ceph tell command |
| IBMCEPH-11012 | Moderate | Error message is not descriptive for ceph tell command |
| IBMCEPH-12442 | Moderate | Prometheus module error causing cluster to go to HEALTH_ERR |
| IBMCEPH-12579 | Moderate | segmentation fault osd : tick_without_osd_lock() |
| IBMCEPH-12702 | Moderate | When "bucket_index_max_shards" is set to 0 in the zone group , and bucket has num_shards 0 , the "object unlink" fails |
6.5.3. Security fixes
This section lists security fixes from this release of Red Hat Ceph Storage.
For details about each CVE, see CVE Records.
- CVE-2019-10790
- CVE-2021-23358
- CVE-2022-34749
- CVE-2024-31884
- CVE-2024-51744
- CVE-2024-55565
- CVE-2025-7783
- CVE-2025-12816
- CVE-2025-26791
- CVE-2025-47907
- CVE-2025-47913
- CVE-2025-52555
- CVE-2025-58183
- CVE-2025-66031
- CVE-2025-66418
- CVE-2025-66471
- CVE-2025-68429
6.5.4. Known issues
This section documents known issues found in this release of Red Hat Ceph Storage.
6.5.4.1. Ceph Object Gateway multi-site
Bucket index shows stale metadata after lifecycle expiration in versioned buckets
In rare cases, when lifecycle expiration removes objects from versioned buckets, some omap entries in the bucket index might remain even though the objects have already been removed.
As a result, some omap entries may remain in the bucket index. When many leftover keys accumulate, the following error is emitted: (27) File too large. This inconsistency can affect tools or processes that depend on accurate bucket index listings.
As a workaround:
Scan the bucket for leftover keys:
radosgw-admin bucket check olh --bucket=BUCKET_NAME --dump-keys --hide-progress
Remove the leftover omap entries:
radosgw-admin bucket check olh --bucket=BUCKET_NAME --fix
(IBMCEPH-12980)