Chapter 8. Asynchronous errata updates
This section describes the bug fixes, known issues, and enhancements of the z-stream releases.
8.1. Red Hat Ceph Storage 9.0z1
Red Hat Ceph Storage release 9.0z1 is now available. This release includes enhancements, bug fixes, and a new known issue.
8.1.1. Bug fixes
This section describes bugs with significant user impact which were fixed in this release of Red Hat Ceph Storage. In addition, the section includes descriptions of fixed known issues found in previous versions.
8.1.1.1. Ceph File System (CephFS)
Learn about bug fixes for Ceph File System included in this release.
Improved xattr dump handling to prevent MDS crash
Previously, the MDS could crash when handling xattrs in CInode.cc due to empty bufptr values being dumped.
With this fix, the code now checks whether the buffer contains data before dumping it, and explicitly dumps an empty string when the buffer length is zero. This prevents spurious empty buffer entries and ensures safe handling of xattr values. As a result, xattr dumps are cleaner and more accurate, and the MDS no longer crashes in this scenario.
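The length check described above can be sketched as follows. This is an illustrative Python sketch only; the actual fix lives in the C++ code of CInode.cc, and the function and field names here are hypothetical.

```python
def dump_xattrs(xattrs):
    """Dump xattr key/value pairs, emitting "" for empty buffers.

    Hypothetical sketch of the guard described above; it does not
    mirror the actual CInode.cc implementation.
    """
    dumped = {}
    for key, buf in xattrs.items():
        if buf is None or len(buf) == 0:
            # Explicitly dump an empty string instead of touching an
            # empty buffer, which previously crashed the MDS.
            dumped[key] = ""
        else:
            dumped[key] = buf.decode("utf-8", errors="replace")
    return dumped
```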
(IBMCEPH-12883)
Subvolume operations no longer blocked during asynchronous clone
Previously, the CephFS Python binding used by the asynchronous cloner in the volumes module invoked the client library API while holding the Python Global Interpreter Lock (GIL). Because the GIL was held for an extended duration, other subvolume operations in the volumes module were blocked while waiting to acquire the lock.
With this fix, the CephFS client API is now invoked without holding the GIL. As a result, subvolume operations in the volumes module can progress normally even when an asynchronous clone operation is running. (IBMCEPH-12760)
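The effect of the fix can be illustrated in plain Python: a call that releases the GIL while it waits (here `time.sleep`, standing in for the CephFS client API call) lets other threads make progress, whereas a call that holds the GIL for its whole duration blocks them. The function names below are illustrative and are not the actual volumes-module code.

```python
import threading
import time

progress = []

def other_subvolume_op():
    # Stands in for another volumes-module operation that needs the GIL.
    for _ in range(5):
        progress.append(time.monotonic())
        time.sleep(0.01)

def long_client_call():
    # time.sleep releases the GIL while it waits, just as the fixed
    # binding now releases it around the CephFS client API call.
    time.sleep(0.2)

worker = threading.Thread(target=other_subvolume_op)
worker.start()
long_client_call()
call_done = time.monotonic()
worker.join()
# The other operation completed all of its steps while the long call
# was still in flight, instead of waiting for it to return.
```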
MDS crash due to NULL pointer dereference prevented
Previously, the MDS could crash when a NULL MDRequestRef pointer was dereferenced.
With this fix, the logic now returns early when the MDRequestRef is NULL, instead of attempting to dereference it.
As a result, crashes caused by this condition are prevented, improving overall MDS stability.
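The early-return guard amounts to the following pattern. The real MDS code is C++ and operates on MDRequestRef; this Python sketch is illustrative only.

```python
def dispatch_request(mdr):
    # Hypothetical sketch of the guard described above; the name and
    # structure do not mirror the actual MDS implementation.
    if mdr is None:
        # Return early rather than dereference a missing request,
        # which previously crashed the MDS.
        return None
    return mdr["op"]
```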
(IBMCEPH-12900)
8.1.1.2. Ceph Object Gateway
Learn about bug fixes for Ceph Object Gateway included in this release.
Object unlink handling updated for zero-shard configuration
Previously, the system did not correctly handle object unlink operations in specific zero-shard configurations.
With this fix, the code has been updated to ensure proper handling when both bucket_index_max_shards and the bucket’s num_shards are set to 0. As a result, object unlink operations now succeed in this scenario.
(IBMCEPH-12915)
Updated processing logic to ensure all topics are handled
Previously, only the first 1,000 topics were repeatedly processed, preventing the remaining topics from being handled as expected.
With this fix, the system now processes topics in batches of 1,000, ensuring that all topics are eventually processed rather than cycling over only the initial set. As a result, bucket notifications are now sent for all topics, and topic queues no longer fill up or block service operation.
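A marker-based batching loop like the one this fix introduces can be sketched as follows. The listing function, batch size constant, and topic names are illustrative stand-ins, not the actual Ceph Object Gateway API.

```python
BATCH_SIZE = 1000  # matches the 1,000-topic page size described above

def list_topics(marker, limit):
    # Hypothetical paginated listing call standing in for the gateway's
    # topic listing: returns up to `limit` topics after `marker`.
    topics = [f"topic-{i}" for i in range(2500)]
    start = topics.index(marker) + 1 if marker else 0
    return topics[start:start + limit]

def process_all_topics(handle):
    """Advance the marker after each batch so every topic is visited,
    instead of re-reading the first 1,000 topics forever."""
    marker = None
    while True:
        batch = list_topics(marker, BATCH_SIZE)
        if not batch:
            break
        for topic in batch:
            handle(topic)
        marker = batch[-1]  # resume after the last topic seen

seen = []
process_all_topics(seen.append)
```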
(IBMCEPH-12914)
8.1.2. All bug fixes
This section provides a complete listing of all bug fixes in this release of Red Hat Ceph Storage.
| Issue key | Severity | Summary |
|---|---|---|
| IBMCEPH-12760 | Critical | Shallow Clone does not work as expected when an RWX clone is in progress. |
| IBMCEPH-12841 | Critical | pg_autoscaler is calculating correctly but not implementing PG counts and changes, due to high threshold |
| IBMCEPH-12932 | Critical | Samba service creation failed due to image pull error |
| IBMCEPH-10918 | Important | Allow Ingress service to expose the metrics via HTTPS |
| IBMCEPH-12770 | Important | unix attributes stored on objects appear not to be persistent (as required) |
| IBMCEPH-12827 | Important | Unexpected error getting (earmark|encryption tag): error in getxattr: No data available [Errno 61] |
| IBMCEPH-12883 | Important | MDS crashed executing asok_command: dump tree with assert ceph::__ceph_assert_fail(char const*, char const*, int, char const*) |
| IBMCEPH-12900 | Important | ceph-mds crashed - mds-rank-fin |
| IBMCEPH-12903 | Important | ceph-crash not authenticating with cluster correctly |
| IBMCEPH-12906 | Important | COT get attribute command fails for BlueFS ENOSPC OSD |
| IBMCEPH-12914 | Important | notification code will go into infinite loop when there are more than 1K topics |
| IBMCEPH-12981 | Important | RBD Group mirror snapshots remain in "created" / "not copied" state after rbd-mirror daemon stop & kill |
| IBMCEPH-12982 | Important | Group resync does not recover snapshots stuck in ‘not copied’ state |
| IBMCEPH-12999 | Important | RPMInspect fails on executable stack |
| IBMCEPH-13113 | Important | radosgw-admin 'bucket rm --bypass-gc' ignores refcount (can lead to DL) |
| IBMCEPH-12786 | Moderate | Fix AdminOps Api GetAccount() and DeleteAccount() |
| IBMCEPH-12853 | Moderate | ECDummyOp memory leak in Fast EC |
| IBMCEPH-12915 | Moderate | When "bucket_index_max_shards" is set to 0 in the zone group and the bucket has num_shards 0, the "object unlink" fails |
8.1.3. Security fixes
This section lists security fixes from this release of Red Hat Ceph Storage.
For details about each CVE, see CVE Records.
- CVE-2021-23358
- CVE-2024-51744
- CVE-2024-55565
- CVE-2025-22868
- CVE-2025-26791
- CVE-2025-66418
- CVE-2025-66471
- CVE-2025-7783
8.1.4. Known issues
This section documents known issues found in this release of Red Hat Ceph Storage.
8.1.4.1. Ceph Object Gateway multi-site
Learn about the known issues for Ceph Object Gateway multi-site found in this release.
Bucket index shows stale metadata after lifecycle expiration in versioned buckets
In rare cases, when lifecycle expiration removes objects from versioned buckets, some omap entries might remain in the bucket index even though the objects have already been removed. When many leftover keys accumulate, the following error is emitted: (27) File too large. This inconsistency can affect tools or processes that depend on accurate bucket index listings.
As a workaround:
1. Scan the bucket for leftover keys:
   radosgw-admin bucket check olh --bucket=testbucket --dump-keys --hide-progress
2. Remove the leftover omap entries:
   radosgw-admin bucket check olh --bucket=testbucket --fix