Chapter 8. Asynchronous errata updates
This section describes the bug fixes, known issues, and enhancements of the z-stream releases.
8.1. Red Hat Ceph Storage 9.0z1
Red Hat Ceph Storage release 9.0z1 is now available. This release includes enhancements, bug fixes, and a new known issue.
8.1.1. Bug fixes
This section describes bugs with significant user impact which were fixed in this release of Red Hat Ceph Storage. In addition, the section includes descriptions of fixed known issues found in previous versions.
8.1.1.1. Ceph File System (CephFS)
Learn about bug fixes for Ceph File System included in this release.
Improved xattr dump handling to prevent MDS crash
Previously, the MDS could crash when handling xattrs in CInode.cc due to empty bufptr values being dumped.
With this fix, the code now checks whether the buffer contains data before dumping it, and explicitly dumps an empty string when the buffer length is zero. This prevents spurious empty buffer entries and ensures safe handling of xattr values. As a result, xattr dumps are cleaner and more accurate, and the MDS no longer crashes in this scenario.
(IBMCEPH-12883)
Subvolume operations no longer blocked during asynchronous clone
Previously, the CephFS Python binding used by the asynchronous cloner in the volumes module invoked the client library API while holding the Python Global Interpreter Lock (GIL). Because the GIL was held for an extended duration, other subvolume operations in the volumes module were blocked while waiting to acquire the lock.
With this fix, the CephFS client API is now invoked without holding the GIL. As a result, subvolume operations in the volumes module can progress normally even when an asynchronous clone operation is running. (IBMCEPH-12760)
MDS crash due to NULL pointer dereference prevented
Previously, the MDS could crash when a NULL MDRequestRef pointer was dereferenced.
With this fix, the logic now returns early when the MDRequestRef is NULL, instead of attempting to dereference it.
As a result, crashes caused by this condition are prevented, improving overall MDS stability.
(IBMCEPH-12900)
8.1.1.2. Ceph Object Gateway
Learn about bug fixes for Ceph Object Gateway included in this release.
Object unlink handling updated for zero-shard configuration
Previously, the system did not correctly handle object unlink operations in specific zero-shard configurations.
With this fix, the code has been updated to ensure proper handling when both bucket_index_max_shards and the bucket’s num_shards are set to 0. As a result, object unlink operations now succeed in this scenario.
(IBMCEPH-12915)
Updated processing logic to ensure all topics are handled
Previously, only the first 1,000 topics were repeatedly processed, preventing the remaining topics from being handled as expected.
With this fix, the system now processes topics in batches of 1,000, ensuring that all topics are eventually processed rather than cycling over only the initial set. As a result, bucket notifications are now sent for all topics, and topic queues no longer fill up or block service operation.
(IBMCEPH-12914)
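The corrected behavior can be illustrated with a marker-based pagination sketch. This is hypothetical code, not the actual Ceph Object Gateway implementation: instead of repeatedly re-reading the first batch of 1,000 entries, each iteration advances a marker until the listing is no longer truncated, so every topic is visited exactly once.

```python
def process_all(fetch_page, handle, batch_size=1000):
    """Process every item by advancing a pagination marker per batch."""
    marker = None
    while True:
        # fetch_page returns (items, next_marker, truncated) for one batch.
        items, marker, truncated = fetch_page(marker, batch_size)
        for item in items:
            handle(item)
        if not truncated:
            # Final batch reached; nothing was skipped or re-processed.
            break

# Demo: 2,500 in-memory "topics" paged 1,000 at a time.
topics = [f"topic-{i}" for i in range(2500)]

def fetch_page(marker, limit):
    start = marker or 0
    page = topics[start:start + limit]
    end = start + len(page)
    return page, end, end < len(topics)

processed = []
process_all(fetch_page, processed.append)
```

The earlier faulty behavior corresponds to ignoring the returned marker and always fetching from the start, which loops over the first page forever once more than one batch exists.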
8.1.2. All bug fixes
This section provides a complete listing of all bug fixes in this release of Red Hat Ceph Storage.
| Issue key | Severity | Summary |
|---|---|---|
| IBMCEPH-12760 | Critical | Shallow Clone does not work as expected when an RWX clone is in progress. |
| IBMCEPH-12841 | Critical | pg_autoscaler is calculating correctly but not implementing PG counts and changes, due to high threshold |
| IBMCEPH-10918 | Important | Allow Ingress service to expose the metrics via HTTPS |
| IBMCEPH-12770 | Important | unix attributes stored on objects appear not to be persistent (as required) |
| IBMCEPH-12827 | Important | Unexpected error getting (earmark|encryption tag): error in getxattr: No data available [Errno 61] |
| IBMCEPH-12883 | Important | MDS crashed executing asok_command: dump tree with assert ceph::__ceph_assert_fail(char const*, char const*, int, char const*) |
| IBMCEPH-12900 | Important | ceph-mds crashed - mds-rank-fin |
| IBMCEPH-12903 | Important | ceph-crash not authenticating with cluster correctly |
| IBMCEPH-12906 | Important | COT get attribute command fails for BlueFS ENOSPC OSD |
| IBMCEPH-12914 | Important | notification code will go into infinite loop when there are more than 1K topics |
| IBMCEPH-12981 | Important | RBD Group mirror snapshots remain in "created" / "not copied" state after rbd-mirror daemon stop & kill |
| IBMCEPH-12982 | Important | Group resync does not recover snapshots stuck in ‘not copied’ state |
| IBMCEPH-12999 | Important | RPMInspect fails on executable stack |
| IBMCEPH-13113 | Important | radosgw-admin 'bucket rm --bypass-gc' ignores refcount (can lead to DL) |
| IBMCEPH-12786 | Moderate | Fix AdminOps Api GetAccount() and DeleteAccount() |
| IBMCEPH-12853 | Moderate | ECDummyOp memory leak in Fast EC |
| IBMCEPH-12915 | Moderate | When "bucket_index_max_shards" is set to 0 in the zone group, and bucket has num_shards 0, the "object unlink" fails |
8.1.3. Security fixes
This section lists security fixes from this release of Red Hat Ceph Storage.
For details about each CVE, see CVE Records.
- CVE-2021-23358
- CVE-2024-51744
- CVE-2024-55565
- CVE-2025-22868
- CVE-2025-26791
- CVE-2025-66418
- CVE-2025-66471
- CVE-2025-7783
8.1.4. Known issues
This section documents known issues found in this release of Red Hat Ceph Storage.
8.1.4.1. Ceph Object Gateway multi-site
Learn about the known issues for Ceph Object Gateway multi-site found in this release.
Bucket index shows stale metadata after lifecycle expiration in versioned buckets
In rare cases, when lifecycle expiration removes objects from versioned buckets, some omap entries in the bucket index might remain even though the objects have already been removed.
If many leftover keys accumulate, the following error is emitted: (27) File too large. This inconsistency can affect tools or processes that depend on accurate bucket index listings.
As a workaround, complete the following steps:
1. Scan the bucket for leftover keys:
   radosgw-admin bucket check olh --bucket=testbucket --dump-keys --hide-progress
2. Remove the leftover omap entries:
   radosgw-admin bucket check olh --bucket=testbucket --fix
8.2. Red Hat Ceph Storage 9.0z2
Red Hat Ceph Storage release 9.0z2 is now available. This release includes enhancements, bug fixes, and known issues.
8.2.1. Enhancements
This section lists all the major updates and enhancements introduced in this release of Red Hat Ceph Storage.
8.2.1.1. Ceph build
More cluster deployment support options
ARM-based platform cluster deployment support was previously available as a limited release. This enhancement provides full availability for new and existing customers in production environments.
Red Hat Ceph Storage clusters now support cluster deployments on ARM 64 (aarch64), IBM POWER, and S390x architectures. This enhancement enables you to consume Ceph storage from cost‑effective AWS Graviton instances and other ARM‑based platforms.
(ISCE-1400)
PKCE enforcement for OAuth2 authorization code flow
PKCE (Proof Key for Code Exchange) enforcement has been added to python‑oauthlib for the OAuth2 authorization code flow. This enhancement strengthens OAuth2 authentication by providing protection against man‑in‑the‑middle (MITM) attacks.
With this change, applications that use OAuth2 through python‑oauthlib can enforce PKCE as part of the authorization process, improving overall authentication security.
(IBMCEPH-12630)
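The mechanism being enforced can be sketched with the standard-library derivation of the PKCE S256 code challenge from RFC 7636. This is an illustrative sketch of the protocol step, not python-oauthlib's own API:

```python
import base64
import hashlib
import secrets

def make_code_verifier() -> str:
    # 32 random bytes yield a 43-character base64url verifier
    # (RFC 7636, section 4.1 requires 43-128 characters).
    return base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")

def s256_challenge(verifier: str) -> str:
    # code_challenge = BASE64URL(SHA256(ASCII(code_verifier))), unpadded.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
```

The client sends the challenge with the authorization request and reveals the matching verifier only at token exchange, so an attacker who intercepts the authorization code cannot redeem it.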
8.2.1.2. Ceph Object Gateway
Improved Object Gateway lifecycle processing performance
This release enhances Ceph Object Gateway lifecycle (LC) processing performance for buckets with multiple, overlapping lifecycle rules.
The improvement groups lifecycle rules that share the same prefix and object tag conditions, allowing Object Gateway to enumerate objects and fetch tags only once instead of performing multiple passes for each rule. This reduces unnecessary I/O and improves concurrency during lifecycle execution.
As a result, lifecycle operations complete faster and more predictably for large buckets with complex lifecycle configurations, improving overall efficiency in high‑scale environments.
(IBMCEPH-13353)
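The grouping idea can be sketched as follows. This is a hypothetical illustration, not the Object Gateway code; the rule fields and names are invented. Rules sharing a prefix and tag filter are bucketed together so that a single object enumeration and tag fetch serves the whole group:

```python
from collections import defaultdict

# Each lifecycle rule filters on a key prefix and an optional set of object tags.
rules = [
    {"id": "expire-logs", "prefix": "logs/", "tags": {}},
    {"id": "expire-tmp", "prefix": "logs/", "tags": {}},
    {"id": "archive", "prefix": "data/", "tags": {"tier": "cold"}},
]

groups = defaultdict(list)
for rule in rules:
    # Rules with identical (prefix, tags) share one listing/tag-fetch pass.
    key = (rule["prefix"], frozenset(rule["tags"].items()))
    groups[key].append(rule["id"])
```

Here the two `logs/` rules collapse into one group, halving the listing work for that prefix.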
8.2.2. Bug fixes
This section describes bugs with significant user impact which were fixed in this release of Red Hat Ceph Storage. In addition, the section includes descriptions of fixed known issues found in previous versions.
8.2.2.1. cephadm utility
Optional installation of service dependencies during cephadm prepare-host
An issue that prevented administrators from controlling the installation of service dependencies when running the cephadm prepare-host command has been fixed.
Previously, the command always installed required system packages and dependencies, even in environments where hosts were already preconfigured or managed by external tools.
With this fix, cephadm prepare-host now supports more flexible host preparation behavior, allowing administrators to better align dependency management with their existing provisioning and automation workflows.
(IBMCEPH-13460)
cephadm‑ansible playbooks correctly resolve the ceph_config module
An issue that caused cephadm‑ansible playbooks to fail due to an unresolved ceph_config module has been fixed.
Previously, playbooks that relied on the ceph_config module could not run successfully, even though the same playbooks worked as expected in earlier releases. This regression prevented automation workflows and tests from retrieving or updating Ceph configuration values through Ansible.
With this fix, the ceph_config module is resolved correctly, and cephadm‑ansible playbooks that depend on this module now run successfully as expected.
(IBMCEPH-13542)
8.2.2.2. Ceph build
gRPC reflection support for the new gRPC service
An issue that prevented gRPC reflection from being available for the new gRPC service has been fixed.
Previously, gRPC clients that rely on reflection were unable to discover service descriptors automatically, which caused requests to fail unless the service definition was provided explicitly.
With this fix, the gRPC reflection capability is available as expected. gRPC clients can now discover service descriptors dynamically, enabling standard tooling and workflows to interact with the service without requiring manual proto file configuration.
(IBMCEPH-13627)
Ceph version string correctly reflects production build after upgrade to RHCS 9
After upgrading a Red Hat Ceph Storage (RHCS) 8 cluster to RHCS 9, the Ceph version output could display the string rc - RelWithDebInfo.
Previously, this caused confusion by suggesting that a release candidate or debug build was running, even though the cluster was using a supported production image.
With this fix, the Ceph version string now correctly reflects the production build status after upgrading to RHCS 9.
(IBMCEPH-13276)
8.2.2.3. Ceph File System (CephFS)
Listing encrypted case‑insensitive CephFS directories
An issue that caused errors when listing the contents of encrypted, case‑insensitive CephFS directories has been fixed.
Previously, directory listing operations could fail when both encryption and case‑insensitive directory features were enabled, even though the data existed and permissions were correctly configured.
With this fix, directory contents can now be listed successfully when encryption and case‑insensitive directory support are enabled together.
(IBMCEPH-13452)
8.2.2.4. Ceph Object Gateway
Intermittent HTTP 409 errors for Ceph Object Gateway S3 requests
An issue that caused Ceph Object Gateway to intermittently return HTTP 409 ConcurrentModification errors for S3 requests has been fixed.
Previously, these errors could occur during normal S3 operations, including GET requests, even when no conflicting client activity was expected.
With this fix, Ceph Object Gateway handles concurrent access scenarios more reliably, reducing unexpected request failures and improving consistency for S3 client operations.
(IBMCEPH-13756)
GET requests for multipart‑uploaded objects return correct results
An issue that caused GET requests to fail with HTTP 404 for multipart‑uploaded objects, even though HEAD requests succeeded, has been fixed.
This inconsistency could lead applications to incorrectly assume that objects were unavailable after validating their existence.
With this fix, GET requests for multipart‑uploaded objects now behave consistently, ensuring reliable access to objects uploaded using multipart upload.
(IBMCEPH-13618)
Ceph Object Gateway successfully starts after upgrade
An issue that caused the Object Gateway service to fail to start after upgrading to Red Hat Ceph Storage 9.0z1 has been fixed.
Previously, Ceph Object Gateway could enter an error state during startup, preventing object storage services from becoming available after the upgrade.
With this fix, Ceph Object Gateway initializes successfully following the upgrade, restoring object storage availability and normal administrative operations.
(IBMCEPH-13877)
8.2.3. All bug fixes
This section provides a complete listing of all bug fixes in this release of Red Hat Ceph Storage.
| Issue key | Summary |
|---|---|
| IBMCEPH-10839 | Squid deployed OSDs are crashing with !ito→is_valid() in BlueStore::Blob::copy_extents_over_empty() |
| IBMCEPH-11376 | QoS Bandwidth limit shows PerClient behavior when set as PerShare behavior |
| IBMCEPH-11414 | IOPS, throughput and latency perf panels do not show data when filtered by time picker |
| IBMCEPH-12396 | Cluster QoS default port number modification and corresponding firewall rule |
| IBMCEPH-12630 | oauthlib: Missing PKCE enforcement in OAuth2 Authorization Code Flow in oauthlib |
| IBMCEPH-12758 | Replication and application resources stuck during application deletion |
| IBMCEPH-12787 | Ganesha crashes when the ingress mode is configured as haproxy-protocol |
| IBMCEPH-12823 | Namespace creation error help message has wrong command for namespace list |
| IBMCEPH-12916 | Allow formatting availability score in JSON |
| IBMCEPH-12929 | Upgrade RHCS 6.1z9 to RHCS 8.1z1 failed due to custom alertmanager.yml |
| IBMCEPH-12935 | enable_cluster_qos field is not displayed when qos is enabled |
| IBMCEPH-13232 | Removing the cluster qos disable command from cqos |
| IBMCEPH-13276 | Upgrading RHCS 8 cluster to RHCS 9 shows version as (rc - RelWithDebInfo) |
| IBMCEPH-13302 | OSD deployment and startup fails on nodes with high OSD count |
| IBMCEPH-13353 | Backport request for PR#66367 to 9.0 for GCHQ |
| IBMCEPH-13361 | CEPHADM_STRAY_DAEMON health warning |
| IBMCEPH-13369 | Bandwidth does not get distributed when qos bandwidth control is enabled at cluster and export level |
| IBMCEPH-13434 | Proxy module improvements |
| IBMCEPH-13452 | cephfs: Error listing encrypted case-insensitive directory contents |
| IBMCEPH-13453 | Add support to update read and write bandwidth parameters in MB/sec |
| IBMCEPH-13460 | Allow optional installation of service dependencies during host preparation |
| IBMCEPH-13493 | Bandwidth control limit set at export level does not work as expected |
| IBMCEPH-13542 | cephadm-ansible playbook fails with couldn’t resolve module/action ceph_config |
| IBMCEPH-13568 | Upgrading cluster from dashboard upgrades to the upstream versions |
| IBMCEPH-13618 | HEAD returns the object but GET requests fail with code 404 for multi-part upload |
| IBMCEPH-13627 | The new gRPC service is missing the corresponding reflections library |
| IBMCEPH-13692 | ceph orch daemon add osd ignores db_devices passed via command for OSD deployment |
| IBMCEPH-13694 | cephadm shell ceph-volume inventory errors out when Thin LVS are created |
| IBMCEPH-13726 | Failed to instantiate ACL |
| IBMCEPH-13756 | HTTP 409 error for Ceph RGW S3 requests |
| IBMCEPH-13847 | PerClient limiter is not working as expected in CQOS (ops and bandwidth control) in cloud build |
| IBMCEPH-13868 | libcephfs proxy daemon crashes when configuring fscrypt key |
| IBMCEPH-13877 | fails to start after upgrade to Tentacle due to inability to decode current period object (.3) from .rgw.root pool |
8.2.4. Security fixes
This section lists security fixes from this release of Red Hat Ceph Storage.
For details about each CVE, see CVE Records.
- CVE-2023-48795
- CVE-2024-11831
- CVE-2025-59436
- CVE-2025-59437
8.2.5. Known issues
This section documents known issues found in this release of Red Hat Ceph Storage 9.0z2.
8.2.5.1. cephadm utility
Alertmanager redeployment fails during upgrade with legacy custom templates
During an upgrade from Red Hat Ceph Storage 6.1z9 to 8.1z1, the cephadm‑managed Alertmanager service can fail to redeploy if a custom Alertmanager template references the legacy default_webhook_urls variable.
In Red Hat Ceph Storage 8.x, changes to Alertmanager template rendering no longer guarantee the presence of this variable. As a result, template rendering can fail with an undefined variable error, preventing Alertmanager from redeploying and pausing the upgrade.
When this issue occurs, the upgrade remains paused, the cluster enters a HEALTH_WARN state, and the upgrade cannot continue until the configuration is corrected.
As a workaround, update the custom Alertmanager template to replace references to default_webhook_urls with webhook_urls, and then resume the upgrade. After the template is updated, the Alertmanager service redeploys successfully and the upgrade completes.
(IBMCEPH-12929)
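A hypothetical excerpt of such a custom template after the rename follows; only the variable name comes from this issue, and the surrounding receiver configuration is illustrative:

```
receivers:
  - name: default
    webhook_configs:
{% for url in webhook_urls %}  {# previously: default_webhook_urls #}
      - url: '{{ url }}'
{% endfor %}
```

After updating the template, resume the upgrade with ceph orch upgrade resume.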
8.2.5.2. Ceph build
Upgrading the ceph-common package from version 8.1z5 to a later version fails
While upgrading the ceph-common package, the upgrade fails because the dependent libcephfs-daemon package is not included in the later versions.
Affected upgrade paths: upgrades from Red Hat Ceph Storage 8.1z5 to 9.0, 9.0z1, or 9.0z2.
As a workaround, run the upgrade with the --allowerasing option.
With this option, the libcephfs-daemon package is uninstalled and the upgrade proceeds successfully.
(IBMCEPH-13912)
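For example, on a host that uses dnf, the upgrade might be invoked as follows (sketch only; adjust the package set to your environment):

```
# Allow dnf to remove the obsolete libcephfs-daemon dependency during upgrade
dnf upgrade ceph-common --allowerasing
```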
8.2.5.3. Ceph Object Gateway multi-site
Bucket index shows stale metadata after lifecycle expiration in versioned buckets
In rare cases, when lifecycle expiration removes objects from versioned buckets, some bucket index metadata might not be cleaned up correctly. As a result, stale index entries can remain even though the corresponding objects have already been removed.
If many stale entries accumulate, operations that depend on accurate bucket index listings can be affected. In some cases, tools that read the bucket index might report errors, such as (27) File too large.
This issue can impact administrative operations or background processes that rely on consistent bucket index metadata after lifecycle expiration runs on versioned buckets.
As a workaround, administrators can use available Object Gateway tooling to scan for and remove leftover bucket index entries.
(IBMCEPH-12980)
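For example, the same radosgw-admin bucket check olh commands documented for the corresponding 9.0z1 known issue can be used; the bucket name below is a placeholder:

```
radosgw-admin bucket check olh --bucket=<bucket> --dump-keys --hide-progress
radosgw-admin bucket check olh --bucket=<bucket> --fix
```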