
Chapter 7. Known issues


This section documents known issues found in this release of Red Hat Ceph Storage.

7.1. The Ceph Ansible utility

Deploying the placement group autoscaler does not work as expected on CephFS-related pools

To work around this issue, manually enable the placement group autoscaler on the CephFS-related pools after the playbook has run.
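For example, assuming the default CephFS data and metadata pool names (cephfs_data and cephfs_metadata); substitute the pool names used in your deployment:

ceph osd pool set cephfs_data pg_autoscale_mode on
ceph osd pool set cephfs_metadata pg_autoscale_mode on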

(BZ#1836431)

Ceph OSD fails to use the osd_max_markdown_count parameter because the systemd unit template enforces `Restart=always`

The systemd unit template for OSD daemons enforces Restart=always, which prevents the osd_max_markdown_count parameter from taking effect because the service is restarted regardless of how often the OSD is marked down. To work around this issue, use the ceph_osd_systemd_overrides variable to override the Restart= parameter in the OSD systemd template, for example:

[osds]
osd0 ceph_osd_systemd_overrides="{'Service': {'Restart': 'no'}}"

(BZ#1860739)

The filestore-to-bluestore playbook does not support the `osd_auto_discovery` scenario

Red Hat Ceph Storage 4 deployments based on the osd_auto_discovery scenario cannot use the filestore-to-bluestore playbook to ease the BlueStore migration.

To work around this issue, use the shrink-osd playbook and redeploy the shrunk OSDs with osd_objectstore: bluestore.
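For illustration, assuming the OSD to migrate has ID 1; the ID and invocation below are an example, not a verified procedure for your deployment:

ansible-playbook infrastructure-playbooks/shrink-osd.yml -e osd_to_kill=1

After the OSD is removed, set osd_objectstore: bluestore in the group variables and rerun the site playbook to redeploy it.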

(BZ#1881523)

The upgrade process does not automatically stop the ceph-crash container daemons

The upgrade process issues a call to the ceph-crash role, but the call only starts the ceph-crash service. If the ceph-crash container daemons are still running during the upgrade process, they are not restarted when the upgrade is complete.

To work around this issue, manually restart the ceph-crash containers after upgrading.
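For example, on each node, assuming the ceph-crash container is managed by a ceph-crash@<hostname> systemd unit as created by ceph-ansible; verify the unit name on your systems first:

systemctl restart ceph-crash@$(hostname)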

(BZ#1943471)

7.2. The Ceph Volume utility

When users run the osd.yml or site.yml playbook, ceph-ansible does not create OSDs on the new devices

When users explicitly pass a set of db devices (--db-devices) or wal devices (--wal-devices) to ceph-volume lvm batch and one of those devices is unavailable, the device is filtered out and the resulting OSD layout differs from what is expected. To prevent an unexpected OSD topology, the current implementation of ceph-volume lvm batch does not allow adding new OSDs in non-interactive mode if any of the passed db or wal devices is unavailable. Due to this ceph-volume limitation, ceph-ansible is unable to add new OSDs in the batch scenario of devices and dedicated_devices.
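For illustration, a batch call of this form fails in non-interactive mode when one of the dedicated devices is unavailable; the device paths are examples only:

ceph-volume lvm batch --bluestore --yes /dev/sda /dev/sdb --db-devices /dev/nvme0n1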

(BZ#1896803)

7.3. Multi-site Ceph Object Gateway

Objects fail to sync in a Ceph Object Gateway multi-site setup

Some objects may fail to sync and might show a status mismatch when users run the radosgw-admin sync status command in a Ceph Object Gateway multi-site setup.
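To check the sync state, run the following on a site in the multi-site configuration:

radosgw-admin sync status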

Currently, there is no workaround for this issue.

(BZ#1905369)
