Chapter 7. Known issues

This section documents known issues found in this release of Red Hat Ceph Storage.

7.1. The Ceph Ansible utility

Deploying the placement group autoscaler does not work as expected, but only on CephFS-related pools

To work around this issue, the placement group autoscaler can be manually enabled on CephFS-related pools after the playbook has run.
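For example, a minimal sketch of enabling the autoscaler manually, assuming the CephFS pools are named cephfs_data and cephfs_metadata (adjust the pool names to match your deployment):

# Enable the placement group autoscaler on the CephFS pools
ceph osd pool set cephfs_data pg_autoscale_mode on
ceph osd pool set cephfs_metadata pg_autoscale_mode on

# Verify the autoscaler state for all pools
ceph osd pool autoscale-status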

(BZ#1836431)

Ceph OSD fails to use the osd_max_markdown_count parameter because the systemd unit template enforces the `Restart=always` parameter

The systemd unit template for the OSD daemons enforces the Restart=always parameter, which prevents the osd_max_markdown_count parameter from taking effect and causes the service to be restarted. To work around this issue, use the ceph_osd_systemd_overrides variable to override the Restart= parameter in the OSD systemd template, for example:

[osds]
osd0 ceph_osd_systemd_overrides="{'Service': {'Restart': 'no'}}"
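After re-running the playbook, you can check that the override was applied on an OSD node. A minimal sketch, assuming the node hosts OSD ID 0 (adjust the ID to an OSD running on that node):

# Display the effective unit file, including any drop-in overrides
systemctl cat ceph-osd@0.service | grep Restart=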

(BZ#1860739)

The filestore-to-bluestore playbook does not support the `osd_auto_discovery` scenario

Red Hat Ceph Storage 4 deployments based on the osd_auto_discovery scenario cannot use the filestore-to-bluestore playbook to ease the BlueStore migration.

To work around this issue, use the shrink-osd playbook and redeploy the shrunken OSDs with osd_objectstore: bluestore.
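As a rough outline of that workaround, assuming the affected OSDs have IDs 0 and 1 and that ceph-ansible runs from /usr/share/ceph-ansible (adjust the IDs, paths, and site playbook to your environment):

# Remove the FileStore OSDs with the shrink-osd playbook
cd /usr/share/ceph-ansible
ansible-playbook infrastructure-playbooks/shrink-osd.yml -e osd_to_kill=0,1

# Set osd_objectstore: bluestore in group_vars/all.yml, then redeploy
# the OSDs with site.yml (or site-container.yml for containerized clusters)
ansible-playbook site.yml --limit osds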

(BZ#1881523)

The upgrade process does not automatically stop the ceph-crash container daemons

The upgrade process issues a call to the ceph-crash role, but the call only starts the ceph-crash service. If the ceph-crash container daemons are still running during the upgrade process, they are not restarted when the upgrade is complete.

To work around this issue, manually restart the ceph-crash containers after upgrading.
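A possible sketch of that manual restart, assuming the ceph-crash container unit follows the ceph-crash@<hostname> naming used by ceph-ansible; list the units first to confirm the exact name on your nodes:

# Find the crash units on this node, then restart them
systemctl list-units 'ceph-crash*'
systemctl restart ceph-crash@$(hostname)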

(BZ#1943471)

7.2. The Ceph Volume utility

When users run the osd.yml or site.yml playbook, ceph-ansible does not create OSDs on the new devices

When users explicitly pass a set of db devices (--db-devices) or wal devices (--wal-devices) to ceph-volume lvm batch and one of those devices is unavailable, the device is filtered out and the result differs from what is expected. The current implementation of ceph-volume lvm batch does not allow adding new OSDs in non-interactive mode if one of the passed db or wal devices is unavailable, in order to prevent an unexpected OSD topology. Due to this ceph-volume limitation, ceph-ansible is unable to add new OSDs in the batch scenario of devices and dedicated_devices.
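For illustration, a sketch of the kind of non-interactive batch call involved, with example device paths; the --report flag previews the layout without creating OSDs:

# Preview the layout; if /dev/nvme0n1 is unavailable, it is filtered out
# and the reported topology differs from what was requested
ceph-volume lvm batch --bluestore /dev/sdb /dev/sdc --db-devices /dev/nvme0n1 --report

# The non-interactive run refuses to create the OSDs in that situation
ceph-volume lvm batch --bluestore /dev/sdb /dev/sdc --db-devices /dev/nvme0n1 --yes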

(BZ#1896803)

7.3. Multi-site Ceph Object Gateway

Objects fail to sync in a Ceph Object Gateway multisite set-up

Some objects might fail to sync and might show a status mismatch when users run the radosgw-admin sync status command in a Ceph Object Gateway multisite set-up.

Currently, there is no workaround for this issue.
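The sync state can still be inspected while the issue is present; a diagnostic sketch, where the bucket name is an example:

# Overall multisite sync status
radosgw-admin sync status

# Per-bucket sync status for a bucket suspected of lagging
radosgw-admin bucket sync status --bucket=example-bucket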

(BZ#1905369)
