Chapter 7. Known issues
This section documents known issues found in this release of Red Hat Ceph Storage.
7.1. The Ceph Ansible utility
Deploying the placement group autoscaler does not work as expected on CephFS-related pools only
To work around this issue, manually enable the placement group autoscaler on the CephFS-related pools after the playbook has run.
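For example, assuming the default CephFS pool names cephfs_data and cephfs_metadata (substitute the pool names used in your deployment), the autoscaler can be enabled from a node with administrative access:
ceph osd pool set cephfs_data pg_autoscale_mode on
ceph osd pool set cephfs_metadata pg_autoscale_mode on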
Ceph OSD fails to use the osd_max_markdown_count parameter because the systemd unit template enforces Restart=always
The systemd unit template for the OSD daemons enforces the Restart=always parameter, which prevents the osd_max_markdown_count parameter from being used and results in the services being restarted. To work around this issue, use the ceph_osd_systemd_overrides variable to override the Restart= parameter in the OSD systemd unit template, for example:
[osds]
osd0 ceph_osd_systemd_overrides="{'Service': {'Restart': 'no'}}"
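After rerunning the playbook, the effective setting can be checked on the OSD node; the unit name ceph-osd@0 below is an example, so substitute the actual OSD ID:
systemctl show ceph-osd@0 --property=Restart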
The filestore-to-bluestore playbook does not support the osd_auto_discovery scenario
Red Hat Ceph Storage 4 deployments based on the osd_auto_discovery scenario cannot use the filestore-to-bluestore playbook to ease the BlueStore migration.
To work around this issue, use the shrink-osd playbook and redeploy the removed OSDs with osd_objectstore: bluestore.
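As a sketch of the workaround, assuming the OSD to migrate has the ID 0 and a standard ceph-ansible layout with an inventory file named hosts (adjust the OSD ID, inventory, and paths to your environment):
ansible-playbook -i hosts infrastructure-playbooks/shrink-osd.yml -e osd_to_kill=0
After the OSD is removed, set osd_objectstore: bluestore for the host, for example in group_vars/osds.yml, and rerun the site playbook to redeploy the OSD with BlueStore.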
The upgrade process does not automatically stop the ceph-crash container daemons
The upgrade process issues a call to the ceph-crash role, but the call only starts the ceph-crash service. If the ceph-crash container daemons are still running during the upgrade process, they are not restarted when the upgrade is complete.
To work around this issue, manually restart the ceph-crash containers after upgrading.
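For example, on each node in the storage cluster, list the ceph-crash unit and restart it; the templated unit name shown here is an assumption and can differ between deployments:
systemctl list-units 'ceph-crash*'
systemctl restart ceph-crash@<hostname>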
7.2. The Ceph Volume utility
When users run the osd.yml or site.yml playbook, ceph-ansible does not create OSDs on the new devices
When users explicitly pass a set of db devices (--db-devices) or wal devices (--wal-devices) to ceph-volume lvm batch and one of those devices is unavailable, the device is filtered out and the results differ from what is expected. To prevent an unexpected OSD topology, the current implementation of ceph-volume lvm batch does not allow adding new OSDs in non-interactive mode if one of the passed db or wal devices is unavailable. Due to this ceph-volume limitation, ceph-ansible is unable to add new OSDs in the batch scenario with the devices and dedicated_devices options.
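As an illustration only, a host configuration of this form, with placeholder device paths, maps data devices to a dedicated DB/WAL device and is affected when one of the listed devices is unavailable:
devices:
  - /dev/sdb
  - /dev/sdc
dedicated_devices:
  - /dev/nvme0n1
  - /dev/nvme0n1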
7.3. Multi-site Ceph Object Gateway
Objects fail to sync in a Ceph Object Gateway multi-site setup
Some objects might fail to sync and can show a status mismatch when users run the radosgw-admin sync status command in a Ceph Object Gateway multi-site setup.
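The reported mismatch can be observed by running the command on a node in each zone, for example:
radosgw-admin sync status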
Currently, there is no workaround for this issue.