Chapter 7. Known issues


This section documents known issues found in this release of Red Hat Ceph Storage.

7.1. The Ceph Ansible utility

Deploying the placement group autoscaler does not work as expected on CephFS-related pools only

To work around this issue, the placement group autoscaler can be manually enabled on the CephFS-related pools after the playbook has run.
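For example, a minimal sketch of enabling the autoscaler manually, assuming the CephFS pools use the default names cephfs_data and cephfs_metadata (substitute the pool names from your cluster):

    # Enable the placement group autoscaler on the CephFS data and metadata pools
    ceph osd pool set cephfs_data pg_autoscale_mode on
    ceph osd pool set cephfs_metadata pg_autoscale_mode on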

(BZ#1836431)

The filestore-to-bluestore playbook does not support the `osd_auto_discovery` scenario

Red Hat Ceph Storage 4 deployments based on the osd_auto_discovery scenario cannot use the filestore-to-bluestore playbook to ease the BlueStore migration.

To work around this issue, use the shrink-osd playbook and then redeploy the shrunken OSD with osd_objectstore: bluestore.
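A minimal sketch of that workaround, assuming ceph-ansible is installed under /usr/share/ceph-ansible, the inventory file is named hosts, and the OSD to migrate has ID 1 (adjust these for your environment):

    # Remove the FileStore OSD with the shrink-osd playbook (OSD ID 1 is an example)
    cd /usr/share/ceph-ansible
    ansible-playbook infrastructure-playbooks/shrink-osd.yml -e osd_to_kill=1 -i hosts

    # Set osd_objectstore: bluestore in the group_vars configuration,
    # then rerun the deployment playbook to redeploy the OSD as BlueStore.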

(BZ#1881523)

7.2. Ceph Management Dashboard

The Dashboard does not provide correct Ceph iSCSI error messages

If Ceph iSCSI returns an error, for example the HTTP "400" code when trying to delete an iSCSI target while a user is logged in, the Red Hat Ceph Storage Dashboard does not forward that error code and message to the Dashboard user in its pop-up notifications. Instead, it displays a generic "500 Internal Server Error". As a result, the message that the Dashboard provides is uninformative and even misleading: an expected behavior ("users cannot delete a busy resource") is perceived as an operational failure ("internal server error"). To work around this issue, see the Dashboard logs.
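As a sketch of where to look, assuming the Dashboard runs in the active ceph-mgr daemon on a bare-metal deployment with default log settings (unit names and paths can differ, for example in containerized deployments):

    # On the node running the active ceph-mgr, check the manager log for the underlying iSCSI error
    journalctl -u ceph-mgr@$(hostname -s)
    # or, when file-based logging is enabled:
    less /var/log/ceph/ceph-mgr.$(hostname -s).log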

(BZ#1786457)

7.3. The Ceph Volume utility

Ceph OSD fails to start because udev resets the permissions for BlueStore DB and WAL devices

Specifying the BlueStore DB and WAL partitions for an OSD with the ceph-volume lvm create command, or with the lvm_volumes option in Ceph Ansible, can cause those devices to fail on startup because the udev subsystem resets the partition permissions back to root:disk.

To work around this issue, manually start the ceph-volume systemd service. For example, to start the OSD with an ID of 8, run: systemctl start 'ceph-volume@lvm-8-*'. You can also use the service command, for example: service ceph-volume@lvm-8-4c6ddc44-9037-477d-903c-63b5a789ade5 start. Manually starting the OSD results in the partitions having the correct permissions, ceph:ceph.
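For reference, the same workaround as copy-and-paste commands, using the OSD ID of 8 and the LVM identifier from the example above (substitute your own OSD ID and identifier):

    # Start the ceph-volume unit for OSD 8 through systemd; the glob matches the LVM identifier
    systemctl start 'ceph-volume@lvm-8-*'

    # Alternatively, use the service command with the full unit name
    service ceph-volume@lvm-8-4c6ddc44-9037-477d-903c-63b5a789ade5 start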

(BZ#1822134)

7.4. Ceph Object Gateway

Deleting buckets or objects in the Ceph Object Gateway causes orphan RADOS objects

Deleting buckets or objects after the Ceph Object Gateway garbage collection (GC) has processed the GC queue causes large quantities of orphan RADOS objects. These RADOS objects are "leaked" data that belonged to the deleted buckets.

Over time, the number of orphan RADOS objects can fill the data pool and degrade the performance of the storage cluster.

To reclaim the space from these orphan RADOS objects, refer to the Finding orphan and leaky objects section of the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide.
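As an illustrative sketch only, assuming the procedure in that guide is built around the rgw-orphan-list tool and that the data pool uses the default name default.rgw.buckets.data (both are assumptions; follow the guide for the authoritative steps):

    # Generate a list of candidate orphan RADOS objects in the Object Gateway data pool
    rgw-orphan-list default.rgw.buckets.data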

(BZ#1844720)

7.5. Multi-site Ceph Object Gateway

The radosgw-admin commands that create and modify users are not allowed in secondary zones for multi-site Ceph Object Gateway environments

Using the radosgw-admin commands to create or modify users and subusers on the secondary zone does not propagate those changes to the master zone, even if the --yes-i-really-mean-it option was used.

To work around this issue, use the REST APIs instead of the radosgw-admin commands. The REST APIs enable you to create and modify users in the secondary zone and propagate those changes to the master zone.
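For illustration, a minimal sketch of creating a user on the secondary zone through the Object Gateway administrative REST API, assuming the gateway listens on rgw.example.com:8080 and exposes the default /admin entry point; the request must be signed with the S3-style credentials of a user that has user-management capabilities, and the signing step is omitted here:

    # Create a user via the admin REST API (host, uid, and display name are placeholders)
    curl -X PUT "http://rgw.example.com:8080/admin/user?uid=newuser&display-name=New%20User&format=json"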

(BZ#1553202)

7.6. Packages

Current version of Grafana causes certain bugs in the Dashboard

Red Hat Ceph Storage 4 uses Grafana version 5.2.4. This version causes the following bugs in the Red Hat Ceph Storage Dashboard:

  • When navigating to Pools > Overall Performance, Grafana returns the following error:

    TypeError: l.c[t.type] is undefined
    true
  • When viewing a pool’s performance details (Pools > select a pool from the list > Performance Details) the Grafana bar is displayed along with other graphs and values, but it should not be there.

These bugs will be fixed after rebasing to a newer Grafana version in a future release of Red Hat Ceph Storage.

(BZ#1786107)

7.7. RADOS

The ceph device command does not work when querying MegaRaid devices

Currently, the ceph device query-daemon-health-metrics command does not support querying the health metrics of disks attached to MegaRaid devices. This command displays an error similar to the following:

smartctl returned invalid JSON

The disk failure prediction module for MegaRaid devices is unusable at this time. Currently, there is no workaround for this issue.
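For reference, a minimal sketch of the invocation that surfaces this error, where osd.0 is a placeholder daemon name:

    # Query SMART-based health metrics for the disks behind a daemon; fails for MegaRaid-attached disks
    ceph device query-daemon-health-metrics osd.0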

See the Red Hat Ceph Storage Hardware Guide for more information on using RAID solutions with Red Hat Ceph Storage.

(BZ#1810396)
