Chapter 7. Known issues
This section documents known issues found in this release of Red Hat Ceph Storage.
7.1. The Ceph Ansible utility
Deploying the placement group autoscaler does not work as expected on CephFS-related pools only
To work around this issue, manually enable the placement group autoscaler on the CephFS-related pools after the playbook has run, as in the example below.
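For example, a minimal sketch that assumes the default CephFS pool names cephfs_data and cephfs_metadata; substitute the pool names used in your deployment:
ceph osd pool set cephfs_data pg_autoscale_mode on
ceph osd pool set cephfs_metadata pg_autoscale_mode on
You can verify the result with the ceph osd pool autoscale-status command.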
The filestore-to-bluestore playbook does not support the `osd_auto_discovery` scenario
Red Hat Ceph Storage 4 deployments based on the osd_auto_discovery scenario cannot use the filestore-to-bluestore playbook to ease the BlueStore migration.
To work around this issue, use the shrink-osd playbook and redeploy the shrunken OSD with osd_objectstore: bluestore, as sketched below.
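A rough sketch of the workaround, assuming ceph-ansible is installed under /usr/share/ceph-ansible, that the OSD being migrated has ID 2, and that the cluster is bare-metal; the exact playbook names and extra variables can differ between ceph-ansible versions:
cd /usr/share/ceph-ansible
# Remove the FileStore OSD; osd_to_kill takes a comma-separated list of OSD IDs
ansible-playbook infrastructure-playbooks/shrink-osd.yml -e osd_to_kill=2
# After setting osd_objectstore: bluestore in group_vars, redeploy the OSD nodes
ansible-playbook site.yml --limit osds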
7.2. Ceph Management Dashboard
The Dashboard does not provide correct Ceph iSCSI error messages
If the Ceph iSCSI gateway returns an error, for example an HTTP 400 code when trying to delete an iSCSI target while a user is logged in, the Red Hat Ceph Storage Dashboard does not forward that error code and message to the Dashboard user through its pop-up notifications; it displays a generic "500 Internal Server Error" instead. Consequently, the message that the Dashboard provides is uninformative and even misleading: an expected behavior ("users cannot delete a busy resource") is perceived as an operational failure ("internal server error"). To work around this issue, see the Dashboard logs.
7.3. The Ceph Volume utility
Ceph OSD fails to start because udev resets the permissions for BlueStore DB and WAL devices
Specifying the BlueStore DB and WAL partitions for an OSD, either with the ceph-volume lvm create command or with the lvm_volume option in Ceph Ansible, can cause those devices to fail on startup because the udev subsystem resets the partition permissions back to root:disk.
To work around this issue, manually start the systemd ceph-volume service. For example, to start the OSD with an ID of 8, run: systemctl start 'ceph-volume@lvm-8-*'. You can also use the service command, for example: service ceph-volume@lvm-8-4c6ddc44-9037-477d-903c-63b5a789ade5 start. Manually starting the OSD results in the partitions having the correct permissions, ceph:ceph.
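For example, assuming the affected OSD has ID 8 (the UUID portion of the unit name is specific to each deployment):
# Find the full name of the ceph-volume unit for OSD 8
systemctl list-units --all 'ceph-volume@lvm-8-*'
# Start it; the glob form matches the deployment-specific UUID
systemctl start 'ceph-volume@lvm-8-*'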
7.4. Ceph Object Gateway
Deleting buckets or objects in the Ceph Object Gateway causes orphan RADOS objects
Deleting buckets or objects after the Ceph Object Gateway garbage collection (GC) has processed the GC queue causes large quantities of orphan RADOS objects. These RADOS objects are "leaked" data that belonged to the deleted buckets.
Over time, the number of orphan RADOS objects can fill the data pool and degrade the performance of the storage cluster.
To reclaim the space from these orphan RADOS objects, refer to the Finding orphan and leaky objects section of the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide.
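As a rough illustration of that procedure, and assuming the data pool uses the default name default.rgw.buckets.data (substitute the data pool name for your zone), the rgw-orphan-list tool generates a list of potential orphan objects for review:
rgw-orphan-list default.rgw.buckets.data
Review the resulting list carefully against the guide before removing any object, for example with rados -p default.rgw.buckets.data rm <object-name>.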
7.5. Multi-site Ceph Object Gateway
The radosgw-admin commands that create and modify users are not allowed in secondary zones for multi-site Ceph Object Gateway environments
Using the radosgw-admin commands to create or modify users and subusers on the secondary zone does not propagate those changes to the master zone, even if the --yes-i-really-mean-it option was used.
To work around this issue, use the REST APIs instead of the radosgw-admin commands. The REST APIs enable you to create and modify users in the secondary zone and propagate those changes to the master zone, as in the sketch below.
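As an illustration only, creating a user through the Ceph Object Gateway Admin Operations API takes roughly the following form; the host, user ID, and display name are placeholders, and the request must be signed with the credentials of a gateway user that holds the users=write administrative capability:
PUT /admin/user?uid=newuser&display-name=New%20User HTTP/1.1
Host: rgw.example.com
Authorization: AWS {access-key}:{signature}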
7.6. Packages
Current version of Grafana causes certain bugs in the Dashboard
Red Hat Ceph Storage 4 uses Grafana version 5.2.4. This version causes the following bugs in the Red Hat Ceph Storage Dashboard:
When navigating to Pools > Overall Performance, Grafana returns the following error:
TypeError: l.c[t.type] is undefined true
When viewing a pool’s performance details (Pools > select a pool from the list > Performance Details), the Grafana bar is displayed along with other graphs and values, but it should not be there.
These bugs will be fixed after rebasing to a newer Grafana version in a future release of Red Hat Ceph Storage.
7.7. RADOS
The ceph device command does not work when querying MegaRaid devices
Currently, the ceph device query-daemon-health-metrics command does not support querying the health metrics of disks attached to MegaRaid devices. This command displays an error similar to the following:
smartctl returned invalid JSON
The disk failure prediction module for MegaRaid devices is unusable at this time. Currently, there is no workaround for this issue.
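For reference, the failing invocation has the following general form, where osd.0 is a placeholder for the daemon whose MegaRaid-attached disk is being queried:
ceph device query-daemon-health-metrics osd.0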
See the Red Hat Ceph Storage Hardware Guide for more information on using RAID solutions with Red Hat Ceph Storage.