Chapter 6. Known issues


This section documents known issues found in this release of Red Hat Ceph Storage.

6.1. The Cephadm utility

Using the haproxy_qat_support setting in ingress specification causes the haproxy daemon to fail deployment

Currently, the haproxy_qat_support setting is present but not functional in the ingress specification. The setting was added to allow haproxy to offload encryption operations on machines with QAT hardware, with the intent of improving performance. Due to an incomplete code update, the feature does not work as intended. If the haproxy_qat_support setting is used, the haproxy daemon fails to deploy.

To avoid this issue, do not use this setting until it is fixed in a later release.
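
For reference, the following is a minimal ingress specification sketch that omits the setting; the service ID, hosts, and addresses are illustrative and not taken from this release note:

service_type: ingress
service_id: rgw.example
placement:
  hosts:
    - host01
    - host02
spec:
  backend_service: rgw.example
  virtual_ip: 192.168.122.100/24
  frontend_port: 8080
  monitor_port: 1967
  # Do not add haproxy_qat_support here; using it in this release
  # causes the haproxy daemon deployment to fail.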

Bugzilla:2308344

PROMETHEUS_API_HOST may not get set when Cephadm initially deploys Prometheus

Currently, PROMETHEUS_API_HOST may not get set when Cephadm initially deploys Prometheus. This issue is most commonly seen when bootstrapping a cluster with --skip-monitoring-stack and then deploying Prometheus at a later time. As a result, some monitoring information may be unavailable.

As a workaround, run the ceph orch redeploy prometheus command, which sets PROMETHEUS_API_HOST while redeploying the Prometheus daemons. Alternatively, set the value manually with the ceph dashboard set-prometheus-api-host <value> command.
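
For example, from inside the cephadm shell; the host name and port in the manual variant are illustrative:

[ceph: root@host01 /]# ceph orch redeploy prometheus

[ceph: root@host01 /]# ceph dashboard set-prometheus-api-host http://host01:9095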

Bugzilla:2315072

6.2. Ceph Manager plugins

Sometimes ceph-mgr modules are temporarily unavailable and their commands fail

Occasionally, the balancer module takes a long time to load after a ceph-mgr restart. As a result, other ceph-mgr modules can become temporarily unavailable and their commands fail.

For example:

[ceph: root@host01 /]# ceph crash ls
Error ENOTSUP: Warning: due to ceph-mgr restart, some PG states may not be up to date
Module 'crash' is not enabled/loaded (required by command 'crash ls'): use `ceph mgr module enable crash` to enable it

As a workaround, if commands from certain ceph-mgr modules fail after a ceph-mgr restart, for example during an upgrade, check the status of the balancer with the ceph balancer status command.

* If the balancer was previously active ("active": true) but is now marked as "active": false, check its status until it is active again, then rerun the other ceph-mgr module commands.

* In other cases, turn off the balancer module with the ceph balancer off command. After turning off the balancer, rerun the other ceph-mgr module commands.

A console sketch of this sequence follows.
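
The output below is abbreviated and illustrative:

[ceph: root@host01 /]# ceph balancer status
{
    "active": false,
    "mode": "upmap"
}
[ceph: root@host01 /]# ceph balancer off
[ceph: root@host01 /]# ceph crash ls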

Bugzilla:2314146

6.3. Ceph Dashboard

Ceph Object Gateway page does not load after a multi-site configuration

The Ceph Object Gateway page does not load because the dashboard cannot find the correct access key and secret key for the new realm during multi-site configuration.

As a workaround, use the ceph dashboard set-rgw-credentials command to manually update the keys.
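
For example, from inside the cephadm shell:

[ceph: root@host01 /]# ceph dashboard set-rgw-credentials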

Bugzilla:2231072

CephFS path is not updated with the correct subvolume path when navigating through the subvolume tab

In the Create NFS Export form for CephFS, the CephFS path is updated with the subvolume group path instead of the subvolume path.

Currently, there is no workaround.

Bugzilla:2303247

Multi-site automation wizard mentions multi-cluster for both Red Hat and IBM Storage Ceph products

Within the multi-site automation wizard, both the Red Hat Ceph Storage and IBM Storage Ceph products are mentioned in reference to multi-cluster support. Only IBM Storage Ceph supports multi-cluster.

Bugzilla:2322398

6.4. Ceph Object Gateway

Objects uploaded as Swift SLO cannot be downloaded by anonymous users

Objects that are uploaded as Swift static large objects (SLO) cannot be downloaded by anonymous users.

Currently, there is no workaround for this issue.

Bugzilla:2272648

Not all apparently eligible reads can be performed locally

Currently, if a RADOS object has been recently created, or in some cases recently modified, it is not immediately possible to serve a read locally. Due to this limitation of the RADOS protocol, not all apparently eligible reads can be performed locally, even when the cluster is correctly configured and operating. In test environments, where many objects are newly created, it is easy to produce an unrepresentative sample of read-local I/O.

Bugzilla:2309383

6.5. Multi-site Ceph Object Gateway

Buckets created by tenanted users do not replicate correctly

Currently, buckets that are created by tenanted users do not replicate correctly.

To avoid this issue, do not create buckets with tenanted users on the secondary zone; instead, create them only on the master zone, for example with the S3 client call shown below.
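
A hypothetical example using the AWS CLI; the endpoint URL and bucket name are illustrative, and the endpoint must point at the master zone:

aws --endpoint-url http://master-zone.example.com:8080 s3 mb s3://example-bucket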

Bugzilla:2325018

Access keys are erroneously marked as inactive when a Red Hat Ceph Storage 8.0 secondary zone replicates user metadata from a pre-8.0 metadata master zone

Currently, when a secondary zone running Red Hat Ceph Storage 8.0 replicates user metadata from a pre-8.0 metadata master zone, the access keys of those users are erroneously marked as "inactive". Inactive keys cannot be used to authenticate requests, so those users are denied access to the secondary zone.

As a workaround, upgrade the current primary zone before upgrading other sites.
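
For example, starting a cephadm upgrade on the primary zone's cluster first; the container image reference is illustrative:

[ceph: root@host01 /]# ceph orch upgrade start --image registry.redhat.io/rhceph/rhceph-8-rhel9:latest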

Bugzilla:2327402

6.6. RADOS

Placement groups are not properly scaled-down after removing the bulk flag on the cluster

Currently, pg-upmap-primary entries are not properly removed for placement groups (PGs) that are pending merge, for example, when the bulk flag is removed from a pool, or in any other case where the number of PGs in a pool decreases. As a result, the PG scale-down process gets stuck and the number of PGs in the affected pool does not decrease as expected.

As a workaround, remove the pg_upmap_primary entries in the OSD map for the affected pool. To view the entries, run the ceph osd dump command, and then run ceph osd rm-pg-upmap-primary PG_ID for each PG in the affected pool.
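
For example; the PG ID shown is illustrative and the dump output is abbreviated:

[ceph: root@host01 /]# ceph osd dump | grep pg_upmap_primary
pg_upmap_primary 7.2 3

[ceph: root@host01 /]# ceph osd rm-pg-upmap-primary 7.2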

After using the workaround, the PG scale-down process resumes as expected.

Bugzilla:2302230
