Chapter 6. Notable Bug Fixes


This section describes bugs fixed in this release of Red Hat Ceph Storage that have significant impact on users.

The output of the radosgw-admin realm rename command now alerts the administrator to run the command separately on each of the realm’s clusters

In a multi-site configuration, the name of a realm is only stored locally and is not shared as part of the period. As a consequence, when it is changed on one cluster, the name is not updated on the other cluster. Previously, users could easily miss this step, which could lead to confusion. With this update, the output of the radosgw-admin realm rename command contains instructions to rename the realm on other clusters as well. (BZ#1423886)
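
For example, a minimal sketch of the procedure, assuming placeholder realm names and the --rgw-realm and --realm-new-name options of radosgw-admin:

# radosgw-admin realm rename --rgw-realm=movies --realm-new-name=cinema

Because the new name is stored only locally, repeat the same command on every other cluster that participates in the realm.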

Objects are now replicated as expected in the Ceph Object Gateway multi-site configuration

In the Ceph Object Gateway multi-site configuration, when the data log for replication contained more than 1000 entries, queries to list the data log entered an infinite loop and consumed all available memory. As a consequence, objects were not replicated from the primary to the secondary Ceph Object Gateway. With this update, the queries no longer loop over the entries, which prevents the infinite loop. As a result, the objects are replicated as expected. (BZ#1465446)

"radosgw-admin zone create" no longer creates an incorrect zone ID

Previously, the radosgw-admin zone create command with a specified zone ID created a zone with a different zone ID. This bug has been fixed, and the command now creates a zone with the specified zone ID. (BZ#1418235)

The radosgw-admin utility no longer logs an unnecessary message

Previously, the radosgw-admin utility logged the following message on every run, even when the message was not relevant:

`2017-02-11 00:09:56.704029 7f9011d259c0 0 System already converted`

The log level of this message has been changed from 0 to 20. As a result, the radosgw-admin command logs the aforementioned message only when appropriate. (BZ#1421819)
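
As an illustration, the message can still be surfaced for troubleshooting by raising the Ceph Object Gateway debug level to 20 or higher in the Ceph configuration file; the section name below is a placeholder:

[client.rgw.gateway-node]
debug rgw = 20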

Results from deep scrubbing are no longer overwritten by shallow scrubbing

Previously, when shallow scrubbing was performed after deep scrubbing, the results from deep scrubbing were overwritten by the results from shallow scrubbing. As a consequence, the deep scrubbing results were lost. With this update, unless the nodeep_scrub flag is set, shallow scrubbing is no longer performed on the regular scrubbing schedule, so the information from deep scrubbing is regenerated. (BZ#1330023)
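
For reference, the flag mentioned above can be inspected and toggled with the standard cluster-wide OSD flag commands; note that the command-line spelling is nodeep-scrub:

# ceph osd dump | grep flags
# ceph osd set nodeep-scrub
# ceph osd unset nodeep-scrub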

An OSD failure no longer causes significant delay

Previously, when an OSD and a Monitor were colocated on the same node and the node failed, Ceph waited for some time before sending the failure report to the Monitor so that the Monitor could decide whether to mark the OSD as down. This could lead to a significant delay. With this update, when an OSD is known to be down, the cluster becomes aware of it immediately after the failure report, and Ceph sends the report to the Monitor right away. (BZ#1425115)

The Ceph Object Gateway provides valid time stamps for newly created objects

Previously, the Ceph Object Gateway was storing 0 in the x-timestamp fields for all objects. This bug has been fixed, and newly created objects have the correct time stamps. Note that old objects will still retain the 0 time stamps. (BZ#1439917)
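
As a hedged example, the time stamp of an object can be verified through the Swift API by sending a HEAD request and checking the X-Timestamp response header; the endpoint, token, container, and object names below are placeholders:

# curl -I -H "X-Auth-Token: $TOKEN" http://rgw.example.com:8080/swift/v1/mycontainer/myobject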

Swift SLOs can now be read from any zone

Previously, the Ceph Object Gateway failed to fetch manifest files of Swift Static Large Objects (SLO). As a consequence, an attempt to read these objects from any zone other than the one where the object was originally uploaded failed. This bug has been fixed, and the objects can be read from all zones as expected. (BZ#1423858)

Ansible and "ceph-disk" no longer fail to create encrypted OSDs if the cluster name is different than "ceph"

Previously, the ceph-disk utility did not support configuring the dmcrypt utility if the cluster name was different than "ceph". Consequently, it was not possible to use the ceph-ansible utility to create encrypted OSDs if a custom cluster name was used.

This bug has been fixed, and custom cluster names can now be used. (BZ#1391920)
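
For illustration, a minimal sketch of preparing an encrypted OSD under a custom cluster name directly with ceph-disk; the cluster name and device are placeholders, and the --cluster and --dmcrypt options are assumed to be available in this release:

# ceph-disk prepare --cluster mycluster --dmcrypt /dev/sdb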

Two new parameters have been introduced to cope with the errors caused by modern Keystone token types

The token revocation API that the Ceph Object Gateway uses no longer works with the modern token types used by OpenStack Keystone. This causes errors in the Ceph log and Python backtraces in Keystone.

To cope with these errors, two new parameters, rgw_keystone_token_cache_size and rgw_keystone_revocation_interval, have been introduced. Setting the rgw_keystone_token_cache_size parameter to 0 in the Ceph configuration file removes the errors. Setting the rgw_keystone_revocation_interval parameter to 0 improves performance, but removes the ability to revoke tokens. (BZ#1438965)
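
For example, one or both settings can be added to the Ceph Object Gateway section of the Ceph configuration file; the section name below is a placeholder, and the trade-offs are as described above:

[client.rgw.gateway-node]
rgw_keystone_token_cache_size = 0
rgw_keystone_revocation_interval = 0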

ceph-radosgw starts as expected after upgrading from 1.3 to 2 when non-default values are used for rgw_region_root_pool and rgw_zone_root_pool

Previously, the ceph-radosgw service did not start after upgrading the Ceph Object Gateway from 1.3 to 2 when the Gateway used non-default values for the rgw_region_root_pool and rgw_zone_root_pool parameters. This bug has been fixed, and the ceph-radosgw service now starts as expected. (BZ#1396956)
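
As an illustration, such non-default root pools are typically declared in the Ceph configuration file; the section and pool names below are placeholders:

[client.rgw.gateway-node]
rgw_region_root_pool = .us.rgw.root
rgw_zone_root_pool = .us.rgw.root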

bi-list operations now perform as expected

Previously, the addition of new bucket index key ranges for multi-site replication induced an unintended bucket index entry decoding problem in the bi-list operation, which is now used during bucket resharding. Consequently, bucket resharding failed when multi-site replication was used.

The logic of the bi-list operation has been changed to resolve this bug, and bi-list operations now perform as expected when multi-site replication is used. (BZ#1446665)
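
For reference, a hedged sketch of the operations involved, with a placeholder bucket name and shard count; the bi list and bucket reshard subcommands are assumed to be available in this release:

# radosgw-admin bi list --bucket=mybucket
# radosgw-admin bucket reshard --bucket=mybucket --num-shards=16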
