Chapter 7. Asynchronous errata updates


This section describes the bug fixes, known issues, and enhancements of the z-stream releases.

7.1. Red Hat Ceph Storage 7.0z2

Red Hat Ceph Storage release 7.0z2 is now available. The bug fixes that are included in the update are listed in the RHBA-2024:2743 and RHBA-2024:2744 advisories.

7.1.1. Enhancements

7.1.1.1. Ceph File System

The snap-schedule module retains a defined number of snapshots

With this release, the snap-schedule module supports a new retention specification to retain a user-defined number of snapshots. For example, if you specify 50 snapshots to retain, irrespective of the snapshot creation cadence, then the oldest snapshot is pruned after a new snapshot is created. The actual number of snapshots retained is one less than the specified maximum. In this example, 49 snapshots are retained, leaving a margin of one snapshot that can still be created on the file system in the next iteration. The retained margin avoids breaching the system-configured limit of mds_max_snaps_per_dir.
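For example, the following command is a minimal sketch of adding the count-based retention specification, where the path is a placeholder for a scheduled directory and n is the count-based retention spec:

# ceph fs snap-schedule retention add /some/dir n 50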

Important

Be careful when configuring mds_max_snaps_per_dir and snapshot scheduling limits. If the mds_max_snaps_per_dir limit is breached, the file system returns a "Too many links" error and snapshot schedules can be unintentionally deactivated.

Bugzilla:2227809

Introduction of dump dir command

With this release, a new command, dump dir, is introduced to dump directory information.
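A hedged sketch of the invocation follows; the MDS target and directory path are placeholders, so verify the exact arguments for your build with ceph tell mds.* help:

# ceph tell mds.0 dump dir /some/dir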

Bugzilla:2269686

Snapshot schedule for subvolume with option --subvol is now available

With this enhancement, users can schedule snapshots for subvolumes in default as well as non-default subvolume groups using the --subvol argument.
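A possible invocation is sketched below with hypothetical subvolume and group names; the path argument shown as - is an assumption, as the schedule target is resolved from the subvolume flags, so verify the exact syntax with ceph fs snap-schedule add -h:

# ceph fs snap-schedule add - 1h --fs cephfs --subvol sv1 --group sg1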

Bugzilla:2425085

7.1.1.2. Ceph Metrics

The output of the counter dump command is now split into different sections

With this enhancement, labeled performance counters for the Ceph Object Gateway operation metrics are split into different sections in the output of the counter dump command, separating the user operation counters from the bucket operation counters.
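For example, a quick way to list the section names is sketched below, assuming the command is run where the Object Gateway counters are available and that jq is installed; the exact section keys depend on the daemon queried:

# ceph counter dump | jq 'keys'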

Bugzilla:2251055

7.1.1.3. Ceph Object Gateway

Introduction of a new configuration variable rgw_op_counters_dump_expiration

With this enhancement, a new configuration variable rgw_op_counters_dump_expiration is introduced. This variable controls the number of seconds for which a labeled performance counter is emitted in the output of the ceph counter dump command.

If a bucket or user labeled counter is not updated within rgw_op_counters_dump_expiration seconds, it no longer appears in the JSON output of ceph counter dump.

To turn this filtering off, set the value of rgw_op_counters_dump_expiration to 0.

Note

The value of rgw_op_counters_dump_expiration must not be changed at runtime.
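Because the value must not be changed at runtime, a sketch of disabling the filtering would set it centrally and then restart the Object Gateway daemons; the service name below is hypothetical:

# ceph config set client.rgw rgw_op_counters_dump_expiration 0
# ceph orch restart rgw.myrealm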

Bugzilla:2271715

Perf counters for S3 operations can now be sent to Prometheus

With this enhancement, you can send new labeled perf counters to Prometheus by using the Ceph exporter daemon. This is useful for common S3 operations, such as PUT, which can then be visualized per bucket or per user.

Perf counters for S3 operations labeled by either the user or the bucket are emitted by using the ceph counter dump command.
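A hedged way to spot-check that the labeled counters reach the exporter is sketched below; the host name is a placeholder and port 9926 is assumed to be the ceph-exporter default:

# curl -s http://ceph-host.example.com:9926/metrics | grep ceph_rgw_op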

Bugzilla:1929346

7.1.2. Known issues

7.1.2.1. Multi-site Ceph Object Gateway

Large omap warnings in the archive zone environment

Presently, in an archive zone environment, large omap warnings can occur because several versions of the same object in a bucket are written to a single bucket index shard object. It is recommended to reduce the max_objs_per_shard configuration option to 50,000 to account for the omap OLH entries on the archive zone. This keeps the number of omap entries per bucket index shard object in check and prevents large omap warnings.
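A sketch of applying this recommendation follows, assuming the option is the Object Gateway setting exposed as rgw_max_objs_per_shard in central configuration:

# ceph config set client.rgw rgw_max_objs_per_shard 50000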

Bugzilla:2260117

7.1.2.2. Ceph dashboard

Token import throws an error when trying to import multisite configuration from a remote cluster

The multi-site period information does not return the realm name. As a result, importing a multi-site configuration from a remote cluster through the dashboard fails with an error when the form is submitted. As a workaround, import the multi-site configuration through the command-line interface, as shown below.
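A minimal sketch of the command-line alternative, with placeholder realm name, endpoint URL, and keys:

# radosgw-admin realm pull --rgw-realm=myrealm --url=http://primary.example.com:80 --access-key=ACCESS_KEY --secret=SECRET_KEY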

Bugzilla:2273911

7.1.2.3. Security

CVE-2023-49569 for grafana-container: go-git is present in the Ceph 7.0z2 release

This CVE is present in Ceph 7.0z2 because the Ceph 7.0 release uses an older version of Grafana than the version available in the Ceph 6.1 release.

Ceph 7.0 uses Grafana 9.4.12, while the 6.1z6 release uses Grafana 10.4.0, where this CVE is fixed.

Bugzilla:2271879

7.2. Red Hat Ceph Storage 7.0z1

Red Hat Ceph Storage release 7.0z1 is now available. The bug fixes that are included in the update are listed in the RHBA-2024:1214 and RHBA-2024:1215 advisories.

7.2.1. Enhancements

7.2.1.1. Ceph Block Devices

Improved rbd_diff_iterate2() API performance

Previously, RBD diff-iterate was not guaranteed to execute locally, even if an exclusive lock was available, when diffing against the beginning of time (fromsnapname == NULL) in fast-diff mode (whole_object == true with the fast-diff image feature enabled and valid).

With this enhancement, rbd_diff_iterate2() API performance is improved, thereby increasing the performance for QEMU live disk synchronization and backup use cases, where the fast-diff image feature is enabled.
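For example, a whole-object diff against the beginning of time can be exercised from the CLI as sketched below; the pool and image names are placeholders:

# rbd diff --whole-object mypool/myimage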

Bugzilla:2259052

7.2.1.2. Ceph Object Gateway

rgw-restore-bucket-index tool can now restore the bucket indices for versioned buckets

With this enhancement, the rgw-restore-bucket-index tool now works as broadly as possible, with the ability to restore the bucket indices for un-versioned as well as versioned buckets.
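A hedged invocation for a versioned bucket is sketched below; the bucket name is a placeholder, and flag availability may vary by build, so review the tool's help output first:

# rgw-restore-bucket-index --proceed mybucket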

Bugzilla:2240992

7.2.2. Known issues

7.2.2.1. Multi-site Ceph Object Gateway

Some Ceph Object Gateway applications using S3 client SDKs can experience unexpected errors

Presently, some applications that use S3 client SDKs can experience an unexpected 403 error when uploading a zero-length object if an external checksum is requested.

As a workaround, use Ceph Object Gateway services with SSL.
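For example, a zero-length upload over an SSL endpoint with the AWS CLI can be sketched as follows; the endpoint and bucket names are placeholders:

# touch empty.obj
# aws --endpoint-url https://rgw.example.com s3 cp empty.obj s3://mybucket/empty.obj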

Bugzilla:2256969

7.2.3. Removed functionality

7.2.3.1. Ceph Object Gateway

Prometheus metrics are no longer used

This release introduces new feature-rich labeled perf counters that replace the previously used Object Gateway-related Prometheus metrics. The new metrics are introduced before the old metrics are completely removed to allow a period of overlapping usage.

Important

The Prometheus metrics are currently still available for use simultaneously with the newer metrics during this transition. However, the Prometheus metrics will be completely removed in the Red Hat Ceph Storage 8.0 release.

Use the following table to identify the replacement metrics in 7.0z1 and later.

Table 7.1. Replacement metrics

Deprecated Prometheus metric      New metric in 7.0z1
ceph_rgw_get                      ceph_rgw_op_global_get_obj_ops
ceph_rgw_get_b                    ceph_rgw_op_global_get_obj_bytes
ceph_rgw_get_initial_lat_sum      ceph_rgw_op_global_get_obj_lat_sum
ceph_rgw_get_initial_lat_count    ceph_rgw_op_global_get_obj_lat_count
ceph_rgw_put                      ceph_rgw_op_global_put_obj_ops
ceph_rgw_put_b                    ceph_rgw_op_global_put_obj_bytes
ceph_rgw_put_initial_lat_sum      ceph_rgw_op_global_put_obj_lat_sum
ceph_rgw_put_initial_lat_count    ceph_rgw_op_global_put_obj_lat_count
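When updating dashboards or alerts, a quick sketch to confirm that the new names are exposed before dropping the deprecated ones follows; the exporter host and port are assumptions:

# curl -s http://ceph-host.example.com:9926/metrics | grep -E 'ceph_rgw_op_global_(get|put)_obj'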
