Chapter 7. Asynchronous errata updates
This section describes the bug fixes, known issues, and enhancements of the z-stream releases.
7.1. Red Hat Ceph Storage 7.0z2
Red Hat Ceph Storage release 7.0z2 is now available. The bug fixes that are included in the update are listed in the RHBA-2024:2743 and RHBA-2024:2744 advisories.
7.1.1. Enhancements
7.1.1.1. Ceph File System
The snap-schedule module retains a defined number of snapshots
With this release, the snap-schedule module supports a new retention specification to retain a user-defined number of snapshots. For example, if you specify 50 snapshots to retain, irrespective of the snapshot creation cadence, then a snapshot is pruned after a new snapshot is created. The actual number of snapshots retained is 1 less than the maximum number specified. In this example, 49 snapshots are retained, leaving a margin of 1 snapshot that can be created on the file system on the next iteration. This margin avoids breaching the system-configured limit of mds_max_snaps_per_dir.
Be careful when configuring mds_max_snaps_per_dir and snapshot scheduling limits to avoid unintentional deactivation of snapshot schedules: the file system returns a "Too many links" error if the mds_max_snaps_per_dir limit is breached.
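For example, a count-based retention policy can be added through the snap-schedule module. The following sketch assumes the n retention specification, and uses /volume and host01 as placeholders:

    [ceph: root@host01 /]# ceph fs snap-schedule add /volume 1h
    [ceph: root@host01 /]# ceph fs snap-schedule retention add /volume n 50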
Introduction of the dump dir command
With this release, a new command, dump dir, is introduced to dump directory information.
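For example, directory information can be dumped as follows, assuming the command is exposed through the MDS tell interface; the MDS rank and directory path are placeholders:

    [ceph: root@host01 /]# ceph tell mds.0 dump dir /volumes/mydir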
Snapshot schedule for subvolumes with the --subvol option is now available
With this enhancement, users can schedule snapshots for subvolumes in default as well as non-default subvolume groups using the --subvol argument.
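For illustration, the following sketch schedules hourly snapshots for a subvolume sv1 in a non-default group g1; the exact positional path argument can vary, so verify the syntax with ceph fs snap-schedule add --help:

    [ceph: root@host01 /]# ceph fs snap-schedule add - 1h --subvol sv1 --group g1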
7.1.1.2. Ceph Metrics
The output of the counter dump command is now split into different sections
With this enhancement, labeled performance counters for the Ceph Object Gateway operation metrics are split into different sections in the output of the counter dump command, one for the user operation counters and one for the bucket operation counters.
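For example, the sectioned output can be inspected on a Ceph Object Gateway node through the daemon admin socket; the socket path is a placeholder that depends on your deployment:

    [ceph: root@host01 /]# ceph daemon /var/run/ceph/ceph-client.rgw.host01.asok counter dump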
7.1.1.3. Ceph Object Gateway
Introduction of a new configuration variable rgw_op_counters_dump_expiration
With this enhancement, a new configuration variable, rgw_op_counters_dump_expiration, is introduced. This variable controls the number of seconds for which a labeled performance counter is emitted by the ceph counter dump command.
After rgw_op_counters_dump_expiration seconds, if a bucket or user labeled counter has not been updated, it no longer shows up in the JSON output of ceph counter dump.
To turn this filtering off, set the value of rgw_op_counters_dump_expiration to 0.
The value of rgw_op_counters_dump_expiration must not be changed at runtime.
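For illustration, the filtering could be disabled as follows, assuming the option is applied in the client.rgw section; because the value must not be changed at runtime, restart the Ceph Object Gateway daemons afterward:

    [ceph: root@host01 /]# ceph config set client.rgw rgw_op_counters_dump_expiration 0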
Perf counters for S3 operations can now be sent to Prometheus
With this enhancement, you can send a new labeled counter to Prometheus by using the Ceph exporter daemon. This is useful for common S3 operations, such as PUT, which can be visualized per bucket or per user.
Perf counters for S3 operations, labeled by either the user or the bucket, are emitted by using the ceph counter dump command.
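For example, once the Ceph exporter daemon is running, the labeled counters can be scraped from its metrics endpoint; port 9926 is the exporter's usual default and may differ in your deployment:

    [ceph: root@host01 /]# curl -s http://host01:9926/metrics | grep rgw_op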
7.1.2. Known issues
7.1.2.1. Multi-site Ceph Object Gateway
Large omap warnings in the archive zone environment
Presently, in the archive zone environment, it is possible to see large omap warnings due to several versions of the same object in a bucket being written to a single bucket index shard object. The recommendation is to reduce the max_objs_per_shard configuration option to 50,000 to account for the omap olh entries on the archive zone. This helps keep the number of omap entries per bucket index shard object in check and prevents large omap warnings.
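For example, assuming the option is exposed in the configuration database as rgw_max_objs_per_shard, the recommendation could be applied as follows:

    [ceph: root@host01 /]# ceph config set client.rgw rgw_max_objs_per_shard 50000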
7.1.2.2. Ceph dashboard
Token import throws an error when trying to import multisite configuration from a remote cluster
The multi-site period information does not return the realm name; because of this, importing the multi-site configuration gives an error on submitting the form. Workaround: Import the multi-site configuration through the command-line interface.
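For example, the configuration can be pulled from the primary cluster with radosgw-admin; the endpoint and system-user credentials are placeholders:

    [ceph: root@host01 /]# radosgw-admin realm pull --url=http://primary-rgw:80 --access-key=ACCESS_KEY --secret=SECRET_KEY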
7.1.2.3. Security
CVE-2023-49569 for grafana-container: go-git is present in the Ceph 7.0z2 release
A CVE is present in Ceph 7.0z2 because the Ceph 7.0 release uses an older version of Grafana than the Ceph 6.1 release. Ceph 7.0 uses Grafana 9.4.12, while the 6.1z6 release is on Grafana 10.4.0, where this CVE is fixed.
7.2. Red Hat Ceph Storage 7.0z1
Red Hat Ceph Storage release 7.0z1 is now available. The bug fixes that are included in the update are listed in the RHBA-2024:1214 and RHBA-2024:1215 advisories.
7.2.1. Enhancements
7.2.1.1. Ceph Block Devices
Improved rbd_diff_iterate2() API performance
Previously, RBD diff-iterate was not guaranteed to execute locally if an exclusive lock was available when diffing against the beginning of time (fromsnapname == NULL) in fast-diff mode (whole_object == true with the fast-diff image feature enabled and valid).
With this enhancement, rbd_diff_iterate2() API performance is improved, thereby increasing the performance for QEMU live disk synchronization and backup use cases where the fast-diff image feature is enabled.
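The same fast-diff code path can also be exercised from the rbd CLI; for example, the following command diffs against the beginning of time (no --from-snap) in whole-object mode, with pool/image as a placeholder:

    [ceph: root@host01 /]# rbd diff --whole-object pool/image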
7.2.1.2. Ceph Object Gateway
rgw-restore-bucket-index tool can now restore the bucket indices for versioned buckets
With this enhancement, the rgw-restore-bucket-index tool now works as broadly as possible, with the ability to restore the bucket indices for un-versioned as well as versioned buckets.
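For example, a versioned bucket index could be restored as follows; the bucket name and data pool name are placeholders, and the exact options should be verified with rgw-restore-bucket-index --help:

    [ceph: root@host01 /]# rgw-restore-bucket-index --proceed mybucket default.rgw.buckets.data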
7.2.2. Known issues
7.2.2.1. Multi-site Ceph Object Gateway
Some Ceph Object Gateway applications using S3 client SDKs can experience unexpected errors
Presently, some applications using S3 client SDKs can experience an unexpected 403 error when uploading a zero-length object, if an external checksum is requested.
As a workaround, use Ceph Object Gateway services with SSL.
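As one possible way to apply the workaround, SSL can be enabled on the Beast frontend; the port and certificate path are placeholders for your environment:

    [ceph: root@host01 /]# ceph config set client.rgw rgw_frontends "beast ssl_port=443 ssl_certificate=/etc/pki/rgw/rgw.pem"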
7.2.3. Removed functionality
7.2.3.1. Ceph Object Gateway
Prometheus metrics are no longer used
This release introduces new feature-rich labeled perf counters that replace the Object Gateway-related Prometheus metrics previously used. The new metrics are introduced before the old ones are removed, to allow a period of overlapping usage.
The Prometheus metrics are currently still available for use simultaneously with the newer metrics during this transition. However, the Prometheus metrics will be completely removed in the Red Hat Ceph Storage 8.0 release.
Use the following table to identify the replacement metrics in 7.0z1 and later.
Deprecated Prometheus metric | New metric in 7.0z1