Chapter 6. Asynchronous errata updates
This section describes the bug fixes, known issues, and enhancements of the z-stream releases.
6.1. Red Hat Ceph Storage 7.1z1
Red Hat Ceph Storage release 7.1z1 is now available. The bug fixes that are included in the update are listed in the RHBA-2024:5080 and RHBA-2024:5081 advisories.
6.1.1. Known issues
This section documents known issues that are found in this release of Red Hat Ceph Storage.
6.1.1.1. Ceph Object Gateway
Intel QAT Acceleration for Object Compression & Encryption
Intel QuickAssist Technology (QAT) is implemented to help reduce node CPU usage and improve the performance of Ceph Object Gateway when compression and encryption are enabled. As a known issue, QAT can be configured only on new setups (greenfield only). QAT Ceph Object Gateway daemons cannot be configured in the same cluster as non-QAT (regular) Ceph Object Gateway daemons.
6.1.1.2. Ceph Upgrade
Cluster keys and certain configuration directories are removed during RHEL 8 to RHEL 9 upgrade
Due to the RHEL 8 deprecation of the libunwind package, this package is removed when upgrading to RHEL 9. The ceph-common package depends on the libunwind package and is therefore removed as well. Removing the ceph-common package results in the removal of the cluster keys and certain configurations in the /etc/ceph and /var/log/ceph directories.
As a result, various node failures can occur. Ceph operations may not work on some nodes because the /etc/ceph directory is removed, and systemd and Podman cannot start Ceph services on the node because the /var/log/ceph directory is removed.
As a workaround, configure LEAPP to not remove the libunwind package. For full instructions, see the knowledge base article "Upgrading RHCS 5 hosts from RHEL 8 to RHEL 9 removes ceph-common package. Services fail to start" on the Red Hat Customer Portal.
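A minimal sketch of the workaround, assuming the standard LEAPP transaction files are present on the host; verify the exact steps in the knowledge base article above.
Example:
# Instruct LEAPP to keep the libunwind package during the in-place upgrade
echo "libunwind" >> /etc/leapp/transaction/to_keep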
6.1.1.3. The Cephadm utility
Using the ceph orch ls command with the --export flag corrupts the cert/key file format
Long multi-line strings, such as cert/key file contents, are mangled when using ceph orch ls with the --export flag; specifically, some newlines are stripped. As a result, if users re-apply a specification with a cert/key exactly as they got it from ceph orch ls with --export, the cert/key is unusable by the daemon.
As a workaround, when modifying a specification obtained by using ceph orch ls with --export, fix the formatting of the cert/key field before re-applying the specification. It is recommended to use the block format with a '|' and an indented string.
Example:
client_cert: |
  -----BEGIN CERTIFICATE-----
  MIIFCTCCAvGgAwIBAgIUO6yXXkNb1+1tJzxZDplvgKpwWkMwDQYJKoZIhvcNAQEL
  BQAwFDESMBAGA1UEAwwJbXkuY2xpZW50MB4XDTI0MDcyMzA3NDI1N1oXDTM0MDcy
  ...
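For example, a possible round-trip workflow; the service type and file name are placeholders:
# Export the current specification of the affected service
ceph orch ls rgw --export > rgw-spec.yaml
# Edit rgw-spec.yaml and restore the '|' block formatting of the cert/key, as shown above
# Re-apply the corrected specification
ceph orch apply -i rgw-spec.yaml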
6.1.2. Enhancements
This section lists enhancements introduced in this release of Red Hat Ceph Storage.
6.1.2.1. Ceph File System
New clone creation no longer slows down due to parallel clone limit
Previously, upon reaching the limit of parallel clones, the remaining clones would queue up, slowing down the cloning process.
With this enhancement, when the limit of parallel clones is reached, new clone creation requests are rejected instead of being queued. This feature is enabled by default but can be disabled, as shown in the sketch below.
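As a sketch, the limit and the rejection behavior can be tuned through Ceph Manager volumes module options; the option names below (snapshot_clone_no_wait and max_concurrent_clones) follow the upstream mgr/volumes module and should be verified against your release:
# Reject new clone requests when the parallel clone limit is reached (default)
ceph config set mgr mgr/volumes/snapshot_clone_no_wait true
# Restore the older queuing behavior instead
ceph config set mgr mgr/volumes/snapshot_clone_no_wait false
# Adjust the parallel clone limit itself
ceph config set mgr mgr/volumes/max_concurrent_clones 4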
Ceph File System names can now be swapped for enhanced disaster recovery
This enhancement provides the option for two file systems to swap their names by using the ceph fs swap command. The file system IDs can also optionally be swapped with this command.
The function of this API is to facilitate file system swaps for disaster recovery. In particular, it avoids situations where a named file system is temporarily missing, which could prompt a higher-level storage operator to recreate the missing file system.
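For illustration, a hedged invocation sketch; the file system names and IDs are placeholders, and the exact confirmation flags should be verified with ceph fs swap --help on your release (prerequisites, such as taking both file systems offline, may apply):
# Swap the names, and optionally the FSCIDs, of two file systems
ceph fs swap fs1 <fs1-id> fs2 <fs2-id> --swap-fscids=yes --yes_i_really_mean_it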
quota.max_bytes is now set in more understandable size values
Previously, the quota.max_bytes value was set in bytes, often resulting in very large size values that were hard to set or change.
With this enhancement, the quota.max_bytes values can now be set with human-friendly values, such as M/Mi, G/Gi, or T/Ti. For example, 10GiB or 100K.
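For example, the quota can be set and read through extended attributes on a CephFS mount; the mount path is a placeholder:
# Set a 10 GiB quota using a human-friendly value
setfattr -n ceph.quota.max_bytes -v 10GiB /mnt/cephfs/dir
# Verify the configured quota
getfattr -n ceph.quota.max_bytes /mnt/cephfs/dir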
Health warnings for a standby-replay MDS are no longer included
Previously, all inode and stray counter health warnings were displayed for a standby-replay MDS.
With this enhancement, the standby-replay MDS health warnings are no longer displayed as they are not relevant.
6.1.2.2. Ceph Object Gateway
S3 requests are no longer cut off in the middle of transmission during shutdown
Previously, some clients experienced S3 requests being cut off in the middle of transmission during shutdown, without waiting for the requests to complete.
With this enhancement, Ceph Object Gateway can be configured (off by default) to wait for the duration defined in the rgw_exit_timeout_secs parameter for all outstanding requests to complete before the process exits. Ceph Object Gateway waits for up to 120 seconds (configurable) for all ongoing S3 requests to complete before exiting unconditionally. During this time, new S3 requests are not accepted.
In containerized deployments, an additional extra_container_args parameter configuration of --stop-timeout=120 (or the value of the rgw_exit_timeout_secs parameter, if not default) is also necessary; see the sketch below.
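A minimal sketch of the two pieces together; the service ID and host name in the specification are illustrative placeholders:
# Set the wait duration for outstanding S3 requests
ceph config set client.rgw rgw_exit_timeout_secs 120
The matching service specification for a containerized deployment:
service_type: rgw
service_id: myrgw
placement:
  hosts:
    - host01
extra_container_args:
  - "--stop-timeout=120"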
6.2. Red Hat Ceph Storage 7.1z2
Red Hat Ceph Storage release 7.1z2 is now available. The bug fixes that are included in the update are listed in the RHBA-2024:9010 and RHBA-2024:9011 advisories.
6.2.1. Known issues
This section documents known issues that are found in this release of Red Hat Ceph Storage.
6.2.1.1. Build
Cluster keys and certain configuration directories are removed during RHEL 8 to RHEL 9 upgrade
Due to the RHEL 8 deprecation of the libunwind package, this package is removed when upgrading to RHEL 9. The ceph-common package depends on the libunwind package and is therefore removed as well. Removing the ceph-common package results in the removal of the cluster keys and certain configurations in the /etc/ceph and /var/log/ceph directories.
As a result, various node failures can occur. Ceph operations may not work on some nodes because the /etc/ceph directory is removed, and systemd and Podman cannot start Ceph services on the node because the /var/log/ceph directory is removed.
As a workaround, configure LEAPP to not remove the libunwind package. For full instructions, see the knowledge base article "Upgrading RHCS 5 hosts from RHEL 8 to RHEL 9 removes ceph-common package. Services fail to start" on the Red Hat Customer Portal.
6.2.2. Enhancements
This section lists enhancements introduced in this release of Red Hat Ceph Storage.
6.2.2.1. Ceph File System
Metrics support for the replication start/end notifications
With this enhancement, metrics support for the replication start/end notifications is provided. These metrics enable monitoring logic for data replication.
This enhancement provides the labeled metrics last_synced_start, last_synced_end, last_synced_duration, and last_synced_bytes.
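For example, the labeled counters can be inspected through the daemon's admin socket; the socket path is a placeholder that depends on the deployment, and the availability of the counter dump command should be verified on your release:
# Dump labeled perf counters from the cephfs-mirror daemon
ceph --admin-daemon /var/run/ceph/ceph-client.cephfs-mirror.<id>.asok counter dump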
6.2.2.2. RADOS
New mon_cluster_log_level command option to control the cluster log level verbosity for external entities
Previously, debug verbosity logs were sent to all external logging systems regardless of their level settings. As a result, the /var/ filesystem would rapidly fill up.
With this enhancement, the mon_cluster_log_file_level and mon_cluster_log_to_syslog_level command options have been removed. From this release, use only the new generic mon_cluster_log_level command option to control the cluster log level verbosity for the cluster log file and all external entities.
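For example, to control the verbosity of cluster logs sent to the log file and external entities; the level value shown is illustrative:
# Set the cluster log level (levels such as debug, info, warn, and error are assumed; verify with your release)
ceph config set mon mon_cluster_log_level info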