
Chapter 3. Major Updates


This section lists all major updates, enhancements, and new features introduced in this release of Red Hat Ceph Storage.

Scrub processes can now be disabled during recovery

A new osd_scrub_during_recovery option has been added with this release. Setting this option to false in the Ceph configuration file prevents new scrub processes from starting during recovery. As a result, recovery completes faster.
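For example, the setting can be added to the [osd] section of the Ceph configuration file. The injectargs call shown below is a common way to apply such options at runtime and is included only as a sketch; verify the syntax against your version:

[osd]
osd_scrub_during_recovery = false

# ceph tell osd.* injectargs '--osd_scrub_during_recovery=false'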

The radosgw-admin utility supports a new bucket limitcheck command

The radosgw-admin utility has a new bucket limitcheck command that warns the administrator when a bucket needs resharding. Previously, buckets containing more objects than recommended could go unnoticed and cause performance issues. The new command reports bucket status against the configured bucket sharding recommendations, so administrators can easily detect overloaded buckets.
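For example, the following invocation checks the buckets owned by a single user. The --uid option and the user name are illustrative; consult radosgw-admin --help for the exact syntax available in your version:

# radosgw-admin bucket limitcheck --uid=testuser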

Red Hat Ceph Storage now ships with a companion ISO that contains the debuginfo packages

Previously, it was difficult for users to consume the debuginfo packages in restricted environments where the Red Hat CDN was not directly accessible. Red Hat Ceph Storage now ships with a companion ISO for Red Hat Enterprise Linux that contains the debuginfo packages for the product. Users can now use this ISO to obtain the debuginfo packages.
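For example, the ISO can be mounted loop-back and used as a local Yum repository on hosts without CDN access. The ISO file name, mount point, and repository layout below are placeholders and may differ from the shipped media:

# mount -o loop /path/to/rhceph-2-debuginfo.iso /mnt/debuginfo
# cat > /etc/yum.repos.d/rhceph-debuginfo.repo <<EOF
[rhceph-debuginfo]
name=Red Hat Ceph Storage debuginfo (local ISO)
baseurl=file:///mnt/debuginfo
enabled=1
gpgcheck=0
EOF
# yum install ceph-debuginfo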

The process of enabling SELinux on a Ceph Storage Cluster has been improved

A new subcommand has been added to the ceph-disk utility to speed up the process of enabling SELinux on a Ceph Storage Cluster. Previously, the standard SELinux labeling process did not take into account that OSDs usually reside on different disks, which made labeling slow. The new subcommand speeds up the process by labeling the Ceph files in parallel, per OSD.
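Conceptually, per-OSD parallel relabeling resembles the following sketch built from standard SELinux tools. This is only an illustration of the idea, not the new ceph-disk subcommand itself, and it assumes OSD data directories under /var/lib/ceph/osd/:

# ls -d /var/lib/ceph/osd/* | xargs -P 4 -n 1 restorecon -R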

Subscription Manager now reports on the raw disk capacity available per OSD

With this release, Red Hat Subscription Manager can report the raw disk capacity available per OSD. To display this information, run:

# subscription-manager facts
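Because the facts output is long, filtering it can help locate the relevant entries. The storage pattern below is only an assumption about how the facts are named:

# subscription-manager facts | grep -i storage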

The Ceph Object Gateway data logs are trimmed automatically

Previously, after data logs in the Ceph Object Gateway were processed by data sync and were no longer needed, they remained on the Ceph Object Gateway host, taking up space. Ceph now removes these logs automatically.

Improved error messaging in the Ceph Object Gateway

Previously, when an invalid placement group configuration prevented the Ceph Object Gateway from creating any of its internal pools, the error message was insufficient, making it difficult to deduce the root cause of failure.

The error message now suggests that there might be an issue with the configuration, such as an insufficient number of placement groups or inconsistent values set for the pg_num and pgp_num parameters, making it easier for the administrator to resolve the problem.

Use CEPH_ARGS to ensure all commands work for clusters with unique names

In Red Hat Ceph Storage, the cluster variable in group_vars/all determines the name of the cluster. Changing the default value to something else means that all the command line calls need to be changed as well. For example, if the cluster name is foo, then ceph health becomes ceph --cluster foo health.

An easier way to handle this is to use the CEPH_ARGS environment variable. In this case, run export CEPH_ARGS="--cluster foo". After that, all commands can be run in their usual form.
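For example, for a cluster named foo, the following session is equivalent to passing --cluster foo to every command; the health and osd tree calls are shown only as illustrations:

# export CEPH_ARGS="--cluster foo"
# ceph health
# ceph osd tree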

Improvements to the snapshot trimmer

This release improves control and throttling of the snapshot trimmer in the underlying Reliable Autonomic Distributed Object Store (RADOS).

A new osd_max_trimming_pgs option has been introduced, which limits how many placement groups on an OSD can be trimming snapshots at any given time. The default setting for this option is 2.

This release also restores the safe use of the osd_snap_trim_sleep option. This option adds a delay of the specified number of seconds between each dispatch of snapshot trim operations to the underlying system. By default, this option is set to 0.
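For example, both options can be tuned in the [osd] section of the Ceph configuration file. The sleep value of 0.1 seconds below is an illustrative, non-default value:

[osd]
osd_max_trimming_pgs = 2
osd_snap_trim_sleep = 0.1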

The new version of the Ceph container is fully supported

The new version of the Ceph container image is based on Red Hat Ceph Storage 2.3 and Red Hat Enterprise Linux 7.3. This version is now fully supported.

For details, see the Deploying Red Hat Ceph Storage 2 as a Container Image Red Hat Knowledgebase article.

Exporting namespaces to NFS-Ganesha

NFS-Ganesha is an NFS interface for the Ceph Object Gateway that presents buckets and objects as directories and files. With this update, NFS-Ganesha fully supports the ability to export Amazon S3 object namespaces by using NFS 4.1.
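A minimal ganesha.conf export block for the Ceph Object Gateway FSAL typically resembles the following sketch; the export ID, paths, user ID, and keys are placeholders, and the exact parameters should be taken from the guide referenced below:

EXPORT
{
    Export_ID = 1;
    Path = "/";
    Pseudo = "/";
    Access_Type = RW;
    NFS_Protocols = 4;
    Transport_Protocols = TCP;

    FSAL {
        Name = RGW;
        User_Id = "s3user";
        Access_Key_Id = "ACCESS_KEY";
        Secret_Access_Key = "SECRET_KEY";
    }
}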

For details, see the Exporting the Namespace to NFS-Ganesha section in the Red Hat Ceph Storage 2 Object Gateway Guide for Red Hat Enterprise Linux.

In addition, NFSv3 is newly added as a technology preview. For details, see the Technology Previews section.

The Troubleshooting Guide is available

With this release, the Troubleshooting Guide is available. The guide contains information about fixing the most common errors you can encounter with the Ceph Storage Cluster.
