Chapter 3. Major Updates


This section lists all major updates, enhancements, and new features.

Support for SELinux has been added

With this release, the SELinux policy for Red Hat Ceph Storage has been added. SELinux adds another layer of security by enforcing Mandatory Access Control (MAC) over all processes. To learn more about SELinux, see the SELinux User’s and Administrator’s Guide for Red Hat Enterprise Linux 7.

SELinux support for Ceph is not enabled by default. To use it, install the ceph-selinux package. For detailed information about this process, see the SELinux section in the Red Hat Ceph Storage Installation Guide for Red Hat Enterprise Linux.

Note

All Ceph daemons are stopped while the ceph-selinux package is being installed, so the cluster node cannot serve any data during that time. This step is necessary to update the metadata of the files on the underlying file system and to make the Ceph daemons run with the correct SELinux context. The operation can take several minutes depending on the size and speed of the underlying storage.
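
A minimal sketch of the installation on a Red Hat Enterprise Linux node, assuming the package is available from the configured Red Hat Ceph Storage repositories; the complete procedure is in the Installation Guide referenced above:

    # Install the SELinux policy package for Ceph. During the installation the
    # local Ceph daemons are down while the files on the underlying file system
    # are relabeled so the daemons can run with the correct SELinux context.
    sudo yum install ceph-selinux

    # Confirm that SELinux is running in enforcing mode.
    getenforce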

Package caching for Ubuntu is now supported

With this release, a caching server can be set up to provide Red Hat Ceph Storage repositories for offline Ceph clusters. See the Package Caching for Red Hat Ceph Storage on Ubuntu article on the Red Hat Customer Portal to learn more.
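
The referenced article describes the supported setup in detail; as a rough, hypothetical illustration, if the caching host runs a standard APT caching proxy such as apt-cacher-ng, the offline Ceph nodes could be pointed at it like this (host name and port are placeholders):

    # /etc/apt/apt.conf.d/01proxy on the offline Ceph nodes
    # (hypothetical cache host; adjust to your environment)
    Acquire::http::Proxy "http://cache.example.com:3142";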

A new "ceph osd crush tree" command has been added

The CRUSH map contains a list of buckets for aggregating the devices into physical locations. With this update, a new ceph osd crush tree command has been added to Red Hat Ceph Storage. The command prints CRUSH buckets and items in a tree view. As a result, it is now easier to analyze the CRUSH map to determine a list of OSD daemons in a particular bucket.
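
For example, running the command from a node with administrative access to the cluster prints the hierarchy of CRUSH buckets, such as racks and hosts, and the OSDs under each of them:

    # Print the CRUSH buckets and the OSDs they contain in a tree view.
    ceph osd crush tree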

TCMalloc thread cache is now configurable

With Red Hat Ceph Storage 1.3.2, support for modifying the size of the TCMalloc thread cache has been added. Increasing the thread cache size significantly improves Ceph cluster performance.

To set the thread cache size, edit the value of the TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES parameter in the Ceph system configuration file: /etc/sysconfig/ceph on Red Hat Enterprise Linux or /etc/default/ceph on Ubuntu.

In addition, the default value of TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES has been changed from 32 MB to 128 MB.
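
A minimal sketch of the setting on Red Hat Enterprise Linux; the value below is the new 128 MB default expressed in bytes, and the Ceph daemons on the node must be restarted for the change to take effect:

    # /etc/sysconfig/ceph (use /etc/default/ceph on Ubuntu)
    # TCMalloc thread cache size in bytes (128 MB shown here)
    TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728

    # Restart the Ceph daemons on the node so the new value is picked up, for example:
    sudo service ceph restart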

Red Hat Satellite 5 and Red Hat Ceph Storage integration

Red Hat Ceph Storage nodes can be connected to the Red Hat Satellite 5 Server. The server then hosts package repositories and provides system updates.

Once you register your Ceph nodes with the Satellite 5 server, you can deliver upgrades to the Ceph cluster without allowing a direct connection to the Internet, and you can search and view errata applicable to the cluster nodes.
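
A minimal sketch of the registration, assuming an activation key has already been created on the Satellite 5 server (the server name and key below are hypothetical); the article referenced below covers the full procedure, including channel subscriptions:

    # Register the Ceph node with the Satellite 5 server using an activation key.
    sudo rhnreg_ks --serverUrl=https://satellite.example.com/XMLRPC \
        --activationkey=1-ceph-nodes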

To learn more, see the How to Register Ceph with Satellite 5 article on the Red Hat Customer Portal.
