Chapter 2. Overview


Red Hat Ceph Storage v1.2.3 is the first release of Ceph Storage built with the Red Hat build process. This version includes all the features of Inktank Ceph Enterprise v1.2.2 plus the enhancements described in these release notes. No major features were added between v1.2.2 and v1.2.3.

2.1. Packaging

The primary change between v1.2.2 and v1.2.3 is that Ceph is now built using the Red Hat build tools, which affects packaging. The packaging changes are:

  1. The ceph-mon and ceph-osd binaries were previously in the main ceph RPM. They are now split into separate ceph-mon and ceph-osd packages.
  2. The ceph-devel package has been split into separate librados-devel and librbd-devel packages.
  3. The python-ceph package has been split into separate python-rados and python-rbd packages.
  4. The libcephfs1 package and its headers are no longer present in v1.2.3.
  5. For the installer, the ice_setup.py utility is no longer shipped as a single file; it is now a full ice_setup RPM.
  6. The RHCS ISOs do not contain the ceph-radosgw or radosgw-agent packages. These packages are available in Red Hat’s "RH-COMMON" channel for RHEL 6 and 7. See the Object Gateway documentation for details about adding this repository using subscription-manager.
  7. The RHCS ISOs for RHEL 6 do not contain qemu-kvm. This package is available from Base RHEL or other add-on channels.
  8. The RHCS ISOs for RHEL 6 do not contain xfsprogs. This package is available through the Scalable File System add-on. See the Red Hat Ceph Storage Installation Guide or the Object Gateway documentation for details about adding this repository using subscription-manager.
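As a sketch of the repository steps above, the add-on channels can be enabled with subscription-manager before installing the affected packages. The repository IDs shown are assumptions for illustration; verify the exact IDs for your RHEL version and subscription with `subscription-manager repos --list`.

```shell
# Enable the RH-COMMON channel that carries ceph-radosgw and radosgw-agent
# (repository ID assumed; shown here for RHEL 7).
subscription-manager repos --enable=rhel-7-server-rh-common-rpms

# On RHEL 6, xfsprogs comes from the Scalable File System add-on
# (repository ID assumed).
subscription-manager repos --enable=rhel-scalefs-for-rhel-6-server-rpms

# Install the gateway packages once the repositories are enabled.
yum install ceph-radosgw radosgw-agent
```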

2.2. Ceph Core

Enhancements to Ceph’s core features include fixes for ceph-disk, namely dmcrypt key permissions and key location, as well as more robust partition checks.
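For illustration, the dmcrypt key handling mentioned above applies when preparing an encrypted OSD with ceph-disk. A minimal sketch, assuming the device path is a placeholder and the key directory is the default:

```shell
# Prepare an OSD with dmcrypt encryption; ceph-disk writes the dmcrypt keys
# into the key directory (default shown) with the corrected permissions.
ceph-disk prepare --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys /dev/sdb
```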

The librados API now closes an I/O context cleanly on shutdown and handles the reply race with pool deletion, and its C API now operates correctly when a read timeout is enabled.
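The read timeout referred to above is enabled through client-side options in ceph.conf. A minimal configuration fragment, with illustrative values:

```
[client]
    # Time out OSD operations that receive no reply within 30 seconds;
    # with v1.2.3 the librados C API behaves correctly when this is set.
    rados osd op timeout = 30
    # Timeout for the initial connection to the monitors.
    client mount timeout = 60
```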

Enhancements to CRUSH include aligning rule and ruleset IDs, addressing negative weight issues during create_or_move_item, and preventing buffer overflows in erasure-coded pools.

Erasure-coding and cache tiering are tech previews only and are not supported for production clusters.
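As a sketch of how these tech-preview features are exercised (pool names and placement-group counts are illustrative; not for production clusters):

```shell
# Create an erasure-coded base pool and a replicated cache pool.
ceph osd pool create ecpool 128 128 erasure
ceph osd pool create cachepool 128

# Place the cache pool in front of the erasure-coded pool as a writeback tier.
ceph osd tier add ecpool cachepool
ceph osd tier cache-mode cachepool writeback
ceph osd tier set-overlay ecpool cachepool
```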
