Chapter 2. Overview
Red Hat Ceph Storage v1.2.3 is the first release of Ceph Storage built with the Red Hat build process. This version includes all the features of Inktank Ceph Enterprise v1.2.2 plus the enhancements described in these release notes; no major features were added between v1.2.2 and v1.2.3.
2.1. Packaging
The primary change between v1.2.2 and v1.2.3 is that Ceph is now built with the Red Hat build tools, which entails the following packaging changes:
- The ceph-mon and ceph-osd binaries were previously in the main ceph RPM. They are now split into separate ceph-mon and ceph-osd packages.
- The ceph-devel package has been split into separate librados-devel and librbd-devel packages.
- The python-ceph package has been split into separate python-rados and python-rbd packages.
- The libcephfs1 package and its headers are no longer present in v1.2.3.
- For the installer, the ice_setup.py utility is no longer a single file. It is now a full ice_setup RPM.
- The RHCS ISOs do not contain the ceph-radosgw or radosgw-agent packages. These packages are available in Red Hat’s "RH-COMMON" channel for RHEL 6 and 7. See the Object Gateway documentation for details about how to add this repository using subscription-manager, and the sketch after this list.
- The RHCS ISOs for RHEL 6 do not contain qemu-kvm. This package is available from Base RHEL or other add-on channels.
- The RHCS ISOs for RHEL 6 do not contain xfsprogs. This package is available through the Scalable File System add-on. See the Red Hat Ceph Storage Installation Guide or the Object Gateway documentation for details about how to add this repository using subscription-manager.
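
As a rough sketch of that repository step, assuming a system registered with subscription-manager (the exact repository IDs depend on your RHEL version and subscription; the IDs shown here are assumptions to verify against the --list output):

    # List available repositories and find the RH Common channel.
    subscription-manager repos --list | grep -i common

    # Enable the RH Common repository that carries ceph-radosgw and
    # radosgw-agent (repo ID assumed; adjust for RHEL 6 or 7).
    subscription-manager repos --enable=rhel-7-server-rh-common-rpms

    # On RHEL 6, enable the Scalable File System add-on for xfsprogs
    # (repo ID likewise assumed).
    subscription-manager repos --enable=rhel-scalefs-for-rhel-6-server-rpms

Consult the Installation Guide or Object Gateway documentation for the authoritative repository names.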
2.2. Ceph Core
Enhancements to Ceph’s core features include fixes for ceph-disk, namely correct dmcrypt key permissions and key location, and more robust checks for partitions.
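
For context, a minimal sketch of preparing an encrypted OSD with ceph-disk, the path these fixes affect (the device path is a placeholder, and /etc/ceph/dmcrypt-keys is the assumed default key directory):

    # Prepare an OSD whose data and journal partitions are encrypted.
    # ceph-disk writes the dmcrypt keys into the key directory; the fixes
    # in this release cover the permissions and location of those keys.
    ceph-disk prepare --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys /dev/sdb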
The librados API now closes an I/O context cleanly on shutdown and handles the reply race with pool deletion, and its C API now operates correctly when a read timeout is enabled.
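
As an illustrative sketch of exercising the read-timeout path from the rados command-line tool, which links against librados (this relies on Ceph’s general support for passing config overrides as command-line options, and assumes rados_osd_op_timeout is the relevant read-timeout setting; the pool name and timeout value are placeholders):

    # List objects in the rbd pool with a 10-second client-side read
    # timeout; this is the code path that previously misbehaved when a
    # read timeout was enabled.
    rados --rados-osd-op-timeout 10 -p rbd ls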
Enhancements to CRUSH include aligning rule and ruleset IDs, addressing negative weight issues during create_or_move_item, and preventing buffer overflows in erasure-coded pools.
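
For reference, create_or_move_item backs the create-or-move subcommand; a brief sketch (the OSD name, weight, and bucket locations are placeholders):

    # Place osd.0 at the given CRUSH location, creating it if absent or
    # moving it if it already exists. The fix concerns handling of
    # negative weights in this operation; weights should be non-negative.
    ceph osd crush create-or-move osd.0 1.00 root=default host=node1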
Erasure coding and cache tiering are technology previews only and are not supported for production clusters.