
Chapter 5. Technology previews


This section provides an overview of the Technology Preview features introduced or updated in this release of Red Hat Ceph Storage.

Important

Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See the support scope for Red Hat Technology Preview features for more details.

5.1. Crimson OSD

The newly implemented Crimson OSD replaces ceph-osd as the core Ceph object storage daemon (OSD) component

With this enhancement, the next-generation OSD daemon is implemented for multi-core scalability and for improved performance with fast network and storage devices, employing state-of-the-art technologies including DPDK and SPDK. Crimson aims to be backward compatible with the classic ceph-osd daemon.

For more information, see Crimson (Technology Preview).
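The following minimal sketch shows one way to opt a test cluster in to the Crimson preview from Python by shelling out to the ceph CLI. The enablement flag and the osd set-allow-crimson command are taken from upstream Ceph and are assumptions for Red Hat Ceph Storage; follow the linked Crimson (Technology Preview) procedure for the supported steps, and do not use this on a production cluster.

```python
#!/usr/bin/env python3
"""Minimal sketch: enable the Crimson OSD technology preview on a test cluster.

Uses upstream Ceph enablement flags (an assumption for Red Hat Ceph Storage);
the linked Crimson (Technology Preview) procedure is authoritative.
"""
import subprocess


def ceph(*args: str) -> str:
    """Run a ceph CLI command and return its stdout."""
    return subprocess.run(
        ["ceph", *args], check=True, capture_output=True, text=True
    ).stdout.strip()


# Opt in to the experimental Crimson feature (upstream flag; assumption for RHCS).
ceph("config", "set", "global",
     "enable_experimental_unrecoverable_data_corrupting_features", "crimson")

# Allow Crimson OSDs to be created in this cluster.
ceph("osd", "set-allow-crimson", "--yes-i-really-mean-it")

# Inspect which object store back ends the OSDs report.
print(ceph("osd", "count-metadata", "osd_objectstore"))
```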

5.2. Ceph Object Gateway

Object storage archive zone in Red Hat Ceph Storage

With this enhancement, the archive zone receives all objects from the production zones and retains every version of every object, providing the user with an object catalog that contains the full history of each object. This provides a secure object storage deployment that guarantees data retrieval even if the objects or buckets in the production zones have been lost or compromised.

For more information, see Configuring the archive zone (Technology Preview).
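As a hedged illustration only, the sketch below shows how an archive zone might be added to an existing multisite zonegroup by creating a zone with the archive tier type and committing the period. The zonegroup, zone name, endpoint, and credentials are placeholders; the linked procedure is authoritative.

```python
#!/usr/bin/env python3
"""Minimal sketch: add an archive zone to an existing multisite zonegroup.

Zone, zonegroup, endpoint, and credential values are placeholders; follow the
linked "Configuring the archive zone" procedure for the supported steps.
"""
import subprocess


def radosgw_admin(*args: str) -> None:
    subprocess.run(["radosgw-admin", *args], check=True)


# Create the archive zone: the archive tier type keeps every version of every
# object replicated from the production zones.
radosgw_admin(
    "zone", "create",
    "--rgw-zonegroup=default",               # placeholder zonegroup
    "--rgw-zone=archive",                    # placeholder zone name
    "--endpoints=http://archive-host:8080",  # placeholder endpoint
    "--tier-type=archive",
    "--access-key=SYSTEM_ACCESS_KEY",        # placeholder system user credentials
    "--secret=SYSTEM_SECRET_KEY",
)

# Commit the period so the new zone is propagated to the rest of the realm.
radosgw_admin("period", "update", "--commit")
```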

Protect object storage data outside of the production cluster by enabling or disabling sync to an archive zone on a per-bucket basis

As an administrator, you can now recover from the archive zone any version of any object that has existed on the primary site. In the case of data loss or a ransomware attack, valid versions of all objects remain accessible if needed.

For more information, see Configuring the archive zone (Technology Preview).
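The following sketch, using the boto3 S3 client against a hypothetical archive zone endpoint, illustrates how an administrator might list the versions the archive zone holds for an object and download one of them. The endpoint, bucket, key, and credentials are placeholders.

```python
#!/usr/bin/env python3
"""Minimal sketch: recover an earlier object version from the archive zone.

The archive zone endpoint, bucket, key, and credentials are placeholders; the
archive zone keeps every version of every object, so any prior version can be
fetched by its VersionId.
"""
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://archive-host:8080",  # placeholder archive zone endpoint
    aws_access_key_id="ACCESS_KEY",           # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

bucket, key = "mybucket", "important/object.dat"  # placeholders

# List every version the archive zone holds for the object (newest first).
versions = s3.list_object_versions(Bucket=bucket, Prefix=key).get("Versions", [])
for v in versions:
    print(v["VersionId"], v["LastModified"], v["IsLatest"])

# Download a specific (for example, pre-incident) version by its VersionId.
if versions:
    wanted = versions[-1]["VersionId"]  # oldest listed version
    s3.download_file(bucket, key, "recovered.dat",
                     ExtraArgs={"VersionId": wanted})
```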

5.3. RADOS

Balancing Red Hat Ceph Storage cluster using read balancer

With this release, the read balancer is implemented to ensure that each device gets its fair share of primary OSDs, so that read requests are distributed evenly across the OSDs in the cluster. Read balancing is inexpensive and the operation is fast because no data movement is involved. Read balancing supports only replicated pools; erasure-coded pools are not supported.

For more information, see Balancing Red Hat Ceph Storage cluster using read balancer (Technology Preview) and Ceph rebalancing and recovery.
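As an illustration of the offline workflow, the sketch below exports the OSD map, asks osdmaptool to compute pg-upmap-primary adjustments for one replicated pool, and applies them. The osdmaptool --read and --read-pool options are those of the upstream Reef tooling and are an assumption here; the pool name is a placeholder, and the linked procedure describes the supported steps.

```python
#!/usr/bin/env python3
"""Minimal sketch: compute and apply primary-OSD read balancing for one
replicated pool using the offline osdmaptool workflow (upstream Reef-style
commands; see the linked procedure for the supported steps)."""
import subprocess

POOL = "rbd_pool"  # placeholder replicated pool name


def run(cmd: list[str]) -> None:
    subprocess.run(cmd, check=True)


# 1. Grab the current OSD map.
run(["ceph", "osd", "getmap", "-o", "osd.map"])

# 2. Ask osdmaptool for pg-upmap-primary adjustments that even out the number
#    of primary PGs per OSD. Only primaries are reassigned; no data is moved.
run(["osdmaptool", "osd.map", "--read", "read_balance.sh", "--read-pool", POOL])

# 3. Apply the generated `ceph osd pg-upmap-primary ...` commands.
run(["bash", "read_balance.sh"])
```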
