Chapter 3. Major Updates


This section lists all major updates, enhancements, and new features introduced in this release of Red Hat Ceph Storage.

Installation by using Ansible is now supported

With this release, the Ansible automation application can be used to install Red Hat Ceph Storage. The ceph-ansible utility contains a set of Ansible playbooks that allow users to add or update monitor, OSD, Ceph Object Gateway, and Ceph Metadata Server nodes.

For details, see the Installation Guide for Red Hat Enterprise Linux and Installation Guide for Ubuntu.

Red Hat Storage Console

Red Hat Storage Console is a new Red Hat product that provides a graphical management platform for Red Hat Ceph Storage 2. Red Hat Storage Console allows users to install, monitor, and manage a Red Hat Ceph Storage cluster.

For details, see the Red Hat Storage Console documentation set.

SELinux is now in enforcing mode by default

The SELinux policy for Red Hat Ceph Storage was added in Red Hat Ceph Storage 1.3.2 but was not enabled by default. With this release, SELinux is enabled by default for all nodes, except for the Red Hat Storage Console node, where it runs in permissive mode.

To learn more about SELinux, see the SELinux User’s and Administrator’s Guide for Red Hat Enterprise Linux 7.

Changed behavior of calamari-ctl initialize

The calamari-ctl initialize command no longer prompts for creating the administrator’s account. With this release, calamari-ctl initialize requires you to specify the administrator’s account by using the following command-line arguments:

calamari-ctl initialize --admin-username <username> --admin-password <password> --admin-email <email>
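For example, with placeholder values substituted for the user name, password, and email address (all three are illustrative, not defaults):

```shell
calamari-ctl initialize --admin-username admin --admin-password examplepass --admin-email admin@example.com
```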

The Ceph API contains more details about OSD nodes

The Ceph API now includes locations of the data that is stored on an OSD and the path to the OSD journal.

The API is accessible from /api/v2/cluster/<fsid>/osd?format=json. Also, the Red Hat Storage Console uses this information to infer what devices the OSD daemons use.
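As a sketch, the endpoint can be queried with curl; the host name and authentication token below are placeholders, and the exact response fields may vary between versions:

```shell
# Query OSD details, including data and journal paths, for a cluster
# identified by its FSID. Host and token are illustrative placeholders.
curl -s -H "Authorization: Token <api_token>" \
    "https://calamari.example.com/api/v2/cluster/<fsid>/osd?format=json"
```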

Support for per-tenant namespaces has been added

The Ceph Object Gateway now supports multiple tenants with associated namespaces. Previously, the Ceph Object Gateway had only a single global namespace for buckets and containers. As a consequence, names of buckets and containers had to be unique, even across distinct accounts. With this new feature, the names have to be unique only within a tenant.
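As an illustrative sketch, a user can be created under a tenant with the radosgw-admin utility; the tenant and user names here are arbitrary examples:

```shell
# Create a user scoped to the tenant "test"; bucket names created by
# this user only have to be unique within that tenant.
radosgw-admin user create --tenant test --uid testuser --display-name "Test User"
```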

For details, see the Ceph Object Gateway Guide for Red Hat Enterprise Linux and Ceph Object Gateway Guide for Ubuntu.

AWS4 authentication is now supported

Red Hat Ceph Storage now supports the Amazon Web Services (AWS) Signature Version 4 (AWS4) authentication scheme.
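The core of AWS4 is an HMAC-SHA256 signing-key derivation chained over the request date, region, and service. The following minimal Python sketch (client-side illustration only, not Ceph code; the secret key, date, region, and string-to-sign are made-up examples) shows how a client derives the signing key and signs a request:

```python
import hashlib
import hmac

def sign(key, msg):
    """One HMAC-SHA256 step in the AWS4 key-derivation chain."""
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def aws4_signing_key(secret_key, date_stamp, region, service):
    """Derive the AWS Signature Version 4 signing key.

    The chain is: HMAC("AWS4" + secret, date) -> region -> service
    -> "aws4_request".
    """
    k_date = sign(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = sign(k_date, region)
    k_service = sign(k_region, service)
    return sign(k_service, "aws4_request")

# Example with made-up credentials; a real request would use the gateway
# user's secret key and the canonical string-to-sign of the HTTP request.
key = aws4_signing_key("wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
                       "20160101", "us-east-1", "s3")
signature = hmac.new(key, b"example-string-to-sign",
                     hashlib.sha256).hexdigest()
print(len(signature))  # 64 hex characters
```

The derived signature is what the client places in the Authorization header of a request to the Ceph Object Gateway.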

crushtool now checks for overlapping CRUSH rules

The crushtool utility now checks whether CRUSH rules overlap. To run the check, use the following command. If crushtool finds any overlapping rules, it lists them:

$ crushtool --check -i <crush_file>
overlapped rules in ruleset 0: rule-r0, rule-r1, rule-r2

Image metadata is now supported

With this release, it is possible to tag images with custom key-value pairs. Also, metadata can be used to override the RADOS Block Device (RBD) image configuration settings for particular images.
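For example, a key-value pair can be set and read back with the rbd image-meta subcommands; the pool, image, and key names below are illustrative:

```shell
# Tag an image with a custom key-value pair, then read it back.
rbd image-meta set data/dataset last_update 2016-06-06
rbd image-meta get data/dataset last_update
rbd image-meta list data/dataset

# Override an RBD configuration setting for this image only by using
# a key with the conf_ prefix.
rbd image-meta set data/dataset conf_rbd_cache false
```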

For details, see the Working with Image Metadata section in the Block Device guide for Red Hat Ceph Storage 2.

Calamari now supports passing rbd commands

With this release, the Calamari /cli endpoint supports passing rbd commands.

Support for randomized scrub scheduling has been added

The new Ceph osd_scrub_interval_randomize_ratio configuration option randomizes scrub scheduling by adding a random delay to the value specified by the osd_scrub_min_interval option. As a result, scrubbing of newly created pools or placement groups does not happen at the same time, and the I/O impact is reduced.
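A sketch of the relevant [osd] section in ceph.conf; the values shown are examples, not recommended settings:

```shell
[osd]
# Scrub each placement group at least once per day (in seconds) ...
osd_scrub_min_interval = 86400
# ... plus a random delay of up to 50% of that interval, so that
# scrubs of PGs created at the same time are spread out.
osd_scrub_interval_randomize_ratio = 0.5
```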

Scrub operations are now in the client queue

The scrub and trimming operations have been moved to the client operations queue to prioritize client I/O more accurately.

The deep flatten feature has been added

Without the deep flatten feature, it is not possible to dissociate a clone from its parent if the clone has any snapshots. As a consequence, the parent image or snapshot cannot be deleted. With deep flatten enabled, it is possible to dissociate a clone from its parent and to delete snapshots that are no longer needed.

The deep flatten feature must be enabled when creating an image. It is not possible to enable it on already existing images.
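For example, deep-flatten can be requested at creation time with the --image-feature option; the pool and image names are illustrative:

```shell
# Create an image with the deep-flatten feature enabled from the start.
# The feature cannot be added to the image later.
rbd create data/dataset --size 1024 --image-feature layering,deep-flatten
```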

Support for Swift Static Large Object (SLO) has been added

Ceph now supports the Swift Static Large Object (SLO) feature. This feature allows uploading large objects. To do so, SLO splits a large object into smaller objects, uploads the smaller objects, and then treats the result as a single large object.
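In outline, the client uploads the individual segments first and then uploads a JSON manifest with the multipart-manifest=put query parameter. A sketch of such a manifest follows; the paths, ETags, and sizes are placeholders:

```shell
# Manifest body uploaded as the large object; each entry points to one
# previously uploaded segment.
[
  {"path": "/segments/seg_1", "etag": "<md5-of-seg_1>", "size_bytes": 1048576},
  {"path": "/segments/seg_2", "etag": "<md5-of-seg_2>", "size_bytes": 1048576}
]
```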

For details, see the Static Large Object (SLO) support chapter in the Configuration Reference guide for Red Hat OpenStack Platform 8.

Support for renaming snapshots

With this release, support for renaming snapshots has been added to Red Hat Ceph Storage 2. To rename a snapshot, use the following command:

rbd snap rename <pool-name>/<image-name>@<original-snapshot-name> <pool-name>/<image-name>@<new-snapshot-name>

For example:

$ rbd snap rename data/dataset@snap1 data/dataset@snap2

For details, see the Renaming Snapshots section in the Block Device guide for Red Hat Ceph Storage 2.

Keystone v3 authentication is now supported

Red Hat Ceph Storage now supports OpenStack Keystone v3 authentication. As a result, users can use Keystone v3 to authenticate to the Ceph Object Gateway.

For details, see the Using Keystone to Authenticate Ceph Object Gateway Users guide.

Block Device mirroring is now supported

The RADOS Block Device (RBD) mirroring feature has been added to Red Hat Ceph Storage 2. RBD mirroring is the process of replicating Ceph Block Device images between two peer Ceph Storage Clusters. The mirroring is asynchronous, crash-consistent, and is intended primarily for disaster recovery.

To learn more about RBD mirroring, see the Block Device Mirroring chapter in the Block Device guide for Red Hat Ceph Storage 2.

systemd now restarts failed Ceph services

When a Ceph service, such as ceph-mon or ceph-osd, fails to start, the systemd daemon now attempts to restart the service. Prior to this update, Ceph services remained in the failed state.

Image features can now be enabled or disabled on existing images

Image features, such as fast-diff, exclusive-lock, object map, or journaling, can now be enabled or disabled on existing images. The deep-flatten feature can be disabled but not enabled on existing images.
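For example, features can be toggled with the rbd feature subcommands; the pool and image names are illustrative:

```shell
# Enable features on an existing image. The object-map feature
# depends on exclusive-lock, so enable that first.
rbd feature enable data/dataset exclusive-lock
rbd feature enable data/dataset object-map

# Disable a feature that is no longer needed.
rbd feature disable data/dataset journaling
```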

For details, see the Enabling and Disabling Image Features section in the Block Device guide for Red Hat Ceph Storage 2.

The scrub feature has been enhanced

The OSD daemon now persists scrub results. In addition, a new librados API is provided to query the scrub results in detail.

Changes in the rbd command

This release introduces the following changes in the rbd command:

  • The rbd help command can now output help for a particular subcommand instead of for all options and commands. To do so, use:

    rbd help <subcommand>
    Note: The -h option still displays help for all available commands.

  • Size can now be specified in units, for example in megabytes, gigabytes, or terabytes.
  • Object size can now be specified directly rather than as a power of 2.
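For example, the last two items can be combined in a single create call; the pool and image names are illustrative:

```shell
# Request a 2 GB image with an 8 MB object size, using unit suffixes
# instead of byte counts and powers of 2.
rbd create data/dataset --size 2G --object-size 8M
```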

New multi-site configurations for the Ceph Object Gateway

Red Hat Ceph Storage now supports an active-active zone configuration of Ceph Object Gateways. Previously, only an active-passive configuration was supported, meaning that users could write only to one (master) zone. With this update, users can also write to non-master zones.

In addition, this update introduces the following changes:

  • A region is now called zone-group.
  • Support for configuring realms has been added. A realm is a single zone group or multiple zone groups with a globally unique namespace.
  • A configuration period has been added. The period is a representation of the multi-site configuration at a point in time. The current period contains an epoch that advances as changes are made to the configuration. A new period is generated when the location of the multi-site master zone changes.
  • The ceph-radosgw daemon handles the synchronization, eliminating the need for a separate synchronization agent.
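A sketch of the related radosgw-admin calls; the realm and zone-group names and the endpoint URL are illustrative:

```shell
# Create a realm that provides a globally unique namespace.
radosgw-admin realm create --rgw-realm=movies --default

# Create a master zone group within the realm.
radosgw-admin zonegroup create --rgw-zonegroup=us \
    --endpoints=http://rgw1.example.com:80 --master --default

# Commit the configuration changes as a new period.
radosgw-admin period update --commit
```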

For details, see the Multi-site chapter in the Object Gateway Guide for Red Hat Enterprise Linux and the Multi-site chapter in the Object Gateway Guide for Ubuntu.

The fast-diff feature has been added

The fast-diff feature adds the ability to quickly determine if an object has changed since the last snapshot was made. As a result, generating the difference between two images as well as using the rbd disk-usage command is faster.

For details, see the Enabling and Disabling Image Features section in the Block Device guide for Red Hat Ceph Storage 2.

Support for Swift bulk-delete has been added

Red Hat Ceph Storage now supports the Swift bulk-delete feature. With this new feature, users can delete multiple objects from an account with a single request.
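In outline, the client sends a single POST request with the bulk-delete query parameter and a newline-separated list of objects in the body. The sketch below uses curl; the host name, token, and object paths are placeholders:

```shell
# Delete two objects from one container in a single request.
curl -X POST \
    -H "X-Auth-Token: <token>" \
    -H "Content-Type: text/plain" \
    --data-binary $'/mycontainer/obj1\n/mycontainer/obj2' \
    "http://rgw.example.com/swift/v1?bulk-delete"
```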

For details, see the Bulk delete section in the Configuration Reference guide for Red Hat OpenStack Platform 8.

Swift Object Expiration is now supported

Ceph now supports expiration of Swift objects.

For details, see the Expiring Object Support section of the OpenStack documentation.

Ceph daemons now run as the ceph user and group

With this release, Ceph daemons, such as ceph-osd or ceph-mon, no longer run as root but as the ceph user that belongs to the ceph group. This change improves the security of the Ceph cluster.

Active Directory/LDAP authentication is now supported

Red Hat Ceph Storage now supports LDAP authentication. As a result, users can use LDAP accounts to access buckets in the Ceph Object Gateway. In addition, they can use LDAP authentication to authenticate against Microsoft Active Directory.
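A sketch of the relevant Ceph Object Gateway options in ceph.conf; the URI and distinguished names are placeholders for a real directory server:

```shell
# Enable LDAP authentication for S3 requests to the gateway.
rgw_s3_auth_use_ldap = true
rgw_ldap_uri = ldaps://ldap.example.com
rgw_ldap_binddn = "uid=ceph,cn=users,dc=example,dc=com"
rgw_ldap_searchdn = "cn=users,dc=example,dc=com"
rgw_ldap_dnattr = "uid"
```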

For details see the Ceph Object Gateway with LDAP/AD guide.
