Chapter 3. New features


This section lists all major updates, enhancements, and new features introduced in this release of Red Hat Ceph Storage.

3.1. The Ceph Ansible utility

Users can now purge the dashboard and monitoring stack only

Previously, users could not purge the Ceph Manager Dashboard and the monitoring stack components, such as Alertmanager, Prometheus, Grafana, and node-exporter, separately from the rest of the storage cluster.

With the `purge-dashboard.yml` playbook, users can remove only the dashboard and the monitoring stack components.
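
The following is a minimal invocation sketch; the infrastructure-playbooks path and the hosts inventory file name are assumptions based on a typical ceph-ansible layout:

Example

ansible-playbook -vv infrastructure-playbooks/purge-dashboard.yml -i hosts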

Purging a storage cluster deployed with osd_auto_discovery: true now also removes the Ceph OSDs

Previously, purging a storage cluster deployed with osd_auto_discovery: true did not purge the Ceph OSDs. With this release, the purge playbook works as expected and removes the Ceph OSDs when the storage cluster is deployed with the osd_auto_discovery: true scenario.
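
Purging such a cluster might then look like the following sketch; the purge-cluster.yml playbook name, the infrastructure-playbooks path, and the hosts inventory file name are assumptions based on a typical bare-metal ceph-ansible setup:

Example

ansible-playbook -vv infrastructure-playbooks/purge-cluster.yml -i hosts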

The Alertmanager configuration is customizable

With this release, you can customize the Alertmanager configuration using the alertmanager_conf_overrides parameter in the /group_vars/all.yml file.
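
The following sketch shows one possible override; the SMTP settings shown are standard Alertmanager options and are only illustrative:

Example

alertmanager_conf_overrides:
  global:
    smtp_smarthost: 'smtp.example.com:25'
    smtp_from: 'alertmanager@example.com'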

The Red Hat Ceph Storage Dashboard deployment is supported on a dedicated network

Previously, ceph-ansible required that the address used for deploying the dashboard be on the same subnet as the public_network.

With this release, you can override the default subnet used for the dashboard by setting the dashboard_network parameter in the /group_vars/all.yml file to the CIDR address of a dedicated subnet.
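
For example, assuming an illustrative CIDR address for the dedicated dashboard subnet:

Example

dashboard_network: 192.168.50.0/24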

Setting the global NFS options in the configuration file is supported

Previously, ceph-ansible did not allow overriding any parameter in the NFS Ganesha configuration file.

With this release, you can override any parameter in the NFS_CORE_PARAM block of the ganesha.conf file by setting the ganesha_core_param_overrides variable in the group_vars/all.yml file, which updates the client-related configuration.
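
A minimal sketch of the override follows; the multi-line string format and the NFS_CORE_PARAM options shown (Enable_NLM and NFS_Port) are assumptions based on standard NFS Ganesha settings rather than a confirmed layout:

Example

ganesha_core_param_overrides: |
  Enable_NLM = false;
  NFS_Port = 2049;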

ceph-ansible checks for the Ceph Monitor quorum before starting the upgrade

Previously, when the storage cluster was in a HEALTH_ERR or HEALTH_WARN state because one of the Ceph Monitors was down, the rolling_update.yml playbook would still run. However, the upgrade would fail and quorum would be lost, resulting in I/O disruption or a cluster failure.

With this release, ceph-ansible performs an additional check to verify that the Ceph Monitors are in quorum before starting the upgrade.

The systemd target units for containerized deployments are now supported

Previously, there was no way to stop all Ceph daemons on a node in a containerized deployment.

With this release, systemd target units are supported for containerized deployments, and you can stop all the Ceph daemons on a host, or only specific Ceph daemons, just as with bare-metal deployments.
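
For example, assuming the target unit names match those used on bare-metal deployments, such as ceph.target for all daemons and ceph-osd.target for the OSD daemons:

Example

# Stop all Ceph daemons on the host
systemctl stop ceph.target

# Stop only the Ceph OSD daemons on the host
systemctl stop ceph-osd.target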

ceph-ansible now checks the relevant release version during an upgrade before executing the playbook

With this release, during a storage cluster upgrade, ceph-ansible first checks for the relevant release version, and the playbook fails with an error message if an incorrect Ceph version is provided.

3.2. Ceph Management Dashboard

A new Grafana Dashboard to display graphs for Ceph Object Gateway multi-site setup

With this release, a new Grafana dashboard is available that displays graphs for Ceph Object Gateway multi-site sync performance, including two-way replication throughput, polling latency, and unsuccessful replications.

See the Monitoring Ceph object gateway daemons on the dashboard section in the Red Hat Ceph Storage Dashboard Guide for more information.

3.3. Ceph File System

Use max_concurrent_clones option to configure the number of clone threads

Previously, the number of concurrent clones was not configurable and the default was 4.

With this release, the maximum number of concurrent clones is configurable using the manager configuration option:

Syntax

ceph config set mgr mgr/volumes/max_concurrent_clones VALUE
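
For example, to allow up to eight concurrent clone operations; the value 8 is only illustrative:

Example

ceph config set mgr mgr/volumes/max_concurrent_clones 8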

Increasing the maximum number of concurrent clones could improve the performance of the storage cluster.

3.4. Ceph Object Gateway

The role name and the role session information are displayed in the ops log for S3 operations

With this release, the ops log includes information, such as the role name and the role session, for all S3 operations that use temporary credentials returned by AssumeRole* operations, which helps with debugging and auditing.

3.5. Multi-site Ceph Object Gateway

Data sync log processing is faster when handling large backlogs

Previously, data sync logging could be subject to delays in processing large backlogs of log entries.

With this release, data sync includes caching for bucket sync status. The addition of the cache speeds the processing of duplicate datalog entries when a backlog exists.
