Chapter 3. New features
This section lists all major updates, enhancements, and new features introduced in this release of Red Hat Ceph Storage.
3.1. The Cephadm utility
The cephadm-ansible modules
The cephadm-ansible package provides several modules that wrap the new integrated control plane, cephadm, for users who want to manage their entire data center with Ansible. It is not intended to provide backward compatibility with ceph-ansible, but it aims to deliver a supported set of playbooks that customers can use to update their Ansible integration.
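For instance, a playbook using these modules might add a new host to the storage cluster. The following is a minimal sketch that assumes the ceph_orch_host module and its name and address parameters; the host name and IP address shown are illustrative:
Example
- name: add a host to the storage cluster
  hosts: admin
  become: true
  gather_facts: false
  tasks:
    - name: add host02 to the cluster
      ceph_orch_host:
        name: host02
        address: 192.168.0.2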
See The cephadm-ansible modules for more details.
Bootstrapping a Red Hat Ceph Storage cluster is supported on Red Hat Enterprise Linux 9
With this release, cephadm bootstrap is available on Red Hat Enterprise Linux 9 hosts to enable Red Hat Ceph Storage 5.2 support for Red Hat Enterprise Linux 9. Users can now bootstrap a Ceph cluster on Red Hat Enterprise Linux 9 hosts.
cephadm rm-cluster command cleans up the old systemd unit files from the host
Previously, the rm-cluster command would tear down the daemons without removing the systemd unit files.
With this release, the cephadm rm-cluster command, along with purging the daemons, also cleans up the old systemd unit files from the host.
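For example, to purge a cluster and its systemd unit files from a host, a command of the following form can be used; the FSID shown is illustrative:
Example
[root@host01 ~]# cephadm rm-cluster --fsid a6ca415a-cde2-11eb-a41a-002590fc2544 --force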
cephadm raises health warnings if it fails to apply a specification
Previously, failures to apply a specification were reported only as a service event, which users would often not check.
With this release, cephadm raises health warnings if it fails to apply a specification, such as an incorrect pool name in an iSCSI specification, to alert users.
Red Hat Ceph Storage 5.2 supports staggered upgrade
Starting with Red Hat Ceph Storage 5.2, you can selectively upgrade large Ceph clusters managed by cephadm in multiple smaller steps.
The ceph orch upgrade start command accepts the following parameters:
- --daemon-types
- --hosts
- --services
- --limit
These parameters selectively upgrade daemons that match the provided values.
These parameters are rejected if they cause cephadm to upgrade daemons out of the supported order.
These upgrade parameters are accepted only if your active Ceph Manager daemon is on a Red Hat Ceph Storage 5.2 build. Upgrades to Red Hat Ceph Storage 5.2 from an earlier version do not support these parameters.
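For example, a first staggered step might upgrade only the Ceph Manager and Ceph Monitor daemons on two hosts; the image and host names shown are illustrative:
Example
[ceph: root@host01 /]# ceph orch upgrade start --image registry.redhat.io/rhceph/rhceph-5-rhel8:latest --daemon-types mgr,mon --hosts host01,host02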
fs.aio-max-nr is set to 1048576 on hosts with OSDs
Previously, leaving fs.aio-max-nr at its default value of 65536 on hosts managed by cephadm could cause some OSDs to crash.
With this release, fs.aio-max-nr is set to 1048576 on hosts with OSDs, and OSDs no longer crash as a result of the fs.aio-max-nr value being too low.
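You can verify the value applied on an OSD host, for example:
Example
[root@host01 ~]# sysctl fs.aio-max-nr
fs.aio-max-nr = 1048576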
ceph orch rm SERVICE_NAME command informs users whether the service they attempted to remove exists
Previously, removing a service would always return a success message, even for non-existent services, causing confusion among users.
With this release, running the ceph orch rm SERVICE_NAME command informs users whether the service they attempted to remove exists in cephadm.
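For example, to remove a Ceph Object Gateway service; the service name shown is illustrative:
Example
[ceph: root@host01 /]# ceph orch rm rgw.test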
A new playbook rocksdb-resharding.yml for the resharding procedure is now available in cephadm-ansible
Previously, the rocksdb resharding procedure entailed tedious manual steps.
With this release, the cephadm-ansible playbook rocksdb-resharding.yml is implemented to enable rocksdb resharding, which simplifies the process.
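A minimal invocation might look like the following, assuming an inventory file and that the playbook takes the OSD ID and admin node as extra variables; the variable names shown are illustrative assumptions:
Example
[ansible@admin cephadm-ansible]$ ansible-playbook -i hosts rocksdb-resharding.yml -e osd_id=7 -e admin_node=host03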
cephadm now supports deploying OSDs without an LVM layer
With this release, to support users who do not want an LVM layer for their OSDs, cephadm and ceph-volume support raw OSDs. You can include method: raw in an OSD specification YAML file passed to cephadm to deploy OSDs in raw mode without the LVM layer.
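A minimal OSD specification sketch for raw mode might look like the following; the service ID, host name, and device path are illustrative:
Example
service_type: osd
service_id: raw_osds
placement:
  hosts:
    - host01
data_devices:
  paths:
    - /dev/sdb
method: raw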
See Deploying Ceph OSDs on specific devices and hosts for more details.
3.2. Ceph Dashboard
Start, stop, restart, and redeploy actions can be performed on underlying daemons of services
Previously, orchestrator services could only be created, edited, and deleted. No action could be performed on the underlying daemons of the services.
With this release, actions such as starting, stopping, restarting, and redeploying can be performed on the underlying daemons of orchestrator services.
OSD page and landing page on the Ceph Dashboard display different colours in the usage bar of OSDs
Previously, whenever an OSD would reach near full or full status, the cluster health would change to WARN or ERROR status, but from the landing page, there was no other sign of failure.
With this release, when an OSD reaches the near full or full ratio, the OSD page for that particular OSD, as well as the landing page, displays a different colour in the usage bar.
Dashboard displays onode hit or miss counters
With this release, the dashboard provides details pulled from the BlueStore stats to display the onode hit or miss counters, helping you deduce whether increasing the RAM per OSD could improve the cluster performance.
Users can view the CPU and memory usage of a particular daemon
With this release, you can view the CPU and memory usage of a particular daemon on the Red Hat Ceph Storage Dashboard under Cluster > Host > Daemons.
Improved Ceph Dashboard features for rbd-mirroring
With this release, the RBD Mirroring tab on the Ceph Dashboard is enhanced with the following features that were previously present only in the command-line interface (CLI):
- Support for enabling or disabling mirroring in images.
- Support for promoting and demoting actions.
- Support for resyncing images.
- Improved visibility for editing site names and creating bootstrap keys.
- A blank page with a button to automatically create an rbd-mirror daemon appears if none exists.
Users can now create OSDs in simple and advanced mode on the Red Hat Ceph Storage Dashboard
With this release, to simplify OSD deployment for clusters with simpler deployment scenarios, “Simple” and “Advanced” modes for OSD creation are introduced.
You can now choose from three new options:
- Cost/Capacity-optimized: All the available HDDs are used to deploy OSDs.
- Throughput-optimized: HDDs are used as data devices and SSDs as DB/WAL devices.
- IOPS-optimized: All the available NVMe devices are used as data devices.
See Management of OSDs using the Ceph Orchestrator for more details.
Ceph Dashboard Login page displays customizable text
Corporate users want to ensure that anyone accessing their system is acknowledged and committed to complying with the legal and security terms.
With this release, a placeholder is provided on the Ceph Dashboard login page, to display a customized banner or warning text. The Ceph Dashboard admin can set, edit, or delete the banner with the following commands:
Example
[ceph: root@host01 /]# ceph dashboard set-login-banner -i filename.yaml
[ceph: root@host01 /]# ceph dashboard get-login-banner
[ceph: root@host01 /]# ceph dashboard unset-login-banner
When enabled, the Dashboard login page displays the customized text.
Major version number and internal Ceph version are displayed on the Ceph Dashboard
With this release, along with the major version number, the internal Ceph version is also displayed on the Ceph Dashboard to help users relate Red Hat Ceph Storage downstream releases to Ceph internal versions, for example, Version: 16.2.9-98-gccaadd. In the top navigation bar, click the question mark menu (?) and navigate to the About modal box to identify the Red Hat Ceph Storage release number and the corresponding Ceph version.
3.3. Ceph File System
New capabilities are available for CephFS subvolumes in ODF configured in external mode
If CephFS in ODF is configured in external mode, users want to use volume and subvolume metadata to store OpenShift-specific metadata information, such as the PVC, PV, and namespace of the volumes and subvolumes.
With this release, the following capabilities to set, get, update, list, and remove custom metadata on a CephFS subvolume are added.
Set custom metadata on the subvolume as a key-value pair using:
Syntax
ceph fs subvolume metadata set VOLUME_NAME SUBVOLUME_NAME KEY_NAME VALUE [--group-name SUBVOLUME_GROUP_NAME]
Get custom metadata set on the subvolume using the metadata key:
Syntax
ceph fs subvolume metadata get VOLUME_NAME SUBVOLUME_NAME KEY_NAME [--group-name SUBVOLUME_GROUP_NAME]
List custom metadata, key-value pairs, set on the subvolume:
Syntax
ceph fs subvolume metadata ls VOLUME_NAME SUBVOLUME_NAME [--group-name SUBVOLUME_GROUP_NAME]
Remove custom metadata set on the subvolume using the metadata key:
Syntax
ceph fs subvolume metadata rm VOLUME_NAME SUBVOLUME_NAME KEY_NAME [--group-name SUBVOLUME_GROUP_NAME] [--force]
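For example, to attach the originating PVC name to a subvolume and read it back; the volume, subvolume, group, key, and value names shown are illustrative:
Example
[ceph: root@host01 /]# ceph fs subvolume metadata set cephfs subvol1 pvc_name pvc-1234 --group-name subgroup1
[ceph: root@host01 /]# ceph fs subvolume metadata get cephfs subvol1 pvc_name --group-name subgroup1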
Reason for clone failure shows up when using the clone status command
Previously, whenever a clone failed, the only way to check the reason for failure was by looking into the logs.
With this release, the reason for clone failure is shown in the output of the clone status command:
Example
[ceph: root@host01 /]# ceph fs clone status cephfs clone1
{
  "status": {
    "state": "failed",
    "source": {
      "volume": "cephfs",
      "subvolume": "subvol1",
      "snapshot": "snap1",
      "size": "104857600"
    },
    "failure": {
      "errno": "122",
      "errstr": "Disk quota exceeded"
    }
  }
}
The reason for a clone failure is shown in two fields:
- errno: error number
- error_msg: failure error string
3.4. Ceph Manager plugins
CephFS NFS export can be dynamically updated using the ceph nfs export apply command
Previously, when updating a CephFS NFS export, the NFS-Ganesha servers were always restarted. This temporarily affected all the client connections served by the Ganesha servers, including those exports that were not updated.
With this release, a CephFS NFS export can now be dynamically updated using the ceph nfs export apply command. The NFS servers are no longer restarted every time a CephFS NFS export is updated.
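For example, to apply an updated export definition from a specification file; the cluster name and file name shown are illustrative:
Example
[ceph: root@host01 /]# ceph nfs export apply mynfs -i export.yaml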
3.5. The Ceph Volume utility
Users need not manually wipe devices prior to redeploying OSDs
Previously, users were forced to manually wipe devices prior to redeploying OSDs.
With this release, after zapping, physical volumes on devices are removed when no volume groups or logical volumes remain, so users no longer need to manually wipe devices before redeploying OSDs.
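For example, a device can be zapped with ceph-volume as follows; the device path shown is illustrative:
Example
[root@host01 ~]# ceph-volume lvm zap --destroy /dev/sdb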
3.6. Ceph Object Gateway
Ceph Object Gateway can now be configured to direct its Ops Log to an ordinary Unix file
With this release, the Ceph Object Gateway can be configured to direct its Ops Log to an ordinary Unix file, as a file-based log is simpler to work with at some sites than a Unix domain socket. The content of the log file is identical to what would be sent to the Ops Log socket in the default configuration.
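A minimal sketch of enabling the file-based Ops Log, assuming the rgw_ops_log_file_path option and an illustrative log location:
Example
[ceph: root@host01 /]# ceph config set client.rgw rgw_enable_ops_log true
[ceph: root@host01 /]# ceph config set client.rgw rgw_ops_log_file_path /var/log/ceph/rgw-ops.log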
Use the radosgw-admin lc process command to process a single bucket’s lifecycle
With this release, users can process only a single bucket’s lifecycle from the command-line interface with the radosgw-admin lc process command by specifying the bucket’s name with --bucket or its ID with --bucket-id. Processing the lifecycle for a single bucket is convenient in many situations, such as debugging.
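For example, to process the lifecycle for a single bucket by name; the bucket name shown is illustrative:
Example
[root@host01 ~]# radosgw-admin lc process --bucket testbucket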
User identity information is added to the Ceph Object Gateway Ops Log output
With this release, user identity information is added to the Ops Log output to enable customers to access this information for auditing of S3 access. User identities can be reliably tracked by S3 request in all versions of the Ceph Object Gateway Ops Log.
Log levels for Ceph Object Gateway’s HTTP access logging can be controlled independently with debug_rgw_access parameter
With this release, log levels for the Ceph Object Gateway’s HTTP access logging can be controlled independently with the debug_rgw_access parameter. This gives users the ability to disable all other Ceph Object Gateway logging, for example with debug_rgw=0, while retaining these HTTP access log lines.
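For example, a sketch of silencing general Ceph Object Gateway debug logging while keeping the HTTP access log lines, using the centralized configuration database:
Example
[ceph: root@host01 /]# ceph config set client.rgw debug_rgw 0
[ceph: root@host01 /]# ceph config set client.rgw debug_rgw_access 1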
Level 20 Ceph Object Gateway log messages are reduced when updating bucket indices
With this release, the Ceph Object Gateway level 20 log messages are reduced when updating bucket indices to remove messages that do not add value and to reduce the size of the logs.
3.7. Multi-site Ceph Object Gateway
current_time field is added to the output of several radosgw-admin commands
With this release, the current_time field is added to several radosgw-admin commands, specifically sync status, bucket sync status, metadata sync status, data sync status, and bilog status.
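For example, the output of the following command now includes a current_time field:
Example
[ceph: root@host01 /]# radosgw-admin sync status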
Logging of the HTTP client
Previously, the Ceph Object Gateway neither printed the error bodies of HTTP responses nor provided a way to match a request to its response.
With this release, more thorough logging of the HTTP client is implemented by maintaining a tag that matches an HTTP request to its HTTP response in the asynchronous HTTP client, and by logging error bodies. When the Ceph Object Gateway debug level is set to 20, error bodies and other details are printed.
Read-only role for OpenStack Keystone is now available
The OpenStack Keystone service provides three roles: admin, member, and reader. To extend the role-based access control (RBAC) capabilities to OpenStack, a new read-only admin role can now be assigned to specific users in the Keystone service.
The support scope for RBAC is based on the OpenStack release.
3.8. Packages
New version of the Grafana container provides security fixes and improved functionality
With this release, a new version of the Grafana container, rebased to grafana v8.3.5, is built, which provides security fixes and improved functionality.
3.9. RADOS
MANY_OBJECTS_PER_PG warning is no longer reported when pg_autoscale_mode is set to on
Previously, the Ceph health warning MANY_OBJECTS_PER_PG was reported in instances where pg_autoscale_mode was set to on, with no distinction between the different modes when reporting the health warning.
With this release, a check is added to omit the MANY_OBJECTS_PER_PG warning when pg_autoscale_mode is set to on.
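For example, to enable the autoscaler on a pool and confirm its mode; the pool name shown is illustrative:
Example
[ceph: root@host01 /]# ceph osd pool set testpool pg_autoscale_mode on
[ceph: root@host01 /]# ceph osd pool autoscale-status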
OSDs report the slow operations details in an aggregated format to the Ceph Manager service
Previously, slow requests would overwhelm a cluster log with too many details, filling up the monitor database.
With this release, slow requests are logged in the cluster log by operation type and pool information, based on OSDs reporting aggregated slow operation details to the Ceph Manager service.
Users can now blocklist a CIDR range
With this release, you can blocklist a CIDR range, in addition to individual client instances and IPs. In certain circumstances, you might want to blocklist all clients in an entire data center or rack instead of specifying individual clients, for example, when failing over a workload to a different set of machines and wanting to prevent the old workload instance from continuing to partially operate. This is now possible using a "blocklist range" command, analogous to the existing "blocklist" command.
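For example, to blocklist an entire subnet; the CIDR range shown is illustrative:
Example
[ceph: root@host01 /]# ceph osd blocklist range add 192.168.1.0/24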
3.10. The Ceph Ansible utility
A new Ansible playbook is now available for backup and restoring Ceph files
Previously, users had to manually back up and restore files when either upgrading the operating system from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8 or reprovisioning their machines, which was quite inconvenient, especially in case of large cluster deployments.
With this release, the backup-and-restore-ceph-files.yml playbook is added to back up and restore Ceph files, such as /etc/ceph and /var/lib/ceph, which eliminates the need for the user to manually restore files.
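A minimal invocation might look like the following, assuming an inventory file and that the playbook takes the target node, backup directory, and mode as extra variables; the variable names shown are illustrative assumptions:
Example
[ansible@admin cephadm-ansible]$ ansible-playbook -i hosts backup-and-restore-ceph-files.yml -e target_node=host01 -e backup_dir=/usr/share/cephadm-ansible/backup -e mode=backup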