Chapter 3. New features
This section lists all major updates, enhancements, and new features introduced in this release of Red Hat Ceph Storage.
The main features added by this release are:
- General availability of the BlueStore OSD back end
- Ceph Management Dashboard
- Ceph Object Gateway Beast Asio web server
- Erasure coding for Ceph Block Device
- Auto-scaling placement groups
- Web-based installation interface
- On-wire encryption
- RBD performance monitoring and metrics gathering tools
- Red Hat Enterprise Linux in FIPS mode
3.1. The ceph-ansible Utility
Ceph OSDs created with ceph-disk are migrated to ceph-volume during upgrade
When upgrading to Red Hat Ceph Storage 4, all running Ceph OSDs previously created by the ceph-disk utility will be migrated to the ceph-volume utility because ceph-disk has been deprecated in this release.
For bare-metal and container deployments of Red Hat Ceph Storage, the ceph-volume utility does a simple scan and takes over the existing Ceph OSDs deployed by the ceph-disk utility. Also, do not use these migrated devices in configurations for subsequent deployments. Note that you cannot create any new Ceph OSDs during the upgrade process.
After the upgrade, all Ceph OSDs created by ceph-disk will start and operate like any Ceph OSDs created by ceph-volume.
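For illustration, the takeover corresponds to the ceph-volume simple subcommands shown below. During an Ansible-driven upgrade these steps are run for you, so treat this only as a sketch of what happens under the hood:
ceph-volume simple scan
ceph-volume simple activate --all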
Ansible playbooks for scaling of all Ceph services
Previously, ceph-ansible playbooks offered limited scale-up and scale-down capabilities, and only for core Ceph daemons, such as Monitors and OSDs. With this update, additional Ansible playbooks allow for scaling of all Ceph services.
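For example, scaling operations are driven with ansible-playbook runs similar to the following sketch; the inventory file, host limit, and OSD ID are placeholders, and container deployments use site-container.yml instead of site.yml:
ansible-playbook -i hosts site.yml --limit new-osd-node1
ansible-playbook -i hosts infrastructure-playbooks/shrink-osd.yml -e osd_to_kill=3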
Ceph iSCSI packages merged into a single package
The ceph-iscsi-cli and ceph-iscsi-config packages have been merged into one package named ceph-iscsi.
The nfs-ganesha service is now supported as a standalone deployment
Red Hat OpenStack Platform director (OSPd) requires the ceph-ansible utility to be able to deploy the nfs-ganesha service and configure it so that it points to an external, unmanaged, pre-existing Ceph cluster. As of Red Hat Ceph Storage 4, ceph-ansible allows the deployment of an internal nfs-ganesha service with an external Ceph cluster.
Ceph containers can now write logs to their respective daemon files
Previously, logging in containerized Ceph environments did not allow limiting the journalctl output when examining log data collected with sosreport. With this release, logging can be enabled or disabled for a particular Ceph daemon with the following command:
ceph config set daemon.id log_to_file true
Where daemon is the type of the daemon and id is its ID. For example, to enable logging for the Monitor daemon with ID mon0:
ceph config set mon.mon0 log_to_file true
This new feature makes debugging easier.
Ability to configure Ceph Object Gateway to use TLS encryption
This release of Red Hat Ceph Storage provides the ability to configure the Ceph Object Gateway listener with an SSL certificate for TLS encryption by using the radosgw_frontend_ssl_certificate variable to secure the Transmission Control Protocol (TCP) traffic.
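For example, in the ceph-ansible group_vars/all.yml file, the variable points to a PEM file containing both the certificate and the key, and is typically paired with the HTTPS port; the path shown here is only a placeholder:
radosgw_frontend_ssl_certificate: /etc/ceph/private/rgw.pem
radosgw_frontend_port: 443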
Ansible playbook for migrating OSDs from FileStore to BlueStore
A new Ansible playbook has been added to migrate OSDs from FileStore to BlueStore. The object store migration is not done as part of the upgrade process to Red Hat Ceph Storage 4. Do the migration after the upgrade completes. For details, see the How to migrate the object store from FileStore to BlueStore section in the Red Hat Ceph Storage Administration Guide.
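For example, from the ceph-ansible directory the migration playbook can be run one OSD node at a time, as sketched below; the inventory file and node name are placeholders, and the exact options are documented in the guide referenced above:
ansible-playbook -i hosts infrastructure-playbooks/filestore-to-bluestore.yml --limit osd-node1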
3.2. Ceph Management Dashboard
Improvements to information for pool usage
With this enhancement, valuable information was added to the pools table. The following columns were added: usage, read bytes, write bytes, read operations, and write operations. Also, the Placement Groups column was renamed to Pg Status.
Red Hat Ceph Storage Dashboard alerts
Red Hat Ceph Storage Dashboard supports alerts based on Ceph metrics and configured thresholds. The Prometheus AlertManager configures, gathers, and triggers the alerts. The alerts are displayed in the Dashboard as pop-up notifications in the upper-right corner. You can view details of recent alerts in Cluster > Alerts. You can configure the alerts only in Prometheus, but you can temporarily mute them from the Dashboard by creating "Alert Silences" in Cluster > Silences.
Displaying and hiding Ceph components in Dashboard
In the Red Hat Ceph Storage Dashboard you can display or hide Ceph components, such as Ceph iSCSI, RBD mirroring, Ceph Block Devices, Ceph File System, or Ceph Object Gateway. This feature allows you to hide components that are not configured.
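For example, the same components can be toggled from the command line with the dashboard feature commands; iscsi and rbd below are just two of the available feature names:
ceph dashboard feature status
ceph dashboard feature disable iscsi
ceph dashboard feature enable rbd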
Ceph dashboard has been added to the Ceph Ansible playbooks
With this release, the Ceph dashboard installation code was merged into the Ceph Ansible playbooks. Ceph Ansible does a containerized deployment of the Ceph dashboard regardless of the Red Hat Ceph Storage deployment type, bare metal or containers. These four new roles were added: ceph-grafana, ceph-dashboard, ceph-prometheus, and ceph-node-exporter.
Viewing the cluster hierarchy from the Dashboard
Red Hat Ceph Storage Dashboard provides the ability to view the cluster hierarchy. For details see the Viewing the CRUSH map section in the Dashboard Guide for Red Hat Ceph Storage 4.
3.3. Ceph File System
ceph -w now shows information about CephFS scrubs
Previously, there was no way to check the status of ongoing Ceph File System (CephFS) scrubs other than checking the Metadata Server (MDS) logs. With this update, the ceph -w command shows information about active CephFS scrubs to better understand their status.
ceph-mgr volumes module for managing CephFS exports
This release provides the Ceph Manager (ceph-mgr) volumes module to manage Ceph File System (CephFS) exports. The volumes module implements the following file system export abstractions:
- FS volumes, an abstraction for CephFS file systems
- FS subvolumes, an abstraction for independent CephFS directory trees
- FS subvolume groups, an abstraction for a directory level higher than FS subvolumes, used to effect policies, such as file layouts, across a set of subvolumes
In addition, these new commands are now supported (see the example after this list):
- fs subvolume ls for listing subvolumes
- fs subvolumegroup ls for listing subvolume groups
- fs subvolume snapshot ls for listing subvolume snapshots
- fs subvolumegroup snapshot ls for listing subvolume group snapshots
- fs subvolume snapshot rm for removing subvolume snapshots
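For example, the following commands sketch creating and then listing a subvolume; the volume, group, and subvolume names are placeholders:
ceph fs subvolumegroup create cephfs group1
ceph fs subvolume create cephfs subvol1 --group_name group1
ceph fs subvolume ls cephfs --group_name group1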
3.4. Ceph Medic
ceph-medic can check the health of Ceph running in containers
With this release, the ceph-medic utility can now check the health of a Red Hat Ceph Storage cluster running within a container.
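For example, a check run against an Ansible inventory file might look like the following; the inventory path is a placeholder:
ceph-medic --inventory /etc/ansible/hosts check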
3.5. iSCSI Gateway
Non-administrative Ceph users can now be used for the ceph-iscsi service
As of Red Hat Ceph Storage 4, non-administrative Ceph users may be used for the ceph-iscsi service by setting the cluster_client_name option in the /etc/ceph/iscsi-gateway.cfg file on all iSCSI Gateways. This allows resources to be restricted based on users.
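For example, the relevant setting in /etc/ceph/iscsi-gateway.cfg might look like the following; the user name client.iscsi is a placeholder and must match a CephX user created with appropriate capabilities:
[config]
cluster_client_name = client.iscsi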
Running Ceph iSCSI Gateways can now be removed
As of Red Hat Ceph Storage 4, running iSCSI Gateways can now be removed from a Ceph iSCSI cluster for maintenance or to reallocate resources. The gateway’s iSCSI target and its portals will be stopped, and all iSCSI target objects for that gateway will be removed from the kernel and the gateway configuration. Removing gateways that are down is not yet supported.
3.6. Object Gateway
The Beast HTTP front end
In Red Hat Ceph Storage 4, the default HTTP front end for the Ceph Object Gateway is Beast. The Beast front end uses the Boost.Beast library for HTTP parsing and the Boost.Asio library for asynchronous I/O. For details, see the Using the Beast front end section in the Object Gateway Configuration and Administration Guide for Red Hat Ceph Storage 4.
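For example, the front end is controlled through the rgw_frontends option; the following ceph.conf sketch assumes a gateway instance named client.rgw.gateway-node1 and an example endpoint address:
[client.rgw.gateway-node1]
rgw frontends = beast endpoint=192.168.0.100:8080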
Support for S3 MFA-Delete
With this release, the Ceph Object Gateway supports S3 MFA-Delete using Time-Based One-Time Password (TOTP) one-time passwords as an authentication factor. This feature adds security against inappropriate data removal. You can configure buckets to require a TOTP one-time token in addition to standard S3 authentication to delete data.
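For example, a TOTP token can be created and associated with a user by using radosgw-admin before MFA-Delete is enabled on a versioned bucket; the user ID, serial, and seed below are placeholders:
radosgw-admin mfa create --uid=johndoe --totp-serial=MFAtest --totp-seed=23456723abed0b9c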
Users can now create new IAM policies and roles using REST APIs
With the release of Red Hat Ceph Storage 4, REST APIs for IAM roles and user policies are now available in the same namespace as S3 APIs and can be accessed using the same endpoint as S3 APIs in the Ceph Object Gateway. This allows end users to create new IAM policies and roles using REST APIs.
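As a sketch only, assuming a user that has been granted the roles administrative capability, a role could then be created by pointing a standard AWS IAM client at the gateway endpoint; the endpoint URL, role name, and policy file below are hypothetical:
radosgw-admin caps add --uid=johndoe --caps="roles=*"
aws --endpoint-url http://rgw.example.com:8080 iam create-role --role-name S3Access --assume-role-policy-document file://trust-policy.json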
3.7. Packages
Ability to install a Ceph cluster using a web-based interface
With this release, the Cockpit web-based interface is supported. Cockpit allows you to install a Red Hat Ceph Storage 4 cluster and other components, such as Metadata Servers, Ceph clients, or the Ceph Object Gateway on bare metal or in containers. For details, see the Installing Red Hat Ceph Storage using the Cockpit Web User Interface chapter in the Red Hat Ceph Storage 4 Installation Guide. Note that minimal experience with Red Hat Ceph Storage is required.
3.8. RADOS
Ceph on-wire encryption
Starting with Red Hat Ceph Storage 4, you can enable encryption for all Ceph traffic over the network with the introduction of the messenger version 2 protocol. For details, see the Ceph on-wire encryption chapter in the Architecture Guide and the Encryption in transit section in the Data Security and Hardening Guide for Red Hat Ceph Storage 4.
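For example, the secure mode of the messenger v2 protocol can be enabled cluster-wide with the following settings; applying it to all three scopes at once is shown here only as a sketch:
ceph config set global ms_cluster_mode secure
ceph config set global ms_service_mode secure
ceph config set global ms_client_mode secure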
OSD BlueStore is now fully supported
BlueStore is a new back end for the OSD daemons that allows for storing objects directly on the block devices. Because BlueStore does not need any file system interface, it improves performance of Ceph storage clusters. To learn more about the BlueStore OSD back end, see the OSD BlueStore chapter in the Administration Guide for Red Hat Ceph Storage 4.
Red Hat Enterprise Linux in FIPS mode
With this release, you can install Red Hat Ceph Storage on Red Hat Enterprise Linux where the FIPS mode is enabled.
Changes to the ceph df output and a new ceph osd df command
The output of the ceph df command has been improved. Notably, the RAW USED and %RAW USED values now show the preallocated space for the db and wal BlueStore partitions. The ceph osd df command shows the OSD utilization stats, such as the amount of written data.
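For example, the following commands display the cluster-wide and per-OSD utilization views:
ceph df detail
ceph osd df tree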
Asynchronous recovery for non-acting OSD sets
Previously, recovery in Ceph was a synchronous process: write operations to objects were blocked until those objects were recovered. In this release, the recovery process is asynchronous, so write operations are no longer blocked for objects that only need to be recovered on OSDs in the non-acting set. This new feature requires having more than the minimum number of replicas, so that there are enough OSDs in the non-acting set.
The new configuration option, osd_async_recovery_min_cost, controls how much asynchronous recovery to do. The default value for this option is 100. A higher value means less asynchronous recovery, whereas a lower value means more asynchronous recovery.
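For example, the option can be adjusted at runtime in the usual way; the value below is only illustrative:
ceph config set osd osd_async_recovery_min_cost 500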
Configuration is now stored in Monitors accessible by using ceph config
In this release, Red Hat Ceph Storage centralizes configuration in the Monitors instead of using the Ceph configuration file (ceph.conf). Previously, changing the configuration included manually updating ceph.conf, distributing it to the appropriate nodes, and restarting all affected daemons. Now, the Monitors manage a configuration database that has the same semantic structure as ceph.conf. The database is accessible by using the ceph config command. Any changes to the configuration are applied to the daemons or clients in the system immediately, and restarting them is no longer needed. Use the ceph config -h command for details on the available set of commands. Note that a Ceph configuration file is still required to identify the Monitor nodes.
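For example, the following commands sketch setting, querying, and dumping centralized options; the option name, value, and daemon ID are ordinary examples:
ceph config set osd osd_memory_target 4294967296
ceph config get osd.0 osd_memory_target
ceph config dump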
Placement groups can now be auto-scaled
Red Hat Ceph Storage 4 introduces the ability to auto-scale placement groups (PGs). The number of PGs in a pool plays a significant role in how a cluster peers, distributes data, and rebalances. Auto-scaling the number of PGs can make managing the cluster easier. The new pg-autoscaling capability provides recommendations for scaling PGs, or automatically scales PGs based on how the cluster is being used. For more details about auto-scaling PGs, see the Auto-scaling placement groups section in the Storage Strategies Guide for Red Hat Ceph Storage 4.
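For example, the autoscaler can be enabled and inspected as follows; the pool name is a placeholder:
ceph mgr module enable pg_autoscaler
ceph osd pool set mypool pg_autoscale_mode on
ceph osd pool autoscale-status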
Introduction of the diskprediction module
The Red Hat Ceph Storage diskprediction module gathers metrics to predict disk failures before they happen. The module has two modes, cloud and local. With this release, only the local mode is supported. The local mode does not require any external server for data analysis. It uses an internal predictor module for the disk prediction service, and then returns the disk prediction result to the Ceph system.
To enable the diskprediction module:
ceph mgr module enable diskprediction_local
To set the prediction mode:
ceph config set global device_failure_prediction_mode local
To disable the diskprediction module:
ceph config set global device_failure_prediction_mode none
New configurable option: mon_memory_target
Red Hat Ceph Storage 4 introduces a new configurable option, mon_memory_target, used to set the target amount of bytes for Monitor memory usage. It specifies the amount of memory to allocate and manage using the priority cache tuner for the associated Monitor daemon caches. The default value of mon_memory_target is 2 GiB, and you can change it during runtime with:
# ceph config set global mon_memory_target size
Prior to this release, as a cluster scaled, the Monitor-specific RSS usage exceeded the limits that were set using the mon_osd_cache_size option, which led to issues. This enhancement allows for improved management of the memory allocated to the Monitor caches and keeps the usage within the specified limits.
3.9. Block Devices (RBD)
Erasure coding for Ceph Block Device
Erasure coding for Ceph Block Devices (RBD) is now fully supported. This feature allows RBD images to store their data in an erasure-coded pool. For details, see the Erasure Coding with Overwrites section in the Storage Strategies Guide for Red Hat Ceph Storage 4.
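For example, the following commands sketch creating an erasure-coded data pool for RBD; the pool names, PG count, and image size are placeholders, and the image metadata still lives in a replicated pool (rbd here):
ceph osd pool create rbd_ec_data 64 64 erasure
ceph osd pool set rbd_ec_data allow_ec_overwrites true
ceph osd pool application enable rbd_ec_data rbd
rbd create --size 10G --data-pool rbd_ec_data rbd/image1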
RBD performance monitoring and metrics gathering tools
Red Hat Ceph Storage 4 now incorporates new Ceph Block Device performance monitoring utilities for aggregated RBD image metrics for IOPS, throughput, and latency. Per-image RBD metrics are now available using the Ceph Manager Prometheus module, the Ceph Dashboard, and the rbd CLI using the rbd perf image iostat or rbd perf image iotop commands.
Cloned images can be created from non-primary images
Creating cloned child RBD images from a mirrored non-primary parent image is now supported. Previously, cloning of mirrored images was supported only for primary images. When cloning golden images for virtual machines, this restriction prevented the creation of new cloned images from the golden non-primary image. This update removes this restriction, and cloned images can be created from non-primary mirrored images.
Segregating RBD images within isolated namespaces within the same pool
RBD images can now be segregated within isolated namespaces within the same pool. Previously, when using Ceph Block Devices directly without a higher-level system, such as OpenStack or OpenShift Container Storage, it was not possible to restrict user access to specific RBD images. When combined with CephX capabilities, users can now be restricted to specific pool namespaces to restrict access to RBD images.
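For example, the following commands sketch creating a namespace, restricting a user to it, and creating an image in it; the pool, namespace, user, and image names are placeholders:
rbd namespace create --pool mypool --namespace project1
ceph auth get-or-create client.project1 mon 'profile rbd' osd 'profile rbd pool=mypool namespace=project1'
rbd create --size 1G mypool/project1/image1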
Moving RBD images between different pools within the same cluster
This version of Red Hat Ceph Storage adds the ability to move RBD images between different pools within the same cluster. For details, see the Moving images between pools section in the Block Device Guide for Red Hat Ceph Storage 4.
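For example, moving an image is a multi-step live migration; the pool and image names below are placeholders:
rbd migration prepare sourcepool/image1 targetpool/image1
rbd migration execute targetpool/image1
rbd migration commit targetpool/image1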
Long-running RBD operations can run in the background
Long-running RBD operations, such as image removal or cloned image flattening, can now be scheduled to run in the background. RBD operations that involve iterating over every backing RADOS object for the image can take a long time depending on the size of the image. When using the CLI to perform one of these operations, the rbd CLI is blocked until the operation is complete. These operations can now be scheduled to run by the Ceph Manager as a background task by using the ceph rbd task add commands. The progress of these tasks is visible on the Ceph dashboard as well as by using the CLI.
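For example, background removal and flattening tasks can be queued and tracked as follows; the pool and image names are placeholders:
ceph rbd task add remove mypool/image1
ceph rbd task add flatten mypool/clone1
ceph rbd task list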
3.10. RBD Mirroring
Support for multiple active instances of RBD mirror daemon in a single storage cluster
Red Hat Ceph Storage 4 now supports deploying multiple active instances of the RBD mirror daemon in a single storage cluster. This enables multiple RBD mirror daemons to share the replication load for RBD images or pools by using an algorithm that chunks the images across the active mirroring daemons.