Chapter 3. New features
This section lists all major updates, enhancements, and new features introduced in this release of Red Hat Ceph Storage.
The main features added by this release are:
Containerized Cluster
Red Hat Ceph Storage 5 supports only containerized daemons. It does not support non-containerized storage clusters. If you are upgrading a non-containerized storage cluster from Red Hat Ceph Storage 4 to Red Hat Ceph Storage 5, the upgrade process includes the conversion to a containerized deployment.
For more information, see the Upgrading a Red Hat Ceph Storage cluster from RHCS 4 to RHCS 5 section in the Red Hat Ceph Storage Installation Guide.
Cephadm
Cephadm is a new containerized deployment tool that deploys and manages a Red Hat Ceph Storage 5 cluster by connecting to hosts from the manager daemon. The cephadm utility replaces ceph-ansible for Red Hat Ceph Storage deployment. The goal of Cephadm is to provide a fully-featured, robust, and well-installed management layer for running Red Hat Ceph Storage.

The cephadm command manages the full lifecycle of a Red Hat Ceph Storage cluster.

The cephadm command can perform the following operations:

- Bootstrap a new Ceph storage cluster.
- Launch a containerized shell that works with the Ceph command-line interface (CLI).
- Aid in debugging containerized daemons.

The cephadm command uses ssh to communicate with the nodes in the storage cluster and add, remove, or update Ceph daemon containers. This allows you to add, remove, or update Red Hat Ceph Storage containers without using external tools.

The cephadm command has two main components:

- The cephadm shell launches a bash shell within a container. This enables you to run storage cluster installation and setup tasks, as well as run ceph commands in the container.
- The cephadm orchestrator commands enable you to provision Ceph daemons and services, and to expand the storage cluster.

For more information, see the Red Hat Ceph Storage Installation Guide.
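As a hedged illustration of this workflow, a minimal sketch follows; the IP address and host name are placeholders, not values from this guide:

Example

# Bootstrap a new storage cluster on the first node.
cephadm bootstrap --mon-ip 10.0.0.10

# Launch a containerized shell with the Ceph CLI available.
cephadm shell

# From within the shell, use orchestrator commands to expand the cluster.
ceph orch host add host02
ceph orch ps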
Management API
With the management API, you can create management scripts that are applicable for Red Hat Ceph Storage 5 and continue to operate unchanged for the lifecycle of that version. Incompatible changes to the API happen only across major release lines.
For more information, see the Red Hat Ceph Storage Developer Guide.
Disconnected installation of Red Hat Ceph Storage
Red Hat Ceph Storage 5 supports the disconnected installation and bootstrapping of storage clusters on private networks. A disconnected installation uses custom images, configuration files, and local hosts instead of downloading files from the network.
You can install container images that you have downloaded from a proxy host that has access to the Red Hat registry, or by copying a container image to your local registry. The bootstrapping process requires a specification file that identifies the hosts to be added by name and IP address. Once the initial monitor host has been bootstrapped, you can use Ceph Orchestrator commands to expand and configure the storage cluster.
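As a hedged sketch, such a specification file lists each host by name and IP address and can be passed to bootstrap along with a locally available container image; the host names, addresses, and placeholders below are illustrative:

Example

service_type: host
hostname: host02
addr: 10.0.0.11
---
service_type: host
hostname: host03
addr: 10.0.0.12

Syntax

cephadm --image LOCAL_REGISTRY_IMAGE bootstrap --mon-ip MON_IP --apply-spec SPECIFICATION_FILE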
See the Red Hat Ceph Storage Installation Guide for more details.
Ceph File System geo-replication
Starting with the Red Hat Ceph Storage 5 release, you can replicate Ceph File Systems (CephFS) across geographical locations or between different sites. The new cephfs-mirror daemon does asynchronous replication of snapshots to a remote CephFS.
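A minimal sketch of enabling snapshot mirroring for a file system on the source cluster, assuming the cephfs-mirror daemon is already deployed and a remote peer is configured separately; the file system name and directory path are placeholders:

Example

ceph mgr module enable mirroring
ceph fs snapshot mirror enable cephfs
ceph fs snapshot mirror add cephfs /volumes/data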
See the Ceph File System mirrors section in the Red Hat Ceph Storage File System Guide for more details.
A new Ceph File System client performance tool
Starting with the Red Hat Ceph Storage 5 release, the Ceph File System (CephFS) provides a top-like utility to display metrics on Ceph File Systems in realtime. The cephfs-top utility is a curses-based Python script that uses the Ceph Manager stats module to fetch and display client performance metrics.
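A hedged sketch of getting started with the utility; it assumes the default client.fstop user that cephfs-top expects, and the capabilities shown are illustrative:

Example

ceph mgr module enable stats
ceph auth get-or-create client.fstop mon 'allow r' mds 'allow r' osd 'allow r' mgr 'allow r'
cephfs-top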
See the Using the cephfs-top utility section in the Red Hat Ceph Storage File System Guide for more details.

Monitoring the Ceph object gateway multisite using the Red Hat Ceph Storage Dashboard
The Red Hat Ceph Storage dashboard can now be used to monitor a Ceph object gateway multisite configuration.
After the multiple zones are set up using the cephadm utility, the buckets of one zone are visible to the other zones and sites. You can also create, edit, and delete buckets on the dashboard.

See the Management of buckets of a multisite object configuration on the Ceph dashboard chapter in the Red Hat Ceph Storage Dashboard Guide for more details.
Improved BlueStore space utilization
The Ceph Object Gateway and the Ceph file system (CephFS) store small objects and files as individual objects in RADOS. With this release, the default value of BlueStore's min_alloc_size for SSDs and HDDs is 4 KB. This enables better use of space with no impact on performance.

See the OSD BlueStore chapter in the Red Hat Ceph Storage Administration Guide for more details.
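As a hedged check, you can inspect the allocation size that newly created OSDs will use; these are the standard BlueStore option names, and note that min_alloc_size takes effect only when an OSD is created:

Example

ceph config get osd bluestore_min_alloc_size_ssd
ceph config get osd bluestore_min_alloc_size_hdd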
3.1. The Cephadm utility
cephadm supports colocating multiple daemons on the same host
With this release, multiple daemons, such as Ceph Object Gateway and Ceph Metadata Server (MDS), can be deployed on the same host, thereby providing an additional performance benefit.
Example
service_type: rgw
placement:
  label: rgw
  count-per-host: 2
For single node deployments, cephadm requires at least two running Ceph Manager daemons in upgrade scenarios. Running two Ceph Manager daemons is still highly recommended outside of upgrade scenarios, but the storage cluster functions without it.
Configuration of NFS-RGW using Cephadm is now supported
In Red Hat Ceph Storage 5.0, configuring NFS-RGW required using the dashboard as a workaround, and it was recommended that such users delay the upgrade until Red Hat Ceph Storage 5.1.

With this release, NFS-RGW configuration is supported, and users with this configuration can upgrade their storage cluster; it works as expected.
Users can now bootstrap their storage clusters with custom monitoring stack images
Previously, users had to adjust the image used for their monitoring stack daemons manually after bootstrapping the cluster.
With this release, you can specify custom images for monitoring stack daemons during bootstrap by passing a configuration file formatted as follows:
Syntax
[mgr]
mgr/cephadm/container_image_grafana = GRAFANA_IMAGE_NAME
mgr/cephadm/container_image_alertmanager = ALERTMANAGER_IMAGE_NAME
mgr/cephadm/container_image_prometheus = PROMETHEUS_IMAGE_NAME
mgr/cephadm/container_image_node_exporter = NODE_EXPORTER_IMAGE_NAME
You can run bootstrap with the --config CONFIGURATION_FILE_NAME option in the command. If you have other configuration options, you can simply add the lines above to your configuration file before bootstrapping the storage cluster.
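For illustration, assuming the lines above are saved to a file, a hedged invocation would look like this; the file name and IP address are placeholders:

Example

cephadm bootstrap --mon-ip 10.0.0.10 --config initial-ceph.conf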
cephadm enables automated adjustment of osd_memory_target

With this release, cephadm enables automated adjustment of the osd_memory_target configuration parameter by default.
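For reference, a hedged example of inspecting or toggling the underlying option that controls this behavior:

Example

ceph config get osd osd_memory_target_autotune
ceph config set osd osd_memory_target_autotune true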
Users can now specify CPU limits for the daemons by service
With this release, you can customize the CPU limits for all daemons within any given service by adding the CPU limit to the service specification file via the extra_container_args field.
Example
service_type: mon
service_name: mon
placement:
  hosts:
    - host01
    - host02
    - host03
extra_container_args:
  - "--cpus=2"
---
service_type: osd
service_id: osd_example
placement:
  hosts:
    - host01
extra_container_args:
  - "--cpus=2"
spec:
  data_devices:
    paths:
      - /dev/sdb
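Assuming the specification above is saved to a file, you would apply it with the orchestrator; the file name is a placeholder:

Example

ceph orch apply -i service_spec.yaml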
cephadm now supports IPv6 networks for Ceph Object Gateway deployment

With this release, cephadm supports specifying an IPv6 network for Ceph Object Gateway specifications. An example of a service configuration file for deploying Ceph Object Gateway is:
Example
service_type: rgw
service_id: rgw
placement:
  count: 3
networks:
  - fd00:fd00:3000::/64
The ceph nfs export create rgw command now supports exporting Ceph Object Gateway users

Previously, the ceph nfs export create rgw command would only create Ceph Object Gateway exports at the bucket level.
With this release, the command creates the Ceph Object Gateway exports at both the user and bucket level.
Syntax
ceph nfs export create rgw --cluster-id CLUSTER_ID --pseudo-path PSEUDO_PATH --user-id USER_ID [--readonly] [--client_addr VALUE...] [--squash VALUE]
Example
[ceph: root@host01 /]# ceph nfs export create rgw --cluster-id mynfs --pseudo-path /bucketdata --user-id myuser --client_addr 192.168.10.0/24
3.2. Ceph Dashboard
Users can now view the HAProxy metrics on the Red Hat Ceph Storage Dashboard
With this release, Red Hat introduces a new Grafana dashboard for the ingress service used for Ceph Object Gateway endpoints. You can now view four HAProxy metrics under Ceph Object Gateway Daemons Overall Performance: Total responses by HTTP code, Total requests/responses, Total number of connections, and Current total of incoming/outgoing bytes.
Users can view mfa_ids on the Red Hat Ceph Storage Dashboard

With this release, you can view the mfa_ids for a Ceph Object Gateway user configured with multi-factor authentication (MFA) in the User Details section of the Red Hat Ceph Storage Dashboard.
3.3. Ceph Manager plugins
The global recovery event in the progress module is now optimized
With this release, computing the progress of global recovery events is optimized for a large number of placement groups in a large storage cluster by using C++ code instead of the Python module, thereby reducing CPU utilization.
3.4. The Ceph Volume utility
The lvm commands do not cause metadata corruption when run within containers

Previously, when the lvm commands were run directly within the containers, they would cause LVM metadata corruption.

With this release, ceph-volume uses the host namespace to run the lvm commands and avoids metadata corruption.
3.5. Ceph Object Gateway
Lock contention messages from the Ceph Object Gateway reshard queue are marked as informational
Previously, when the Ceph Object Gateway failed to get a lock on a reshard queue, the output log entry would appear to be an error, causing concern to customers.
With this release, the entries in the output log appear as informational and are tagged as “INFO:”.
Support for OIDC JWT validation using modulus and exponent is available

With this release, OIDC JSON web token (JWT) validation supports the use of modulus and exponent for signature calculation. It also extends the set of available methods for validating OIDC JWTs.
The role name and role session fields are now available in ops log for temporary credentials
Previously, the role name and role session were not available, and it was difficult for the administrator to know which role was being assumed and which session was active for the temporary credentials being used.

With this release, the role name and role session are available in the ops log for temporary credentials returned by AssumeRole* APIs that are used to perform S3 operations.
Users can now use the --bucket argument to process bucket lifecycles

With this release, you can provide a --bucket=BUCKET_NAME argument to the radosgw-admin lc process command to process the lifecycle for the corresponding bucket. This is convenient for debugging lifecycle problems that affect specific buckets and for backfilling lifecycle processing for specific buckets that have fallen behind.
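A hedged usage example, with the bucket name as a placeholder:

Example

[ceph: root@host01 /]# radosgw-admin lc process --bucket=mybucket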
3.6. Multi-site Ceph Object Gateway
Multi-site configuration supports dynamic bucket index resharding
Previously, only manual resharding of the buckets for multi-site configurations was supported.
With this release, dynamic bucket resharding is supported in multi-site configurations. Once the storage clusters are upgraded, enable the resharding feature and reshard the buckets either manually with the radosgw-admin bucket reshard command or automatically with dynamic resharding, independently of other zones in the storage cluster.
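A hedged sketch of enabling the feature after the upgrade, assuming the upstream zonegroup and zone feature commands are available in this release; the zone group and zone names are placeholders:

Example

radosgw-admin zonegroup modify --rgw-zonegroup=us --enable-feature=resharding
radosgw-admin period update --commit
radosgw-admin zone modify --rgw-zone=us-east --enable-feature=resharding
radosgw-admin period update --commit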
3.7. RADOS
Use the noautoscale flag to manage the PG autoscaler

With this release, the pg_autoscaler can be turned on or off globally using the noautoscale flag. This flag is set to off by default. When this flag is set, all the pools have pg_autoscale_mode as off.
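A hedged sketch of the commands for setting, unsetting, and checking the global flag:

Example

ceph osd pool set noautoscale
ceph osd pool unset noautoscale
ceph osd pool get noautoscale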
For more information, see the Manually updating autoscale profile section in the Red Hat Ceph Storage Storage Strategies Guide.
Users can now create pools with the --bulk flag

With this release, you can create pools with the --bulk flag. It uses a profile of the pg_autoscaler that provides better performance from the start: the pool has a full complement of placement groups (PGs) and scales down only when the usage ratio across the pool is not even.

If the pool does not have the --bulk flag, the pool starts out with minimal PGs.
To create a pool with the bulk flag:
Syntax
ceph osd pool create POOL_NAME --bulk
To set or unset the bulk flag of an existing pool:
Syntax
ceph osd pool set POOL_NAME bulk TRUE/FALSE/1/0
ceph osd pool unset POOL_NAME bulk TRUE/FALSE/1/0
To get the bulk flag of an existing pool:
Syntax
ceph osd pool get POOL_NAME bulk
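A short hedged example that puts these commands together; the pool name is a placeholder:

Example

ceph osd pool create testpool --bulk
ceph osd pool get testpool bulk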