Chapter 3. New features
This section lists all major updates, enhancements, and new features introduced in this release of Red Hat Ceph Storage.
3.1. The Ceph Ansible utility
ceph-ansible playbook gathers logs from multiple nodes
With this release, the playbook gathers logs from multiple nodes in a large cluster automatically.
ceph-ansible performs additional connectivity checks between two sites
With this update, ceph-ansible performs additional connectivity checks between two sites prior to a realm pull.
The purge playbook removes unused Ceph files
With this release, the purge cluster playbook removes all unused Ceph-related files on the grafana-server node after purging the Red Hat Ceph Storage cluster.
Use the --skip-tags wait_all_osds_up option to skip the check that waits for all the OSDs to be up
With this release, during an upgrade of the storage cluster, users can pass the --skip-tags wait_all_osds_up option at Ansible runtime to skip this check, which prevents the rolling_update.yml playbook from failing when a disk failure is present.
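As a minimal sketch of how the option can be passed at Ansible runtime; the hosts inventory file name is an assumption, and the path to the playbook depends on your ceph-ansible installation:
Example
ansible-playbook -i hosts rolling_update.yml --skip-tags wait_all_osds_up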
crush_rule for existing pools can be updated
Previously, the crush_rule value for a specific pool was set during the creation of the pool and could not be updated later. With this release, the crush_rule value can be updated for an existing pool.
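As an illustration only, a pool entry in the ceph-ansible group variables might carry a rule name similar to the following; the pool and rule names are hypothetical, and the rule_name key is shown here as an assumption about the usual ceph-ansible pool dictionary format:
Example
pools:
  - name: mypool
    pg_num: 64
    rule_name: replicated_hdd_rule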
Custom crush_rule can be set for RADOS Gateway pools
With this release, RADOS Gateway pools can have custom crush_rule values, like other pools such as the OpenStack, MDS, and client pools.
Set ceph_docker_http_proxy and ceph_docker_https_proxy to resolve proxy issues with a container registry behind an HTTP(S) proxy
Previously, the environment variables defined in the /etc/profile.d directory were not loaded, resulting in failed registry login and pull operations. With this update, setting the environment variables ceph_docker_http_proxy and/or ceph_docker_https_proxy makes a container registry behind an HTTP(S) proxy work as expected.
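For example, the variables can be defined in the ceph-ansible group variables; the proxy URL below, and placing the settings in the all.yml group variables file, are assumptions for illustration:
Example
ceph_docker_http_proxy: http://proxy.example.com:3128
ceph_docker_https_proxy: http://proxy.example.com:3128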
Ceph Ansible works with Ansible 2.9 only
Previously, ceph-ansible supported the 2.8 and 2.9 versions of Ansible as a migration solution. With this release, ceph-ansible supports Ansible 2.9 only.
Dashboard is set to HTTPS by default
Previously, the dashboard was set to http. With this release, the dashboard is set to https by default.
The ceph-mon service is unmasked before exiting the playbook
Previously, during a failure, the ceph-mon systemd service would remain masked when the playbook failed, preventing the service from being restarted manually. With this release, the ceph-mon service is unmasked before the playbook exits on a failure, and users can now manually restart the ceph-mon service before rerunning the rolling update playbook.
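For example, the monitor can be restarted on the affected node before the rolling update playbook is rerun; a sketch, assuming the systemd unit instance is the monitor host's short hostname, as is typical for ceph-ansible deployments:
Example
systemctl restart ceph-mon@MONITOR_HOSTNAME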
3.2. Ceph Management Dashboard
View the user’s bucket quota usage in the Red Hat Ceph Storage Dashboard
With this release, the Red Hat Ceph Storage Dashboard displays the user’s bucket quota usage, including the current size, percentage used, and number of objects.
3.3. Ceph File System
The mgr/volumes CLI can now be used to list cephx auth IDs
Earlier, the ceph_volume_client interface was used to list the cephx auth IDs. This interface is now deprecated.
With this release, consumers like Manila can use the mgr/volumes interface to list the cephx auth IDs that are granted access to the subvolumes.
Syntax
ceph fs subvolume authorized_list VOLUME_NAME SUB_VOLUME_NAME [--group_name=GROUP_NAME]
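For example, with a hypothetical volume, subvolume, and subvolume group name:
Example
ceph fs subvolume authorized_list cephfs subvol_1 --group_name=subgroup_1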
3.4. Ceph Manager plugins
Internal Python to C++ interface is modified to improve Ceph Manager performance
Previously, pg_dump provided all the information, thereby affecting the performance of the Ceph Manager. With this release, the internal Python to C++ interface is modified so that the modules provide information on pg_ready, pg_stats, pool_stats, and osd_ping_times.
Progress module can be turned off
Previously, the progress module could not be turned off since it was an always-on manager module. With this release, the progress module can be turned off by using ceph progress off and turned on by using ceph progress on.
3.5. Ceph Object Gateway
Ceph Object Gateway's default shard requests on the bucket index, rgw_bucket_index_max_aio, increased to 128
Previously, outstanding shard requests on a bucket index were limited to 8, causing slow bucket listing performance. With this release, the default number of shard requests on a bucket index, rgw_bucket_index_max_aio, has been increased from 8 to 128, thereby improving bucket listing performance.
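The new default applies automatically, but the option can also be set explicitly; a sketch of a ceph.conf override, in which the gateway instance name is hypothetical:
Example
[client.rgw.gateway-node1]
rgw_bucket_index_max_aio = 128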
Cluster log information now includes latency information for buckets
Previously, cluster information in logs provided the latency for bucket requests, but did not specify latency information for each bucket. With this release, each line in the log includes the bucket name, object name, request ID, operation start time, and operation name.
This enhancement makes it easier for customers to gather this information when parsing logs. To calculate the latency of the operation, use an awk script to subtract the time of the log message from the time the operation started.
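As a rough sketch only, an awk one-liner of the following shape can perform that subtraction; the field positions and the assumption that both timestamps are epoch values are illustrative and must be adjusted to the actual log format:
Example
awk '{ print $0, "latency_s:", $1 - $5 }' rgw-ops.log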
The Ceph Object Gateway log includes the access log for Beast
With this release, Beast, the front-end web server, now includes an Apache-style access log line in the Ceph Object Gateway log. This update to the log helps diagnose connection and client network issues.
Explicit request timeout for the Beast front end
Previously, slow client connections, such as clients connected over high-latency networks, might be dropped if they remained idle.
With this release, the new request_timeout_ms option in /etc/ceph.conf adds the ability to set an explicit timeout for the Beast front end. The default value for request_timeout_ms is 65 seconds.
Setting a larger request timeout can make the Ceph Object Gateway more tolerant of slow clients, and can result in fewer dropped connections.
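For example, the timeout can be raised as part of the Beast front end settings; the section name and port below are assumptions, and request_timeout_ms is expressed in milliseconds:
Example
[client.rgw.gateway-node1]
rgw_frontends = beast port=8080 request_timeout_ms=90000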
List RGW objects with missing data
Previously, RGW objects that had data erroneously deleted were unknown to administrators, so they could not determine how best to address the issue. With this release, cluster administrators can use rgw-gap-list to list candidate RGW objects that may have missing data.
3.6. Multi-site Ceph Object Gateway
Data sync logging experienced delays in processing
Previously, data sync logging could be subject to delays in processing large backlogs of log entries.
With this release, data sync includes caching for bucket sync status. The addition of the cache speeds the processing of duplicate datalog entries when a backlog exists.
Multisite sync logging can now use FIFO to offload logging to RADOS data objects
Previously, multisite metadata and data logging configurations used OMAP data logs. With this release, FIFO data logging is available. To use FIFO with green field deployments, set the config option rgw_default_data_log_backing to fifo.
Configuration values are case-sensitive. Use fifo in lowercase to set config options.
To change the data log backing that sites use, use the command radosgw-admin --log-type fifo datalog type.
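For a green field deployment, a sketch of setting the default backing in ceph.conf before the gateways create their data logs; the section name is an assumption:
Example
[client.rgw.gateway-node1]
rgw_default_data_log_backing = fifo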
3.7. RADOS
Ceph messenger protocol revised to msgr v2.1
With this release, a new version of the Ceph messenger protocol, msgr v2.1, is implemented, which addresses several security, integrity, and potential performance issues that were present in the previous version, msgr v2.0. All Ceph entities, both daemons and clients, now default to msgr v2.1.
Ceph health details are logged in the cluster log
Previously, the cluster log did not have the Ceph health details, so it was difficult to determine the root cause of an issue. With this release, the Ceph health details are logged in the cluster log, which enables review of issues that might arise in the cluster.
Improvement in the efficiency of the PG removal code
Previously, the code was inefficient because it did not keep a pointer to the last deleted object in the placement group (PG) across passes, which caused an unnecessary iteration over all the objects each time. With this release, PG deletion performance is improved, with less impact on client I/O. The parameters osd_delete_sleep_ssd and osd_delete_sleep_hybrid now have a default value of 1 second.
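For example, the current value can be inspected, and tuned if needed, through the central configuration database; the OSD ID and the value shown are illustrative:
Example
ceph config get osd.0 osd_delete_sleep_ssd
ceph config set osd osd_delete_sleep_hybrid 1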
3.8. RADOS Block Devices (RBD)
New option -o noudev to run commands from a custom network namespace on the rbd kernel client
Previously, commands like rbd map and rbd unmap run from a custom network namespace on the rbd kernel client would hang until manual intervention. With this release, adding the -o noudev option, as in rbd map -o noudev and rbd unmap -o noudev, works as expected. This is particularly useful when using Multus instead of the default OpenShift SDN for networking in OCP.
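For example, with a hypothetical pool and image name:
Example
rbd map -o noudev POOL_NAME/IMAGE_NAME
rbd unmap -o noudev POOL_NAME/IMAGE_NAME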