Chapter 3. New features

This section lists all major updates, enhancements, and new features introduced in this release of Red Hat Ceph Storage.

3.1. Ceph Ansible

Support for iSCSI gateway upgrades through rolling updates

Previously, ceph-ansible could not update Ceph iSCSI gateway (iscsi-gws) nodes during a rolling upgrade. With this update to Red Hat Ceph Storage, ceph-ansible supports upgrading the iscsi-gws nodes using the rolling_update.yml Ansible playbook.
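For reference, a rolling update that includes the iSCSI gateway nodes can be started as sketched below. This assumes ceph-ansible is installed in /usr/share/ceph-ansible on the administration node and that the iSCSI gateway hosts are already defined in the Ansible inventory; adjust paths and inventory for your deployment.

cd /usr/share/ceph-ansible
cp infrastructure-playbooks/rolling_update.yml .    # copy the playbook to the top-level directory
ansible-playbook rolling_update.yml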

Support NVMe based bucket index pools

Previously, configuring Ceph to optimize storage on high-speed NVMe drives or SATA SSDs for use with the Object Gateway was an entirely manual process that required complicated LVM configuration.

With this release, the ceph-ansible package provides two new Ansible playbooks that facilitate setting up SSD storage using LVM to optimize performance when using Object Gateway. See the Using NVMe with LVM Optimally chapter in the Red Hat Ceph Storage Object Gateway for Production Guide for more information.
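As a sketch only, the new playbooks can be run from the ceph-ansible directory roughly as follows. The playbook names lv-create.yml and lv-teardown.yml and the lv_vars.yaml variables file are assumptions here; confirm the exact names and required variables in the Using NVMe with LVM Optimally chapter for your ceph-ansible version.

cd /usr/share/ceph-ansible
# create the LVM layout on the NVMe and SSD devices described in the variables file (names assumed)
ansible-playbook infrastructure-playbooks/lv-create.yml -e @lv_vars.yaml
# tear the same layout back down if needed
ansible-playbook infrastructure-playbooks/lv-teardown.yml -e @lv_vars.yaml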

3.2. Ceph Dashboard

Installation of Red Hat Ceph Storage Dashboard using the ansible user

Previously, installing the Red Hat Ceph Storage Dashboard (cephmetrics) with Ansible required root access. Traditionally, Ansible uses passwordless SSH and sudo with a regular user to install software and make changes to systems. In this release, the Red Hat Ceph Storage Dashboard can be installed with Ansible as a regular user. For more information on the Red Hat Ceph Storage Dashboard, see the Administration Guide.
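A minimal sketch of a non-root installation follows. The playbook location (/usr/share/cephmetrics-ansible/playbook.yml) is an assumption; the regular user only needs passwordless SSH to the cluster nodes and passwordless sudo on them.

# run as the regular Ansible user, not root
cd /usr/share/cephmetrics-ansible
ansible-playbook -v playbook.yml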

The Red Hat Ceph Storage Dashboard displays the amount of used and available RAM on the storage cluster nodes

Previously, there was no way to view the actual memory usage on cluster nodes from the Red Hat Ceph Storage Dashboard. With this update to Red Hat Ceph Storage, a memory usage graph has been added to the OSD Node Detail dashboard.

The Prometheus plugin for the Red Hat Ceph Storage Dashboard

Previously, the Red Hat Ceph Storage Dashboard used collectd and Graphite for gathering and reporting on Ceph metrics. With this release, Prometheus is now used for data gathering and reporting, and provides querying capabilities. Also, Prometheus is much less resource intensive. See the Red Hat Ceph Storage Administration Guide for more details on the Prometheus plugin.
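Assuming the Ceph metrics are scraped from the ceph-mgr Prometheus exporter module, as in upstream Ceph Luminous, the exporter can be enabled and checked as follows. Port 9283 is the module's usual default and may differ in your deployment.

ceph mgr module enable prometheus
# verify that metrics are being exported (host and port are examples)
curl http://ceph-mgr-host:9283/metrics | head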

The Red Hat Ceph Storage Dashboard supports OSDs provisioned by the ceph-volume utility

In this release, an update to the Red Hat Ceph Storage Dashboard adds support for displaying information on ceph-volume provisioned OSDs.

3.3. CephFS

More accurate CephFS free space information

The CephFS kernel client now reports the same, more accurate free space information as the fuse client via the df command.
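For example, with the file system kernel-mounted at an example mount point:

df -h /mnt/cephfs    # now reports the same usage figures as a ceph-fuse mount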

3.4. iSCSI Gateway

The max_data_area_mb option is configurable per-LUN

Previously, the amount of memory the kernel used to pass SCSI command data to tcmu-runner was hard-coded to 8 MB. This limit was too small for many workloads and resulted in reduced throughput or TASK SET FULL errors filling initiator-side logs. The limit can now be configured by setting the max_data_area_mb value with gwcli. Information on the new setting and command can be found in the Red Hat Ceph Storage Block Device Guide.
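As an illustration, the value can be changed per LUN from an interactive gwcli session. The disk path and the exact reconfigure syntax shown here are assumptions; see the Block Device Guide for the authoritative procedure.

gwcli
/> cd /disks/
/disks> reconfigure <pool/image> max_data_area_mb 128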

iSCSI gateway command-line utility (gwcli) supports snapshot create, delete, and rollback capabilities

Previously, snapshots of RBD-backed LUN images had to be managed with the rbd command-line utility. The gwcli utility now includes built-in support for managing LUN snapshots, so with this release all snapshot-related operations can be handled directly within gwcli.
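A hypothetical gwcli session illustrating the new capabilities is shown below; the disk path, snapshot name, and exact subcommand syntax are assumptions and should be checked against the gwcli built-in help.

gwcli
/> cd /disks/<pool/image>
> snapshot create snap1
> snapshot rollback snap1
> snapshot delete snap1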

Disabling CHAP for iSCSI gateway authentication

Previously, CHAP authentication was required when using the Ceph iSCSI gateway. With this release, CHAP authentication can be disabled with the gwcli utility or with Ceph Ansible. However, mixing clients with CHAP enabled and disabled is not supported: all clients must have CHAP either enabled or disabled. When CHAP is enabled, individual clients can still use different CHAP credentials.
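For example, CHAP can be turned off for an initiator from a gwcli session roughly as follows; the target and client IQNs are placeholders, and the auth nochap syntax should be verified against your gwcli version.

gwcli
/> cd /iscsi-target/<target_iqn>/hosts/<client_iqn>
> auth nochap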

3.5. Object Gateway

Improved Swift container ACL conformance has been added

Previously, Red Hat Ceph Storage did not support certain ACL use cases, including setting of container ACLs whose subject is a Keystone project/tenant.

With this update of Ceph, many Swift container ACLs which were previously unsupported are now supported.

Improvements to radosgw-admin sync status commands

With this update of Red Hat Ceph Storage, a new radosgw-admin bucket sync status command has been added, along with improvements to the existing sync status and data sync status commands.

These changes make it easier to inspect the progress of multisite synchronization.
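For example, overall, per-zone, and per-bucket progress can be checked with the following commands (the zone and bucket names are placeholders):

radosgw-admin sync status
radosgw-admin data sync status --source-zone=<zone>
radosgw-admin bucket sync status --bucket=<bucket>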

Automated trimming of bucket index logs

When multisite sync is used, all changes are logged in the bucket index. These logs can grow excessively large. They also are no longer needed once they have been processed by all peer zones.

With this update of Red Hat Ceph Storage, the bucket index logs are automatically trimmed and do not grow beyond a reasonable size.

Admin socket command to invalidate cache

Two new admin socket commands to manipulate the cache were added to the Ceph Object Gateway.

The cache erase <objectname> command flushes the given object from the cache.

The cache zap command erases the entire cache.

These commands can be used to help debug cache problems, or as a temporary workaround when a Ceph Object Gateway node is holding stale information in its cache, because administrators can now flush any or all objects from the cache.
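Because these are admin socket commands, on the gateway node they can be issued against the running radosgw instance, for example with ceph daemon (the daemon name is a placeholder for your gateway instance):

ceph daemon client.rgw.<gateway-node> cache erase <objectname>
ceph daemon client.rgw.<gateway-node> cache zap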

New admin socket commands for viewing the Object Gateway cache

Two new admin socket commands were added for viewing the contents of the Ceph Object Gateway cache.

The cache list [string] sub-command lists all objects in the cache. If the optional string is provided, it lists only the objects whose names contain that string.

The cache inspect <objectname> sub-command prints detailed information about the object.

These commands can be used to help debug caching problems on any Ceph Object Gateway node.
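For example, again using the admin socket of the running gateway (the daemon name is a placeholder):

ceph daemon client.rgw.<gateway-node> cache list
ceph daemon client.rgw.<gateway-node> cache list <string>      # only entries containing <string>
ceph daemon client.rgw.<gateway-node> cache inspect <objectname>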

Implementation of partial order bucket/container listing

Previously, list bucket/container operations always returned elements in a sorted order. This has high overhead with sharded bucket indexes. Some protocols can tolerate receiving elements in arbitrary order so this is now allowed. An example curl command using this new feature:

curl -X GET "http://server:8080/tb1?allow-unordered=True"

With this update to Red Hat Ceph Storage, unordered listing via Swift and S3 is supported.

Asynchronous Garbage Collection

An asynchronous mechanism for executing the Ceph Object Gateway garbage collection using the librados APIs has been introduced. The original garbage collection mechanism serialized all processing, and lagged behind applications in specific workloads. Garbage collection performance has been significantly improved, and can be tuned to specific site requirements.

Relaxed region constraint enforcement

In Red Hat Ceph Storage 3.x, using the s3cmd --region option with a zonegroup that does not exist generates an InvalidLocationConstraint error. This did not occur in Ceph 2.x because it did not strictly check the region. With this update, Ceph 3.1 adds a new rgw_relaxed_region_enforcement Boolean option that enables the relaxed behavior (no enforcement of region constraints), which is backward compatible with Ceph 2.x. The option defaults to false.
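To restore the Ceph 2.x behavior, the option can be set in ceph.conf on the Object Gateway nodes and the gateway restarted; the section name below is an example.

[client.rgw.gateway-node1]
rgw_relaxed_region_enforcement = true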

Default rgw_thread_pool_size value change to 512

The default rgw_thread_pool_size value changed from 100 to 512. This change accommodates larger workloads. Decrease this value for smaller workloads.
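If needed, the value can be overridden in ceph.conf for the gateway instance; the section name is an example and the value shown is illustrative for a smaller workload.

[client.rgw.gateway-node1]
rgw_thread_pool_size = 256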

Increased the default value for the objecter_inflight_ops option

The default value for the objecter_inflight_ops option was changed from 1024 to 24576. The original default value was insufficient to support a typical Object Gateway workload. With this enhancement, larger workloads are supported by default.
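The effective value on a running gateway can be checked through its admin socket, for example (the daemon name is a placeholder):

ceph daemon client.rgw.<gateway-node> config get objecter_inflight_ops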

3.6. Object Gateway Multisite

New --trim-delay-ms option for the radosgw-admin sync error trim command to limit the frequency of OSD operations

A trim delay option has been added to the radosgw-admin sync error trim command in Ceph Object Gateway multisite. Previously, a full trim operation could delete many OMAP keys at once, potentially impacting client workloads. With the new option, trimming can be throttled so that it has little impact on client workloads.

3.7. Packages

Rebase Ceph to version 12.2.5

Red Hat Ceph Storage 3.1 is now based on upstream Ceph Luminous 12.2.5.

3.8. RADOS

Warnings about objects with too many omap entries

With this update to Red Hat Ceph Storage, warnings are displayed for pools that contain large omap objects. They can be seen in the output of ceph health detail. Information about the large objects in a pool is printed in the cluster log. The settings that control when the warnings are generated are osd_deep_scrub_large_omap_object_key_threshold and osd_deep_scrub_large_omap_object_value_sum_threshold.
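For example, the warning appears in the health output, and the thresholds can be adjusted at runtime on the OSDs; the values shown below are illustrative only.

ceph health detail
ceph tell osd.* injectargs '--osd_deep_scrub_large_omap_object_key_threshold=2000000'
ceph tell osd.* injectargs '--osd_deep_scrub_large_omap_object_value_sum_threshold=2147483648'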

The filestore_merge_threshold option default has changed

Subdirectory merging is now disabled by default: the default value of the filestore_merge_threshold option has changed from 10 to -10. This has been observed to improve performance significantly on larger systems, with minimal impact on smaller systems. To take advantage of this performance increase, set the expected-num-objects value when creating new data pools. See the Object Gateway for Production Guide for more information.
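For example, a bucket data pool can be created with an expected object count so that FileStore pre-splits its directories up front. The pool name, placement group counts, CRUSH rule, and object count below are examples, and the positional argument order follows the ceph osd pool create syntax for this release; verify it against your version.

ceph osd pool create default.rgw.buckets.data 128 128 replicated replicated_rule 1000000000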

Logs now list PGs that are splitting

The FileStore split log now shows splitting placement groups (PGs).
