Chapter 3. New features
This section lists all major updates, enhancements, and new features introduced in this release of Red Hat Ceph Storage.
3.1. The Cephadm utility
Add cmount_path option and generate unique user ID
With this enhancement, you can add the optional cmount_path option and generate a unique user ID for each Ceph File System. This allows CephFS clients to be shared across multiple Ganesha exports, thereby reducing the memory usage of a single CephFS client.
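As an illustration, a hypothetical NFS export specification (the cluster, file system, and path names are placeholders) could set cmount_path in the FSAL block so that exports sharing the same fs_name and cmount_path reuse a single CephFS client:

```json
{
  "export_id": 1,
  "path": "/",
  "pseudo": "/cephfs-a",
  "access_type": "RW",
  "protocols": [4],
  "fsal": {
    "name": "CEPH",
    "fs_name": "myfs",
    "cmount_path": "/"
  }
}
```

Such a specification would typically be applied with a command like `ceph nfs export apply mycluster -i export.json`.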
TLS is enabled across all monitoring components, enhancing security for Prometheus
With this enhancement, TLS is enabled across the monitoring stack to safeguard data integrity and confidentiality and to align with security best practices. Secure communication between Prometheus, Alertmanager, and Node Exporter adds an additional layer of protection across the monitoring stack.
Enhanced security of the monitoring stack
With this enhancement, authentication is implemented for Prometheus, Alertmanager, and Node Exporter, and TLS is enabled in all monitoring components. Users must provide valid credentials before accessing Prometheus and Alertmanager data. Together, secure communication and mandatory authentication prevent unauthorized access to sensitive metrics and monitoring data, enhancing overall security and control over data access.
Users can now put a host into maintenance mode
Previously, users could not put a host into maintenance mode as stopping all the daemons on that host would cause data unavailability.
With this enhancement, a stronger force flag called --yes-i-really-mean-it is added to the ceph orch host maintenance enter command. Users can now put a host into maintenance mode, even when Cephadm warns against it.
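A minimal sketch, assuming a host named host01 (a placeholder); the new flag overrides Cephadm's warnings:

```console
ceph orch host maintenance enter host01 --yes-i-really-mean-it
```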
Users can now drain a host of daemons without draining the client conf or keyring files
With this enhancement, users can drain a host of daemons without also draining the client conf or keyring files deployed on the host, by passing the --keep-conf-keyring flag to the ceph orch host drain command. Users can now mark a host to have all daemons drained, or not placed there, while still having Cephadm manage conf and keyring files on the host.
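As a sketch, with a placeholder host name, draining daemons while keeping the conf and keyring files might look like:

```console
ceph orch host drain host01 --keep-conf-keyring
```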
Cephadm services can now be marked as managed or unmanaged
Previously, the only way to mark Cephadm services as managed or unmanaged was to edit and re-apply the service specification for the service. This was inconvenient for scenarios such as temporarily stopping the creation of OSDs on devices that match an OSD specification in Cephadm.
With this enhancement, commands are added to mark Cephadm services as managed or unmanaged. For example, you can run the ceph orch set-unmanaged mon or ceph orch set-managed mon command. These new commands allow toggling the states without having to edit and re-apply the service specification.
Users can now apply client IP restrictions on the NFS deployment using the HAProxy protocol mode
Previously, users could not apply client IP restrictions while still using HAProxy between the client and NFS, because only the HAProxy IP would be recognized by NFS, making proper client IP restriction impossible.
With this enhancement, it is possible to deploy an NFS service in HAProxy protocol mode by passing the --ingress-mode=haproxy-protocol argument to the ceph nfs cluster create command, or by setting enable_haproxy_protocol: true in both the NFS service specification and the corresponding ingress specification. Users can now apply proper client IP restrictions on their NFS deployment.
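A hypothetical pair of specifications (service IDs, placement counts, and the virtual IP are placeholders) showing enable_haproxy_protocol set in both the ingress and the NFS specification, as described above:

```yaml
service_type: ingress
service_id: nfs.mynfs
placement:
  count: 1
spec:
  backend_service: nfs.mynfs
  frontend_port: 2049
  monitor_port: 9000
  virtual_ip: 192.0.2.10/24
  enable_haproxy_protocol: true
---
service_type: nfs
service_id: mynfs
placement:
  count: 1
spec:
  enable_haproxy_protocol: true
```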
3.2. Ceph File System
quota.max_bytes is now set in more understandable size values
Previously, the quota.max_bytes value was set in bytes, often resulting in very large values that were hard to set or change.
With this enhancement, quota.max_bytes values can now be set with human-friendly values, such as M/Mi, G/Gi, or T/Ti. For example, 10GiB or 100K.
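As a sketch, assuming a CephFS file system mounted at /mnt/cephfs (a placeholder path), the quota can be set through the extended attribute with a human-friendly value:

```console
setfattr -n ceph.quota.max_bytes -v 10Gi /mnt/cephfs/dir
getfattr -n ceph.quota.max_bytes /mnt/cephfs/dir
```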
Laggy clients are now evicted only if there are no laggy OSDs
Previously, monitoring performance dumps from the MDS would sometimes show that the OSDs were laggy (objecter.op_laggy and objecter.osd_laggy), causing laggy clients, and the dirty data for cap revokes would not be flushed.
With this enhancement, if the defer_client_eviction_on_laggy_osds parameter is set to true and a client becomes laggy because of a laggy OSD, client eviction does not take place until the OSDs are no longer laggy.
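The parameter named above can be enabled cluster-wide with, for example:

```console
ceph config set mds defer_client_eviction_on_laggy_osds true
```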
3.3. Ceph Dashboard
Improved overview dashboard utilization panel
With this enhancement, graphs and graph legends are improved for better usability. In addition, search queries are improved to give better results.
Upgrade the cluster from the dashboard
Previously, a Red Hat Ceph Storage cluster could only be upgraded through the command-line interface.
With this enhancement, you can easily view and upgrade the available versions of the storage cluster and also track the upgrade progress from the Ceph dashboard.
The Ceph Dashboard has an Overview page that displays the Ceph Object Gateway information
With this enhancement, an Overview page is added to the Object Gateway section of the Ceph dashboard. This page enhances the user experience by providing insights into Object Gateway performance, storage usage, and configuration details. When multi-site is configured in the cluster, users can also see the multi-site sync status directly on the Overview page.
The Ceph dashboard displays capacity usage information for Block Device images
With this enhancement, a new capacity usage progress bar is visible within the Block > Images table on the Ceph dashboard. The bar provides visible usage information, along with a percentage.
This progress bar is only available for block images with fast-diff enabled and no snapshot mirroring.
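A hypothetical example of enabling fast-diff on an existing image (pool and image names are placeholders); note that fast-diff generally depends on the exclusive-lock and object-map features:

```console
rbd feature enable mypool/myimage exclusive-lock object-map fast-diff
```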
Manage Ceph File System volumes on the dashboard
Previously, Ceph File System (CephFS) volumes could only be managed through the command-line interface.
With this enhancement, CephFS volumes can now be listed, created, edited, and removed through the Ceph dashboard.
Manage Ceph File System subvolumes and subvolume groups on the dashboard
Previously, Ceph File System (CephFS) subvolumes and subvolume groups could only be managed through the command-line interface.
With this enhancement, CephFS subvolumes and subvolume groups can now be listed, created, edited, and removed through the Ceph dashboard.
Users can now specify the FQDN for Ceph Object Gateway host through the CLI and the dashboard
Previously, short hostnames were used to resolve the Ceph Object Gateway hostname, which would cause issues.
With this enhancement, the rgw_dns_name configuration option of the Ceph Object Gateway is used to resolve the hostname, if it is provided. If you want to specify the FQDN for the Ceph Object Gateway host, set the rgw_dns_name configuration option in the CLI; the dashboard then picks it up and the Ceph Object Gateway requests are made against it.
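As a sketch, with a placeholder FQDN, the option could be set as follows:

```console
ceph config set client.rgw rgw_dns_name rgw.example.com
```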
Configure Ceph Object Gateway multi-site on the dashboard
With this enhancement, you can configure multi-site Ceph Object Gateway not only through the command-line interface, but also through the Ceph Dashboard. The dashboard now supports creating, updating, and deleting Object Gateway entities such as realms, zonegroups, and zones. In addition, this feature allows users to configure multi-site between two remote clusters, providing options for data replication and synchronization.
3.4. Ceph Object Gateway
The radosgw-admin bucket command prints bucket versioning
With this enhancement, the radosgw-admin bucket stats command prints the versioning status for buckets as enabled or off, since versioning can be enabled or disabled after bucket creation.
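For example, with a placeholder bucket name, the stats output now includes the bucket's versioning status:

```console
radosgw-admin bucket stats --bucket=mybucket
```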
S3 WORM certification with an external entity
With this enhancement, the S3 WORM feature is certified with an external entity. This enables data retention in compliance with FSI regulations and a secured object storage deployment that guarantees data retrieval even if the objects or buckets in the production zones have been lost or compromised.
For more information, see Introduction to WORM.
Multi-site sync instances now dynamically spread workloads among themselves
Previously, the Ceph Object Gateway multi-site throughput was generally limited to the available bandwidth of a single instance.
With this enhancement, Ceph Object Gateway multi-site sync instances dynamically parallelize the workload among themselves using a work-sharing algorithm. As a result, scalability is significantly improved and sync workloads are divided evenly throughout the sync.
Enhanced bucket granular multi-site sync policies
Red Hat Ceph Storage now supports bucket granular multi-site sync policies.
Bucket granular multi-site sync replication was previously available as a limited release. This enhancement provides full availability for new and existing customers in production environments.
This feature allows enabling and disabling multi-site async replication on a per-bucket level. Note that this enhancement also increases the total RAW space required and increases the amount of synchronization traffic between sites.
For more information, see Bucket granular sync policies.
Reduced object storage query times for data analytics with added JSON, Parquet, and CSV-format object support
Ceph Object Gateway now supports operating on JSON-format, Parquet-format, and CSV-format objects, expanding potential application of S3 select to widely deployed analytics frameworks, for example, Apache Spark and Trino. This added support helps reduce query times.
For more information, see S3 select content from an object.
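For illustration, an S3 select query against a CSV object might look like the following (the endpoint, bucket, and object names are placeholders), using the standard AWS CLI:

```console
aws --endpoint-url http://rgw.example.com:8080 s3api select-object-content \
  --bucket mybucket --key data.csv \
  --expression "SELECT * FROM s3object s WHERE s._1 = 'x'" \
  --expression-type SQL \
  --input-serialization '{"CSV": {}, "CompressionType": "NONE"}' \
  --output-serialization '{"CSV": {}}' output.csv
```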
Data can now be transitioned to the Azure cloud service, using the multi-cloud gateway (MCG)
Previously, transitioning data to the Azure cloud service was a Technology Preview feature. With this enhancement, the feature is ready to be used in a production environment.
This feature enables data transition in a Ceph Object Gateway storage environment, into the Azure cloud, for archiving purposes.
For more information, see Transitioning data to Azure cloud service.
Enhanced S3 select feature for more efficient integration with Trino
Object storage now has enhanced integration with Trino for S3 select operations. This improves the query times for semi-structured and structured datasets stored in Ceph Object Gateway.
For more information, see Integrating Ceph Object Gateway with Trino.
3.5. RADOS
New performance counters introduced for messenger v2
Previously, there were no dedicated performance counters for accounting encrypted traffic in messenger v2.
With this enhancement, the msgr_recv_encrypted_bytes and msgr_send_encrypted_bytes performance counters are introduced to account for received and sent encrypted bytes respectively, facilitating rough validation of the encryption status.
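As a sketch, run on the host of a given daemon (osd.0 is a placeholder), the new counters can be inspected in the performance dump:

```console
ceph daemon osd.0 perf dump | grep -E 'msgr_(recv|send)_encrypted_bytes'
```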
New reports available for sub-events for delayed operations
Previously, slow operations were marked as delayed but without a detailed description.
With this enhancement, you can view the detailed descriptions of delayed sub-events for operations.
3.6. NFS Ganesha
Support HAProxy’s PROXY protocol
With this enhancement, NFS Ganesha supports HAProxy's PROXY protocol. Because the original client IP address is preserved and passed through the load balancer, this allows load balancing while still enabling client IP-based access restrictions.