Chapter 3. New features
This section lists all major updates, enhancements, and new features introduced in this release of Red Hat Ceph Storage.
3.1. The Cephadm utility
High Availability can now be deployed for the Grafana, Prometheus, and Alertmanager monitoring stacks
With this enhancement, the cephadm mgmt-gateway service offers better reliability and ensures uninterrupted monitoring by allowing these critical services to function seamlessly, even in the event of an individual instance failure. High availability is crucial for maintaining visibility into the health and performance of the Ceph cluster and responding promptly to any issues.
Use High Availability for continuous, uninterrupted operations to improve the stability and resilience of the Ceph cluster.
For more information, see Using the Ceph Management gateway.
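For example, the service can be deployed with a minimal specification applied with the ceph orch apply -i command. This is an illustrative sketch only; the host names are placeholders:
service_type: mgmt-gateway
placement:
  hosts:
  - host01
  - host02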
New streamlined deployment of EC pools for Ceph Object Gateway
The Ceph Object Gateway manager module can now create pools for the rgw service. Data pools can be created with attributes based on the provided specification.
This enhancement streamlines deployment for users who want the data pools used by Ceph Object Gateway to use erasure coding (EC) instead of replication.
To create a data pool with the specified attributes, use the following command:
ceph rgw realm bootstrap -i <path-to-spec-file> --start-radosgw
Currently, the EC profile fields of this specification only make use of the k, m, pg_num, and crush-device-class attributes. If other attributes are set, or if the pool type is replicated, the key-value pairs are passed to the ceph osd pool create command. The other pools for the Ceph Object Gateway zone, for example, the bucket index pool, are all created as replicated pools with default settings.
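The following is an illustrative sketch of such a specification. The realm, zone group, zone, and host names are placeholders, and the data_pool_attributes block name and its nesting are assumptions for illustration only; check the product documentation for the exact specification format:
rgw_realm: myrealm
rgw_zonegroup: myzonegroup
rgw_zone: myzone
placement:
  hosts:
  - host01
spec:
  data_pool_attributes:  # hypothetical field name, shown for illustration
    type: ec
    k: 4
    m: 2
    pg_num: 32
    crush-device-class: hdd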
A self-signed certificate can be generated by cephadm within the Ceph Object Gateway service specification
With this enhancement, adding generate_cert: true to the Ceph Object Gateway service specification file enables cephadm to generate a self-signed certificate for the Ceph Object Gateway service, instead of manually creating the certificate and inserting it into the specification file.
Using generate_cert: true works for the Ceph Object Gateway service, including SAN modifications based on the zonegroup_hostnames parameter included in the Ceph Object Gateway specification file.
The following is an example of a Ceph Object Gateway specification file:
service_type: rgw
service_id: bar
service_name: rgw.bar
placement:
  hosts:
  - vm-00
  - vm-02
spec:
  generate_cert: true
  rgw_realm: bar_realm
  rgw_zone: bar_zone
  rgw_zonegroup: bar_zonegroup
  ssl: true
  zonegroup_hostnames:
  - s3.example.com
  - s3.foo.com
This specification file would generate a self-signed certificate that includes the following output:
X509v3 Subject Alternative Name: DNS:s3.example.com, DNS:s3.foo.com
Setting rgw_run_sync_thread to ‘false’ for Ceph Object Gateway daemons is now automated
With this enhancement, setting disable_multisite_sync_traffic to ‘true’ under the spec section of a Ceph Object Gateway specification causes Cephadm to set rgw_run_sync_thread to ‘false’ for the Ceph Object Gateway daemons under that service. This stops the Ceph Object Gateway daemons from spawning threads to handle the sync of data and metadata. The process of setting rgw_run_sync_thread to ‘false’ for Ceph Object Gateway daemons is now automated through the Ceph Object Gateway specification file.
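For example, in a Ceph Object Gateway specification file (the service, realm, zone, and host names are placeholders):
service_type: rgw
service_id: myzone
placement:
  hosts:
  - host01
spec:
  rgw_realm: myrealm
  rgw_zone: myzone
  disable_multisite_sync_traffic: true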
Cephadm can now deploy ingress over Ceph Object Gateway with the ingress service’s haproxy daemon in TCP rather than HTTP mode
Setting up haproxy in TCP mode allows encrypted messages to be passed through haproxy directly to Ceph Object Gateway without haproxy needing to understand the message contents. This allows end-to-end SSL for ingress and Ceph Object Gateway setups.
With this enhancement, users can now specify a certificate for the rgw service instead of the ingress service. Specify use_tcp_mode_over_rgw as True in the ingress specification to deploy the haproxy daemons for that service in TCP mode, rather than in HTTP mode.
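The following is an illustrative ingress specification; the backend service name, virtual IP, ports, and host names are placeholders:
service_type: ingress
service_id: rgw.myrgw
placement:
  hosts:
  - host01
  - host02
spec:
  backend_service: rgw.myrgw
  virtual_ip: 192.168.122.100/24
  frontend_port: 443
  monitor_port: 1967
  use_tcp_mode_over_rgw: true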
New cmount_path option with a unique user ID generated for CephFS
With this enhancement, you can add the optional cmount_path option and generate a unique user ID for each Ceph File System. Unique user IDs allow sharing CephFS clients across multiple Ganesha exports. Reducing the number of clients across exports also reduces memory usage for a single CephFS client.
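For example, an NFS export definition applied with the ceph nfs export apply command can carry the option. The values below are placeholders, and the exact placement of cmount_path inside the FSAL block is an assumption for illustration:
{
  "export_id": 1,
  "path": "/volumes/_nogroup/subvol1",
  "pseudo": "/cephfs1",
  "access_type": "RW",
  "squash": "none",
  "protocols": [4],
  "fsal": {
    "name": "CEPH",
    "fs_name": "cephfs",
    "cmount_path": "/volumes"
  }
}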
Exports sharing the same FSAL block have a single Ceph user client linked to them
Previously, on an upgraded cluster, the export creation failed with the "Error EPERM: Failed to update caps" message.
With this enhancement, the user key generation is modified when creating an export so that any exports that share the same Ceph File System Abstraction Layer (FSAL) block will have only a single Ceph user client linked to them. This enhancement also prevents memory consumption issues in NFS Ganesha.
3.2. Ceph Dashboard
Added health warnings for when daemons are down
Previously, there were no health warnings or alerts to notify when the mgr, mds, and rgw daemons were down.
With this enhancement, health warnings are emitted when any of the mgr, mds, or rgw daemons are down.
Bugzilla:2138386
Ceph Object Gateway NFS export management is now available through the Ceph Dashboard
Previously, Ceph Object Gateway NFS export management was only available through the command-line interface.
With this enhancement, the Ceph Dashboard also supports managing exports that were created based on selected Ceph Object Gateway users.
For more information, see Managing NFS Ganesha exports on the Ceph dashboard.
Enhanced multi-site creation with default realms, zones, and zone groups
Previously, a manual restart was required for Ceph Object Gateway services after creating a multi-site with default realms, zones, or zone groups.
With this enhancement, due to the introduction of the new multi-site replication wizard, any necessary service restarts are done automatically.
Ceph Dashboard now supports an EC 8+6 profile
With this enhancement, the dashboard supports an erasure coding 8+6 profile.
Enable or disable replication for a bucket in a multi-site configuration during bucket creation
A new Replication checkbox is added in the Ceph Object Gateway bucket creation form. This enhancement allows enabling or disabling replication for a specific bucket in a multi-site configuration.
New sync policy management through the Ceph Dashboard
Previously, there was no way to manage sync policies from the Ceph Dashboard.
With this enhancement, you can now manage sync policies directly from the Ceph Dashboard, by going to Object > Multi-site.
Improved experience for server-side encryption configuration on the Ceph Dashboard
With this enhancement, server-side encryption can easily be found by going to Objects > Configuration from the navigation menu.
New option to enable mirroring on a pool during creation
Previously, there was no option to enable mirroring on a pool during pool creation.
With this enhancement, mirroring can be enabled on a pool directly from the Create Pool form.
Enhanced output for Ceph Object Gateway ops and audit logs in the centralized logging
With this enhancement, you can now see Ceph Object Gateway ops and audit log collection in the Ceph Dashboard centralized logs.
Improved experience when creating an erasure coded pool with the Ceph Dashboard
Previously, a device class, such as HDD or SSD, was automatically selected when creating an erasure coded (EC) profile. When a device class was specified with EC pools, the pools were created with only one placement group and the autoscaler did not work.
With this enhancement, a device class has to be manually selected; otherwise, all devices are automatically selected and available.
Enhanced multi-cluster views on the Ceph Dashboard
Previously, the Ceph Cluster Grafana dashboard was not visible for a cluster that was connected in a multi-cluster setup and multi-cluster was not fully configurable with mTLS through the dashboard.
With these enhancements, users can connect a multi-cluster setup with mTLS enabled on both clusters. Users can also see individual cluster Grafana dashboards by expanding a particular cluster row, when going to Multi-Cluster > Manage Clusters.
CephFS subvolume groups and subvolumes can now be selected directly from the Create NFS export form
Previously, when creating a CephFS NFS export, you would need to know the existing subvolume and subvolume groups prior to creating the NFS export, and manually enter the information into the form.
With this enhancement, once a volume is selected, the relevant subvolume groups and subvolumes are available to select seamlessly from inside the Create NFS export form.
Non-default realm sync status now visible for Ceph Object Gateways
Previously, only the default realm sync status was visible in the Object > Overview sync status on the Ceph Dashboard.
With this enhancement, the sync status of any selected Object Gateway is displayed, even if it is in a non-default realm.
New RGW Sync overview dashboard in Grafana
With this release, you can now track replication differences over time, per shard, from within the new RGW Sync overview dashboard in Grafana.
New S3 bucket lifecycle management through the Ceph Dashboard
With this release, a bucket lifecycle can be managed through the Edit Bucket form in the Ceph Dashboard.
For more information about editing a bucket, see Editing Ceph Object Gateway buckets on the dashboard.
3.3. Ceph File System
snapdiff API now only syncs the difference of files between two snapshots
With this enhancement, the snapdiff API is used to sync only the difference of files between two snapshots. Syncing only the difference avoids bulk copying during an incremental snapshot sync, providing a performance improvement, as only the snapdiff delta is synced.
New metrics for data replication monitoring logic
This enhancement adds labeled metrics for the replication start and end notifications.
The new labeled metrics are: last_synced_start, last_synced_end, last_synced_duration, and last_synced_bytes.
Enhanced remote metadata information in peer status output
With this enhancement, the peer status output shows state, failed, and failure_reason when there is invalid metadata in a remote snapshot.
New support for NFS-Ganesha async FSAL
With this enhancement, the non-blocking Ceph File System Abstraction Layer (FSAL), or async, is introduced. The FSAL reduces thread utilization, improves performance, and lowers resource utilization.
New support for earmarking subvolumes
Previously, the Ceph storage system did not support a mixed protocol being used within the same subvolume. Attempting to use a mixed protocol could lead to data corruption.
With this enhancement, subvolumes have protocol isolation. The isolation prevents data integrity issues and reduces the complexity of managing multi-protocol environments, such as SMB and NFS.
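A hedged example, assuming the ceph fs subvolume earmark subcommands available in this release and an nfs earmark value; the volume and subvolume names are placeholders:
ceph fs subvolume earmark set cephfs subvol1 --earmark nfs
ceph fs subvolume earmark get cephfs subvol1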
3.4. Ceph Object Gateway
The CopyObject API can now be used to copy objects across storage classes
Previously, objects could only be copied within the same storage class, which limited the scope of the CopyObject function. Users had to download the objects and then reupload them to another storage class.
With this enhancement, objects can be copied to any storage class within the same Ceph Object Gateway cluster on the server side.
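For example, with the AWS CLI pointed at the Ceph Object Gateway endpoint (the bucket, key, endpoint, and storage class names are placeholders, and the target storage class must already exist in the zone):
aws --endpoint-url http://rgw.example.com:8080 s3api copy-object --bucket mybucket --key myobject --copy-source mybucket/myobject --storage-class COLD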
Improved read operations for Ceph Object Gateway
With this enhancement, read affinity is added to the Ceph Object Gateway. Read affinity directs read calls to the nearest OSD by adding the required flags and setting the correct CRUSH location.
S3 requests are no longer cut off in the middle of transmission during shutdown
Previously, some clients faced issues with S3 requests being cut off in the middle of transmission during shutdown, without waiting for them to complete.
With this enhancement, Ceph Object Gateway can be configured (off by default) to wait for the duration defined in the rgw_exit_timeout_secs parameter for all outstanding requests to complete before the Ceph Object Gateway process exits unconditionally. Ceph Object Gateway waits for up to 120 seconds (configurable) for all on-going S3 requests to complete before exiting unconditionally. During this time, new S3 requests are not accepted.
In containerized deployments, an additional extra_container_args parameter configuration of --stop-timeout=120 (or the value of the rgw_exit_timeout_secs parameter, if not default) is also necessary.
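For example, the wait duration can be set with the ceph config command:
ceph config set client.rgw rgw_exit_timeout_secs 120
In containerized deployments, a matching stop timeout can be added to the Ceph Object Gateway specification; the service name and placement below are placeholders:
service_type: rgw
service_id: myrgw
placement:
  hosts:
  - host01
extra_container_args:
- "--stop-timeout=120"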
Copying of encrypted objects using copy-object APIs is now supported
Previously, in Ceph Object Gateway, copying of encrypted objects using copy-object APIs was unsupported since the inception of its server-side encryption support.
With this enhancement, copying of encrypted objects using copy-object APIs is supported and workloads that rely on copy-object operations can also use server-side encryption.
New S3 additional checksums
With this release, support is added for S3 additional checksums, which provides improved data integrity for data in transit and at rest. The added support enables the use of strong checksums of object data, such as SHA256, and checksum assertions in S3 operations.
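For example, with the AWS CLI against the Ceph Object Gateway endpoint (the bucket, key, file, and endpoint are placeholders):
aws --endpoint-url http://rgw.example.com:8080 s3api put-object --bucket mybucket --key myobject --body ./myfile --checksum-algorithm SHA256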
New support for the S3 GetObjectAttributes API
The GetObjectAttributes API returns a variety of traditional and non-traditional metadata about S3 objects. The returned metadata includes S3 additional checksums on objects and on the parts of objects that were originally stored as multipart uploads. GetObjectAttributes is exposed in the AWS CLI.
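For example (the bucket, key, and endpoint are placeholders):
aws --endpoint-url http://rgw.example.com:8080 s3api get-object-attributes --bucket mybucket --key myobject --object-attributes ETag Checksum ObjectParts StorageClass ObjectSize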
Improved efficiency of Ceph Object Gateway clusters over multiple locations
With this release, if possible, data is now read from the nearest physical OSD instance in a placement group.
As a result, the local read improves the efficiency of Ceph Object Gateway clusters that span over multiple physical locations.
Format change observed for the tenant owner in the event record: ownerIdentity -> principalId
With this release, in bucket notifications, the principalId inside ownerIdentity now contains the complete user ID, prefixed with the tenant ID.
Client IDs can be added and thumbprint lists can be updated in an existing OIDC Provider within Ceph Object Gateway
Previously, users were not able to add a new client ID or update the thumbprint list within the OIDC Provider.
With this enhancement, users can add a new client ID or update the thumbprint list within the OIDC Provider; when the thumbprint list is updated, the existing list is replaced.
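For example, using the AWS CLI IAM commands against the Ceph Object Gateway endpoint (the endpoint, provider ARN, client ID, and thumbprint values are placeholders):
aws --endpoint-url http://rgw.example.com:8080 iam add-client-id-to-open-id-connect-provider --open-id-connect-provider-arn arn:aws:iam:::oidc-provider/keycloak.example.com/realms/myrealm --client-id mynewclient
aws --endpoint-url http://rgw.example.com:8080 iam update-open-id-connect-provider-thumbprint --open-id-connect-provider-arn arn:aws:iam:::oidc-provider/keycloak.example.com/realms/myrealm --thumbprint-list 1a2b3c4d5e6f7a8b9c0d1a2b3c4d5e6f7a8b9c0d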
3.5. Multi-site Ceph Object Gateway
New multi-site configuration header
With this release, GetObject and HeadObject responses for objects written in a multi-site configuration include the x-amz-replication-status: PENDING header. After the replication succeeds, the header’s value changes to COMPLETED.
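For example, the replication status can be checked with a HeadObject call (the bucket, key, and endpoint are placeholders):
aws --endpoint-url http://rgw.example.com:8080 s3api head-object --bucket mybucket --key myobject --query ReplicationStatus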
New notification_v2 zone feature for topic and notification metadata
With this enhancement, bucket notifications and topics that are saved in fresh installation (Greenfield) deployments have their information synced between zones.
When upgrading to Red Hat Ceph Storage 8.0, this behavior must be turned on by enabling the notification_v2 zone feature.
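A hedged example of enabling the feature on an upgraded cluster, assuming the standard zone group feature workflow (the zone group name is a placeholder):
radosgw-admin zonegroup modify --rgw-zonegroup=default --enable-feature=notification_v2
radosgw-admin period update --commit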
3.6. RADOS
Balanced primary placement groups can now be observed in a cluster
Previously, users could only balance primaries with the offline osdmaptool.
With this enhancement, autobalancing is available with the upmap balancer. Users can now choose between either the upmap-read or read mode. The upmap-read mode offers simultaneous upmap and read optimization. The read mode can only be used to optimize reads.
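For example, to enable read optimization with the balancer:
ceph balancer mode upmap-read
ceph balancer on
ceph balancer status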
For more information, see Using the Ceph manager module.
New MSR CRUSH rules for erasure encoded pools
Multi-step-retry (MSR) is a type of CRUSH rule in the Ceph cluster that defines how data is distributed across storage devices. MSR ensures efficient data retrieval, balancing, and fault tolerance.
With this enhancement, crush-osds-per-failure-domain and crush-num-failure-domains can now be specified for erasure-coded (EC) pools during their creation. These pools use the newly introduced MSR CRUSH rules to place multiple OSDs within each failure domain, for example, 14 OSDs split across 4 hosts.
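As an illustrative sketch, a profile that places two OSDs in each of three failure domains (the profile name, pool name, and k/m values are placeholders):
ceph osd erasure-code-profile set ec_msr_profile k=4 m=2 crush-osds-per-failure-domain=2 crush-num-failure-domains=3
ceph osd pool create ec_msr_pool erasure ec_msr_profile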
For more information, see Ceph erasure coding.
New generalized stretch cluster configuration for three availability zones
Previously, there was no way to apply a stretch peer rule to prevent placement groups (PGs) from becoming active when there were not enough OSDs in the acting set from different buckets, without enabling stretch mode.
With a generalized stretch cluster configuration for three availability zones, three data centers are supported, with each site holding two copies of the data. This helps ensure that even during a data center outage, the data remains accessible and writeable from another site. With this configuration, the pool replication size is 6 and the pool min_size is 3.
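As a minimal sketch of only the replication values mentioned above (the pool name is a placeholder; the complete procedure, including the CRUSH rule that spans the three data centers, is described in the linked documentation):
ceph osd pool set mypool size 6
ceph osd pool set mypool min_size 3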
For more information, see Generalized stretch cluster configuration for three availability zones.
3.7. RADOS Block Devices (RBD)
Added support for live importing of an image from another cluster
With this enhancement, you can now migrate between different image formats or layouts from another Ceph cluster. When live migration is initiated, the source image is deep copied to the destination image, pulling all snapshot history while preserving the sparse allocation of data wherever possible.
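An illustrative native source-spec file (source-spec.json) and migration sequence follows; the cluster, client, pool, and image names are placeholders, and the exact source-spec fields should be checked against the linked documentation:
{
  "type": "native",
  "cluster_name": "remote-cluster",
  "client_name": "client.admin",
  "pool_name": "rbd",
  "image_name": "source-image"
}
rbd migration prepare --import-only --source-spec-path source-spec.json rbd/target-image
rbd migration execute rbd/target-image
rbd migration commit rbd/target-image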
For more information, see Live migration of images.
New support for cloning images from non-user type snapshots
With this enhancement, there is added support for cloning Ceph Block Device images from snapshots of non-user types. Cloning new groups from group snapshots that are created with the rbd group snap create command is now supported with the added --snap-id option for the rbd clone command.
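For example, after creating a group snapshot, the non-user snapshot ID of a member image can be listed and then used for cloning. The pool, group, image, and snapshot names are placeholders, the snapshot ID 42 stands in for the value shown in the listing output, and the exact argument forms may vary:
rbd group snap create mypool/mygroup@groupsnap1
rbd snap ls --all mypool/image1
rbd clone --snap-id 42 mypool/image1 mypool/image1-clone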
For more information, see Cloning a block device snapshot.
New commands are added for Ceph Block Device
Two new commands were added for enhanced Ceph Block Device usage. The rbd group info command shows information about a group. The rbd group snap info command shows information about a group snapshot.
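For example (the pool, group, and snapshot names are placeholders, and the exact argument form may vary):
rbd group info mypool/mygroup
rbd group snap info mypool/mygroup@groupsnap1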
New support for live importing an image from an NBD export
With this enhancement, images with encryption support live migration from an NBD export.
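An illustrative raw-format source-spec using an NBD stream, passed to rbd migration prepare --import-only --source-spec-path; the stream fields and the NBD URI are assumptions for illustration:
{
  "type": "raw",
  "stream": {
    "type": "nbd",
    "uri": "nbd://nbd-server.example.com:10809/export-name"
  }
}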
For more information, see Streams.
3.8. RBD Mirroring
New optional --remote-namespace argument for the rbd mirror pool enable command
With this enhancement, Ceph Block Device has the new optional --remote-namespace argument for the rbd mirror pool enable command. This argument provides the option for a namespace in a pool to be mirrored to a different namespace in a pool of the same name on another cluster.
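For example (the pool, namespace, and mirroring mode values are placeholders, and the exact argument order may vary):
rbd mirror pool enable --pool mypool --namespace ns1 --remote-namespace ns2 image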