Chapter 3. Major Updates
This section lists all major updates, enhancements, and new features introduced in this release of Red Hat Ceph Storage.
New ways to identify client versions
This update adds the following features that help identify client versions, so that you can determine which clients use an old version of Red Hat Ceph Storage. Example commands follow the list.
- The ceph osd set-require-min-compat-client command adds the ability to set a minimum required release for clients in order to prevent new connections from older clients. By default, it is set to jewel. To view its value, use the ceph osd dump command.
- The ceph features command reports the total number of clients and daemons together with their features and releases.
- If the debugging level for Monitors is set to 10 (debug mon = 10), the addresses and features of connecting and disconnecting clients are logged to a log file on the local file system.
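For example, the following sequence is a minimal sketch; the luminous release name is used only for illustration, and the grep pattern assumes the field name used by the upstream Luminous OSD map dump:
ceph osd set-require-min-compat-client luminous
ceph osd dump | grep require_min_compat_client
ceph features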
A new --pg-num option for the osdmaptool utility
The osdmaptool utility now includes the --pg-num option that can be used with the --test-map-pgs option. This allows the user to test placement policies with a different number of placement groups (PGs) than are in the OSD map.
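For example, a minimal sketch, assuming the OSD map has been exported to a file named osdmap.bin (a hypothetical file name):
ceph osd getmap -o osdmap.bin
osdmaptool osdmap.bin --test-map-pgs --pg-num 1024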
Option to add a limit on RBD snapshots
A new option to set a limit on the number of snapshots on a RADOS Block Device (RBD) image is now supported. Use the snap limit --limit option with the rbd command to set the limit.
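For example, a minimal sketch that caps an image at 10 snapshots; the pool name rbd and the image name image1 are hypothetical:
rbd snap limit set rbd/image1 --limit 10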
Ansible now supports removing Monitors and OSDs
You can use the ceph-ansible utility to remove Monitors and OSDs from a Ceph cluster. For details, see the Removing Monitors with Ansible and Removing OSDs with Ansible sections in the Red Hat Ceph Storage 3 Administration Guide. The same procedures also apply to removing Monitors and OSDs from a containerized Ceph cluster.
The iSCSI gateway is now fully supported
Red Hat Ceph Storage 3.0 adds full support for the iSCSI gateway. These iSCSI initiators are supported:
- Red Hat Enterprise Linux 7.4
- VMware ESX 6.5
- Microsoft Windows Server 2016
- Red Hat Virtualization 4.x
For details, see the Using an iSCSI Gateway chapter in the Block Device Guide for Red Hat Ceph Storage 3.
The rbd export-diff and rbd import-diff commands now support parallelism
The rbd export-diff and rbd import-diff commands have been improved to be capable of fully parallel operations. As a result, the commands now benefit from concurrency across the cluster. The commands are executed in parallel by default. To configure the amount of parallelism, use the --rbd-concurrent-management-ops <number> option when using the commands.
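For example, a minimal sketch that exports an incremental diff and imports it into another image with 20 concurrent operations; the pool, image, and snapshot names are hypothetical:
rbd export-diff --from-snap snap1 rbd/image1@snap2 image1.diff --rbd-concurrent-management-ops 20
rbd import-diff image1.diff rbd/image1-copy --rbd-concurrent-management-ops 20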
Support for deploying logical volumes as OSDs
A new utility, ceph-volume, is now supported. The utility enables deployment of logical volumes as OSDs on Red Hat Enterprise Linux. For details, see the Using the ceph-volume Utility to Deploy OSDs chapter in the Block Device Guide for Red Hat Ceph Storage. Note that ceph-volume does not support deploying logical volumes as OSDs in containers. In addition, ceph-volume is not tested on Ubuntu 16.04.03.
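For example, a minimal sketch of a FileStore deployment, assuming a volume group named ceph-vg with logical volumes data-lv and journal-lv (all hypothetical names):
ceph-volume lvm create --filestore --data ceph-vg/data-lv --journal ceph-vg/journal-lv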
Bucket owners can grant permissions to other users
With this update, bucket owners can grant read access to their buckets to another user. For details, see the Ceph - How to grant access for multiple S3 users to access a single bucket solution on the Red Hat Customer Portal.
On a CephFS with only one data pool, the ceph df command shows characteristics of that pool
On Ceph File Systems that contain only one data pool, the ceph df command shows results that reflect the file storage space used and available in that data pool. This new functionality is currently available only for FUSE clients and will be available for kernel clients in a future release of Red Hat Enterprise Linux.
Promoting and demoting all images in a pool at once
You can now promote or demote all images in a pool at the same time by using the following commands:
rbd mirror pool promote <pool>
rbd mirror pool demote <pool>
This is especially useful in the event of a failover, when all non-primary images must be promoted to primary.
Ansible now automatically sets online repositories for Ubuntu
This update automates the process of setting up online repositories for Red Hat Ceph Storage on Ubuntu nodes. To set up the repositories, set the following parameters in the all.yml file located in the /usr/share/ceph-ansible/group_vars/ directory:
ceph_origin: repository
ceph_repository: rhcs
ceph_repository_type: cdn
ceph_rhcs_cdn_debian_repo: https://customername:customerpasswd@rhcs.download.redhat.com
Replace customername and customerpasswd with your customer name and password.
For details, see the Installation Guide for Ubuntu.
A Red Hat Ceph Storage cluster can be deployed from an Ubuntu node by using Ansible
Previously, Red Hat did not provide the ceph-ansible package for Ubuntu. With this update, you can use the Ansible automation application to deploy a Ceph cluster from an Ubuntu node.
For details, see the Installing a Red Hat Ceph Storage Cluster section in the Installation Guide for Ubuntu.
A new compact command
With this update, the OSD administration socket supports the compact command. A large number of omap create and delete operations can cause the normal compaction of the levelDB database during those operations to be too slow to keep up with the workload. As a result, levelDB can grow very large and inhibit performance. The compact command compacts the omap database (levelDB or RocksDB) to a smaller size to provide more consistent performance.
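For example, a minimal sketch that triggers compaction on the OSD with ID 0 through its administration socket; the OSD ID is hypothetical:
ceph daemon osd.0 compact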
Installing NFS Ganesha by using Ansible is supported
You can now install the NFS Ganesha interface by using the ceph-ansible playbook. For additional details, see the all.yml and nfss.yml files in the /usr/share/ceph-ansible/ directory on the Ansible administration node.
RocksDB now replaces levelDB
This update changes the default back end for the omap database from levelDB to RocksDB. RocksDB uses a multi-threading mechanism in compaction, so it better handles situations where the omap directories become very large (more than 40 GB). LevelDB compaction takes a long time in such situations and causes OSDs to time out.
Simplified creation of CephFS client keyring
A new command, ceph fs authorize, is now supported. The command simplifies creation of cephx capabilities for a Ceph File System (CephFS) client user. For example, to grant the client.1 user read and write access to MDS nodes and read access to Monitor and OSD nodes on a Ceph File System named cephfs:
# ceph fs authorize cephfs client.1 rw r
Use this command only when creating new users. It is not possible to modify existing users with ceph fs authorize.
Granting access to Ceph Block Device images has been simplified
The ceph auth get-or-create command now supports two profiles, rbd and rbd-read-only. When using these profiles, cephx capabilities are created automatically without the need to specify them directly. For example, to create a client.1 user with the required capabilities for Monitors and OSDs:
ceph auth get-or-create client.1 mon 'profile rbd' osd 'profile rbd [pool=<pool>]'
OSDs support the rbd and rbd-read-only profiles. Monitors support only the rbd profile.
MDS cache limits can be configured in bytes
New configuration options enable Metadata Server (MDS) cache limits to be configured in bytes, not only as an inode count. For details, see the Understanding MDS Cache Size Limits section in the Ceph File System Guide for Red Hat Ceph Storage 3. Note that limiting the MDS cache by the inode count is now deprecated.
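For example, a minimal ceph.conf sketch that caps the MDS cache at roughly 4 GB of memory; the mds_cache_memory_limit option name follows upstream Luminous and the value is an arbitrary illustration:
[mds]
mds_cache_memory_limit = 4294967296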
Improvements in the cluster log
The cluster log has been improved. Certain unnecessary messages, such as audit log entries, the PGMap summary every 5 seconds, or a message on every osdmap epoch, have been removed. Other messages were improved to use a more human-readable format. Also, a message is now logged when health checks fail. In addition, a new command, log last, is now supported. The command shows the most recent log messages.
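For example, a minimal sketch that displays recent entries from the cluster log; the number of entries is an arbitrary choice:
ceph log last 20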
Ceph health checks are more easily integrated with external alerting systems
Ceph’s built-in health checks have been refactored to enable more robust integration with external alerting systems. For each condition that is checked, there is now a unique status code, for example PG_AVAILABILITY.
Any external script that was relying on the JSON syntax of the ceph status or ceph health command output must be updated for the new format. To ease migration, set the mon_health_preluminous_compat parameter to true on Monitors to instruct ceph status and ceph health to generate old-style health output in addition to the new output.
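For example, a minimal ceph.conf sketch that enables the backward-compatible output on Monitor nodes:
[mon]
mon_health_preluminous_compat = true
The new structured output can then be inspected with a command such as ceph health detail --format json-pretty; the json-pretty format flag is assumed to behave as in upstream Luminous.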
Deleting images and snapshots from full clusters is now easier
When a cluster reaches its full_ratio, the following commands can be used to remove Ceph Block Device images and snapshots:
- rbd remove
- rbd snap rm
- rbd snap unprotect
- rbd snap purge
The Ceph Object Gateway now supports NFSv3 protocol
The Ceph Object Gateway now provides the ability to export Simple Storage Service (S3) object namespaces by using NFS version 3 alongside the existing NFS version 4. For details, see the Exporting the Namespace to NFS-Ganesha section of the Red Hat Ceph Storage 3 Object Gateway Guide for Red Hat Enterprise Linux.
Support for data compression
The Ceph Object Gateway now supports data compression at rest. For details, see the Compression section in the Object Gateway Guide for Red Hat Enterprise Linux or Ubuntu.
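For example, a minimal sketch that enables zlib compression on the default placement target of a zone; the zone name default and the placement ID default-placement are common defaults and may differ in your configuration:
radosgw-admin zone placement modify --rgw-zone=default --placement-id=default-placement --compression=zlib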
Support for S3 Bucket Policy
Support for Simple Storage Service (S3) Bucket Policy has been added. Note that the support has the following limitations:
- Identity and Access Management (IAM) for users and groups is not supported
- String interpolation is not supported
- Only a subset of condition keys is supported
For details, see the Bucket Policies section in the Developer Guide for Red Hat Ceph Storage 3.
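For example, a minimal sketch of a policy that grants another user full access to a bucket; the bucket name mybucket, the user name bob, and the use of s3cmd to apply the policy file are illustrative assumptions:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": ["arn:aws:iam:::user/bob"]},
    "Action": "s3:*",
    "Resource": ["arn:aws:s3:::mybucket", "arn:aws:s3:::mybucket/*"]
  }]
}
The policy can then be applied with, for example, s3cmd setpolicy policy.json s3://mybucket.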
nfs-ganesha rebased to 2.5
The nfs-ganesha package has been upgraded to upstream version 2.5, which provides a number of bug fixes and enhancements over the previous version.
NFSv4 recovery state data can be stored in Ceph RADOS
NFS version 4 (NFSv4) recovery state data, such as clientids, can now be stored in Ceph RADOS objects. This change increases the resilience of clustered NFS servers exposing Ceph storage resources.
New "radosgw-admin user list" command
Previously, the command that listed users and subusers required the user’s uid as an input. This approach required extra commands. This release introduces the radosgw-admin user list command, which lists all users and subusers without requiring any uids.
S3 object expiration is now supported
The Ceph Object Gateway now supports Amazon Simple Storage Service (S3) object expiration. For details, see the Object Gateway S3 Application Programming Interface (API) chapter and the Bucket Lifecycle section in the Developer Guide for Red Hat Ceph Storage 3.
Support for S3 server-side encryption
The Ceph Object Gateway now supports Amazon Simple Storage Service (S3) server-side encryption. For details, see the S3 API Server-side Encryption section in the Developer Guide for Red Hat Ceph Storage 3.
Support for the Red Hat Ceph Storage Dashboard
The Red Hat Ceph Storage Dashboard provides a monitoring dashboard for Ceph clusters to visualize the cluster state. The dashboard is accessible from a web browser and provides a number of metrics and graphs about the state of the cluster, Monitors, OSDs, Pools, or network.
For details, see the Monitoring Ceph Clusters with Red Hat Ceph Storage Dashboard section in the Administration Guide for Red Hat Ceph Storage 3.
The async messenger
The async messenger is now used by default instead of the simple messenger. For details, see the Messaging and Async Messenger Settings section in the Configuration Guide for Red Hat Ceph Storage 3.
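For example, a minimal ceph.conf sketch that sets the messenger type explicitly; the ms_type option and the async+posix value follow the upstream Luminous defaults and are shown only for illustration:
[global]
ms_type = async+posix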
Support for dynamic bucket resharding
The Ceph Object Gateway now supports the rgw_dynamic_resharding parameter. The dynamic bucket resharding process periodically checks all Ceph Object Gateway buckets and detects buckets that require resharding. If a bucket has grown larger than specified by the rgw_max_objs_per_shard parameter, the Ceph Object Gateway reshards the bucket dynamically in the background. For details, see the Dynamic Bucket Index Resharding in RHCS 3 section in the Object Gateway Guide for Red Hat Enterprise Linux.
Note that dynamic bucket resharding is disabled in multi-site configurations.
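For example, a minimal sketch of the relevant settings in the Ceph Object Gateway section of ceph.conf; the values shown match the upstream Luminous defaults and are listed only for illustration:
rgw_dynamic_resharding = true
rgw_max_objs_per_shard = 100000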
The Ceph File System is now fully supported
The Ceph File System (CephFS) is a file system compatible with POSIX standards that provides file access to a Ceph Storage Cluster. With this new version, CephFS is now fully supported. For details about CephFS, see the Ceph File System Guide for Red Hat Ceph Storage 3.
Scrubbing is blocked for any PG if the primary or any replica OSDs are recovering
The osd_scrub_during_recovery parameter now defaults to false, so that when an OSD is recovering, the scrubbing process is not initiated on it. Previously, osd_scrub_during_recovery was set to true by default, allowing scrubbing and recovery to run simultaneously. In addition, in previous releases, if the user set osd_scrub_during_recovery to false, only the primary OSD was checked for recovery activity.
A new ceph-medic utility
A new utility, ceph-medic, is now available and fully supported. The utility detects common issues with a Ceph Storage Cluster that prevent the cluster from functioning properly. For details, see the Installing and Using ceph-medic to Diagnose a Ceph Storage Cluster chapter in the Troubleshooting Guide for Red Hat Ceph Storage 3.
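For example, a minimal sketch that runs the full set of checks against the hosts listed in the Ansible inventory; the check subcommand follows the utility's documented usage:
ceph-medic check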
Colocation of containerized Ceph daemons
With this release, you can colocate specific containerized Ceph daemons with OSD daemons on the same node. This approach significantly improves total cost of ownership (TCO) at small scale, reduces the minimum configuration from six nodes to three, makes upgrading more convenient, and provides better resource isolation. Also, each daemon has system resources reserved to avoid the "noisy neighbor" effect.
For details, see the Colocation of Containerized Ceph Daemons chapter in the Container Guide for Red Hat Ceph Storage 3.
Support for Ceph Manager
Ceph Manager (ceph-mgr) is a new daemon that takes over some of the Monitor’s workload and introduces an interface for optional Python modules. Administrators must deploy at least two ceph-mgr daemons, or more typically, one ceph-mgr daemon on each node where they run a ceph-mon daemon. For details, see the Installation Guide for Red Hat Enterprise Linux or Ubuntu.
Support for the RESTful plug-in
RESTful is a plug-in for the ceph-mgr daemon that provides an API for interacting with Ceph clusters.
For details, see the Ceph Management API: Reference and Integration Guide.
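For example, a minimal sketch that enables the plug-in, generates a self-signed certificate, and creates an API key for a user named admin; the commands follow the upstream Luminous RESTful module and the user name is hypothetical:
ceph mgr module enable restful
ceph restful create-self-signed-cert
ceph restful create-key admin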