Chapter 3. New features
This section lists all major updates, enhancements, and new features introduced in this release of Red Hat Ceph Storage.
3.1. The ceph-ansible Utility
Setting ownership is faster when using switch-from-non-containerized-to-containerized-ceph-daemons.yml
Previously, the chown command in the switch-from-non-containerized-to-containerized-ceph-daemons.yml playbook unconditionally re-applied the ownership of Ceph directories and files, causing a large number of write operations. With this update, the command has been improved to run faster. This is especially useful on a Red Hat Ceph Storage cluster with a large number of directories and files in the /var/lib/ceph/ directory.
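A typical invocation is sketched below; it assumes the playbook is run from the ceph-ansible directory with an inventory file named hosts, and the infrastructure-playbooks/ path follows the upstream ceph-ansible layout, so it may differ in your installation:
# Run from the ceph-ansible directory (playbook path and inventory name are assumptions)
ansible-playbook infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml -i hosts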
The new device_class Ansible configuration option
With the device_class feature, you can simplify post-deployment configuration by updating the group_vars/osd.yml file with the desired layout. This feature offers multi-backend support and avoids the need to comment out sections after deploying Red Hat Ceph Storage.
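As a sketch only, the option is set in the OSD group variables file; the exact variable layout depends on your ceph-ansible version, so treat the placement and value below as illustrative and verify against the sample group variables file shipped with your release:
# group_vars/osd.yml (illustrative placement; value is a placeholder)
device_class: ssd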
Removing iSCSI targets using Ansible
Previously, the iSCSI targets had to be removed manually before purging the storage cluster. Starting with this release, the ceph-ansible playbooks remove the iSCSI targets as expected.
For bare-metal Ceph deployments, see the Removing the Configuration section in the Red Hat Ceph Storage 3 Block Device Guide for more details.
For Ceph container deployment, see the Red Hat Ceph Storage 3 Container Guide for more details.
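For reference, purging is typically driven by the purge playbooks shipped with ceph-ansible; the playbook names and the infrastructure-playbooks/ path below follow the upstream project and may differ in your release:
# Bare-metal deployments
ansible-playbook infrastructure-playbooks/purge-cluster.yml -i hosts
# Container deployments
ansible-playbook infrastructure-playbooks/purge-docker-cluster.yml -i hosts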
osd_auto_discovery now works with the batch subcommand
Previously, when osd_auto_discovery was activated, the batch subcommand did not create OSDs as expected. With this update, when batch is used with osd_auto_discovery, all the devices found by the ceph-ansible utility become OSDs and are passed to batch as expected.
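A minimal sketch of the OSD group variables that trigger this behavior, assuming the lvm scenario and the variable names used by ceph-ansible 3.x:
# OSD group variables (file name may be osd.yml or osds.yml depending on your layout)
osd_scenario: lvm         # the batch behavior is driven by ceph-volume lvm batch
osd_auto_discovery: true  # turn every unused device found on the node into an OSD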
The Ceph Ansible playbooks are compatible with Ansible 2.7
Starting with this release, users can install Ansible 2.7 and run the latest ceph-ansible playbooks for Red Hat Ceph Storage.
3.2. Ceph Management Dashboard
New options to use pre-downloaded container images
Previously, it was not possible to install Red Hat Ceph Storage Dashboard and the Prometheus plug-in without access to the Red Hat Container Registry. This update adds the following Ansible options that allow you to use pre-downloaded container images:
prometheus.pull_image - Set to false to not pull the Prometheus container image.
prometheus.trust_image_content - Set to true to not contact the registry for Prometheus container image verification.
grafana.pull_image - Set to false to not pull the Dashboard container image.
grafana.trust_image_content - Set to true to not contact the registry for Dashboard container image verification.
Set these options in the Ansible group_vars/all.yml file to use the pre-downloaded container images.
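A sketch of the corresponding group_vars/all.yml entries; the nested YAML layout shown here is an assumption based on the dotted option names above, so check the sample group variables file shipped with your release:
# group_vars/all.yml (nested layout shown as an assumption)
prometheus:
  pull_image: false
  trust_image_content: true
grafana:
  pull_image: false
  trust_image_content: true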
3.3. Ceph Manager Plugins
The RESTful plug-in now exposes performance counters
The RESTful plug-in for the Ceph Manager (ceph-mgr) now exposes performance counters that include a number of Ceph Object Gateway metrics. To query the performance counters through the REST API provided by the RESTful plug-in, access the /perf endpoint.
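For example, assuming the RESTful plug-in listens on its default HTTPS port 8003 and an API key has already been created with ceph restful create-key, the counters can be fetched with curl; the host name and credentials below are placeholders:
curl -k -u <user>:<api-key> https://mgr-host.example.com:8003/perf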
3.4. The ceph-volume Utility
New ceph-volume subcommand: inventory
The ceph-volume utility now supports a new inventory subcommand. The subcommand describes every device in the system and reports whether it is available and whether it is used by the ceph-disk utility.
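For example, run the subcommand with no arguments to summarize all devices on the node, or pass a single device path for a detailed report; the device name is a placeholder:
# Summarize all devices on the node
ceph-volume inventory
# Detailed report for one device, in JSON
ceph-volume inventory /dev/sdb --format json-pretty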
The ceph-volume tool can now set the sizing of journals and block.db
Previously, sizing for journals and block.db volumes could only be set in the ceph.conf file. With this update, the ceph-volume tool can set the sizing of journals and block.db. This exposes sizing directly on the command-line interface (CLI), so users can use tools like ceph-ansible or the CLI itself to set or change the sizing when creating an OSD.
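A hedged sketch of passing sizes to ceph-volume lvm batch; device names are placeholders, the flag names follow upstream ceph-volume, and the units accepted vary by version:
# Filestore: request 5120 MB journals for the batched devices
ceph-volume lvm batch --filestore --journal-size 5120 /dev/sdb /dev/sdc /dev/sdd
# BlueStore: request an explicit block.db size (value in bytes here; some versions also accept human-readable sizes)
ceph-volume lvm batch --bluestore --block-db-size 16000000000 /dev/sdb /dev/sdc /dev/nvme0n1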
New ceph-volume lvm zap options: --osd-id and --osd-fsid
The ceph-volume lvm zap command now supports the --osd-id and --osd-fsid options. Use these options to remove all devices belonging to an OSD by providing its ID or FSID, respectively. This is especially useful if you do not know the actual device names or logical volumes in use by that OSD.
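For example, to clean up everything that belongs to OSD 3 without knowing its device names; the optional --destroy flag also removes the underlying logical volumes, volume groups, or partitions, and the FSID below is a placeholder:
# Zap by OSD ID
ceph-volume lvm zap --destroy --osd-id 3
# Or zap by OSD FSID
ceph-volume lvm zap --destroy --osd-fsid <osd-fsid>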
3.5. Object Gateway
Renaming users is now supported
This update of Red Hat Ceph Storage adds the ability to rename Ceph Object Gateway users. For details, see the Rename a User section in the Object Gateway Guide for Red Hat Enterprise Linux or for Ubuntu.
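A hedged example of the corresponding radosgw-admin call; the user IDs are placeholders, and the guide referenced above remains the authoritative source for the syntax:
radosgw-admin user rename --uid=olduser --new-uid=newuser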
The Ceph Object Gateway now supports the use of SSE-S3 headers
Clients and applications can successfully negotiate SSE-S3 encryption using the global, default encryption key, if one has been configured. Previously, the default key could be used only with SSE-KMS encryption.
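For example, an S3 client requests SSE-S3 by sending the standard x-amz-server-side-encryption: AES256 header; with the AWS CLI that looks like the following sketch, where the endpoint, bucket, and object names are placeholders:
aws --endpoint-url http://rgw.example.com:8080 s3 cp ./payroll.csv s3://mybucket/payroll.csv --sse AES256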
The x-amz-version-id header is now supported
The x-amz-version-id header is now returned by PUT operations on versioned buckets to conform to the S3 protocol. With this enhancement, clients now know the version ID of the objects they create.
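For example, with the AWS CLI the returned version ID appears in the put-object output when bucket versioning is enabled; the endpoint, names, and the sample value below are placeholders:
aws --endpoint-url http://rgw.example.com:8080 s3api put-object --bucket mybucket --key report.csv --body ./report.csv
# The response now includes a VersionId field, for example:
# { "ETag": "...", "VersionId": "dQyLkHKc3Q9jv6pUTaN4Qz7q9sTjXkU" }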
New commands to view the RADOS objects and orphans
This release adds two new commands to view how the Ceph Object Gateway maps its data to RADOS objects and to produce a potential list of orphans for further processing. The radosgw-admin bucket radoslist --bucket=<bucket_name> command lists all RADOS objects that back the bucket. The rgw-orphan-list command lists all orphans in a specified pool. Both commands keep intermediate results on the local file system.
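For example; the bucket and pool names are placeholders, and both commands write their intermediate results to the local file system as noted above:
# List every RADOS object that backs the bucket "mybucket"
radosgw-admin bucket radoslist --bucket=mybucket
# Produce a list of potential orphans in the Object Gateway data pool
rgw-orphan-list default.rgw.buckets.data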
Ability to associate one email address to multiple user accounts
This update adds the ability to create multiple Ceph Object Gateway (RGW) user accounts with the same email address.
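For example, both of the following now succeed; the user IDs, display names, and email address are placeholders:
radosgw-admin user create --uid=alice --display-name="Alice" --email=team@example.com
radosgw-admin user create --uid=bob --display-name="Bob" --email=team@example.com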
Ability to search for users by access-key
This update adds the ability to search for users by using the access key as a search string with the radosgw-admin utility:
radosgw-admin user info --access-key key
Keystone S3 credential caching has been implemented
The Keystone S3 credential caching feature permits using AWSv4 request signing (AWS_HMAC_SHA256) with Keystone as an authentication source, and it accelerates S3 authentication against Keystone. Enabling AWSv4 request signing also increases client security.
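A hedged ceph.conf sketch of the Ceph Object Gateway options typically involved in Keystone-backed S3 authentication; the section name, URL, and values are placeholders, and the option names follow the upstream Keystone integration documentation:
[client.rgw.gateway-node1]
rgw keystone url = http://keystone.example.com:5000
rgw keystone api version = 3
rgw s3 auth use keystone = true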
3.6. Packages
nfs-ganesha has been updated to the latest version
The nfs-ganesha package is now based on the upstream version 2.7.4, which provides a number of bug fixes and enhancements over the previous version.
3.7. RADOS
OSD BlueStore is now fully supported
BlueStore is a new back end for the OSD daemons that allows for storing objects directly on the block devices. Because BlueStore does not need any file system interface, it improves performance of Ceph Storage Clusters.
To learn more about the BlueStore OSD back end, see the OSD BlueStore chapter in the Administration Guide for Red Hat Ceph Storage 3.
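For example, a new BlueStore OSD can be created directly with ceph-volume; the device name is a placeholder:
ceph-volume lvm create --bluestore --data /dev/sdb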
A new configuration option: osd_map_message_max_bytes
The Ceph Monitor can sometimes send osdmap messages to clients, such as the Ceph File System kernel client, that are too large, causing traffic problems. A configuration option named osd_map_message_max_bytes has been added with a default value of 10 MiB. This allows the cluster to respond in a more timely manner.
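If needed, the default can be overridden in ceph.conf; a minimal sketch with the value given in bytes:
[global]
osd_map_message_max_bytes = 10485760   # 10 MiB, the new default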
The default BlueStore and BlueFS allocator is now bitmap
Previously, the default allocator for BlueStore and BlueFS was the stupid allocator. This allocator spreads allocations over the entire device because it allocates the first extent it finds that is large enough, starting from the last place it allocated. The stupid allocator tracks each extent in a separate B-tree, so the amount of memory used depends on the number of extents. This behavior causes more fragmentation and requires more memory to track free space. With this update, the default allocator has been changed to bitmap. The bitmap allocator allocates based on the first extent possible from the start of the disk, so large extents are preserved. It uses a fixed-size tree of bitmaps to track free space, thus using constant memory regardless of the number of extents. As a result, the new allocator causes less fragmentation and requires less memory.
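If you need to pin the allocator explicitly, for example to revert to the previous behavior for testing, it can be set in ceph.conf; a minimal sketch:
[osd]
bluestore_allocator = bitmap
bluefs_allocator = bitmap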
osdmaptool has a new option for the Ceph upmap balancer
The new --upmap-active option for the osdmaptool command calculates and displays the number of rounds that the active balancer must complete to optimize all upmap items. The balancer completes one round per minute. The upmap.out file contains a line for each upmap item.
Example
$ ceph osd getmap > mymap
got osdmap epoch ####
$ osdmaptool --upmap upmap.out --upmap-active mymap
osdmaptool: osdmap file 'mymap'
writing upmap command output to: upmap.out
checking for upmap cleanups
upmap, max-count 10, max deviation 5
 pools .......
 ....
prepared 0/10 changes
Time elapsed ####### secs
Unable to find further optimization, or distribution is already perfect
osd.0 pgs ###
osd.1 pgs ###
osd.2 pgs ###
.....
Total time elapsed ######### secs, ## rounds
The ability to inspect BlueStore fragmentation
This update adds the ability to inspect fragmentation of the BlueStore back end. To do so, use the ceph daemon command or the ceph-bluestore-tool utility.
For details, see the Red Hat Ceph Storage 3 Administration Guide.
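Hedged sketches of the two approaches; the OSD ID and data path are placeholders, and the Administration Guide referenced above remains the authoritative source for the exact commands:
# Online, through the admin socket of a running OSD
ceph daemon osd.7 bluestore allocator score block
# Offline, against a stopped OSD's data directory
ceph-bluestore-tool free-score --path /var/lib/ceph/osd/ceph-7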
Updated the Ceph debug log to include the source IP address on failed incoming CRC messages
Previously, when a failed incoming Cyclic Redundancy Check (CRC) message was logged in the Ceph debug log, only a warning about the failure was recorded. With this release, the source IP address is added to this warning message. This helps system administrators identify which clients and daemons might have networking issues.
New omap usage statistics per PG and OSD
This update adds better reporting of omap data usage at a per placement group (PG) and per OSD level. PG-level data is gathered opportunistically during a deep scrub. Additional fields have been added to the output of the ceph osd df and various ceph pg commands to display the new values.
Listing RADOS objects in a specific PG
The rados ls command now accepts the --pgid option to list the RADOS objects in a specific placement group (PG).
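For example; the placement group ID is a placeholder, and on some builds you may also need to pass the pool with -p:
rados ls --pgid 7.2b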
PG IDs added to omap log messages
The large omap log messages now include placement group IDs to aid in locating the object.
The rocksdb_cache_size option default is now 512 MB
The default value of the BlueStore OSD rocksdb_cache_size option has been changed to 512 MB to help with compaction.
The RocksDB compaction threads default value has changed
The new default value for the max_background_compactions option is 2; the old default value was 1. This option controls the number of concurrent background compaction threads. As a result, this change improves performance for write-heavy OMAP workloads.