Chapter 3. Major Updates
This section lists all major updates, enhancements, and new features introduced in this release of Red Hat Ceph Storage.
A heartbeat message for Jumbo frames has been added
Previously, if a network included jumbo frames and the maximum transmission unit (MTU) was not configured properly on all parts of the network, problems such as slow requests and stuck peering and backfilling processes occurred. In addition, the OSD logs did not include any heartbeat timeout messages because the heartbeat message packet size is below 1500 bytes. This update adds a heartbeat message for jumbo frames.
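For example, an MTU mismatch of this kind can be spotted by sending a non-fragmentable payload sized for the expected jumbo MTU between cluster nodes. This is a minimal sketch; the interface name, host name, and a 9000-byte MTU are assumptions for illustration:
# Check the configured MTU on the cluster-facing interface
ip link show eth0 | grep mtu
# 8972 bytes of payload plus IP/ICMP headers fills a 9000-byte MTU; -M do forbids fragmentation
ping -M do -s 8972 -c 3 osd-node-1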
The osd_hack_prune_past_interval option is now supported
The osd_hack_prune_past_interval option helps to reduce memory usage for past interval entries, which can help with the recovery of unhealthy clusters.
This option can cause data loss. Therefore, use it only when instructed to do so by Red Hat Support Engineers.
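If Red Hat Support instructs you to enable it, an option of this kind is typically set in the [osd] section of the Ceph configuration file. The following is only an illustration; the boolean value is an assumption, and the option must not be enabled without guidance from Red Hat Support:
[osd]
osd_hack_prune_past_interval = true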
The default value for the min_in_ratio option has been increased to 0.75
The min_in_ratio option prevents Monitors from marking OSDs as out when doing so would cause the fraction of in OSDs to drop below a certain fraction of all OSDs in the cluster. In previous releases, the default value of min_in_ratio was set to 0.3. With this update, the value has been increased to 0.75.
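As a sketch, the default can be overridden in the Ceph configuration file. Placing the setting in the [mon] section and the mon_osd_min_in_ratio spelling used by upstream Ceph are assumptions here; the release note above refers to the option as min_in_ratio:
[mon]
mon_osd_min_in_ratio = 0.75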
RocksDB is enabled as an option to replace levelDB
This update enables an option to use the RocksDB back end for the omap database instead of levelDB. RocksDB uses multi-threading for compaction, so it better handles situations where the omap directories become very large (more than 40 GB). LevelDB compaction takes a long time in such situations and causes OSD daemons to time out.
For details about converting from levelDB to RocksDB, see the Ceph - Steps to convert OSD omap backend from leveldb to rocksdb solution on the Red Hat Customer Portal.
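As an illustrative sketch, newly provisioned FileStore OSDs can pick up the RocksDB back end by setting the omap back-end option before deployment. The filestore_omap_backend spelling is the upstream option name and is an assumption here; existing OSDs must be converted with the procedure from the linked solution:
[osd]
filestore_omap_backend = rocksdb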
The software repository containing the ceph-ansible package has changed
Earlier versions of Red Hat Ceph Storage relied on the ceph-ansible package in the rhel-7-server-rhscon-2-installer-rpms repository. The Red Hat Storage Console 2 product is nearing end-of-life. Therefore, use the ceph-ansible package provided by the rhel-7-server-rhceph-2-tools-rpms repository instead.
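On a Red Hat Enterprise Linux 7 administration node, the switch might look like the following sketch; the exact repositories available depend on your subscriptions:
subscription-manager repos --disable=rhel-7-server-rhscon-2-installer-rpms
subscription-manager repos --enable=rhel-7-server-rhceph-2-tools-rpms
yum install ceph-ansible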
Changing the default compression behavior of rocksdb
Disabling compression reduces the size of the I/O operations, but not the I/O operations themselves.
The old default values:
filestore_rocksdb_options = "max_background_compactions=8;compaction_readahead_size=2097152"
The new default values:
filestore_rocksdb_options = "max_background_compactions=8;compaction_readahead_size=2097152;compression=kNoCompression"
Also, this change does not affect any existing OSDs; it applies only to OSDs that have been manually converted to RocksDB and to newly provisioned OSDs.
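If you want the new behavior on such OSDs explicitly, the setting goes in the Ceph configuration file; the [osd] section placement below is a sketch and simply repeats the new default shown above:
[osd]
filestore_rocksdb_options = "max_background_compactions=8;compaction_readahead_size=2097152;compression=kNoCompression"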
The RocksDB cache size can now be larger than 2 GB
Previously, you could not set values larger than 2 GB. Now, the rocksdb_cache_size parameter can be set to a larger size, such as 4 GB.
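For example, assuming the parameter takes a value in bytes, as is common for Ceph size options, a 4 GB cache could be configured as follows:
[osd]
rocksdb_cache_size = 4294967296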
Support for the Red Hat Ceph Storage Dashboard
The Red Hat Ceph Storage Dashboard provides a monitoring dashboard for Ceph clusters to visualize the cluster state. The dashboard is accessible from a web browser and provides a number of metrics and graphs about the state of the cluster, Monitors, OSDs, Pools, or network.
For details, see the Monitoring Ceph Clusters with Red Hat Ceph Storage Dashboard section in the Administration Guide for Red Hat Ceph Storage 2.
Split threshold is now randomized
Previously, the split threshold was not randomized, so many OSDs reached it at the same time. As a consequence, such OSDs incurred high latency because they all split directories at once. With this update, the split threshold is randomized, which ensures that OSDs split directories gradually over a longer period of time.
Logging the timeout of disk operations
By default, Ceph OSDs now log when they shut down due to disk operations timing out.
The --yes-i-really-mean-it override option is mandatory for executing the radosgw-admin orphans find command
The radosgw-admin orphans find command can inadvertently remove data objects still in use if it is followed by another operation, such as a rados rm command. Users are now warned before attempting to produce lists of potentially orphaned objects.
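A sketch of the command with the mandatory override; the pool name and job ID are placeholders:
radosgw-admin orphans find --pool=<data_pool> --job-id=<job_id> --yes-i-really-mean-it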
Perform offline compaction on an OSD
The ceph-osdomap-tool now has a compact command to perform offline compaction on an OSD’s omap directory.
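For example, with the OSD daemon stopped, offline compaction might be invoked along the following lines; the omap path shown assumes a default FileStore layout for osd.0:
ceph-osdomap-tool --omap-path /var/lib/ceph/osd/ceph-0/current/omap --command compact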
For S3 and Swift protocols, an option to list buckets/containers in natural (partial) order has been added
Listing containers in sorted order is canonical in both protocols, but is costly, and not required by some client applications. The performance and workload cost of S3 and Swift bucket/container listings is reduced for sharded buckets/containers when the allow_unordered extension is used.
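As an illustration only, the unordered listing extension is surfaced on the S3 API as a query parameter on the bucket listing request; the allow-unordered parameter spelling follows the upstream convention and should be verified against your release:
GET /mybucket/?allow-unordered=true HTTP/1.1
Host: rgw.example.com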
Asynchronous Garbage Collection
An asynchronous mechanism for executing the Ceph Object Gateway garbage collection using the librados APIs has been introduced. The original garbage collection mechanism serialized all processing, and lagged behind applications in specific workloads. Garbage collection performance has been significantly improved, and can be tuned to specific site requirements.
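Tuning is done through the existing Ceph Object Gateway garbage collection options in the Ceph configuration file. The following sketch uses illustrative values and a placeholder client section name, not recommended settings:
[client.rgw.<gateway-node>]
rgw_gc_max_objs = 32
rgw_gc_obj_min_wait = 7200
rgw_gc_processor_period = 3600
rgw_gc_processor_max_time = 3600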
Deploying Ceph using ceph-ansible on Ubuntu
Previously, Red Hat did not provide the ceph-ansible package for Ubuntu. With this release, you can use the Ansible automation application to deploy a Ceph Storage Cluster from an Ubuntu node. See Chapter 3 in the Red Hat Ceph Storage Installation Guide for Ubuntu for more details.
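At a high level, the workflow on an Ubuntu Ansible administration node looks roughly like the following sketch; the package name and playbook location are the usual ceph-ansible defaults and may differ in your environment, so follow the Installation Guide for the authoritative steps:
sudo apt-get install ceph-ansible
cd /usr/share/ceph-ansible
# adjust group_vars/all.yml and the Ansible inventory for your cluster, then run:
ansible-playbook site.yml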