Chapter 6. Ceph Object Storage Daemon (OSD) configuration
As a storage administrator, you can configure the Ceph Object Storage Daemon (OSD) to be redundant and optimized based on the intended workload.
Prerequisites
- Installation of the Red Hat Ceph Storage software.
6.1. Ceph OSD configuration
All Ceph clusters have a configuration, which defines:
- Cluster identity
- Authentication settings
- Ceph daemon membership in the cluster
- Network configuration
- Host names and addresses
- Paths to keyrings
- Paths to OSD log files
- Other runtime options
A deployment tool, such as cephadm, will typically create an initial Ceph configuration file for you. However, you can create one yourself if you prefer to bootstrap a cluster without using a deployment tool.
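A minimal configuration file covers the cluster identity, monitor addresses, networks, and authentication settings listed above. The following is only a sketch; the fsid, addresses, and networks are placeholder values and must be replaced with the values for your own cluster.

    [global]
    fsid = a7f64266-0894-4f1e-a635-d0aebacb835d    # cluster identity (placeholder UUID)
    mon_host = 10.0.0.1,10.0.0.2,10.0.0.3          # monitor addresses (placeholders)
    public_network = 10.0.0.0/24                   # client-facing network (placeholder)
    cluster_network = 10.1.0.0/24                  # replication network (placeholder)
    auth_cluster_required = cephx                  # authentication settings
    auth_service_required = cephx
    auth_client_required = cephx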
For your convenience, each daemon has a series of default values. Many are set by the ceph/src/common/config_opts.h file. You can override these settings with a Ceph configuration file or at runtime by using the monitor tell command or by connecting directly to a daemon socket on a Ceph node.
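For example, the two runtime methods work as follows. This is a sketch that assumes an OSD with the ID osd.0; the daemon socket commands must be run on the node that hosts that daemon.

    # Override a setting at runtime through the monitors
    ceph tell osd.0 config set debug_osd 0/5
    # Query or override a setting directly over the daemon socket on the OSD node
    ceph daemon osd.0 config get osd_max_backfills
    ceph daemon osd.0 config set osd_max_backfills 1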
Red Hat does not recommend changing the default paths, as it makes it more difficult to troubleshoot Ceph later.
Additional Resources
- For more information about cephadm and the Ceph orchestrator, see the Red Hat Ceph Storage Operations Guide.
6.2. Scrubbing the OSD
In addition to making multiple copies of objects, Ceph ensures data integrity by scrubbing placement groups. Ceph scrubbing is analogous to running the fsck command on the object storage layer.
For each placement group, Ceph generates a catalog of all objects and compares each primary object and its replicas to ensure that no objects are missing or mismatched.
Light scrubbing (daily) checks the object size and attributes. Deep scrubbing (weekly) reads the data and uses checksums to ensure data integrity.
Scrubbing is important for maintaining data integrity, but it can reduce performance. Adjust the scrubbing settings, listed in the additional resources below, to increase or decrease scrubbing operations.
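For example, a common adjustment is to confine scrubbing to off-peak hours and limit concurrent scrubs. The option names below are standard Ceph OSD scrub options; the values are illustrative examples, not Red Hat recommendations for a specific workload.

    # Restrict scrubbing to a nightly window (example values)
    ceph config set osd osd_scrub_begin_hour 23
    ceph config set osd osd_scrub_end_hour 6
    # Limit concurrent scrub operations per OSD
    ceph config set osd osd_max_scrubs 1
    # Deep scrub interval in seconds (604800 = one week)
    ceph config set osd osd_deep_scrub_interval 604800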
Additional resources
- See Ceph scrubbing options in the appendix of the Red Hat Ceph Storage Configuration Guide for more details.
6.3. Backfilling an OSD
When you add Ceph OSDs to a cluster or remove them from it, the CRUSH algorithm rebalances the cluster by moving placement groups to or from Ceph OSDs. The process of migrating placement groups and the objects they contain can considerably reduce the cluster's operational performance. To maintain operational performance, Ceph performs this migration with the 'backfill' process, which allows Ceph to set backfill operations to a lower priority than requests to read or write data.
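For example, if client I/O must take priority while the cluster rebalances, backfill pressure can be reduced with options such as the following. This is a sketch; the values are illustrative, not recommendations for a specific workload.

    # Limit concurrent backfill operations to or from a single OSD
    ceph config set osd osd_max_backfills 1
    # Scan fewer objects per backfill scan to reduce load on the OSD (example values)
    ceph config set osd osd_backfill_scan_min 8
    ceph config set osd osd_backfill_scan_max 64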
6.4. OSD recovery
When the cluster starts or when a Ceph OSD terminates unexpectedly and restarts, the OSD begins peering with other Ceph OSDs before a write operation can occur.
If a Ceph OSD crashes and comes back online, usually it will be out of sync with other Ceph OSDs containing more recent versions of objects in the placement groups. When this happens, the Ceph OSD goes into recovery mode and seeks to get the latest copy of the data and bring its map back up to date. Depending upon how long the Ceph OSD was down, the OSD’s objects and placement groups may be significantly out of date. Also, if a failure domain went down, for example, a rack, more than one Ceph OSD might come back online at the same time. This can make the recovery process time consuming and resource intensive.
To maintain operational performance, Ceph performs recovery with limits on the number of recovery requests, threads, and object chunk sizes, which allows Ceph to perform well in a degraded state.
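A sketch of tuning these limits, assuming that client I/O should be favored over recovery traffic; the option names are standard Ceph OSD recovery options and the values are examples only.

    # Limit the number of active recovery requests per OSD
    ceph config set osd osd_recovery_max_active 1
    # Lower the priority of recovery operations relative to client I/O (range 1-63)
    ceph config set osd osd_recovery_op_priority 1
    # Cap the size of each recovered data chunk, in bytes (example: 1 MB)
    ceph config set osd osd_recovery_max_chunk 1048576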
Additional resources
- See the Object Storage Daemon (OSD) configuration options in the appendix of the Red Hat Ceph Storage Configuration Guide for descriptions and usage of all Ceph OSD configuration options.