Chapter 1. The basics of Ceph configuration
As a storage administrator, you need to have a basic understanding of how to view the Ceph configuration, and how to set the Ceph configuration options for the Red Hat Ceph Storage cluster. You can view and set the Ceph configuration options at runtime.
Prerequisites
- Installation of the Red Hat Ceph Storage software.
1.1. Ceph configuration
All Red Hat Ceph Storage clusters have a configuration, which defines:
- Cluster Identity
- Authentication settings
- Ceph daemons
- Network configuration
- Node names and addresses
- Paths to keyrings
- Paths to OSD log files
- Other runtime options
A deployment tool, such as cephadm, will typically create an initial Ceph configuration file for you. However, you can create one yourself if you prefer to bootstrap a Red Hat Ceph Storage cluster without using a deployment tool.
Additional Resources
- For more information about cephadm and the Ceph orchestrator, see the Red Hat Ceph Storage Operations Guide.
1.2. The Ceph configuration database
The Ceph Monitor manages a configuration database of Ceph options that centralizes configuration management by storing configuration options for the entire storage cluster. Centralizing the Ceph configuration in a database simplifies storage cluster administration.
The priority order that Ceph uses to set options is:
- Compiled-in default values
- Ceph cluster configuration database
- Local ceph.conf file
- Runtime override, using the ceph daemon DAEMON-NAME config set or ceph tell DAEMON-NAME injectargs commands
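As an illustration of the highest-priority mechanism, a runtime override of the debug_osd option on a single OSD might look like the following; the daemon name osd.0 and the value 20 are chosen only for this example:
Example
[root@mon ~]# ceph tell osd.0 injectargs '--debug_osd 20'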
There are still a few Ceph options that can be defined in the local Ceph configuration file, which is /etc/ceph/ceph.conf by default. However, ceph.conf has been deprecated for Red Hat Ceph Storage 7.
cephadm uses a basic ceph.conf file that only contains a minimal set of options for connecting to Ceph Monitors, authenticating, and fetching configuration information. In most cases, cephadm uses only the mon_host option. To avoid using ceph.conf only for the mon_host option, use DNS SRV records to perform operations with Monitors.
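For reference, a minimal cephadm-style ceph.conf usually contains little more than the cluster fsid and the mon_host option; the fsid and addresses below are placeholders, not values from a real cluster:
Example
[global]
fsid = 4a158d27-f750-41d5-9e7f-26ce4c9d2d45
mon_host = [v2:10.0.0.10:3300/0,v1:10.0.0.10:6789/0]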
Red Hat recommends that you use the assimilate-conf administrative command to move valid options into the configuration database from the ceph.conf file. For more information about assimilate-conf, see Administrative Commands.
Ceph allows you to make changes to the configuration of a daemon at runtime. This capability can be useful for increasing or decreasing logging output, enabling or disabling debug settings, and even for runtime optimization.
When the same option exists in the configuration database and the Ceph configuration file, the configuration database option has a lower priority than what is set in the Ceph configuration file.
Sections and Masks
Just as you can configure Ceph options globally, per daemon type, or by a specific daemon in the Ceph configuration file, you can also configure the Ceph options in the configuration database according to these sections:
Section | Description |
---|---|
global | Affects all daemons and clients. |
mon | Affects all Ceph Monitors. |
mgr | Affects all Ceph Managers. |
osd | Affects all Ceph OSDs. |
mds | Affects all Ceph Metadata Servers. |
client | Affects all Ceph Clients, including mounted file systems, block devices, and RADOS Gateways. |
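For example, to apply an option to every Ceph OSD rather than to a single daemon, target the osd section; the option and value here mirror the runtime examples later in this chapter:
Example
[root@mon ~]# ceph config set osd debug_osd 10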
Ceph configuration options can have a mask associated with them. These masks can further restrict which daemons or clients the options apply to.
Masks have two forms:
type:location
The type is a CRUSH property, for example, rack or host. The location is a value for the property type. For example, host:foo limits the option only to daemons or clients running on the foo host.
Example
ceph config set osd/host:magna045 debug_osd 20
class:device-class
The device-class is the name of the CRUSH device class, such as hdd or ssd. For example, class:ssd limits the option only to Ceph OSDs backed by solid state drives (SSD). This mask has no effect on non-OSD daemons or clients.
Example
ceph config set osd/class:hdd osd_max_backfills 8
Administrative Commands
The Ceph configuration database can be administered with the ceph config ACTION subcommand. These are the available actions:
ls
- Lists the available configuration options.
dump
- Dumps the entire configuration database of options for the storage cluster.
get WHO
- Dumps the configuration for a specific daemon or client. For example, WHO can be a daemon, like mds.a.
set WHO OPTION VALUE
- Sets a configuration option in the Ceph configuration database, where WHO is the target daemon, OPTION is the option to set, and VALUE is the desired value.
show WHO
- Shows the reported running configuration for a running daemon. These options might be different from those stored by the Ceph Monitors if there is a local configuration file in use or options have been overridden on the command line or at run time. Also, the source of the option values is reported as part of the output.
assimilate-conf -i INPUT_FILE -o OUTPUT_FILE
- Assimilates a configuration file from the INPUT_FILE and moves any valid options into the Ceph Monitors’ configuration database. Any options that are unrecognized, invalid, or cannot be controlled by the Ceph Monitor are returned in an abbreviated configuration file stored in the OUTPUT_FILE. This command can be useful for transitioning from legacy configuration files to a centralized configuration database. Note that when you assimilate a configuration and the Monitors or other daemons have different configuration values set for the same set of options, the end result depends on the order in which the files are assimilated.
help OPTION -f json-pretty
- Displays help for a particular OPTION using a JSON-formatted output.
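As a sketch of the assimilate-conf action, the following assimilates the default configuration file and writes any unrecognized options to a leftover file; the output path is illustrative:
Example
[root@mon ~]# ceph config assimilate-conf -i /etc/ceph/ceph.conf -o /etc/ceph/ceph.conf.leftover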
Additional Resources
- For more information about the command, see Setting a specific configuration at runtime.
1.3. Using the Ceph metavariables
Metavariables simplify Ceph storage cluster configuration dramatically. When a metavariable is set in a configuration value, Ceph expands the metavariable into a concrete value.
Metavariables are very powerful when used within the [global], [osd], [mon], or [client] sections of the Ceph configuration file. However, you can also use them with the administration socket. Ceph metavariables are similar to Bash shell expansion.
Ceph supports the following metavariables:
$cluster
- Description
- Expands to the Ceph storage cluster name. Useful when running multiple Ceph storage clusters on the same hardware.
- Example
- /etc/ceph/$cluster.keyring
- Default
- ceph
$type
- Description
- Expands to one of osd or mon, depending on the type of the instant daemon.
- Example
- /var/lib/ceph/$type
$id
- Description
- Expands to the daemon identifier. For osd.0, this would be 0.
- Example
- /var/lib/ceph/$type/$cluster-$id
$host
- Description
- Expands to the host name of the instant daemon.
$name
- Description
- Expands to $type.$id.
- Example
- /var/run/ceph/$cluster-$name.asok
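For example, for the default cluster name ceph and the daemon osd.0, the path /var/run/ceph/$cluster-$name.asok expands to /var/run/ceph/ceph-osd.0.asok.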
1.4. Viewing the Ceph configuration at runtime
The Ceph configuration files can be viewed at boot time and at runtime.
Prerequisites
- Root-level access to the Ceph node.
- Access to admin keyring.
Procedure
To view a runtime configuration, log in to a Ceph node running the daemon and execute:
Syntax
ceph daemon DAEMON_TYPE.ID config show
To see the configuration for osd.0, you can log into the node containing osd.0 and execute this command:
Example
[root@osd ~]# ceph daemon osd.0 config show
For additional options, specify a daemon and help.
Example
[root@osd ~]# ceph daemon osd.0 help
1.5. Viewing a specific configuration at runtime
Configuration settings for Red Hat Ceph Storage can be viewed at runtime from the Ceph Monitor node.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to the Ceph Monitor node.
Procedure
Log into a Ceph node and execute:
Syntax
ceph daemon DAEMON_TYPE.ID config get PARAMETER
Example
[root@mon ~]# ceph daemon osd.0 config get public_addr
1.6. Setting a specific configuration at runtime
To set a specific Ceph configuration at runtime, use the ceph config set
command.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to the Ceph Monitor or OSD nodes.
Procedure
Set the configuration on all Monitor or OSD daemons:
Syntax
ceph config set DAEMON CONFIG-OPTION VALUE
Example
[root@mon ~]# ceph config set osd debug_osd 10
Validate that the option and value are set:
Example
[root@mon ~]# ceph config dump
osd   advanced   debug_osd   10/10
To remove the configuration option from all daemons:
Syntax
ceph config rm DAEMON CONFIG-OPTION
Example
[root@mon ~]# ceph config rm osd debug_osd
To set the configuration for a specific daemon:
Syntax
ceph config set DAEMON.DAEMON-NUMBER CONFIG-OPTION VALUE
Example
[root@mon ~]# ceph config set osd.0 debug_osd 10
To validate that the configuration is set for the specified daemon:
Example
[root@mon ~]# ceph config dump
osd.0   advanced   debug_osd   10/10
To remove the configuration for a specific daemon:
Syntax
ceph config rm DAEMON.DAEMON-NUMBER CONFIG-OPTION
Example
[root@mon ~]# ceph config rm osd.0 debug_osd
If you use a client that does not support reading options from the configuration database, or if you still need to use ceph.conf to change your cluster configuration for other reasons, run the following command:
ceph config set mgr mgr/cephadm/manage_etc_ceph_ceph_conf false
You must maintain and distribute the ceph.conf file across the storage cluster.
1.7. OSD Memory Target
BlueStore keeps OSD heap memory usage under a designated target size with the osd_memory_target configuration option.
The option osd_memory_target sets OSD memory based upon the available RAM in the system. Use this option when TCMalloc is configured as the memory allocator, and when the bluestore_cache_autotune option in BlueStore is set to true.
Ceph OSD memory caching is more important when the block device is slow, for example, with traditional hard drives, because the benefit of a cache hit is much higher than it would be with a solid-state drive. However, this benefit must be weighed against the decision to colocate OSDs with other services, such as in a hyper-converged infrastructure (HCI) or alongside other applications.
1.7.1. Setting the OSD memory target
Use the osd_memory_target option to set the maximum memory threshold for all OSDs in the storage cluster, or for specific OSDs. An OSD with an osd_memory_target option set to 16 GB might use up to 16 GB of memory.
Configuration options for individual OSDs take precedence over the settings for all OSDs.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to all hosts in the storage cluster.
Procedure
To set osd_memory_target for all OSDs in the storage cluster:
Syntax
ceph config set osd osd_memory_target VALUE
VALUE is the number of GBytes of memory to be allocated to each OSD in the storage cluster.
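For example, to allow every OSD in the storage cluster to use up to 16 GB of memory; the value is chosen for illustration only:
Example
[ceph: root@host01 /]# ceph config set osd osd_memory_target 16G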
To set osd_memory_target for a specific OSD in the storage cluster:
Syntax
ceph config set osd.OSD_ID osd_memory_target VALUE
OSD_ID is the ID of the OSD and VALUE is the number of GB of memory to be allocated to the specified OSD. For example, to configure the OSD with ID 8 to use up to 16 GB of memory:
Example
[ceph: root@host01 /]# ceph config set osd.8 osd_memory_target 16G
To set an individual OSD to use one maximum amount of memory and configure the rest of the OSDs to use another amount, set the value for all OSDs and then override it for the individual OSD, because the individual setting takes precedence:
Example
[ceph: root@host01 /]# ceph config set osd osd_memory_target 16G
[ceph: root@host01 /]# ceph config set osd.8 osd_memory_target 8G
Additional resources
- To configure Red Hat Ceph Storage to autotune OSD memory usage, see Automatically tuning OSD memory in the Operations Guide.
1.8. Automatically tuning OSD memory
The OSD daemons adjust their memory consumption based on the osd_memory_target configuration option. The option osd_memory_target sets OSD memory based upon the available RAM in the system.
If Red Hat Ceph Storage is deployed on dedicated nodes that do not share memory with other services, cephadm automatically adjusts the per-OSD consumption based on the total amount of RAM and the number of deployed OSDs.
By default, the osd_memory_target_autotune parameter is set to true in the Red Hat Ceph Storage cluster.
Syntax
ceph config set osd osd_memory_target_autotune true
Cephadm starts with the fraction mgr/cephadm/autotune_memory_target_ratio, which defaults to 0.7 of the total RAM in the system, subtracts any memory consumed by non-autotuned daemons, such as non-OSD daemons and OSDs for which osd_memory_target_autotune is false, and then divides the result by the number of remaining OSDs.
The osd_memory_target parameter is calculated as follows:
Syntax
osd_memory_target = TOTAL_RAM_OF_THE_OSD_NODE (in MB) * (1048576) * (autotune_memory_target_ratio) / NUMBER_OF_OSDS_IN_THE_OSD_NODE - (SPACE_ALLOCATED_FOR_OTHER_DAEMONS)
SPACE_ALLOCATED_FOR_OTHER_DAEMONS may optionally include the following daemon space allocations:
- Alertmanager: 1 GB
- Grafana: 1 GB
- Ceph Manager: 4 GB
- Ceph Monitor: 2 GB
- Node-exporter: 1 GB
- Prometheus: 1 GB
For example, if a node has 24 OSDs and has 251 GB RAM space, then osd_memory_target is 7860684936.
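Working through the formula with those numbers, and assuming no space is reserved for other daemons, the total RAM is expressed in MB (251 GB = 257,024 MB) and the 1048576 factor converts it to bytes:
osd_memory_target = 257024 * 1048576 * 0.7 / 24 ≈ 7860684936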
The final targets are reflected in the configuration database options. You can view the limits and the current memory consumed by each daemon from the ceph orch ps output, under the MEM LIMIT column.
The default setting of osd_memory_target_autotune true is unsuitable for hyperconverged infrastructures where compute and Ceph storage services are colocated. In a hyperconverged infrastructure, the autotune_memory_target_ratio can be set to 0.2 to reduce the memory consumption of Ceph.
Example
[ceph: root@host01 /]# ceph config set mgr mgr/cephadm/autotune_memory_target_ratio 0.2
You can manually set a specific memory target for an OSD in the storage cluster.
Example
[ceph: root@host01 /]# ceph config set osd.123 osd_memory_target 7860684936
You can manually set a specific memory target for an OSD host in the storage cluster.
Syntax
ceph config set osd/host:HOSTNAME osd_memory_target TARGET_BYTES
Example
[ceph: root@host01 /]# ceph config set osd/host:host01 osd_memory_target 1000000000
Enabling osd_memory_target_autotune overwrites existing manual OSD memory target settings. To prevent daemon memory from being tuned even when the osd_memory_target_autotune option or other similar options are enabled, set the _no_autotune_memory label on the host.
Syntax
ceph orch host label add HOSTNAME _no_autotune_memory
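For example, to exclude the host host01 from memory autotuning; the hostname is reused from the earlier examples for illustration:
Example
[ceph: root@host01 /]# ceph orch host label add host01 _no_autotune_memory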
You can exclude an OSD from memory autotuning by disabling the autotune option and setting a specific memory target.
Example
[ceph: root@host01 /]# ceph config set osd.123 osd_memory_target_autotune false
[ceph: root@host01 /]# ceph config set osd.123 osd_memory_target 16G
1.9. MDS Memory Cache Limit
MDS servers keep their metadata in a separate storage pool, named cephfs_metadata, and are themselves users of Ceph OSDs. For Ceph File Systems, MDS servers have to support an entire Red Hat Ceph Storage cluster, not just a single storage device within the storage cluster, so their memory requirements can be significant, particularly if the workload consists of small-to-medium-size files, where the ratio of metadata to data is much higher.
Example: Set the mds_cache_memory_limit to 2000000000 bytes
ceph_conf_overrides:
  mds:
    mds_cache_memory_limit=2000000000
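Alternatively, the same limit can be applied through the centralized configuration database described earlier in this chapter; this is a sketch using the ceph config command rather than ceph_conf_overrides:
Example
[ceph: root@host01 /]# ceph config set mds mds_cache_memory_limit 2000000000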
For a large Red Hat Ceph Storage cluster with a metadata-intensive workload, do not put an MDS server on the same node as other memory-intensive services. Keeping the MDS server separate gives you the option to allocate more memory to it, for example, sizes greater than 100 GB.
Additional Resources
- See Metadata Server cache size limits in Red Hat Ceph Storage File System Guide.
- See the general Ceph configuration options in Configuration options for specific option descriptions and usage.