Chapter 15. Cephadm operations
As a storage administrator, you can carry out Cephadm operations in the Red Hat Ceph Storage cluster.
15.1. Monitor cephadm log messages
Cephadm logs to the cephadm cluster log channel so that you can monitor progress in real time.
To monitor progress in real time, run the following command:
Example
[ceph: root@host01 /]# ceph -W cephadm
Example
2022-06-10T17:51:36.335728+0000 mgr.Ceph5-1.nqikfh [INF] refreshing Ceph5-adm facts
2022-06-10T17:51:37.170982+0000 mgr.Ceph5-1.nqikfh [INF] deploying 1 monitor(s) instead of 2 so monitors may achieve consensus
2022-06-10T17:51:37.173487+0000 mgr.Ceph5-1.nqikfh [ERR] It is NOT safe to stop ['mon.Ceph5-adm']: not enough monitors would be available (Ceph5-2) after stopping mons [Ceph5-adm]
2022-06-10T17:51:37.174415+0000 mgr.Ceph5-1.nqikfh [INF] Checking pool "nfs-ganesha" exists for service nfs.foo
2022-06-10T17:51:37.176389+0000 mgr.Ceph5-1.nqikfh [ERR] Failed to apply nfs.foo spec NFSServiceSpec({'placement': PlacementSpec(count=1), 'service_type': 'nfs', 'service_id': 'foo', 'unmanaged': False, 'preview_only': False, 'pool': 'nfs-ganesha', 'namespace': 'nfs-ns'}): Cannot find pool "nfs-ganesha" for service nfs.foo
Traceback (most recent call last):
  File "/usr/share/ceph/mgr/cephadm/serve.py", line 408, in _apply_all_services
    if self._apply_service(spec):
  File "/usr/share/ceph/mgr/cephadm/serve.py", line 509, in _apply_service
    config_func(spec)
  File "/usr/share/ceph/mgr/cephadm/services/nfs.py", line 23, in config
    self.mgr._check_pool_exists(spec.pool, spec.service_name())
  File "/usr/share/ceph/mgr/cephadm/module.py", line 1840, in _check_pool_exists
    raise OrchestratorError(f'Cannot find pool "{pool}" for '
orchestrator._interface.OrchestratorError: Cannot find pool "nfs-ganesha" for service nfs.foo
2022-06-10T17:51:37.179658+0000 mgr.Ceph5-1.nqikfh [INF] Found osd claims -> {}
2022-06-10T17:51:37.180116+0000 mgr.Ceph5-1.nqikfh [INF] Found osd claims for drivegroup all-available-devices -> {}
2022-06-10T17:51:37.182138+0000 mgr.Ceph5-1.nqikfh [INF] Applying all-available-devices on host Ceph5-adm...
2022-06-10T17:51:37.182987+0000 mgr.Ceph5-1.nqikfh [INF] Applying all-available-devices on host Ceph5-1...
2022-06-10T17:51:37.183395+0000 mgr.Ceph5-1.nqikfh [INF] Applying all-available-devices on host Ceph5-2...
2022-06-10T17:51:43.373570+0000 mgr.Ceph5-1.nqikfh [INF] Reconfiguring node-exporter.Ceph5-1 (unknown last config time)...
2022-06-10T17:51:43.373840+0000 mgr.Ceph5-1.nqikfh [INF] Reconfiguring daemon node-exporter.Ceph5-1 on Ceph5-1
By default, the log displays info-level events and above. To see the debug-level messages, run the following commands:
Example
[ceph: root@host01 /]# ceph config set mgr mgr/cephadm/log_to_cluster_level debug
[ceph: root@host01 /]# ceph -W cephadm --watch-debug
[ceph: root@host01 /]# ceph -W cephadm --verbose
To return the debugging level to the default info level, run the following command:
Example
[ceph: root@host01 /]# ceph config set mgr mgr/cephadm/log_to_cluster_level info
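If you want to confirm which level is currently in effect, you can query the option with the general-purpose ceph config get command; this check is optional:
Example
[ceph: root@host01 /]# ceph config get mgr mgr/cephadm/log_to_cluster_level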
To see the recent events, run the following command:
Example
[ceph: root@host01 /]# ceph log last cephadm
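The ceph log last command also accepts an optional line count and severity level. The following sketch limits output to the last 25 debug-level cephadm entries; the exact argument order can vary between releases, so adjust it as needed:
Example
[ceph: root@host01 /]# ceph log last 25 debug cephadm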
These events are also logged to the ceph.cephadm.log file on the monitor hosts and to the monitor daemon’s stderr.
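If logging to files is enabled, as described in the next section, you can also follow this file directly on a monitor host. The path below assumes the default file log location under /var/log/ceph/CLUSTER_FSID; substitute your storage cluster’s FSID:
Syntax
tail -f /var/log/ceph/CLUSTER_FSID/ceph.cephadm.log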
15.2. Ceph daemon logs
You can view the Ceph daemon logs through stderr or files.
Logging to stdout
Traditionally, Ceph daemons have logged to /var/log/ceph. By default, Cephadm daemons log to stderr and the logs are captured by the container runtime environment. For most systems, by default, these logs are sent to journald and are accessible through the journalctl command.
For example, to view the logs for the daemon on host01 for a storage cluster with ID 5c5a50ae-272a-455d-99e9-32c6a013e694:
Example
[ceph: root@host01 /]# journalctl -u ceph-5c5a50ae-272a-455d-99e9-32c6a013e694@host01
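To follow the log of one specific daemon, append the daemon name to the systemd unit and add the -f option. The daemon name mon.host01 below is only an illustration; replace it with a daemon listed by ceph orch ps:
Example
[ceph: root@host01 /]# journalctl -u ceph-5c5a50ae-272a-455d-99e9-32c6a013e694@mon.host01 -f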
This works well for normal Cephadm operations when logging levels are low.
To disable logging to stderr, set the following values:
Example
[ceph: root@host01 /]# ceph config set global log_to_stderr false
[ceph: root@host01 /]# ceph config set global mon_cluster_log_to_stderr false
Logging to files
You can also configure Ceph daemons to log to files instead of stderr. When logging to files, Ceph logs are located in /var/log/ceph/CLUSTER_FSID.
To enable logging to files, set the following values:
Example
[ceph: root@host01 /]# ceph config set global log_to_file true
[ceph: root@host01 /]# ceph config set global mon_cluster_log_to_file true
Red Hat recommends disabling logging to stderr to avoid double logs.
Currently log rotation to a non-default path is not supported.
By default, Cephadm sets up log rotation on each host to rotate these files. You can configure the logging retention schedule by modifying /etc/logrotate.d/ceph.CLUSTER_FSID.
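The following is a minimal sketch of what an adjusted retention policy in that file might look like, keeping seven compressed daily rotations. The exact directives that Cephadm generates can differ between releases, so treat this only as an illustration:
Example
/var/log/ceph/CLUSTER_FSID/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    su root root
}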
15.3. Data location
Cephadm daemon data and logs are located in slightly different locations than in older versions of Ceph:
- /var/log/ceph/CLUSTER_FSID contains all the storage cluster logs. Note that by default Cephadm logs through stderr and the container runtime, so these logs are usually not present.
- /var/lib/ceph/CLUSTER_FSID contains all the cluster daemon data, besides logs.
- /var/lib/ceph/CLUSTER_FSID/DAEMON_NAME contains all the data for a specific daemon.
- /var/lib/ceph/CLUSTER_FSID/crash contains the crash reports for the storage cluster.
- /var/lib/ceph/CLUSTER_FSID/removed contains old daemon data directories for stateful daemons, for example monitor or Prometheus, that have been removed by Cephadm.
Disk usage
A few Ceph daemons may store a significant amount of data in /var/lib/ceph, notably the monitors and the Prometheus daemon. Therefore, Red Hat recommends moving this directory to its own disk, partition, or logical volume so that the root file system does not fill up.
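To check how much space the daemon data is currently using, a standard disk-usage query on the host is sufficient; the following command is generic and not specific to Cephadm:
Example
[root@host01 ~]# du -sh /var/lib/ceph/*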
15.4. Cephadm custom config files
Cephadm supports specifying miscellaneous configuration files for daemons. You must provide both the content of the configuration file and the location within the daemon’s container where it should be mounted.
When you apply a YAML specification with custom config files, Cephadm redeploys the daemons for which the config files are specified, and the files are then mounted within each daemon’s container at the specified location.
You can apply a YAML spec with custom config files:
Example
service_type: grafana
service_name: grafana
custom_configs:
  - mount_path: /etc/example.conf
    content: |
      setting1 = value1
      setting2 = value2
  - mount_path: /usr/share/grafana/example.cert
    content: |
      -----BEGIN PRIVATE KEY-----
      V2VyIGRhcyBsaWVzdCBpc3QgZG9vZi4gTG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNldGV0dXIgc2FkaXBzY2luZyBlbGl0ciwgc2VkIGRpYW0gbm9udW15IGVpcm1vZCB0ZW1wb3IgaW52aWR1bnQgdXQgbGFib3JlIGV0IGRvbG9yZSBtYWduYSBhbGlxdXlhbSBlcmF0LCBzZWQgZGlhbSB2b2x1cHR1YS4gQXQgdmVybyBlb3MgZXQgYWNjdXNhbSBldCBqdXN0byBkdW8=
      -----END PRIVATE KEY-----
      -----BEGIN CERTIFICATE-----
      V2VyIGRhcyBsaWVzdCBpc3QgZG9vZi4gTG9yZW0gaXBzdW0gZG9sb3Igc2l0IGFtZXQsIGNvbnNldGV0dXIgc2FkaXBzY2luZyBlbGl0ciwgc2VkIGRpYW0gbm9udW15IGVpcm1vZCB0ZW1wb3IgaW52aWR1bnQgdXQgbGFib3JlIGV0IGRvbG9yZSBtYWduYSBhbGlxdXlhbSBlcmF0LCBzZWQgZGlhbSB2b2x1cHR1YS4gQXQgdmVybyBlb3MgZXQgYWNjdXNhbSBldCBqdXN0byBkdW8=
      -----END CERTIFICATE-----
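To apply the specification, save it to a file and pass the file to the orchestrator with ceph orch apply. The file name grafana-custom-config.yaml below is only an example; use the name you saved the spec under:
Syntax
ceph orch apply -i FILE_NAME.yaml
Example
[ceph: root@host01 /]# ceph orch apply -i grafana-custom-config.yaml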
To mount the new config files within the containers of the daemons, redeploy the daemons:
Syntax
ceph orch redeploy SERVICE_NAME
Example
[ceph: root@host01 /]# ceph orch redeploy grafana
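If you want to confirm that the daemons were redeployed, you can list them with ceph orch ps; the service-name filter shown here is optional, and a plain ceph orch ps works as well:
Example
[ceph: root@host01 /]# ceph orch ps --service-name grafana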