Chapter 15. cephadm operations
As a storage administrator, you can carry out cephadm operations in a Red Hat Ceph Storage cluster.
15.1. Monitoring cephadm log messages
Cephadm logs to the cephadm cluster log channel so that you can monitor progress in real time.
To monitor progress in real time, run the following command:
Example
[ceph: root@host01 /]# ceph -W cephadm
Example
2022-06-10T17:51:36.335728+0000 mgr.Ceph5-1.nqikfh [INF] refreshing Ceph5-adm facts
2022-06-10T17:51:37.170982+0000 mgr.Ceph5-1.nqikfh [INF] deploying 1 monitor(s) instead of 2 so monitors may achieve consensus
2022-06-10T17:51:37.173487+0000 mgr.Ceph5-1.nqikfh [ERR] It is NOT safe to stop ['mon.Ceph5-adm']: not enough monitors would be available (Ceph5-2) after stopping mons [Ceph5-adm]
2022-06-10T17:51:37.174415+0000 mgr.Ceph5-1.nqikfh [INF] Checking pool "nfs-ganesha" exists for service nfs.foo
2022-06-10T17:51:37.176389+0000 mgr.Ceph5-1.nqikfh [ERR] Failed to apply nfs.foo spec NFSServiceSpec({'placement': PlacementSpec(count=1), 'service_type': 'nfs', 'service_id': 'foo', 'unmanaged': False, 'preview_only': False, 'pool': 'nfs-ganesha', 'namespace': 'nfs-ns'}): Cannot find pool "nfs-ganesha" for service nfs.foo
Traceback (most recent call last):
  File "/usr/share/ceph/mgr/cephadm/serve.py", line 408, in _apply_all_services
    if self._apply_service(spec):
  File "/usr/share/ceph/mgr/cephadm/serve.py", line 509, in _apply_service
    config_func(spec)
  File "/usr/share/ceph/mgr/cephadm/services/nfs.py", line 23, in config
    self.mgr._check_pool_exists(spec.pool, spec.service_name())
  File "/usr/share/ceph/mgr/cephadm/module.py", line 1840, in _check_pool_exists
    raise OrchestratorError(f'Cannot find pool "{pool}" for '
orchestrator._interface.OrchestratorError: Cannot find pool "nfs-ganesha" for service nfs.foo
2022-06-10T17:51:37.179658+0000 mgr.Ceph5-1.nqikfh [INF] Found osd claims -> {}
2022-06-10T17:51:37.180116+0000 mgr.Ceph5-1.nqikfh [INF] Found osd claims for drivegroup all-available-devices -> {}
2022-06-10T17:51:37.182138+0000 mgr.Ceph5-1.nqikfh [INF] Applying all-available-devices on host Ceph5-adm...
2022-06-10T17:51:37.182987+0000 mgr.Ceph5-1.nqikfh [INF] Applying all-available-devices on host Ceph5-1...
2022-06-10T17:51:37.183395+0000 mgr.Ceph5-1.nqikfh [INF] Applying all-available-devices on host Ceph5-2...
2022-06-10T17:51:43.373570+0000 mgr.Ceph5-1.nqikfh [INF] Reconfiguring node-exporter.Ceph5-1 (unknown last config time)...
2022-06-10T17:51:43.373840+0000 mgr.Ceph5-1.nqikfh [INF] Reconfiguring daemon node-exporter.Ceph5-1 on Ceph5-1
By default, the log displays informational-level events and above. To see debug-level messages, run the following commands:
Example
[ceph: root@host01 /]# ceph config set mgr mgr/cephadm/log_to_cluster_level debug
[ceph: root@host01 /]# ceph -W cephadm --watch-debug
[ceph: root@host01 /]# ceph -W cephadm --verbose
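If you want to confirm which logging level is currently applied before or after changing it, you can query the same configuration option. This verification step is not part of the documented procedure; it is a suggested check using the standard ceph config get command:
Example
[ceph: root@host01 /]# ceph config get mgr mgr/cephadm/log_to_cluster_level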
To return the debugging level to the default of info, run the following command:
Example
[ceph: root@host01 /]# ceph config set mgr mgr/cephadm/log_to_cluster_level info
To see recent events, run the following command:
Example
[ceph: root@host01 /]# ceph log last cephadm
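If the output is long, you may be able to limit it to the most recent entries. Depending on your Ceph release, the log last command accepts an optional count and severity level before the channel name; the values 25 and info below are illustrative, not part of the original procedure:
Example
[ceph: root@host01 /]# ceph log last 25 info cephadm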
These events are also logged to the ceph.cephadm.log file on the monitor hosts and to the monitor daemon's stderr.
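To follow these events directly on a monitor host, you can tail the log file itself. The exact location depends on your cluster's log configuration; the path below is a sketch assuming the default per-cluster log directory, where FSID is a placeholder for your cluster's FSID:
Example
[root@host01 ~]# tail -f /var/log/ceph/FSID/ceph.cephadm.log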