Chapter 2. Understanding process management for Ceph
As a storage administrator, you can manipulate the various Ceph daemons by type or instance, on bare metal or in containers. Manipulating these daemons allows you to start, stop, and restart all of the Ceph services as needed.
2.1. Prerequisites
- Installation of the Red Hat Ceph Storage software.
2.2. Ceph process management
In Red Hat Ceph Storage, all process management is done through the systemd service. Each time you want to start, stop, or restart a Ceph daemon, you must specify the daemon type or the daemon instance.
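For example, a daemon type is addressed through a systemd target, which acts on every daemon of that type on the node, while a daemon instance is addressed through an instantiated unit. A minimal sketch, assuming an OSD node that hosts the daemon osd.0:
systemctl restart ceph-osd.target # all OSD daemons on this node
systemctl restart ceph-osd@0      # only the instance osd.0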
Additional Resources
- For more information about using Systemd, see the chapter Managing services with systemd in the Red Hat Enterprise Linux System Administrator’s Guide.
2.3. Starting, stopping, and restarting all Ceph daemons
Start, stop, or restart all Ceph daemons as root from the Admin node.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Having root access to the node.
Procedure
Starting all Ceph daemons:
[root@admin ~]# systemctl start ceph.target
Stopping all Ceph daemons:
[root@admin ~]# systemctl stop ceph.target
Restarting all Ceph daemons:
[root@admin ~]# systemctl restart ceph.target
2.4. Starting, stopping, and restarting the Ceph daemons by type
To start, stop, or restart all Ceph daemons of a particular type, follow these procedures on the node running the Ceph daemons.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Having root access to the node.
Procedure
On Ceph Monitor nodes:
Starting:
[root@mon ~]# systemctl start ceph-mon.target
Stopping:
[root@mon ~]# systemctl stop ceph-mon.target
Restarting:
[root@mon ~]# systemctl restart ceph-mon.target
On Ceph Manager nodes:
Starting:
[root@mgr ~]# systemctl start ceph-mgr.target
Stopping:
[root@mgr ~]# systemctl stop ceph-mgr.target
Restarting:
[root@mgr ~]# systemctl restart ceph-mgr.target
On Ceph OSD nodes:
Starting:
[root@osd ~]# systemctl start ceph-osd.target
Stopping:
[root@osd ~]# systemctl stop ceph-osd.target
Restarting:
[root@osd ~]# systemctl restart ceph-osd.target
On Ceph Object Gateway nodes:
Starting:
[root@rgw ~]# systemctl start ceph-radosgw.target
Stopping:
[root@rgw ~]# systemctl stop ceph-radosgw.target
Restarting:
[root@rgw ~]# systemctl restart ceph-radosgw.target
2.5. Starting, stopping, and restarting the Ceph daemons by instance
To start, stop, or restart a Ceph daemon by instance, follow these procedures on the node running the Ceph daemons.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Having root access to the node.
Procedure
On a Ceph Monitor node:
Starting:
[root@mon ~]# systemctl start ceph-mon@MONITOR_HOST_NAME
Stopping:
[root@mon ~]# systemctl stop ceph-mon@MONITOR_HOST_NAME
Restarting:
[root@mon ~]# systemctl restart ceph-mon@MONITOR_HOST_NAME
Replace:
- MONITOR_HOST_NAME with the name of the Ceph Monitor node.
On a Ceph Manager node:
Starting:
[root@mgr ~]# systemctl start ceph-mgr@MANAGER_HOST_NAME
Stopping:
[root@mgr ~]# systemctl stop ceph-mgr@MANAGER_HOST_NAME
Restarting:
[root@mgr ~]# systemctl restart ceph-mgr@MANAGER_HOST_NAME
Replace:
- MANAGER_HOST_NAME with the name of the Ceph Manager node.
On a Ceph OSD node:
Starting:
[root@osd ~]# systemctl start ceph-osd@OSD_NUMBER
Stopping:
[root@osd ~]# systemctl stop ceph-osd@OSD_NUMBER
Restarting:
[root@osd ~]# systemctl restart ceph-osd@OSD_NUMBER
Replace:
- OSD_NUMBER with the ID number of the Ceph OSD. For example, in the ceph osd tree command output, osd.0 has an ID of 0, as in the illustrative excerpt below.
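A minimal illustrative excerpt of ceph osd tree output; the weights and host name here are hypothetical:
ID CLASS WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF
-1       0.09769 root default
-2       0.09769     host osd1
 0   hdd 0.09769         osd.0      up  1.00000 1.00000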
On a Ceph Object Gateway node:
Starting:
[root@rgw ~]# systemctl start ceph-radosgw@rgw.OBJ_GATEWAY_HOST_NAME
Stopping:
[root@rgw ~]# systemctl stop ceph-radosgw@rgw.OBJ_GATEWAY_HOST_NAME
Restarting:
[root@rgw ~]# systemctl restart ceph-radosgw@rgw.OBJ_GATEWAY_HOST_NAME
Replace:
- OBJ_GATEWAY_HOST_NAME with the name of the Ceph Object Gateway node.
2.6. Starting, stopping, and restarting Ceph daemons that run in containers
Use the systemctl command to start, stop, or restart Ceph daemons that run in containers.
Prerequisites
- Installation of the Red Hat Ceph Storage software.
- Root-level access to the node.
Procedure
To start, stop, or restart a Ceph daemon running in a container, run a systemctl command as root composed in the following format:
systemctl ACTION ceph-DAEMON@ID
Replace:
- ACTION is the action to perform; start, stop, or restart.
- DAEMON is the daemon; osd, mon, mds, or rgw.
- ID is either:
  - The short host name where the ceph-mon, ceph-mds, or ceph-rgw daemon is running.
  - The ID of the ceph-osd daemon if it was deployed.
For example, to restart a ceph-osd daemon with the ID osd01:
[root@osd ~]# systemctl restart ceph-osd@osd01
To start a ceph-mon daemon that runs on the ceph-monitor01 host:
[root@mon ~]# systemctl start ceph-mon@ceph-monitor01
To stop a ceph-rgw daemon that runs on the ceph-rgw01 host:
[root@rgw ~]# systemctl stop ceph-radosgw@ceph-rgw01
Verify that the action was completed successfully.
systemctl status ceph-DAEMON@ID
For example:
[root@mon ~]# systemctl status ceph-mon@ceph-monitor01
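A running daemon reports active (running) in the status output. An illustrative excerpt, with hypothetical unit details and timestamps:
● ceph-mon@ceph-monitor01.service - Ceph cluster monitor daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-mon@.service; enabled)
   Active: active (running) since Tue 2021-02-09 14:42:01 UTC; 2h 10min ago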
Additional Resources
- See the Understanding process management for Ceph chapter in the Red Hat Ceph Storage Administration Guide for more information.
2.7. Viewing the logs of Ceph daemons that run in containers
Use the journald daemon from the container host to view the logs of a Ceph daemon from a container.
Prerequisites
- Installation of the Red Hat Ceph Storage software.
- Root-level access to the node.
Procedure
To view the entire Ceph log, run a journalctl command as root composed in the following format:
journalctl -u ceph-DAEMON@ID
Replace:
- DAEMON is the Ceph daemon; osd, mon, or rgw.
- ID is either:
  - The short host name where the ceph-mon, ceph-mds, or ceph-rgw daemon is running.
  - The ID of the ceph-osd daemon if it was deployed.
For example, to view the entire log for the ceph-osd daemon with the ID osd01:
[root@osd ~]# journalctl -u ceph-osd@osd01
To show only the recent journal entries, use the -f option:
journalctl -fu ceph-DAEMON@ID
For example, to view only recent journal entries for the ceph-mon daemon that runs on the ceph-monitor01 host:
[root@mon ~]# journalctl -fu ceph-mon@ceph-monitor01
You can also use the sosreport utility to view the journald logs. For more details about SOS reports, see the What is an sosreport and how to create one in Red Hat Enterprise Linux? solution on the Red Hat Customer Portal.
Additional Resources
- The journalctl(1) manual page.
2.8. Enabling logging to a file for containerized Ceph daemons
By default, containerized Ceph daemons do not log to files. You can use centralized configuration management to enable containerized Ceph daemons to log to files.
Prerequisites
- Installation of the Red Hat Ceph Storage software.
- Root-level access to the node where the containerized daemon runs.
Procedure
Navigate to the /var/log/ceph directory:
Example
[root@host01 ~]# cd /var/log/ceph
Note any existing log files.
Syntax
ls -l /var/log/ceph/
Example
[root@host01 ceph]# ls -l /var/log/ceph/
total 396
-rw-r--r--. 1 ceph ceph 107230 Feb  5 14:42 ceph-osd.0.log
-rw-r--r--. 1 ceph ceph 107230 Feb  5 14:42 ceph-osd.3.log
-rw-r--r--. 1 root root 181641 Feb  5 14:42 ceph-volume.log
In this example, logging to files for osd.0 and osd.3 is already enabled.
Fetch the container name of the daemon for which you want to enable logging:
Red Hat Enterprise Linux 7
[root@host01 ceph]# docker ps -a
Red Hat Enterprise Linux 8
[root@host01 ceph]# podman ps -a
Use centralized configuration management to enable logging to a file for a Ceph daemon.
Red Hat Enterprise Linux 7
docker exec CONTAINER_NAME ceph config set DAEMON_NAME log_to_file true
Red Hat Enterprise Linux 8
podman exec CONTAINER_NAME ceph config set DAEMON_NAME log_to_file true
The DAEMON_NAME is derived from the CONTAINER_NAME: remove ceph- and replace the hyphen between the daemon type and the daemon ID with a period.
Red Hat Enterprise Linux 7
[root@host01 ceph]# docker exec ceph-mon-host01 ceph config set mon.host01 log_to_file true
Red Hat Enterprise Linux 8
[root@host01 ceph]# podman exec ceph-mon-host01 ceph config set mon.host01 log_to_file true
Optional: To enable logging to a file for the cluster log, use the mon_cluster_log_to_file option:
Red Hat Enterprise Linux 7
docker exec CONTAINER_NAME ceph config set DAEMON_NAME mon_cluster_log_to_file true
Red Hat Enterprise Linux 8
podman exec CONTAINER_NAME ceph config set DAEMON_NAME mon_cluster_log_to_file true
Red Hat Enterprise Linux 7
[root@host01 ceph]# docker exec ceph-mon-host01 ceph config set mon.host01 mon_cluster_log_to_file true
Red Hat Enterprise Linux 8
[root@host01 ceph]# podman exec ceph-mon-host01 ceph config set mon.host01 mon_cluster_log_to_file true
Validate the updated configuration:
Red Hat Enterprise Linux 7
docker exec CONTAINER_NAME ceph config show-with-defaults DAEMON_NAME | grep log_to_file
Red Hat Enterprise Linux 8
podman exec CONTAINER_NAME ceph config show-with-defaults DAEMON_NAME | grep log_to_file
Example
[root@host01 ceph]# podman exec ceph-mon-host01 ceph config show-with-defaults mon.host01 | grep log_to_file
log_to_file              true  mon  default[false]
mon_cluster_log_to_file  true  mon  default[false]
Optional: Restart the Ceph daemon:
Syntax
systemctl restart ceph-DAEMON@DAEMON_ID
Example
[root@host01 ceph]# systemctl restart ceph-mon@host01
Validate that the new log files exist:
Syntax
ls -l /var/log/ceph/
Example
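An illustrative listing follows; the file sizes and timestamps are hypothetical:
[root@host01 ceph]# ls -l /var/log/ceph/
total 412
-rw-------. 1 ceph ceph   9675 Feb  5 16:20 ceph.log
-rw-r--r--. 1 ceph ceph  11615 Feb  5 16:20 ceph-mon.host01.log
-rw-r--r--. 1 ceph ceph 107230 Feb  5 14:42 ceph-osd.0.log
-rw-r--r--. 1 ceph ceph 107230 Feb  5 14:42 ceph-osd.3.log
-rw-r--r--. 1 root root 181641 Feb  5 14:42 ceph-volume.log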
Two new files were created: ceph-mon.host01.log for the Monitor daemon and ceph.log for the cluster log.
2.9. Gathering log files of Ceph daemons
To gather log files of Ceph daemons, run the gather-ceph-logs.yml Ansible playbook. Currently, Red Hat Ceph Storage supports gathering logs for non-containerized deployments only.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Admin-level access to the Ansible node.
Procedure
Navigate to the /usr/share/ceph-ansible directory:
[ansible@admin ~]# cd /usr/share/ceph-ansible
Run the playbook:
[ansible@admin ~]# ansible-playbook infrastructure-playbooks/gather-ceph-logs.yml -i hosts
- Wait for the logs to be collected on the Ansible administration node.
Additional Resources
- See the Viewing the logs of Ceph daemons that run in containers section in the Red Hat Ceph Storage Administration Guide for more details.
2.10. Powering down and rebooting Red Hat Ceph Storage cluster
Follow the procedure below for powering down and rebooting the Ceph cluster.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Having root access.
Procedure
Powering down the Red Hat Ceph Storage cluster
- Stop the clients from using the RBD images and the RADOS Gateway on this cluster, as well as any other clients.
- The cluster must be in a healthy state (HEALTH_OK and all PGs active+clean) before proceeding. Run ceph status on a node with the client keyrings, for example, the Ceph Monitor or OpenStack controller nodes, to ensure the cluster is healthy.
- If you use the Ceph File System (CephFS), the CephFS cluster must be brought down. Taking a CephFS cluster down is done by reducing the number of ranks to 1, setting the cluster_down flag, and then failing the last rank.
Example:
[root@osd ~]# ceph fs set FS_NAME max_mds 1
[root@osd ~]# ceph mds deactivate FS_NAME:1 # rank 2 of 2
[root@osd ~]# ceph status # wait for rank 1 to finish stopping
[root@osd ~]# ceph fs set FS_NAME cluster_down true
[root@osd ~]# ceph mds fail FS_NAME:0
Setting the cluster_down flag prevents standbys from taking over the failed rank.
Set the noout, norecover, norebalance, nobackfill, nodown, and pause flags. Run the following on a node with the client keyrings, for example, the Ceph Monitor or OpenStack controller node:
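A minimal sketch of this command block, using the standard ceph osd set invocation for each flag named above:
[root@mon ~]# ceph osd set noout
[root@mon ~]# ceph osd set norecover
[root@mon ~]# ceph osd set norebalance
[root@mon ~]# ceph osd set nobackfill
[root@mon ~]# ceph osd set nodown
[root@mon ~]# ceph osd set pause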
Shut down the OSD nodes one by one:
[root@osd ~]# systemctl stop ceph-osd.target
Shut down the monitor nodes one by one:
[root@mon ~]# systemctl stop ceph-mon.target
Rebooting the Red Hat Ceph Storage cluster
- Power on the administration node.
Power on the monitor nodes:
[root@mon ~]# systemctl start ceph-mon.target
Power on the OSD nodes:
[root@osd ~]# systemctl start ceph-osd.target
- Wait for all the nodes to come up. Verify all the services are up and the connectivity is fine between the nodes.
Unset the noout, norecover, norebalance, nobackfill, nodown, and pause flags. Run the following on a node with the client keyrings, for example, the Ceph Monitor or OpenStack controller node:
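A minimal sketch, mirroring the power-down step with the standard ceph osd unset invocations:
[root@mon ~]# ceph osd unset noout
[root@mon ~]# ceph osd unset norecover
[root@mon ~]# ceph osd unset norebalance
[root@mon ~]# ceph osd unset nobackfill
[root@mon ~]# ceph osd unset nodown
[root@mon ~]# ceph osd unset pause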
If you use the Ceph File System (CephFS), bring the CephFS cluster back up by setting the cluster_down flag to false:
[root@admin ~]# ceph fs set FS_NAME cluster_down false
- Verify the cluster is in a healthy state (HEALTH_OK and all PGs active+clean). Run ceph status on a node with the client keyrings, for example, the Ceph Monitor or OpenStack controller nodes, to ensure the cluster is healthy.
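An illustrative excerpt of healthy ceph status output; the cluster ID, host names, and daemon counts are hypothetical:
[root@mon ~]# ceph status
  cluster:
    id:     11111111-2222-3333-4444-555555555555
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum ceph-monitor01,ceph-monitor02,ceph-monitor03
    osd: 3 osds: 3 up, 3 in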
2.11. Additional Resources
- For more information on installing Ceph, see the Red Hat Ceph Storage Installation Guide.