Chapter 2. Understanding process management for Ceph


As a storage administrator, you can manipulate the various Ceph daemons by type or instance in a Red Hat Ceph Storage cluster. Manipulating these daemons allows you to start, stop and restart all of the Ceph services as needed.

2.1. Ceph process management

In Red Hat Ceph Storage, all process management is done through the systemd service. Whenever you start, stop, or restart a Ceph daemon, you must specify either the daemon type or the daemon instance.
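
The systemd unit name for a containerized Ceph daemon combines the cluster FSID with the daemon type and instance ID, in the form ceph-FSID@DAEMON_TYPE.ID.service. For example, using the FSID and daemon names that appear in the examples in this chapter, you can query the status of a single Ceph Monitor daemon:

Example

[root@host01 ~]# systemctl status ceph-499829b4-832f-11eb-8d6d-001a4a000635@mon.host01.service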

2.2. Starting, stopping, and restarting all Ceph daemons

You can start, stop, and restart all Ceph daemons as the root user from the host where you want to manage the daemons.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the node.

Procedure

  1. On the host where you want to start, stop, or restart the daemons, run the systemctl command to get the SERVICE_ID of the service.

    Example

    [root@host01 ~]# systemctl --type=service
    ceph-499829b4-832f-11eb-8d6d-001a4a000635@mon.host01.service

  2. Starting all Ceph daemons:

    Syntax

    systemctl start SERVICE_ID

    Example

    [root@host01 ~]# systemctl start ceph-499829b4-832f-11eb-8d6d-001a4a000635@mon.host01.service

  3. Stopping all Ceph daemons:

    Syntax

    systemctl stop SERVICE_ID

    Example

    [root@host01 ~]# systemctl stop ceph-499829b4-832f-11eb-8d6d-001a4a000635@mon.host01.service

  4. Restarting all Ceph daemons:

    Syntax

    systemctl restart SERVICE_ID

    Example

    [root@host01 ~]# systemctl restart ceph-499829b4-832f-11eb-8d6d-001a4a000635@mon.host01.service

2.3. Starting, stopping, and restarting all Ceph services

Ceph services are logical groups of Ceph daemons of the same type that are configured to run in the same Red Hat Ceph Storage cluster. The orchestration layer in Ceph allows the user to manage these services in a centralized way, which makes it easy to execute operations that affect all the Ceph daemons that belong to the same logical service. The Ceph daemons running on each host are managed through systemd. You can start, stop, and restart all Ceph services from the host where you want to manage the Ceph services.

Important

If you want to start, stop, or restart a specific Ceph daemon on a specific host, you need to use the systemd service. To obtain a list of the systemd services running on a specific host, connect to the host and run the following command:

Example

[root@host01 ~]# systemctl list-units "ceph*"

The output gives you a list of the service names that you can use to manage each Ceph daemon.
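
For example, assuming the OSD unit name shown in the log examples later in this chapter, you can stop that single daemon with systemctl:

Example

[root@host01 ~]# systemctl stop ceph-499829b4-832f-11eb-8d6d-001a4a000635@osd.8.service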

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the node.

Procedure

  1. Log into the Cephadm shell:

    Example

    [root@host01 ~]# cephadm shell

  2. Run the ceph orch ls command to get a list of Ceph services configured in the Red Hat Ceph Storage cluster and to get the specific service ID.

    Example

    [ceph: root@host01 /]# ceph orch ls
    NAME                       RUNNING  REFRESHED  AGE  PLACEMENT  IMAGE NAME                                                       IMAGE ID
    alertmanager                   1/1  4m ago     4M   count:1    registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.5   b7bae610cd46
    crash                          3/3  4m ago     4M   *          registry.redhat.io/rhceph-alpha/rhceph-6-rhel9:latest            c88a5d60f510
    grafana                        1/1  4m ago     4M   count:1    registry.redhat.io/rhceph-alpha/rhceph-6-dashboard-rhel9:latest  bd3d7748747b
    mgr                            2/2  4m ago     4M   count:2    registry.redhat.io/rhceph-alpha/rhceph-6-rhel9:latest            c88a5d60f510
    mon                            2/2  4m ago     10w  count:2    registry.redhat.io/rhceph-alpha/rhceph-6-rhel9:latest            c88a5d60f510
    nfs.foo                        0/1  -          -    count:1    <unknown>                                                        <unknown>
    node-exporter                  1/3  4m ago     4M   *          registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.5  mix
    osd.all-available-devices      5/5  4m ago     3M   *          registry.redhat.io/rhceph-alpha/rhceph-6-rhel9:latest            c88a5d60f510
    prometheus                     1/1  4m ago     4M   count:1    registry.redhat.io/openshift4/ose-prometheus:v4.6                bebb0ddef7f0
    rgw.test_realm.test_zone       2/2  4m ago     3M   count:2    registry.redhat.io/rhceph-alpha/rhceph-6-rhel9:latest            c88a5d60f510

  3. To start a specific service, run the following command:

    Syntax

    ceph orch start SERVICE_ID

    Example

    [ceph: root@host01 /]# ceph orch start node-exporter

  4. To stop a specific service, run the following command:

    Important

    Running the ceph orch stop SERVICE_ID command on the MON or MGR service makes the entire Red Hat Ceph Storage cluster inaccessible. To stop a specific daemon on a host, it is recommended to use the systemctl stop SERVICE_ID command instead.

    Syntax

    ceph orch stop SERVICE_ID

    Example

    [ceph: root@host01 /]# ceph orch stop node-exporter

    In this example, the ceph orch stop node-exporter command stops all the daemons of the node-exporter service.

  5. To restart a specific service, run the following command:

    Syntax

    ceph orch restart SERVICE_ID

    Example

    [ceph: root@host01 /]# ceph orch restart node-exporter

2.4. Viewing log files of Ceph daemons that run in containers

Use the journald daemon from the container host to view a log file of a Ceph daemon from a container.

Prerequisites

  • Installation of the Red Hat Ceph Storage software.
  • Root-level access to the node.

Procedure

  1. To view the entire Ceph log file, run the journalctl command as root in the following format:

    Syntax

    journalctl -u SERVICE_ID

    Example

    [root@host01 ~]# journalctl -u ceph-499829b4-832f-11eb-8d6d-001a4a000635@osd.8.service

    In the above example, you can view the entire log for the OSD with ID osd.8.

  2. To show only the recent journal entries and continuously follow new entries, use the -f option.

    Syntax

    journalctl -fu SERVICE_ID

    Example

    [root@host01 ~]# journalctl -fu ceph-499829b4-832f-11eb-8d6d-001a4a000635@osd.8.service

Note

You can also use the sosreport utility to view the journald logs. For more details about SOS reports, see the What is an sosreport and how to create one in Red Hat Enterprise Linux? solution on the Red Hat Customer Portal.

Additional Resources

  • The journalctl manual page.

2.5. Powering down and rebooting Red Hat Ceph Storage cluster

Use either the Ceph Orchestrator or systemctl commands to power down and restart the Red Hat Ceph Storage cluster.

Before you begin powering down a Red Hat Ceph Storage cluster, be sure that you have root-level access.

2.5.1. Powering down and rebooting with Ceph Orchestrator

Use the Ceph Orchestrator to shut down and restart the Red Hat Ceph Storage cluster. In most cases, logging in to a single system is sufficient to power off the cluster.

The Ceph Orchestrator supports start, stop, and restart operations for powering down or rebooting the cluster. Alternatively, use systemctl commands for powering down and rebooting the Ceph cluster. For more information, see Powering down and rebooting the cluster using the systemctl commands.

Use the following basic flow to power down a Red Hat Ceph Storage cluster. The procedure that follows provides detailed information and commands for each step.

  1. Stop all clients from using the Block Device images and the Ceph Object Gateway on this cluster and on any other clients.
  2. Stop the Ceph File System (CephFS) cluster and the MDS service.
  3. Stop all component gateways related to block and file.
  4. Stop all ingress services and RADOS gateways.
  5. If mirror daemons are used, stop all mirror daemons.
  6. Stop the monitoring service components.
  7. Set the noout flag.
  8. Shut down all nodes.

Prerequisites

Before you begin, make sure that you have the following prerequisites in place:

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the node.

2.5.1.1. Powering down the Red Hat Ceph Storage cluster

Use the Ceph Orchestrator to shut down the Red Hat Ceph Storage cluster.

Procedure

  1. Stop all clients from using Block Device images, CephFS volumes, and Ceph Object Gateways.
  2. Log in to the Cephadm shell.

    Example

    [root@host01 ~]# cephadm shell

    Before proceeding, the cluster must be in a healthy state (HEALTH_OK and all PGs active+clean). View the cluster state by using the ceph -s command.
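
    For example, run the status check from within the Cephadm shell:

    Example

    [ceph: root@host01 /]# ceph -s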

  3. When using CephFS, bring down the CephFS cluster.

    Syntax

    ceph fs fail FS_NAME
    ceph status
    ceph fs set FS_NAME joinable false
    ceph mds fail FS_NAME:N

    Example

    [ceph: root@host01 /]# ceph fs fail cephfs
    [ceph: root@host01 /]# ceph status
    [ceph: root@host01 /]# ceph fs set cephfs joinable false
    [ceph: root@host01 /]# ceph mds fail cephfs:1

  4. Stop the MDS service.

    1. Get the MDS service name.

      Syntax

      ceph orch ls --service-type mds

      Example

      [ceph: root@host01 /]# ceph orch ls --service-type mds

    2. Stop the MDS service by using the service name from the output of the previous step.

      Syntax

      ceph orch stop SERVICE_NAME
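
      For example, assuming the file system named cephfs from earlier in this procedure, the MDS service created by the orchestrator is typically named mds.cephfs:

      Example

      [ceph: root@host01 /]# ceph orch stop mds.cephfs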

  5. Stop all component gateways that are related to block and file systems.

    1. Find all relevant services that are running by using the ceph orch ls command.
    2. Stop each block and file system service.

      Syntax

      ceph orch stop SERVICE_NAME

      Example

      [ceph: root@host01 /]# ceph orch stop rbd

  6. Stop the Ceph Object Gateway services. Repeat for each deployed service.

    1. Find the Ceph Object Gateway service name.

      Syntax

      ceph orch ls --service-type SERVICE_TYPE

    2. Stop the Ceph Object Gateway service by using the fetched service name.

      Syntax

      ceph orch stop SERVICE_NAME
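
      For example, using the rgw.test_realm.test_zone service from the ceph orch ls output shown earlier in this chapter:

      Example

      [ceph: root@host01 /]# ceph orch ls --service-type rgw
      [ceph: root@host01 /]# ceph orch stop rgw.test_realm.test_zone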

  7. Stop the Ceph Object Gateway ingress services. Repeat for each deployed service.

    1. Fetch the deployed ingress services.

      Syntax

      ceph orch ls --service-type ingress

    2. Stop each deployed ingress service.

      Syntax

      ceph orch stop SERVICE_NAME
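
      For example, assuming a hypothetical ingress service named ingress.rgw.test_realm.test_zone, following the usual convention of naming ingress services after the service they front:

      Example

      [ceph: root@host01 /]# ceph orch stop ingress.rgw.test_realm.test_zone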

  8. Stop the monitoring service components.

    1. Stop the Alertmanager service.

      Syntax

      ceph orch stop alertmanager

    2. Stop the node-exporter service, which is part of the monitoring stack.

      Syntax

      ceph orch stop node-exporter

    3. Stop the Prometheus service.

      Syntax

      ceph orch stop prometheus

    4. Stop the Grafana dashboard service.

      Syntax

      ceph orch stop grafana

    5. Stop the crash service.

      Syntax

      ceph orch stop crash

  9. Set the noout flag during maintenance to prevent CRUSH from automatically rebalancing the cluster.

    Note

    Use the pause flag to block any remaining client I/O.

    Syntax

    ceph osd set noout

    Example

    [ceph: root@host01 /]# ceph osd set noout
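
    If you also need to block client I/O, set the pause flag as well; ceph osd set pause is the standard command for this:

    Example

    [ceph: root@host01 /]# ceph osd set pause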

  10. Stop the OSD services from the cephadm node, one by one. If there is a large number of OSD daemons, you can shut down the OSD nodes directly instead.

    1. Fetch the OSD IDs.

      Syntax

      ceph orch ps
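
      For example, you can limit the listing to OSD daemons by using the --daemon-type filter:

      Example

      [ceph: root@host01 /]# ceph orch ps --daemon-type osd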

    2. Stop the OSD daemons.

      Syntax

      ceph orch daemon stop OSD_ID

      Example

      [ceph: root@host01 /]# ceph orch daemon stop osd.1
      Scheduled to stop osd.1 on host 'host02'

    3. Identify the OSD hosts.

      Syntax

      ceph orch ls osd

    4. Shut down each of the OSD hosts listed in the ceph orch ps output.
  11. Stop the monitors.

    1. Identify the hosts with monitors.

      Syntax

      ceph orch ls mon

      Note the listed monitor hosts.

    2. Shut down the monitor hosts that are listed in the previous step, one by one.

      Syntax

      systemctl stop SERVICE_NAME
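
      For example, using the Monitor unit name format shown earlier in this chapter:

      Example

      [root@host01 ~]# systemctl stop ceph-499829b4-832f-11eb-8d6d-001a4a000635@mon.host01.service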

  12. Shut down any remaining standalone hosts, including hosts where services such as the Ceph Object Gateway are still running.

2.5.1.2. Rebooting the Red Hat Ceph Storage cluster

Use the Ceph Orchestrator to reboot the Red Hat Ceph Storage cluster.

Prerequisites

Power on and stabilize any network equipment before powering on the Ceph hosts and nodes.

Procedure

  1. Power on all the Ceph hosts.

    1. Log in to the Cephadm shell from the administration node.

      Example

      [root@host01 ~]# cephadm shell

  2. Verify that all services are in the running state.

    Syntax

    ceph orch ls

  3. Ensure that the cluster health is HEALTH_OK by running the ceph -s command.
  4. Unset the noout flag that was set in Powering down the Red Hat Ceph Storage cluster.

    Syntax

    ceph osd unset noout

    Example

    [ceph: root@host01 /]# ceph osd unset noout

  5. When using Ceph File System (CephFS), make the CephFS cluster available by setting the joinable flag to true.

    Syntax

    ceph fs set FS_NAME joinable true

    Example

    [ceph: root@host01 /]# ceph fs set cephfs joinable true

Verification

Verify that the cluster is in a healthy state (HEALTH_OK and all PGs active+clean). Run the ceph -s command on a node with the client keyrings, for example, the Ceph Monitor or OpenStack controller nodes, to ensure that the cluster is healthy.

2.5.2. Powering down and rebooting the cluster using the systemctl commands

Use systemctl commands to shut down and restart the Red Hat Ceph Storage cluster. This method aligns with standard Linux service management practices.

Alternatively, use the Ceph Orchestrator to power down and reboot the Ceph cluster. For more information, see Powering down and rebooting with Ceph Orchestrator.

Prerequisites

Before you begin, make sure that you have the following prerequisites in place:

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to the node.

2.5.2.1. Powering down the Red Hat Ceph Storage cluster

Use systemctl commands to shut down the Red Hat Ceph Storage cluster.

Repeat these steps for each node in the cluster.

Procedure

  1. Stop all clients from using Block Device images, CephFS volumes, and Ceph Object Gateways.
  2. Log in to the Cephadm shell.

    Example

    [root@host01 ~]# cephadm shell

    Before proceeding, the cluster must be in a healthy state (HEALTH_OK and all PGs active+clean). View the cluster state by using the ceph -s command.

  3. When using CephFS, bring down the CephFS cluster.

    Syntax

    ceph fs fail FS_NAME
    ceph status
    ceph fs set FS_NAME joinable false
    ceph mds fail FS_NAME:N

    Example

    [ceph: root@host01 /]# ceph fs fail cephfs
    [ceph: root@host01 /]# ceph status
    [ceph: root@host01 /]# ceph fs set cephfs joinable false
    [ceph: root@host01 /]# ceph mds fail cephfs:1

  4. If the MDS and Ceph Object Gateway services are on their own dedicated nodes, stop the MDS service.

    1. Get the MDS service name.

      Syntax

      ceph orch ls --service-type mds

      Example

      [ceph: root@host01 /]# ceph orch ls --service-type mds

    2. Stop the MDS service by using the service name from the output of the previous step.

      Syntax

      ceph orch stop SERVICE_NAME

  5. Set the noout flag during maintenance to prevent CRUSH from automatically rebalancing the cluster.

    Note

    Use the pause flag to block any remaining client I/O.

    Syntax

    ceph osd set noout

    Example

    [ceph: root@host01 /]# ceph osd set noout

  6. Get the systemd target of the daemons.

    Syntax

    systemctl list-units --type target | grep ceph

    Example

    [root@host01 ~]# systemctl list-units --type target | grep ceph
    ceph-0b007564-ec48-11ee-b736-525400fd02f8.target loaded active active Ceph cluster 0b007564-ec48-11ee-b736-525400fd02f8
    ceph.target                                      loaded active active All Ceph clusters and services

  7. Disable the target that includes the cluster FSID.

    Syntax

    systemctl disable CLUSTER_FSID.target

    Example

    [root@host01 ~]# systemctl disable ceph-0b007564-ec48-11ee-b736-525400fd02f8.target
    Removed "/etc/systemd/system/multi-user.target.wants/ceph-0b007564-ec48-11ee-b736-525400fd02f8.target".
    Removed "/etc/systemd/system/ceph.target.wants/ceph-0b007564-ec48-11ee-b736-525400fd02f8.target".

  8. Stop all daemons on the host.

    Syntax

    systemctl stop CLUSTER_FSID.target

    Example

    [root@host01 ~]# systemctl stop ceph-0b007564-ec48-11ee-b736-525400fd02f8.target

  9. Shut down the node by using the shutdown command.

    Example

    [root@host01 ~]# shutdown
    Shutdown scheduled for Wed 2024-03-27 11:47:19 EDT, use 'shutdown -c' to cancel.

  10. Repeat the shutdown steps for all nodes in the cluster.

2.5.2.2. Rebooting the Red Hat Ceph Storage cluster

Use systemctl commands to reboot the Red Hat Ceph Storage cluster.

Procedure

  1. Enable the systemd target to run all daemons.

    Syntax

    systemctl enable CLUSTER_FSID.target

    Example

    [root@host01 ~]# systemctl enable ceph-0b007564-ec48-11ee-b736-525400fd02f8.target
    Created symlink /etc/systemd/system/multi-user.target.wants/ceph-0b007564-ec48-11ee-b736-525400fd02f8.target → /etc/systemd/system/ceph-0b007564-ec48-11ee-b736-525400fd02f8.target.
    Created symlink /etc/systemd/system/ceph.target.wants/ceph-0b007564-ec48-11ee-b736-525400fd02f8.target → /etc/systemd/system/ceph-0b007564-ec48-11ee-b736-525400fd02f8.target.

  2. Start the systemd target.

    Syntax

    systemctl start CLUSTER_FSID.target

    Example

    [root@host01 ~]# systemctl start ceph-0b007564-ec48-11ee-b736-525400fd02f8.target

    Wait for all the nodes to come up.

  3. Log in to the Cephadm shell.

    Syntax

    cephadm shell

  4. Verify that all services are up and there are no connectivity issues between the nodes.

    Syntax

    ceph orch ls

  5. Unset the noout flag. Run the following command on a node with the client keyrings, for example, the Ceph Monitor or OpenStack controller node.

    Syntax

    ceph osd unset noout

    Note

    If the pause flag was set, unset the pause flag.

    Example

    [ceph: root@host01 /]# ceph osd unset noout
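
    If the pause flag was set during shutdown, clear it with the corresponding unset command:

    Example

    [ceph: root@host01 /]# ceph osd unset pause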

  6. When using Ceph File System (CephFS), bring the CephFS cluster back up by setting the joinable flag to true.

    Syntax

    ceph fs set FS_NAME joinable true

    Example

    [ceph: root@host01 /]# ceph fs set cephfs joinable true

Verification

Verify that the cluster is in a healthy state (HEALTH_OK and all PGs active+clean). Run the ceph -s command on a node with the client keyrings, for example, the Ceph Monitor or OpenStack controller nodes, to ensure that the cluster is healthy.
