Chapter 2. Understanding Process Management for Ceph
As a storage administrator, you can manipulate the Ceph daemons in various ways. Manipulating these daemons allows you to start, stop, and restart all of the Ceph services as needed.
2.1. Prerequisites
- A running Red Hat Ceph Storage cluster.
2.2. An Overview of Process Management for Ceph
In Red Hat Ceph Storage 3, all process management is done through the Systemd service. Each time you want to start, stop, or restart the Ceph daemons, you must specify the daemon type or the daemon instance.
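For example, a daemon type is managed through its systemd target, while a single daemon instance is managed through a templated unit. A minimal illustration, assuming an OSD with the ID 0 runs on the node:

# Manage all OSD daemons on the node at once (daemon type):
[root@osd ~]# systemctl restart ceph-osd.target

# Manage only one OSD daemon (daemon instance):
[root@osd ~]# systemctl restart ceph-osd@0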
Additional Resources
- For more information about using Systemd, see Chapter 9 in the Red Hat Enterprise Linux System Administrator’s Guide.
2.3. Starting, Stopping, and Restarting All the Ceph Daemons
To start, stop, or restart all the running Ceph daemons on a node, follow these procedures.
Prerequisites
- Having root access to the node.
Procedure
Starting all Ceph daemons:

[root@admin ~]# systemctl start ceph.target

Stopping all Ceph daemons:

[root@admin ~]# systemctl stop ceph.target

Restarting all Ceph daemons:

[root@admin ~]# systemctl restart ceph.target
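To confirm that the daemons reached the intended state, you can query the target with a standard systemd command (a general check, not part of the original procedure):

[root@admin ~]# systemctl status ceph.target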
2.4. Starting, Stopping, and Restarting the Ceph Daemons by Type
To start, stop, or restart all Ceph daemons of a particular type, follow these procedures on the node running the Ceph daemons.
Prerequisites
- Having root access to the node.
Procedure
On Ceph Monitor nodes:

Starting

[root@mon ~]# systemctl start ceph-mon.target

Stopping

[root@mon ~]# systemctl stop ceph-mon.target

Restarting

[root@mon ~]# systemctl restart ceph-mon.target

On Ceph Manager nodes:

Starting

[root@mgr ~]# systemctl start ceph-mgr.target

Stopping

[root@mgr ~]# systemctl stop ceph-mgr.target

Restarting

[root@mgr ~]# systemctl restart ceph-mgr.target

On Ceph OSD nodes:

Starting

[root@osd ~]# systemctl start ceph-osd.target

Stopping

[root@osd ~]# systemctl stop ceph-osd.target

Restarting

[root@osd ~]# systemctl restart ceph-osd.target

On Ceph Object Gateway nodes:

Starting

[root@rgw ~]# systemctl start ceph-radosgw.target

Stopping

[root@rgw ~]# systemctl stop ceph-radosgw.target

Restarting

[root@rgw ~]# systemctl restart ceph-radosgw.target
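If you are unsure which Ceph daemon types run on a given node, you can list the loaded Ceph units with a standard systemd query (shown as a convenience; it is not part of the original procedure):

[root@mon ~]# systemctl list-units 'ceph*'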
2.5. Starting, Stopping, and Restarting a Ceph Daemon by Instance
To start, stop, or restart a Ceph daemon by instance, follow these procedures on the node running the Ceph daemons.
Prerequisites
- Having root access to the node.
Procedure
On a Ceph Monitor node:

Starting

[root@mon ~]# systemctl start ceph-mon@$MONITOR_HOST_NAME

Stopping

[root@mon ~]# systemctl stop ceph-mon@$MONITOR_HOST_NAME

Restarting

[root@mon ~]# systemctl restart ceph-mon@$MONITOR_HOST_NAME

Replace:

- $MONITOR_HOST_NAME with the name of the Ceph Monitor node.
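For example, on a Ceph Monitor node named host01 (a hypothetical hostname used only for illustration):

[root@mon ~]# systemctl restart ceph-mon@host01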
On a Ceph Manager node:

Starting

[root@mgr ~]# systemctl start ceph-mgr@$MANAGER_HOST_NAME

Stopping

[root@mgr ~]# systemctl stop ceph-mgr@$MANAGER_HOST_NAME

Restarting

[root@mgr ~]# systemctl restart ceph-mgr@$MANAGER_HOST_NAME

Replace:

- $MANAGER_HOST_NAME with the name of the Ceph Manager node.
On a Ceph OSD node:

Starting

[root@osd ~]# systemctl start ceph-osd@$OSD_NUMBER

Stopping

[root@osd ~]# systemctl stop ceph-osd@$OSD_NUMBER

Restarting

[root@osd ~]# systemctl restart ceph-osd@$OSD_NUMBER

Replace:

- $OSD_NUMBER with the ID number of the Ceph OSD. For example, when looking at the ceph osd tree command output, osd.0 has an ID of 0.
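For example, to restart the OSD with the ID 0:

[root@osd ~]# systemctl restart ceph-osd@0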
On a Ceph Object Gateway node:

Starting

[root@rgw ~]# systemctl start ceph-radosgw@rgw.$OBJ_GATEWAY_HOST_NAME

Stopping

[root@rgw ~]# systemctl stop ceph-radosgw@rgw.$OBJ_GATEWAY_HOST_NAME

Restarting

[root@rgw ~]# systemctl restart ceph-radosgw@rgw.$OBJ_GATEWAY_HOST_NAME

Replace:

- $OBJ_GATEWAY_HOST_NAME with the name of the Ceph Object Gateway node.
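For example, on a Ceph Object Gateway node named gateway-node1 (a hypothetical hostname), note that the instance name keeps the rgw. prefix:

[root@rgw ~]# systemctl restart ceph-radosgw@rgw.gateway-node1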
2.6. Powering down and rebooting a Red Hat Ceph Storage cluster
Follow this procedure to power down and reboot the Ceph cluster:
Prerequisites
- Having root access.
Procedure
Powering down the Red Hat Ceph Storage cluster
Stop the clients from using the RBD images and the NFS-Ganesha and RADOS Gateways on this cluster, as well as any other clients.

On the NFS-Ganesha Gateway node:

# systemctl stop nfs-ganesha.service

On the RADOS Gateway node:

# systemctl stop ceph-radosgw.target
The cluster must be in a healthy state (HEALTH_OK and all PGs active+clean) before proceeding. Run ceph status on a node with the client keyrings, for example, the Ceph Monitor or OpenStack controller nodes, to ensure the cluster is healthy.

If you use the Ceph File System (CephFS), the CephFS cluster must be brought down. Taking a CephFS cluster down is done by reducing the number of ranks to 1, setting the cluster_down flag, and then failing the last rank. For example:

# ceph fs set <fs_name> max_mds 1
# ceph mds deactivate <fs_name>:1  # rank 2 of 2
# ceph status  # wait for rank 1 to finish stopping
# ceph fs set <fs_name> cluster_down true
# ceph mds fail <fs_name>:0

Setting the cluster_down flag prevents standbys from taking over the failed rank.

Set the noout, norecover, norebalance, nobackfill, nodown, and pause flags. Run the following on a node with the client keyrings, for example, the Ceph Monitor or OpenStack controller node:

# ceph osd set noout
# ceph osd set norecover
# ceph osd set norebalance
# ceph osd set nobackfill
# ceph osd set nodown
# ceph osd set pause

Shut down the OSD nodes one by one:

[root@osd ~]# systemctl stop ceph-osd.target

Shut down the monitor nodes one by one:

[root@mon ~]# systemctl stop ceph-mon.target
Rebooting the Red Hat Ceph Storage cluster
Power on the monitor nodes:

[root@mon ~]# systemctl start ceph-mon.target

Power on the OSD nodes:

[root@osd ~]# systemctl start ceph-osd.target

Wait for all the nodes to come up. Verify all the services are up and that the connectivity between the nodes is fine.
Unset the noout, norecover, norebalance, nobackfill, nodown, and pause flags. Run the following on a node with the client keyrings, for example, the Ceph Monitor or OpenStack controller node:

# ceph osd unset noout
# ceph osd unset norecover
# ceph osd unset norebalance
# ceph osd unset nobackfill
# ceph osd unset nodown
# ceph osd unset pause

If you use the Ceph File System (CephFS), the CephFS cluster must be brought back up by setting the cluster_down flag to false:

[root@admin ~]# ceph fs set <fs_name> cluster_down false

Start the RADOS Gateway and NFS-Ganesha Gateway.
On the RADOS Gateway node:

# systemctl start ceph-radosgw.target

On the NFS-Ganesha Gateway node:

# systemctl start nfs-ganesha.service
Verify the cluster is in a healthy state (HEALTH_OK and all PGs active+clean). Run ceph status on a node with the client keyrings, for example, the Ceph Monitor or OpenStack controller nodes, to ensure the cluster is healthy.
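For example, run the check on a node with the client keyrings; the output depends on the cluster and is therefore not reproduced here:

[root@mon ~]# ceph status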
2.7. Additional Resources
For more information about installing Red Hat Ceph Storage, see:
- Installation Guide for Red Hat Enterprise Linux
- Installation Guide for Ubuntu