Chapter 2. Management of services using the Ceph Orchestrator
As a storage administrator, after installing the Red Hat Ceph Storage cluster, you can monitor and manage the services in a storage cluster using the Ceph Orchestrator. A service is a group of daemons that are configured together.
This section covers the following administrative information:
- Placement specification of the Ceph Orchestrator.
- Deploying the Ceph daemons using the command line interface.
- Deploying the Ceph daemons on a subset of hosts using the command line interface.
- Service specification of the Ceph Orchestrator.
- Deploying the Ceph daemons using the service specification.
- Deploying the Ceph File System mirroring daemon using the service specification.
2.1. Placement specification of the Ceph Orchestrator
You can use the Ceph Orchestrator to deploy osds, mons, mgrs, mds, and rgw services. Red Hat recommends deploying services using placement specifications. To deploy a service with the Ceph Orchestrator, you need to know where and how many daemons to deploy. Placement specifications can either be passed as command line arguments or as a service specification in a yaml file.
There are two ways of deploying the services using the placement specification:
Using the placement specification directly in the command line interface. For example, to deploy three monitors on host01, host02, and host03, run the following command:
Example
[ceph: root@host01 /]# ceph orch apply mon --placement="3 host01 host02 host03"
Using the placement specification in the YAML file. For example, if you want to deploy node-exporter on all the hosts, you can specify the following in the yaml file.
Example
service_type: node-exporter
placement:
  host_pattern: '*'
extra_entrypoint_args:
- "--collector.textfile.directory=/var/lib/node_exporter/textfile_collector2"
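A placement specification in a YAML file can also combine a daemon count with a label or an explicit host list. The following sketch assumes the hosts have already been labeled mon (labeling is covered in the next section) and deploys three monitors on hosts carrying that label:
service_type: mon
placement:
  count: 3
  label: mon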
2.2. Deploying the Ceph daemons using the command line interface
Using the Ceph Orchestrator, you can deploy daemons such as Ceph Manager, Ceph Monitors, Ceph OSDs, the monitoring stack, and others using the ceph orch command. The placement specification is passed as the --placement argument with the Orchestrator commands.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the storage cluster.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Use one of the following methods to deploy the daemons on the hosts:
Method 1: Specify the number of daemons and the host names:
Syntax
ceph orch apply SERVICE_NAME --placement="NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3"
Example
[ceph: root@host01 /]# ceph orch apply mon --placement="3 host01 host02 host03"
Method 2: Add the labels to the hosts and then deploy the daemons using the labels:
Add the labels to the hosts:
Syntax
ceph orch host label add HOSTNAME_1 LABEL
Example
[ceph: root@host01 /]# ceph orch host label add host01 mon
Deploy the daemons with labels:
Syntax
ceph orch apply DAEMON_NAME label:LABEL
Example
[ceph: root@host01 /]# ceph orch apply mon label:mon
Method 3: Add the labels to the hosts and deploy using the --placement argument:
Add the labels to the hosts:
Syntax
ceph orch host label add HOSTNAME_1 LABEL
Example
[ceph: root@host01 /]# ceph orch host label add host01 mon
Deploy the daemons using the label placement specification:
Syntax
ceph orch apply DAEMON_NAME --placement="label:LABEL"
Example
[ceph: root@host01 /]# ceph orch apply mon --placement="label:mon"
Verification
List the service:
Example
[ceph: root@host01 /]# ceph orch ls
List the hosts, daemons, and processes:
Syntax
ceph orch ps --daemon_type=DAEMON_NAME
ceph orch ps --service_name=SERVICE_NAME
Example
[ceph: root@host01 /]# ceph orch ps --daemon_type=mon
[ceph: root@host01 /]# ceph orch ps --service_name=mon
2.3. Deploying the Ceph daemons on a subset of hosts using the command line interface
You can use the --placement option to deploy daemons on a subset of hosts. In the placement specification, specify the number of daemons along with the names of the hosts on which to deploy them.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the cluster.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
List the hosts on which you want to deploy the Ceph daemons:
Example
[ceph: root@host01 /]# ceph orch host ls
Deploy the daemons:
Syntax
ceph orch apply SERVICE_NAME --placement="NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3"
Example
[ceph: root@host01 /]# ceph orch apply mgr --placement="2 host01 host02 host03"
In this example, the mgr daemons are deployed on only two of the three listed hosts.
Verification
List the hosts:
Example
[ceph: root@host01 /]# ceph orch host ls
2.4. Service specification of the Ceph Orchestrator
A service specification is a data structure that specifies the service attributes and configuration settings used to deploy a Ceph service. The following is an example of a multi-document YAML file, cluster.yaml, for specifying service specifications:
Example
service_type: mon
placement:
  host_pattern: "mon*"
---
service_type: mgr
placement:
  host_pattern: "mgr*"
---
service_type: osd
service_id: default_drive_group
placement:
  host_pattern: "osd*"
data_devices:
  all: true
The parameters of a service specification are defined as follows:
- service_type: The type of service:
  - Ceph services like mon, crash, mds, mgr, osd, rbd, or rbd-mirror.
  - Ceph gateway like nfs or rgw.
  - Monitoring stack like Alertmanager, Prometheus, Grafana, or Node-exporter.
  - Container for custom containers.
- service_id: A unique name of the service.
- placement: This is used to define where and how to deploy the daemons.
- unmanaged: If set to true, the Orchestrator will neither deploy nor remove any daemon associated with this service.
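As a sketch of how these parameters combine, the following specification (the service_id value test_fs and the host names are placeholder values, not taken from this chapter) defines an MDS service on two hosts and marks it unmanaged so that the Orchestrator does not deploy or remove its daemons:
service_type: mds
service_id: test_fs
placement:
  hosts:
  - host01
  - host02
unmanaged: true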
Stateless services of the Orchestrator
A stateless service is a service that does not need state information to be available. For example, to start an rgw service, no additional information is needed to start or run the service. The rgw service does not create any state information in order to provide its functionality. Regardless of when the rgw service starts, the state is the same.
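For illustration, a minimal rgw specification needs only a service identifier and a placement; no state information is required. The following sketch uses a hypothetical service_id of foo and a count-based placement:
service_type: rgw
service_id: foo
placement:
  count: 2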
2.5. Disabling automatic management of daemons
You can mark the Cephadm services as managed or unmanaged without having to edit and re-apply the service specification.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to all the nodes.
Procedure
Set unmanaged for services by using this command:
Syntax
ceph orch set-unmanaged SERVICE_NAME
Example
[root@host01 ~]# ceph orch set-unmanaged grafana
Set managed for services by using this command:
Syntax
ceph orch set-managed SERVICE_NAME
Example
[root@host01 ~]# ceph orch set-managed mon
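To confirm the change, you can list the services; in cephadm-managed clusters, services set to unmanaged are typically flagged as <unmanaged> in the PLACEMENT column of the ceph orch ls output:
[root@host01 ~]# ceph orch ls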
2.6. Deploying the Ceph daemons using the service specification
Using the Ceph Orchestrator, you can deploy daemons such as Ceph Manager, Ceph Monitors, Ceph OSDs, the monitoring stack, and others using the service specification in a YAML file.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to all the nodes.
Procedure
Create the yaml file:
Example
[root@host01 ~]# touch mon.yaml
This file can be configured in two different ways:
Edit the file to include the host details in the placement specification:
Syntax
service_type: SERVICE_NAME
placement:
  hosts:
  - HOST_NAME_1
  - HOST_NAME_2
Example
service_type: mon
placement:
  hosts:
  - host01
  - host02
  - host03
Edit the file to include the label details in the placement specification:
Syntax
service_type: SERVICE_NAME
placement:
  label: "LABEL_1"
Example
service_type: mon
placement:
  label: "mon"
Optional: You can also use extra container arguments in the service specification file, for example to allocate CPUs or to mount CA certificates and other files, while deploying services:
Example
extra_container_args:
- "-v"
- "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro"
- "--security-opt"
- "label=disable"
- "cpus=2"
- "--collector.textfile.directory=/var/lib/node_exporter/textfile_collector2"
Note: Red Hat Ceph Storage supports the use of extra arguments to enable additional metrics in node-exporter deployed by Cephadm.
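For context, the following sketch shows where such arguments sit in a complete specification; the node-exporter service and the collector directory are illustrative values carried over from the earlier placement example, not requirements:
service_type: node-exporter
placement:
  host_pattern: '*'
extra_container_args:
- "-v"
- "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro"
extra_entrypoint_args:
- "--collector.textfile.directory=/var/lib/node_exporter/textfile_collector2"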
Mount the YAML file under a directory in the container:
Example
[root@host01 ~]# cephadm shell --mount mon.yaml:/var/lib/ceph/mon/mon.yaml
Navigate to the directory:
Example
[ceph: root@host01 /]# cd /var/lib/ceph/mon/
Deploy the Ceph daemons using the service specification:
Syntax
ceph orch apply -i FILE_NAME.yaml
Example
[ceph: root@host01 mon]# ceph orch apply -i mon.yaml
Verification
List the service:
Example
[ceph: root@host01 /]# ceph orch ls
List the hosts, daemons, and processes:
Syntax
ceph orch ps --daemon_type=DAEMON_NAME
Example
[ceph: root@host01 /]# ceph orch ps --daemon_type=mon
2.7. Deploying the Ceph File System mirroring daemon using the service specification
Ceph File System (CephFS) supports asynchronous replication of snapshots to a remote CephFS file system using the CephFS mirroring daemon (cephfs-mirror). Snapshot synchronization copies snapshot data to a remote CephFS and creates a new snapshot on the remote target with the same name. Using the Ceph Orchestrator, you can deploy cephfs-mirror using the service specification in a YAML file.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to all the nodes.
- A CephFS created.
Procedure
Create the yaml file:
Example
[root@host01 ~]# touch mirror.yaml
Edit the file to include the following:
Syntax
service_type: cephfs-mirror
service_name: SERVICE_NAME
placement:
  hosts:
  - HOST_NAME_1
  - HOST_NAME_2
  - HOST_NAME_3
Example
service_type: cephfs-mirror
service_name: cephfs-mirror
placement:
  hosts:
  - host01
  - host02
  - host03
Mount the YAML file under a directory in the container:
Example
[root@host01 ~]# cephadm shell --mount mirror.yaml:/var/lib/ceph/mirror.yaml
Navigate to the directory:
Example
[ceph: root@host01 /]# cd /var/lib/ceph/
Deploy the cephfs-mirror daemon using the service specification:
Example
[ceph: root@host01 /]# ceph orch apply -i mirror.yaml
Verification
List the service:
Example
[ceph: root@host01 /]# ceph orch ls
List the hosts, daemons, and processes:
Example
[ceph: root@host01 /]# ceph orch ps --daemon_type=cephfs-mirror