Chapter 2. Management of services using the Ceph Orchestrator


As a storage administrator, after installing the Red Hat Ceph Storage cluster, you can monitor and manage the services in a storage cluster using the Ceph Orchestrator. A service is a group of daemons that are configured together.

This chapter covers the following administrative information:

2.1. Placement specification of the Ceph Orchestrator

You can use the Ceph Orchestrator to deploy OSD, Monitor, Manager, MDS, and RGW services. Red Hat recommends deploying services by using placement specifications. A placement specification defines where and how many daemons are deployed for a service. It can be passed either as a command-line argument or as a service specification in a YAML file.

There are two ways of deploying the services using the placement specification:

  • Using the placement specification directly in the command line interface. For example, if you want to deploy three monitors on the hosts, running the following command deploys three monitors on host01, host02, and host03.

    Example

    [ceph: root@host01 /]# ceph orch apply mon --placement="3 host01 host02 host03"

  • Using the placement specification in a YAML file. For example, to deploy node-exporter on all the hosts, specify the following in the YAML file.

    Example

    service_type: node-exporter
    placement:
      host_pattern: '*'
    extra_entrypoint_args:
      - "--collector.textfile.directory=/var/lib/node_exporter/textfile_collector2"
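As a sketch of how such a specification is typically prepared, the YAML above can be written to a file with a heredoc and then applied from within the Cephadm shell. The file name node-exporter.yaml is an assumption, not a required name:

```shell
# Write the node-exporter service specification to a file.
# The file name node-exporter.yaml is arbitrary.
cat > node-exporter.yaml <<'EOF'
service_type: node-exporter
placement:
  host_pattern: '*'
extra_entrypoint_args:
  - "--collector.textfile.directory=/var/lib/node_exporter/textfile_collector2"
EOF

# Inspect the file before applying it.
cat node-exporter.yaml

# Inside the cephadm shell, the specification would then be applied with:
#   ceph orch apply -i node-exporter.yaml
```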

2.2. Deploying the Ceph daemons using the command line interface

Using the Ceph Orchestrator, you can deploy daemons such as Ceph Manager, Ceph Monitors, Ceph OSDs, and the monitoring stack with the ceph orch command. The placement specification is passed as the --placement argument to the Orchestrator commands.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Hosts are added to the storage cluster.

Procedure

  1. Log into the Cephadm shell:

    Example

    [root@host01 ~]# cephadm shell

  2. Use one of the following methods to deploy the daemons on the hosts:

    • Method 1: Specify the number of daemons and the host names:

      Syntax

      ceph orch apply SERVICE_NAME --placement="NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3"

      Example

      [ceph: root@host01 /]# ceph orch apply mon --placement="3 host01 host02 host03"

    • Method 2: Add the labels to the hosts and then deploy the daemons using the labels:

      1. Add the labels to the hosts:

        Syntax

        ceph orch host label add HOSTNAME_1 LABEL

        Example

        [ceph: root@host01 /]# ceph orch host label add host01 mon

      2. Deploy the daemons with labels:

        Syntax

        ceph orch apply DAEMON_NAME label:LABEL

        Example

        [ceph: root@host01 /]# ceph orch apply mon label:mon

    • Method 3: Add the labels to the hosts and deploy using the --placement argument:

      1. Add the labels to the hosts:

        Syntax

        ceph orch host label add HOSTNAME_1 LABEL

        Example

        [ceph: root@host01 /]# ceph orch host label add host01 mon

      2. Deploy the daemons using the label placement specification:

        Syntax

        ceph orch apply DAEMON_NAME --placement="label:LABEL"

        Example

        [ceph: root@host01 /]# ceph orch apply mon --placement="label:mon"
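The label-based steps above scale naturally to several hosts. The following dry-run sketch generates the Method 3 commands into a script file rather than executing them (the host names and the file name label-mon.sh are illustrative):

```shell
# Dry-run sketch: generate the label and apply commands for several
# hosts into a script file instead of running them one by one.
# The host names and the file name are assumptions for illustration.
{
  for host in host01 host02 host03; do
    echo "ceph orch host label add ${host} mon"
  done
  echo 'ceph orch apply mon --placement="label:mon"'
} > label-mon.sh

# Review the generated commands before running them in the Cephadm shell.
cat label-mon.sh
```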

Verification

  • List the service:

    Example

    [ceph: root@host01 /]# ceph orch ls

  • List the hosts, daemons, and processes:

    Syntax

    ceph orch ps --daemon_type=DAEMON_NAME
    ceph orch ps --service_name=SERVICE_NAME

    Example

    [ceph: root@host01 /]# ceph orch ps --daemon_type=mon
    [ceph: root@host01 /]# ceph orch ps --service_name=mon

Additional Resources

2.3. Deploying the Ceph daemons on a subset of hosts using the command line interface

You can use the --placement option to deploy daemons on a subset of hosts. In the placement specification, you specify the number of daemons together with the names of the candidate hosts.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Hosts are added to the cluster.

Procedure

  1. Log into the Cephadm shell:

    Example

    [root@host01 ~]# cephadm shell

  2. List the hosts on which you want to deploy the Ceph daemons:

    Example

    [ceph: root@host01 /]# ceph orch host ls

  3. Deploy the daemons:

    Syntax

    ceph orch apply SERVICE_NAME --placement="NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3"

    Example

    [ceph: root@host01 /]# ceph orch apply mgr --placement="2 host01 host02 host03"

    In this example, although three hosts are listed, only two mgr daemons are deployed, each on a different host.

Verification

  • List the hosts:

    Example

    [ceph: root@host01 /]# ceph orch host ls

Additional Resources

2.4. Service specification of the Ceph Orchestrator

A service specification is a data structure that specifies the service attributes and configuration settings used to deploy a Ceph service. The following is an example of a multi-document YAML file, cluster.yaml, for specifying service specifications:

Example

service_type: mon
placement:
  host_pattern: "mon*"
---
service_type: mgr
placement:
  host_pattern: "mgr*"
---
service_type: osd
service_id: default_drive_group
placement:
  host_pattern: "osd*"
data_devices:
  all: true

The properties of a service specification are defined as follows:

  • service_type: The type of service:

    • Ceph services like mon, crash, mds, mgr, osd, rbd, or rbd-mirror.
    • Ceph gateways like nfs or rgw.
    • Monitoring stack daemons like alertmanager, prometheus, grafana, or node-exporter.
    • container for custom containers.
  • service_id: A unique name of the service.
  • placement: This is used to define where and how to deploy the daemons.
  • unmanaged: If set to true, the Orchestrator will neither deploy nor remove any daemon associated with this service.
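As a sketch of how these properties combine, the following spec would reserve a service definition without deploying it (the service_id, label, and daemon count are illustrative assumptions, not values from this procedure):

```yaml
# Illustrative spec: service_id, label, and count are assumptions.
service_type: rgw
service_id: myrealm.myzone
placement:
  count: 2
  label: "rgw"
# With unmanaged set to true, the Orchestrator records the spec but
# neither deploys nor removes daemons for this service.
unmanaged: true
```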

Stateless service of Orchestrators

A stateless service is a service that does not need state information to be available in order to run. For example, starting an rgw service requires no additional state: the rgw service does not persist state in order to provide its functionality, so its state is the same regardless of when the service starts.

2.5. Disabling automatic management of daemons

You can mark the Cephadm services as managed or unmanaged without having to edit and re-apply the service specification.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to all the nodes.

Procedure

  • Set unmanaged for services by using this command:

    Syntax

    ceph orch set-unmanaged SERVICE_NAME

    Example

    [root@host01 ~]# ceph orch set-unmanaged grafana

  • Set managed for services by using this command:

    Syntax

    ceph orch set-managed SERVICE_NAME

    Example

    [root@host01 ~]# ceph orch set-managed mon

2.6. Deploying the Ceph daemons using the service specification

Using the Ceph Orchestrator, you can deploy daemons such as Ceph Manager, Ceph Monitors, Ceph OSDs, and the monitoring stack using a service specification in a YAML file.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to all the nodes.

Procedure

  1. Create the YAML file:

    Example

    [root@host01 ~]# touch mon.yaml

  2. This file can be configured in two different ways:

    • Edit the file to include the host details in placement specification:

      Syntax

      service_type: SERVICE_NAME
      placement:
        hosts:
          - HOST_NAME_1
          - HOST_NAME_2

      Example

      service_type: mon
      placement:
        hosts:
          - host01
          - host02
          - host03

    • Edit the file to include the label details in placement specification:

      Syntax

      service_type: SERVICE_NAME
      placement:
        label: "LABEL_1"

      Example

      service_type: mon
      placement:
        label: "mon"

  3. Optional: You can also pass extra container arguments in the service specification file, for example to set CPU limits or mount CA certificates and other files, when deploying services:

    Example

    extra_container_args:
      - "-v"
      - "/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro"
      - "--security-opt"
      - "label=disable"
      - "--cpus=2"
    extra_entrypoint_args:
      - "--collector.textfile.directory=/var/lib/node_exporter/textfile_collector2"

    Note

    Red Hat Ceph Storage supports the use of extra arguments to enable additional metrics in node-exporter deployed by Cephadm.

  4. Mount the YAML file under a directory in the container:

    Example

    [root@host01 ~]# cephadm shell --mount mon.yaml:/var/lib/ceph/mon/mon.yaml

  5. Navigate to the directory:

    Example

    [ceph: root@host01 /]# cd /var/lib/ceph/mon/

  6. Deploy the Ceph daemons using service specification:

    Syntax

    ceph orch apply -i FILE_NAME.yaml

    Example

    [ceph: root@host01 mon]# ceph orch apply -i mon.yaml
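    Steps 1 and 2 above can be sketched in a single command by writing the file with a heredoc instead of touch plus a manual edit. This sketch uses the label-based placement variant with the "mon" label:

```shell
# Sketch: create mon.yaml in one step with a heredoc.
# The label "mon" matches the label-based placement variant above.
cat > mon.yaml <<'EOF'
service_type: mon
placement:
  label: "mon"
EOF

# Review the file before mounting and applying it.
cat mon.yaml

# From outside the container, the file would then be mounted and applied:
#   cephadm shell --mount mon.yaml:/var/lib/ceph/mon/mon.yaml
#   ceph orch apply -i /var/lib/ceph/mon/mon.yaml
```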

Verification

  • List the service:

    Example

    [ceph: root@host01 /]# ceph orch ls

  • List the hosts, daemons, and processes:

    Syntax

    ceph orch ps --daemon_type=DAEMON_NAME

    Example

    [ceph: root@host01 /]# ceph orch ps --daemon_type=mon

Additional Resources

2.7. Deploying the Ceph File System mirroring daemon using the service specification

Ceph File System (CephFS) supports asynchronous replication of snapshots to a remote CephFS file system using the CephFS mirroring daemon (cephfs-mirror). Snapshot synchronization copies snapshot data to a remote CephFS, and creates a new snapshot on the remote target with the same name. Using the Ceph Orchestrator, you can deploy cephfs-mirror using the service specification in a YAML file.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to all the nodes.
  • A CephFS created.

Procedure

  1. Create the YAML file:

    Example

    [root@host01 ~]# touch mirror.yaml

  2. Edit the file to include the following:

    Syntax

    service_type: cephfs-mirror
    service_name: SERVICE_NAME
    placement:
      hosts:
        - HOST_NAME_1
        - HOST_NAME_2
        - HOST_NAME_3

    Example

    service_type: cephfs-mirror
    service_name: cephfs-mirror
    placement:
      hosts:
        - host01
        - host02
        - host03

  3. Mount the YAML file under a directory in the container:

    Example

    [root@host01 ~]# cephadm shell --mount mirror.yaml:/var/lib/ceph/mirror.yaml

  4. Navigate to the directory:

    Example

    [ceph: root@host01 /]# cd /var/lib/ceph/

  5. Deploy the cephfs-mirror daemon using the service specification:

    Example

    [ceph: root@host01 /]# ceph orch apply -i mirror.yaml

Verification

  • List the service:

    Example

    [ceph: root@host01 /]# ceph orch ls

  • List the hosts, daemons, and processes:

    Example

    [ceph: root@host01 /]# ceph orch ps --daemon_type=cephfs-mirror

Additional Resources

© 2024 Red Hat, Inc.