Chapter 3. Management of hosts using the Ceph Orchestrator


As a storage administrator, you can use the Ceph Orchestrator with Cephadm in the backend to add, list, and remove hosts in an existing Red Hat Ceph Storage cluster.

You can also add labels to hosts. Labels are free-form and have no specific meaning on their own. Each host can have multiple labels. For example, apply the mon label to all hosts that have monitor daemons deployed, mgr to all hosts with manager daemons deployed, rgw to hosts with Ceph Object Gateways deployed, and so on.

Labeling all the hosts in the storage cluster helps to simplify system management tasks by allowing you to quickly identify the daemons running on each host. In addition, you can use the Ceph Orchestrator or a YAML file to deploy or remove daemons on hosts that have specific host labels.
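
For example, once the hosts carry a label, you can use that label as a placement target. The following command is a minimal sketch; it assumes the mon label has already been applied to the intended hosts and restricts monitor daemons to those hosts:

Example

    [ceph: root@host01 /]# ceph orch apply mon --placement="label:mon"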

This section covers the following administrative tasks:

  • Adding hosts using the Ceph Orchestrator
  • Adding multiple hosts using the Ceph Orchestrator
  • Listing hosts using the Ceph Orchestrator
  • Adding a label to a host
  • Removing a label from a host
  • Removing hosts using the Ceph Orchestrator
  • Placing hosts in the maintenance mode using the Ceph Orchestrator

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to all the nodes.
  • The IP addresses of the new hosts should be updated in the /etc/hosts file.

3.1. Adding hosts using the Ceph Orchestrator

You can use the Ceph Orchestrator with Cephadm in the backend to add hosts to an existing Red Hat Ceph Storage cluster.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to all nodes in the storage cluster.
  • Register the nodes to the CDN and attach subscriptions.
  • Ansible user with sudo and passwordless ssh access to all nodes in the storage cluster.

Procedure

  1. From the Ceph administration node, log into the Cephadm shell:

    Example

    [root@host01 ~]# cephadm shell

  2. Extract the cluster’s public SSH key to a file:

    Syntax

    ceph cephadm get-pub-key > ~/PATH

    Example

    [ceph: root@host01 /]# ceph cephadm get-pub-key > ~/ceph.pub

  3. Copy the Ceph cluster’s public SSH key to the root user’s authorized_keys file on the new host:

    Syntax

    ssh-copy-id -f -i ~/PATH root@HOST_NAME_2

    Example

    [ceph: root@host01 /]# ssh-copy-id -f -i ~/ceph.pub root@host02

  4. From the Ansible administration node, add the new host to the Ansible inventory file. The default location for the file is /usr/share/cephadm-ansible/hosts. The following example shows the structure of a typical inventory file:

    Example

    host01
    host02
    host03
    
    [admin]
    host00

    Note

    If you have previously added the new host to the Ansible inventory file and run the preflight playbook on the host, skip to step 6.

  5. Run the preflight playbook with the --limit option:

    Syntax

    ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars "ceph_origin=rhcs" --limit NEWHOST

    Example

    [ceph-admin@admin cephadm-ansible]$ ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=rhcs" --limit host02

    The preflight playbook installs podman, lvm2, chronyd, and cephadm on the new host. After installation is complete, cephadm resides in the /usr/sbin/ directory.
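
    Optionally, you can confirm the installation location on the new host; the host name host02 is only illustrative:

    Example

    [root@host02 ~]# which cephadm
    /usr/sbin/cephadm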

  6. From the Ceph administration node, log into the Cephadm shell:

    Example

    [root@host01 ~]# cephadm shell

  7. Use the cephadm orchestrator to add hosts to the storage cluster:

    Syntax

    ceph orch host add HOST_NAME IP_ADDRESS_OF_HOST [--labels=LABEL_NAME_1,LABEL_NAME_2]

    The --labels option is optional; it adds the labels when adding the host. You can add multiple labels, separated by commas.

    Example

    [ceph: root@host01 /]# ceph orch host add host02 10.10.128.70 --labels=mon,mgr

Verification

  • List the hosts:

    Example

    [ceph: root@host01 /]# ceph orch host ls


3.2. Adding multiple hosts using the Ceph Orchestrator

You can use the Ceph Orchestrator to add multiple hosts to a Red Hat Ceph Storage cluster at the same time by using a service specification in YAML file format.

Prerequisites

  • A running Red Hat Ceph Storage cluster.

Procedure

  1. Create the hosts.yaml file:

    Example

    [root@host01 ~]# touch hosts.yaml

  2. Edit the hosts.yaml file to include the following details:

    Example

    service_type: host
    addr: host01
    hostname: host01
    labels:
    - mon
    - osd
    - mgr
    ---
    service_type: host
    addr: host02
    hostname: host02
    labels:
    - mon
    - osd
    - mgr
    ---
    service_type: host
    addr: host03
    hostname: host03
    labels:
    - mon
    - osd

  3. Mount the YAML file under a directory in the container:

    Example

    [root@host01 ~]# cephadm shell --mount hosts.yaml:/var/lib/ceph/hosts.yaml

  4. Navigate to the directory:

    Example

    [ceph: root@host01 /]# cd /var/lib/ceph/

  5. Deploy the hosts using the service specification:

    Syntax

    ceph orch apply -i FILE_NAME.yaml

    Example

    [ceph: root@host01 ceph]# ceph orch apply -i hosts.yaml

Verification

  • List the hosts:

    Example

    [ceph: root@host01 /]# ceph orch host ls


3.3. Listing hosts using the Ceph Orchestrator

You can list the hosts of a Ceph cluster with the Ceph Orchestrator.

Note

The STATUS of the hosts is blank in the output of the ceph orch host ls command.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Hosts are added to the storage cluster.

Procedure

  1. Log into the Cephadm shell:

    Example

    [root@host01 ~]# cephadm shell

  2. List the hosts of the cluster:

    Example

    [ceph: root@host01 /]# ceph orch host ls

    You will see that the STATUS of the hosts is blank, which is expected.
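
    If you need the host list in machine-readable form, for example in a script, you can add the standard Ceph --format option. This is a sketch; adjust the format value to what your tooling expects:

    Example

    [ceph: root@host01 /]# ceph orch host ls --format json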

3.4. Adding a label to a host

Use the Ceph Orchestrator to add a label to a host. Labels can be used to specify placement of daemons.

Typical examples of labels are mgr, mon, and osd, based on the services deployed on the hosts. Each host can have multiple labels.
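
The following service specification is an illustrative sketch of label-based placement; it assumes hosts already carry the mon label and would place monitor daemons only on those hosts. You would apply it with the ceph orch apply -i FILE_NAME.yaml command:

Example

    service_type: mon
    placement:
      label: mon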

You can also add the following host labels, which have special meaning to cephadm and begin with _:

  • _no_schedule: This label prevents cephadm from scheduling or deploying daemons on the host. If it is added to an existing host that already contains Ceph daemons, it causes cephadm to move those daemons elsewhere, except OSDs, which are not removed automatically. When a host is added with the _no_schedule label, no daemons are deployed on it. When the daemons are drained before the host is removed, the _no_schedule label is set on that host.
  • _no_autotune_memory: This label disables memory autotuning on the host. Daemon memory is not tuned even when the osd_memory_target_autotune option or other similar options are enabled for one or more daemons on that host.
  • _admin: By default, the _admin label is applied to the bootstrapped host in the storage cluster, and the client.admin key is set to be distributed to that host with the ceph orch client-keyring {ls|set|rm} function. Adding this label to additional hosts normally causes cephadm to deploy the configuration and keyring files in the /etc/ceph directory, as shown in the example after this list.
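
For example, to have cephadm distribute the admin configuration and keyring to an additional host, add the _admin label to it. The host name host03 is only illustrative:

Example

    [ceph: root@host01 /]# ceph orch host label add host03 _admin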

Prerequisites

  • A storage cluster that has been installed and bootstrapped.
  • Root-level access to all nodes in the storage cluster.
  • Hosts are added to the storage cluster.

Procedure

  1. Log in to the Cephadm shell:

    Example

    [root@host01 ~]# cephadm shell

  2. Add a label to a host:

    Syntax

    ceph orch host label add HOSTNAME LABEL

    Example

    [ceph: root@host01 /]# ceph orch host label add host02 mon

Verification

  • List the hosts:

    Example

    [ceph: root@host01 /]# ceph orch host ls

3.5. Removing a label from a host

You can use the Ceph Orchestrator to remove a label from a host.

Prerequisites

  • A storage cluster that has been installed and bootstrapped.
  • Root-level access to all nodes in the storage cluster.

Procedure

  1. Launch the cephadm shell:

    Example

    [root@host01 ~]# cephadm shell
    [ceph: root@host01 /]#

  2. Remove the label:

    Syntax

    ceph orch host label rm HOSTNAME LABEL

    Example

    [ceph: root@host01 /]# ceph orch host label rm host02 mon

Verification

  • List the hosts:

    Example

    [ceph: root@host01 /]# ceph orch host ls

3.6. Removing hosts using the Ceph Orchestrator

You can remove hosts from a Ceph cluster with the Ceph Orchestrator. All the daemons are removed with the drain option, which adds the _no_schedule label to the host to ensure that no new daemons can be deployed on it until the operation is complete.

Important

If you are removing the bootstrap host, be sure to copy the admin keyring and the configuration file to another host in the storage cluster before you remove the host.
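
For example, you can copy the files manually from the bootstrap host; the target host host03 and the default /etc/ceph paths shown here are illustrative assumptions. Alternatively, add the _admin label to another host so that cephadm distributes the files for you, as described in Section 3.4:

Example

    [root@host01 ~]# scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring root@host03:/etc/ceph/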

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to all the nodes.
  • Hosts are added to the storage cluster.
  • All the services are deployed.
  • Cephadm is deployed on the nodes where the services have to be removed.

Procedure

  1. Log into the Cephadm shell:

    Example

    [root@host01 ~]# cephadm shell

  2. Fetch the host details:

    Example

    [ceph: root@host01 /]# ceph orch host ls

  3. Drain all the daemons from the host:

    Syntax

    ceph orch host drain HOSTNAME

    Example

    [ceph: root@host01 /]# ceph orch host drain host02

    The _no_schedule label is automatically applied to the host, which blocks the deployment of new daemons on it.

  4. Check the status of OSD removal:

    Example

    [ceph: root@host01 /]# ceph orch osd rm status

    When no placement groups (PG) are left on the OSD, the OSD is decommissioned and removed from the storage cluster.

  5. Check if all the daemons are removed from the storage cluster:

    Syntax

    ceph orch ps HOSTNAME

    Example

    [ceph: root@host01 /]# ceph orch ps host02

  6. Remove the host:

    Syntax

    ceph orch host rm HOSTNAME

    Example

    [ceph: root@host01 /]# ceph orch host rm host02


3.7. Placing hosts in the maintenance mode using the Ceph Orchestrator

You can use the Ceph Orchestrator to place hosts in and out of maintenance mode. The ceph orch host maintenance enter command stops the systemd target, which stops all the Ceph daemons on the host. Similarly, the ceph orch host maintenance exit command restarts the systemd target and the Ceph daemons restart on their own.

The orchestrator adopts the following workflow when the host is placed in maintenance:

  1. Confirms that stopping the host does not impact data availability by running the ceph orch host ok-to-stop command.
  2. If the host has Ceph OSD daemons, it applies noout to the host subtree to prevent data migration from triggering during the planned maintenance slot.
  3. Stops the Ceph target, thereby stopping all the daemons.
  4. Disables the Ceph target on the host to prevent a reboot from automatically starting the Ceph services.

Exiting maintenance reverses the above sequence.
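
If you want to run the availability check yourself before placing a host in maintenance mode, you can invoke the same check that the orchestrator uses; the host name host02 is only illustrative:

Example

    [ceph: root@host01 /]# ceph orch host ok-to-stop host02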

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Root-level access to all the nodes.
  • Hosts added to the cluster.

Procedure

  1. Log into the Cephadm shell:

    Example

    [root@host01 ~]# cephadm shell

  2. You can either place the host in maintenance mode or take it out of maintenance mode:

    • Place the host in maintenance mode:

      Syntax

      ceph orch host maintenance enter HOST_NAME [--force]

      Example

      [ceph: root@host01 /]# ceph orch host maintenance enter host02 --force

      The --force flag allows the user to bypass warnings, but not alerts.

    • Place the host out of the maintenance mode:

      Syntax

      ceph orch host maintenance exit HOST_NAME

      Example

      [ceph: root@host01 /]# ceph orch host maintenance exit host02

Verification

  • List the hosts:

    Example

    [ceph: root@host01 /]# ceph orch host ls
