Chapter 3. Management of hosts using the Ceph Orchestrator
As a storage administrator, you can use the Ceph Orchestrator with Cephadm in the backend to add, list, and remove hosts in an existing Red Hat Ceph Storage cluster.
You can also add labels to hosts. Labels are free-form and have no specific meanings. Each host can have multiple labels. For example, apply the mon label to all hosts that have monitor daemons deployed, the mgr label to all hosts with manager daemons deployed, the rgw label to hosts with Ceph Object Gateways, and so on.
Labeling all the hosts in the storage cluster helps to simplify system management tasks by allowing you to quickly identify the daemons running on each host. In addition, you can use the Ceph Orchestrator or a YAML file to deploy or remove daemons on hosts that have specific host labels.
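For example, after labeling hosts you can target them when deploying daemons. The following command is a minimal sketch that restricts monitor daemons to hosts carrying the mon label; the label name is taken from the example above, and the same placement can also be expressed in a YAML service specification:
Example
[ceph: root@host01 /]# ceph orch apply mon --placement="label:mon"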
This section covers the following administrative tasks:
- Adding hosts using the Ceph Orchestrator.
- Adding multiple hosts using the Ceph Orchestrator.
- Listing hosts using the Ceph Orchestrator.
- Adding a label to a host.
- Removing a label from a host.
- Removing hosts using the Ceph Orchestrator.
- Placing hosts in the maintenance mode using the Ceph Orchestrator.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to all the nodes.
- The IP addresses of the new hosts are updated in the /etc/hosts file.
3.1. Adding hosts using the Ceph Orchestrator
You can use the Ceph Orchestrator with Cephadm in the backend to add hosts to an existing Red Hat Ceph Storage cluster.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to all nodes in the storage cluster.
- Register the nodes to the CDN and attach subscriptions.
- An Ansible user with sudo and passwordless ssh access to all nodes in the storage cluster.
Procedure
From the Ceph administration node, log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Extract the cluster’s public SSH keys to a folder:
Syntax
ceph cephadm get-pub-key > ~/PATH
Example
[ceph: root@host01 /]# ceph cephadm get-pub-key > ~/ceph.pub
Copy the Ceph cluster’s public SSH keys to the root user’s authorized_keys file on the new host:
Syntax
ssh-copy-id -f -i ~/PATH root@HOST_NAME_2
Example
[ceph: root@host01 /]# ssh-copy-id -f -i ~/ceph.pub root@host02
From the Ansible administration node, add the new host to the Ansible inventory file. The default location for the file is /usr/share/cephadm-ansible/hosts. The following example shows the structure of a typical inventory file:
Example
host01
host02
host03

[admin]
host00
Note: If you have previously added the new host to the Ansible inventory file and run the preflight playbook on the host, skip to step 6.
Run the preflight playbook with the --limit option:
Syntax
ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars "ceph_origin=rhcs" --limit NEWHOST
Example
[ceph-admin@admin cephadm-ansible]$ ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=rhcs" --limit host02
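If you are adding several new hosts at once, you can run the preflight playbook against all of them in a single pass. The comma-separated host list below is illustrative:
Example
[ceph-admin@admin cephadm-ansible]$ ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=rhcs" --limit host02,host03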
The preflight playbook installs podman, lvm2, chronyd, and cephadm on the new host. After installation is complete, cephadm resides in the /usr/sbin/ directory.
From the Ceph administration node, log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Use the cephadm orchestrator to add hosts to the storage cluster:
Syntax
ceph orch host add HOST_NAME IP_ADDRESS_OF_HOST [--labels=LABEL_NAME_1,LABEL_NAME_2]
The --labels option is optional and adds the labels when adding the host. You can add multiple labels to the host.
Example
[ceph: root@host01 /]# ceph orch host add host02 10.10.128.70 --labels=mon,mgr
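You can also pass the special _admin label at this point so that cephadm distributes the admin keyring and configuration file to the new host; the host name and IP address below are illustrative:
Example
[ceph: root@host01 /]# ceph orch host add host03 10.10.128.71 --labels=_admin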
Verification
List the hosts:
Example
[ceph: root@host01 /]# ceph orch host ls
Additional Resources
- See the Listing hosts using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide.
- For more information about the cephadm-preflight playbook, see the Running the preflight playbook section in the Red Hat Ceph Storage Installation Guide.
- See the Registering Red Hat Ceph Storage nodes to the CDN and attaching subscriptions section in the Red Hat Ceph Storage Installation Guide.
- See the Creating an Ansible user with sudo access section in the Red Hat Ceph Storage Installation Guide.
3.2. Adding multiple hosts using the Ceph Orchestrator
You can use the Ceph Orchestrator to add multiple hosts to a Red Hat Ceph Storage cluster at the same time using the service specification in YAML file format.
Prerequisites
- A running Red Hat Ceph Storage cluster.
Procedure
Create the hosts.yaml file:
Example
[root@host01 ~]# touch hosts.yaml
Edit the hosts.yaml file to include the following details:
Example
service_type: host
addr: host01
hostname: host01
labels:
- mon
- osd
- mgr
---
service_type: host
addr: host02
hostname: host02
labels:
- mon
- osd
- mgr
---
service_type: host
addr: host03
hostname: host03
labels:
- mon
- osd
Mount the YAML file under a directory in the container:
Example
[root@host01 ~]# cephadm shell --mount hosts.yaml:/var/lib/ceph/hosts.yaml
Navigate to the directory:
Example
[ceph: root@host01 /]# cd /var/lib/ceph/
Deploy the hosts using service specification:
Syntax
ceph orch apply -i FILE_NAME.yaml
Example
[ceph: root@host01 hosts]# ceph orch apply -i hosts.yaml
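If you want to preview the changes before applying the specification, you can first run the command with the --dry-run option; the output format depends on your Ceph version:
Example
[ceph: root@host01 hosts]# ceph orch apply -i hosts.yaml --dry-run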
Verification
List the hosts:
Example
[ceph: root@host01 /]# ceph orch host ls
Additional Resources
- See the Listing hosts using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide.
3.3. Listing hosts using the Ceph Orchestrator
You can list the hosts of a Ceph cluster with the Ceph Orchestrator.
The STATUS of the hosts is blank in the output of the ceph orch host ls command.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the storage cluster.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
List the hosts of the cluster:
Example
[ceph: root@host01 /]# ceph orch host ls
You will see that the STATUS of the hosts is blank, which is expected.
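The output looks similar to the following; the host names, addresses, and labels are illustrative, and the exact columns can vary between Ceph versions:
Example
HOST     ADDR           LABELS       STATUS
host01   10.10.128.68   _admin,mon
host02   10.10.128.70   mon,mgr
host03   10.10.128.71   mon,osd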
3.4. Adding a label to a host
Use the Ceph Orchestrator to add a label to a host. Labels can be used to specify placement of daemons.
A few examples of labels are mgr, mon, and osd, based on the service deployed on the hosts. Each host can have multiple labels.
You can also add the following host labels that have special meaning to cephadm. They begin with _:
- _no_schedule: This label prevents cephadm from scheduling or deploying daemons on the host. If it is added to an existing host that already contains Ceph daemons, it causes cephadm to move those daemons elsewhere, except the OSDs, which are not removed automatically. When a host is added with the _no_schedule label, no daemons are deployed on it. When the daemons are drained before the host is removed, the _no_schedule label is set on that host.
- _no_autotune_memory: This label prevents memory from being autotuned on the host. It prevents the daemon memory from being tuned even when the osd_memory_target_autotune option or other similar options are enabled for one or more daemons on that host.
- _admin: By default, the _admin label is applied to the bootstrapped host in the storage cluster and the client.admin key is set to be distributed to that host with the ceph orch client-keyring {ls|set|rm} function. Adding this label to additional hosts normally causes cephadm to deploy the configuration and keyring files in the /etc/ceph directory.
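After labeling hosts, you can reference the labels in a service specification to control placement. The following is a minimal sketch of such a specification; the service type and label name are illustrative, and you would apply it with ceph orch apply -i FILE_NAME.yaml as shown earlier:
Example
service_type: mon
placement:
  label: mon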
Prerequisites
- A storage cluster that has been installed and bootstrapped.
- Root-level access to all nodes in the storage cluster.
- Hosts are added to the storage cluster.
Procedure
Log in to the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Add a label to a host:
Syntax
ceph orch host label add HOSTNAME LABEL
Example
[ceph: root@host01 /]# ceph orch host label add host02 mon
Verification
List the hosts:
Example
[ceph: root@host01 /]# ceph orch host ls
3.5. Removing a label from a host
You can use the Ceph Orchestrator to remove a label from a host.
Prerequisites
- A storage cluster that has been installed and bootstrapped.
- Root-level access to all nodes in the storage cluster.
Procedure
Launch the cephadm shell:
Example
[root@host01 ~]# cephadm shell
[ceph: root@host01 /]#
Remove the label:
Syntax
ceph orch host label rm HOSTNAME LABEL
Example
[ceph: root@host01 /]# ceph orch host label rm host02 mon
Verification
List the hosts:
Example
[ceph: root@host01 /]# ceph orch host ls
3.6. Removing hosts using the Ceph Orchestrator
You can remove hosts of a Ceph cluster with the Ceph Orchestrator. All the daemons are removed with the drain option, which adds the _no_schedule label to the host to ensure that no daemons can be deployed on it until the operation is complete.
If you are removing the bootstrap host, be sure to copy the admin keyring and the configuration file to another host in the storage cluster before you remove the host.
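One way to do this is to add the _admin label to another host so that cephadm distributes the files, or to copy them manually; the target host name below is illustrative and the paths assume the default file locations:
Example
[root@host01 ~]# scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring root@host03:/etc/ceph/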
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to all the nodes.
- Hosts are added to the storage cluster.
- All the services are deployed.
- Cephadm is deployed on the nodes where the services have to be removed.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Fetch the host details:
Example
[ceph: root@host01 /]# ceph orch host ls
Drain all the daemons from the host:
Syntax
ceph orch host drain HOSTNAME
Example
[ceph: root@host01 /]# ceph orch host drain host02
The _no_schedule label is automatically applied to the host, which blocks deployment.
Check the status of OSD removal:
Example
[ceph: root@host01 /]# ceph orch osd rm status
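While the OSDs are draining, the output looks similar to the following; the OSD IDs, host name, and timestamps are illustrative, and the columns can vary between Ceph versions:
Example
OSD_ID  HOST    STATE     PG_COUNT  REPLACE  FORCE  STARTED_AT
2       host02  draining  124       False    False  2023-01-10 12:26:07
5       host02  draining  107       False    False  2023-01-10 12:26:07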
When no placement groups (PG) are left on the OSD, the OSD is decommissioned and removed from the storage cluster.
Check if all the daemons are removed from the storage cluster:
Syntax
ceph orch ps HOSTNAME
Example
[ceph: root@host01 /]# ceph orch ps host02
Remove the host:
Syntax
ceph orch host rm HOSTNAME
Example
[ceph: root@host01 /]# ceph orch host rm host02
Additional Resources
- See the Adding hosts using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide for more information.
- See the Listing hosts using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide for more information.
3.7. Placing hosts in the maintenance mode using the Ceph Orchestrator
You can use the Ceph Orchestrator to place the hosts in and out of the maintenance mode. The ceph orch host maintenance enter command stops the systemd target, which causes all the Ceph daemons to stop on the host. Similarly, the ceph orch host maintenance exit command restarts the systemd target and the Ceph daemons restart on their own.
The orchestrator adopts the following workflow when the host is placed in maintenance:
- Confirms that the removal of hosts does not impact data availability by running the orch host ok-to-stop command.
- If the host has Ceph OSD daemons, applies noout to the host subtree to prevent data migration from triggering during the planned maintenance slot.
- Stops the Ceph target, thereby stopping all the daemons.
- Disables the ceph target on the host to prevent a reboot from automatically starting Ceph services.
Exiting maintenance reverses the above sequence.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to all the nodes.
- Hosts added to the cluster.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
You can either place the host in maintenance mode or take it out of maintenance mode:
Place the host in maintenance mode:
Syntax
ceph orch host maintenance enter HOST_NAME [--force]
Example
[ceph: root@host01 /]# ceph orch host maintenance enter host02 --force
The --force flag allows the user to bypass warnings, but not alerts.
Place the host out of the maintenance mode:
Syntax
ceph orch host maintenance exit HOST_NAME
Example
[ceph: root@host01 /]# ceph orch host maintenance exit host02
Verification
List the hosts:
Example
[ceph: root@host01 /]# ceph orch host ls
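A host that is still in maintenance mode is flagged in the STATUS column; the following output is illustrative, and the exact wording can vary between Ceph versions:
Example
HOST     ADDR           LABELS    STATUS
host01   10.10.128.68   _admin
host02   10.10.128.70   mon,mgr   Maintenance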