Chapter 3. Management of hosts using the Ceph Orchestrator
As a storage administrator, you can use the Ceph Orchestrator with Cephadm in the backend to add, list, and remove hosts in an existing Red Hat Ceph Storage cluster.
You can also add labels to hosts. Labels are free-form and have no specific meanings. Each host can have multiple labels. For example, apply the mon label to all hosts that have monitor daemons deployed, the mgr label to all hosts with manager daemons deployed, the rgw label to hosts running Ceph object gateways, and so on.
Labeling all the hosts in the storage cluster helps to simplify system management tasks by allowing you to quickly identify the daemons running on each host. In addition, you can use the Ceph Orchestrator or a YAML file to deploy or remove daemons on hosts that have specific host labels.
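For example, after labeling hosts you can target them in a placement specification. The following is a minimal sketch, assuming several hosts already carry the mon label; it tells the orchestrator to run monitor daemons only on hosts with that label:
Example
[ceph: root@host01 /]# ceph orch apply mon --placement="label:mon"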
This section covers the following administrative tasks:
- Adding hosts using the Ceph Orchestrator.
- Setting the initial CRUSH location of a host.
- Adding multiple hosts using the Ceph Orchestrator.
- Listing hosts using the Ceph Orchestrator.
- Adding labels to hosts using the Ceph Orchestrator.
- Removing a label from a host.
- Removing hosts using the Ceph Orchestrator.
- Placing hosts in maintenance mode using the Ceph Orchestrator.
3.1. Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to all the nodes.
- The IP addresses of the new hosts should be updated in the /etc/hosts file.
3.2. Adding hosts using the Ceph Orchestrator
You can use the Ceph Orchestrator with Cephadm in the backend to add hosts to an existing Red Hat Ceph Storage cluster.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to all nodes in the storage cluster.
- Register the nodes to the CDN and attach subscriptions.
- Ansible user with sudo and passwordless ssh access to all nodes in the storage cluster.
Procedure
1. From the Ceph administration node, log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
2. Extract the cluster's public SSH keys to a folder:
Syntax
ceph cephadm get-pub-key > ~/PATH
Example
[ceph: root@host01 /]# ceph cephadm get-pub-key > ~/ceph.pub
3. Copy the Ceph cluster's public SSH keys to the root user's authorized_keys file on the new host:
Syntax
ssh-copy-id -f -i ~/PATH root@HOST_NAME_2
Example
[ceph: root@host01 /]# ssh-copy-id -f -i ~/ceph.pub root@host02
4. From the Ansible administration node, add the new host to the Ansible inventory file. The default location for the file is /usr/share/cephadm-ansible/hosts. The following example shows the structure of a typical inventory file:
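A minimal sketch of such an inventory, assuming host01 is the admin (bootstrap) node and host02 and host03 are the other storage cluster hosts; the host names and the [admin] group are illustrative, so adjust them to your environment:
Example
host02
host03
[admin]
host01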
Note: If you have previously added the new host to the Ansible inventory file and run the preflight playbook on the host, skip to step 6.
5. Run the preflight playbook with the --limit option:
Syntax
ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars "ceph_origin=rhcs" --limit NEWHOST
Example
[ceph-admin@admin cephadm-ansible]$ ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=rhcs" --limit host02
The preflight playbook installs podman, lvm2, chronyd, and cephadm on the new host. After installation is complete, cephadm resides in the /usr/sbin/ directory.
6. From the Ceph administration node, log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
7. Use the cephadm orchestrator to add hosts to the storage cluster:
Syntax
ceph orch host add HOST_NAME IP_ADDRESS_OF_HOST [--labels=LABEL_NAME_1,LABEL_NAME_2]
The --labels option is optional and adds the labels when adding the hosts. You can add multiple labels to the host.
Example
[ceph: root@host01 /]# ceph orch host add host02 10.10.128.70 --labels=mon,mgr
Verification
List the hosts:
Example
[ceph: root@host01 /]# ceph orch host ls
3.3. Setting the initial CRUSH location of a host
You can add the location identifier to the host, which instructs cephadm to create a new CRUSH host located in the specified hierarchy.
The location attribute only affects the initial CRUSH location. Subsequent changes to the location property are ignored. Also, removing a host does not remove any CRUSH buckets.
Prerequisites
- A running Red Hat Ceph Storage cluster.
Procedure
1. Edit the hosts.yaml file to include the following details:
Example
service_type: host
hostname: host01
addr: 192.168.0.11
location:
  rack: rack1
2. Mount the YAML file under a directory in the container:
Example
[root@host01 ~]# cephadm shell --mount hosts.yaml:/var/lib/ceph/hosts.yaml
3. Navigate to the directory:
Example
[ceph: root@host01 /]# cd /var/lib/ceph/
4. Deploy the hosts using the service specification:
Syntax
ceph orch apply -i FILE_NAME.yaml
Example
[ceph: root@host01 ceph]# ceph orch apply -i hosts.yaml
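Once the host and its OSDs are in the cluster, you can confirm that the bucket named by the location attribute, rack1 in the example above, appears in the CRUSH hierarchy. This is a minimal check, not part of the original procedure:
Example
[ceph: root@host01 /]# ceph osd tree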
3.4. Adding multiple hosts using the Ceph Orchestrator
You can use the Ceph Orchestrator to add multiple hosts to a Red Hat Ceph Storage cluster at the same time using the service specification in YAML file format.
Prerequisites
- A running Red Hat Ceph Storage cluster.
Procedure
1. Create the hosts.yaml file:
Example
[root@host01 ~]# touch hosts.yaml
2. Edit the hosts.yaml file to include the following details for each host:
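A minimal sketch, assuming two hypothetical hosts, host02 and host03, with illustrative addresses and labels; the format follows the host specification shown in the previous section, with one YAML document per host separated by ---:
Example
service_type: host
hostname: host02
addr: 192.168.0.12
labels:
- mon
- osd
---
service_type: host
hostname: host03
addr: 192.168.0.13
labels:
- mon
- osd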
3. Mount the YAML file under a directory in the container:
Example
[root@host01 ~]# cephadm shell --mount hosts.yaml:/var/lib/ceph/hosts.yaml
4. Navigate to the directory:
Example
[ceph: root@host01 /]# cd /var/lib/ceph/
5. Deploy the hosts using the service specification:
Syntax
ceph orch apply -i FILE_NAME.yaml
Example
[ceph: root@host01 ceph]# ceph orch apply -i hosts.yaml
Verification
List the hosts:
Example
[ceph: root@host01 /]# ceph orch host ls
3.5. Listing hosts using the Ceph Orchestrator
You can list the hosts of a Ceph cluster with the Ceph Orchestrator.
The STATUS of the hosts is blank in the output of the ceph orch host ls command.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the storage cluster.
Procedure
1. Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
2. List the hosts of the cluster:
Example
[ceph: root@host01 /]# ceph orch host ls
You will see that the STATUS of the hosts is blank, which is expected.
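If you need machine-readable output, for example to script checks against the host list, you can use the standard Ceph --format option; a minimal sketch:
Example
[ceph: root@host01 /]# ceph orch host ls --format json-pretty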
3.6. Adding labels to hosts using the Ceph Orchestrator
You can use the Ceph Orchestrator to add labels to hosts in an existing Red Hat Ceph Storage cluster. A few examples of labels are mgr, mon, and osd, based on the service deployed on the hosts.
You can also add the following host labels that have special meaning to cephadm. They begin with _:
- _no_schedule: This label prevents cephadm from scheduling or deploying daemons on the host. If it is added to an existing host that already contains Ceph daemons, it causes cephadm to move those daemons elsewhere, except OSDs, which are not removed automatically. When a host is added with the _no_schedule label, no daemons are deployed on it. When the daemons are drained before the host is removed, the _no_schedule label is set on that host.
- _no_autotune_memory: This label prevents memory autotuning on the host. It prevents the daemon memory from being tuned even when the osd_memory_target_autotune option or other similar options are enabled for one or more daemons on that host.
- _admin: By default, the _admin label is applied to the bootstrapped host in the storage cluster, and the client.admin key is set to be distributed to that host with the ceph orch client-keyring {ls|set|rm} function. Adding this label to additional hosts normally causes cephadm to deploy configuration and keyring files in the /etc/ceph directory.
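For example, to have cephadm deploy the admin configuration and keyring to an additional host, you can apply the _admin label using the label command shown in the procedure below; host03 here is a hypothetical host name:
Example
[ceph: root@host01 /]# ceph orch host label add host03 _admin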
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Hosts are added to the storage cluster.
Procedure
1. Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
2. Add labels to the hosts:
Syntax
ceph orch host label add HOST_NAME LABEL_NAME
Example
[ceph: root@host01 /]# ceph orch host label add host02 mon
Verification
List the hosts:
Example
[ceph: root@host01 /]# ceph orch host ls
3.7. Removing a label from a host
You can use the Ceph orchestrator to remove a label from a host.
Prerequisites
- A storage cluster that has been installed and bootstrapped.
- Root-level access to all nodes in the storage cluster.
Procedure
1. Launch the cephadm shell:
Example
[root@host01 ~]# cephadm shell
[ceph: root@host01 /]#
2. Remove the label:
Syntax
ceph orch host label rm HOSTNAME LABEL
Example
[ceph: root@host01 /]# ceph orch host label rm host02 mon
Verification
List the hosts:
Example
[ceph: root@host01 /]# ceph orch host ls
3.8. Removing hosts using the Ceph Orchestrator
You can remove hosts of a Ceph cluster with the Ceph Orchestrator. All the daemons are removed with the drain option, which adds the _no_schedule label to ensure that you cannot deploy any daemons on the host until the operation is complete.
If you are removing the bootstrap host, be sure to copy the admin keyring and the configuration file to another host in the storage cluster before you remove the host.
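As a minimal sketch, assuming host03 is another host in the storage cluster that should hold the admin files, you might copy the default configuration file and admin keyring like this before removing the bootstrap host:
Example
[root@host01 ~]# scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring root@host03:/etc/ceph/
Alternatively, applying the _admin label to another host, as described in Section 3.6, causes cephadm to deploy the configuration and keyring files there.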
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to all the nodes.
- Hosts are added to the storage cluster.
- All the services are deployed.
- Cephadm is deployed on the nodes where the services have to be removed.
Procedure
1. Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
2. Fetch the host details:
Example
[ceph: root@host01 /]# ceph orch host ls
3. Drain all the daemons from the host:
Syntax
ceph orch host drain HOSTNAME
Example
[ceph: root@host01 /]# ceph orch host drain host02
The _no_schedule label is automatically applied to the host, which blocks deployment.
4. Check the status of OSD removal:
Example
[ceph: root@host01 /]# ceph orch osd rm status
When no placement groups (PGs) are left on the OSD, the OSD is decommissioned and removed from the storage cluster.
5. Check if all the daemons are removed from the storage cluster:
Syntax
ceph orch ps HOSTNAME
Example
[ceph: root@host01 /]# ceph orch ps host02
6. Remove the host:
Syntax
ceph orch host rm HOSTNAME
Example
[ceph: root@host01 /]# ceph orch host rm host02
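If a host is permanently down and its daemons cannot be drained, recent Ceph releases also support forcing the removal of an offline host. This is a sketch only; verify that the --offline and --force options are available in your release before relying on them:
Example
[ceph: root@host01 /]# ceph orch host rm host02 --offline --force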
3.9. Placing hosts in maintenance mode using the Ceph Orchestrator
You can use the Ceph Orchestrator to place hosts in and out of maintenance mode. The ceph orch host maintenance enter command stops the systemd target, which causes all the Ceph daemons on the host to stop. Similarly, the ceph orch host maintenance exit command restarts the systemd target and the Ceph daemons restart on their own.
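You can observe the effect directly on the host through systemd. Cephadm names the cluster target after the cluster FSID; FSID below is a placeholder for your cluster's identifier:
Example
[root@host02 ~]# systemctl status ceph-FSID.target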
The orchestrator adopts the following workflow when the host is placed in maintenance:
- Confirms the removal of hosts does not impact data availability by running the orch host ok-to-stop command.
- If the host has Ceph OSD daemons, it applies noout to the host subtree to prevent data migration from triggering during the planned maintenance slot.
- Stops the Ceph target, thereby stopping all the daemons.
- Disables the ceph target on the host, to prevent a reboot from automatically starting Ceph services.
Exiting maintenance reverses the above sequence.
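If you want to run the same availability check yourself before entering maintenance, you can invoke the command that the orchestrator uses; host02 is a hypothetical host name:
Example
[ceph: root@host01 /]# ceph orch host ok-to-stop host02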
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to all the nodes.
- Hosts added to the cluster.
Procedure
1. Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
2. You can either place the host in maintenance mode or take it out of maintenance mode:
- Place the host in maintenance mode:
Syntax
ceph orch host maintenance enter HOST_NAME [--force]
Example
[ceph: root@host01 /]# ceph orch host maintenance enter host02 --force
The --force flag allows the user to bypass warnings, but not alerts.
- Take the host out of maintenance mode:
Syntax
ceph orch host maintenance exit HOST_NAME
Example
[ceph: root@host01 /]# ceph orch host maintenance exit host02
Verification
List the hosts:
Example
[ceph: root@host01 /]# ceph orch host ls