Chapter 16. Managing a Red Hat Ceph Storage cluster using cephadm-ansible modules
As a storage administrator, you can use cephadm-ansible modules in Ansible playbooks to administer your Red Hat Ceph Storage cluster. The cephadm-ansible package provides several modules that wrap cephadm calls, which lets you write your own Ansible playbooks to administer your cluster.
At this time, the cephadm-ansible modules support only the most important tasks. Any operation that the cephadm-ansible modules do not cover must be completed using either the command or shell Ansible modules in your playbooks.
16.1. The cephadm-ansible modules
The cephadm-ansible modules are a collection of modules that simplify writing Ansible playbooks by providing a wrapper around the cephadm and ceph orch commands. You can use one or more of these modules to write your own Ansible playbooks to administer your cluster.
The cephadm-ansible package includes the following modules:
- cephadm_bootstrap
- ceph_orch_host
- ceph_config
- ceph_orch_apply
- ceph_orch_daemon
- cephadm_registry_login
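Each module call maps to an underlying cephadm or ceph orch operation. For example, a task that uses the ceph_orch_host module corresponds roughly to running ceph orch host add on a node that holds the admin keyring. The following is a minimal illustrative sketch rather than part of the product examples; the play target host01 and the host name, IP address, and labels are placeholder values:

---
- name: example of a module wrapping a ceph orch command
  hosts: host01
  become: true
  gather_facts: false
  tasks:
    # Corresponds approximately to: ceph orch host add host02 10.10.128.69 --labels mon,mgr
    - name: add a host to the cluster
      ceph_orch_host:
        name: host02
        address: 10.10.128.69
        labels: ['mon', 'mgr']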
16.2. The cephadm-ansible module options
The following tables list the available options for the cephadm-ansible modules. Options listed as required must be set when you use the modules in your Ansible playbooks. Options listed with a default value of true are set automatically when you use the modules, and you do not need to specify them in your playbook. For example, for the cephadm_bootstrap module, the Ceph Dashboard is installed unless you set dashboard: false.
| cephadm_bootstrap | Description | Required | Default |
|---|---|---|---|
| mon_ip | Ceph Monitor IP address. | true | |
| image | Ceph container image. | false | |
| docker | Use docker instead of podman. | false | |
| fsid | Define the Ceph FSID. | false | |
| pull | Pull the Ceph container image. | false | true |
| dashboard | Deploy the Ceph Dashboard. | false | true |
| dashboard_user | Specify a specific Ceph Dashboard user. | false | |
| dashboard_password | Ceph Dashboard password. | false | |
| monitoring | Deploy the monitoring stack. | false | true |
| firewalld | Manage firewall rules with firewalld. | false | true |
| allow_overwrite | Allow overwrite of existing --output-config, --output-keyring, or --output-pub-ssh-key files. | false | false |
| registry_url | URL for custom registry. | false | |
| registry_username | Username for custom registry. | false | |
| registry_password | Password for custom registry. | false | |
| registry_json | JSON file with custom registry login information. | false | |
| ssh_user | SSH user to use for cephadm SSH to hosts. | false | |
| ssh_config | SSH config file path for the cephadm SSH client. | false | |
| allow_fqdn_hostname | Allow a hostname that is a fully-qualified domain name (FQDN). | false | false |
| cluster_network | Subnet to use for cluster replication, recovery, and heartbeats. | false | |
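As noted above, the Ceph Dashboard is deployed by default during bootstrap. The following is a minimal sketch of a play that overrides that default; the host name and monitor IP address are placeholder values, and a real playbook would normally also log in to the registry first, as shown in the bootstrap procedure below:

---
- name: example bootstrap without the Ceph Dashboard
  hosts: host01
  become: true
  gather_facts: false
  tasks:
    - name: bootstrap the cluster and skip the dashboard
      cephadm_bootstrap:
        mon_ip: 10.10.128.68
        dashboard: false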
| ceph_orch_host | Description | Required | Default |
|---|---|---|---|
| fsid | The FSID of the Ceph cluster to interact with. | false | |
| image | The Ceph container image to use. | false | |
| name | Name of the host to add, remove, or update. | true | |
| address | IP address of the host. | true when state is present. | |
| set_admin_label | Set the _admin label on the specified host. | false | false |
| labels | The list of labels to apply to the host. | false | [] |
| state | If set to present, it ensures the host specified in name is present. If set to absent, it removes the host specified in name. If set to drain, it schedules the removal of all daemons from the host specified in name. | false | present |
| ceph_config | Description | Required | Default |
|---|---|---|---|
| fsid | The FSID of the Ceph cluster to interact with. | false | |
| image | The Ceph container image to use. | false | |
| action | Whether to set or get the parameter specified in option. | false | set |
| who | Which daemon to set the configuration to. | true | |
| option | Name of the parameter to set or get. | true | |
| value | Value of the parameter to set. | true if action is set | |
| ceph_orch_apply | Description | Required |
|---|---|---|
| fsid | The FSID of the Ceph cluster to interact with. | false |
| image | The Ceph container image to use. | false |
| spec | The service specification to apply. | true |
| ceph_orch_daemon | Description | Required |
|---|---|---|
| fsid | The FSID of the Ceph cluster to interact with. | false |
| image | The Ceph container image to use. | false |
| state | The desired state of the service specified in daemon_type and daemon_id. If started, it ensures the service is started. If stopped, it ensures the service is stopped. If restarted, it restarts the service. | true |
| daemon_id | The ID of the service. | true |
| daemon_type | The type of service. | true |
| cephadm_registry_login | Description | Required | Default |
|---|---|---|---|
| state | Login or logout of a registry. | false | login |
| docker | Use docker instead of podman. | false | |
| registry_url | The URL for custom registry. | false | |
| registry_username | Username for custom registry. | true when state is login. | |
| registry_password | Password for custom registry. | true when state is login. | |
| registry_json | The path to a JSON file. This file must be present on remote hosts prior to running this task. This option is currently not supported. | | |
16.3. Bootstrapping a storage cluster using the cephadm_bootstrap and cephadm_registry_login modules
As a storage administrator, you can bootstrap a storage cluster with Ansible by using the cephadm_bootstrap and cephadm_registry_login modules in your Ansible playbook.
Prerequisites
- An IP address for the first Ceph Monitor container, which is also the IP address for the first node in the storage cluster.
- Login access to registry.redhat.io.
- A minimum of 10 GB of free space for /var/lib/containers/.
- Red Hat Enterprise Linux 8.10 or 9.4 or later with ansible-core bundled into AppStream.
- Installation of the cephadm-ansible package on the Ansible administration node.
- Passwordless SSH is set up on all hosts in the storage cluster.
- Hosts are registered with CDN.
Procedure
- Log in to the Ansible administration node.
Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node:
Example
[ceph-admin@admin ~]$ cd /usr/share/cephadm-ansible
Create the hosts file and add hosts, labels, and the monitor IP address of the first host in the storage cluster:
Syntax
sudo vi INVENTORY_FILE

HOST1 labels="['LABEL1', 'LABEL2']"
HOST2 labels="['LABEL1', 'LABEL2']"
HOST3 labels="['LABEL1']"

[admin]
ADMIN_HOST monitor_address=MONITOR_IP_ADDRESS labels="['ADMIN_LABEL', 'LABEL1', 'LABEL2']"
Example
[ceph-admin@admin cephadm-ansible]$ sudo vi hosts

host02 labels="['mon', 'mgr']"
host03 labels="['mon', 'mgr']"
host04 labels="['osd']"
host05 labels="['osd']"
host06 labels="['osd']"

[admin]
host01 monitor_address=10.10.128.68 labels="['_admin', 'mon', 'mgr']"
Run the preflight playbook:
Syntax
ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars "ceph_origin=rhcs"
Example
[ceph-admin@admin cephadm-ansible]$ ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=rhcs"
Create a playbook to bootstrap your cluster:
Syntax
sudo vi PLAYBOOK_FILENAME.yml

---
- name: NAME_OF_PLAY
  hosts: BOOTSTRAP_HOST
  become: USE_ELEVATED_PRIVILEGES
  gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS
  tasks:
    - name: NAME_OF_TASK
      cephadm_registry_login:
        state: STATE
        registry_url: REGISTRY_URL
        registry_username: REGISTRY_USER_NAME
        registry_password: REGISTRY_PASSWORD

    - name: NAME_OF_TASK
      cephadm_bootstrap:
        mon_ip: "{{ monitor_address }}"
        dashboard_user: DASHBOARD_USER
        dashboard_password: DASHBOARD_PASSWORD
        allow_fqdn_hostname: ALLOW_FQDN_HOSTNAME
        cluster_network: NETWORK_CIDR
Example
[ceph-admin@admin cephadm-ansible]$ sudo vi bootstrap.yml

---
- name: bootstrap the cluster
  hosts: host01
  become: true
  gather_facts: false
  tasks:
    - name: login to registry
      cephadm_registry_login:
        state: login
        registry_url: registry.redhat.io
        registry_username: user1
        registry_password: mypassword1

    - name: bootstrap initial cluster
      cephadm_bootstrap:
        mon_ip: "{{ monitor_address }}"
        dashboard_user: mydashboarduser
        dashboard_password: mydashboardpassword
        allow_fqdn_hostname: true
        cluster_network: 10.10.128.0/28
Run the playbook:
Syntax
ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME.yml -vvv
Example
[ceph-admin@admin cephadm-ansible]$ ansible-playbook -i hosts bootstrap.yml -vvv
Verification
- Review the Ansible output after running the playbook.
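In addition to reviewing the playbook output, you can add a health check to the end of the bootstrap play. This is an optional sketch, not part of the procedure above; it assumes the admin keyring is present on the bootstrap host and uses cephadm shell to query the cluster status:

    - name: verify cluster status
      ansible.builtin.shell:
        cmd: cephadm shell -- ceph -s
      register: ceph_status

    - name: print cluster status
      debug:
        msg: "{{ ceph_status.stdout }}"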
16.4. Adding or removing hosts using the ceph_orch_host module
As a storage administrator, you can add and remove hosts in your storage cluster by using the ceph_orch_host module in your Ansible playbook.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Register the nodes to the CDN and attach subscriptions.
- Ansible user with sudo and passwordless SSH access to all nodes in the storage cluster.
- Installation of the cephadm-ansible package on the Ansible administration node.
- New hosts have the storage cluster’s public SSH key. For more information about copying the storage cluster’s public SSH keys to new hosts, see Adding hosts in the Red Hat Ceph Storage Installation Guide.
Procedure
Use the following procedure to add new hosts to the cluster:
- Log in to the Ansible administration node.
Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node:
Example
[ceph-admin@admin ~]$ cd /usr/share/cephadm-ansible
Add the new hosts and labels to the Ansible inventory file.
Syntax
sudo vi INVENTORY_FILE

NEW_HOST1 labels="['LABEL1', 'LABEL2']"
NEW_HOST2 labels="['LABEL1', 'LABEL2']"
NEW_HOST3 labels="['LABEL1']"

[admin]
ADMIN_HOST monitor_address=MONITOR_IP_ADDRESS labels="['ADMIN_LABEL', 'LABEL1', 'LABEL2']"
Example
[ceph-admin@admin cephadm-ansible]$ sudo vi hosts

host02 labels="['mon', 'mgr']"
host03 labels="['mon', 'mgr']"
host04 labels="['osd']"
host05 labels="['osd']"
host06 labels="['osd']"

[admin]
host01 monitor_address=10.10.128.68 labels="['_admin', 'mon', 'mgr']"
Note
If you have previously added the new hosts to the Ansible inventory file and run the preflight playbook on the hosts, skip to step 3.
Run the preflight playbook with the --limit option:
Syntax
ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars "ceph_origin=rhcs" --limit NEWHOST
Example
[ceph-admin@admin cephadm-ansible]$ ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=rhcs" --limit host02
The preflight playbook installs podman, lvm2, chronyd, and cephadm on the new host. After installation is complete, cephadm resides in the /usr/sbin/ directory.
Create a playbook to add the new hosts to the cluster:
Syntax
sudo vi PLAYBOOK_FILENAME.yml

---
- name: PLAY_NAME
  hosts: HOSTS_OR_HOST_GROUPS
  become: USE_ELEVATED_PRIVILEGES
  gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS
  tasks:
    - name: NAME_OF_TASK
      ceph_orch_host:
        name: "{{ ansible_facts['hostname'] }}"
        address: "{{ ansible_facts['default_ipv4']['address'] }}"
        labels: "{{ labels }}"
      delegate_to: HOST_TO_DELEGATE_TASK_TO

    - name: NAME_OF_TASK
      when: inventory_hostname in groups['admin']
      ansible.builtin.shell:
        cmd: CEPH_COMMAND_TO_RUN
      register: REGISTER_NAME

    - name: NAME_OF_TASK
      when: inventory_hostname in groups['admin']
      debug:
        msg: "{{ REGISTER_NAME.stdout }}"
Note
By default, Ansible executes all tasks on the host that matches the hosts line of your playbook. The ceph orch commands must run on the host that contains the admin keyring and the Ceph configuration file. Use the delegate_to keyword to specify the admin host in your cluster.
Example
[ceph-admin@admin cephadm-ansible]$ sudo vi add-hosts.yml

---
- name: add additional hosts to the cluster
  hosts: all
  become: true
  gather_facts: true
  tasks:
    - name: add hosts to the cluster
      ceph_orch_host:
        name: "{{ ansible_facts['hostname'] }}"
        address: "{{ ansible_facts['default_ipv4']['address'] }}"
        labels: "{{ labels }}"
      delegate_to: host01

    - name: list hosts in the cluster
      when: inventory_hostname in groups['admin']
      ansible.builtin.shell:
        cmd: ceph orch host ls
      register: host_list

    - name: print current list of hosts
      when: inventory_hostname in groups['admin']
      debug:
        msg: "{{ host_list.stdout }}"
In this example, the playbook adds the new hosts to the cluster and displays a current list of hosts.
Run the playbook to add additional hosts to the cluster:
Syntax
ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME.yml
Example
[ceph-admin@admin cephadm-ansible]$ ansible-playbook -i hosts add-hosts.yml
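You can also use the ceph_orch_host module to update a host that is already in the cluster, for example to apply an additional label. The following is a minimal sketch rather than part of the procedure; it assumes host01 is the admin host, host02 is already in the cluster, and the rgw label is only an illustration:

---
- name: example of updating labels on an existing host
  hosts: host01
  become: true
  gather_facts: false
  tasks:
    - name: apply an updated set of labels to host02
      ceph_orch_host:
        name: host02
        address: 10.10.128.69
        labels: ['mon', 'mgr', 'rgw']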
Use the following procedure to remove hosts from the cluster:
- Log in to the Ansible administration node.
Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node:
Example
[ceph-admin@admin ~]$ cd /usr/share/cephadm-ansible
Create a playbook to remove a host or hosts from the cluster:
Syntax
sudo vi PLAYBOOK_FILENAME.yml

---
- name: NAME_OF_PLAY
  hosts: ADMIN_HOST
  become: USE_ELEVATED_PRIVILEGES
  gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS
  tasks:
    - name: NAME_OF_TASK
      ceph_orch_host:
        name: HOST_TO_REMOVE
        state: STATE

    - name: NAME_OF_TASK
      ceph_orch_host:
        name: HOST_TO_REMOVE
        state: STATE
      retries: NUMBER_OF_RETRIES
      delay: DELAY
      until: CONTINUE_UNTIL
      register: REGISTER_NAME

    - name: NAME_OF_TASK
      ansible.builtin.shell:
        cmd: ceph orch host ls
      register: REGISTER_NAME

    - name: NAME_OF_TASK
      debug:
        msg: "{{ REGISTER_NAME.stdout }}"
Example
[ceph-admin@admin cephadm-ansible]$ sudo vi remove-hosts.yml

---
- name: remove host
  hosts: host01
  become: true
  gather_facts: true
  tasks:
    - name: drain host07
      ceph_orch_host:
        name: host07
        state: drain

    - name: remove host from the cluster
      ceph_orch_host:
        name: host07
        state: absent
      retries: 20
      delay: 1
      until: result is succeeded
      register: result

    - name: list hosts in the cluster
      ansible.builtin.shell:
        cmd: ceph orch host ls
      register: host_list

    - name: print current list of hosts
      debug:
        msg: "{{ host_list.stdout }}"
In this example, the playbook tasks drain all daemons on host07, remove the host from the cluster, and display a current list of hosts.
Run the playbook to remove the host from the cluster:
Syntax
ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME.yml
Example
[ceph-admin@admin cephadm-ansible]$ ansible-playbook -i hosts remove-hosts.yml
Verification
Review the Ansible task output displaying the current list of hosts in the cluster:
Example
TASK [print current hosts] ******************************************************************************************************
Friday 24 June 2022  14:52:40 -0400 (0:00:03.365)       0:02:31.702 ***********
ok: [host01] =>
  msg: |-
    HOST    ADDR           LABELS          STATUS
    host01  10.10.128.68   _admin mon mgr
    host02  10.10.128.69   mon mgr
    host03  10.10.128.70   mon mgr
    host04  10.10.128.71   osd
    host05  10.10.128.72   osd
    host06  10.10.128.73   osd
16.5. Setting configuration options using the ceph_config module
As a storage administrator, you can set or get Red Hat Ceph Storage configuration options using the ceph_config module.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Ansible user with sudo and passwordless SSH access to all nodes in the storage cluster.
- Installation of the cephadm-ansible package on the Ansible administration node.
- The Ansible inventory file contains the cluster and admin hosts.
Procedure
- Log in to the Ansible administration node.
Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node:
Example
[ceph-admin@admin ~]$ cd /usr/share/cephadm-ansible
Create a playbook with configuration changes:
Syntax
sudo vi PLAYBOOK_FILENAME.yml

---
- name: PLAY_NAME
  hosts: ADMIN_HOST
  become: USE_ELEVATED_PRIVILEGES
  gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS
  tasks:
    - name: NAME_OF_TASK
      ceph_config:
        action: GET_OR_SET
        who: DAEMON_TO_SET_CONFIGURATION_TO
        option: CEPH_CONFIGURATION_OPTION
        value: VALUE_OF_PARAMETER_TO_SET

    - name: NAME_OF_TASK
      ceph_config:
        action: GET_OR_SET
        who: DAEMON_TO_SET_CONFIGURATION_TO
        option: CEPH_CONFIGURATION_OPTION
      register: REGISTER_NAME

    - name: NAME_OF_TASK
      debug:
        msg: "MESSAGE_TO_DISPLAY {{ REGISTER_NAME.stdout }}"
Example
[ceph-admin@admin cephadm-ansible]$ sudo vi change_configuration.yml

---
- name: set pool delete
  hosts: host01
  become: true
  gather_facts: false
  tasks:
    - name: set the allow pool delete option
      ceph_config:
        action: set
        who: mon
        option: mon_allow_pool_delete
        value: true

    - name: get the allow pool delete setting
      ceph_config:
        action: get
        who: mon
        option: mon_allow_pool_delete
      register: verify_mon_allow_pool_delete

    - name: print current mon_allow_pool_delete setting
      debug:
        msg: "the value of 'mon_allow_pool_delete' is {{ verify_mon_allow_pool_delete.stdout }}"
In this example, the playbook first sets the mon_allow_pool_delete option to true. The playbook then gets the current mon_allow_pool_delete setting and displays the value in the Ansible output.
Run the playbook:
Syntax
ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME.yml
Example
[ceph-admin@admin cephadm-ansible]$ ansible-playbook -i hosts change_configuration.yml
Verification
Review the output from the playbook tasks.
Example
TASK [print current mon_allow_pool_delete setting] *************************************************************
Wednesday 29 June 2022  13:51:41 -0400 (0:00:05.523)       0:00:17.953 ********
ok: [host01] =>
  msg: the value of 'mon_allow_pool_delete' is true
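The who field is not limited to a daemon type; it accepts the same targets as the ceph config command, including a specific daemon. The following is a minimal sketch, separate from the procedure above; the daemon osd.0 and the osd_memory_target option are placeholder values:

---
- name: example of reading a setting for a single daemon
  hosts: host01
  become: true
  gather_facts: false
  tasks:
    - name: get osd_memory_target for osd.0
      ceph_config:
        action: get
        who: osd.0
        option: osd_memory_target
      register: osd_memory_target_value

    - name: print the value
      debug:
        msg: "osd.0 osd_memory_target is {{ osd_memory_target_value.stdout }}"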
Additional Resources
- See the Red Hat Ceph Storage Configuration Guide for more details on configuration options.
16.6. Applying a service specification using the ceph_orch_apply module
As a storage administrator, you can apply service specifications to your storage cluster using the ceph_orch_apply module in your Ansible playbooks. A service specification is a data structure that describes the service attributes and configuration settings used to deploy a Ceph service. You can use a service specification to deploy Ceph service types such as mon, crash, mds, mgr, osd, rgw, or rbd-mirror.
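For example, a small specification that places three Ceph Monitor daemons on hosts carrying the mon label looks like the following. This is an illustrative sketch only; the count and label are placeholder values, and the fields that are valid under spec depend on the service type:

service_type: mon
placement:
  count: 3
  label: mon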
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Ansible user with sudo and passwordless SSH access to all nodes in the storage cluster.
- Installation of the cephadm-ansible package on the Ansible administration node.
- The Ansible inventory file contains the cluster and admin hosts.
Procedure
- Log in to the Ansible administration node.
Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node:
Example
[ceph-admin@admin ~]$ cd /usr/share/cephadm-ansible
Create a playbook with the service specifications:
Syntax
sudo vi PLAYBOOK_FILENAME.yml

---
- name: PLAY_NAME
  hosts: HOSTS_OR_HOST_GROUPS
  become: USE_ELEVATED_PRIVILEGES
  gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS
  tasks:
    - name: NAME_OF_TASK
      ceph_orch_apply:
        spec: |
          service_type: SERVICE_TYPE
          service_id: UNIQUE_NAME_OF_SERVICE
          placement:
            host_pattern: 'HOST_PATTERN_TO_SELECT_HOSTS'
            label: LABEL
          spec:
            SPECIFICATION_OPTIONS:
Example
[ceph-admin@admin cephadm-ansible]$ sudo vi deploy_osd_service.yml

---
- name: deploy osd service
  hosts: host01
  become: true
  gather_facts: true
  tasks:
    - name: apply osd spec
      ceph_orch_apply:
        spec: |
          service_type: osd
          service_id: osd
          placement:
            host_pattern: '*'
            label: osd
          spec:
            data_devices:
              all: true
In this example, the playbook deploys the Ceph OSD service on all hosts with the label osd.
Run the playbook:
Syntax
ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME.yml
Example
[ceph-admin@admin cephadm-ansible]$ ansible-playbook -i hosts deploy_osd_service.yml
Verification
- Review the output from the playbook tasks.
Additional Resources
- See the Red Hat Ceph Storage Operations Guide for more details on service specification options.
16.7. Managing Ceph daemon states using the ceph_orch_daemon module
As a storage administrator, you can start, stop, and restart Ceph daemons on hosts using the ceph_orch_daemon module in your Ansible playbooks.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Ansible user with sudo and passwordless SSH access to all nodes in the storage cluster.
- Installation of the cephadm-ansible package on the Ansible administration node.
- The Ansible inventory file contains the cluster and admin hosts.
Procedure
- Log in to the Ansible administration node.
Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node:
Example
[ceph-admin@admin ~]$ cd /usr/share/cephadm-ansible
Create a playbook with daemon state changes:
Syntax
sudo vi PLAYBOOK_FILENAME.yml

---
- name: PLAY_NAME
  hosts: ADMIN_HOST
  become: USE_ELEVATED_PRIVILEGES
  gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS
  tasks:
    - name: NAME_OF_TASK
      ceph_orch_daemon:
        state: STATE_OF_SERVICE
        daemon_id: DAEMON_ID
        daemon_type: TYPE_OF_SERVICE
Example
[ceph-admin@admin cephadm-ansible]$ sudo vi restart_services.yml

---
- name: start and stop services
  hosts: host01
  become: true
  gather_facts: false
  tasks:
    - name: start osd.0
      ceph_orch_daemon:
        state: started
        daemon_id: 0
        daemon_type: osd

    - name: stop mon.host02
      ceph_orch_daemon:
        state: stopped
        daemon_id: host02
        daemon_type: mon
In this example, the playbook starts the OSD with an ID of 0 and stops a Ceph Monitor with an ID of host02.
Run the playbook:
Syntax
ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME.yml
Example
[ceph-admin@admin cephadm-ansible]$ ansible-playbook -i hosts restart_services.yml
Verification
- Review the output from the playbook tasks.
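The example above covers the started and stopped states; restarting a daemon follows the same pattern with state: restarted. The following is a minimal sketch, separate from the procedure above; it assumes an OSD with ID 0 exists, and you can confirm the exact daemon ID and type with ceph orch ps:

---
- name: example of restarting a daemon
  hosts: host01
  become: true
  gather_facts: false
  tasks:
    - name: restart osd.0
      ceph_orch_daemon:
        state: restarted
        daemon_id: 0
        daemon_type: osd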