Chapter 16. Managing a Red Hat Ceph Storage cluster using cephadm-ansible modules


As a storage administrator, you can use cephadm-ansible modules in Ansible playbooks to administer your Red Hat Ceph Storage cluster. The cephadm-ansible package provides several modules that wrap cephadm calls to let you write your own unique Ansible playbooks to administer your cluster.

Note

At this time, cephadm-ansible modules only support the most important tasks. Any operation not covered by cephadm-ansible modules must be completed using either the command or shell Ansible modules in your playbooks.
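
For example, a task that wraps a ceph orch command with the shell module might look like the following minimal sketch (the task name and registered variable are illustrative):

    - name: list cluster services
      ansible.builtin.shell:
        cmd: ceph orch ls
      register: service_list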

16.1. The cephadm-ansible modules

The cephadm-ansible modules are a collection of modules that simplify writing Ansible playbooks by providing a wrapper around cephadm and ceph orch commands. You can use one or more of these modules to write your own unique Ansible playbooks to administer your cluster.

The cephadm-ansible package includes the following modules:

  • cephadm_bootstrap
  • ceph_orch_host
  • ceph_config
  • ceph_orch_apply
  • ceph_orch_daemon
  • cephadm_registry_login

16.2. The cephadm-ansible modules options

The following tables list the available options for the cephadm-ansible modules. Options listed as required must be set when you use the modules in your Ansible playbooks. Options listed with a default value are set to that value automatically, so you do not need to specify them in your playbook unless you want a different value. For example, for the cephadm_bootstrap module, the Ceph Dashboard is installed unless you set dashboard: false.

Table 16.1. Available options for the cephadm_bootstrap module

cephadm_bootstrap   | Description                                                                                   | Required | Default
--------------------|-----------------------------------------------------------------------------------------------|----------|--------
mon_ip              | Ceph Monitor IP address.                                                                      | true     |
image               | Ceph container image.                                                                         | false    |
docker              | Use docker instead of podman.                                                                 | false    |
fsid                | Define the Ceph FSID.                                                                         | false    |
pull                | Pull the Ceph container image.                                                                | false    | true
dashboard           | Deploy the Ceph Dashboard.                                                                    | false    | true
dashboard_user      | Specify the Ceph Dashboard user.                                                              | false    |
dashboard_password  | Ceph Dashboard password.                                                                      | false    |
monitoring          | Deploy the monitoring stack.                                                                  | false    | true
firewalld           | Manage firewall rules with firewalld.                                                         | false    | true
allow_overwrite     | Allow overwrite of existing --output-config, --output-keyring, or --output-pub-ssh-key files. | false    | false
registry_url        | URL for custom registry.                                                                      | false    |
registry_username   | Username for custom registry.                                                                 | false    |
registry_password   | Password for custom registry.                                                                 | false    |
registry_json       | JSON file with custom registry login information.                                             | false    |
ssh_user            | SSH user to use for cephadm ssh to hosts.                                                     | false    |
ssh_config          | SSH config file path for cephadm SSH client.                                                  | false    |
allow_fqdn_hostname | Allow hostname that is a fully-qualified domain name (FQDN).                                  | false    | false
cluster_network     | Subnet to use for cluster replication, recovery, and heartbeats.                              | false    |
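
For example, a minimal bootstrap task that accepts the defaults but skips the Ceph Dashboard deployment might look like the following sketch (the monitor IP address is illustrative):

    - name: bootstrap the cluster without the dashboard
      cephadm_bootstrap:
        mon_ip: 10.10.128.68
        dashboard: false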

 
Table 16.2. Available options for the ceph_orch_host module

ceph_orch_host  | Description                                                                                                                                                                                                             | Required                    | Default
----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------|--------
fsid            | The FSID of the Ceph cluster to interact with.                                                                                                                                                                          | false                       |
image           | The Ceph container image to use.                                                                                                                                                                                        | false                       |
name            | Name of the host to add, remove, or update.                                                                                                                                                                             | true                        |
address         | IP address of the host.                                                                                                                                                                                                 | true when state is present. |
set_admin_label | Set the _admin label on the specified host.                                                                                                                                                                             | false                       | false
labels          | The list of labels to apply to the host.                                                                                                                                                                                | false                       | []
state           | If set to present, it ensures the host specified in name is present. If set to absent, it removes the host specified in name. If set to drain, it schedules the removal of all daemons from the host specified in name. | false                       | present
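
For example, a task that adds a labeled host might look like the following sketch (the host name and IP address are illustrative):

    - name: add a host with labels
      ceph_orch_host:
        name: host07
        address: 10.10.128.74
        labels:
          - mon
          - mgr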

Table 16.3. Available options for the ceph_config module

ceph_config | Description                                              | Required               | Default
------------|----------------------------------------------------------|------------------------|--------
fsid        | The FSID of the Ceph cluster to interact with.           | false                  |
image       | The Ceph container image to use.                         | false                  |
action      | Whether to set or get the parameter specified in option. | false                  | set
who         | Which daemon to set the configuration to.                | true                   |
option      | Name of the parameter to set or get.                     | true                   |
value       | Value of the parameter to set.                           | true if action is set  |
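
For example, a task that retrieves a configuration value might look like the following sketch (the option name and registered variable are illustrative):

    - name: get the osd_memory_target setting
      ceph_config:
        action: get
        who: osd
        option: osd_memory_target
      register: osd_memory_target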

 
Table 16.4. Available options for the ceph_orch_apply module

ceph_orch_apply | Description                                    | Required
----------------|------------------------------------------------|---------
fsid            | The FSID of the Ceph cluster to interact with. | false
image           | The Ceph container image to use.               | false
spec            | The service specification to apply.            | true
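
For example, a task that applies a placement specification for Ceph Monitors might look like the following sketch (the placement label is illustrative):

    - name: apply a mon placement spec
      ceph_orch_apply:
        spec: |
          service_type: mon
          placement:
            label: mon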

Table 16.5. Available options for the ceph_orch_daemon module

ceph_orch_daemon | Description                                                                                                                                                              | Required
-----------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------
fsid             | The FSID of the Ceph cluster to interact with.                                                                                                                           | false
image            | The Ceph container image to use.                                                                                                                                         | false
state            | The desired state of the service specified in name: started ensures the service is started, stopped ensures the service is stopped, and restarted restarts the service.  | true
daemon_id        | The ID of the service.                                                                                                                                                   | true
daemon_type      | The type of service.                                                                                                                                                     | true
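
For example, a task that restarts a single OSD daemon might look like the following sketch (the daemon ID is illustrative):

    - name: restart osd.0
      ceph_orch_daemon:
        state: restarted
        daemon_id: 0
        daemon_type: osd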

Table 16.6. Available options for the cephadm_registry_login module

cephadm_registry_login | Description                                                                                                                            | Required                   | Default
-----------------------|------------------------------------------------------------------------------------------------------------------------------------------|----------------------------|--------
state                  | Log in to or log out of a registry.                                                                                                    | false                      | login
docker                 | Use docker instead of podman.                                                                                                          | false                      |
registry_url           | The URL for custom registry.                                                                                                           | false                      |
registry_username      | Username for custom registry.                                                                                                          | true when state is login.  |
registry_password      | Password for custom registry.                                                                                                          | true when state is login.  |
registry_json          | The path to a JSON file. This file must be present on remote hosts prior to running this task. This option is currently not supported. |                            |
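
For example, a task that logs out of a registry might look like the following sketch (whether registry_url is needed on logout depends on your setup):

    - name: log out of the registry
      cephadm_registry_login:
        state: logout
        registry_url: registry.redhat.io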

  

16.3. Bootstrapping a storage cluster using the cephadm_bootstrap and cephadm_registry_login modules

As a storage administrator, you can bootstrap a storage cluster using Ansible by using the cephadm_bootstrap and cephadm_registry_login modules in your Ansible playbook.

Prerequisites

  • An IP address for the first Ceph Monitor container, which is also the IP address for the first node in the storage cluster.
  • Login access to registry.redhat.io.
  • A minimum of 10 GB of free space for /var/lib/containers/.
  • Red Hat Enterprise Linux 8.10 or 9.4 or later with ansible-core bundled into AppStream.
  • Installation of the cephadm-ansible package on the Ansible administration node.
  • Passwordless SSH is set up on all hosts in the storage cluster.
  • Hosts are registered with CDN.

Procedure

  1. Log in to the Ansible administration node.
  2. Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node:

    Example

    [ceph-admin@admin ~]$ cd /usr/share/cephadm-ansible

  3. Create the hosts file and add the hosts, their labels, and the monitor IP address of the first host in the storage cluster:

    Syntax

    sudo vi INVENTORY_FILE
    
    HOST1 labels="['LABEL1', 'LABEL2']"
    HOST2 labels="['LABEL1', 'LABEL2']"
    HOST3 labels="['LABEL1']"
    
    [admin]
    ADMIN_HOST monitor_address=MONITOR_IP_ADDRESS labels="['ADMIN_LABEL', 'LABEL1', 'LABEL2']"

    Example

    [ceph-admin@admin cephadm-ansible]$ sudo vi hosts
    
    host02 labels="['mon', 'mgr']"
    host03 labels="['mon', 'mgr']"
    host04 labels="['osd']"
    host05 labels="['osd']"
    host06 labels="['osd']"
    
    [admin]
    host01 monitor_address=10.10.128.68 labels="['_admin', 'mon', 'mgr']"

  4. Run the preflight playbook:

    Syntax

    ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars "ceph_origin=rhcs"

    Example

    [ceph-admin@admin cephadm-ansible]$ ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=rhcs"

  5. Create a playbook to bootstrap your cluster:

    Syntax

    sudo vi PLAYBOOK_FILENAME.yml
    
    ---
    - name: NAME_OF_PLAY
      hosts: BOOTSTRAP_HOST
      become: USE_ELEVATED_PRIVILEGES
      gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS
      tasks:
        - name: NAME_OF_TASK
          cephadm_registry_login:
            state: STATE
            registry_url: REGISTRY_URL
            registry_username: REGISTRY_USER_NAME
            registry_password: REGISTRY_PASSWORD
    
        - name: NAME_OF_TASK
          cephadm_bootstrap:
            mon_ip: "{{ monitor_address }}"
            dashboard_user: DASHBOARD_USER
            dashboard_password: DASHBOARD_PASSWORD
            allow_fqdn_hostname: ALLOW_FQDN_HOSTNAME
            cluster_network: NETWORK_CIDR

    Example

    [ceph-admin@admin cephadm-ansible]$ sudo vi bootstrap.yml
    
    ---
    - name: bootstrap the cluster
      hosts: host01
      become: true
      gather_facts: false
      tasks:
        - name: login to registry
          cephadm_registry_login:
            state: login
            registry_url: registry.redhat.io
            registry_username: user1
            registry_password: mypassword1
    
        - name: bootstrap initial cluster
          cephadm_bootstrap:
            mon_ip: "{{ monitor_address }}"
            dashboard_user: mydashboarduser
            dashboard_password: mydashboardpassword
            allow_fqdn_hostname: true
            cluster_network: 10.10.128.0/28

  6. Run the playbook:

    Syntax

    ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME.yml -vvv

    Example

    [ceph-admin@admin cephadm-ansible]$ ansible-playbook -i hosts bootstrap.yml -vvv

Verification

  • Review the Ansible output after running the playbook.

16.4. Adding or removing hosts using the ceph_orch_host module

As a storage administrator, you can add and remove hosts in your storage cluster by using the ceph_orch_host module in your Ansible playbook.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Register the nodes to the CDN and attach subscriptions.
  • Ansible user with sudo and passwordless SSH access to all nodes in the storage cluster.
  • Installation of the cephadm-ansible package on the Ansible administration node.
  • New hosts have the storage cluster’s public SSH key. For more information about copying the storage cluster’s public SSH keys to new hosts, see Adding hosts in the Red Hat Ceph Storage Installation Guide.
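
    For example, one way to copy the storage cluster's public SSH key to a new host is with the ssh-copy-id command, shown here as a sketch (the host name is illustrative):

    ssh-copy-id -f -i /etc/ceph/ceph.pub root@host07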

Procedure

  1. Use the following procedure to add new hosts to the cluster:

    1. Log in to the Ansible administration node.
    2. Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node:

      Example

      [ceph-admin@admin ~]$ cd /usr/share/cephadm-ansible

    3. Add the new hosts and labels to the Ansible inventory file.

      Syntax

      sudo vi INVENTORY_FILE
      
      NEW_HOST1 labels="['LABEL1', 'LABEL2']"
      NEW_HOST2 labels="['LABEL1', 'LABEL2']"
      NEW_HOST3 labels="['LABEL1']"
      
      [admin]
      ADMIN_HOST monitor_address=MONITOR_IP_ADDRESS labels="['ADMIN_LABEL', 'LABEL1', 'LABEL2']"

      Example

      [ceph-admin@admin cephadm-ansible]$ sudo vi hosts
      
      host02 labels="['mon', 'mgr']"
      host03 labels="['mon', 'mgr']"
      host04 labels="['osd']"
      host05 labels="['osd']"
      host06 labels="['osd']"
      
      [admin]
      host01 monitor_address=10.10.128.68 labels="['_admin', 'mon', 'mgr']"

      Note

      If you have previously added the new hosts to the Ansible inventory file and run the preflight playbook on the hosts, skip to step 5.

    4. Run the preflight playbook with the --limit option:

      Syntax

      ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars "ceph_origin=rhcs" --limit NEWHOST

      Example

      [ceph-admin@admin cephadm-ansible]$ ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=rhcs" --limit host02

      The preflight playbook installs podman, lvm2, chronyd, and cephadm on the new host. After installation is complete, cephadm resides in the /usr/sbin/ directory.
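
      For example, you can confirm the installation with an ad hoc Ansible command, as in the following sketch (the host name is illustrative):

      ansible -i hosts host02 -b -m command -a 'cephadm version'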

    5. Create a playbook to add the new hosts to the cluster:

      Syntax

      sudo vi PLAYBOOK_FILENAME.yml
      
      ---
      - name: PLAY_NAME
        hosts: HOSTS_OR_HOST_GROUPS
        become: USE_ELEVATED_PRIVILEGES
        gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS
        tasks:
          - name: NAME_OF_TASK
            ceph_orch_host:
              name: "{{ ansible_facts['hostname'] }}"
              address: "{{ ansible_facts['default_ipv4']['address'] }}"
              labels: "{{ labels }}"
            delegate_to: HOST_TO_DELEGATE_TASK_TO
      
          - name: NAME_OF_TASK
            when: inventory_hostname in groups['admin']
            ansible.builtin.shell:
              cmd: CEPH_COMMAND_TO_RUN
            register: REGISTER_NAME
      
          - name: NAME_OF_TASK
            when: inventory_hostname in groups['admin']
            debug:
              msg: "{{ REGISTER_NAME.stdout }}"

      Note

      By default, Ansible executes all tasks on the host that matches the hosts line of your playbook. The ceph orch commands must run on the host that contains the admin keyring and the Ceph configuration file. Use the delegate_to keyword to specify the admin host in your cluster.

      Example

      [ceph-admin@admin cephadm-ansible]$ sudo vi add-hosts.yml
      
      ---
      - name: add additional hosts to the cluster
        hosts: all
        become: true
        gather_facts: true
        tasks:
          - name: add hosts to the cluster
            ceph_orch_host:
              name: "{{ ansible_facts['hostname'] }}"
              address: "{{ ansible_facts['default_ipv4']['address'] }}"
              labels: "{{ labels }}"
            delegate_to: host01
      
          - name: list hosts in the cluster
            when: inventory_hostname in groups['admin']
            ansible.builtin.shell:
              cmd: ceph orch host ls
            register: host_list
      
          - name: print current list of hosts
            when: inventory_hostname in groups['admin']
            debug:
              msg: "{{ host_list.stdout }}"

      In this example, the playbook adds the new hosts to the cluster and displays a current list of hosts.

    6. Run the playbook to add additional hosts to the cluster:

      Syntax

      ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME.yml

      Example

      [ceph-admin@admin cephadm-ansible]$ ansible-playbook -i hosts add-hosts.yml

  2. Use the following procedure to remove hosts from the cluster:

    1. Log in to the Ansible administration node.
    2. Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node:

      Example

      [ceph-admin@admin ~]$ cd /usr/share/cephadm-ansible

    3. Create a playbook to remove a host or hosts from the cluster:

      Syntax

      sudo vi PLAYBOOK_FILENAME.yml
      
      ---
      - name: NAME_OF_PLAY
        hosts: ADMIN_HOST
        become: USE_ELEVATED_PRIVILEGES
        gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS
        tasks:
          - name: NAME_OF_TASK
            ceph_orch_host:
              name: HOST_TO_REMOVE
              state: STATE
      
          - name: NAME_OF_TASK
            ceph_orch_host:
              name: HOST_TO_REMOVE
              state: STATE
            retries: NUMBER_OF_RETRIES
            delay: DELAY
            until: CONTINUE_UNTIL
            register: REGISTER_NAME
      
          - name: NAME_OF_TASK
            ansible.builtin.shell:
              cmd: ceph orch host ls
            register: REGISTER_NAME
      
          - name: NAME_OF_TASK
            debug:
              msg: "{{ REGISTER_NAME.stdout }}"

      Example

      [ceph-admin@admin cephadm-ansible]$ sudo vi remove-hosts.yml
      
      ---
      - name: remove host
        hosts: host01
        become: true
        gather_facts: true
        tasks:
          - name: drain host07
            ceph_orch_host:
              name: host07
              state: drain
      
          - name: remove host from the cluster
            ceph_orch_host:
              name: host07
              state: absent
            retries: 20
            delay: 1
            until: result is succeeded
            register: result
      
          - name: list hosts in the cluster
            ansible.builtin.shell:
              cmd: ceph orch host ls
            register: host_list

          - name: print current list of hosts
            debug:
              msg: "{{ host_list.stdout }}"

      In this example, the playbook drains all daemons on host07, removes the host from the cluster, and displays a current list of hosts.

    4. Run the playbook to remove the host from the cluster:

      Syntax

      ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME.yml

      Example

      [ceph-admin@admin cephadm-ansible]$ ansible-playbook -i hosts remove-hosts.yml

Verification

  • Review the Ansible task output displaying the current list of hosts in the cluster:

    Example

    TASK [print current hosts] ******************************************************************************************************
    Friday 24 June 2022  14:52:40 -0400 (0:00:03.365)       0:02:31.702 ***********
    ok: [host01] =>
      msg: |-
        HOST    ADDR           LABELS          STATUS
        host01  10.10.128.68   _admin mon mgr
        host02  10.10.128.69   mon mgr
        host03  10.10.128.70   mon mgr
        host04  10.10.128.71   osd
        host05  10.10.128.72   osd
        host06  10.10.128.73   osd

16.5. Setting configuration options using the ceph_config module

As a storage administrator, you can set or get Red Hat Ceph Storage configuration options using the ceph_config module.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Ansible user with sudo and passwordless SSH access to all nodes in the storage cluster.
  • Installation of the cephadm-ansible package on the Ansible administration node.
  • The Ansible inventory file contains the cluster and admin hosts.

Procedure

  1. Log in to the Ansible administration node.
  2. Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node:

    Example

    [ceph-admin@admin ~]$ cd /usr/share/cephadm-ansible

  3. Create a playbook with configuration changes:

    Syntax

    sudo vi PLAYBOOK_FILENAME.yml
    
    ---
    - name: PLAY_NAME
      hosts: ADMIN_HOST
      become: USE_ELEVATED_PRIVILEGES
      gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS
      tasks:
        - name: NAME_OF_TASK
          ceph_config:
            action: GET_OR_SET
            who: DAEMON_TO_SET_CONFIGURATION_TO
            option: CEPH_CONFIGURATION_OPTION
            value: VALUE_OF_PARAMETER_TO_SET
    
        - name: NAME_OF_TASK
          ceph_config:
            action: GET_OR_SET
            who: DAEMON_TO_SET_CONFIGURATION_TO
            option: CEPH_CONFIGURATION_OPTION
          register: REGISTER_NAME
    
        - name: NAME_OF_TASK
          debug:
            msg: "MESSAGE_TO_DISPLAY {{ REGISTER_NAME.stdout }}"

    Example

    [ceph-admin@admin cephadm-ansible]$ sudo vi change_configuration.yml
    
    ---
    - name: set pool delete
      hosts: host01
      become: true
      gather_facts: false
      tasks:
        - name: set the allow pool delete option
          ceph_config:
            action: set
            who: mon
            option: mon_allow_pool_delete
            value: true
    
        - name: get the allow pool delete setting
          ceph_config:
            action: get
            who: mon
            option: mon_allow_pool_delete
          register: verify_mon_allow_pool_delete
    
        - name: print current mon_allow_pool_delete setting
          debug:
            msg: "the value of 'mon_allow_pool_delete' is {{ verify_mon_allow_pool_delete.stdout }}"

    In this example, the playbook first sets the mon_allow_pool_delete option to true. The playbook then gets the current mon_allow_pool_delete setting and displays the value in the Ansible output.

  4. Run the playbook:

    Syntax

    ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME.yml

    Example

    [ceph-admin@admin cephadm-ansible]$ ansible-playbook -i hosts change_configuration.yml

Verification

  • Review the output from the playbook tasks.

    Example

    TASK [print current mon_allow_pool_delete setting] *************************************************************
    Wednesday 29 June 2022  13:51:41 -0400 (0:00:05.523)       0:00:17.953 ********
    ok: [host01] =>
      msg: the value of 'mon_allow_pool_delete' is true

16.6. Applying a service specification using the ceph_orch_apply module

As a storage administrator, you can apply service specifications to your storage cluster using the ceph_orch_apply module in your Ansible playbooks. A service specification is a data structure that specifies the service attributes and configuration settings used to deploy a Ceph service. You can use a service specification to deploy Ceph service types like mon, crash, mds, mgr, osd, rbd, or rbd-mirror.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Ansible user with sudo and passwordless SSH access to all nodes in the storage cluster.
  • Installation of the cephadm-ansible package on the Ansible administration node.
  • The Ansible inventory file contains the cluster and admin hosts.

Procedure

  1. Log in to the Ansible administration node.
  2. Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node:

    Example

    [ceph-admin@admin ~]$ cd /usr/share/cephadm-ansible

  3. Create a playbook with the service specifications:

    Syntax

    sudo vi PLAYBOOK_FILENAME.yml
    
    ---
    - name: PLAY_NAME
      hosts: HOSTS_OR_HOST_GROUPS
      become: USE_ELEVATED_PRIVILEGES
      gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS
      tasks:
        - name: NAME_OF_TASK
          ceph_orch_apply:
            spec: |
              service_type: SERVICE_TYPE
              service_id: UNIQUE_NAME_OF_SERVICE
              placement:
                host_pattern: 'HOST_PATTERN_TO_SELECT_HOSTS'
                label: LABEL
              spec:
                SPECIFICATION_OPTIONS:

    Example

    [ceph-admin@admin cephadm-ansible]$ sudo vi deploy_osd_service.yml
    
    ---
    - name: deploy osd service
      hosts: host01
      become: true
      gather_facts: true
      tasks:
        - name: apply osd spec
          ceph_orch_apply:
            spec: |
              service_type: osd
              service_id: osd
              placement:
                host_pattern: '*'
                label: osd
              spec:
                data_devices:
                  all: true

    In this example, the playbook deploys the Ceph OSD service on all hosts with the label osd.

  4. Run the playbook:

    Syntax

    ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME.yml

    Example

    [ceph-admin@admin cephadm-ansible]$ ansible-playbook -i hosts deploy_osd_service.yml

Verification

  • Review the output from the playbook tasks.

16.7. Managing Ceph daemon states using the ceph_orch_daemon module

As a storage administrator, you can start, stop, and restart Ceph daemons on hosts using the ceph_orch_daemon module in your Ansible playbooks.

Prerequisites

  • A running Red Hat Ceph Storage cluster.
  • Ansible user with sudo and passwordless SSH access to all nodes in the storage cluster.
  • Installation of the cephadm-ansible package on the Ansible administration node.
  • The Ansible inventory file contains the cluster and admin hosts.

Procedure

  1. Log in to the Ansible administration node.
  2. Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node:

    Example

    [ceph-admin@admin ~]$ cd /usr/share/cephadm-ansible

  3. Create a playbook with daemon state changes:

    Syntax

    sudo vi PLAYBOOK_FILENAME.yml
    
    ---
    - name: PLAY_NAME
      hosts: ADMIN_HOST
      become: USE_ELEVATED_PRIVILEGES
      gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS
      tasks:
        - name: NAME_OF_TASK
          ceph_orch_daemon:
            state: STATE_OF_SERVICE
            daemon_id: DAEMON_ID
            daemon_type: TYPE_OF_SERVICE

    Example

    [ceph-admin@admin cephadm-ansible]$ sudo vi restart_services.yml
    
    ---
    - name: start and stop services
      hosts: host01
      become: true
      gather_facts: false
      tasks:
        - name: start osd.0
          ceph_orch_daemon:
            state: started
            daemon_id: 0
            daemon_type: osd
    
        - name: stop mon.host02
          ceph_orch_daemon:
            state: stopped
            daemon_id: host02
            daemon_type: mon

    In this example, the playbook starts the OSD with an ID of 0 and stops a Ceph Monitor with an ID of host02.

  4. Run the playbook:

    Syntax

    ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME.yml

    Example

    [ceph-admin@admin cephadm-ansible]$ ansible-playbook -i hosts restart_services.yml

Verification

  • Review the output from the playbook tasks.