Chapter 3. Red Hat Ceph Storage installation
As a storage administrator, you can use the cephadm utility to deploy new Red Hat Ceph Storage clusters.
The cephadm utility manages the entire life cycle of a Ceph cluster. Installation and management tasks comprise two types of operations:
- Day One operations involve installing and bootstrapping a bare-minimum, containerized Ceph storage cluster, running on a single node. Day One also includes deploying the Monitor and Manager daemons and adding Ceph OSDs.
- Day Two operations use the Ceph orchestration interface, ceph orch, or the Red Hat Ceph Storage Dashboard to expand the storage cluster by adding other Ceph services.
Prerequisites
- At least one running virtual machine (VM) or bare-metal server with an active internet connection.
- Red Hat Enterprise Linux 9.0 or later with ansible-core bundled into AppStream.
- A valid Red Hat subscription with the appropriate entitlements.
- Root-level access to all nodes.
- An active Red Hat Network (RHN) or service account to access the Red Hat Registry.
- Remove any problematic iptables configuration so that a restart of the iptables service does not cause issues for the cluster. For an example, see the Verifying firewall rules are configured for default Ceph ports section of the Red Hat Ceph Storage Configuration Guide.
- For the latest supported Red Hat Enterprise Linux versions for bootstrap nodes, see the Red Hat Ceph Storage Compatibility Guide.
3.1. The cephadm utility
The cephadm utility deploys and manages a Ceph storage cluster. It is tightly integrated with both the command-line interface (CLI) and the Red Hat Ceph Storage Dashboard web interface, so that you can manage storage clusters from either environment. cephadm uses SSH to connect to hosts from the manager daemon to add, remove, or update Ceph daemon containers. It does not rely on external configuration or orchestration tools such as Ansible or Rook.
The cephadm utility is available after running the preflight playbook on a host.
The cephadm utility consists of two main components:
- The cephadm shell.
- The cephadm orchestrator.
The cephadm shell
The cephadm shell launches a bash shell within a container. This enables you to perform “Day One” cluster setup tasks, such as installation and bootstrapping, and to invoke ceph commands.
There are two ways to invoke the cephadm shell:
- Enter cephadm shell at the system prompt:

Example

[root@host01 ~]# cephadm shell
[ceph: root@host01 /]# ceph -s

- At the system prompt, type cephadm shell and the command you want to execute:

Example

[root@host01 ~]# cephadm shell ceph -s
If the node contains configuration and keyring files in /etc/ceph/, the container environment uses the values in those files as defaults for the cephadm shell. However, if you execute the cephadm shell on a Ceph Monitor node, the cephadm shell inherits its default configuration from the Ceph Monitor container, instead of using the default configuration.
The cephadm orchestrator
The cephadm orchestrator enables you to perform “Day Two” Ceph functions, such as expanding the storage cluster and provisioning Ceph daemons and services. You can use the cephadm orchestrator through either the command-line interface (CLI) or the web-based Red Hat Ceph Storage Dashboard. Orchestrator commands take the form ceph orch.
The cephadm script interacts with the Ceph orchestration module used by the Ceph Manager.
3.2. How cephadm works
The cephadm command manages the full lifecycle of a Red Hat Ceph Storage cluster. The cephadm command can perform the following operations:
- Bootstrap a new Red Hat Ceph Storage cluster.
- Launch a containerized shell that works with the Red Hat Ceph Storage command-line interface (CLI).
- Aid in debugging containerized daemons.
The cephadm command uses SSH to communicate with the nodes in the storage cluster. This allows you to add, remove, or update Red Hat Ceph Storage containers without using external tools. You can generate the SSH key pair during the bootstrapping process, or use your own SSH key.
The cephadm bootstrapping process creates a small storage cluster on a single node, consisting of one Ceph Monitor and one Ceph Manager, as well as any required dependencies. You then use the orchestrator CLI or the Red Hat Ceph Storage Dashboard to expand the storage cluster to include nodes, and to provision all of the Red Hat Ceph Storage daemons and services. You can perform management functions through the CLI or from the Red Hat Ceph Storage Dashboard web interface.
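If you prefer to supply your own SSH key rather than have bootstrap generate one, the preparation can be sketched as follows. This is a hedged example: the key path is illustrative, and the bootstrap options shown in the comment are only meaningful on a real cluster node, so they are not executed here.

```shell
# Hedged sketch: pre-generate an SSH key pair instead of letting
# bootstrap create one. The key path is illustrative.
keydir=$(mktemp -d)
ssh-keygen -t ed25519 -N '' -f "$keydir/ceph" -q

# A real bootstrap run (not executed here) could then reuse the pair,
# for example with the --ssh-private-key and --ssh-public-key options.
ls "$keydir"
```

The same pair can later be distributed to additional hosts so that the manager daemon can reach them over SSH.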
3.3. The cephadm-ansible playbooks
The cephadm-ansible package is a collection of Ansible playbooks to simplify workflows that are not covered by cephadm. After installation, the playbooks are located in /usr/share/cephadm-ansible/.
The cephadm-ansible package includes the following playbooks:
- cephadm-preflight.yml
- cephadm-clients.yml
- cephadm-purge-cluster.yml
The cephadm-preflight playbook
Use the cephadm-preflight playbook to initially set up hosts before bootstrapping the storage cluster and before adding new nodes or clients to your storage cluster. This playbook configures the Ceph repository and installs some prerequisites such as podman, lvm2, chrony, and cephadm.
The cephadm-clients playbook
Use the cephadm-clients playbook to set up client hosts. This playbook handles the distribution of configuration and keyring files to a group of Ceph clients.
The cephadm-purge-cluster playbook
Use the cephadm-purge-cluster playbook to remove a Ceph cluster. This playbook purges a Ceph cluster managed with cephadm.
3.4. Registering the Red Hat Ceph Storage nodes to the CDN and attaching subscriptions
When using Red Hat Enterprise Linux 8.x, the admin node must be running a supported Red Hat Enterprise Linux 9.x version for your Red Hat Ceph Storage release.
For full compatibility information, see the Red Hat Ceph Storage Compatibility Guide.
Prerequisites
- At least one running virtual machine (VM) or bare-metal server with an active internet connection.
- Red Hat Enterprise Linux 9.0 or later with ansible-core bundled into AppStream.
- A valid Red Hat subscription with the appropriate entitlements.
- Root-level access to all nodes.
Procedure
Register the node, and when prompted, enter your Red Hat Customer Portal credentials:
Syntax

subscription-manager register

Pull the latest subscription data from the CDN:

Syntax

subscription-manager refresh

List all available subscriptions for Red Hat Ceph Storage:

Syntax

subscription-manager list --available --matches 'Red Hat Ceph Storage'

- Identify the appropriate subscription and retrieve its Pool ID.

Attach the Pool ID to gain access to the software entitlements. Use the Pool ID you identified in the previous step.

Syntax

subscription-manager attach --pool=POOL_ID

Disable the default software repositories, and then enable the BaseOS and AppStream repositories on the respective version of Red Hat Enterprise Linux:

Red Hat Enterprise Linux 9

subscription-manager repos --disable=*
subscription-manager repos --enable=rhel-9-for-x86_64-baseos-rpms
subscription-manager repos --enable=rhel-9-for-x86_64-appstream-rpms

Update the system to receive the latest packages for Red Hat Enterprise Linux:

Syntax

dnf update

- Subscribe to Red Hat Ceph Storage 6 content. Follow the instructions in How to Register Ceph with Red Hat Satellite 6.

Enable the ceph-tools repository:

Red Hat Enterprise Linux 9

subscription-manager repos --enable=rhceph-6-tools-for-rhel-9-x86_64-rpms

- Repeat the above steps on all nodes you are adding to the cluster.

Install cephadm-ansible:

Syntax

dnf install cephadm-ansible
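Because every node needs the same registration and repository setup, you may want to script the repetition over SSH. A minimal sketch, assuming root SSH access to each node; node names are illustrative, and the actual subscription-manager calls are left as comments so the loop is safe to dry-run:

```shell
# Hedged sketch: repeat the registration steps on each node over SSH.
# Node names are illustrative; the real commands are shown as comments.
nodes="host02 host03 host04"
for node in $nodes; do
    # ssh root@"$node" subscription-manager register
    # ssh root@"$node" subscription-manager refresh
    # ssh root@"$node" subscription-manager repos --enable=rhceph-6-tools-for-rhel-9-x86_64-rpms
    echo "would register: $node"
done
```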
3.5. Configuring Ansible inventory location
You can configure inventory location files for the cephadm-ansible staging and production environments. The Ansible inventory hosts file contains all the hosts that are part of the storage cluster. You can list nodes individually in the inventory hosts file, or you can create groups such as [mons], [osds], and [rgws] to clarify your inventory and to ease the use of the --limit option to target a group or node when running a playbook.
If deploying clients, client nodes must be defined in a dedicated [clients] group.
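The grouping described above can be sketched as follows. Host and group names are illustrative, and the file is written to a scratch directory rather than the real inventory location:

```shell
# Hedged sketch: a grouped inventory file in a scratch directory
# (host and group names are illustrative).
invdir=$(mktemp -d)
cat > "$invdir/hosts" <<'EOF'
[mons]
host01
host02

[osds]
host03

[clients]
client01

[admin]
host01
EOF

# Count the group headers to confirm the structure.
grep -c '^\[' "$invdir/hosts"   # prints 4
```

With a file like this, `ansible-playbook -i hosts PLAYBOOK.yml --limit osds` targets only the nodes in the [osds] group.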
Prerequisites
- An Ansible administration node.
- Root-level access to the Ansible administration node.
- The cephadm-ansible package is installed on the node.
Procedure
Navigate to the /usr/share/cephadm-ansible/ directory:

[root@admin ~]# cd /usr/share/cephadm-ansible

Optional: Create subdirectories for staging and production:

[root@admin cephadm-ansible]# mkdir -p inventory/staging inventory/production

Optional: Edit the ansible.cfg file and add the following line to assign a default inventory location:

[defaults]
inventory = ./inventory/staging

Optional: Create an inventory hosts file for each environment:

[root@admin cephadm-ansible]# touch inventory/staging/hosts
[root@admin cephadm-ansible]# touch inventory/production/hosts

Open and edit each hosts file and add the nodes and the [admin] group:

Syntax

NODE_NAME_1
NODE_NAME_2

[admin]
ADMIN_NODE_NAME_1

- Replace NODE_NAME_1 and NODE_NAME_2 with the Ceph nodes such as monitors, OSDs, MDSs, and gateway nodes.
- Replace ADMIN_NODE_NAME_1 with the name of the node where the admin keyring is stored.
Example

host02
host03
host04

[admin]
host01

Note: If you set the inventory location in the ansible.cfg file to staging, you need to run the playbooks in the staging environment as follows:

Syntax

ansible-playbook -i inventory/staging/hosts PLAYBOOK.yml

To run the playbooks in the production environment:

Syntax

ansible-playbook -i inventory/production/hosts PLAYBOOK.yml
3.6. Enabling SSH login as root user on Red Hat Enterprise Linux 9
Red Hat Enterprise Linux 9 does not support SSH login as a root user even if the PermitRootLogin parameter is set to yes in the /etc/ssh/sshd_config file. You get the following error:
Example

[root@host01 ~]# ssh root@myhostname
root@myhostname password:
Permission denied, please try again.
You can use one of the following methods to enable login as a root user:
- Use "Allow root SSH login with password" flag while setting the root password during installation of Red Hat Enterprise Linux 9.
- Manually set the PermitRootLogin parameter after Red Hat Enterprise Linux 9 installation.
This section describes manual setting of the PermitRootLogin parameter.
Prerequisites
- Root-level access to all nodes.
Procedure
Set PermitRootLogin to yes in a drop-in file under the /etc/ssh/sshd_config.d/ directory:

Example

[root@admin ~]# echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config.d/01-permitrootlogin.conf

Restart the SSH service:

Example

[root@admin ~]# systemctl restart sshd.service

Log in to the node as the root user:

Syntax

ssh root@HOST_NAME

Replace HOST_NAME with the host name of the Ceph node.

Example

[root@admin ~]# ssh root@host01

Enter the root password when prompted.
3.7. Creating an Ansible user with sudo access
You can create an Ansible user with password-less root access on all nodes in the storage cluster to run the cephadm-ansible playbooks. The Ansible user must be able to log into all the Red Hat Ceph Storage nodes as a user that has root privileges to install software and create configuration files without prompting for a password.
Prerequisites
- Root-level access to all nodes.
- For Red Hat Enterprise Linux 9, to log in as a root user, see Enabling SSH login as root user on Red Hat Enterprise Linux 9.
Procedure
Log in to the node as the root user:

Syntax

ssh root@HOST_NAME

Replace HOST_NAME with the host name of the Ceph node.

Example

[root@admin ~]# ssh root@host01

Enter the root password when prompted.

Create a new Ansible user:

Syntax

adduser USER_NAME

Replace USER_NAME with the new user name for the Ansible user.

Example

[root@host01 ~]# adduser ceph-admin

Important: Do not use ceph as the user name. The ceph user name is reserved for the Ceph daemons. A uniform user name across the cluster can improve ease of use, but avoid using obvious user names, because intruders typically use them for brute-force attacks.

Set a new password for this user:

Syntax

passwd USER_NAME

Replace USER_NAME with the new user name for the Ansible user.

Example

[root@host01 ~]# passwd ceph-admin

Enter the new password twice when prompted.

Configure sudo access for the newly created user:

Syntax

cat << EOF > /etc/sudoers.d/USER_NAME
USER_NAME ALL = (root) NOPASSWD:ALL
EOF

Replace USER_NAME with the new user name for the Ansible user.

Example

[root@host01 ~]# cat << EOF > /etc/sudoers.d/ceph-admin
ceph-admin ALL = (root) NOPASSWD:ALL
EOF

Assign the correct file permissions to the new file:

Syntax

chmod 0440 /etc/sudoers.d/USER_NAME

Replace USER_NAME with the new user name for the Ansible user.

Example

[root@host01 ~]# chmod 0440 /etc/sudoers.d/ceph-admin

- Repeat the above steps on all nodes in the storage cluster.
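The heredoc and chmod steps can be rehearsed safely before touching /etc/sudoers.d. A minimal sketch, using a temporary directory and an illustrative user name; the real target directory requires root:

```shell
# Hedged sketch: stage the sudoers entry in a temp dir (the real target,
# /etc/sudoers.d, requires root).
stage=$(mktemp -d)
user=ceph-admin
printf '%s ALL = (root) NOPASSWD:ALL\n' "$user" > "$stage/$user"
chmod 0440 "$stage/$user"

# Verify the content and the 0440 permissions.
cat "$stage/$user"
stat -c '%a' "$stage/$user"   # prints 440
```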
3.8. Configuring SSH
As a storage administrator, you can use an SSH key with cephadm to securely authenticate with remote hosts. The SSH key is stored in the Ceph Monitor and is used to connect to remote hosts.
Prerequisites
- An Ansible administration node.
- Root-level access to the Ansible administration node.
- The cephadm-ansible package is installed on the node.
Procedure
- Navigate to the cephadm-ansible directory.

Generate a new SSH key:

Example

[ceph-admin@admin cephadm-ansible]$ ceph cephadm generate-key

Retrieve the public portion of the SSH key:

Example

[ceph-admin@admin cephadm-ansible]$ ceph cephadm get-pub-key

Delete the currently stored SSH key:

Example

[ceph-admin@admin cephadm-ansible]$ ceph cephadm clear-key

Restart the mgr daemon to reload the configuration:

Example

[ceph-admin@admin cephadm-ansible]$ ceph mgr fail
3.8.1. Configuring a different SSH user
As a storage administrator, you can configure a non-root SSH user who can log into all the Ceph cluster nodes with enough privileges to download container images, start containers, and execute commands without prompting for a password.
Prior to configuring a non-root SSH user, the cluster SSH key needs to be added to the user’s authorized_keys file and non-root users must have passwordless sudo access.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- An Ansible administration node.
- Root-level access to the Ansible administration node.
- The cephadm-ansible package is installed on the node.
- Add the cluster SSH keys to the user's authorized_keys file.
- Enable passwordless sudo access for the non-root users.
Procedure
- Navigate to the cephadm-ansible directory.

Provide cephadm the name of the user who is going to perform all the cephadm operations:

Syntax

ceph cephadm set-user USER_NAME

Example

[ceph-admin@admin cephadm-ansible]$ ceph cephadm set-user user

Retrieve the SSH public key:

Syntax

ceph cephadm get-pub-key > ~/ceph.pub

Example

[ceph-admin@admin cephadm-ansible]$ ceph cephadm get-pub-key > ~/ceph.pub

Copy the SSH keys to all the hosts:

Syntax

ssh-copy-id -f -i ~/ceph.pub USER@HOST

Example

[ceph-admin@admin cephadm-ansible]$ ssh-copy-id -f -i ~/ceph.pub ceph-admin@host01
3.9. Enabling password-less SSH for Ansible
Generate an SSH key pair on the Ansible administration node and distribute the public key to each node in the storage cluster so that Ansible can access the nodes without being prompted for a password.
Prerequisites
- Access to the Ansible administration node.
- Ansible user with sudo access to all nodes in the storage cluster.
- For Red Hat Enterprise Linux 9, to log in as a root user, see Enabling SSH login as root user on Red Hat Enterprise Linux 9.
Procedure
Generate the SSH key pair, accept the default file name, and leave the passphrase empty:

[ceph-admin@admin ~]$ ssh-keygen

Copy the public key to all nodes in the storage cluster:

ssh-copy-id USER_NAME@HOST_NAME

Replace USER_NAME with the new user name for the Ansible user. Replace HOST_NAME with the host name of the Ceph node.

Example

[ceph-admin@admin ~]$ ssh-copy-id ceph-admin@host01

Create the user's SSH config file:

[ceph-admin@admin ~]$ touch ~/.ssh/config

Open the config file for editing. Set values for the Hostname and User options for each node in the storage cluster:

Syntax

Host HOST_NAME
   Hostname HOST_NAME
   User USER_NAME

Replace HOST_NAME with the host name of the Ceph node. Replace USER_NAME with the new user name for the Ansible user.

Example

Host host01
   Hostname host01
   User ceph-admin
Host host02
   Hostname host02
   User ceph-admin

Important: By configuring the ~/.ssh/config file you do not have to specify the -u USER_NAME option each time you execute the ansible-playbook command.

Set the correct file permissions for the ~/.ssh/config file:

[ceph-admin@admin ~]$ chmod 600 ~/.ssh/config
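The per-node Host entries and the 600 permissions can be staged and verified in one pass. A minimal sketch using a scratch directory; host and user names are illustrative:

```shell
# Hedged sketch: build the SSH config in a scratch directory and lock
# down its permissions (host and user names are illustrative).
scratch=$(mktemp -d)
cat > "$scratch/config" <<'EOF'
Host host01
   Hostname host01
   User ceph-admin
Host host02
   Hostname host02
   User ceph-admin
EOF
chmod 600 "$scratch/config"

# Each Host block should carry a User line.
grep -c '^   User ceph-admin$' "$scratch/config"   # prints 2
```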
3.10. Running the preflight playbook
This Ansible playbook configures the Ceph repository and prepares the storage cluster for bootstrapping. It also installs some prerequisites, such as podman, lvm2, chrony, and cephadm. The default location for cephadm-ansible and cephadm-preflight.yml is /usr/share/cephadm-ansible.
The preflight playbook uses the cephadm-ansible inventory file to identify the admin node and all other nodes in the storage cluster.
The default location for the inventory file is /usr/share/cephadm-ansible/hosts. The following example shows the structure of a typical inventory file:
Example

host02
host03
host04

[admin]
host01
The [admin] group in the inventory file contains the name of the node where the admin keyring is stored. On a new storage cluster, the node in the [admin] group is the bootstrap node. To add additional admin hosts after bootstrapping the cluster, see Setting up the admin node in the Installation Guide.
Run the preflight playbook before you bootstrap the initial host.
If you are performing a disconnected installation, see Running the preflight playbook for a disconnected installation.
Prerequisites
- Root-level access to the Ansible administration node.
- Ansible user with sudo and passwordless SSH access to all nodes in the storage cluster.

Note: In the example below, host01 is the bootstrap node.
Procedure
- Navigate to the /usr/share/cephadm-ansible directory.

Open and edit the hosts file and add your nodes:

Example

host02
host03
host04

[admin]
host01

Run the preflight playbook:
Syntax

ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars "ceph_origin=rhcs"

Example

[ceph-admin@admin cephadm-ansible]$ ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=rhcs"

After installation is complete, cephadm resides in the /usr/sbin/ directory.

Use the --limit option to run the preflight playbook on a selected set of hosts in the storage cluster:

Syntax

ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars "ceph_origin=rhcs" --limit GROUP_NAME|NODE_NAME

Replace GROUP_NAME with a group name from your inventory file. Replace NODE_NAME with a specific node name from your inventory file.

Note: Optionally, you can group your nodes in your inventory file by group name such as [mons], [osds], and [mgrs]. However, admin nodes must be added to the [admin] group and clients must be added to the [clients] group.

Example

[ceph-admin@admin cephadm-ansible]$ ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=rhcs" --limit clients
[ceph-admin@admin cephadm-ansible]$ ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=rhcs" --limit host01

When you run the preflight playbook, cephadm-ansible automatically installs chrony and ceph-common on the client nodes.

The preflight playbook installs chrony but configures it for a single NTP source. If you want to configure multiple sources or if you have a disconnected environment, see the following documentation for more information:
3.11. Bootstrapping a new storage cluster
The cephadm utility performs the following tasks during the bootstrap process:
- Installs and starts a Ceph Monitor daemon and a Ceph Manager daemon for a new Red Hat Ceph Storage cluster on the local node as containers.
- Creates the /etc/ceph directory.
- Writes a copy of the public key to /etc/ceph/ceph.pub for the Red Hat Ceph Storage cluster and adds the SSH key to the root user's /root/.ssh/authorized_keys file.
- Applies the _admin label to the bootstrap node.
- Writes a minimal configuration file needed to communicate with the new cluster to /etc/ceph/ceph.conf.
- Writes a copy of the client.admin administrative secret key to /etc/ceph/ceph.client.admin.keyring.
- Deploys a basic monitoring stack with Prometheus, Grafana, and other tools including node-exporter and Alertmanager.
If you are performing a disconnected installation, see Performing a disconnected installation.
If you have existing Prometheus services that you want to run with the new storage cluster, or if you are deploying Ceph with Rook, pass the --skip-monitoring-stack option to the cephadm bootstrap command. This option bypasses the basic monitoring stack so that you can manually configure it later.
If you are deploying the monitoring stack, see Deploying the monitoring stack using the Ceph Orchestrator in the Red Hat Ceph Storage Operations Guide.
Bootstrapping provides the default user name and password for the initial login to the Dashboard. Bootstrap requires you to change the password after you log in.
Before you begin the bootstrapping process, make sure that the container image that you want to use has the same version of Red Hat Ceph Storage as cephadm. If the two versions do not match, bootstrapping fails at the Creating initial admin user stage.
Before you begin the bootstrapping process, you must create a username and password for the registry.redhat.io container registry. For more information about Red Hat container registry authentication, see the knowledge base article Red Hat Container Registry Authentication.
Prerequisites
- An IP address for the first Ceph Monitor container, which is also the IP address for the first node in the storage cluster.
- Login access to registry.redhat.io.
- A minimum of 10 GB of free space for /var/lib/containers/.
- Root-level access to all nodes.
If the storage cluster includes multiple networks and interfaces, be sure to choose a network that is accessible by any node that uses the storage cluster.
If the local node uses fully-qualified domain names (FQDN), then add the --allow-fqdn-hostname option to cephadm bootstrap on the command line.
Run cephadm bootstrap on the node that you want to be the initial Monitor node in the cluster. The IP_ADDRESS option should be the IP address of the node you are using to run cephadm bootstrap.
If you want to deploy a storage cluster using IPV6 addresses, then use the IPV6 address format for the --mon-ip IP_ADDRESS option. For example: cephadm bootstrap --mon-ip 2620:52:0:880:225:90ff:fefc:2536 --registry-json /etc/mylogin.json
Procedure
Bootstrap a storage cluster:
Syntax

cephadm bootstrap --cluster-network NETWORK_CIDR --mon-ip IP_ADDRESS --registry-url registry.redhat.io --registry-username USER_NAME --registry-password PASSWORD --yes-i-know

Example

[root@host01 ~]# cephadm bootstrap --cluster-network 10.10.128.0/24 --mon-ip 10.10.128.68 --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1 --yes-i-know

Note: If you want internal cluster traffic routed over the public network, you can omit the --cluster-network NETWORK_CIDR option.

The script takes a few minutes to complete. Once the script completes, it provides the credentials to the Red Hat Ceph Storage Dashboard URL, a command to access the Ceph command-line interface (CLI), and a request to enable telemetry.
Additional Resources
- For more information about the recommended bootstrap command options, see Recommended cephadm bootstrap command options.
- For more information about the options available for the bootstrap command, see Bootstrap command options.
- For information about using a JSON file to contain login credentials for the bootstrap process, see Using a JSON file to protect login information.
3.11.1. Recommended cephadm bootstrap command options
The cephadm bootstrap command has multiple options that allow you to specify file locations, configure ssh settings, set passwords, and perform other initial configuration tasks.
Red Hat recommends that you use a basic set of command options for cephadm bootstrap. You can configure additional options after your initial cluster is up and running.
The following examples show how to specify the recommended options.
Syntax
cephadm bootstrap --ssh-user USER_NAME --mon-ip IP_ADDRESS --allow-fqdn-hostname --registry-json REGISTRY_JSON
Example
[root@host01 ~]# cephadm bootstrap --ssh-user ceph --mon-ip 10.10.128.68 --allow-fqdn-hostname --registry-json /etc/mylogin.json
3.11.2. Using a JSON file to protect login information
As a storage administrator, you might choose to add login and password information to a JSON file, and then refer to the JSON file for bootstrapping. This protects the login credentials from exposure.
You can also use a JSON file with the cephadm --registry-login command.
Prerequisites
- An IP address for the first Ceph Monitor container, which is also the IP address for the first node in the storage cluster.
- Login access to registry.redhat.io.
- A minimum of 10 GB of free space for /var/lib/containers/.
- Root-level access to all nodes.
Procedure
Create the JSON file. In this example, the file is named
mylogin.json.Syntax
{ "url":"REGISTRY_URL", "username":"USER_NAME", "password":"PASSWORD" }{ "url":"REGISTRY_URL", "username":"USER_NAME", "password":"PASSWORD" }Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example
{ "url":"registry.redhat.io", "username":"myuser1", "password":"mypassword1" }{ "url":"registry.redhat.io", "username":"myuser1", "password":"mypassword1" }Copy to Clipboard Copied! Toggle word wrap Toggle overflow Bootstrap a storage cluster:
Syntax
cephadm bootstrap --mon-ip IP_ADDRESS --registry-json /etc/mylogin.json
Example
[root@host01 ~]# cephadm bootstrap --mon-ip 10.10.128.68 --registry-json /etc/mylogin.json
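Before bootstrapping, it can help to confirm that the JSON file parses cleanly, because a malformed file causes the registry login to fail. The following is a minimal sketch, assuming python3 is available; the /tmp path and the credentials are illustrative placeholders:

```shell
# Create the registry login file with restrictive permissions.
# /tmp/mylogin.json and the credentials below are placeholders.
cat > /tmp/mylogin.json <<'EOF'
{
  "url": "registry.redhat.io",
  "username": "myuser1",
  "password": "mypassword1"
}
EOF
chmod 600 /tmp/mylogin.json                        # readable by the owner only
python3 -m json.tool /tmp/mylogin.json > /dev/null && echo "JSON is valid"
```

In production, store the file in a root-owned location such as /etc/mylogin.json, as in the bootstrap example above.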
3.11.3. Bootstrapping a storage cluster using a service configuration file
To bootstrap the storage cluster and configure additional hosts and daemons using a service configuration file, use the --apply-spec option with the cephadm bootstrap command. The configuration file is a .yaml file that contains the service type, placement, and designated nodes for services that you want to deploy.
If you want to use a non-default realm or zone for applications such as multi-site, configure your Ceph Object Gateway daemons after you bootstrap the storage cluster, instead of adding them to the configuration file and using the --apply-spec option. This gives you the opportunity to create the realm or zone you need for the Ceph Object Gateway daemons before deploying them. See the Red Hat Ceph Storage Operations Guide for more information.
If you are deploying an NFS-Ganesha gateway or a Metadata Server (MDS) service, configure them after bootstrapping the storage cluster.
- To deploy a Ceph NFS-Ganesha gateway, you must create a RADOS pool first.
- To deploy the MDS service, you must create a CephFS volume first.
See the Red Hat Ceph Storage Operations Guide for more information.
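The required ordering above can be sketched as follows. The pool, export, and volume names are assumptions, and the commands are echoed rather than executed here so that the sequence is explicit; run them without echo on the live cluster:

```shell
# Post-bootstrap ordering for NFS-Ganesha and MDS (names such as
# 'nfs-ganesha', 'mynfs', and 'myfs' are illustrative placeholders).
# The commands are printed, not executed, in this sketch.
echo 'ceph osd pool create nfs-ganesha'   # 1. create the RADOS pool first
echo 'ceph orch apply nfs mynfs'          #    then deploy the NFS service
echo 'ceph fs volume create myfs'         # 2. create the CephFS volume first;
                                          #    this also schedules MDS daemons
```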
With Red Hat Ceph Storage 6.0, if you run the bootstrap command with the --apply-spec option, ensure that you include the IP address of the bootstrap host in the specification file. This prevents the IP address from resolving to the loopback address when re-adding the bootstrap host, where the active Ceph Manager is already running.
If you do not use the --apply-spec option during bootstrap, and instead use the ceph orch apply command with another specification file that re-adds the host on which the active Ceph Manager is running, ensure that you explicitly provide the addr field. This applies to any specification file applied after bootstrapping.
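A host entry with an explicit addr field might look like the following sketch; the hostname and address are illustrative, and the file is written to /tmp here:

```shell
# Host specification with an explicit addr field (placeholder values).
# Apply on the cluster with: ceph orch apply -i host-spec.yaml
cat > /tmp/host-spec.yaml <<'EOF'
service_type: host
hostname: host01
addr: 10.10.128.68
EOF
grep 'addr:' /tmp/host-spec.yaml
```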
Prerequisites
- At least one running virtual machine (VM) or server.
- Red Hat Enterprise Linux 9.0 or later with ansible-core bundled into AppStream.
- Root-level access to all nodes.
- Login access to registry.redhat.io.
- Passwordless ssh is set up on all hosts in the storage cluster.
- cephadm is installed on the node that you want to be the initial Monitor node in the storage cluster.
For the latest supported Red Hat Enterprise Linux versions for bootstrap nodes, see the Red Hat Ceph Storage Compatibility Guide.
Procedure
- Log in to the bootstrap host.
Create the service configuration .yaml file for your storage cluster. The example file directs cephadm bootstrap to configure the initial host and two additional hosts, and it specifies that OSDs be created on all available disks.
Bootstrap the storage cluster with the --apply-spec option:
Syntax
cephadm bootstrap --apply-spec CONFIGURATION_FILE_NAME --mon-ip MONITOR_IP_ADDRESS --registry-url registry.redhat.io --registry-username USER_NAME --registry-password PASSWORD
Example
[root@host01 ~]# cephadm bootstrap --apply-spec initial-config.yaml --mon-ip 10.10.128.68 --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1
The script takes a few minutes to complete. Once the script completes, it provides the credentials to the Red Hat Ceph Storage Dashboard URL, a command to access the Ceph command-line interface (CLI), and a request to enable telemetry.
- Once your storage cluster is up and running, see the Red Hat Ceph Storage Operations Guide for more information about configuring additional daemons and services.
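The service configuration file used in this procedure might look like the following sketch: one bootstrap host plus two additional hosts, Monitors placed by host pattern, and OSDs on all available devices. The host names, addresses, and the OSD service ID are assumptions, and the file is written to /tmp here for illustration:

```shell
# Sketch of initial-config.yaml; host names, addresses, and the service ID
# are illustrative placeholders.
cat > /tmp/initial-config.yaml <<'EOF'
service_type: host
addr: 10.10.128.68
hostname: host01
---
service_type: host
addr: 10.10.128.69
hostname: host02
---
service_type: host
addr: 10.10.128.70
hostname: host03
---
service_type: mon
placement:
  host_pattern: "host*"
---
service_type: osd
service_id: my_osds
placement:
  host_pattern: "host*"
data_devices:
  all: true
EOF
grep -c '^service_type: host$' /tmp/initial-config.yaml   # prints 3
```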
3.11.4. Bootstrapping the storage cluster as a non-root user
To bootstrap the Red Hat Ceph Storage cluster as a non-root user on the bootstrap node, use the --ssh-user option with the cephadm bootstrap command. --ssh-user specifies a user for SSH connections to cluster nodes.
Non-root users must have passwordless sudo access.
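Passwordless sudo for the bootstrap user can be granted with a drop-in sudoers file. The following is a minimal sketch, assuming the user is named ceph; the production target path is /etc/sudoers.d/ceph, while a temporary path is used here:

```shell
# Grant passwordless sudo to the 'ceph' user (an assumed user name).
# In production, write this file as root to /etc/sudoers.d/ceph.
SUDOERS_FILE=/tmp/ceph-sudoers
echo 'ceph ALL=(ALL) NOPASSWD: ALL' > "$SUDOERS_FILE"
chmod 440 "$SUDOERS_FILE"      # sudoers drop-in files must be mode 440
cat "$SUDOERS_FILE"
```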
Prerequisites
- An IP address for the first Ceph Monitor container, which is also the IP address for the initial Monitor node in the storage cluster.
- Login access to registry.redhat.io.
- A minimum of 10 GB of free space for /var/lib/containers/.
- SSH public and private keys.
- Passwordless sudo access to the bootstrap node.
Procedure
Switch to the SSH user on the bootstrap node:
Syntax
su - SSH_USER_NAME
Example
[root@host01 ~]# su - ceph
Last login: Tue Sep 14 12:00:29 EST 2021 on pts/0
Establish the SSH connection to the bootstrap node:
Example
[ceph@host01 ~]$ ssh host01
Last login: Tue Sep 14 12:03:29 EST 2021 on pts/0
Optional: Invoke the cephadm bootstrap command.
Note: Using private and public keys is optional. If SSH keys have not previously been created, you can create them during this step.
Include the --ssh-private-key and --ssh-public-key options:
Syntax
cephadm bootstrap --ssh-user USER_NAME --mon-ip IP_ADDRESS --ssh-private-key PRIVATE_KEY --ssh-public-key PUBLIC_KEY --registry-url registry.redhat.io --registry-username USER_NAME --registry-password PASSWORD
Example
cephadm bootstrap --ssh-user ceph --mon-ip 10.10.128.68 --ssh-private-key /home/ceph/.ssh/id_rsa --ssh-public-key /home/ceph/.ssh/id_rsa.pub --registry-url registry.redhat.io --registry-username myuser1 --registry-password mypassword1
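If you prefer to create the key pair ahead of time rather than during bootstrap, ssh-keygen can generate it non-interactively. A sketch using an illustrative /tmp path (the bootstrap example above expects the keys under /home/ceph/.ssh/):

```shell
# Generate a dedicated RSA key pair for cephadm.
# The path and comment are illustrative placeholders.
rm -f /tmp/ceph_id_rsa /tmp/ceph_id_rsa.pub
ssh-keygen -t rsa -b 4096 -N '' -C 'cephadm bootstrap key' -f /tmp/ceph_id_rsa
ls -l /tmp/ceph_id_rsa /tmp/ceph_id_rsa.pub
```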
3.11.5. Bootstrap command options
The cephadm bootstrap command bootstraps a Ceph storage cluster on the local host. It deploys a MON daemon and a MGR daemon on the bootstrap node, automatically deploys the monitoring stack on the local host, and calls ceph orch host add HOSTNAME.
The following table lists the available options for cephadm bootstrap.
cephadm bootstrap option | Description |
|---|---|
| --config CONFIG_FILE, -c CONFIG_FILE |
CONFIG_FILE is the ceph.conf file to use with the bootstrap command. |
| --cluster-network NETWORK_CIDR |
Use the subnet defined by NETWORK_CIDR for internal cluster traffic. This is specified in CIDR notation. For example: 10.90.90.0/24. |
| --mon-id MON_ID | Bootstraps on the host named MON_ID. Default value is the local host. |
| --mon-addrv MON_ADDRV | mon IPs (e.g., [v2:localipaddr:3300,v1:localipaddr:6789]) |
| --mon-ip IP_ADDRESS |
IP address of the node you are using to run cephadm bootstrap. |
| --mgr-id MGR_ID | Host ID where a MGR node should be installed. Default: randomly generated. |
| --fsid FSID | Cluster FSID. |
| --output-dir OUTPUT_DIR | Use this directory to write config, keyring, and pub key files. |
| --output-keyring OUTPUT_KEYRING | Use this location to write the keyring file with the new cluster admin and mon keys. |
| --output-config OUTPUT_CONFIG | Use this location to write the configuration file to connect to the new cluster. |
| --output-pub-ssh-key OUTPUT_PUB_SSH_KEY | Use this location to write the public SSH key for the cluster. |
| --skip-ssh | Skip the setup of the ssh key on the local host. |
| --initial-dashboard-user INITIAL_DASHBOARD_USER | Initial user for the dashboard. |
| --initial-dashboard-password INITIAL_DASHBOARD_PASSWORD | Initial password for the initial dashboard user. |
| --ssl-dashboard-port SSL_DASHBOARD_PORT | Port number used to connect with the dashboard using SSL. |
| --dashboard-key DASHBOARD_KEY | Dashboard key. |
| --dashboard-crt DASHBOARD_CRT | Dashboard certificate. |
| --ssh-config SSH_CONFIG | SSH config. |
| --ssh-private-key SSH_PRIVATE_KEY | SSH private key. |
| --ssh-public-key SSH_PUBLIC_KEY | SSH public key. |
| --ssh-user SSH_USER | Sets the user for SSH connections to cluster hosts. Passwordless sudo is needed for non-root users. |
| --skip-mon-network | Skip setting mon public_network based on the bootstrap mon IP. |
| --skip-dashboard | Do not enable the Ceph Dashboard. |
| --dashboard-password-noupdate | Disable forced dashboard password change. |
| --no-minimize-config | Do not assimilate and minimize the configuration file. |
| --skip-ping-check | Do not verify that the mon IP is pingable. |
| --skip-pull | Do not pull the latest image before bootstrapping. |
| --skip-firewalld | Do not configure firewalld. |
| --allow-overwrite | Allow the overwrite of existing --output-* config/keyring/ssh files. |
| --allow-fqdn-hostname | Allow fully qualified host name. |
| --skip-prepare-host | Do not prepare host. |
| --orphan-initial-daemons | Do not create initial mon, mgr, and crash service specs. |
| --skip-monitoring-stack | Do not automatically provision the monitoring stack (prometheus, grafana, alertmanager, node-exporter). |
| --apply-spec APPLY_SPEC | Apply cluster spec file after bootstrap (copy ssh key, add hosts and apply services). |
| --registry-url REGISTRY_URL |
Specifies the URL of the custom registry to log in to. For example: registry.redhat.io. |
| --registry-username REGISTRY_USERNAME | User name of the login account to the custom registry. |
| --registry-password REGISTRY_PASSWORD | Password of the login account to the custom registry. |
| --registry-json REGISTRY_JSON | JSON file containing registry login information. |
3.11.6. Configuring a private registry for a disconnected installation
You can use a disconnected installation procedure to install cephadm and bootstrap your storage cluster on a private network. A disconnected installation uses a private registry for installation. Use this procedure when the Red Hat Ceph Storage nodes do NOT have access to the Internet during deployment.
Follow this procedure to set up a secure private registry by using authentication and a self-signed certificate. Perform these steps on a node that has both Internet access and access to the local cluster.
Using an insecure registry for production is not recommended.
Prerequisites
- At least one running virtual machine (VM) or server with an active internet connection.
- Red Hat Enterprise Linux 9.0 or later with ansible-core bundled into AppStream.
- Login access to registry.redhat.io.
- Root-level access to all nodes.
For the latest supported Red Hat Enterprise Linux versions for bootstrap nodes, see the Red Hat Ceph Storage Compatibility Guide.
Procedure
- Log in to the node that has access to both the public network and the cluster nodes.
Register the node, and when prompted, enter the appropriate Red Hat Customer Portal credentials:
Example
[root@admin ~]# subscription-manager register
Pull the latest subscription data:
Example
[root@admin ~]# subscription-manager refresh
List all available subscriptions for Red Hat Ceph Storage:
Example
[root@admin ~]# subscription-manager list --available --all --matches="*Ceph*"
Copy the Pool ID from the list of available subscriptions for Red Hat Ceph Storage.
Attach the subscription to get access to the software entitlements:
Syntax
subscription-manager attach --pool=POOL_ID
Replace POOL_ID with the Pool ID identified in the previous step.
Disable the default software repositories, and enable the server and the extras repositories:
Red Hat Enterprise Linux 9
[root@admin ~]# subscription-manager repos --disable=*
[root@admin ~]# subscription-manager repos --enable=rhel-9-for-x86_64-baseos-rpms
[root@admin ~]# subscription-manager repos --enable=rhel-9-for-x86_64-appstream-rpms
Install the
podman and httpd-tools packages:
Example
[root@admin ~]# dnf install -y podman httpd-tools
Create folders for the private registry:
Example
[root@admin ~]# mkdir -p /opt/registry/{auth,certs,data}
The registry is stored in /opt/registry, and the directories are mounted in the container that runs the registry.
The
authdirectory stores thehtpasswdfile that the registry uses for authentication. -
The
certsdirectory stores the certificates that the registry uses for authentication. -
The
datadirectory stores the registry images.
-
The
Create credentials for accessing the private registry:
Syntax
htpasswd -bBc /opt/registry/auth/htpasswd PRIVATE_REGISTRY_USERNAME PRIVATE_REGISTRY_PASSWORD
- The b option provides the password from the command line.
- The B option stores the password using bcrypt encryption.
- The c option creates the htpasswd file.
coption creates thehtpasswdfile. - Replace PRIVATE_REGISTRY_USERNAME with the username to create for the private registry.
Replace PRIVATE_REGISTRY_PASSWORD with the password to create for the private registry username.
Example
[root@admin ~]# htpasswd -bBc /opt/registry/auth/htpasswd myregistryusername myregistrypassword1
Create a self-signed certificate:
Syntax
openssl req -newkey rsa:4096 -nodes -sha256 -keyout /opt/registry/certs/domain.key -x509 -days 365 -out /opt/registry/certs/domain.crt -addext "subjectAltName = DNS:LOCAL_NODE_FQDN"
Replace LOCAL_NODE_FQDN with the fully qualified host name of the private registry node.
Note: You are prompted for the respective options for your certificate. The CN= value is the host name of your node and should be resolvable by DNS or the /etc/hosts file.
Example
[root@admin ~]# openssl req -newkey rsa:4096 -nodes -sha256 -keyout /opt/registry/certs/domain.key -x509 -days 365 -out /opt/registry/certs/domain.crt -addext "subjectAltName = DNS:admin.lab.redhat.com"
Note: When creating a self-signed certificate, be sure to create a certificate with a proper Subject Alternative Name (SAN). Podman commands that require TLS verification for certificates that do not include a proper SAN return the following error: x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0
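You can confirm that a certificate carries the expected SAN before distributing it. The following sketch generates a throwaway certificate under /tmp and inspects its subjectAltName extension; the host name and paths are illustrative, and -subj is used to avoid the interactive prompts:

```shell
# Generate a throwaway self-signed certificate with a SAN and inspect it.
# The /tmp paths and host name are illustrative placeholders.
openssl req -newkey rsa:2048 -nodes -sha256 \
    -keyout /tmp/domain.key -x509 -days 365 \
    -subj "/CN=admin.lab.redhat.com" \
    -addext "subjectAltName = DNS:admin.lab.redhat.com" \
    -out /tmp/domain.crt
openssl x509 -in /tmp/domain.crt -noout -text | grep 'DNS:'
```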
Create a symbolic link to domain.cert to allow skopeo to locate the certificate with the file extension .cert:
Example
[root@admin ~]# ln -s /opt/registry/certs/domain.crt /opt/registry/certs/domain.cert
Add the certificate to the trusted list on the private registry node:
Syntax
cp /opt/registry/certs/domain.crt /etc/pki/ca-trust/source/anchors/
update-ca-trust
trust list | grep -i "LOCAL_NODE_FQDN"
Replace LOCAL_NODE_FQDN with the FQDN of the private registry node.
Example
[root@admin ~]# cp /opt/registry/certs/domain.crt /etc/pki/ca-trust/source/anchors/
[root@admin ~]# update-ca-trust
[root@admin ~]# trust list | grep -i "admin.lab.redhat.com"
  label: admin.lab.redhat.com
Copy the certificate to any nodes that will access the private registry for installation, and update the trusted list:
Example
[root@admin ~]# scp /opt/registry/certs/domain.crt root@host01:/etc/pki/ca-trust/source/anchors/
[root@host01 ~]# update-ca-trust
[root@host01 ~]# trust list | grep -i "admin.lab.redhat.com"
  label: admin.lab.redhat.com
Download and install the mirror registry.
- Download the mirror registry from the Red Hat Hybrid Cloud Console.
Install the mirror registry.
Syntax
./mirror-registry install --sslKey /opt/registry/certs/domain.key --sslCert /opt/registry/certs/domain.crt --initUser myregistryuser --initPassword myregistrypass
On the local registry node, verify that registry.redhat.io is in the container registry search path.
Open the /etc/containers/registries.conf file for editing, and add registry.redhat.io to the unqualified-search-registries list, if it does not already exist:
Example
unqualified-search-registries = ["registry.redhat.io", "registry.access.redhat.com", "registry.fedoraproject.org", "registry.centos.org", "docker.io"]
Log in to registry.redhat.io with your Red Hat Customer Portal credentials:
Syntax
podman login registry.redhat.io
Copy the following Red Hat Ceph Storage 6 image, Prometheus images, and Dashboard image from the Red Hat Customer Portal to the private registry:
Table 3.1. Custom image details for the monitoring stack

| Monitoring stack component | Image details |
|---|---|
| Prometheus | registry.redhat.io/openshift4/ose-prometheus:v4.12 |
| Grafana | registry.redhat.io/rhceph/rhceph-6-dashboard-rhel9:latest |
| Node-exporter | registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.12 |
| AlertManager | registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.12 |
| HAProxy | registry.redhat.io/rhceph/rhceph-haproxy-rhel9:latest |
| Keepalived | registry.redhat.io/rhceph/keepalived-rhel9:latest |
| SNMP Gateway | registry.redhat.io/rhceph/snmp-notifier-rhel9:latest |
Syntax
podman run -v /CERTIFICATE_DIRECTORY_PATH:/certs:Z -v /CERTIFICATE_DIRECTORY_PATH/domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel8/skopeo:8.5-8 skopeo copy --remove-signatures --src-creds RED_HAT_CUSTOMER_PORTAL_LOGIN:RED_HAT_CUSTOMER_PORTAL_PASSWORD --dest-cert-dir=./certs/ --dest-creds PRIVATE_REGISTRY_USERNAME:PRIVATE_REGISTRY_PASSWORD docker://registry.redhat.io/SRC_IMAGE:SRC_TAG docker://LOCAL_NODE_FQDN:8433/DST_IMAGE:DST_TAG
- Replace CERTIFICATE_DIRECTORY_PATH with the directory path to the self-signed certificates.
- Replace RED_HAT_CUSTOMER_PORTAL_LOGIN and RED_HAT_CUSTOMER_PORTAL_PASSWORD with your Red Hat Customer Portal credentials.
- Replace PRIVATE_REGISTRY_USERNAME and PRIVATE_REGISTRY_PASSWORD with the private registry credentials.
- Replace SRC_IMAGE and SRC_TAG with the name and tag of the image to copy from registry.redhat.io.
- Replace DST_IMAGE and DST_TAG with the name and tag of the image to copy to the private registry.
Replace LOCAL_NODE_FQDN with the FQDN of the private registry.
Example
[root@admin ~]# podman run -v /opt/registry/certs:/certs:Z -v /opt/registry/certs/domain.cert:/certs/domain.cert:Z --rm registry.redhat.io/rhel8/skopeo:8.5-8 skopeo copy --remove-signatures --src-creds myuser1:mypassword1 --dest-cert-dir=./certs/ --dest-creds myregistryusername:myregistrypassword1 docker://registry.redhat.io/openshift4/ose-prometheus:v4.12 docker://admin.lab.redhat.com:8433/openshift4/ose-prometheus:v4.12
- Using the Ceph Dashboard, verify that the images are in the local registry. For more information, see Monitoring services of the Ceph cluster on the dashboard in the Red Hat Ceph Storage Dashboard guide.
3.11.7. Running the preflight playbook for a disconnected installation
You use the cephadm-preflight.yml Ansible playbook to configure the Ceph repository and prepare the storage cluster for bootstrapping. It also installs some prerequisites, such as podman, lvm2, chrony, and cephadm.
The preflight playbook uses the cephadm-ansible inventory hosts file to identify all the nodes in the storage cluster. The default location for cephadm-ansible, cephadm-preflight.yml, and the inventory hosts file is /usr/share/cephadm-ansible/.
A typical inventory file has the following structure:
The [admin] group in the inventory file contains the name of the node where the admin keyring is stored.
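An inventory with the structure described above might look like the following sketch; the host names are illustrative, and the real file lives at /usr/share/cephadm-ansible/hosts rather than the /tmp path used here:

```shell
# Sketch of a cephadm-ansible inventory file; host names are placeholders.
cat > /tmp/hosts <<'EOF'
host02
host03
host04

[admin]
host01
EOF
grep -A1 '\[admin\]' /tmp/hosts
```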
Run the preflight playbook before you bootstrap the initial host.
Prerequisites
- The cephadm-ansible package is installed on the Ansible administration node.
- Root-level access to all nodes in the storage cluster.
- Passwordless ssh is set up on all hosts in the storage cluster.
- Nodes are configured to access a local YUM repository server with the following repositories enabled:
- rhel-9-for-x86_64-baseos-rpms
- rhel-9-for-x86_64-appstream-rpms
- rhceph-6-tools-for-rhel-9-x86_64-rpms
When using Red Hat Enterprise Linux 8.x hosts, the Admin node must run a Red Hat Enterprise Linux 9.x version that is supported for your Red Hat Ceph Storage release. For the latest supported Red Hat Enterprise Linux versions, see the Red Hat Ceph Storage Compatibility Guide.
For more information about setting up a local YUM repository, see the knowledge base article Creating a Local Repository and Sharing with Disconnected/Offline/Air-gapped Systems
Procedure
- Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node.
- Open the hosts file for editing, and add your nodes.
- Run the preflight playbook with the ceph_origin parameter set to custom to use a local YUM repository:
Syntax
ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars "ceph_origin=custom" -e "custom_repo_url=CUSTOM_REPO_URL"
Example
[ceph-admin@admin cephadm-ansible]$ ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=custom" -e "custom_repo_url=http://mycustomrepo.lab.redhat.com/x86_64/os/"
After installation is complete, cephadm resides in the /usr/sbin/ directory.
Note: Populate the contents of the registries.conf file with the Ansible playbook:
Syntax
ansible-playbook -vvv -i INVENTORY_HOST_FILE cephadm-set-container-insecure-registries.yml -e insecure_registry=REGISTRY_URL
Example
[root@admin ~]# ansible-playbook -vvv -i hosts cephadm-set-container-insecure-registries.yml -e insecure_registry=host01:5050
Alternatively, you can use the --limit option to run the preflight playbook on a selected set of hosts in the storage cluster:
Syntax
ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars "ceph_origin=custom" -e "custom_repo_url=CUSTOM_REPO_URL" --limit GROUP_NAME|NODE_NAME
Replace GROUP_NAME with a group name from your inventory file. Replace NODE_NAME with a specific node name from your inventory file.
Example
[ceph-admin@admin cephadm-ansible]$ ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=custom" -e "custom_repo_url=http://mycustomrepo.lab.redhat.com/x86_64/os/" --limit clients
[ceph-admin@admin cephadm-ansible]$ ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=custom" -e "custom_repo_url=http://mycustomrepo.lab.redhat.com/x86_64/os/" --limit host02
Note: When you run the preflight playbook, cephadm-ansible automatically installs chrony and ceph-common on the client nodes.
3.11.8. Performing a disconnected installation
Before you can perform the installation, you must obtain a Red Hat Ceph Storage container image, either from a proxy host that has access to the Red Hat registry or by copying the image to your local registry.
If your local registry uses a self-signed certificate, ensure that you have added the trusted root certificate to the bootstrap host. For more information, see Configuring a private registry for a disconnected installation.
For the latest supported Red Hat Enterprise Linux versions for bootstrap nodes, see the Red Hat Ceph Storage Compatibility Guide.
Before you begin the bootstrapping process, make sure that the container image that you want to use has the same version of Red Hat Ceph Storage as cephadm. If the two versions do not match, bootstrapping fails at the Creating initial admin user stage.
Prerequisites
- At least one running virtual machine (VM) or server.
- Root-level access to all nodes.
- Passwordless ssh is set up on all hosts in the storage cluster.
- The preflight playbook has been run on the bootstrap host in the storage cluster. For more information, see Running the preflight playbook for a disconnected installation.
- A private registry has been configured and the bootstrap node has access to it. For more information, see Configuring a private registry for a disconnected installation.
- A Red Hat Ceph Storage container image resides in the custom registry.
Procedure
- Log in to the bootstrap host.
Bootstrap the storage cluster:
Syntax
cephadm --image PRIVATE_REGISTRY_NODE_FQDN:5000/CUSTOM_IMAGE_NAME:IMAGE_TAG bootstrap --mon-ip IP_ADDRESS --registry-url PRIVATE_REGISTRY_NODE_FQDN:5000 --registry-username PRIVATE_REGISTRY_USERNAME --registry-password PRIVATE_REGISTRY_PASSWORD
- Replace PRIVATE_REGISTRY_NODE_FQDN with the fully qualified domain name of your private registry.
- Replace CUSTOM_IMAGE_NAME and IMAGE_TAG with the name and tag of the Red Hat Ceph Storage container image that resides in the private registry.
- Replace IP_ADDRESS with the IP address of the node you are using to run cephadm bootstrap.
- Replace PRIVATE_REGISTRY_USERNAME with the username to create for the private registry.
Replace PRIVATE_REGISTRY_PASSWORD with the password to create for the private registry username.
Example
[root@host01 ~]# cephadm --image admin.lab.redhat.com:5000/rhceph-6-rhel9:latest bootstrap --mon-ip 10.10.128.68 --registry-url admin.lab.redhat.com:5000 --registry-username myregistryusername --registry-password myregistrypassword1
The script takes a few minutes to complete. Once the script completes, it provides the credentials to the Red Hat Ceph Storage Dashboard URL, a command to access the Ceph command-line interface (CLI), and a request to enable telemetry.
After the bootstrap process is complete, see Changing configurations of custom container images for disconnected installations to configure the container images.
3.11.9. Changing configurations of custom container images for disconnected installations
After you perform the initial bootstrap for disconnected nodes, you must specify custom container images for monitoring stack daemons. You can override the default container images for monitoring stack daemons, since the nodes do not have access to the default container registry.
Make sure that the bootstrap process on the initial host is complete before making any configuration changes.
By default, the monitoring stack components are deployed based on the primary Ceph image. In a disconnected storage cluster environment, you can use the latest available monitoring stack component images.
When using a custom registry, be sure to log in to the custom registry on newly added nodes before adding any Ceph daemons.
Syntax
ceph cephadm registry-login --registry-url CUSTOM_REGISTRY_NAME --registry_username REGISTRY_USERNAME --registry_password REGISTRY_PASSWORD
Example
[root@host01 ~]# ceph cephadm registry-login --registry-url myregistry --registry_username myregistryusername --registry_password myregistrypassword1
Prerequisites
- At least one running virtual machine (VM) or server.
- Red Hat Enterprise Linux 9.0 or later with ansible-core bundled into AppStream.
- Root-level access to all nodes.
- Passwordless ssh is set up on all hosts in the storage cluster.
- For the latest supported Red Hat Enterprise Linux versions for bootstrap nodes, see the Red Hat Ceph Storage Compatibility Guide.
Procedure
Set the custom container images with the ceph config command:
Syntax
ceph config set mgr mgr/cephadm/OPTION_NAME CUSTOM_REGISTRY_NAME/CONTAINER_NAME
Use the following options for OPTION_NAME:
container_image_prometheus
container_image_grafana
container_image_alertmanager
container_image_node_exporter
Example
[root@host01 ~]# ceph config set mgr mgr/cephadm/container_image_prometheus myregistry/mycontainer
[root@host01 ~]# ceph config set mgr mgr/cephadm/container_image_grafana myregistry/mycontainer
[root@host01 ~]# ceph config set mgr mgr/cephadm/container_image_alertmanager myregistry/mycontainer
[root@host01 ~]# ceph config set mgr mgr/cephadm/container_image_node_exporter myregistry/mycontainer
Redeploy node-exporter:
Syntax
ceph orch redeploy node-exporter
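Rather than running the four ceph config set commands one by one, the option names can be looped over. The sketch below only prints the commands for review (the image path myregistry/mycontainer is the illustrative value from the example above); pipe the output to a shell inside the cephadm shell once you are satisfied with it:

```shell
# Build one "ceph config set" line per monitoring-stack option and print
# them for review; nothing is executed against the cluster here.
image="myregistry/mycontainer"   # illustrative registry/image path
cmds=""
for opt in container_image_prometheus container_image_grafana \
           container_image_alertmanager container_image_node_exporter; do
  cmds="${cmds}ceph config set mgr mgr/cephadm/${opt} ${image}
"
done
printf '%s' "$cmds"
```

Printing first keeps the step auditable; the same loop shape works with ceph config rm when reverting to the defaults.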
If any of the services do not deploy, you can redeploy them with the ceph orch redeploy command.
Setting a custom image overrides the default values for the configuration image name and tag, but does not overwrite them; the default values still change when updates become available. A component for which you have set a custom image is excluded from automatic updates, so to install updates for that component you must manually update the configuration image name and tag.
If you choose to revert to using the default configuration, you can reset the custom container image. Use ceph config rm to reset the configuration option:
Syntax
ceph config rm mgr mgr/cephadm/OPTION_NAME
Example
ceph config rm mgr mgr/cephadm/container_image_prometheus
3.12. Distributing SSH keys
You can use the cephadm-distribute-ssh-key.yml playbook to distribute the SSH keys instead of creating and distributing the keys manually. The playbook distributes an SSH public key over all hosts in the inventory.
You can also generate an SSH key pair on the Ansible administration node and distribute the public key to each node in the storage cluster so that Ansible can access the nodes without being prompted for a password.
Prerequisites
- Ansible is installed on the administration node.
- Access to the Ansible administration node.
- Ansible user with sudo access to all nodes in the storage cluster.
- Bootstrapping is completed. See the Bootstrapping a new storage cluster section in the Red Hat Ceph Storage Installation Guide.
Procedure
Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node:
Example
[ansible@admin ~]$ cd /usr/share/cephadm-ansible
From the Ansible administration node, distribute the SSH keys. The optional cephadm_pubkey_path parameter is the full path name of the SSH public key file on the Ansible controller host.
Note
If cephadm_pubkey_path is not specified, the playbook gets the key from the cephadm get-pub-key command. This implies that you have at least bootstrapped a minimal cluster.
Syntax
ansible-playbook -i INVENTORY_HOST_FILE cephadm-distribute-ssh-key.yml -e cephadm_ssh_user=USER_NAME -e cephadm_pubkey_path=/home/cephadm/ceph.key -e admin_node=ADMIN_NODE_NAME_1
Example
[ansible@admin cephadm-ansible]$ ansible-playbook -i hosts cephadm-distribute-ssh-key.yml -e cephadm_ssh_user=ceph-admin -e cephadm_pubkey_path=/home/cephadm/ceph.key -e admin_node=host01
[ansible@admin cephadm-ansible]$ ansible-playbook -i hosts cephadm-distribute-ssh-key.yml -e cephadm_ssh_user=ceph-admin -e admin_node=host01
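The manual alternative mentioned at the start of this section (generate a key pair on the Ansible administration node, then push the public key to every node) can be sketched as follows. This is a hedged outline that only prints the commands for review; the key path ~/.ssh/ceph_ansible_key, the ceph-admin user, and the host names are illustrative assumptions:

```shell
# Print (not run) the manual key-generation and distribution steps.
ansible_user="ceph-admin"            # assumed Ansible user
hosts="host01 host02 host03"         # assumed cluster hosts
steps="ssh-keygen -t ed25519 -N '' -f ~/.ssh/ceph_ansible_key
"
for h in $hosts; do
  steps="${steps}ssh-copy-id -i ~/.ssh/ceph_ansible_key.pub ${ansible_user}@${h}
"
done
printf '%s' "$steps"
```

Each ssh-copy-id invocation prompts once for the remote user's password; after that, Ansible can reach the nodes without a password.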
3.13. Launching the cephadm shell
The cephadm shell command launches a bash shell in a container with all of the Ceph packages installed. This enables you to perform “Day One” cluster setup tasks, such as installation and bootstrapping, and to invoke ceph commands.
Prerequisites
- A storage cluster that has been installed and bootstrapped.
- Root-level access to all nodes in the storage cluster.
Procedure
There are two ways to launch the cephadm shell:
Enter cephadm shell at the system prompt. This example invokes the ceph -s command from within the shell.
Example
[root@host01 ~]# cephadm shell
[ceph: root@host01 /]# ceph -s
At the system prompt, type cephadm shell and the command you want to execute:
Example
[root@host01 ~]# cephadm shell ceph -s
If the node contains configuration and keyring files in /etc/ceph/, the container environment uses the values in those files as defaults for the cephadm shell. If you execute the cephadm shell on a MON node, the cephadm shell inherits its default configuration from the MON container, instead of using the default configuration.
3.14. Verifying the cluster installation
Once the cluster installation is complete, you can verify that the Red Hat Ceph Storage 6 installation is running properly.
There are two ways of verifying the storage cluster installation as a root user:
- Run the podman ps command.
- Run the cephadm shell ceph -s command.
Prerequisites
- Root-level access to all nodes in the storage cluster.
Procedure
Run the podman ps command:
Example
[root@host01 ~]# podman ps
Note
In Red Hat Ceph Storage 6, the format of the systemd units has changed. In the NAMES column, the unit files now include the FSID.
Run the cephadm shell ceph -s command:
Example
[root@host01 ~]# cephadm shell ceph -s
Note
The health of the storage cluster is in HEALTH_WARN status as the hosts and the daemons are not added.
3.15. Adding hosts
Bootstrapping the Red Hat Ceph Storage installation creates a working storage cluster, consisting of one Monitor daemon and one Manager daemon within the same container. As a storage administrator, you can add additional hosts to the storage cluster and configure them.
- Running the preflight playbook installs podman, lvm2, chrony, and cephadm on all hosts listed in the Ansible inventory file. When using a custom registry, be sure to log in to the custom registry on newly added nodes before adding any Ceph daemons.
Syntax
# ceph cephadm registry-login --registry-url CUSTOM_REGISTRY_NAME --registry_username REGISTRY_USERNAME --registry_password REGISTRY_PASSWORD
Example
# ceph cephadm registry-login --registry-url myregistry --registry_username myregistryusername --registry_password myregistrypassword1
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access, or a user with sudo access, to all nodes in the storage cluster.
- Register the nodes to the CDN and attach subscriptions.
- Ansible user with sudo and passwordless ssh access to all nodes in the storage cluster.
Procedure
In the following procedure, use either root, as indicated, or the user name with which the cluster was bootstrapped.
From the node that contains the admin keyring, install the storage cluster’s public SSH key in the root user’s authorized_keys file on the new host:
Syntax
ssh-copy-id -f -i /etc/ceph/ceph.pub user@NEWHOST
Example
[root@host01 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@host02
[root@host01 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@host03
Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node.
Example
[ceph-admin@admin ~]$ cd /usr/share/cephadm-ansible
From the Ansible administration node, add the new host to the Ansible inventory file. The default location for the file is /usr/share/cephadm-ansible/hosts.
Note
If you have previously added the new host to the Ansible inventory file and run the preflight playbook on the host, skip to step 4.
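A minimal sketch of the hosts inventory file referenced above, in plain Ansible INI format. The host names are illustrative, and the [admin] group shown for the node holding the admin keyring is a common convention in cephadm-ansible examples, so verify the group names your playbooks expect:

```ini
host02
host03
host04

[admin]
host01
```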
Run the preflight playbook with the --limit option:
Syntax
ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars "ceph_origin=rhcs" --limit NEWHOST
Example
[ceph-admin@admin cephadm-ansible]$ ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=rhcs" --limit host02
The preflight playbook installs podman, lvm2, chrony, and cephadm on the new host. After installation is complete, cephadm resides in the /usr/sbin/ directory.
From the bootstrap node, use the cephadm orchestrator to add the new host to the storage cluster:
Syntax
ceph orch host add NEWHOST
Example
[ceph: root@host01 /]# ceph orch host add host02
Added host 'host02' with addr '10.10.128.69'
[ceph: root@host01 /]# ceph orch host add host03
Added host 'host03' with addr '10.10.128.70'
Optional: You can also add nodes by IP address, before and after you run the preflight playbook. If you do not have DNS configured in your storage cluster environment, you can add the hosts by IP address, along with the host names.
Syntax
ceph orch host add HOSTNAME IP_ADDRESS
Example
[ceph: root@host01 /]# ceph orch host add host02 10.10.128.69
Added host 'host02' with addr '10.10.128.69'
Verification
View the status of the storage cluster and verify that the new host has been added. The STATUS of the hosts is blank in the output of the ceph orch host ls command.
Example
[ceph: root@host01 /]# ceph orch host ls
3.15.1. Using the addr option to identify hosts
The addr option offers an additional way to contact a host. Add the IP address of the host to the addr option. If ssh cannot connect to the host by its hostname, then it uses the value stored in addr to reach the host by its IP address.
Prerequisites
- A storage cluster that has been installed and bootstrapped.
- Root-level access to all nodes in the storage cluster.
Procedure
Run this procedure from inside the cephadm shell.
Add the IP address:
Syntax
ceph orch host add HOSTNAME IP_ADDR
Example
[ceph: root@host01 /]# ceph orch host add host01 10.10.128.68
If adding a host by hostname results in that host being added with an IPv6 address instead of an IPv4 address, use ceph orch host set-addr to specify the IP address of that host:
ceph orch host set-addr HOSTNAME IP_ADDR
To update the stored address of a host that you have added from IPv6 format to IPv4 format, use the following command:
ceph orch host set-addr HOSTNAME IPV4_ADDRESS
3.15.2. Adding multiple hosts
Use a YAML file to add multiple hosts to the storage cluster at the same time.
Be sure to create the hosts.yaml file within a host container, or create the file on the local host and then use the cephadm shell to mount the file within the container. The cephadm shell automatically places mounted files in /mnt. If you create the file directly on the local host and then apply the hosts.yaml file instead of mounting it, you might see a File does not exist error.
Prerequisites
- A storage cluster that has been installed and bootstrapped.
- Root-level access to all nodes in the storage cluster.
Procedure
- Copy over the public ssh key to each of the hosts that you want to add.
- Use a text editor to create a hosts.yaml file.
Add the host descriptions to the hosts.yaml file, as shown in the following example. Include the labels to identify placements for the daemons that you want to deploy on each host. Separate each host description with three dashes (---).
Example
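A hosts.yaml of the shape described above might look like the following. Host names, addresses, and labels are illustrative; addr can be omitted when DNS resolves the host names:

```yaml
service_type: host
hostname: host02
addr: 10.10.128.69
labels:
- mon
- osd
- mgr
---
service_type: host
hostname: host03
addr: 10.10.128.70
labels:
- mon
- osd
- mgr
---
service_type: host
hostname: host04
addr: 10.10.128.71
labels:
- mon
- osd
```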
If you created the hosts.yaml file within the host container, invoke the ceph orch apply command:
Example
[root@host01 ~]# ceph orch apply -i hosts.yaml
Added host 'host02' with addr '10.10.128.69'
Added host 'host03' with addr '10.10.128.70'
Added host 'host04' with addr '10.10.128.71'
If you created the hosts.yaml file directly on the local host, use the cephadm shell to mount the file:
Example
[root@host01 ~]# cephadm shell --mount hosts.yaml -- ceph orch apply -i /mnt/hosts.yaml
View the list of hosts and their labels:
Example
[root@host01 ~]# ceph orch host ls
HOST    ADDR    LABELS       STATUS
host02  host02  mon,osd,mgr
host03  host03  mon,osd,mgr
host04  host04  mon,osd
Note
If a host is online and operating normally, its status is blank. An offline host shows a status of OFFLINE, and a host in maintenance mode shows a status of MAINTENANCE.
3.15.3. Adding hosts in disconnected deployments
If you are running a storage cluster on a private network and your host domain name server (DNS) cannot be reached through private IP, you must include both the host name and the IP address for each host you want to add to the storage cluster.
Prerequisites
- A running storage cluster.
- Root-level access to all hosts in the storage cluster.
Procedure
Invoke the cephadm shell:
Example
[root@host01 ~]# cephadm shell
Add the host:
Syntax
ceph orch host add HOST_NAME HOST_ADDRESS
Example
[ceph: root@host01 /]# ceph orch host add host03 10.10.128.70
3.15.4. Removing hosts
You can remove hosts from a Ceph cluster with the Ceph Orchestrator. All of the daemons are removed with the drain option, which adds the _no_schedule label to ensure that you cannot deploy any daemons on the host until the operation is complete.
If you are removing the bootstrap host, be sure to copy the admin keyring and the configuration file to another host in the storage cluster before you remove the host.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to all the nodes.
- Hosts are added to the storage cluster.
- All the services are deployed.
- Cephadm is deployed on the nodes where the services have to be removed.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Fetch the host details:
Example
[ceph: root@host01 /]# ceph orch host ls
Drain all the daemons from the host:
Syntax
ceph orch host drain HOSTNAME
Example
[ceph: root@host01 /]# ceph orch host drain host02
The _no_schedule label is automatically applied to the host, which blocks deployment.
Check the status of OSD removal:
Example
[ceph: root@host01 /]# ceph orch osd rm status
When no placement groups (PG) are left on the OSD, the OSD is decommissioned and removed from the storage cluster.
Check if all the daemons are removed from the storage cluster:
Syntax
ceph orch ps HOSTNAME
Example
[ceph: root@host01 /]# ceph orch ps host02
Remove the host:
Syntax
ceph orch host rm HOSTNAME
Example
[ceph: root@host01 /]# ceph orch host rm host02
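Because the drain is asynchronous, a scripted removal usually polls ceph orch osd rm status until it reports nothing before running ceph orch host rm. The sketch below takes the status command as a parameter so the loop logic can be exercised without a cluster; in production you would pass the real command, for example cephadm shell -- ceph orch osd rm status:

```shell
# Poll the given status command until it produces no output, then report.
# "true" (which prints nothing) stands in for a finished drain below.
wait_for_drain() {
  status_cmd="$1"
  while [ -n "$($status_cmd 2>/dev/null)" ]; do
    sleep 5   # polling interval is an arbitrary illustrative choice
  done
  echo "drain complete"
}
wait_for_drain "true"
```

Only after the function returns would the script proceed to ceph orch host rm.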
3.16. Labeling hosts
The Ceph orchestrator supports assigning labels to hosts. Labels are free-form and have no specific meanings. This means that you can use mon, monitor, mycluster_monitor, or any other text string. Each host can have multiple labels.
For example, apply the mon label to all hosts on which you want to deploy Ceph Monitor daemons, mgr for all hosts on which you want to deploy Ceph Manager daemons, rgw for Ceph Object Gateway daemons, and so on.
Labeling all the hosts in the storage cluster helps to simplify system management tasks by allowing you to quickly identify the daemons running on each host. In addition, you can use the Ceph orchestrator or a YAML file to deploy or remove daemons on hosts that have specific host labels.
3.16.1. Adding a label to a host
Use the Ceph Orchestrator to add a label to a host. Labels can be used to specify placement of daemons.
A few examples of labels are mgr, mon, and osd based on the service deployed on the hosts. Each host can have multiple labels.
You can also add the following host labels that have special meaning to cephadm. These labels begin with _:
- _no_schedule: This label prevents cephadm from scheduling or deploying daemons on the host. If it is added to an existing host that already contains Ceph daemons, it causes cephadm to move those daemons elsewhere, except OSDs, which are not removed automatically. When a host is added with the _no_schedule label, no daemons are deployed on it. When the daemons are drained before the host is removed, the _no_schedule label is set on that host.
- _no_autotune_memory: This label prevents memory autotuning on the host. It prevents daemon memory from being tuned even when the osd_memory_target_autotune option or other similar options are enabled for one or more daemons on that host.
- _admin: By default, the _admin label is applied to the bootstrapped host in the storage cluster and the client.admin key is set to be distributed to that host with the ceph orch client-keyring {ls|set|rm} function. Adding this label to additional hosts normally causes cephadm to deploy configuration and keyring files in the /etc/ceph directory.
Prerequisites
- A storage cluster that has been installed and bootstrapped.
- Root-level access to all nodes in the storage cluster.
- Hosts are added to the storage cluster.
Procedure
Log in to the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Add a label to a host:
Syntax
ceph orch host label add HOSTNAME LABEL
Example
[ceph: root@host01 /]# ceph orch host label add host02 mon
Verification
List the hosts:
Example
[ceph: root@host01 /]# ceph orch host ls
3.16.2. Removing a label from a host
You can use the Ceph orchestrator to remove a label from a host.
Prerequisites
- A storage cluster that has been installed and bootstrapped.
- Root-level access to all nodes in the storage cluster.
Procedure
Launch the cephadm shell:
[root@host01 ~]# cephadm shell
[ceph: root@host01 /]#
Remove the label:
Syntax
ceph orch host label rm HOSTNAME LABEL
Example
[ceph: root@host01 /]# ceph orch host label rm host02 mon
Verification
List the hosts:
Example
[ceph: root@host01 /]# ceph orch host ls
3.16.3. Using host labels to deploy daemons on specific hosts
You can use host labels to deploy daemons to specific hosts. There are two ways to use host labels to deploy daemons on specific hosts:
- By using the --placement option from the command line.
- By using a YAML file.
Prerequisites
- A storage cluster that has been installed and bootstrapped.
- Root-level access to all nodes in the storage cluster.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
List current hosts and labels:
Example
[ceph: root@host01 /]# ceph orch host ls
HOST    ADDR  LABELS               STATUS
host01        _admin,mon,osd,mgr
host02        mon,osd,mgr,mylabel
Method 1: Use the --placement option to deploy a daemon from the command line:
Syntax
ceph orch apply DAEMON --placement="label:LABEL"
Example
[ceph: root@host01 /]# ceph orch apply prometheus --placement="label:mylabel"
Method 2: To assign the daemon to a specific host label in a YAML file, specify the service type and label in the YAML file:
Create the placement.yml file:
Example
[ceph: root@host01 /]# vi placement.yml
Specify the service type and label in the placement.yml file:
Example
service_type: prometheus
placement:
  label: "mylabel"
Apply the daemon placement file:
Syntax
ceph orch apply -i FILENAME
Example
[ceph: root@host01 /]# ceph orch apply -i placement.yml
Scheduled prometheus update…
Verification
List the status of the daemons:
Syntax
ceph orch ps --daemon_type=DAEMON_NAME
Example
[ceph: root@host01 /]# ceph orch ps --daemon_type=prometheus
NAME               HOST    PORTS   STATUS        REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID
prometheus.host02  host02  *:9095  running (2h)  8m ago     2h   85.3M    -        2.22.2   ac25aac5d567  ad8c7593d7c0
3.17. Adding Monitor service
A typical Red Hat Ceph Storage storage cluster has three or five monitor daemons deployed on different hosts. If your storage cluster has five or more hosts, Red Hat recommends that you deploy five Monitor nodes.
In the case of a firewall, see the Firewall settings for Ceph Monitor node section of the Red Hat Ceph Storage Configuration Guide for details.
The bootstrap node is the initial monitor of the storage cluster. Be sure to include the bootstrap node in the list of hosts to which you want to deploy.
If you want to apply Monitor service to more than one specific host, be sure to specify all of the host names within the same ceph orch apply command. If you specify ceph orch apply mon --placement host1 and then specify ceph orch apply mon --placement host2, the second command removes the Monitor service on host1 and applies a Monitor service to host2.
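The warning above is why scripted deployments typically assemble one apply command that names every Monitor host at once. A hedged sketch that only builds and prints the command (host names are illustrative):

```shell
# Build a single "ceph orch apply mon" command covering all Monitor hosts,
# since running separate apply commands replaces the placement each time.
mon_hosts="host01 host02 host03"   # illustrative host names
cmd="ceph orch apply mon --placement=\"${mon_hosts}\""
echo "$cmd"
```

Reviewing the printed command before running it inside the cephadm shell avoids accidentally shrinking the Monitor placement.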
If your Monitor nodes or your entire cluster are located on a single subnet, then cephadm automatically adds up to five Monitor daemons as you add new hosts to the cluster, and automatically configures the Monitor daemons on the new hosts, provided that the new hosts reside on the same subnet as the first (bootstrap) host in the storage cluster. cephadm can also deploy and scale monitors to correspond to changes in the size of the storage cluster.
Prerequisites
- Root-level access to all hosts in the storage cluster.
- A running storage cluster.
Procedure
Apply the five Monitor daemons to five random hosts in the storage cluster:
ceph orch apply mon 5
Disable automatic Monitor deployment:
ceph orch apply mon --unmanaged
3.17.1. Adding Monitor nodes to specific hosts
Use host labels to identify the hosts that contain Monitor nodes.
Prerequisites
- Root-level access to all nodes in the storage cluster.
- A running storage cluster.
Procedure
Assign the mon label to the host:
Syntax
ceph orch host label add HOSTNAME mon
Example
[ceph: root@host01 /]# ceph orch host label add host01 mon
View the current hosts and labels:
Syntax
ceph orch host ls
Example
[ceph: root@host01 /]# ceph orch host ls
Deploy monitors based on the host label:
Syntax
ceph orch apply mon label:mon
Deploy monitors on a specific set of hosts:
Syntax
ceph orch apply mon HOSTNAME1,HOSTNAME2,HOSTNAME3
Example
[root@host01 ~]# ceph orch apply mon host01,host02,host03
Note
Be sure to include the bootstrap node in the list of hosts to which you want to deploy.
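The label-based deployment above can also be expressed as a service specification and applied with ceph orch apply -i. A minimal sketch; the count and label values are illustrative:

```yaml
service_type: mon
placement:
  count: 3
  label: mon
```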
3.18. Setting up the admin node
Use an admin node to administer the storage cluster.
An admin node contains both the cluster configuration file and the admin keyring. Both of these files are stored in the directory /etc/ceph and use the name of the storage cluster as a prefix.
For example, the default ceph cluster name is ceph. In a cluster using the default name, the admin keyring is named /etc/ceph/ceph.client.admin.keyring. The corresponding cluster configuration file is named /etc/ceph/ceph.conf.
To set up additional hosts in the storage cluster as admin nodes, apply the _admin label to the host you want to designate as an administrator node.
By default, after applying the _admin label to a node, cephadm copies the ceph.conf and client.admin keyring files to that node. The _admin label is automatically applied to the bootstrap node unless the --skip-admin-label option was specified with the cephadm bootstrap command.
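Whether the label took effect can be spot-checked on the newly designated admin node, since cephadm copies both files into /etc/ceph. A hedged sketch that only reports what it finds; on a machine that is not an admin node, both files are reported missing:

```shell
# Report whether the two files cephadm distributes to _admin nodes exist.
report=""
for f in /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring; do
  if [ -f "$f" ]; then
    report="${report}present: ${f}
"
  else
    report="${report}missing: ${f}
"
  fi
done
printf '%s' "$report"
```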
Prerequisites
- A running storage cluster with cephadm installed.
- The storage cluster has running Monitor and Manager nodes.
- Root-level access to all nodes in the cluster.
Procedure
Use ceph orch host ls to view the hosts in your storage cluster:
Example
[ceph: root@host01 /]# ceph orch host ls
Use the _admin label to designate the admin host in your storage cluster. For best results, this host should have both Monitor and Manager daemons running.
Syntax
ceph orch host label add HOSTNAME _admin
Example
[root@host01 ~]# ceph orch host label add host03 _admin

Verify that the admin host has the _admin label:

Example

[root@host01 ~]# ceph orch host ls

- Log in to the admin node to manage the storage cluster.
3.18.1. Deploying Ceph Monitor nodes using host labels
A typical Red Hat Ceph Storage cluster has three or five Ceph Monitor daemons deployed on different hosts. If your storage cluster has five or more hosts, Red Hat recommends that you deploy five Ceph Monitor nodes.
If your Ceph Monitor nodes or your entire cluster are located on a single subnet, then cephadm automatically adds up to five Ceph Monitor daemons as you add new nodes to the cluster. cephadm automatically configures the Ceph Monitor daemons on the new nodes. The new nodes reside on the same subnet as the first (bootstrap) node in the storage cluster. cephadm can also deploy and scale monitors to correspond to changes in the size of the storage cluster.
Use host labels to identify the hosts that contain Ceph Monitor nodes.
Prerequisites
- Root-level access to all nodes in the storage cluster.
- A running storage cluster.
Procedure
Assign the mon label to the host:
Syntax
ceph orch host label add HOSTNAME mon
Example
[ceph: root@host01 /]# ceph orch host label add host02 mon
[ceph: root@host01 /]# ceph orch host label add host03 mon

View the current hosts and labels:
Syntax
ceph orch host ls
Example

[ceph: root@host01 /]# ceph orch host ls

Deploy Ceph Monitor daemons based on the host label:
Syntax
ceph orch apply mon label:mon
Deploy Ceph Monitor daemons on a specific set of hosts:
Syntax
ceph orch apply mon HOSTNAME1,HOSTNAME2,HOSTNAME3
Example
[ceph: root@host01 /]# ceph orch apply mon host01,host02,host03
Note: Be sure to include the bootstrap node in the list of hosts to which you want to deploy.
3.18.2. Adding Ceph Monitor nodes by IP address or network name
A typical Red Hat Ceph Storage cluster has three or five Monitor daemons deployed on different hosts. If your storage cluster has five or more hosts, Red Hat recommends that you deploy five Monitor nodes.
If your Monitor nodes or your entire cluster are located on a single subnet, then cephadm automatically adds up to five Monitor daemons as you add new nodes to the cluster. You do not need to configure the Monitor daemons on the new nodes. The new nodes reside on the same subnet as the first node in the storage cluster. The first node in the storage cluster is the bootstrap node. cephadm can also deploy and scale monitors to correspond to changes in the size of the storage cluster.
Prerequisites
- Root-level access to all nodes in the storage cluster.
- A running storage cluster.
Procedure
To deploy each additional Ceph Monitor node:
Syntax
ceph orch apply mon NODE:IP_ADDRESS_OR_NETWORK_NAME [NODE:IP_ADDRESS_OR_NETWORK_NAME...]
Example
[ceph: root@host01 /]# ceph orch apply mon host02:10.10.128.69 host03:mynetwork
3.19. Adding Manager service
cephadm automatically installs a Manager daemon on the bootstrap node during the bootstrapping process. Use the Ceph orchestrator to deploy additional Manager daemons.
The Ceph orchestrator deploys two Manager daemons by default. To deploy a different number of Manager daemons, specify a different number. If you do not specify the hosts where the Manager daemons should be deployed, the Ceph orchestrator randomly selects the hosts and deploys the Manager daemons to them.
If you want to apply Manager daemons to more than one specific host, be sure to specify all of the host names within the same ceph orch apply command. If you specify ceph orch apply mgr --placement host1 and then specify ceph orch apply mgr --placement host2, the second command removes the Manager daemon on host1 and applies a Manager daemon to host2.
Red Hat recommends that you use the --placement option to deploy to specific hosts.
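The replace-not-append behavior described above can be modeled with a short Python sketch (a hypothetical model of the declarative semantics, not cephadm internals):

```python
# Each "ceph orch apply mgr --placement ..." call declares the complete
# desired placement; it replaces the previous spec rather than adding to it.
def apply_mgr(cluster_state, placement_hosts):
    cluster_state["mgr"] = set(placement_hosts)  # whole spec replaced
    return cluster_state

state = {}
apply_mgr(state, ["host1"])
apply_mgr(state, ["host2"])            # removes the Manager on host1
print(sorted(state["mgr"]))            # ['host2']

apply_mgr(state, ["host1", "host2"])   # correct: all hosts in one command
print(sorted(state["mgr"]))            # ['host1', 'host2']
```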
Prerequisites
- A running storage cluster.
Procedure
To specify that you want to apply a certain number of Manager daemons to randomly selected hosts:
Syntax
ceph orch apply mgr NUMBER_OF_DAEMONS
Example
[ceph: root@host01 /]# ceph orch apply mgr 3
To add Manager daemons to specific hosts in your storage cluster:
Syntax
ceph orch apply mgr --placement "HOSTNAME1 HOSTNAME2 HOSTNAME3"
Example
[ceph: root@host01 /]# ceph orch apply mgr --placement "host02 host03 host04"
3.20. Adding OSDs
Cephadm will not provision an OSD on a device that is not available. A storage device is considered available if it meets all of the following conditions:
- The device must have no partitions.
- The device must not be mounted.
- The device must not contain a file system.
- The device must not contain a Ceph BlueStore OSD.
- The device must be larger than 5 GB.
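The availability checks above can be summarized as a predicate; this Python sketch is a model of the listed conditions, not cephadm's actual implementation:

```python
# A device is "available" for OSD provisioning only when all of the
# listed conditions hold. Field names here are illustrative.
def device_available(dev):
    return (
        not dev.get("partitions")         # no partitions
        and not dev.get("mounted")        # not mounted
        and not dev.get("filesystem")     # no file system
        and not dev.get("bluestore_osd")  # no existing BlueStore OSD
        and dev.get("size_gb", 0) > 5     # larger than 5 GB
    )

print(device_available({"size_gb": 100}))                  # True
print(device_available({"size_gb": 100, "mounted": True})) # False
print(device_available({"size_gb": 4}))                    # False: too small
```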
By default, the osd_memory_target_autotune parameter is set to true in Red Hat Ceph Storage 6.0. For more information about tuning OSD memory, see the Automatically tuning OSD memory section in the Red Hat Ceph Storage Operations Guide.
Prerequisites
- A running Red Hat Ceph Storage cluster.
Procedure
List the available devices to deploy OSDs:
Syntax
ceph orch device ls [--hostname=HOSTNAME1 HOSTNAME2] [--wide] [--refresh]
Example
[ceph: root@host01 /]# ceph orch device ls --wide --refresh
You can either deploy the OSDs on specific hosts or on all the available devices:
To create an OSD from a specific device on a specific host:
Syntax
ceph orch daemon add osd HOSTNAME:DEVICE_PATH
Example
[ceph: root@host01 /]# ceph orch daemon add osd host02:/dev/sdb
To deploy OSDs on any available and unused devices, use the --all-available-devices option.

Example
[ceph: root@host01 /]# ceph orch apply osd --all-available-devices
This command creates colocated WAL and DB daemons. If you want to create non-colocated daemons, do not use this command.
3.21. Running the cephadm-clients playbook
The cephadm-clients.yml playbook handles the distribution of configuration and admin keyring files to a group of Ceph clients.
If you do not specify a configuration file when you run the playbook, the playbook will generate and distribute a minimal configuration file. By default, the generated file is located at /etc/ceph/ceph.conf.
If you are not using the cephadm-ansible playbooks, after upgrading your Ceph cluster, you must upgrade the ceph-common package and client libraries on your client nodes. For more information, see Upgrading the Red Hat Ceph Storage cluster section in the Red Hat Ceph Storage Upgrade Guide.
Prerequisites
- Root-level access to the Ansible administration node.
- Ansible user with sudo and passwordless ssh access to all nodes in the storage cluster.
- The cephadm-ansible package is installed.
- The preflight playbook has been run on the initial host in the storage cluster. For more information, see Running the preflight playbook.
- The client_group variable must be specified in the Ansible inventory file.
- The [admin] group is defined in the inventory file with a node where the admin keyring is present at /etc/ceph/ceph.client.admin.keyring.
Procedure
- Navigate to the /usr/share/cephadm-ansible directory.
Run the cephadm-clients.yml playbook on the initial host in the group of clients. Use the full path name to the admin keyring on the admin host for PATH_TO_KEYRING. Optional: If you want to specify an existing configuration file to use, specify the full path to the configuration file for CONFIG_FILE. Use the Ansible group name for the group of clients for ANSIBLE_GROUP_NAME. Use the FSID of the cluster where the admin keyring and configuration files are stored for FSID. The default path for the FSID is /var/lib/ceph/.

Syntax
ansible-playbook -i hosts cephadm-clients.yml --extra-vars '{"fsid":"FSID", "client_group":"ANSIBLE_GROUP_NAME", "keyring":"PATH_TO_KEYRING", "conf":"CONFIG_FILE"}'

Example
[ceph-admin@admin cephadm-ansible]$ ansible-playbook -i hosts cephadm-clients.yml --extra-vars '{"fsid":"be3ca2b2-27db-11ec-892b-005056833d58","client_group":"fs_clients","keyring":"/etc/ceph/fs.keyring", "conf": "/etc/ceph/ceph.conf"}'
After installation is complete, the specified clients in the group have the admin keyring. If you did not specify a configuration file, cephadm-ansible creates a minimal default configuration file on each client.
3.22. Managing operating system tuning profiles with cephadm
As a storage administrator, you can use cephadm to create and manage operating system tuning profiles that apply a set of sysctl settings to a given set of hosts in your Red Hat Ceph Storage cluster. Tuning the operating system can improve the performance of your Red Hat Ceph Storage cluster.
Additional Resources
- For more information about configuring kernel parameters, see the sysctl(8) man page.
- For more information about tuned profiles, see Customizing TuneD profiles.
3.22.1. Creating tuning profiles
You can create a tuning profile by creating a YAML specification file with kernel parameters or by defining kernel parameter settings using the orchestrator CLI.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to an admin host.
- Installation of the tuned package.
Method 1:
Create a tuning profile by creating and applying a YAML specification:
From a Ceph admin host, create a YAML specification file:
Syntax
touch TUNED_PROFILE_NAME.yaml
Example
[root@host01 ~]# touch mon_hosts_profile.yaml

Edit the YAML file to include the tuning parameters:
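A minimal specification sketch, assuming the profile_name, placement, and settings fields of the cephadm tuned-profile format (host names and setting values are illustrative):

```yaml
profile_name: mon_hosts_profile
placement:
  hosts:
    - host01
    - host02
settings:
  fs.file-max: 1000000
  vm.swappiness: '13'
```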
Apply the tuning profile:
Syntax
ceph orch tuned-profile apply -i TUNED_PROFILE_NAME.yaml
Example
[root@host01 ~]# ceph orch tuned-profile apply -i mon_hosts_profile.yaml
Saved tuned profile mon_hosts_profile

This example writes the profile to /etc/sysctl.d/ on host01 and host02 and runs sysctl --system on each host to reload sysctl variables without rebooting.

Note: Cephadm writes the profile file under /etc/sysctl.d/ as TUNED_PROFILE_NAME-cephadm-tuned-profile.conf, where TUNED_PROFILE_NAME is the profile_name you specify in the provided YAML specification. The sysctl command applies settings in lexicographical order by the file name the setting occurs in. If multiple files contain the same setting, the entry in the file with the lexicographically latest name takes precedence. To ensure you apply settings before or after other configuration files that may exist, set the profile_name in your specification file accordingly.

Note: Cephadm applies sysctl settings only at the host level, not to any particular daemon or container.
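The lexicographical-precedence rule in the first note can be illustrated with a short Python sketch (file names and values are hypothetical):

```python
# Sketch: how sysctl --system resolves duplicate settings across
# /etc/sysctl.d/ files -- the lexicographically latest file name wins.
def effective_settings(files):
    """files: dict mapping file name -> dict of sysctl settings."""
    merged = {}
    # Files are read in lexicographical (sorted) order, so a later
    # file silently overrides an earlier one for the same key.
    for name in sorted(files):
        merged.update(files[name])
    return merged

files = {
    "50-default.conf": {"vm.swappiness": "60"},
    "my_profile-cephadm-tuned-profile.conf": {"vm.swappiness": "13"},
}
print(effective_settings(files)["vm.swappiness"])  # 13
```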
Method 2:
Create a tuning profile by using the orchestrator CLI:
From a Ceph admin host, specify the tuning profile name, placement, and settings:
Syntax
ceph orch tuned-profile apply PROFILE_NAME --placement='HOST1,HOST2' --settings='SETTING_NAME1=VALUE1,SETTING_NAME2=VALUE2'
Example
[root@host01 ~]# ceph orch tuned-profile apply osd_hosts_profile --placement='host04,host05' --settings='fs.file-max=200000,vm.swappiness=19'
Saved tuned profile osd_hosts_profile
Verification
List the tuning profiles that cephadm is managing:

Example

[root@host01 ~]# ceph orch tuned-profile ls
3.22.2. Viewing tuning profiles
You can view all the tuning profiles that cephadm manages by running the tuned-profile ls command.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to an admin host.
- Installation of the tuned package.
Procedure
From a Ceph admin host, list the tuning profiles:
Syntax
ceph orch tuned-profile ls
Example

[root@host01 ~]# ceph orch tuned-profile ls

Note: If you need to make modifications and re-apply a profile, passing the --format yaml parameter to the tuned-profile ls command presents the profiles in a format that you can copy and re-apply.
3.22.3. Modifying tuning profiles
After you create tuning profiles, you can modify the existing tuning profiles to adjust sysctl settings when needed.
You can modify existing tuning profiles in two ways:
- Re-apply a YAML specification with the same profile name.
- Use the tuned-profile add-setting and rm-setting parameters to adjust a setting.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to an admin host.
- Installation of the tuned package.
Method 1:
Modify a setting using the tuned-profile add-setting and rm-setting parameters:

From a Ceph admin host, add or modify a setting for an existing profile:
Syntax
ceph orch tuned-profile add-setting PROFILE_NAME SETTING_NAME VALUE
Example
[root@host01 ~]# ceph orch tuned-profile add-setting mon_hosts_profile vm.vfs_cache_pressure 110
Added setting vm.vfs_cache_pressure with value 110 to tuned profile mon_hosts_profile

To remove a setting from an existing profile:
Syntax
ceph orch tuned-profile rm-setting PROFILE_NAME SETTING_NAME
Example
[root@host01 ~]# ceph orch tuned-profile rm-setting mon_hosts_profile vm.vfs_cache_pressure
Removed setting vm.vfs_cache_pressure from tuned profile mon_hosts_profile
Method 2:
Modify a setting by re-applying a YAML specification with the same profile name:
From a Ceph admin host, create the YAML specification file or modify an existing specification file:
Syntax
vi TUNED_PROFILE_NAME.yaml
Example
[root@host01 ~]# vi mon_hosts_profile.yaml

Edit the YAML file to include the tuned parameters you want to modify:
Apply the tuning profile:
Syntax
ceph orch tuned-profile apply -i TUNED_PROFILE_NAME.yaml
Example
[root@host01 ~]# ceph orch tuned-profile apply -i mon_hosts_profile.yaml
Saved tuned profile mon_hosts_profile

Note: Modifying the placement requires re-applying a profile with the same name. Cephadm tracks profiles by name, so applying a profile with the same name as an existing profile overwrites the old profile.
3.22.4. Removing tuning profiles
As a storage administrator, you can use the tuned-profile rm command to remove tuning profiles that you no longer want cephadm to manage.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to an admin host.
- Installation of the tuned package.
Procedure
From a Ceph admin host, view the tuning profiles that cephadm is managing:

Example

[root@host01 ~]# ceph orch tuned-profile ls

Remove the tuning profile:
Syntax
ceph orch tuned-profile rm TUNED_PROFILE_NAME
Example
[root@host01 ~]# ceph orch tuned-profile rm mon_hosts_profile
Removed tuned profile mon_hosts_profile

When cephadm removes a tuning profile, it removes the profile file previously written to the /etc/sysctl.d directory on the corresponding host.
3.23. Purging the Ceph storage cluster
Purging the Ceph storage cluster clears any data or connections that remain from previous deployments on your server. For Red Hat Enterprise Linux 8, this Ansible script removes all daemons, logs, and data that belong to the FSID passed to the script from all hosts in the storage cluster. For Red Hat Enterprise Linux 9, use the cephadm rm-cluster command since Ansible is not supported.
For Red Hat Enterprise Linux 8
The Ansible inventory file lists all the hosts in your cluster and what roles each host plays in your Ceph storage cluster. The default location for an inventory file is /usr/share/cephadm-ansible/hosts, but this file can be placed anywhere.
This process works only if the cephadm binary is installed on all hosts in the storage cluster.
The following example shows the structure of an inventory file:
Example
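A minimal inventory sketch, assuming three hosts with host01 serving as the admin node (host names are illustrative):

```ini
host02
host03

[admin]
host01
```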
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Ansible 2.12 or later is installed on the bootstrap node.
- Root-level access to the Ansible administration node.
- Ansible user with sudo and passwordless ssh access to all nodes in the storage cluster.
- The [admin] group is defined in the inventory file with a node where the admin keyring is present at /etc/ceph/ceph.client.admin.keyring.
Procedure
As an Ansible user on the bootstrap node, run the purge script:
Syntax
ansible-playbook -i hosts cephadm-purge-cluster.yml -e fsid=FSID -vvv
Example
[ceph-admin@host01 cephadm-ansible]$ ansible-playbook -i hosts cephadm-purge-cluster.yml -e fsid=a6ca415a-cde7-11eb-a41a-002590fc2544 -vvv

Note: An additional extra-var (-e ceph_origin=rhcs) is required to zap the disk devices during the purge.

When the script has completed, the entire storage cluster, including all OSD disks, will have been removed from all hosts in the cluster.
For Red Hat Enterprise Linux 9
Prerequisites
- A running Red Hat Ceph Storage cluster.
Procedure
Disable cephadm to stop all orchestration operations and avoid deploying new daemons:

Example

[ceph: root@host01 /]# ceph mgr module disable cephadm

Get the FSID of the cluster:
Example
[ceph: root@host01 /]# ceph fsid

Exit the cephadm shell:
Example
[ceph: root@host01 /]# exit

Purge the Ceph daemons from all hosts in the cluster:
Syntax
cephadm rm-cluster --force --zap-osds --fsid FSID
Example
[root@host01 ~]# cephadm rm-cluster --force --zap-osds --fsid a6ca415a-cde7-11eb-a41a-002590fc2544