Chapter 2. Upgrading a Red Hat Ceph Storage cluster running Red Hat Enterprise Linux 8 from RHCS 4 to RHCS 5
As a storage administrator, you can upgrade a Red Hat Ceph Storage cluster running Red Hat Enterprise Linux 8 from Red Hat Ceph Storage 4 to Red Hat Ceph Storage 5. The upgrade process includes the following tasks:
- Use Ansible playbooks to upgrade a Red Hat Ceph Storage 4 storage cluster to Red Hat Ceph Storage 5.
ceph-ansible is currently not supported with Red Hat Ceph Storage 5. This means that once you have migrated your storage cluster to Red Hat Ceph Storage 5, you must use cephadm and cephadm-ansible to perform subsequent updates.
While upgrading from Red Hat Ceph Storage 4 to Red Hat Ceph Storage 5, do not set the bluestore_fsck_quick_fix_on_mount parameter to true and do not run the ceph-bluestore-tool --path PATH_TO_OSD --command quick-fix|repair commands, as doing so might lead to improperly formatted OMAP keys and cause data corruption.
Upgrading to Red Hat Ceph Storage 5.2 from Red Hat Ceph Storage 5.0 on Ceph Object Gateway storage clusters (single-site or multi-site) is supported, but you must run the ceph config set mgr mgr/cephadm/no_five_one_rgw true --force command prior to upgrading your storage cluster.
Upgrading to Red Hat Ceph Storage 5.2 from Red Hat Ceph Storage 5.1 on Ceph Object Gateway storage clusters (single-site or multi-site) is not supported due to a known issue. For more information, see the knowledge base article Support Restrictions for upgrades for RADOS Gateway (RGW) on Red Hat Ceph Storage 5.2.
If you are planning to upgrade to Red Hat Ceph Storage 5.0z4, follow the upgrade procedure in the knowledge base article How to upgrade from Red Hat Ceph Storage 4.2z4 to 5.0z4.
The bluefs_buffered_io option is set to True by default for Red Hat Ceph Storage. This option enables BlueFS to perform buffered reads in some cases, and enables the kernel page cache to act as a secondary cache for reads like RocksDB block reads. For example, if the RocksDB block cache is not large enough to hold all blocks during the OMAP iteration, it may be possible to read them from the page cache instead of the disk. This can dramatically improve performance when osd_memory_target is too small to hold all entries in the block cache. Currently, enabling bluefs_buffered_io and disabling the system-level swap prevents performance degradation.
For more information about viewing the current setting for bluefs_buffered_io, see the Viewing the bluefs_buffered_io setting section in the Red Hat Ceph Storage Administration Guide.
Red Hat Ceph Storage 5 supports only containerized daemons. It does not support non-containerized storage clusters. If you are upgrading a non-containerized storage cluster from Red Hat Ceph Storage 4 to Red Hat Ceph Storage 5, the upgrade process includes the conversion to a containerized deployment.
2.1. Prerequisites
- A Red Hat Ceph Storage 4 cluster running Red Hat Enterprise Linux 8.4 or later.
- A valid customer subscription.
- Root-level access to the Ansible administration node.
- Root-level access to all nodes in the storage cluster.
- The Ansible user account for use with the Ansible application.
- Red Hat Ceph Storage tools and Ansible repositories are enabled.
You can manually upgrade the Ceph File System (CephFS) Metadata Server (MDS) software on a Red Hat Ceph Storage cluster and the Red Hat Enterprise Linux operating system to a new major release at the same time. The underlying XFS filesystem must be formatted with ftype=1 or with d_type support. Run the command xfs_info /var to ensure the ftype is set to 1. If the value of ftype is not 1, attach a new disk or create a volume. On top of this new device, create a new XFS filesystem and mount it on /var/lib/containers.
Starting with Red Hat Enterprise Linux 8, mkfs.xfs enables ftype=1 by default.
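If you do need to check ftype and create a new filesystem for /var/lib/containers, the following is a minimal sketch; the device name /dev/vdb is a hypothetical placeholder for the new disk or volume that you attached:
Example
[root@osd ~]# xfs_info /var | grep ftype          # verify ftype=1 in the output
[root@osd ~]# mkfs.xfs /dev/vdb                   # new XFS filesystem on the hypothetical device
[root@osd ~]# mkdir -p /var/lib/containers
[root@osd ~]# mount /dev/vdb /var/lib/containers
[root@osd ~]# echo '/dev/vdb /var/lib/containers xfs defaults 0 0' >> /etc/fstab   # persist the mount across reboots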
2.2. Compatibility considerations between RHCS and podman versions
podman and Red Hat Ceph Storage have different end-of-life strategies that might make it challenging to find compatible versions.
If you plan to upgrade from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8 as part of the Ceph upgrade process, make sure that the version of podman is compatible with Red Hat Ceph Storage 5.
Red Hat recommends using the podman version shipped with the corresponding Red Hat Enterprise Linux version for Red Hat Ceph Storage 5. See the Red Hat Ceph Storage: Supported configurations knowledge base article for more details. See the Contacting Red Hat support for service section in the Red Hat Ceph Storage Troubleshooting Guide for additional assistance.
Red Hat Ceph Storage 5 is compatible with podman versions 2.0.0 and later, except for version 2.2.1. Version 2.2.1 is not compatible with Red Hat Ceph Storage 5.
The following table shows version compatibility between Red Hat Ceph Storage 5 and versions of podman.
Ceph          | Podman 1.9 | Podman 2.0 | Podman 2.1 | Podman 2.2 | Podman 3.0
--------------|------------|------------|------------|------------|-----------
5.0 (Pacific) | false      | true       | true       | false      | true
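To confirm which podman version is installed on a host before you upgrade, you can query the binary directly; the version shown in the output below is only illustrative:
Example
[root@host ~]# podman --version
podman version 3.0.1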
2.3. Preparing for an upgrade
As a storage administrator, you can upgrade your Ceph storage cluster to Red Hat Ceph Storage 5. However, some components of your storage cluster must be running specific software versions before an upgrade can take place. The following list shows the minimum software versions that must be installed on your storage cluster before you can upgrade to Red Hat Ceph Storage 5.
- Red Hat Ceph Storage 4.3 or later.
- Ansible 2.9.
- Ceph-ansible shipped with the latest version of Red Hat Ceph Storage.
- Red Hat Enterprise Linux 8.4 EUS or later.
- FileStore OSDs must be migrated to BlueStore. For more information about converting OSDs from FileStore to BlueStore, refer to BlueStore.
There is no direct upgrade path from Red Hat Ceph Storage versions earlier than Red Hat Ceph Storage 4.3. If you are upgrading from Red Hat Ceph Storage 3, you must first upgrade to Red Hat Ceph Storage 4.3 or later, and then upgrade to Red Hat Ceph Storage 5.
You can only upgrade to the latest version of Red Hat Ceph Storage 5. For example, if version 5.1 is available, you cannot upgrade from 4 to 5.0; you must go directly to 5.1.
New deployments of Red Hat Ceph Storage 4.3.z1 on Red Hat Enterprise Linux 8.7 (or higher), and upgrades of Red Hat Ceph Storage 4.3.z1 to 5.X with Red Hat Enterprise Linux 8.7 (or higher) as the host OS, fail at TASK [ceph-mgr : wait for all mgr to be up]. The behavior of podman released with Red Hat Enterprise Linux 8.7 changed with respect to SELinux relabeling. As a result, depending on their startup order, some Ceph containers fail to start because they do not have access to the files they need.
As a workaround, refer to the knowledge base article RHCS 4.3 installation fails while executing the command `ceph mgr dump`.
To upgrade your storage cluster to Red Hat Ceph Storage 5, Red Hat recommends that your cluster be running Red Hat Ceph Storage 4.3 or later. Refer to the Knowledgebase article What are the Red Hat Ceph Storage Releases?. This article contains download links to the most recent versions of the Ceph packages and ceph-ansible.
The upgrade process uses Ansible playbooks to upgrade a Red Hat Ceph Storage 4 storage cluster to Red Hat Ceph Storage 5. If your Red Hat Ceph Storage 4 cluster is a non-containerized cluster, the upgrade process includes a step to transform the cluster into a containerized version. Red Hat Ceph Storage 5 does not run on non-containerized clusters.
If you have a mirroring or multisite configuration, upgrade one cluster at a time. Make sure that each upgraded cluster is running properly before upgrading another cluster.
leapp does not support upgrades for encrypted OSDs or OSDs that have encrypted partitions. If your OSDs are encrypted and you are upgrading the host OS, disable dmcrypt in ceph-ansible before upgrading the OS. For more information about using leapp, refer to Upgrading from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8.
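As an illustration only, the dmcrypt setting is typically controlled through the ceph-ansible group_vars/osds.yml file; verify the exact variable name against the osds.yml.sample shipped with your ceph-ansible version before relying on this sketch:
dmcrypt: false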
Perform the first three steps in this procedure only if the storage cluster is not already running the latest version of Red Hat Ceph Storage 4. The latest version of Red Hat Ceph Storage 4 should be 4.3 or later.
Prerequisites
- A running Red Hat Ceph Storage 4 cluster.
- Sudo-level access to all nodes in the storage cluster.
- A valid customer subscription.
- Root-level access to the Ansible administration node.
- The Ansible user account for use with the Ansible application.
- Red Hat Ceph Storage tools and Ansible repositories are enabled.
Procedure
Enable the Ceph and Ansible repositories on the Ansible administration node:
Example
[root@admin ceph-ansible]# subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms
Update Ansible:
Example
[root@admin ceph-ansible]# dnf update ansible ceph-ansible
If the storage cluster you want to upgrade contains Ceph Block Device images that use the exclusive-lock feature, ensure that all Ceph Block Device users have permissions to create a denylist for clients:
Syntax
ceph auth caps client.ID mon 'profile rbd' osd 'profile rbd pool=POOL_NAME_1, profile rbd pool=POOL_NAME_2'
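For example, with a hypothetical client ID of 1 and hypothetical pools named volumes and images:
Example
[root@admin ceph-ansible]# ceph auth caps client.1 mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=images'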
If the storage cluster was originally installed using Cockpit, create a symbolic link in the /usr/share/ceph-ansible directory to the inventory file that Cockpit created at /usr/share/ansible-runner-service/inventory/hosts:
Change to the /usr/share/ceph-ansible directory:
# cd /usr/share/ceph-ansible
Create the symbolic link:
# ln -s /usr/share/ansible-runner-service/inventory/hosts hosts
To upgrade the cluster using ceph-ansible, create a symbolic link to the /etc/ansible/hosts inventory file:
# ln -s /etc/ansible/hosts hosts
If the storage cluster was originally installed using Cockpit, copy the Cockpit-generated SSH keys to the Ansible user’s ~/.ssh directory:
Copy the keys:
Syntax
cp /usr/share/ansible-runner-service/env/ssh_key.pub /home/ANSIBLE_USERNAME/.ssh/id_rsa.pub
cp /usr/share/ansible-runner-service/env/ssh_key /home/ANSIBLE_USERNAME/.ssh/id_rsa
Replace ANSIBLE_USERNAME with the user name for Ansible. The usual default user name is admin.
Example
# cp /usr/share/ansible-runner-service/env/ssh_key.pub /home/admin/.ssh/id_rsa.pub
# cp /usr/share/ansible-runner-service/env/ssh_key /home/admin/.ssh/id_rsa
Set the appropriate owner, group, and permissions on the key files:
Syntax
# chown ANSIBLE_USERNAME:ANSIBLE_USERNAME /home/ANSIBLE_USERNAME/.ssh/id_rsa.pub
# chown ANSIBLE_USERNAME:ANSIBLE_USERNAME /home/ANSIBLE_USERNAME/.ssh/id_rsa
# chmod 644 /home/ANSIBLE_USERNAME/.ssh/id_rsa.pub
# chmod 600 /home/ANSIBLE_USERNAME/.ssh/id_rsa
Replace ANSIBLE_USERNAME with the username for Ansible. The usual default user name is admin.
Example
# chown admin:admin /home/admin/.ssh/id_rsa.pub
# chown admin:admin /home/admin/.ssh/id_rsa
# chmod 644 /home/admin/.ssh/id_rsa.pub
# chmod 600 /home/admin/.ssh/id_rsa
Additional Resources
- What are the Red Hat Ceph Storage Releases?
- For more information about converting from FileStore to BlueStore, refer to BlueStore.
2.4. Backing up the files before the host OS upgrade
Perform the procedure in this section only if you are upgrading the host OS. If you are not upgrading the host OS, skip this section.
Before you can perform the upgrade procedure, you must make backup copies of the files that you customized for your storage cluster, including keyring files and the yml files for your configuration, as the ceph.conf file gets overridden when you execute any playbook.
Prerequisites
- A running Red Hat Ceph Storage 4 cluster.
- A valid customer subscription.
- Root-level access to the Ansible administration node.
- The Ansible user account for use with the Ansible application.
- Red Hat Ceph Storage Tools and Ansible repositories are enabled.
Procedure
- Make a backup copy of the /etc/ceph and /var/lib/ceph folders.
- Make a backup copy of the ceph.client.admin.keyring file.
- Make backup copies of the ceph.conf files from each node.
- Make backup copies of the /etc/ganesha/ folder on each node.
- If the storage cluster has RBD mirroring defined, then make backup copies of the /etc/ceph folder and the group_vars/rbdmirrors.yml file.
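The exact commands depend on your environment, but the following is a minimal sketch of one way to take these copies, assuming a hypothetical backup directory named /root/rhcs4-backup on each node and the default /usr/share/ceph-ansible path on the Ansible administration node:
Example
[root@mon ~]# mkdir -p /root/rhcs4-backup
[root@mon ~]# cp -a /etc/ceph /var/lib/ceph /root/rhcs4-backup/     # keyrings, ceph.conf, and daemon data
[root@mon ~]# cp -a /etc/ganesha /root/rhcs4-backup/
[root@admin ~]# mkdir -p /root/rhcs4-backup
[root@admin ~]# cp -a /usr/share/ceph-ansible/group_vars/rbdmirrors.yml /root/rhcs4-backup/   # only if RBD mirroring is defined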
2.5. Converting to a containerized deployment
This procedure is required for non-containerized clusters. If your storage cluster is a non-containerized cluster, this procedure transforms the cluster into a containerized version.
Red Hat Ceph Storage 5 supports container-based deployments only. A cluster needs to be containerized before upgrading to RHCS 5.x.
If your Red Hat Ceph Storage 4 storage cluster is already containerized, skip this section.
This procedure stops and restarts a daemon. If the playbook stops executing during this procedure, be sure to analyze the state of the cluster before restarting.
Prerequisites
- A running non-containerized Red Hat Ceph Storage 4 cluster.
- Root-level access to all nodes in the storage cluster.
- A valid customer subscription.
- Root-level access to the Ansible administration node.
- The Ansible user account for use with the Ansible application.
Procedure
- If you are running a multisite setup, set rgw_multisite: false in all.yml.
- Ensure the group_vars/all.yml file has the following default values for the configuration parameters:
ceph_docker_image_tag: "latest"
ceph_docker_registry: "registry.redhat.io"
ceph_docker_image: rhceph/rhceph-4-rhel8
containerized_deployment: true
Note: These values differ if you use a local registry and a custom image name.
Optional: If two-way RBD mirroring was configured using the command-line interface in a bare-metal storage cluster, the cluster does not migrate RBD mirroring. For such a configuration, follow the steps below before migrating the non-containerized storage cluster to a containerized deployment:
Create a user on the Ceph client node:
Syntax
ceph auth get client.PRIMARY_CLUSTER_NAME -o /etc/ceph/ceph.PRIMARY_CLUSTER_NAME.keyring
Example
[root@rbd-client-site-a ~]# ceph auth get client.rbd-mirror.site-a -o /etc/ceph/ceph.client.rbd-mirror.site-a.keyring
Change the username in the auth file in the /etc/ceph directory:
Example
[client.rbd-mirror.rbd-client-site-a]
    key = AQCbKbVg+E7POBAA7COSZCodvOrg2LWIFc9+3g==
    caps mds = "allow *"
    caps mgr = "allow *"
    caps mon = "allow *"
    caps osd = "allow *"
Import the auth file to add relevant permissions:
Syntax
ceph auth import -i PATH_TO_KEYRING
Example
[root@rbd-client-site-a ~]# ceph auth import -i /etc/ceph/ceph.client.rbd-mirror.rbd-client-site-a.keyring
Check the service name of the RBD mirror node:
Example
[root@rbd-client-site-a ~]# systemctl list-units --all
systemctl stop ceph-rbd-mirror@rbd-client-site-a.service
systemctl disable ceph-rbd-mirror@rbd-client-site-a.service
systemctl reset-failed ceph-rbd-mirror@rbd-client-site-a.service
systemctl start ceph-rbd-mirror@rbd-mirror.rbd-client-site-a.service
systemctl enable ceph-rbd-mirror@rbd-mirror.rbd-client-site-a.service
systemctl status ceph-rbd-mirror@rbd-mirror.rbd-client-site-a.service
Add the rbd-mirror node to the /etc/ansible/hosts file:
Example
[rbdmirrors]
ceph.client.rbd-mirror.rbd-client-site-a
If you are using daemons that are not containerized, convert them to containerized format:
Syntax
ansible-playbook -vvvv -i INVENTORY_FILE infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml
The -vvvv option collects verbose logs of the conversion process.
Example
[ceph-admin@admin ceph-ansible]$ ansible-playbook -vvvv -i hosts infrastructure-playbooks/switch-from-non-containerized-to-containerized-ceph-daemons.yml
Once the playbook completes successfully, set rgw_multisite: true in the all.yml file and ensure the value of containerized_deployment is true.
Note: Ensure that you remove the ceph-iscsi, libtcmu, and tcmu-runner packages from the admin node.
2.6. The upgrade process
As a storage administrator, you use Ansible playbooks to upgrade a Red Hat Ceph Storage 4 storage cluster to Red Hat Ceph Storage 5. The rolling_update.yml Ansible playbook performs upgrades for deployments of Red Hat Ceph Storage. ceph-ansible upgrades the Ceph nodes in the following order:
- Ceph Monitor
- Ceph Manager
- Ceph OSD nodes
- MDS nodes
- Ceph Object Gateway (RGW) nodes
- Ceph RBD-mirror node
- Ceph NFS nodes
- Ceph iSCSI gateway node
- Ceph client nodes
- Ceph-crash daemons
- Node-exporter on all nodes
- Ceph Dashboard
After the storage cluster is upgraded from Red Hat Ceph Storage 4 to Red Hat Ceph Storage 5, the Grafana UI shows two dashboards. This is because the port for Prometheus in Red Hat Ceph Storage 4 is 9092, while for Red Hat Ceph Storage 5 it is 9095. You can remove the Grafana daemon; cephadm redeploys the service and the daemons and removes the old dashboard from the Grafana UI.
Red Hat Ceph Storage 5 supports only containerized deployments.
ceph-ansible is currently not supported with Red Hat Ceph Storage 5. This means that once you have migrated your storage cluster to Red Hat Ceph Storage 5, you must use cephadm to perform subsequent updates.
To deploy a multi-site Ceph Object Gateway with a single realm or with multiple realms, edit the all.yml file. For more information, see Configuring multi-site Ceph Object Gateways in the Red Hat Ceph Storage 4 Installation Guide.
Red Hat Ceph Storage 5 also includes a health check function that returns a DAEMON_OLD_VERSION warning if it detects that any of the daemons in the storage cluster are running multiple versions of Red Hat Ceph Storage. The warning is triggered when the daemons continue to run multiple versions of Red Hat Ceph Storage beyond the time value set in the mon_warn_older_version_delay option. By default, the mon_warn_older_version_delay option is set to one week. This setting allows most upgrades to proceed without falsely seeing the warning. If the upgrade process is paused for an extended time period, you can mute the health warning:
ceph health mute DAEMON_OLD_VERSION --sticky
After the upgrade has finished, unmute the health warning:
ceph health unmute DAEMON_OLD_VERSION
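If you expect the upgrade to remain paused for a long time and prefer to extend the grace period instead of muting the warning, you can raise the delay. This is a sketch that assumes the option is set centrally with ceph config and that the value is expressed in seconds (two weeks in this hypothetical example):
Example
[root@mon ~]# ceph config set mon mon_warn_older_version_delay 1209600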
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to all hosts in the storage cluster.
- A valid customer subscription.
- Root-level access to the Ansible administration node.
- The latest versions of Ansible and ceph-ansible available with Red Hat Ceph Storage 5.
- The ansible user account for use with the Ansible application.
- The nodes of the storage cluster are upgraded to Red Hat Enterprise Linux 8.4 EUS or later.
The Ansible inventory file must be present in the ceph-ansible directory.
Procedure
Enable the Ceph and Ansible repositories on the Ansible administration node:
Red Hat Enterprise Linux 8
subscription-manager repos --enable=rhceph-5-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms
Red Hat Enterprise Linux 9
subscription-manager repos --enable=rhceph-5-tools-for-rhel-9-x86_64-rpms
On the Ansible administration node, ensure that the latest versions of the ansible and ceph-ansible packages are installed.
Syntax
dnf update ansible ceph-ansible
Navigate to the /usr/share/ceph-ansible/ directory:
Example
[root@admin ~]# cd /usr/share/ceph-ansible
If upgrading from Red Hat Ceph Storage 4 to Red Hat Ceph Storage 5, make copies of the group_vars/osds.yml.sample, group_vars/mdss.yml.sample, group_vars/rgws.yml.sample, and group_vars/clients.yml.sample files, and rename them to group_vars/osds.yml, group_vars/mdss.yml, group_vars/rgws.yml, and group_vars/clients.yml respectively.
Example
[root@admin ceph-ansible]# cp group_vars/osds.yml.sample group_vars/osds.yml
[root@admin ceph-ansible]# cp group_vars/mdss.yml.sample group_vars/mdss.yml
[root@admin ceph-ansible]# cp group_vars/rgws.yml.sample group_vars/rgws.yml
[root@admin ceph-ansible]# cp group_vars/clients.yml.sample group_vars/clients.yml
- If upgrading from Red Hat Ceph Storage 4 to Red Hat Ceph Storage 5, edit the group_vars/all.yml file to add Red Hat Ceph Storage 5 details.
Once you have done the above two steps, copy the settings from the old yaml files to the new yaml files. Do not change the values of ceph_rhcs_version, ceph_docker_image, and grafana_container_image, as the values for these configuration parameters are for Red Hat Ceph Storage 5. This ensures that all the settings related to your cluster are present in the current yaml file.
Example
fetch_directory: ~/ceph-ansible-keys
monitor_interface: eth0
public_network: 192.168.0.0/24
ceph_docker_registry_auth: true
ceph_docker_registry_username: SERVICE_ACCOUNT_USER_NAME
ceph_docker_registry_password: TOKEN
dashboard_admin_user: DASHBOARD_ADMIN_USERNAME
dashboard_admin_password: DASHBOARD_ADMIN_PASSWORD
grafana_admin_user: GRAFANA_ADMIN_USER
grafana_admin_password: GRAFANA_ADMIN_PASSWORD
radosgw_interface: eth0
ceph_docker_image: "rhceph/rhceph-5-rhel8"
ceph_docker_image_tag: "latest"
ceph_docker_registry: "registry.redhat.io"
node_exporter_container_image: registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.6
grafana_container_image: registry.redhat.io/rhceph/rhceph-5-dashboard-rhel8:5
prometheus_container_image: registry.redhat.io/openshift4/ose-prometheus:v4.6
alertmanager_container_image: registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.6
Note: Ensure the Red Hat Ceph Storage 5 container images are set to the default values.
Edit the group_vars/osds.yml file. Add and set the following options:
Syntax
nb_retry_wait_osd_up: 50
delay_wait_osd_up: 30
Open the group_vars/all.yml file and verify the following values are present from the old all.yml file.
The fetch_directory option is set with the same value from the old all.yml file:
Syntax
fetch_directory: FULL_DIRECTORY_PATH
Replace FULL_DIRECTORY_PATH with a writable location, such as the Ansible user’s home directory.
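For example, assuming the Ansible user's home directory is /home/admin (a hypothetical path):
fetch_directory: /home/admin/ceph-ansible-keys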
If the cluster you want to upgrade contains any Ceph Object Gateway nodes, add the radosgw_interface option:
radosgw_interface: INTERFACE
Replace INTERFACE with the interface to which the Ceph Object Gateway nodes listen.
If your current setup has SSL certificates configured, edit the following:
Syntax
radosgw_frontend_ssl_certificate: /etc/pki/ca-trust/extracted/CERTIFICATE_NAME
radosgw_frontend_port: 443
Uncomment the upgrade_ceph_packages option and set it to True:
Syntax
:Syntax
upgrade_ceph_packages: True
If the storage cluster has more than one Ceph Object Gateway instance per node, then uncomment the radosgw_num_instances setting and set it to the number of instances per node in the cluster:
Syntax
radosgw_num_instances : NUMBER_OF_INSTANCES_PER_NODE
Example
radosgw_num_instances : 2
- If the storage cluster has Ceph Object Gateway multi-site defined, check the multisite settings in all.yml to make sure that they contain the same values as they did in the old all.yml file.
If buckets were created with num_shards = 0 or currently have num_shards = 0, manually reshard the buckets before planning an upgrade to Red Hat Ceph Storage 5.3.
Warning: Upgrading to Red Hat Ceph Storage 5.3 from older releases when bucket_index_max_shards is 0 can result in the loss of the Ceph Object Gateway bucket's metadata, leading to the bucket's unavailability when you try to access it. Hence, ensure bucket_index_max_shards is set to 11 shards. If not, modify this configuration at the zonegroup level.
Syntax
radosgw-admin bucket reshard --num-shards 11 --bucket BUCKET_NAME
Example
[ceph: root@host01 /]# radosgw-admin bucket reshard --num-shards 11 --bucket mybucket
- Log in as ansible-user on the Ansible administration node.
Use the --extra-vars option to update the infrastructure-playbooks/rolling_update.yml playbook and to change the health_osd_check_retries and health_osd_check_delay values to 50 and 30, respectively:
Example
[root@admin ceph-ansible]# ansible-playbook -i hosts infrastructure-playbooks/rolling_update.yml --extra-vars "health_osd_check_retries=50 health_osd_check_delay=30"
For each OSD node, these values cause ceph-ansible to check the storage cluster health every 30 seconds, up to 50 times. This means that ceph-ansible waits up to 25 minutes for each OSD.
Adjust the health_osd_check_retries option value up or down, based on the used storage capacity of the storage cluster. For example, if you are using 218 TB out of 436 TB, or 50% of the storage capacity, then set the health_osd_check_retries option to 50.
/etc/ansible/hosts is the default location for the Ansible inventory file.
Run the rolling_update.yml playbook to convert the storage cluster from Red Hat Ceph Storage 4 to Red Hat Ceph Storage 5:
Syntax
ansible-playbook -vvvv infrastructure-playbooks/rolling_update.yml -i INVENTORY_FILE
The -vvvv option collects verbose logs of the upgrade process.
Example
[ceph-admin@admin ceph-ansible]$ ansible-playbook -vvvv infrastructure-playbooks/rolling_update.yml -i hosts
Important: Using the --limit Ansible option with the rolling_update.yml playbook is not supported.
- Review the Ansible playbook log output to verify the status of the upgrade.
Verification
List all running containers:
Example
[root@mon ~]# podman ps
Check the health status of the cluster. Replace MONITOR_ID with the name of the Ceph Monitor container found in the previous step:
Syntax
podman exec ceph-mon-MONITOR_ID ceph -s
Example
[root@mon ~]# podman exec ceph-mon-mon01 ceph -s
Verify the Ceph cluster daemon versions to confirm the upgrade of all daemons. Replace MONITOR_ID with the name of the Ceph Monitor container found in the previous step:
Syntax
podman exec ceph-mon-MONITOR_ID ceph --cluster ceph versions
Example
[root@mon ~]# podman exec ceph-mon-mon01 ceph --cluster ceph versions
2.7. Converting the storage cluster to using cephadm
After you have upgraded the storage cluster to Red Hat Ceph Storage 5, run the cephadm-adopt playbook to convert the storage cluster daemons to run cephadm.
The cephadm-adopt playbook adopts the Ceph services, installs all cephadm dependencies, enables the cephadm Orchestrator backend, generates and configures the ssh key on all hosts, and adds the hosts to the Orchestrator configuration.
After you run the cephadm-adopt playbook, remove the ceph-ansible package. The cluster daemons no longer work with ceph-ansible. You must use cephadm to manage the cluster daemons.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to all nodes in the storage cluster.
Procedure
- Log in to the ceph-ansible node and change directory to /usr/share/ceph-ansible.
Edit the all.yml file.
Syntax
ceph_origin: custom/rhcs
ceph_custom_repositories:
  - name: NAME
    state: present
    description: DESCRIPTION
    gpgcheck: 'no'
    baseurl: BASE_URL
    file: FILE_NAME
    priority: '2'
    enabled: 1
Example
ceph_origin: custom
ceph_custom_repositories:
  - name: ceph_custom
    state: present
    description: Ceph custom repo
    gpgcheck: 'no'
    baseurl: https://example.ceph.redhat.com
    file: cephbuild
    priority: '2'
    enabled: 1
  - name: ceph_custom_1
    state: present
    description: Ceph custom repo 1
    gpgcheck: 'no'
    baseurl: https://example.ceph.redhat.com
    file: cephbuild_1
    priority: '2'
    enabled: 1
Run the cephadm-adopt playbook:
Syntax
ansible-playbook infrastructure-playbooks/cephadm-adopt.yml -i INVENTORY_FILE
Example
[ceph-admin@admin ceph-ansible]$ ansible-playbook infrastructure-playbooks/cephadm-adopt.yml -i hosts
Set the minimum compat client parameter to luminous:
Example
[ceph: root@node0 /]# ceph osd set-require-min-compat-client luminous
Run the following command to enable applications to run on the NFS-Ganesha pool. POOL_NAME is nfs-ganesha, and APPLICATION_NAME is the name of the application you want to enable, such as cephfs, rbd, or rgw.
Syntax
ceph osd pool application enable POOL_NAME APPLICATION_NAME
Example
[ceph: root@node0 /]# ceph osd pool application enable nfs-ganesha rgw
Important: The cephadm-adopt playbook does not bring up rbd-mirroring after migrating the storage cluster from Red Hat Ceph Storage 4 to Red Hat Ceph Storage 5.
To work around this issue, add the peers manually:
Syntax
rbd mirror pool peer add POOL_NAME CLIENT_NAME@CLUSTER_NAME
Example
[ceph: root@node0 /]# rbd --cluster site-a mirror pool peer add image-pool client.rbd-mirror-peer@site-b
Remove Grafana after upgrade:
Log in to the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Fetch the name of Grafana in your storage cluster:
Example
[ceph: root@host01 /]# ceph orch ps --daemon_type grafana
Remove Grafana:
Syntax
ceph orch daemon rm GRAFANA_DAEMON_NAME
Example
[ceph: root@host01 /]# ceph orch daemon rm grafana.host01
Removed grafana.host01 from host 'host01'
Wait a few minutes and check the latest log:
Example
[ceph: root@host01 /]# ceph log last cephadm
cephadm redeploys the Grafana service and the daemon.
Additional Resources
- For more information about using leapp to upgrade Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8, see Upgrading from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8.
- For more information about using leapp to upgrade Red Hat Enterprise Linux 8 to Red Hat Enterprise Linux 9, see Upgrading from Red Hat Enterprise Linux 8 to Red Hat Enterprise Linux 9.
- For more information about converting from FileStore to BlueStore, refer to BlueStore.
- For more information about storage peers, see Viewing information about peers.
2.8. Installing cephadm-ansible on an upgraded storage cluster
cephadm-ansible is a collection of Ansible playbooks to simplify workflows that are not covered by cephadm. After installation, the playbooks are located in /usr/share/cephadm-ansible/.
Before adding new nodes or new clients to your upgraded storage cluster, run the cephadm-preflight.yml playbook.
Prerequisites
- Root-level access to the Ansible administration node.
- A valid Red Hat subscription with the appropriate entitlements.
- An active Red Hat Network (RHN) or service account to access the Red Hat Registry.
Procedure
Uninstall ansible and the older ceph-ansible packages:
Syntax
dnf remove ansible ceph-ansible
Disable the Ansible repository and enable the Ceph repositories on the Ansible administration node:
Red Hat Enterprise Linux 8
[root@admin ~]# subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms --enable=rhceph-5-tools-for-rhel-8-x86_64-rpms --disable=ansible-2.9-for-rhel-8-x86_64-rpms
Red Hat Enterprise Linux 9
[root@admin ~]# subscription-manager repos --enable=rhel-9-for-x86_64-baseos-rpms --enable=rhel-9-for-x86_64-appstream-rpms --enable=rhceph-5-tools-for-rhel-9-x86_64-rpms --disable=ansible-2.9-for-rhel-9-x86_64-rpms
Install the cephadm-ansible package, which installs ansible-core as a dependency:
Syntax
dnf install cephadm-ansible
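After the installation, you can run the preflight playbook from the /usr/share/cephadm-ansible/ directory before adding new nodes or clients. The inventory file name hosts is an assumption in this sketch; point -i at your own inventory file:
Example
[ceph-admin@admin cephadm-ansible]$ ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=rhcs"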
Additional Resources
- Running the preflight playbook
- Adding hosts
- Adding Monitor service
- Adding Manager service
- Adding OSDs
- For more information about configuring clients and services, see Red Hat Ceph Storage Operations Guide.
- For more information about the cephadm-ansible playbooks, see The cephadm-ansible playbooks.