Chapter 17. Upgrading Red Hat Ceph Storage 6 to 7
You can upgrade the Red Hat Ceph Storage cluster from Release 6 to 7 after all other upgrade tasks are completed.
Prerequisites
- The upgrade from Red Hat OpenStack Platform 16.2 to 17.1 is complete.
- All Controller nodes are upgraded to Red Hat Enterprise Linux 9. In HCI environments, all Compute nodes must also be upgraded to RHEL 9.
- The current Red Hat Ceph Storage 6 cluster is healthy.
17.1. Director-deployed Red Hat Ceph Storage environments
Perform the following tasks if Red Hat Ceph Storage is director-deployed in your environment.
17.1.1. Updating the cephadm client
Before you upgrade the Red Hat Ceph Storage cluster, you must update the cephadm package in the overcloud nodes to Release 7.
Prerequisites
- Log in to a Controller node and confirm that the health status of the Red Hat Ceph Storage cluster is HEALTH_OK:

  $ sudo cephadm shell -- ceph -s

  If the status is not HEALTH_OK, correct any issues before continuing with this procedure. For more information about troubleshooting Red Hat Ceph Storage 6, see the Red Hat Ceph Storage 6 Troubleshooting Guide.
Procedure
1. Create a playbook to enable the Red Hat Ceph Storage (tools only) repositories on the Controller nodes. It should contain the following information:

   - hosts: all
     gather_facts: false
     tasks:
       - name: Enable RHCS 7 tools repo
         ansible.builtin.command: |
           subscription-manager repos --disable=rhceph-6-tools-for-rhel-9-x86_64-rpms --enable=rhceph-7-tools-for-rhel-9-x86_64-rpms
         become: true
       - name: Update cephadm
         ansible.builtin.package:
           name: cephadm
           state: latest
         become: true

2. Run the playbook:

   ansible-playbook -i ~/overcloud-deploy/<stack>/tripleo-ansible-inventory.yaml <playbook_file_name> --limit <controller_role>

   - Replace <stack> with the name of your stack.
   - Replace <playbook_file_name> with the name of the playbook created in the previous step.
   - Replace <controller_role> with the role applied to Controller nodes. Use the --limit option to apply the content to Controller nodes only.

3. Log in to a Controller node.

4. Verify that the cephadm package is updated to Release 7:

   $ sudo dnf info cephadm | grep -i version
17.1.2. Updating the Red Hat Ceph Storage container image
The containers-prepare-parameter.yaml file contains the ContainerImagePrepare parameter, which defines the Red Hat Ceph Storage containers. This file is used by the openstack tripleo container image prepare command to define the rules for obtaining container images for the undercloud and overcloud. Update this file with the correct image versions before updating your environment.
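The ceph_* entries that the following procedure edits sit inside the set map of the ContainerImagePrepare parameter. A minimal sketch of that layout, with illustrative Red Hat Ceph Storage 7 values (not a complete file):

```yaml
parameter_defaults:
  ContainerImagePrepare:
  - push_destination: true   # push the prepared images to the undercloud registry
    set:
      # Red Hat Ceph Storage 7 container image source
      ceph_namespace: registry.redhat.io
      ceph_image: rhceph-7-rhel9
      ceph_tag: '7'
```

With push_destination: true, preparation uploads the images to the undercloud registry so overcloud nodes pull from there rather than from the remote registry.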
Procedure
1. Locate your container preparation file. The default name of this file is containers-prepare-parameter.yaml.

2. Edit the container preparation file and locate the ceph_tag parameter. The current entry should be similar to the following example:

   ceph_namespace: registry.redhat.io
   ceph_image: rhceph-6-rhel9
   ceph_tag: '6'

3. Update the ceph_tag parameter for Red Hat Ceph Storage 7:

   ceph_namespace: registry.redhat.io
   ceph_image: rhceph-7-rhel9
   ceph_tag: '7'

4. In the same file, replace the Red Hat Ceph monitoring stack container related parameters with the following content:

   ceph_alertmanager_image: ose-prometheus-alertmanager
   ceph_alertmanager_namespace: registry.redhat.io/openshift4
   ceph_alertmanager_tag: v4.15
   ceph_grafana_image: grafana-rhel9
   ceph_grafana_namespace: registry.redhat.io/rhceph
   ceph_grafana_tag: latest
   ceph_node_exporter_image: ose-prometheus-node-exporter
   ceph_node_exporter_namespace: registry.redhat.io/openshift4
   ceph_node_exporter_tag: v4.15
   ceph_prometheus_image: ose-prometheus
   ceph_prometheus_namespace: registry.redhat.io/openshift4
   ceph_prometheus_tag: v4.15

5. Save the file.
17.1.3. Running the container image prepare
Complete the container image preparation process by running the director container preparation command. This prepares all container image configurations for the overcloud and retrieves the latest Red Hat Ceph Storage 7 container image.
If you are using Red Hat Satellite Server to host RPMs and container images for your Red Hat OpenStack Platform (RHOSP) environment, do not perform this procedure. Update Satellite to include the Red Hat Ceph Storage 7 container image and update your containers-prepare-parameter.yaml file to reference the URL of the container image that is hosted on the Satellite server.
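For the Satellite-hosted case described above, the ceph_* parameters point at the Satellite container registry instead of registry.redhat.io. A sketch with placeholder values (the hostname and port are illustrative, not defaults; use the actual registry URL of your Satellite server):

```yaml
# Placeholder values: substitute the container registry URL of your Satellite server.
ceph_namespace: satellite.example.com:5000
ceph_image: rhceph-7-rhel9
ceph_tag: '7'
```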
Procedure
1. Log in to the undercloud host as the stack user.

2. Source the stackrc undercloud credentials file:

   $ source ~/stackrc

3. Run the container preparation command:

   $ openstack tripleo container image prepare -e <container_preparation_file>

   Replace <container_preparation_file> with the name of your file. The default file is containers-prepare-parameter.yaml.

4. Verify that the new Red Hat Ceph Storage image is present in the undercloud registry:

   $ openstack tripleo container image list -f value | awk -F '//' '/ceph/ {print $2}'

5. If you have the Red Hat Ceph Storage Dashboard enabled, verify that the new Red Hat monitoring stack images are present in the undercloud registry:

   $ openstack tripleo container image list -f value | awk -F '//' '/dashboard|grafana|prometheus|alertmanager|node-exporter/ {print $2}'
17.1.4. Configuring Ceph Manager with Red Hat Ceph Storage 7 monitoring stack images
Procedure
1. Log in to a Controller node.

2. List the current images from the Ceph Manager configuration:

   $ sudo cephadm shell -- ceph config dump | grep image

   The following is an example of the command output:

   global basic container_image undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhceph:6-311 *
   mgr advanced mgr/cephadm/container_image_alertmanager undercloud-0.ctlplane.redhat.local:8787/rh-osbs/openshift-ose-prometheus-alertmanager:v4.12 *
   mgr advanced mgr/cephadm/container_image_base undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhceph
   mgr advanced mgr/cephadm/container_image_grafana undercloud-0.ctlplane.redhat.local:8787/rh-osbs/grafana:latest *
   mgr advanced mgr/cephadm/container_image_node_exporter undercloud-0.ctlplane.redhat.local:8787/rh-osbs/openshift-ose-prometheus-node-exporter:v4.12 *
   mgr advanced mgr/cephadm/container_image_prometheus undercloud-0.ctlplane.redhat.local:8787/rh-osbs/openshift-ose-prometheus:v4.12

3. Update the Ceph Manager configuration for the monitoring stack services to use Red Hat Ceph Storage 7 images:

   $ sudo cephadm shell -- ceph config set mgr mgr/cephadm/container_image_alertmanager <alertmanager_image>
   $ sudo cephadm shell -- ceph config set mgr mgr/cephadm/container_image_grafana <grafana_image>
   $ sudo cephadm shell -- ceph config set mgr mgr/cephadm/container_image_node_exporter <node_exporter_image>
   $ sudo cephadm shell -- ceph config set mgr mgr/cephadm/container_image_prometheus <prometheus_image>

   - Replace <alertmanager_image> with the new Alertmanager image.
   - Replace <grafana_image> with the new Grafana image.
   - Replace <node_exporter_image> with the new node exporter image.
   - Replace <prometheus_image> with the new Prometheus image.

   The following is an example of the Alertmanager update command:

   $ sudo cephadm shell -- ceph config set mgr mgr/cephadm/container_image_alertmanager undercloud-0.ctlplane.redhat.local:8787/rh-osbs/openshift-ose-prometheus-alertmanager:v4.15

4. Verify that the new image references are updated in the Red Hat Ceph Storage cluster:

   $ sudo cephadm shell -- ceph config dump | grep image
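The four ceph config set commands in this section can also be driven from a small loop. The following is a sketch only, shown as a dry run; the registry prefix mirrors the example output above and is illustrative, so substitute your own undercloud registry values:

```shell
# Sketch: loop over the four monitoring-stack keys from the procedure above.
# The registry prefix is illustrative; substitute your undercloud registry.
REG="undercloud-0.ctlplane.redhat.local:8787/rh-osbs"
for kv in \
  "alertmanager=${REG}/openshift-ose-prometheus-alertmanager:v4.15" \
  "grafana=${REG}/grafana:latest" \
  "node_exporter=${REG}/openshift-ose-prometheus-node-exporter:v4.15" \
  "prometheus=${REG}/openshift-ose-prometheus:v4.15"
do
  key="${kv%%=*}"   # config key suffix, for example "alertmanager"
  img="${kv#*=}"    # full image reference
  # Drop the leading echo to apply the settings for real.
  echo sudo cephadm shell -- ceph config set mgr "mgr/cephadm/container_image_${key}" "$img"
done
```

Keeping the echo in place lets you review the exact commands before running them against the cluster.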
17.1.5. Upgrading to Red Hat Ceph Storage 7 with Orchestrator
Upgrade to Red Hat Ceph Storage 7 by using the Orchestrator capabilities of the cephadm command.
Prerequisites

- On a Monitor or Controller node that is running the ceph-mon service, confirm the Red Hat Ceph Storage cluster status:

  $ sudo cephadm shell -- ceph status

  This command returns one of three responses:

  - HEALTH_OK - The cluster is healthy. Proceed with the cluster upgrade.
  - HEALTH_WARN - The cluster is unhealthy. Do not proceed with the cluster upgrade until the blocking issues are resolved. For troubleshooting guidance, see the Red Hat Ceph Storage 6 Troubleshooting Guide.
  - HEALTH_ERR - The cluster is unhealthy. Do not proceed with the cluster upgrade until the blocking issues are resolved. For troubleshooting guidance, see the Red Hat Ceph Storage 6 Troubleshooting Guide.
Procedure
1. Log in to a Controller node.

2. Upgrade the cluster to the latest Red Hat Ceph Storage version by following Upgrade a Red Hat Ceph Storage cluster using cephadm in the Red Hat Ceph Storage 7 Upgrade Guide.

   Important: Steps 1 to 4 of the procedure in Upgrading the Red Hat Ceph Storage cluster are not necessary when you upgrade a Red Hat Ceph Storage cluster deployed with Red Hat OpenStack Platform director. Start at step 5 of the procedure.

3. Wait until the Red Hat Ceph Storage container upgrade completes. Monitor the status of the upgrade by using the following command:

   $ sudo cephadm shell -- ceph orch upgrade status
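The ceph orch upgrade status command reports a JSON-style summary that includes an in_progress field. The following sketch shows how that output can be checked in a script; the sample status string is illustrative, not live cluster output:

```shell
# Sketch: decide from a captured `ceph orch upgrade status` output whether the
# upgrade is still running. The JSON below is a sample, not live cluster output.
status='{"target_image": "rhceph-7-rhel9", "in_progress": false, "services_complete": ["mgr", "mon", "osd"]}'
if echo "$status" | grep -q '"in_progress": false'; then
  upgrade_state="complete"
else
  upgrade_state="running"
fi
echo "$upgrade_state"
```

In a live environment, the same check could run inside a polling loop that captures `sudo cephadm shell -- ceph orch upgrade status` and sleeps between iterations.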
17.1.6. Upgrading NFS Ganesha when moving from Red Hat Ceph Storage 6 to 7
When Red Hat Ceph Storage is upgraded from Release 5 to 6, NFS Ganesha is not adopted by the Orchestrator. This means it remains under director control and must be moved manually to Release 7.
Red Hat Ceph Storage 6 based NFS Ganesha with a Red Hat Ceph Storage 7 cluster is only supported during the upgrade period. Once you upgrade the Red Hat Ceph Storage cluster to 7, you must upgrade NFS Ganesha to use a Release 7 based container image.
This procedure only applies to environments that use the Shared File Systems service (manila) with CephFS NFS. Upgrading the Red Hat Ceph Storage container for NFS Ganesha is mandatory in these environments.
Procedure
1. Log in to a Controller node.

2. Inspect the ceph-nfs service:

   $ sudo pcs status | grep ceph-nfs

3. Inspect the ceph-nfs systemd unit to confirm that it contains the Red Hat Ceph Storage 6 container image and tag:

   $ cat /etc/systemd/system/ceph-nfs@.service | grep -i container_image

4. Create a file called /home/stack/ganesha_update_extravars.yaml with the following content:

   tripleo_cephadm_container_image: <ceph_image_name>
   tripleo_cephadm_container_ns: <ceph_image_namespace>
   tripleo_cephadm_container_tag: <ceph_image_tag>

   - Replace <ceph_image_name> with the name of the Red Hat Ceph Storage container image.
   - Replace <ceph_image_namespace> with the name of the Red Hat Ceph Storage container namespace.
   - Replace <ceph_image_tag> with the name of the Red Hat Ceph Storage container tag.

   For example, in a typical environment, this content would have the following values:

   tripleo_cephadm_container_image: rhceph-7-rhel9
   tripleo_cephadm_container_ns: undercloud-0.ctlplane.redhat.local:8787
   tripleo_cephadm_container_tag: '7'

5. Save the file.

6. Run the ceph-update-ganesha.yml playbook and provide the ganesha_update_extravars.yaml file for additional command parameters:

   ansible-playbook -i $HOME/overcloud-deploy/<stack>/config-download/<stack>/tripleo-ansible-inventory.yaml \
     /usr/share/ansible/tripleo-playbooks/ceph-update-ganesha.yml \
     -e @$HOME/overcloud-deploy/<stack>/config-download/<stack>/global_vars.yaml \
     -e @$HOME/overcloud-deploy/<stack>/config-download/<stack>/cephadm/cephadm-extra-vars-heat.yml \
     -e @$HOME/ganesha_update_extravars.yaml

   Replace <stack> with the name of the overcloud stack.

7. Verify that the ceph-nfs service is running:

   $ sudo pcs status | grep ceph-nfs

8. Verify that the ceph-nfs systemd unit contains the Red Hat Ceph Storage 7 container image and tag:

   $ cat /etc/systemd/system/ceph-nfs@.service | grep rhceph
17.2. External Red Hat Ceph Storage cluster environment
Perform the following tasks if your Red Hat Ceph Storage cluster is external to your Red Hat OpenStack Platform deployment.
17.2.1. Updating the Red Hat Ceph Storage container image
The containers-prepare-parameter.yaml file contains the ContainerImagePrepare parameter, which defines the Red Hat Ceph Storage containers. This file is used by the openstack tripleo container image prepare command to define the rules for obtaining container images for the undercloud and overcloud. Update this file with the correct image versions before updating your environment.
Procedure
1. Locate your container preparation file. The default name of this file is containers-prepare-parameter.yaml.

2. Edit the container preparation file and locate the ceph_tag parameter. The current entry should be similar to the following example:

   ceph_namespace: registry.redhat.io
   ceph_image: rhceph-6-rhel9
   ceph_tag: '6'

3. Update the ceph_tag parameter for Red Hat Ceph Storage 7:

   ceph_namespace: registry.redhat.io
   ceph_image: rhceph-7-rhel9
   ceph_tag: '7'

4. In the same file, replace the Red Hat Ceph monitoring stack container related parameters with the following content:

   ceph_alertmanager_image: ose-prometheus-alertmanager
   ceph_alertmanager_namespace: registry.redhat.io/openshift4
   ceph_alertmanager_tag: v4.15
   ceph_grafana_image: grafana-rhel9
   ceph_grafana_namespace: registry.redhat.io/rhceph
   ceph_grafana_tag: latest
   ceph_node_exporter_image: ose-prometheus-node-exporter
   ceph_node_exporter_namespace: registry.redhat.io/openshift4
   ceph_node_exporter_tag: v4.15
   ceph_prometheus_image: ose-prometheus
   ceph_prometheus_namespace: registry.redhat.io/openshift4
   ceph_prometheus_tag: v4.15

5. Save the file.
17.2.2. Running the container image prepare
Complete the container image preparation process by running the director container preparation command. This prepares all container image configurations for the overcloud and retrieves the latest Red Hat Ceph Storage 7 container image.
If you are using Red Hat Satellite Server to host RPMs and container images for your Red Hat OpenStack Platform (RHOSP) environment, do not perform this procedure. Update Satellite to include the Red Hat Ceph Storage 7 container image and update your containers-prepare-parameter.yaml file to reference the URL of the container image that is hosted on the Satellite server.
Procedure
1. Log in to the undercloud host as the stack user.

2. Source the stackrc undercloud credentials file:

   $ source ~/stackrc

3. Run the container preparation command:

   $ openstack tripleo container image prepare -e <container_preparation_file>

   Replace <container_preparation_file> with the name of your file. The default file is containers-prepare-parameter.yaml.

4. Verify that the new Red Hat Ceph Storage image is present in the undercloud registry:

   $ openstack tripleo container image list -f value | awk -F '//' '/ceph/ {print $2}'

5. If you have the Red Hat Ceph Storage Dashboard enabled, verify that the new Red Hat monitoring stack images are present in the undercloud registry:

   $ openstack tripleo container image list -f value | awk -F '//' '/dashboard|grafana|prometheus|alertmanager|node-exporter/ {print $2}'
17.2.3. Upgrading NFS Ganesha when moving from Red Hat Ceph Storage 6 to 7
When Red Hat Ceph Storage is upgraded from Release 5 to 6, NFS Ganesha is not adopted by the Orchestrator. This means it remains under director control and must be moved manually to Release 7.
Red Hat Ceph Storage 6 based NFS Ganesha with a Red Hat Ceph Storage 7 cluster is only supported during the upgrade period. Once you upgrade the Red Hat Ceph Storage cluster to 7, you must upgrade NFS Ganesha to use a Release 7 based container image.
This procedure only applies to environments that use the Shared File Systems service (manila) with CephFS NFS. Upgrading the Red Hat Ceph Storage container for NFS Ganesha is mandatory in these environments.
Procedure
1. Log in to a Controller node.

2. Inspect the ceph-nfs service:

   $ sudo pcs status | grep ceph-nfs

3. Inspect the ceph-nfs systemd unit to confirm that it contains the Red Hat Ceph Storage 6 container image and tag:

   $ cat /etc/systemd/system/ceph-nfs@.service | grep -i container_image

4. Create a file called /home/stack/ganesha_update_extravars.yaml with the following content:

   tripleo_cephadm_container_image: <ceph_image_name>
   tripleo_cephadm_container_ns: <ceph_image_namespace>
   tripleo_cephadm_container_tag: <ceph_image_tag>

   - Replace <ceph_image_name> with the name of the Red Hat Ceph Storage container image.
   - Replace <ceph_image_namespace> with the name of the Red Hat Ceph Storage container namespace.
   - Replace <ceph_image_tag> with the name of the Red Hat Ceph Storage container tag.

   For example, in a typical environment, this content would have the following values:

   tripleo_cephadm_container_image: rhceph-7-rhel9
   tripleo_cephadm_container_ns: undercloud-0.ctlplane.redhat.local:8787
   tripleo_cephadm_container_tag: '7'

5. Save the file.

6. Run the ceph-update-ganesha.yml playbook and provide the ganesha_update_extravars.yaml file for additional command parameters:

   ansible-playbook -i $HOME/overcloud-deploy/<stack>/config-download/<stack>/tripleo-ansible-inventory.yaml \
     /usr/share/ansible/tripleo-playbooks/ceph-update-ganesha.yml \
     -e @$HOME/overcloud-deploy/<stack>/config-download/<stack>/global_vars.yaml \
     -e @$HOME/overcloud-deploy/<stack>/config-download/<stack>/cephadm/cephadm-extra-vars-heat.yml \
     -e @$HOME/ganesha_update_extravars.yaml

   Replace <stack> with the name of the overcloud stack.

7. Verify that the ceph-nfs service is running:

   $ sudo pcs status | grep ceph-nfs

8. Verify that the ceph-nfs systemd unit contains the Red Hat Ceph Storage 7 container image and tag:

   $ cat /etc/systemd/system/ceph-nfs@.service | grep rhceph