Chapter 14. Upgrading Red Hat Ceph Storage 6 to 7


You can upgrade the Red Hat Ceph Storage cluster from Release 6 to 7 after all other upgrade tasks are completed.

Prerequisites

  • The upgrade from Red Hat OpenStack Platform 16.2 to 17.1 is complete.
  • All Controller nodes are upgraded to Red Hat Enterprise Linux 9. In HCI environments, all Compute nodes must also be upgraded to RHEL 9.
  • The current Red Hat Ceph Storage 6 cluster is healthy.

14.1. Director-deployed Red Hat Ceph Storage environments

Perform the following tasks if Red Hat Ceph Storage is director-deployed in your environment.

14.1.1. Updating the cephadm client

Before you upgrade the Red Hat Ceph Storage cluster, you must update the cephadm package in the overcloud nodes to Release 7.

Prerequisites

Confirm that the health status of the Red Hat Ceph Storage cluster is HEALTH_OK. Log in to a Controller node and run the sudo cephadm shell -- ceph -s command to confirm the cluster health. If the status is not HEALTH_OK, correct any issues before continuing with this procedure.
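The prerequisite check above can be scripted. A minimal sketch, assuming the first token of the ceph health output is the status string (health_ok is a hypothetical helper, not part of the product tooling):

```shell
# health_ok succeeds only when the reported status is HEALTH_OK.
health_ok() {
  case "$1" in
    HEALTH_OK*) return 0 ;;   # safe to proceed with the upgrade
    *)          return 1 ;;   # HEALTH_WARN / HEALTH_ERR: stop and investigate
  esac
}

# On a Controller node you would feed it real cluster output:
#   health_ok "$(sudo cephadm shell -- ceph health)" \
#     || { echo "Resolve cluster health issues before upgrading" >&2; exit 1; }
```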

Procedure

  1. Create a playbook to enable the Red Hat Ceph Storage (tools only) repositories on the Controller nodes. It should contain the following information:

    - hosts: all
      gather_facts: false
      tasks:
        - name: Enable RHCS 7 tools repo
          ansible.builtin.shell: |
              subscription-manager repos --disable=rhceph-6-tools-for-rhel-9-x86_64-rpms
              subscription-manager repos --enable=rhceph-7-tools-for-rhel-9-x86_64-rpms
          become: true
        - name: Update cephadm
          ansible.builtin.package:
            name: cephadm
            state: latest
          become: true
  2. Run the playbook:

    ansible-playbook -i ~/overcloud-deploy/<stack>/tripleo-ansible-inventory.yaml <playbook_file_name> --limit <controller_role>

    • Replace <stack> with the name of your stack.
    • Replace <playbook_file_name> with the name of the playbook created in the previous step.
    • Replace <controller_role> with the role applied to Controller nodes.
    • Use the --limit option to apply the content to Controller nodes only.
  3. Log in to a Controller node.
  4. Verify that the cephadm package is updated to Release 7:

    $ sudo dnf info cephadm | grep -i version
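If you want to assert the result rather than inspect it manually, note that Red Hat Ceph Storage 7 is based on Ceph 18.x ("Reef"), so the package version is expected to begin with 18. A sketch (is_rhcs7_cephadm is a hypothetical helper; exact version strings vary by errata):

```shell
# Succeeds when the given cephadm package version belongs to the
# Ceph 18.x (Red Hat Ceph Storage 7) stream.
is_rhcs7_cephadm() {
  case "$1" in
    18.*) return 0 ;;
    *)    return 1 ;;
  esac
}

# On a Controller node:
#   ver=$(sudo dnf info cephadm | awk -F': *' '/^Version/ {print $2; exit}')
#   is_rhcs7_cephadm "$ver" && echo "cephadm is at Release 7 ($ver)"
```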

14.1.2. Updating the Red Hat Ceph Storage container image

The container preparation file (by default, containers-prepare-parameter.yaml) contains the ContainerImagePrepare parameter and defines the Red Hat Ceph Storage containers. The openstack tripleo container image prepare command uses this file to define the rules for obtaining container images for the undercloud and overcloud. Update this file with the correct image version before updating your environment.

Procedure

  1. Locate your container preparation file. The default name of this file is containers-prepare-parameter.yaml.
  2. Edit the container preparation file.
  3. Locate the ceph_tag parameter. The current entry should be similar to the following example:

    ceph_namespace: registry.redhat.io
    ceph_image: rhceph-6-rhel9
    ceph_tag: '6'
  4. Update the ceph_image and ceph_tag parameters for Red Hat Ceph Storage 7:

    ceph_namespace: registry.redhat.io
    ceph_image: rhceph-7-rhel9
    ceph_tag: '7'
  5. In the same file, replace the Red Hat Ceph monitoring stack container-related parameters with the following content:

    ceph_alertmanager_image: ose-prometheus-alertmanager
    ceph_alertmanager_namespace: registry.redhat.io/openshift4
    ceph_alertmanager_tag: v4.15
    ceph_grafana_image: grafana-rhel9
    ceph_grafana_namespace: registry.redhat.io/rhceph
    ceph_grafana_tag: latest
    ceph_node_exporter_image: ose-prometheus-node-exporter
    ceph_node_exporter_namespace: registry.redhat.io/openshift4
    ceph_node_exporter_tag: v4.15
    ceph_prometheus_image: ose-prometheus
    ceph_prometheus_namespace: registry.redhat.io/openshift4
    ceph_prometheus_tag: v4.15
  6. Save the file.
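Steps 3 and 4 can also be scripted. A sketch, assuming the default image and tag values shown above (bump_ceph_release is a hypothetical helper; review the output before replacing the original file):

```shell
# Rewrites the Ceph image name and tag from Release 6 to Release 7 in a
# container preparation file supplied on stdin.
bump_ceph_release() {
  sed -e 's/rhceph-6-rhel9/rhceph-7-rhel9/' \
      -e "s/ceph_tag: '6'/ceph_tag: '7'/"
}

# Usage:
#   bump_ceph_release < containers-prepare-parameter.yaml \
#     > containers-prepare-parameter.yaml.new
```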

14.1.3. Running the container image prepare

Complete the container image preparation process by running the director container preparation command. This prepares all container image configurations for the overcloud and retrieves the latest Red Hat Ceph Storage 7 container image.

Note

If you are using Red Hat Satellite Server to host RPMs and container images for your Red Hat OpenStack Platform (RHOSP) environment, do not perform this procedure. Update Satellite to include the Red Hat Ceph Storage 7 container image and update your containers-prepare-parameter.yaml file to reference the URL of the container image that is hosted on the Satellite server.

Procedure

  1. Log in to the undercloud host as the stack user.
  2. Source the stackrc undercloud credentials file:

    $ source ~/stackrc
  3. Run the container preparation command:

    $ openstack tripleo container image prepare -e <container_preparation_file>

    • Replace <container_preparation_file> with the name of your file. The default file is containers-prepare-parameter.yaml.
  4. Verify that the new Red Hat Ceph Storage image is present in the undercloud registry:

    $ openstack tripleo container image list -f value | awk -F '//' '/ceph/ {print $2}'

  5. Verify that the new Red Hat Ceph Storage monitoring stack images are present in the undercloud registry:

    $ openstack tripleo container image list -f value | awk -F '//' '/grafana|prometheus|alertmanager|node-exporter/ {print $2}'


14.1.4. Configuring Ceph Manager with Red Hat Ceph Storage 7 monitoring stack images

Procedure

  1. Log in to a Controller node.
  2. List the current images from the Ceph Manager configuration:

    $ sudo cephadm shell -- ceph config dump | grep image

    The following is an example of the command output:

    global  basic     container_image                                undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhceph:6-311                                  *
    mgr     advanced  mgr/cephadm/container_image_alertmanager       undercloud-0.ctlplane.redhat.local:8787/rh-osbs/openshift-ose-prometheus-alertmanager:v4.12   *
    mgr     advanced  mgr/cephadm/container_image_base               undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhceph
    mgr     advanced  mgr/cephadm/container_image_grafana            undercloud-0.ctlplane.redhat.local:8787/rh-osbs/grafana:latest                                *
    mgr     advanced  mgr/cephadm/container_image_node_exporter      undercloud-0.ctlplane.redhat.local:8787/rh-osbs/openshift-ose-prometheus-node-exporter:v4.12  *
    mgr     advanced  mgr/cephadm/container_image_prometheus         undercloud-0.ctlplane.redhat.local:8787/rh-osbs/openshift-ose-prometheus:v4.12
  3. Update the Ceph Manager configuration for the monitoring stack services to use Red Hat Ceph Storage 7 images:

    $ sudo cephadm shell -- ceph config set mgr mgr/cephadm/container_image_alertmanager <alertmanager_image>
    $ sudo cephadm shell -- ceph config set mgr mgr/cephadm/container_image_grafana <grafana_image>
    $ sudo cephadm shell -- ceph config set mgr mgr/cephadm/container_image_node_exporter <node_exporter_image>
    $ sudo cephadm shell -- ceph config set mgr mgr/cephadm/container_image_prometheus <prometheus_image>
    • Replace <alertmanager_image> with the new alertmanager image.
    • Replace <grafana_image> with the new grafana image.
    • Replace <node_exporter_image> with the new node exporter image.
    • Replace <prometheus_image> with the new prometheus image.

      The following is an example of the alert manager update command:

      $ sudo cephadm shell -- ceph config set mgr mgr/cephadm/container_image_alertmanager undercloud-0.ctlplane.redhat.local:8787/rh-osbs/openshift-ose-prometheus-alertmanager:v4.15
  4. Verify that the new image references are updated in the Red Hat Ceph Storage cluster:

    $ sudo cephadm shell -- ceph config dump | grep image
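The four ceph config set commands in step 3 follow one pattern, so they can be generated from a list. A dry-run sketch (gen_image_overrides is a hypothetical helper; the registry prefix and tags are examples from this procedure, substitute your own):

```shell
# Prints one `ceph config set` command per monitoring-stack service.
gen_image_overrides() {
  registry=$1
  for pair in \
    alertmanager=openshift-ose-prometheus-alertmanager:v4.15 \
    grafana=grafana:latest \
    node_exporter=openshift-ose-prometheus-node-exporter:v4.15 \
    prometheus=openshift-ose-prometheus:v4.15
  do
    echo "sudo cephadm shell -- ceph config set mgr mgr/cephadm/container_image_${pair%%=*} ${registry}/${pair#*=}"
  done
}

# Review the output, then pipe it to `sh` on the Controller node:
#   gen_image_overrides undercloud-0.ctlplane.redhat.local:8787/rh-osbs
```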

14.1.5. Upgrading to Red Hat Ceph Storage 7 with Orchestrator

Upgrade to Red Hat Ceph Storage 7 by using the Orchestrator capabilities of the cephadm command.

Prerequisites

  • On a Monitor or Controller node that is running the ceph-mon service, confirm the Red Hat Ceph Storage cluster status by using the sudo cephadm shell -- ceph status command. This command returns one of three responses:

    • HEALTH_OK - The cluster is healthy. Proceed with the cluster upgrade.
    • HEALTH_WARN - The cluster is in a warning state. Do not proceed with the cluster upgrade until the blocking issues are resolved. For troubleshooting guidance, see the Red Hat Ceph Storage 6 Troubleshooting Guide.
    • HEALTH_ERR - The cluster is in an error state. Do not proceed with the cluster upgrade until the blocking issues are resolved. For troubleshooting guidance, see the Red Hat Ceph Storage 6 Troubleshooting Guide.

Procedure

  1. Log in to a Controller node.
  2. Upgrade the cluster to the latest Red Hat Ceph Storage version by using Upgrade a Red Hat Ceph Storage cluster using cephadm in the Red Hat Ceph Storage 7 Upgrade Guide.
  3. Wait until the Red Hat Ceph Storage container upgrade completes.

    Note

    Monitor the upgrade status by using the sudo cephadm shell -- ceph orch upgrade status command.
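The note above can be turned into a simple wait loop. The ceph orch upgrade status command prints JSON that includes an in_progress field; a sketch (upgrade_in_progress is a hypothetical helper, and the exact JSON layout is an assumption to verify against your own output):

```shell
# Succeeds while the Orchestrator reports an upgrade in progress
# (reads the JSON status output on stdin).
upgrade_in_progress() {
  grep -q '"in_progress": true'
}

# On the Controller node:
#   while sudo cephadm shell -- ceph orch upgrade status | upgrade_in_progress; do
#     sleep 30
#   done
#   echo "upgrade finished"
```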

14.1.6. Upgrading NFS Ganesha when moving from Red Hat Ceph Storage 6 to 7

When Red Hat Ceph Storage is upgraded from Release 5 to 6, NFS Ganesha is not adopted by the Orchestrator. This means it remains under director control and must be moved manually to Release 7.

Red Hat Ceph Storage 6 based NFS Ganesha with a Red Hat Ceph Storage 7 cluster is only supported during the upgrade period. Once you upgrade the Red Hat Ceph Storage cluster to 7, you must upgrade NFS Ganesha to use a Release 7 based container image.

Note

This procedure only applies to environments that are using the Shared File Systems service (manila) with CephFS NFS. Upgrading the Red Hat Ceph Storage container for NFS Ganesha is mandatory in these environments.

Procedure

  1. Log in to a Controller node.
  2. Inspect the ceph-nfs service:

    $ sudo pcs status | grep ceph-nfs

  3. Inspect the ceph-nfs systemd unit to confirm that it contains the Red Hat Ceph Storage 6 container image and tag:

    $ cat /etc/systemd/system/ceph-nfs@.service | grep -i container_image
  4. Create a file called /home/stack/ganesha_update_extravars.yaml with the following content:

    tripleo_cephadm_container_image: <ceph_image_name>
    tripleo_cephadm_container_ns: <ceph_image_namespace>
    tripleo_cephadm_container_tag: <ceph_image_tag>
    • Replace <ceph_image_name> with the name of the Red Hat Ceph Storage container image.
    • Replace <ceph_image_namespace> with the name of the Red Hat Ceph Storage container namespace.
    • Replace <ceph_image_tag> with the name of the Red Hat Ceph Storage container tag.

      For example, in a typical environment, this content would have the following values:

      tripleo_cephadm_container_image: rhceph-7-rhel9
      tripleo_cephadm_container_ns: undercloud-0.ctlplane.redhat.local:8787
      tripleo_cephadm_container_tag: '7'
  5. Save the file.
  6. Run the ceph-update-ganesha.yml playbook and provide the ganesha_update_extravars.yaml file for additional command parameters:

    ansible-playbook -i $HOME/overcloud-deploy/<stack>/tripleo-ansible-inventory.yaml \
        /usr/share/ansible/tripleo-playbooks/ceph-update-ganesha.yml \
         -e @$HOME/overcloud-deploy/<stack>/config-download/<stack>/global_vars.yaml \
         -e @$HOME/overcloud-deploy/<stack>/config-download/<stack>/cephadm/cephadm-extra-vars-heat.yml \
         -e @$HOME/ganesha_update_extravars.yaml
    • Replace <stack> with the name of the overcloud stack.
  7. Verify that the ceph-nfs service is running:

    $ sudo pcs status | grep ceph-nfs

  8. Verify that the ceph-nfs systemd unit now contains the Red Hat Ceph Storage 7 container image and tag:

    $ cat /etc/systemd/system/ceph-nfs@.service | grep rhceph
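The verification in step 8 can be scripted as a pass/fail check. A sketch (uses_rhcs7_image is a hypothetical helper that reads the systemd unit on stdin):

```shell
# Succeeds when the unit file on stdin references a Release 7 image.
uses_rhcs7_image() {
  grep -q 'rhceph-7'
}

# Usage:
#   uses_rhcs7_image < /etc/systemd/system/ceph-nfs@.service \
#     && echo "ceph-nfs is using the Release 7 image"
```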

14.2. External Red Hat Ceph Storage cluster environment

Perform the following tasks if your Red Hat Ceph Storage cluster is external to your Red Hat OpenStack Platform deployment in your environment.

14.2.1. Updating the Red Hat Ceph Storage container image

The container preparation file (by default, containers-prepare-parameter.yaml) contains the ContainerImagePrepare parameter and defines the Red Hat Ceph Storage containers. The openstack tripleo container image prepare command uses this file to define the rules for obtaining container images for the undercloud and overcloud. Update this file with the correct image version before updating your environment.

Procedure

  1. Locate your container preparation file. The default name of this file is containers-prepare-parameter.yaml.
  2. Edit the container preparation file.
  3. Locate the ceph_tag parameter. The current entry should be similar to the following example:

    ceph_namespace: registry.redhat.io
    ceph_image: rhceph-6-rhel9
    ceph_tag: '6'
  4. Update the ceph_image and ceph_tag parameters for Red Hat Ceph Storage 7:

    ceph_namespace: registry.redhat.io
    ceph_image: rhceph-7-rhel9
    ceph_tag: '7'
  5. In the same file, replace the Red Hat Ceph monitoring stack container-related parameters with the following content:

    ceph_alertmanager_image: ose-prometheus-alertmanager
    ceph_alertmanager_namespace: registry.redhat.io/openshift4
    ceph_alertmanager_tag: v4.15
    ceph_grafana_image: grafana-rhel9
    ceph_grafana_namespace: registry.redhat.io/rhceph
    ceph_grafana_tag: latest
    ceph_node_exporter_image: ose-prometheus-node-exporter
    ceph_node_exporter_namespace: registry.redhat.io/openshift4
    ceph_node_exporter_tag: v4.15
    ceph_prometheus_image: ose-prometheus
    ceph_prometheus_namespace: registry.redhat.io/openshift4
    ceph_prometheus_tag: v4.15
  6. Save the file.

14.2.2. Running the container image prepare

Complete the container image preparation process by running the director container preparation command. This prepares all container image configurations for the overcloud and retrieves the latest Red Hat Ceph Storage 7 container image.

Note

If you are using Red Hat Satellite Server to host RPMs and container images for your Red Hat OpenStack Platform (RHOSP) environment, do not perform this procedure. Update Satellite to include the Red Hat Ceph Storage 7 container image and update your containers-prepare-parameter.yaml file to reference the URL of the container image that is hosted on the Satellite server.

Procedure

  1. Log in to the undercloud host as the stack user.
  2. Source the stackrc undercloud credentials file:

    $ source ~/stackrc
  3. Run the container preparation command:

    $ openstack tripleo container image prepare -e <container_preparation_file>

    • Replace <container_preparation_file> with the name of your file. The default file is containers-prepare-parameter.yaml.
  4. Verify that the new Red Hat Ceph Storage image is present in the undercloud registry:

    $ openstack tripleo container image list -f value | awk -F '//' '/ceph/ {print $2}'

  5. Verify that the new Red Hat Ceph Storage monitoring stack images are present in the undercloud registry:

    $ openstack tripleo container image list -f value | awk -F '//' '/grafana|prometheus|alertmanager|node-exporter/ {print $2}'


14.2.3. Upgrading NFS Ganesha when moving from Red Hat Ceph Storage 6 to 7

When Red Hat Ceph Storage is upgraded from Release 5 to 6, NFS Ganesha is not adopted by the Orchestrator. This means it remains under director control and must be moved manually to Release 7.

Red Hat Ceph Storage 6 based NFS Ganesha with a Red Hat Ceph Storage 7 cluster is only supported during the upgrade period. Once you upgrade the Red Hat Ceph Storage cluster to 7, you must upgrade NFS Ganesha to use a Release 7 based container image.

Note

This procedure only applies to environments that are using the Shared File Systems service (manila) with CephFS NFS. Upgrading the Red Hat Ceph Storage container for NFS Ganesha is mandatory in these environments.

Procedure

  1. Log in to a Controller node.
  2. Inspect the ceph-nfs service:

    $ sudo pcs status | grep ceph-nfs

  3. Inspect the ceph-nfs systemd unit to confirm that it contains the Red Hat Ceph Storage 6 container image and tag:

    $ cat /etc/systemd/system/ceph-nfs@.service | grep -i container_image
  4. Create a file called /home/stack/ganesha_update_extravars.yaml with the following content:

    tripleo_cephadm_container_image: <ceph_image_name>
    tripleo_cephadm_container_ns: <ceph_image_namespace>
    tripleo_cephadm_container_tag: <ceph_image_tag>
    • Replace <ceph_image_name> with the name of the Red Hat Ceph Storage container image.
    • Replace <ceph_image_namespace> with the name of the Red Hat Ceph Storage container namespace.
    • Replace <ceph_image_tag> with the name of the Red Hat Ceph Storage container tag.

      For example, in a typical environment, this content would have the following values:

      tripleo_cephadm_container_image: rhceph-7-rhel9
      tripleo_cephadm_container_ns: undercloud-0.ctlplane.redhat.local:8787
      tripleo_cephadm_container_tag: '7'
  5. Save the file.
  6. Run the ceph-update-ganesha.yml playbook and provide the ganesha_update_extravars.yaml file for additional command parameters:

    ansible-playbook -i $HOME/overcloud-deploy/<stack>/tripleo-ansible-inventory.yaml \
        /usr/share/ansible/tripleo-playbooks/ceph-update-ganesha.yml \
         -e @$HOME/overcloud-deploy/<stack>/config-download/<stack>/global_vars.yaml \
         -e @$HOME/overcloud-deploy/<stack>/config-download/<stack>/cephadm/cephadm-extra-vars-heat.yml \
         -e @$HOME/ganesha_update_extravars.yaml
    • Replace <stack> with the name of the overcloud stack.
  7. Verify that the ceph-nfs service is running:

    $ sudo pcs status | grep ceph-nfs

  8. Verify that the ceph-nfs systemd unit now contains the Red Hat Ceph Storage 7 container image and tag:

    $ cat /etc/systemd/system/ceph-nfs@.service | grep rhceph
