
Chapter 1. Upgrade a Red Hat Ceph Storage cluster using cephadm


As a storage administrator, you can use the cephadm Orchestrator to upgrade to Red Hat Ceph Storage 7 with the ceph orch upgrade command.

Note

Upgrading directly from Red Hat Ceph Storage 5 to Red Hat Ceph Storage 7 is supported.

The automated upgrade process follows Ceph best practices. For example:

  • The upgrade order starts with Ceph Managers, Ceph Monitors, then other daemons.
  • Each daemon is restarted only after Ceph indicates that the cluster will remain available.

The storage cluster health status is likely to switch to HEALTH_WARN during the upgrade. When the upgrade is complete, the health status should switch back to HEALTH_OK.
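For example, you can check the current health status at any point during the upgrade from the cephadm shell:

ceph health detail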

Note

You do not get a message once the upgrade is successful. Run the ceph versions and ceph orch ps commands to verify the new image ID and the version of the storage cluster.

1.1. Compatibility considerations between RHCS and podman versions

podman and Red Hat Ceph Storage have different end-of-life strategies that might make it challenging to find compatible versions.

Red Hat recommends using the podman version shipped with the corresponding Red Hat Enterprise Linux version for Red Hat Ceph Storage. See the Red Hat Ceph Storage: Supported configurations knowledge base article for more details. See the Contacting Red Hat support for service section in the Red Hat Ceph Storage Troubleshooting Guide for additional assistance.
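For example, you can confirm the podman version installed on a host before you start the upgrade:

podman --version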

The following table shows version compatibility between Red Hat Ceph Storage 7 and versions of podman.

Ceph                     | Podman 1.9 | Podman 2.0 | Podman 2.1 | Podman 2.2 | Podman 3.0 | Podman >3.0
Red Hat Ceph Storage 7   | false      | true       | true       | false      | true       | true

Warning

You must use a version of Podman that is 2.0.0 or higher.

1.2. Upgrading the Red Hat Ceph Storage cluster

You can use the ceph orch upgrade command to upgrade a Red Hat Ceph Storage cluster.

Prerequisites

  • The latest version of a running Red Hat Ceph Storage cluster.
  • Root-level access to all the nodes.
  • An Ansible user with sudo and passwordless SSH access to all nodes in the storage cluster.
  • At least two Ceph Manager nodes in the storage cluster: one active and one standby.
Note

Red Hat Ceph Storage 5 also includes a health check function that returns a DAEMON_OLD_VERSION warning if it detects that any of the daemons in the storage cluster are running multiple versions of Red Hat Ceph Storage. The warning is triggered when the daemons continue to run multiple versions of Red Hat Ceph Storage beyond the time value set in the mon_warn_older_version_delay option. By default, the mon_warn_older_version_delay option is set to one week. This setting allows most upgrades to proceed without the warning being raised prematurely. If the upgrade process is paused for an extended time period, you can mute the health warning:

ceph health mute DAEMON_OLD_VERSION --sticky

After the upgrade has finished, unmute the health warning:

ceph health unmute DAEMON_OLD_VERSION
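Alternatively, if you expect the upgrade to remain paused for longer than the default delay, you can increase the delay instead of muting the warning. The following is a minimal sketch that sets the mon_warn_older_version_delay option to two weeks (the value is in seconds):

ceph config set mon mon_warn_older_version_delay 1209600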

Procedure

  1. Enable the Ceph Ansible repositories on the Ansible administration node:

    Red Hat Enterprise Linux 9

    subscription-manager repos --enable=rhceph-7-tools-for-rhel-9-x86_64-rpms

  2. Update the cephadm and cephadm-ansible packages:

    Example

    [root@admin ~]# dnf update cephadm
    [root@admin ~]# dnf update cephadm-ansible

  3. Navigate to the /usr/share/cephadm-ansible/ directory:

    Example

    [root@admin ~]# cd /usr/share/cephadm-ansible

  4. Run the preflight playbook with the upgrade_ceph_packages parameter set to true on the bootstrapped host in the storage cluster:

    Syntax

    ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars "ceph_origin=rhcs upgrade_ceph_packages=true"

    Example

    [ceph-admin@admin cephadm-ansible]$ ansible-playbook -i /etc/ansible/hosts cephadm-preflight.yml --extra-vars "ceph_origin=rhcs upgrade_ceph_packages=true"

    This playbook upgrades cephadm on all the nodes.

  5. Log into the cephadm shell:

    Example

    [root@host01 ~]# cephadm shell

  6. Ensure all the hosts are online and that the storage cluster is healthy:

    Example

    [ceph: root@host01 /]# ceph -s

  7. Set the OSD noout, noscrub, and nodeep-scrub flags to prevent OSDs from getting marked out during upgrade and to avoid unnecessary load on the cluster:

    Example

    [ceph: root@host01 /]# ceph osd set noout
    [ceph: root@host01 /]# ceph osd set noscrub
    [ceph: root@host01 /]# ceph osd set nodeep-scrub

  8. Check service versions and the available target containers:

    Syntax

    ceph orch upgrade check IMAGE_NAME

    Example

    [ceph: root@host01 /]# ceph orch upgrade check registry.redhat.io/rhceph/rhceph-7-rhel9:latest

    Note

    The image name is applicable for both Red Hat Enterprise Linux 8 and Red Hat Enterprise Linux 9.

  9. Upgrade the storage cluster:

    Syntax

    ceph orch upgrade start IMAGE_NAME

    Example

    [ceph: root@host01 /]# ceph orch upgrade start registry.redhat.io/rhceph/rhceph-7-rhel9:latest

    Note

    To perform a staggered upgrade, see Performing a staggered upgrade.

    While the upgrade is underway, a progress bar appears in the ceph status output. You can also monitor, pause, resume, or cancel the upgrade with the ceph orch upgrade subcommands, as shown in the note after this procedure.

    Example

    [ceph: root@host01 /]# ceph status
    [...]
    progress:
        Upgrade to 18.2.0-128.el9cp (1s)
          [............................]

  10. Verify the new IMAGE_ID and VERSION of the Ceph cluster:

    Example

    [ceph: root@host01 /]# ceph versions
    [ceph: root@host01 /]# ceph orch ps

    Note

    If you are not using the cephadm-ansible playbooks, after upgrading your Ceph cluster, you must upgrade the ceph-common package and client libraries on your client nodes.

    Example

    [root@client01 ~]# dnf update ceph-common

    Verify you have the latest version:

    Example

    [root@client01 ~]# ceph --version

  11. When the upgrade is complete, unset the noout, noscrub, and nodeep-scrub flags:

    Example

    [ceph: root@host01 /]# ceph osd unset noout
    [ceph: root@host01 /]# ceph osd unset noscrub
    [ceph: root@host01 /]# ceph osd unset nodeep-scrub
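Note

While an upgrade is in progress, you can query its state, pause it, resume it, or cancel it with the ceph orch upgrade subcommands. Run these commands from within the cephadm shell:

ceph orch upgrade status
ceph orch upgrade pause
ceph orch upgrade resume
ceph orch upgrade stop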

1.3. Upgrading the Red Hat Ceph Storage cluster in a disconnected environment

You can upgrade the storage cluster in a disconnected environment by using the --image tag.

You can use the ceph orch upgrade command to upgrade a Red Hat Ceph Storage cluster.
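For example, a minimal invocation that points the upgrade at an image in a local registry, where LOCAL_NODE_FQDN:5000 is a placeholder for your local registry host, might look like this:

ceph orch upgrade start --image LOCAL_NODE_FQDN:5000/rhceph/rhceph-7-rhel9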

Prerequisites

  • The latest version of a running Red Hat Ceph Storage cluster.
  • Root-level access to all the nodes.
  • An Ansible user with sudo and passwordless SSH access to all nodes in the storage cluster.
  • At least two Ceph Manager nodes in the storage cluster: one active and one standby.
  • Register the nodes to CDN and attach subscriptions.
  • Check for the customer container images in a disconnected environment and change the configuration, if required. See the Changing configurations of custom container images for disconnected installations section in the Red Hat Ceph Storage Installation Guide for more details.

By default, the monitoring stack components are deployed based on the primary Ceph image. For a disconnected environment, you must use the latest available monitoring stack component images.

Table 1.1. Custom image details for monitoring stack
Monitoring stack component | Image details for Red Hat Ceph Storage 7.0 | Image details for Red Hat Ceph Storage 7.1
Prometheus | registry.redhat.io/openshift4/ose-prometheus:v4.12 | registry.redhat.io/openshift4/ose-prometheus:v4.15
Grafana | registry.redhat.io/rhceph/grafana-rhel9:latest | registry.redhat.io/rhceph/grafana-rhel9:latest
Node-exporter | registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.12 | registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.15
AlertManager | registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.12 | registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.15
HAProxy | registry.redhat.io/rhceph/rhceph-haproxy-rhel9:latest | registry.redhat.io/rhceph/rhceph-haproxy-rhel9:latest
Keepalived | registry.redhat.io/rhceph/keepalived-rhel9:latest | registry.redhat.io/rhceph/keepalived-rhel9:latest
SNMP Gateway | registry.redhat.io/rhceph/snmp-notifier-rhel9:latest | registry.redhat.io/rhceph/snmp-notifier-rhel9:latest
Loki | registry.redhat.io/openshift-logging/logging-loki-rhel8:v2.6.1 | registry.redhat.io/openshift-logging/logging-loki-rhel8:v2.6.1
Promtail | registry.redhat.io/rhceph/rhceph-promtail-rhel9:v2.4.0 | registry.redhat.io/rhceph/rhceph-promtail-rhel9:v2.4.0

Find the latest available supported container images on the Red Hat Ecosystem Catalog.
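If you have mirrored the monitoring stack images to a local registry, you can point cephadm at them before starting the upgrade by setting the corresponding container image options. The following is a minimal sketch that assumes the Red Hat Ceph Storage 7.1 images are available from a local registry at LOCAL_NODE_FQDN:5000:

ceph config set mgr mgr/cephadm/container_image_prometheus LOCAL_NODE_FQDN:5000/openshift4/ose-prometheus:v4.15
ceph config set mgr mgr/cephadm/container_image_node_exporter LOCAL_NODE_FQDN:5000/openshift4/ose-prometheus-node-exporter:v4.15
ceph config set mgr mgr/cephadm/container_image_alertmanager LOCAL_NODE_FQDN:5000/openshift4/ose-prometheus-alertmanager:v4.15
ceph config set mgr mgr/cephadm/container_image_grafana LOCAL_NODE_FQDN:5000/rhceph/grafana-rhel9:latest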

Procedure

  1. Enable the Ceph Ansible repositories on the Ansible administration node:

    Red Hat Enterprise Linux 9

    subscription-manager repos --enable=rhceph-7-tools-for-rhel-9-x86_64-rpms

  2. Update the cephadm and cephadm-ansible packages:

    Example

    [root@admin ~]# dnf update cephadm
    [root@admin ~]# dnf update cephadm-ansible

  3. Navigate to the /usr/share/cephadm-ansible/ directory:

    Example

    [root@admin ~]# cd /usr/share/cephadm-ansible

  4. Run the preflight playbook with the upgrade_ceph_packages parameter set to true and the ceph_origin parameter set to custom on the bootstrapped host in the storage cluster:

    Syntax

    ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars "ceph_origin=custom upgrade_ceph_packages=true"

    Example

    [ceph-admin@admin ~]$ ansible-playbook -i /etc/ansible/hosts cephadm-preflight.yml --extra-vars "ceph_origin=custom upgrade_ceph_packages=true"

    This playbook upgrades cephadm on all the nodes.

  5. Log into the cephadm shell:

    Example

    [root@node0 ~]# cephadm shell

  6. Ensure all the hosts are online and that the storage cluster is healthy:

    Example

    [ceph: root@node0 /]# ceph -s

  7. Set the OSD noout, noscrub, and nodeep-scrub flags to prevent OSDs from getting marked out during upgrade and to avoid unnecessary load on the cluster:

    Example

    [ceph: root@host01 /]# ceph osd set noout
    [ceph: root@host01 /]# ceph osd set noscrub
    [ceph: root@host01 /]# ceph osd set nodeep-scrub

  8. Check service versions and the available target containers:

    Syntax

    ceph orch upgrade check IMAGE_NAME

    Example

    [ceph: root@node0 /]# ceph orch upgrade check LOCAL_NODE_FQDN:5000/rhceph/rhceph-7-rhel9

  9. Upgrade the storage cluster:

    Syntax

    ceph orch upgrade start IMAGE_NAME

    Example

    [ceph: root@node0 /]# ceph orch upgrade start LOCAL_NODE_FQDN:5000/rhceph/rhceph-7-rhel9

    While the upgrade is underway, a progress bar appears in the ceph status output.

    Example

    [ceph: root@node0 /]# ceph status
    [...]
    progress:
        Upgrade to 18.2.0-128.el9cp (1s)
          [............................]

  10. Verify the new IMAGE_ID and VERSION of the Ceph cluster:

    Example

    [ceph: root@node0 /]# ceph version
    [ceph: root@node0 /]# ceph versions
    [ceph: root@node0 /]# ceph orch ps

  11. When the upgrade is complete, unset the noout, noscrub, and nodeep-scrub flags:

    Example

    [ceph: root@host01 /]# ceph osd unset noout
    [ceph: root@host01 /]# ceph osd unset noscrub
    [ceph: root@host01 /]# ceph osd unset nodeep-scrub

