Chapter 1. Upgrade a Red Hat Ceph Storage cluster using cephadm


As a storage administrator, you can use the cephadm Orchestrator to upgrade to Red Hat Ceph Storage 8 with the ceph orch upgrade command.

Note

Upgrading directly from Red Hat Ceph Storage 6 to Red Hat Ceph Storage 8 is supported.

The automated upgrade process follows Ceph best practices. For example:

  • The upgrade order starts with Ceph Managers, then Ceph Monitors, and then other daemons.
  • Each daemon is restarted only after Ceph indicates that the cluster will remain available.

The storage cluster health status is likely to switch to HEALTH_WARN during the upgrade. When the upgrade is complete, the health status should switch back to HEALTH_OK.

Note

No message is displayed when the upgrade completes successfully. Run the ceph versions and ceph orch ps commands to verify the new image ID and the version of the storage cluster.
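
You can also query the orchestrator directly. The ceph orch upgrade status command, run from the cephadm shell, reports whether an upgrade is in progress and which image is the target:

Example

[ceph: root@host01 /]# ceph orch upgrade status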

1.1. Compatibility considerations between RHCS and podman versions

podman and Red Hat Ceph Storage have different end-of-life strategies that might make it challenging to find compatible versions.

Red Hat recommends using the podman version shipped with the corresponding Red Hat Enterprise Linux version for Red Hat Ceph Storage. See the Red Hat Ceph Storage: Supported configurations knowledge base article for more details. See the Contacting Red Hat support for service section in the Red Hat Ceph Storage Troubleshooting Guide for additional assistance.

The following table shows version compatibility between Red Hat Ceph Storage 8 and versions of podman.

Ceph / Podman             1.9     2.0    2.1    2.2     3.0    >3.0
Red Hat Ceph Storage 8    false   true   true   false   true   true

Warning

You must use a version of Podman that is 2.0.0 or higher.
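
Before you upgrade, you can confirm the podman version that is installed on each host, for example:

Example

[root@host01 ~]# podman --version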

1.2. Upgrading the Red Hat Ceph Storage cluster

You can use the ceph orch upgrade command to upgrade a Red Hat Ceph Storage cluster.

Prerequisites

  • A running Red Hat Ceph Storage cluster on the latest version.
  • Root-level access to all the nodes.
  • An Ansible user with sudo and passwordless SSH access to all nodes in the storage cluster.
  • At least two Ceph Manager nodes in the storage cluster: one active and one standby.

Procedure

  1. Register the node, and when prompted, enter your Red Hat Customer Portal credentials:

    Syntax

    subscription-manager register

  2. Pull the latest subscription data from the CDN:

    Syntax

    subscription-manager refresh

    Note

Red Hat customers now use Simple Content Access (SCA). As a result, attaching a subscription to the system is no longer necessary; registering the system and enabling the required repositories is sufficient. For more information, see How to register and subscribe a RHEL system to the Red Hat Customer Portal using Red Hat Subscription-Manager? on the Red Hat Customer Portal.

  3. Update the system to receive the latest packages for Red Hat Enterprise Linux:

    Syntax

    dnf update

  4. Enable the Ceph Ansible repositories on the Ansible administration node:

    Red Hat Enterprise Linux 9

    subscription-manager repos --enable=rhceph-8-tools-for-rhel-9-x86_64-rpms
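
    You can confirm that the repository is enabled, for example:

    Example

    [root@admin ~]# subscription-manager repos --list-enabled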

  5. Update the cephadm, cephadm-ansible, and crun packages:

    Example

    [root@admin ~]# dnf update cephadm
    [root@admin ~]# dnf update cephadm-ansible
    [root@admin ~]# dnf update crun

  6. Navigate to the /usr/share/cephadm-ansible/ directory:

    Example

    [root@admin ~]# cd /usr/share/cephadm-ansible

  7. Run the preflight playbook with the upgrade_ceph_packages parameter set to true on the bootstrapped host in the storage cluster:

    Syntax

    ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars "ceph_origin=rhcs upgrade_ceph_packages=true"

    Example

    [ceph-admin@admin cephadm-ansible]$ ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=rhcs upgrade_ceph_packages=true"

    This playbook upgrades cephadm on all the nodes.
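
    To confirm that the cephadm package was updated on a given node, you can query the RPM database directly, for example:

    Example

    [root@host01 ~]# rpm -q cephadm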

  8. Log into the cephadm shell:

    Example

    [root@host01 ~]# cephadm shell

  9. Ensure all the hosts are online and that the storage cluster is healthy:

    Example

    [ceph: root@host01 /]# ceph -s

  10. Set the OSD noout, noscrub, and nodeep-scrub flags to prevent OSDs from getting marked out during upgrade and to avoid unnecessary load on the cluster:

    Example

    [ceph: root@host01 /]# ceph osd set noout
    [ceph: root@host01 /]# ceph osd set noscrub
    [ceph: root@host01 /]# ceph osd set nodeep-scrub

  11. Check service versions and the available target containers:

    Syntax

    ceph orch upgrade check IMAGE_NAME

    Example

    [ceph: root@host01 /]# ceph orch upgrade check registry.redhat.io/rhceph/rhceph-8-rhel9:latest

    Note

    The image name is applicable for both Red Hat Enterprise Linux 8 and Red Hat Enterprise Linux 9.

  12. Upgrade the storage cluster:

    Syntax

    ceph orch upgrade start IMAGE_NAME

    Example

    [ceph: root@host01 /]# ceph orch upgrade start registry.redhat.io/rhceph/rhceph-8-rhel9:latest

    Note

    To perform a staggered upgrade, see Performing a staggered upgrade.
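
    The ceph orch upgrade command also provides pause, resume, and stop subcommands if you need to interrupt a running upgrade:

    Example

    [ceph: root@host01 /]# ceph orch upgrade pause
    [ceph: root@host01 /]# ceph orch upgrade resume
    [ceph: root@host01 /]# ceph orch upgrade stop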

    While the upgrade is underway, a progress bar appears in the ceph status output.

    Example

    [ceph: root@host01 /]# ceph status
    [...]
    progress:
        Upgrade to 18.2.0-128.el9cp (1s)
          [............................]
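
    You can also follow the upgrade in more detail by watching the cephadm log channel while the upgrade runs:

    Example

    [ceph: root@host01 /]# ceph -W cephadm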

  13. Verify the new IMAGE_ID and VERSION of the Ceph cluster:

    Example

    [ceph: root@host01 /]# ceph versions
    [ceph: root@host01 /]# ceph orch ps

    Note

    If you are not using the cephadm-ansible playbooks, after upgrading your Ceph cluster, you must upgrade the ceph-common package and client libraries on your client nodes.

    Example

    [root@client01 ~]# dnf update ceph-common

    Verify you have the latest version:

    Example

    [root@client01 ~]# ceph --version

  14. When the upgrade is complete, unset the noout, noscrub, and nodeep-scrub flags:

    Example

    [ceph: root@host01 /]# ceph osd unset noout
    [ceph: root@host01 /]# ceph osd unset noscrub
    [ceph: root@host01 /]# ceph osd unset nodeep-scrub
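
    You can confirm that the flags are cleared by checking the flags line in the OSD map, for example:

    Example

    [ceph: root@host01 /]# ceph osd dump | grep flags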

1.3. Upgrading the Red Hat Ceph Storage cluster in a disconnected environment

You can upgrade the storage cluster in a disconnected environment by using the ceph orch upgrade command with the --image tag.

Prerequisites

  • A running Red Hat Ceph Storage cluster on the latest version.
  • Root-level access to all the nodes.
  • An Ansible user with sudo and passwordless SSH access to all nodes in the storage cluster.
  • At least two Ceph Manager nodes in the storage cluster: one active and one standby.
  • Register the nodes to the CDN and attach subscriptions.
  • Check for the custom container images in a disconnected environment and change the configuration, if required. See the Changing configurations of custom container images for disconnected installations section in the Red Hat Ceph Storage Installation Guide for more details.

By default, the monitoring stack components are deployed based on the primary Ceph image. For a disconnected storage cluster environment, you must use the latest available monitoring stack component images. One way to point cephadm at locally mirrored copies is sketched after the table.

Table 1.1. Custom image details for monitoring stack

Monitoring stack component   Image details
Prometheus                   registry.redhat.io/openshift4/ose-prometheus:v4.15
Grafana                      registry.redhat.io/rhceph/grafana-rhel9:latest
Node-exporter                registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.15
AlertManager                 registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.15
HAProxy                      registry.redhat.io/rhceph/rhceph-haproxy-rhel9:latest
Keepalived                   registry.redhat.io/rhceph/keepalived-rhel9:latest
SNMP Gateway                 registry.redhat.io/rhceph/snmp-notifier-rhel9:latest
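
cephadm reads the monitoring stack image locations from manager configuration options, so one way to use mirrored copies is to override those options to point at your local registry before upgrading. A minimal sketch, assuming a local registry at LOCAL_NODE_FQDN:5000 and that the images in the table are mirrored under the same paths (the mirror paths are illustrative):

Example

[ceph: root@node0 /]# ceph config set mgr mgr/cephadm/container_image_prometheus LOCAL_NODE_FQDN:5000/openshift4/ose-prometheus:v4.15
[ceph: root@node0 /]# ceph config set mgr mgr/cephadm/container_image_grafana LOCAL_NODE_FQDN:5000/rhceph/grafana-rhel9:latest
[ceph: root@node0 /]# ceph config set mgr mgr/cephadm/container_image_node_exporter LOCAL_NODE_FQDN:5000/openshift4/ose-prometheus-node-exporter:v4.15
[ceph: root@node0 /]# ceph config set mgr mgr/cephadm/container_image_alertmanager LOCAL_NODE_FQDN:5000/openshift4/ose-prometheus-alertmanager:v4.15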

Procedure

  1. Enable the Ceph Ansible repositories on the Ansible administration node:

    Red Hat Enterprise Linux 9

    subscription-manager repos --enable=rhceph-8-tools-for-rhel-9-x86_64-rpms

  2. Update the cephadm and cephadm-ansible packages:

    Example

    [root@admin ~]# dnf update cephadm
    [root@admin ~]# dnf update cephadm-ansible

  3. Navigate to the /usr/share/cephadm-ansible/ directory:

    Example

    [root@admin ~]# cd /usr/share/cephadm-ansible

  4. Run the preflight playbook with the upgrade_ceph_packages parameter set to true and the ceph_origin parameter set to custom on the bootstrapped host in the storage cluster:

    Syntax

    ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars "ceph_origin=custom upgrade_ceph_packages=true"

    Example

    [ceph-admin@admin ~]$ ansible-playbook -i /etc/ansible/hosts cephadm-preflight.yml --extra-vars "ceph_origin=custom upgrade_ceph_packages=true"

    This playbook upgrades cephadm on all the nodes.

  5. Log into the cephadm shell:

    Example

    [root@node0 ~]# cephadm shell

  6. Ensure all the hosts are online and that the storage cluster is healthy:

    Example

    [ceph: root@node0 /]# ceph -s

  7. Set the OSD noout, noscrub, and nodeep-scrub flags to prevent OSDs from getting marked out during upgrade and to avoid unnecessary load on the cluster:

    Example

    [ceph: root@host01 /]# ceph osd set noout
    [ceph: root@host01 /]# ceph osd set noscrub
    [ceph: root@host01 /]# ceph osd set nodeep-scrub

  8. Check service versions and the available target containers:

    Syntax

    ceph orch upgrade check IMAGE_NAME

    Example

    [ceph: root@node0 /]# ceph orch upgrade check LOCAL_NODE_FQDN:5000/rhceph/rhceph-8-rhel9

  9. Upgrade the storage cluster:

    Syntax

    ceph orch upgrade start IMAGE_NAME

    Example

    [ceph: root@node0 /]# ceph orch upgrade start LOCAL_NODE_FQDN:5000/rhceph/rhceph-8-rhel9

    While the upgrade is underway, a progress bar appears in the ceph status output.

    Example

    [ceph: root@node0 /]# ceph status
    [...]
    progress:
        Upgrade to 18.2.0-128.el9cp (1s)
          [............................]

  10. Verify the new IMAGE_ID and VERSION of the Ceph cluster:

    Example

    [ceph: root@node0 /]# ceph version
    [ceph: root@node0 /]# ceph versions
    [ceph: root@node0 /]# ceph orch ps

  11. When the upgrade is complete, unset the noout, noscrub, and nodeep-scrub flags:

    Example

    [ceph: root@host01 /]# ceph osd unset noout
    [ceph: root@host01 /]# ceph osd unset noscrub
    [ceph: root@host01 /]# ceph osd unset nodeep-scrub

1.4. Enabling cephadm self-signed certificates after an upgrade

Red Hat Ceph Storage 8.1 introduced a new root certificate authority (CA) format. As a result, when updating the Ceph Object Gateway in multi-site deployments from Red Hat Ceph Storage versions 8.0 or earlier, you must complete additional steps to enable cephadm self-signed certificates.

Use this procedure in any of the following scenarios:

  • You are upgrading from Red Hat Ceph Storage versions 8.0 or earlier and are planning on using cephadm self-signed certificates.
  • The cephadm self-signed certificates have expired.

If you do not plan on using cephadm self-signed certificates or the root CA commonName has the cephadm-root-<FSID> format and the certificates are valid, this procedure is not required.

Prerequisites

Before you begin, verify that your Red Hat Ceph Storage cluster is using the old-style root CA:

ceph orch certmgr cert ls
  • If the root CA commonName has the cephadm-root format, it is in the old style. Continue with the procedure documented here.
  • If the root CA commonName has the cephadm-root-<FSID> format, your cluster has already been upgraded to the new format and no further actions are required.
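
If you want to inspect a certificate subject outside of the orchestrator output, you can decode the PEM with openssl; a minimal sketch, assuming you have saved the root CA to a file named cephadm-root-ca.crt (the file name is illustrative):

Example

[root@host01 ~]# openssl x509 -in cephadm-root-ca.crt -noout -subject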

Procedure

  1. Remove the previous root CA.

    Syntax

    ceph orch certmgr rm cephadm_root_ca_cert

  2. Reload the certmgr to force creation of the new root CA.

    Syntax

    ceph orch certmgr reload

  3. Verify that the certmgr has been loaded correctly.

    Syntax

    ceph orch certmgr cert ls

    Ensure that the cephadm_root_ca_cert subject has the following format: cephadm-root-<FSID>

    The root CA is upgraded.

  4. Reissue new certificates for all services that use cephadm self-signed certificates.
  5. Refresh any existing Grafana services.

    1. Remove all Grafana certificates.

      This step must be done for each Grafana service that uses self-signed certificates.

      Syntax

      ceph orch certmgr cert rm grafana_cert --hostname=HOSTNAME

      Only continue once all certificates are removed.

    2. Redeploy Grafana.

      Syntax

      ceph orch redeploy grafana

    3. Verify that all Grafana services are running correctly.

      Syntax

      ceph orch ps --service-name grafana

    4. Verify that new cephadm-signed certificates have been created for Grafana.

      Syntax

      ceph orch certmgr cert ls

  6. Refresh any existing Ceph Object Gateway (RGW) services.

    This step must be done for each Ceph Object Gateway service that uses self-signed certificates.

    Syntax

    ceph orch certmgr cert rm rgw_frontend_ssl_cert --service-name RGW_SERVICE_NAME
    ceph orch redeploy RGW_SERVICE_NAME

  7. Verify that the Ceph Object Gateway (RGW) service is deployed and running as expected.

    Syntax

    ceph orch ps --service-name RGW_SERVICE_NAME
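
    To confirm from a client that the gateway serves the new cephadm-signed certificate, you can inspect the TLS handshake with openssl; a sketch, assuming the gateway listens on HOST:443 (adjust the host and port to match your rgw_frontends configuration):

    Example

    [root@host01 ~]# echo | openssl s_client -connect HOST:443 2>/dev/null | openssl x509 -noout -issuer -dates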

  8. Verify the certificates.

    Syntax

    ceph orch certmgr cert ls

  9. Upgrade the Ceph Dashboard certificates.

    1. Generate the cephadm-signed certificates for the dashboard.

      From the cephadm command-line interface (CLI), run the following commands:

      Syntax

      json="$(ceph orch certmgr generate-certificates dashboard)"
      printf '%s' "$json" | jq -r '.cert' > dashboard.crt
      printf '%s' "$json" | jq -r '.key'  > dashboard.key

    2. Set the newly generated certificate and key.

      From the Ceph command-line interface (CLI), run the following commands:

      Syntax

      ceph dashboard set-ssl-certificate -i dashboard.crt
      ceph dashboard set-ssl-certificate-key -i dashboard.key

      The dashboard now uses the dashboard.crt and dashboard.key pair generated in the previous substep.

  10. Disable and then re-enable the Ceph Dashboard to load the new certificates.

    Syntax

    ceph mgr module disable dashboard
    ceph mgr module enable dashboard
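
    To confirm that the dashboard serves the new certificate, you can look up the dashboard URL and inspect the certificate it presents; a sketch, assuming the default SSL port 8443 (adjust the host and port to the ceph mgr services output):

    Example

    [ceph: root@host01 /]# ceph mgr services
    [root@host01 ~]# echo | openssl s_client -connect HOST:8443 2>/dev/null | openssl x509 -noout -subject -issuer -dates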
