Chapter 4. Upgrading RHCS 5 to RHCS 6 involving RHEL 8 to RHEL 9 upgrades with stretch mode enabled


You can perform an upgrade from Red Hat Ceph Storage 5 to Red Hat Ceph Storage 6 that includes an operating system upgrade from Red Hat Enterprise Linux 8 to Red Hat Enterprise Linux 9, with stretch mode enabled.

Important

Upgrade to the latest version of Red Hat Ceph Storage 5.3.z5 before upgrading to the latest version of Red Hat Ceph Storage 6.1.

Prerequisites

  • Red Hat Ceph Storage 5 on Red Hat Enterprise Linux 8 with necessary hosts and daemons running with stretch mode enabled.
  • Backup of the cephadm binary (/usr/sbin/cephadm), ceph.pub (/etc/ceph), and the Ceph cluster’s public SSH keys from the admin node, for example as shown below.
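
    For example, a minimal way to take these backups on the admin node (the /backup directory is an arbitrary location chosen here for illustration):

    Example

    # back up the cephadm binary and the Ceph configuration directory, which contains ceph.pub
    mkdir -p /backup
    cp /usr/sbin/cephadm /backup/
    cp -r /etc/ceph /backup/
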
Note

The arbiter monitor cannot be drained or removed from its host. Therefore, you must re-provision the arbiter monitor to another tie-breaker node, and then drain or remove it from the host, as described in Replacing the tiebreaker with a new monitor.

Procedure

  1. Log into the Cephadm shell:

    Example

    [ceph: root@host01 /]# cephadm shell

  2. Label a second node in the cluster as admin so that it can be used to manage the cluster while the admin node is being re-provisioned.

    Syntax

    ceph orch host label add HOSTNAME admin

    Example

    [ceph: root@host01 /]# ceph orch host label add host02 admin

  3. Set the noout flag.

    Example

    [ceph: root@host01 /]# ceph osd set noout

  4. Drain all the daemons from the host:

    Syntax

    ceph orch host drain HOSTNAME --force

    Example

    [ceph: root@host01 /]# ceph orch host drain host02 --force

    The _no_schedule label is automatically applied to the host, which blocks deployment of daemons on it.

  5. Check if all the daemons are removed from the storage cluster:

    Syntax

    ceph orch ps HOSTNAME

    Example

    [ceph: root@host01 /]# ceph orch ps host02

  6. Check the status of OSD removal:

    Example

    [ceph: root@host01 /]# ceph orch osd rm status

    When no placement groups (PG) are left on the OSD, the OSD is decommissioned and removed from the storage cluster.

  7. If the host being drained has OSDs present, zap its devices so that they can be used to re-deploy OSDs when the host is added back:

    Syntax

    ceph orch device zap HOSTNAME DISK [--force]

    Example

    [ceph: root@host01 /]# ceph orch device zap ceph-host02 /dev/vdb --force
    
    zap successful for /dev/vdb on ceph-host02

  8. Remove the host from the cluster:

    Syntax

    ceph orch host rm HOSTNAME --force

    Example

    [ceph: root@host01 /]# ceph orch host rm host02 --force

  9. Re-provision the host from RHEL 8 to RHEL 9, as described in Upgrading from RHEL 8 to RHEL 9.
  10. Run the preflight playbook with the --limit option:

    Syntax

    ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --limit NEWHOST_NAME

    Example

    [ceph-admin@admin cephadm-ansible]$ ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=rhcs" --limit host02

    The preflight playbook installs podman, lvm2, chronyd, and cephadm on the new host. After installation is complete, cephadm resides in the /usr/sbin/ directory.

  11. Extract the cluster’s public SSH key to a file:

    Syntax

    ceph cephadm get-pub-key > ~/PATH

    Example

    [ceph: root@host01 /]# ceph cephadm get-pub-key > ~/ceph.pub

  12. Copy the Ceph cluster’s public SSH key to the re-provisioned node:

    Syntax

    ssh-copy-id -f -i ~/PATH root@HOST_NAME_2

    Example

    [ceph: root@host01 /]# ssh-copy-id -f -i ~/ceph.pub root@host02

    1. Optional: If the removed host has a monitor daemon, then, before adding the host back to the cluster, add the --unmanaged flag to the monitor deployment.

      Syntax

      ceph orch apply mon PLACEMENT --unmanaged
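
      For example, assuming the existing monitors are placed on host01, host04, and host05 (hypothetical hostnames; adjust the placement to match your cluster):

      Example

      [ceph: root@host01 /]# ceph orch apply mon "host01,host04,host05" --unmanaged  # hypothetical placement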

  13. Add the host to the cluster again, with the labels that were present earlier:

    Syntax

    ceph orch host add HOSTNAME IP_ADDRESS --labels=LABELS
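
    For example, assuming the re-provisioned host is host02 with an illustrative IP address of 10.0.211.62 and the mon and osd labels it carried earlier:

    Example

    [ceph: root@host01 /]# ceph orch host add host02 10.0.211.62 --labels=mon,osd  # illustrative IP address and labels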

    1. Optional: If the removed host had a monitor daemon deployed originally, the monitor daemon needs to be added back manually with the location attributes as described in Replacing the tiebreaker with a new monitor.

      Syntax

      ceph mon add HOSTNAME IP LOCATION

      Example

      [ceph: root@host01 /]# ceph mon add ceph-host02 10.0.211.62 datacenter=DC2

      Syntax

      ceph orch daemon add mon HOSTNAME

      Example

      [ceph: root@host01 /]# ceph orch daemon add mon ceph-host02

  14. Verify that the daemons on the re-provisioned host are running successfully and with the same Ceph version:

    Syntax

    ceph orch ps
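
    For example, assuming host02 is the re-provisioned host, you can limit the output to that host and check the VERSION column:

    Example

    [ceph: root@host01 /]# ceph orch ps host02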

  15. Set the monitor daemon placement back to managed.

    Note

    This step must be performed one host at a time.

    Syntax

    ceph orch apply mon PLACEMENT
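
    For example, reusing the hypothetical monitor placement from the earlier step:

    Example

    [ceph: root@host01 /]# ceph orch apply mon "host01,host04,host05"  # hypothetical placement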

  16. Repeat the above steps for all hosts.
  17. Follow the same approach to re-provision the admin node, and use the second admin node to manage the cluster in the meantime.
  18. Add the backup files back to the re-provisioned admin node, for example as shown below.
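
    For example, assuming the backups were taken to a /backup directory as in the earlier sketch, run the following directly on the re-provisioned admin node:

    Example

    # restore the cephadm binary and the Ceph configuration files from the backup
    cp /backup/cephadm /usr/sbin/
    cp -r /backup/ceph /etc/
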
  19. Add the admin node to the cluster again by using the second admin node. Set the mon deployment to unmanaged.
  20. Follow Replacing the tiebreaker with a new monitor to add back the old arbiter mon and remove the temporary mon created earlier.
  21. Unset the noout flag.

    Syntax

    ceph osd unset noout
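
    For example, run the following from the Cephadm shell; the host01 prompt is illustrative and matches the earlier examples:

    Example

    [ceph: root@host01 /]# ceph osd unset noout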

  22. Verify the Ceph version and the cluster status to ensure that all daemons are working as expected after the Red Hat Enterprise Linux upgrade, for example as shown below.
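
    For example, the following standard commands report the Ceph versions running on the daemons and the overall cluster health:

    Example

    [ceph: root@host01 /]# ceph versions
    [ceph: root@host01 /]# ceph -s
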
  23. Now that the RHEL OS is successfully upgraded, follow Upgrade a Red Hat Ceph Storage cluster using cephadm to perform the Red Hat Ceph Storage 5 to Red Hat Ceph Storage 6 upgrade.