Chapter 3. Upgrading RHCS 5 to RHCS 7 involving RHEL 8 to RHEL 9 upgrades with stretch mode enabled


You can upgrade from Red Hat Ceph Storage 5 to Red Hat Ceph Storage 7, including an upgrade of the host operating system from Red Hat Enterprise Linux 8 to Red Hat Enterprise Linux 9, with stretch mode enabled.

Important

Upgrade to the latest version of Red Hat Ceph Storage 5 prior to upgrading to the latest version of Red Hat Ceph Storage 7.

Prerequisites

  • Red Hat Ceph Storage 5 running on Red Hat Enterprise Linux 8, with the necessary hosts and daemons and with stretch mode enabled.
  • A backup of the cephadm binary (/usr/sbin/cephadm), the ceph.pub key (/etc/ceph), and the Ceph cluster’s public SSH keys, taken from the admin node.
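
    A minimal sketch of taking this backup on the admin node is shown below; the /root/ceph-backup directory is an assumption for illustration, not part of the original procedure:

    Example

    [root@host01 ~]# mkdir -p /root/ceph-backup
    [root@host01 ~]# cp /usr/sbin/cephadm /etc/ceph/ceph.pub /root/ceph-backup/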

Procedure

  1. Log into the Cephadm shell:

    Example

    [root@host01 ~]# cephadm shell

  2. Label a second node in the cluster as an admin node, so that it can be used to manage the cluster while the admin node is re-provisioned.

    Syntax

    ceph orch host label add HOSTNAME _admin

    Example

    [ceph: root@host01 /]# ceph orch host label add host02 _admin

  3. Set the noout flag.

    Example

    [ceph: root@host01 /]# ceph osd set noout

  4. Drain all the daemons from the host:

    Syntax

    ceph orch host drain HOSTNAME --force

    Example

    [ceph: root@host01 /]# ceph orch host drain host02 --force

    The _no_schedule label is automatically applied to the host, which blocks the deployment of new daemons on it.
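
    Optionally, you can confirm that the label was applied by listing the hosts, for example:

    Example

    [ceph: root@host01 /]# ceph orch host ls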

  5. Verify that all the daemons have been removed from the storage cluster:

    Syntax

    ceph orch ps HOSTNAME

    Example

    [ceph: root@host01 /]# ceph orch ps host02

  6. If the host being drained has OSDs, zap its devices so that they can be reused to re-deploy OSDs when the host is added back to the cluster:

    Syntax

    ceph orch device zap HOSTNAME DISK --force

    Example

    [ceph: root@host01 /]# ceph orch device zap host02 /dev/vdb --force

    zap successful for /dev/vdb on host02

  7. Check the status of OSD removal:

    Example

    [ceph: root@host01 /]# ceph orch osd rm status

    When no placement groups (PG) are left on the OSD, the OSD is decommissioned and removed from the storage cluster.
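
    As an additional check, you can verify that the drained OSDs no longer appear in the CRUSH hierarchy, for example:

    Example

    [ceph: root@host01 /]# ceph osd tree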

  8. Remove the host from the cluster:

    Syntax

    ceph orch host rm HOSTNAME --force

    Example

    [ceph: root@host01 /]# ceph orch host rm host02 --force

  9. Re-provision the removed host, upgrading it from Red Hat Enterprise Linux 8 to Red Hat Enterprise Linux 9, as described in Upgrading from RHEL 8 to RHEL 9.
  10. From the Ansible administration node, run the preflight playbook with the --limit option:

    Syntax

    ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --limit NEWHOST_NAME

    Example

    [root@host01 ~]# ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=rhcs" --limit host02

    The preflight playbook installs podman, lvm2, chronyd, and cephadm on the new host. After installation is complete, cephadm resides in the /usr/sbin/ directory.
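
    Optionally, you can confirm the installation on the re-provisioned host, for example:

    Example

    [root@host02 ~]# cephadm version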

  11. Extract the cluster’s public SSH key to a folder:

    Syntax

    ceph cephadm get-pub-key > ~/PATH

    Example

    [ceph: root@host01 /]# ceph cephadm get-pub-key > ~/ceph.pub

  12. Copy the Ceph cluster’s public SSH key to the re-provisioned node:

    Syntax

    ssh-copy-id -f -i ~/PATH root@HOST_NAME_2

    Example

    [ceph: root@host01 /]# ssh-copy-id -f -i ~/ceph.pub root@host02

    1. Optional: If the removed host had a monitor daemon, then, before adding the host back to the cluster, set the monitor deployment to unmanaged by adding the --unmanaged flag.

      Syntax

      ceph orch apply mon PLACEMENT --unmanaged
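
      For example, assuming a placement count of three monitors (the placement value is an assumption for illustration):

      Example

      [ceph: root@host01 /]# ceph orch apply mon 3 --unmanaged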

  13. Add the host back to the cluster, along with the labels that were present earlier:

    Syntax

    ceph orch host add HOSTNAME IP_ADDRESS --labels=LABELS
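
    For example, reusing the host name and IP address from this procedure (the labels are an assumption and should match the labels the host had before it was removed):

    Example

    [ceph: root@host01 /]# ceph orch host add host02 10.0.211.62 --labels=mon,osd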

    1. Optional: If the removed host had a monitor daemon deployed originally, the monitor daemon needs to be added back manually with the location attributes as described in Replacing the tiebreaker with a new monitor.

      Syntax

      ceph mon add HOSTNAME IP LOCATION

      Example

      [ceph: root@host01 /]# ceph mon add host02 10.0.211.62 datacenter=DC2

      Syntax

      ceph orch daemon add mon HOSTNAME

      Example

      [ceph: root@host01 /]# ceph orch daemon add mon host02

  14. Verify that the daemons on the re-provisioned host are running successfully and use the same Ceph version:

    Syntax

    ceph orch ps

  15. Set the monitor daemon placement back to managed.

    Syntax

    ceph orch apply mon PLACEMENT
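
    For example, with a placement count of three monitors (the placement value is an assumption for illustration):

    Example

    [ceph: root@host01 /]# ceph orch apply mon 3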

  16. Repeat the above steps for all hosts.

    1. The arbiter monitor cannot be drained or removed from its host. Therefore, re-provision the arbiter monitor to another tie-breaker node, and then drain or remove it from the host, as described in Replacing the tiebreaker with a new monitor.
  17. Follow the same approach to re-provision the admin node, using the second admin node to manage the cluster in the meantime.
  18. Add the backup files taken in the prerequisites back to the node, for example as sketched below.
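
    A minimal sketch of restoring the backup, assuming it was stored in /root/ceph-backup on the second admin node (the path and host are assumptions for illustration):

    Example

    [root@host02 ~]# scp /root/ceph-backup/cephadm root@host01:/usr/sbin/cephadm
    [root@host02 ~]# scp /root/ceph-backup/ceph.pub root@host01:/etc/ceph/ceph.pub
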
  19. Add the admin nodes back to the cluster by using the second admin node. Set the monitor deployment to unmanaged.
  20. Follow Replacing the tiebreaker with a new monitor to add back the old arbiter mon and remove the temporary mon created earlier.
  21. Unset the noout flag.

    Syntax

    ceph osd unset noout
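
    Example

    [ceph: root@host01 /]# ceph osd unset noout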

  22. Verify the Ceph version and the cluster status to ensure that all daemons are working as expected after the Red Hat Enterprise Linux upgrade.
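
    You can check both from the Cephadm shell, for example:

    Example

    [ceph: root@host01 /]# ceph versions
    [ceph: root@host01 /]# ceph -s
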
  23. Follow Upgrade a Red Hat Ceph Storage cluster using cephadm to perform the Red Hat Ceph Storage 5 to Red Hat Ceph Storage 7 upgrade.