Chapter 3. Upgrading to Red Hat Ceph Storage on RHEL 10 from previous versions with stretch mode enabled
You can upgrade from Red Hat Ceph Storage 7 or 8 to Red Hat Ceph Storage 9, including the upgrade from Red Hat Enterprise Linux 9 to Red Hat Enterprise Linux 10, with stretch mode enabled.
Upgrade to the latest version of Red Hat Ceph Storage 7 or 8 before upgrading to the latest version of Red Hat Ceph Storage 9.
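For instance, you can confirm which versions the daemons currently run from the cephadm shell before you begin; a minimal check:

Example

[ceph: root@host01 /]# ceph versions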
Prerequisites
- A running Red Hat Ceph Storage cluster on Red Hat Enterprise Linux 9, with the necessary hosts and daemons, and with stretch mode enabled.
- Backup of the Ceph binary (/usr/sbin/cephadm), the ceph.pub key (/etc/ceph), and the Ceph cluster’s public SSH keys from the admin node. A hedged backup sketch follows this list.
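A minimal sketch of taking these backups on the admin node, assuming host01 is the admin node and /root/backup is a hypothetical scratch directory; adjust paths for your environment:

Example

[root@host01 ~]# mkdir -p /root/backup
[root@host01 ~]# cp /usr/sbin/cephadm /root/backup/
[root@host01 ~]# cp /etc/ceph/ceph.pub /root/backup/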
Procedure
Log into the Cephadm shell:

Example

[root@host01 ~]# cephadm shell

Label a second node as the admin in the cluster to manage the cluster when the admin node is re-provisioned.

Syntax
ceph orch host label add HOSTNAME _admin

Example

[ceph: root@host01 /]# ceph orch host label add host02 _admin

Set the noout flag.

Example

[ceph: root@host01 /]# ceph osd set noout

Drain all the daemons from the host:
Syntax
ceph orch host drain HOSTNAME --force

Example

[ceph: root@host01 /]# ceph orch host drain host02 --force

The _no_schedule label is automatically applied to the host, which blocks deployment.

Check if all the daemons are removed from the storage cluster:
Syntax
ceph orch ps HOSTNAME

Example

[ceph: root@host01 /]# ceph orch ps host02

Zap the devices so that, if the hosts being drained have OSDs present, they can be used to re-deploy OSDs when the host is added back.
Syntax
ceph orch device zap HOSTNAME DISK --force

Example

[ceph: root@host01 /]# ceph orch device zap ceph-host02 /dev/vdb --force
zap successful for /dev/vdb on ceph-host02

Check the status of OSD removal:
Example
[ceph: root@host01 /]# ceph orch osd rm status

When no placement groups (PG) are left on the OSD, the OSD is decommissioned and removed from the storage cluster.
Remove the host from the cluster:
Syntax
ceph orch host rm HOSTNAME --force

Example

[ceph: root@host01 /]# ceph orch host rm host02 --force

Re-provision the respective hosts from RHEL 9 to RHEL 10 as described in Upgrading from RHEL 9 to RHEL 10 on Red Hat Documentation.
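Re-provisioning typically follows the in-place upgrade flow from that guide; the following is a minimal sketch with the leapp utility, assuming the host already meets the prerequisites documented there (refer to the linked guide for the authoritative steps):

Example

[root@host02 ~]# leapp preupgrade
[root@host02 ~]# leapp upgrade
[root@host02 ~]# reboot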
Run the preflight playbook with the --limit option:

Syntax

ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --limit NEWHOST_NAME

Example

[ceph-admin@admin cephadm-ansible]$ ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin={storage-product}" --limit host02

The preflight playbook installs podman, lvm2, chronyd, and cephadm on the new host. After installation is complete, cephadm resides in the /usr/sbin/ directory.
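To confirm that the preflight playbook completed on the host, you can check for the cephadm binary; a minimal check on the re-provisioned host:

Example

[root@host02 ~]# ls -l /usr/sbin/cephadm
[root@host02 ~]# cephadm version

Extract the cluster’s public SSH keys to a folder: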
Syntax
ceph cephadm get-pub-key > ~/PATH

Example

[ceph: root@host01 /]# ceph cephadm get-pub-key > ~/ceph.pub

Copy the Ceph cluster’s public SSH keys to the re-provisioned node:
Syntax
ssh-copy-id -f -i ~/PATH root@HOST_NAME_2

Example

[ceph: root@host01 /]# ssh-copy-id -f -i ~/ceph.pub root@host02

Optional: If the removed host has a monitor daemon, then, before adding the host to the cluster, add the --unmanaged flag to the monitor deployment.

Syntax
ceph orch apply mon PLACEMENT --unmanaged
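For instance, with the monitors pinned to three hypothetical hosts in the placement specification:

Example

[ceph: root@host01 /]# ceph orch apply mon "host01,host02,host03" --unmanaged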
Add the host again to the cluster and add the labels that were present earlier:
Syntax
ceph orch host add HOSTNAME IP_ADDRESS --labels=LABELS
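A hedged example, reusing the _admin label applied earlier and a hypothetical IP address:

Example

[ceph: root@host01 /]# ceph orch host add host02 10.0.211.62 --labels=_admin

Optional: If the removed host had a monitor daemon deployed originally, the monitor daemon needs to be added back manually with the location attributes, as described in Replacing the tiebreaker with a new monitor.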
Syntax
ceph mon add HOSTNAME IP LOCATION

Example

[ceph: root@host01 /]# ceph mon add ceph-host02 10.0.211.62 datacenter=DC2

Syntax

ceph orch daemon add mon HOSTNAME

Example
[ceph: root@host01 /]# ceph orch daemon add mon ceph-host02
Verify that the daemons on the re-provisioned host are running successfully with the same Ceph version:
Syntax
ceph orch ps

Set the monitor daemon placement back to managed.

Syntax

ceph orch apply mon PLACEMENT
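For instance, reapplying the same three-host placement without the --unmanaged flag (hypothetical hostnames):

Example

[ceph: root@host01 /]# ceph orch apply mon "host01,host02,host03"

Repeat the above steps for all hosts.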
- The arbiter monitor cannot be drained or removed from the host. Therefore, the arbiter monitor needs to be re-provisioned to another tie-breaker node, and then drained or removed from the host as described in Replacing the tiebreaker with a new monitor.
- Follow the same approach to re-provision admin nodes, and use a second admin node to manage the cluster.
- Add the backup files back to the node.
- Add the admin nodes again to the cluster using the second admin node. Set the mon deployment to unmanaged.
- Re-add the old arbiter monitor and remove the temporary monitor created earlier. For more information, see Replacing the tiebreaker with a new monitor.
Unset the noout flag.

Syntax

ceph osd unset noout

- Verify the Ceph version and the cluster status to ensure that all daemons are working as expected after the Red Hat Enterprise Linux upgrade (see the sketch after this list).
- Follow Upgrade a Red Hat Ceph Storage cluster using `cephadm` to perform a Red Hat Ceph Storage 7 or 8 to Red Hat Ceph Storage 9 upgrade.
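As a minimal verification sketch for the preceding checks, run from the cephadm shell:

Example

[ceph: root@host01 /]# ceph versions
[ceph: root@host01 /]# ceph -s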