Chapter 3. Upgrading RHCS 5 to RHCS 7 involving RHEL 8 to RHEL 9 upgrades with stretch mode enabled
You can upgrade from Red Hat Ceph Storage 5 to Red Hat Ceph Storage 7, including the operating system upgrade from Red Hat Enterprise Linux 8 to Red Hat Enterprise Linux 9, with stretch mode enabled.
Upgrade to the latest version of Red Hat Ceph Storage 5 prior to upgrading to the latest version of Red Hat Ceph Storage 7.
Prerequisites
- Red Hat Ceph Storage 5 on Red Hat Enterprise Linux 8, with the necessary hosts and daemons running and stretch mode enabled.
- Backup of the Ceph binary (/usr/sbin/cephadm), ceph.pub (/etc/ceph), and the Ceph cluster’s public SSH keys from the admin node.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Label a second node as the admin node in the cluster so that it can manage the cluster when the admin node is re-provisioned.
Syntax
ceph orch host label add HOSTNAME _admin
Example
[ceph: root@host01 /]# ceph orch host label add host02 _admin
Set the noout flag.
Example
[ceph: root@host01 /]# ceph osd set noout
Drain all the daemons from the host:
Syntax
ceph orch host drain HOSTNAME --force
Example
[ceph: root@host01 /]# ceph orch host drain host02 --force
The _no_schedule label is automatically applied to the host, which blocks deployment.
Check if all the daemons are removed from the storage cluster:
Syntax
ceph orch ps HOSTNAME
Example
[ceph: root@host01 /]# ceph orch ps host02
If the host being drained has OSDs present, zap the devices so that they can be used to re-deploy OSDs when the host is added back:
Syntax
ceph orch device zap HOSTNAME DISK --force
Example
[ceph: root@host01 /]# ceph orch device zap ceph-host02 /dev/vdb --force
zap successful for /dev/vdb on ceph-host02
Check the status of OSD removal:
Example
[ceph: root@host01 /]# ceph orch osd rm status
When no placement groups (PG) are left on the OSD, the OSD is decommissioned and removed from the storage cluster.
Remove the host from the cluster:
Syntax
ceph orch host rm HOSTNAME --force
Example
[ceph: root@host01 /]# ceph orch host rm host02 --force
- Re-provision the respective hosts from RHEL 8 to RHEL 9 as described in Upgrading from RHEL 8 to RHEL 9.
Run the preflight playbook with the --limit option:
Syntax
ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --limit NEWHOST_NAME
Example
[root@host01 ~]# ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=rhcs" --limit host02
The preflight playbook installs podman, lvm2, chronyd, and cephadm on the new host. After installation is complete, cephadm resides in the /usr/sbin/ directory.
Extract the cluster’s public SSH keys to a folder:
Syntax
ceph cephadm get-pub-key ~/PATH
Example
[ceph: root@host01 /]# ceph cephadm get-pub-key ~/ceph.pub
Copy the Ceph cluster’s public SSH keys to the re-provisioned node:
Syntax
ssh-copy-id -f -i ~/PATH root@HOST_NAME_2
Example
[ceph: root@host01 /]# ssh-copy-id -f -i ~/ceph.pub root@host02
Optional: If the removed host has a monitor daemon, then, before adding the host to the cluster, add the --unmanaged flag to the monitor deployment.
Syntax
ceph orch apply mon PLACEMENT --unmanaged
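For example, you can pause management of the monitor service while the host is re-provisioned by re-applying the current monitor placement with the --unmanaged flag. The host list below is illustrative; reuse the placement specification that is already in effect for your cluster:
[ceph: root@host01 /]# ceph orch apply mon "host01 host02 host03 host04 host05" --unmanaged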
Add the host again to the cluster and add the labels present earlier:
Syntax
ceph orch host add HOSTNAME IP_ADDRESS --labels=LABELS
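For example, the following command adds the host back with its IP address and labels. The host name, IP address, and label shown here are illustrative; use the values and the labels that the host had before it was removed:
[ceph: root@host01 /]# ceph orch host add host02 10.0.211.62 --labels=_admin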
Optional: If the removed host had a monitor daemon deployed originally, the monitor daemon needs to be added back manually with the location attributes, as described in Replacing the tiebreaker with a new monitor.
Syntax
ceph mon add HOSTNAME IP LOCATION
Example
[ceph: root@host01 /]# ceph mon add ceph-host02 10.0.211.62 datacenter=DC2
Syntax
ceph orch daemon add mon HOSTNAME
Example
[ceph: root@host01 /]# ceph orch daemon add mon ceph-host02
Verify that the daemons on the re-provisioned host are running successfully and use the same Ceph version:
Syntax
ceph orch ps
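For example, to list only the daemons on the re-provisioned host, together with their status and version (the host name is illustrative):
[ceph: root@host01 /]# ceph orch ps host02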
Set the monitor daemon placement back to managed.
Syntax
ceph orch apply mon PLACEMENT
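For example, re-apply the same placement specification that was in effect before the monitor service was set to unmanaged (the host list below is illustrative):
[ceph: root@host01 /]# ceph orch apply mon "host01 host02 host03 host04 host05"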
Repeat the above steps for all hosts.
- The arbiter monitor cannot be drained or removed from the host. Hence, the arbiter monitor needs to be re-provisioned to another tiebreaker node, and then drained or removed from the host as described in Replacing the tiebreaker with a new monitor.
- Follow the same approach to re-provision the admin nodes, and use the second admin node to manage the cluster in the meantime.
- Add the backup files again to the node.
- Add the admin nodes again to the cluster using the second admin node. Set the mon deployment to unmanaged.
- Follow Replacing the tiebreaker with a new monitor to add back the old arbiter mon and remove the temporary mon created earlier.
Unset the noout flag.
Syntax
ceph osd unset noout
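For example, run the command from the Cephadm shell, matching the prompt used in the earlier examples:
[ceph: root@host01 /]# ceph osd unset noout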
- Verify the Ceph version and the cluster status to ensure that all daemons are working as expected after the Red Hat Enterprise Linux upgrade, as shown in the example after this list.
- Follow Upgrade a Red Hat Ceph Storage cluster using cephadm to perform the Red Hat Ceph Storage 5 to Red Hat Ceph Storage 7 upgrade.
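For the verification step above, one way to check the installed Ceph versions and the overall cluster health is to run the standard status commands from the Cephadm shell:
[ceph: root@host01 /]# ceph versions
[ceph: root@host01 /]# ceph -s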