Chapter 1. Upgrade a Red Hat Ceph Storage cluster using cephadm
As a storage administrator, you can use the cephadm Orchestrator to upgrade Red Hat Ceph Storage 5 and later.
Upgrading directly from Red Hat Ceph Storage 4 to Red Hat Ceph Storage 6 is not supported.
The automated upgrade process follows Ceph best practices. For example:
- The upgrade order starts with Ceph Managers, then Ceph Monitors, and then the other daemons.
- Each daemon is restarted only after Ceph indicates that the cluster will remain available.
The storage cluster health status is likely to switch to HEALTH_WARN during the upgrade. When the upgrade is complete, the health status should switch back to HEALTH_OK.
If you have a Red Hat Ceph Storage 6 cluster with multi-site configured, do not upgrade to the latest version of 6.1.z1 as there are issues with data corruption on encrypted objects when objects replicate to the disaster recovery (DR) site.
No message is displayed when the upgrade completes successfully. Run the ceph versions and ceph orch ps commands to verify the new image ID and the version of the storage cluster.
The Ceph iSCSI gateway is removed from Red Hat Ceph Storage 6. Therefore, you need to manage the iSCSI LUNs before upgrading from Red Hat Ceph Storage 5 to Red Hat Ceph Storage 6.
When you upgrade a Red Hat Ceph Storage cluster from RHCS 5 to RHCS 6, RBD images that were exported through iSCSI are preserved, so data is not lost. However, because all iSCSI targets disappear with the upgrade, the data is temporarily inaccessible. To recover the data, you can map such RBD images with the rbd device map command or export them to a file with the rbd export command.
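For example, a minimal sketch of both recovery paths, assuming a hypothetical pool named iscsipool and an image named lun01 (substitute your own pool name, image name, and export path):

[root@host01 ~]# rbd device map iscsipool/lun01
[root@host01 ~]# rbd export iscsipool/lun01 /mnt/backup/lun01.img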
1.1. Compatibility considerations between RHCS and podman versions
podman and Red Hat Ceph Storage have different end-of-life strategies that might make it challenging to find compatible versions.
Red Hat recommends using the podman version shipped with the corresponding Red Hat Enterprise Linux version for Red Hat Ceph Storage. See the Red Hat Ceph Storage: Supported configurations knowledge base article for more details. See the Contacting Red Hat support for service section in the Red Hat Ceph Storage Troubleshooting Guide for additional assistance.
The following table shows version compatibility between Red Hat Ceph Storage 6 and versions of podman.
| Ceph | Podman 1.9 | Podman 2.0 | Podman 2.1 | Podman 2.2 | Podman 3.0 | Podman >3.0 |
|---|---|---|---|---|---|---|
| Red Hat Ceph Storage 6 | false | true | true | false | true | true |
To use Podman with Red Hat Ceph Storage 5 and later, you must use a version of Podman that is 2.0.0 or higher.
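To check which podman version is installed on a host before upgrading, for example (the host name here is illustrative):

[root@host01 ~]# podman --version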
1.2. Upgrading the Red Hat Ceph Storage cluster
You can use the ceph orch upgrade command to upgrade a Red Hat Ceph Storage 5 cluster.
Prerequisites
- A running Red Hat Ceph Storage 5 cluster.
- Red Hat Enterprise Linux 9.0 or later with ansible-core bundled into AppStream.
- Root-level access to all the nodes.
- Ansible user with sudo and passwordless ssh access to all nodes in the storage cluster.
- At least two Ceph Manager nodes in the storage cluster: one active and one standby. (See the verification example after this list.)
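To confirm that the cluster has one active Manager and at least one standby before you begin, you can list the Manager daemons from the cephadm shell; a quick check, for example:

[ceph: root@host01 /]# ceph orch ps --daemon-type mgr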
Upgrade to the latest version of Red Hat Ceph Storage 5.3.z5 prior to upgrading to the latest version of Red Hat Ceph Storage 6.1.
Red Hat Ceph Storage 5 also includes a health check function that returns a DAEMON_OLD_VERSION warning if it detects that any of the daemons in the storage cluster are running multiple versions of RHCS. The warning is triggered when the daemons continue to run multiple versions of Red Hat Ceph Storage beyond the time value set in the mon_warn_older_version_delay option. By default, the mon_warn_older_version_delay option is set to 1 week. This setting allows most upgrades to proceed without falsely seeing the warning. If the upgrade process is paused for an extended time period, you can mute the health warning:
ceph health mute DAEMON_OLD_VERSION --sticky
After the upgrade has finished, unmute the health warning:
ceph health unmute DAEMON_OLD_VERSION
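If you prefer to extend the grace period instead of muting the warning, you can raise the mon_warn_older_version_delay value. A minimal sketch, assuming you want roughly two weeks (the value is expressed in seconds):

[ceph: root@host01 /]# ceph config set mon mon_warn_older_version_delay 1209600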
Procedure
Enable the Ceph Ansible repositories on the Ansible administration node:

Red Hat Enterprise Linux 9

subscription-manager repos --enable=rhceph-6-tools-for-rhel-9-x86_64-rpms

Update the cephadm and cephadm-ansible packages:

Example

[root@admin ~]# dnf update cephadm
[root@admin ~]# dnf update cephadm-ansible

Navigate to the /usr/share/cephadm-ansible/ directory:

Example

[root@admin ~]# cd /usr/share/cephadm-ansible

Run the preflight playbook with the upgrade_ceph_packages parameter set to true on the bootstrapped host in the storage cluster:

Syntax

ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars "ceph_origin=rhcs upgrade_ceph_packages=true"

Example

[ceph-admin@admin cephadm-ansible]$ ansible-playbook -i /etc/ansible/hosts cephadm-preflight.yml --extra-vars "ceph_origin=rhcs upgrade_ceph_packages=true"

This step upgrades the cephadm package on all the nodes.

Log into the cephadm shell:

Example

[root@host01 ~]# cephadm shell

Ensure all the hosts are online and that the storage cluster is healthy:

Example

[ceph: root@host01 /]# ceph -s

Set the OSD noout, noscrub, and nodeep-scrub flags to prevent OSDs from getting marked out during the upgrade and to avoid unnecessary load on the cluster:

Example

[ceph: root@host01 /]# ceph osd set noout
[ceph: root@host01 /]# ceph osd set noscrub
[ceph: root@host01 /]# ceph osd set nodeep-scrub

Check service versions and the available target containers:

Syntax

ceph orch upgrade check IMAGE_NAME

Example

[ceph: root@host01 /]# ceph orch upgrade check registry.redhat.io/rhceph/rhceph-6-rhel9:latest

Note: The image name is applicable for both Red Hat Enterprise Linux 8 and Red Hat Enterprise Linux 9.

Upgrade the storage cluster:

Syntax

ceph orch upgrade start IMAGE_NAME

Example

[ceph: root@host01 /]# ceph orch upgrade start registry.redhat.io/rhceph/rhceph-6-rhel9:latest

Note: To perform a staggered upgrade, see Performing a staggered upgrade.

While the upgrade is underway, a progress bar appears in the ceph status output.

Example

[ceph: root@host01 /]# ceph status
[...]
  progress:
    Upgrade to 17.2.6-70.el9cp (1s)
      [............................]
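In addition to the progress bar, the orchestrator reports the upgrade target and its current state. For example, to query it from the cephadm shell:

[ceph: root@host01 /]# ceph orch upgrade status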
Verify the new IMAGE_ID and VERSION of the Ceph cluster:

Example

[ceph: root@host01 /]# ceph versions
[ceph: root@host01 /]# ceph orch ps

Note: If you are not using the cephadm-ansible playbooks, after upgrading your Ceph cluster, you must upgrade the ceph-common package and client libraries on your client nodes.

Example

[root@client01 ~]# dnf update ceph-common

Verify you have the latest version:

Example

[root@client01 ~]# ceph --version

When the upgrade is complete, unset the noout, noscrub, and nodeep-scrub flags:

Example

[ceph: root@host01 /]# ceph osd unset noout
[ceph: root@host01 /]# ceph osd unset noscrub
[ceph: root@host01 /]# ceph osd unset nodeep-scrub
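Optionally, confirm that the cluster health has returned to HEALTH_OK after the flags are cleared, for example:

[ceph: root@host01 /]# ceph health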
1.3. Upgrading the Red Hat Ceph Storage cluster in a disconnected environment
You can upgrade the storage cluster in a disconnected environment by using the --image option.
You can use the ceph orch upgrade command to upgrade a Red Hat Ceph Storage 5 cluster.
Red Hat Enterprise Linux 9 and later does not support the cephadm-ansible playbook.
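Note that ceph orch upgrade start accepts the target image either as a positional argument, as shown in the procedure below, or through the --image option. A sketch, where LOCAL_NODE_FQDN:5000 stands for your local registry:

[ceph: root@node0 /]# ceph orch upgrade start --image LOCAL_NODE_FQDN:5000/rhceph/rhceph-6-rhel9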
Prerequisites
- A running Red Hat Ceph Storage 5 cluster.
- Red Hat Enterprise Linux 9.0 or later with ansible-core bundled into AppStream.
- Root-level access to all the nodes.
- Ansible user with sudo and passwordless ssh access to all nodes in the storage cluster.
- At least two Ceph Manager nodes in the storage cluster: one active and one standby.
- Register the nodes to CDN and attach subscriptions.
- Check for the custom container images in a disconnected environment and change the configuration, if required. See the Changing configurations of custom container images for disconnected installations section in the Red Hat Ceph Storage Installation Guide for more details.
By default, the monitoring stack components are deployed based on the primary Ceph image. In a disconnected environment, you have to use the latest available monitoring stack component images, listed in the following table; a configuration sketch follows the table.
| Monitoring stack component | Image details |
|---|---|
| Prometheus | registry.redhat.io/openshift4/ose-prometheus:v4.12 |
| Grafana | registry.redhat.io/rhceph/rhceph-6-dashboard-rhel9:latest |
| Node-exporter | registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.12 |
| AlertManager | registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.12 |
| HAProxy | registry.redhat.io/rhceph/rhceph-haproxy-rhel9:latest |
| Keepalived | registry.redhat.io/rhceph/keepalived-rhel9:latest |
| SNMP Gateway | registry.redhat.io/rhceph/snmp-notifier-rhel9:latest |
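A sketch of how you can point cephadm at these images before the upgrade by setting the standard mgr/cephadm container image options; the registry prefix LOCAL_NODE_FQDN:5000 is an illustrative local mirror and should be replaced with your own:

[ceph: root@node0 /]# ceph config set mgr mgr/cephadm/container_image_prometheus LOCAL_NODE_FQDN:5000/openshift4/ose-prometheus:v4.12
[ceph: root@node0 /]# ceph config set mgr mgr/cephadm/container_image_grafana LOCAL_NODE_FQDN:5000/rhceph/rhceph-6-dashboard-rhel9:latest
[ceph: root@node0 /]# ceph config set mgr mgr/cephadm/container_image_node_exporter LOCAL_NODE_FQDN:5000/openshift4/ose-prometheus-node-exporter:v4.12
[ceph: root@node0 /]# ceph config set mgr mgr/cephadm/container_image_alertmanager LOCAL_NODE_FQDN:5000/openshift4/ose-prometheus-alertmanager:v4.12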
Procedure
Enable the Ceph Ansible repositories on the Ansible administration node:

Red Hat Enterprise Linux 9

subscription-manager repos --enable=rhceph-6-tools-for-rhel-9-x86_64-rpms

Update the cephadm and cephadm-ansible packages:

Example

[root@admin ~]# dnf update cephadm
[root@admin ~]# dnf update cephadm-ansible

Run the preflight playbook with the upgrade_ceph_packages parameter set to true and the ceph_origin parameter set to custom on the bootstrapped host in the storage cluster:

Syntax

ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars "ceph_origin=custom upgrade_ceph_packages=true"

Example

[ceph-admin@admin ~]$ ansible-playbook -i /etc/ansible/hosts cephadm-preflight.yml --extra-vars "ceph_origin=custom upgrade_ceph_packages=true"

This step upgrades the cephadm package on all the nodes.

Log into the cephadm shell:

Example

[root@node0 ~]# cephadm shell

Ensure all the hosts are online and that the storage cluster is healthy:

Example

[ceph: root@node0 /]# ceph -s

Set the OSD noout, noscrub, and nodeep-scrub flags to prevent OSDs from getting marked out during the upgrade and to avoid unnecessary load on the cluster:

Example

[ceph: root@host01 /]# ceph osd set noout
[ceph: root@host01 /]# ceph osd set noscrub
[ceph: root@host01 /]# ceph osd set nodeep-scrub

Check service versions and the available target containers:

Syntax

ceph orch upgrade check IMAGE_NAME

Example

[ceph: root@node0 /]# ceph orch upgrade check LOCAL_NODE_FQDN:5000/rhceph/rhceph-6-rhel9

Upgrade the storage cluster:

Syntax

ceph orch upgrade start IMAGE_NAME

Example

[ceph: root@node0 /]# ceph orch upgrade start LOCAL_NODE_FQDN:5000/rhceph/rhceph-6-rhel9

While the upgrade is underway, a progress bar appears in the ceph status output.

Example

[ceph: root@node0 /]# ceph status
[...]
  progress:
    Upgrade to 17.2.6-70.el9cp (1s)
      [............................]

Verify the new IMAGE_ID and VERSION of the Ceph cluster:

Example

[ceph: root@node0 /]# ceph version
[ceph: root@node0 /]# ceph versions
[ceph: root@node0 /]# ceph orch ps

When the upgrade is complete, unset the noout, noscrub, and nodeep-scrub flags:

Example

[ceph: root@host01 /]# ceph osd unset noout
[ceph: root@host01 /]# ceph osd unset noscrub
[ceph: root@host01 /]# ceph osd unset nodeep-scrub