Upgrade Guide
Upgrading a Red Hat Ceph Storage Cluster
Chapter 1. Upgrade a Red Hat Ceph Storage cluster using cephadm
As a storage administrator, you can use the cephadm Orchestrator to upgrade to Red Hat Ceph Storage 8 with the ceph orch upgrade command.
Upgrading directly from Red Hat Ceph Storage 6 to Red Hat Ceph Storage 8 is supported.
The automated upgrade process follows Ceph best practices. For example:
- The upgrade order starts with Ceph Managers, Ceph Monitors, then other daemons.
- Each daemon is restarted only after Ceph indicates that the cluster will remain available.
The storage cluster health status is likely to switch to HEALTH_WARNING
during the upgrade. When the upgrade is complete, the health status should switch back to HEALTH_OK.
You do not receive a message when the upgrade completes successfully. Run the ceph versions and ceph orch ps commands to verify the new image ID and the version of the storage cluster.
1.1. Compatibility considerations between RHCS and podman versions
podman
and Red Hat Ceph Storage have different end-of-life strategies that might make it challenging to find compatible versions.
Red Hat recommends using the podman version shipped with the corresponding Red Hat Enterprise Linux version for Red Hat Ceph Storage. See the Red Hat Ceph Storage: Supported configurations knowledge base article for more details. See the Contacting Red Hat support for service section in the Red Hat Ceph Storage Troubleshooting Guide for additional assistance.
The following table shows version compatibility between Red Hat Ceph Storage 8 and versions of podman.
Ceph | Podman 1.9 | Podman 2.0 | Podman 2.1 | Podman 2.2 | Podman 3.0 | Podman >3.0
---|---|---|---|---|---|---
Red Hat Ceph Storage 8 | false | true | true | false | true | true
You must use a version of Podman that is 2.0.0 or higher.
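Before upgrading, it can help to confirm which podman version each host is actually running; a minimal check such as the following can be run on every host (the prompt and host name are illustrative):
Example
[root@host01 ~]# podman --version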
1.2. Upgrading the Red Hat Ceph Storage cluster
You can use the ceph orch upgrade command to upgrade a Red Hat Ceph Storage cluster.
Prerequisites
- A running Red Hat Ceph Storage cluster at the latest version.
- Root-level access to all the nodes.
- An Ansible user with sudo and passwordless ssh access to all nodes in the storage cluster.
- At least two Ceph Manager nodes in the storage cluster: one active and one standby.
Procedure
Register the node, and when prompted, enter your Red Hat Customer Portal credentials:
Syntax
subscription-manager register
Pull the latest subscription data from the CDN:
Syntax
subscription-manager refresh
Note: Red Hat customers now use Simple Content Access (SCA). As a result, attaching a subscription to the system is no longer necessary. Registering the system and enabling the required repositories is sufficient. For more information, see How to register and subscribe a RHEL system to the Red Hat Customer Portal using Red Hat Subscription-Manager? on the Red Hat Customer Portal.
Update the system to receive the latest packages for Red Hat Enterprise Linux:
Syntax
dnf update
Enable the Ceph Ansible repositories on the Ansible administration node:
Red Hat Enterprise Linux 9
subscription-manager repos --enable=rhceph-8-tools-for-rhel-9-x86_64-rpms
Update the cephadm, cephadm-ansible, and crun packages:
Example
[root@admin ~]# dnf update cephadm
[root@admin ~]# dnf update cephadm-ansible
[root@admin ~]# dnf update crun
Navigate to the /usr/share/cephadm-ansible/ directory:
Example
[root@admin ~]# cd /usr/share/cephadm-ansible
Run the preflight playbook with the upgrade_ceph_packages parameter set to true on the bootstrapped host in the storage cluster:
Syntax
ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars "ceph_origin=rhcs upgrade_ceph_packages=true"
Example
[ceph-admin@admin cephadm-ansible]$ ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=rhcs upgrade_ceph_packages=true"
This playbook upgrades the cephadm package on all the nodes.
Log into the cephadm shell:
Example
[root@host01 ~]# cephadm shell
Ensure all the hosts are online and that the storage cluster is healthy:
Example
[ceph: root@host01 /]# ceph -s
Set the OSD noout, noscrub, and nodeep-scrub flags to prevent OSDs from getting marked out during upgrade and to avoid unnecessary load on the cluster:
Example
[ceph: root@host01 /]# ceph osd set noout
[ceph: root@host01 /]# ceph osd set noscrub
[ceph: root@host01 /]# ceph osd set nodeep-scrub
Check service versions and the available target containers:
Syntax
ceph orch upgrade check IMAGE_NAME
Example
[ceph: root@host01 /]# ceph orch upgrade check registry.redhat.io/rhceph/rhceph-8-rhel9:latest
Note: The image name is applicable for both Red Hat Enterprise Linux 8 and Red Hat Enterprise Linux 9.
Upgrade the storage cluster:
Syntax
ceph orch upgrade start IMAGE_NAME
Example
[ceph: root@host01 /]# ceph orch upgrade start registry.redhat.io/rhceph/rhceph-8-rhel9:latest
Note: To perform a staggered upgrade, see Performing a staggered upgrade.
While the upgrade is underway, a progress bar appears in the ceph status output.
Example
[ceph: root@host01 /]# ceph status
[...]
progress:
  Upgrade to 18.2.0-128.el9cp (1s)
    [............................]
Verify the new IMAGE_ID and VERSION of the Ceph cluster:
Example
[ceph: root@host01 /]# ceph versions
[ceph: root@host01 /]# ceph orch ps
Note: If you are not using the cephadm-ansible playbooks, after upgrading your Ceph cluster, you must upgrade the ceph-common package and client libraries on your client nodes.
Example
[root@client01 ~]# dnf update ceph-common
Verify you have the latest version:
Example
[root@client01 ~]# ceph --version
When the upgrade is complete, unset the noout, noscrub, and nodeep-scrub flags:
Example
[ceph: root@host01 /]# ceph osd unset noout
[ceph: root@host01 /]# ceph osd unset noscrub
[ceph: root@host01 /]# ceph osd unset nodeep-scrub
1.3. Upgrading the Red Hat Ceph Storage cluster in a disconnected environment
You can upgrade the storage cluster in a disconnected environment by using the --image
tag.
You can use the ceph orch upgrade command to upgrade a Red Hat Ceph Storage cluster.
Prerequisites
- A running Red Hat Ceph Storage cluster at the latest version.
- Root-level access to all the nodes.
- An Ansible user with sudo and passwordless ssh access to all nodes in the storage cluster.
- At least two Ceph Manager nodes in the storage cluster: one active and one standby.
- Register the nodes to CDN and attach subscriptions.
- Check for the custom container images in a disconnected environment and change the configuration, if required. See the Changing configurations of custom container images for disconnected installations section in the Red Hat Ceph Storage Installation Guide for more details.
By default, the monitoring stack components are deployed based on the primary Ceph image. For a disconnected storage cluster, you have to use the latest available monitoring stack component images, listed in the following table. A configuration sketch follows the table.
Monitoring stack component | Image details |
---|---|
Prometheus | registry.redhat.io/openshift4/ose-prometheus:v4.15 |
Grafana | registry.redhat.io/rhceph/grafana-rhel9:latest |
Node-exporter | registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.15 |
AlertManager | registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.15 |
HAProxy | registry.redhat.io/rhceph/rhceph-haproxy-rhel9:latest |
Keepalived | registry.redhat.io/rhceph/keepalived-rhel9:latest |
SNMP Gateway | registry.redhat.io/rhceph/snmp-notifier-rhel9:latest |
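If those default registries are not reachable from the disconnected cluster, the monitoring stack images can be pointed at a local registry through the cephadm configuration options before the upgrade. The following is a hedged sketch; LOCAL_NODE_FQDN:5000 is a placeholder for your local registry and the tags mirror the table above:
Example
[ceph: root@host01 /]# ceph config set mgr mgr/cephadm/container_image_prometheus LOCAL_NODE_FQDN:5000/openshift4/ose-prometheus:v4.15
[ceph: root@host01 /]# ceph config set mgr mgr/cephadm/container_image_grafana LOCAL_NODE_FQDN:5000/rhceph/grafana-rhel9:latest
[ceph: root@host01 /]# ceph config set mgr mgr/cephadm/container_image_node_exporter LOCAL_NODE_FQDN:5000/openshift4/ose-prometheus-node-exporter:v4.15
[ceph: root@host01 /]# ceph config set mgr mgr/cephadm/container_image_alertmanager LOCAL_NODE_FQDN:5000/openshift4/ose-prometheus-alertmanager:v4.15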
Procedure
Enable the Ceph Ansible repositories on the Ansible administration node:
Red Hat Enterprise Linux 9
subscription-manager repos --enable=rhceph-8-tools-for-rhel-9-x86_64-rpms
Update the cephadm and cephadm-ansible packages:
Example
[root@admin ~]# dnf update cephadm
[root@admin ~]# dnf update cephadm-ansible
Navigate to the /usr/share/cephadm-ansible/ directory:
Example
[root@admin ~]# cd /usr/share/cephadm-ansible
Run the preflight playbook with the upgrade_ceph_packages parameter set to true and the ceph_origin parameter set to custom on the bootstrapped host in the storage cluster:
Syntax
ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars "ceph_origin=custom upgrade_ceph_packages=true"
Example
[ceph-admin@admin ~]$ ansible-playbook -i /etc/ansible/hosts cephadm-preflight.yml --extra-vars "ceph_origin=custom upgrade_ceph_packages=true"
This playbook upgrades the cephadm package on all the nodes.
Log into the cephadm shell:
Example
[root@node0 ~]# cephadm shell
Ensure all the hosts are online and that the storage cluster is healthy:
Example
[ceph: root@node0 /]# ceph -s
Set the OSD noout, noscrub, and nodeep-scrub flags to prevent OSDs from getting marked out during upgrade and to avoid unnecessary load on the cluster:
Example
[ceph: root@host01 /]# ceph osd set noout
[ceph: root@host01 /]# ceph osd set noscrub
[ceph: root@host01 /]# ceph osd set nodeep-scrub
Check service versions and the available target containers:
Syntax
ceph orch upgrade check IMAGE_NAME
Example
[ceph: root@node0 /]# ceph orch upgrade check LOCAL_NODE_FQDN:5000/rhceph/rhceph-8-rhel9
Upgrade the storage cluster:
Syntax
ceph orch upgrade start IMAGE_NAME
Example
[ceph: root@node0 /]# ceph orch upgrade start LOCAL_NODE_FQDN:5000/rhceph/rhceph-8-rhel9
While the upgrade is underway, a progress bar appears in the ceph status output.
Example
[ceph: root@node0 /]# ceph status
[...]
progress:
  Upgrade to 18.2.0-128.el9cp (1s)
    [............................]
Verify the new IMAGE_ID and VERSION of the Ceph cluster:
Example
[ceph: root@node0 /]# ceph version
[ceph: root@node0 /]# ceph versions
[ceph: root@node0 /]# ceph orch ps
When the upgrade is complete, unset the noout, noscrub, and nodeep-scrub flags:
Example
[ceph: root@host01 /]# ceph osd unset noout
[ceph: root@host01 /]# ceph osd unset noscrub
[ceph: root@host01 /]# ceph osd unset nodeep-scrub
1.4. Updating Ceph Object Gateway to use cephadm self-signed certificates in a multi-site deployment
Red Hat Ceph Storage 8.1 introduced a new root certificate authority (CA) format. As a result, when updating the Ceph Object Gateway in multi-site deployments from Red Hat Ceph Storage versions 8.0 or earlier, you must complete additional steps to enable cephadm self-signed certificates.
Use this procedure in any of the following scenarios:
- You are upgrading from Red Hat Ceph Storage versions 8.0 or earlier and are planning on using cephadm self-signed certificates.
- Cephadm self-signed certificates are expired.
If you do not plan on using cephadm self-signed certificates or the root CA commonName
has the cephadm-root-<FSID>
format and the certificates are valid, this procedure is not required.
Prerequisites
Before you begin, make sure that your Red Hat Ceph Storage system is using the old-style root CA:
ceph orch certmgr cert ls
- If the root CA commonName has the cephadm-root format, it is in the old style. Continue with the procedure documented here.
- If the root CA commonName has the cephadm-root-<FSID> format, your cluster has already been upgraded to the new format and no further actions are required.
Procedure
Remove the previous root CA.
Syntax
ceph orch certmgr rm cephadm_root_ca_cert
Reload the certmgr to force creating the new root CA.
Syntax
ceph orch certmgr reload
Verify that the certmgr has been loaded correctly.
Syntax
ceph orch certmgr cert ls
Ensure that the cephadm_root_ca_cert subject has the following format: cephadm-root-<FSID>
The root CA is upgraded.
- Reissue new certificates for all services that use cephadm self-signed certificates.
Refresh any existing Grafana services.
Remove all Grafana certificates.
This step must be done for each Grafana service that uses self-signed certificates.
Syntax
ceph orch certmgr cert rm grafana_cert --hostname=HOSTNAME
Only continue once all certificates are removed.
Redeploy Grafana.
Syntax
ceph orch redeploy grafana
Verify that all Grafana services are running correctly.
Syntax
ceph orch ps --service-name grafana
Verify that new cephadm-signed certificates have been created for Grafana.
Syntax
ceph orch certmgr cert ls
Refresh any existing Ceph Object Gateway (RGW) services.
This step must be done for each Ceph Object Gateway service that uses self-signed certificates.
Syntax
ceph orch certmgr cert rm rgw_frontend_ssl_cert --service-name RGW_SERVICE_NAME
ceph orch redeploy RGW_SERVICE_NAME
Verify that the Ceph Object Gateway (RGW) service is deployed and running as expected.
Syntax
ceph orch ps --service-name RGW_SERVICE_NAME
Verify the certificates.
Syntax
ceph orch certmgr cert ls
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Upgrade the Ceph Dashboard certificates.
Generate the cephadm-signed certificates for the dashboard.
From the cephadm command-line interface (CLI), run the following commands:
Syntax
json="$(ceph orch certmgr generate-certificates dashboard)" printf '%s' "$json" | jq -r '.cert' > dashboard.crt printf '%s' "$json" | jq -r '.key' > dashboard.key
json="$(ceph orch certmgr generate-certificates dashboard)" printf '%s' "$json" | jq -r '.cert' > dashboard.crt printf '%s' "$json" | jq -r '.key' > dashboard.key
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Set the new generated certificate and key.
From the Ceph command-line interface (CLI), run the following commands:
Syntax
ceph dashboard set-ssl-certificate -i dashboard.crt
ceph dashboard set-ssl-certificate-key -i dashboard.key
The dashboard.crt and dashboard.key pair files are generated.
Disable and then enable the Ceph Dashboard to load the new certificates.
Syntax
ceph mgr module disable dashboard
ceph mgr module enable dashboard
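As an optional check that is not part of the original procedure, the dashboard endpoint can be listed after the module is re-enabled to confirm that it is serving again:
Example
[ceph: root@host01 /]# ceph mgr services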
Chapter 2. Upgrading a host operating system from RHEL 8 to RHEL 9
You can perform a Red Hat Ceph Storage host operating system upgrade from Red Hat Enterprise Linux 8 to Red Hat Enterprise Linux 9 using the Leapp utility.
Prerequisites
- A running Red Hat Ceph Storage 6.1 or 7.1 cluster (latest relevant cluster version required).
The following are the supported combinations of containerized Ceph daemons. For more information, see the How colocation works and its advantages section in the Red Hat Ceph Storage Installation Guide.
- Ceph Metadata Server (ceph-mds), Ceph OSD (ceph-osd), and Ceph Object Gateway (radosgw)
- Ceph Monitor (ceph-mon) or Ceph Manager (ceph-mgr), Ceph OSD (ceph-osd), and Ceph Object Gateway (radosgw)
- Ceph Monitor (ceph-mon), Ceph Manager (ceph-mgr), Ceph OSD (ceph-osd), and Ceph Object Gateway (radosgw)
Procedure
- Deploy the latest Red Hat Ceph Storage 6.1 or 7.1 cluster on Red Hat Enterprise Linux 8.8 using the service specifications.
Verify that the cluster contains two admin nodes, so that while performing the host upgrade on one admin node (with the _admin label), the second admin node can be used to manage the cluster (see the check after this step).
For full instructions, see Red Hat Ceph Storage installation in the Red Hat Ceph Storage Installation Guide and Deploying the Ceph daemons using the service specifications in the Operations guide.
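A hedged way to confirm the two admin nodes is to filter the host list by the _admin label (host names in the output are cluster-specific):
Example
[ceph: root@host01 /]# ceph orch host ls --label _admin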
Set the noout flag on the Ceph OSD.
Example
[ceph: root@host01 /]# ceph osd set noout
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Perform host upgrade one node at a time using the Leapp utility.
Put respective node maintenance mode before performing host upgrade using Leapp.
Syntax
ceph orch host maintenance enter HOST
Example
ceph orch host maintenance enter host01
Enable the Ceph tools repository manually by running the leapp command with the --enablerepo parameter.
Example
leapp upgrade --enablerepo rhceph-8-tools-for-rhel-9-x86_64-rpms
Refer to Upgrading RHEL 8 to RHEL 9 within the Red Hat Enterprise Linux product documentation on the Red Hat Customer Portal.
Important: After performing an in-place upgrade from Red Hat Enterprise Linux 8 to Red Hat Enterprise Linux 9, you need to manually enable and start the logrotate.timer service.
# systemctl start logrotate.timer
# systemctl enable logrotate.timer
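After the leapp upgrade completes and the node reboots into Red Hat Enterprise Linux 9, the node typically needs to be taken out of maintenance mode again; this step is implied rather than spelled out above, and the host name is illustrative:
Syntax
ceph orch host maintenance exit HOST
Example
ceph orch host maintenance exit host01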
Verify the new IMAGE_ID and VERSION of the Ceph cluster:
Example
[ceph: root@node0 /]# ceph version
[ceph: root@node0 /]# ceph orch ps
- Continue with the latest Red Hat Ceph Storage 6.1 or 7.1 to Red Hat Ceph Storage 8 upgrade by following the steps listed in Upgrade a Red Hat Ceph Storage cluster using `cephadm`.
Chapter 3. Upgrading RHCS 6 to RHCS 8 involving RHEL 8 to RHEL 9 upgrades with stretch mode enabled
You can perform an upgrade from Red Hat Ceph Storage 6 to Red Hat Ceph Storage 8 involving Red Hat Enterprise Linux 8 to Red Hat Enterprise Linux 9 with the stretch mode enabled.
Upgrade to the latest version of Red Hat Ceph Storage 6 prior to upgrading to the latest version of Red Hat Ceph Storage 8.
Prerequisites
- Red Hat Ceph Storage 6 on Red Hat Enterprise Linux 8 with necessary hosts and daemons running with stretch mode enabled.
- Backup of the Ceph binary (/usr/sbin/cephadm), ceph.pub (/etc/ceph), and the Ceph cluster’s public SSH keys from the admin node.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Label a second node as the admin in the cluster to manage the cluster when the admin node is re-provisioned.
Syntax
ceph orch host label add HOSTNAME _admin
Example
[ceph: root@host01 /]# ceph orch host label add host02 _admin
Set the noout flag.
Example
[ceph: root@host01 /]# ceph osd set noout
Drain all the daemons from the host:
Syntax
ceph orch host drain HOSTNAME --force
Example
[ceph: root@host01 /]# ceph orch host drain host02 --force
The _no_schedule label is automatically applied to the host, which blocks deployment.
Check if all the daemons are removed from the storage cluster:
Syntax
ceph orch ps HOSTNAME
Example
[ceph: root@host01 /]# ceph orch ps host02
Zap the devices so that if the hosts being drained have OSDs present, they can be used to re-deploy OSDs when the host is added back.
Syntax
ceph orch device zap HOSTNAME DISK --force
Example
[ceph: root@host01 /]# ceph orch device zap ceph-host02 /dev/vdb --force
zap successful for /dev/vdb on ceph-host02
Check the status of OSD removal:
Example
[ceph: root@host01 /]# ceph orch osd rm status
When no placement groups (PG) are left on the OSD, the OSD is decommissioned and removed from the storage cluster.
Remove the host from the cluster:
Syntax
ceph orch host rm HOSTNAME --force
Example
[ceph: root@host01 /]# ceph orch host rm host02 --force
- Re-provision the respective hosts from RHEL 8 to RHEL 9 as described in Upgrading from RHEL 8 to RHEL 9.
Run the preflight playbook with the --limit option:
Syntax
ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --limit NEWHOST_NAME
Example
[ceph: root@host01 /]# ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=rhcs" --limit host02
The preflight playbook installs podman, lvm2, chronyd, and cephadm on the new host. After installation is complete, cephadm resides in the /usr/sbin/ directory.
Extract the cluster’s public SSH keys to a folder:
Syntax
ceph cephadm get-pub-key ~/PATH
Example
[ceph: root@host01 /]# ceph cephadm get-pub-key ~/ceph.pub
Copy the Ceph cluster’s public SSH keys to the re-provisioned node:
Syntax
ssh-copy-id -f -i ~/PATH root@HOST_NAME_2
Example
[ceph: root@host01 /]# ssh-copy-id -f -i ~/ceph.pub root@host02
Optional: If the removed host has a monitor daemon, then before adding the host to the cluster, add the --unmanaged flag to the monitor deployment.
Syntax
ceph orch apply mon PLACEMENT --unmanaged
Add the host again to the cluster and add the labels present earlier:
Syntax
ceph orch host add HOSTNAME IP_ADDRESS --labels=LABELS
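A hedged example of the preceding syntax, with an illustrative IP address and label set:
Example
[ceph: root@host01 /]# ceph orch host add host02 10.0.211.62 --labels=mon,osd,_admin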
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Optional: If the removed host had a monitor daemon deployed originally, the monitor daemon needs to be added back manually with the location attributes as described in Replacing the tiebreaker with a new monitor.
Syntax
ceph mon add HOSTNAME IP LOCATION
Example
[ceph: root@host01 /]# ceph mon add ceph-host02 10.0.211.62 datacenter=DC2
Syntax
ceph orch daemon add mon HOSTNAME
Example
[ceph: root@host01 /]# ceph orch daemon add mon ceph-host02
Verify that the daemons on the re-provisioned host are running successfully with the same Ceph version:
Syntax
ceph orch ps
Set the monitor daemon placement back to managed.
Syntax
ceph orch apply mon PLACEMENT
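A hedged example of the preceding syntax; the placement host list is illustrative:
Example
[ceph: root@host01 /]# ceph orch apply mon --placement="host01 host02 host03"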
Repeat the above steps for all hosts.
- The arbiter monitor cannot be drained or removed from the host. Therefore, the arbiter monitor needs to be re-provisioned to another tie-breaker node, and then drained or removed from the host as described in Replacing the tiebreaker with a new monitor.
- Follow the same approach to re-provision admin nodes and use a second admin node to manage clusters.
- Add the backup files again to the node.
- Add the admin nodes again to the cluster by using the second admin node. Set the mon deployment to unmanaged.
- Re-add the old arbiter monitor and remove the temporary monitor created earlier. For more information, see Replacing the tiebreaker with a new monitor.
Unset the noout flag.
Syntax
ceph osd unset noout
- Verify the Ceph version and the cluster status to ensure that all daemons are working as expected after the Red Hat Enterprise Linux upgrade.
- Follow Upgrade a Red Hat Ceph Storage cluster using `cephadm` to perform a Red Hat Ceph Storage 6 to Red Hat Ceph Storage 8 upgrade.
Chapter 4. Staggered upgrade
As a storage administrator, you can upgrade Red Hat Ceph Storage components in phases rather than all at once. The ceph orch upgrade
command enables you to specify options to limit which daemons are upgraded by a single upgrade command.
If you want to upgrade from a version that does not support staggered upgrades, you must first manually upgrade the Ceph Manager (ceph-mgr
) daemons. For more information on performing a staggered upgrade from previous releases, see Performing a staggered upgrade from previous releases.
4.1. Staggered upgrade options
The ceph orch upgrade
command supports several options to upgrade cluster components in phases. The staggered upgrade options include:
--daemon_types
- The --daemon_types option takes a comma-separated list of daemon types and will only upgrade daemons of those types. Valid daemon types for this option include mgr, mon, crash, osd, mds, rgw, rbd-mirror, cephfs-mirror, and nfs.
--services
- The --services option is mutually exclusive with --daemon-types, only takes services of one type at a time, and will only upgrade daemons belonging to those services. For example, you cannot provide an OSD and RGW service simultaneously.
--hosts
- You can combine the --hosts option with --daemon_types, --services, or use it on its own. The --hosts option parameter follows the same format as the command line options for orchestrator CLI placement specification.
--limit
- The --limit option takes an integer greater than zero and provides a numerical limit on the number of daemons cephadm will upgrade. You can combine the --limit option with --daemon_types, --services, or --hosts. For example, if you specify to upgrade daemons of type osd on host01 with a limit set to 3, cephadm will upgrade up to three OSD daemons on host01 (see the sketch after this list).
--topological-labels
- The --topological-labels option upgrades a specific set of hosts that have the specified topological label. For example, if the label datacenter=A is used on a set of hosts, you can target these specific hosts for upgrade, without needing to manually input each host.
Note: --topological-labels is valid from Red Hat Ceph Storage 8.1 or later.
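As a hedged illustration of combining these options, the osd-on-host01-with-a-limit-of-3 scenario described above could be expressed as follows (the image name is illustrative):
Example
[ceph: root@host01 /]# ceph orch upgrade start --image registry.redhat.io/rhceph/rhceph-8-rhel9:latest --daemon-types osd --hosts host01 --limit 3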
4.1.1. Performing a staggered upgrade
As a storage administrator, you can use the ceph orch upgrade
options to limit which daemons are upgraded by a single upgrade command.
Cephadm strictly enforces an order for the upgrade of daemons that is still present in staggered upgrade scenarios. The current upgrade order is:
- Ceph Manager nodes
- Ceph Monitor nodes
- Ceph-crash daemons
- Ceph OSD nodes
- Ceph Metadata Server (MDS) nodes
- Ceph Object Gateway (RGW) nodes
- Ceph RBD-mirror node
- CephFS-mirror node
- Ceph NFS nodes
If you specify parameters that upgrade daemons out of order, the upgrade command blocks and notes which daemons you need to upgrade before you proceed.
Example
[ceph: root@host01 /]# ceph orch upgrade start --image registry.redhat.io/rhceph/rhceph-8-rhel9:latest --hosts host02
Error EINVAL: Cannot start upgrade. Daemons with types earlier in upgrade order than daemons on given host need upgrading.
Please first upgrade mon.ceph-host01
NOTE: Enforced upgrade order is: mgr -> mon -> crash -> osd -> mds -> rgw -> rbd-mirror -> cephfs-mirror
There is no required order for restarting the instances. Red Hat recommends restarting the instance pointing to the pool with primary images followed by the instance pointing to the mirrored pool.
Prerequisites
Before you begin, make sure that you have the following prerequisites in place:
- Latest version of a supported Red Hat Ceph Storage cluster.
- Root-level access to all the nodes.
- At least two Ceph Manager nodes in the storage cluster: one active and one standby.
- From Red Hat Ceph Storage 8.1 or later, topological labels set on a host. Use topological labels to upgrade specific hosts. For more information, see Adding topological labels to a host.
Procedure
Log into the cephadm shell:
Example
[root@host01 ~]# cephadm shell
Ensure all the hosts are online and that the storage cluster is healthy:
Example
[ceph: root@host01 /]# ceph -s
Set the OSD noout, noscrub, and nodeep-scrub flags to prevent OSDs from getting marked out during upgrade and to avoid unnecessary load on the cluster:
Example
[ceph: root@host01 /]# ceph osd set noout
[ceph: root@host01 /]# ceph osd set noscrub
[ceph: root@host01 /]# ceph osd set nodeep-scrub
Check service versions and the available target containers:
Syntax
ceph orch upgrade check IMAGE_NAME
Example
[ceph: root@host01 /]# ceph orch upgrade check registry.redhat.io/rhceph/rhceph-8-rhel9:latest
Upgrade the storage cluster in one of the following ways:
Upgrade specific daemon types on specific hosts.
Syntax
ceph orch upgrade start --image IMAGE_NAME --daemon-types DAEMON_TYPE1,DAEMON_TYPE2 --hosts HOST1,HOST2
Example
[ceph: root@host01 /]# ceph orch upgrade start --image registry.redhat.io/rhceph/rhceph-8-rhel9:latest --daemon-types mgr,mon --hosts host02,host03
Specify specific services and limit the number of daemons to upgrade.
Note: In staggered upgrade scenarios, if using a limiting parameter, the monitoring stack daemons, including Prometheus and node-exporter, are refreshed after the upgrade of the Ceph Manager daemons. As a result of the limiting parameter, Ceph Manager upgrades take longer to complete. The versions of monitoring stack daemons might not change between Ceph releases, in which case they are only redeployed.
Note: Upgrade commands with limiting parameters validate the options before beginning the upgrade, which can require pulling the new container image. As a result, the upgrade start command might take a while to return when you provide limiting parameters.
Syntax
ceph orch upgrade start --image IMAGE_NAME --services SERVICE1,SERVICE2 --limit LIMIT_NUMBER
Example
[ceph: root@host01 /]# ceph orch upgrade start --image registry.redhat.io/rhceph/rhceph-8-rhel9:latest --services rgw.example1,rgw1.example2 --limit 2
Valid on Red Hat Ceph Storage 8.1 or later only. Specify specific topological labels to run the upgrade on specific hosts.
Note: Using topological labels does not allow changing the required daemon upgrade order.
Syntax
ceph orch upgrade start --topological-labels TOPOLOGICAL_LABEL
Example
[ceph: root@host01 /]# ceph orch upgrade start --topological-labels datacenter=A
Valid on Red Hat Ceph Storage 8.1 or later only with topological labels. Specify specific topological labels to run the upgrade on specific hosts, with the --daemon-types option.
Use the --daemon-types option for hosts with a scenario similar to the following example: a host exists with both Monitor and OSD daemons that match datacenter=A, but there are nonupgraded Monitor daemons on hosts that do not match datacenter=A. In this case, you only want to upgrade the Monitor daemons on the datacenter=A host, so as not to break the upgrade order. In this scenario, use the following command to upgrade only the Monitor daemons on datacenter=A, but not the OSD daemons.
Syntax
ceph orch upgrade start IMAGE_NAME --daemon-types DAEMON_TYPE1 --topological-labels TOPOLOGICAL_LABEL
Example
[ceph: root@host01 /]# ceph orch upgrade start registry.redhat.io/rhceph/rhceph-8-rhel9:latest --daemon-types mon --topological-labels datacenter=A
Use the ceph orch upgrade status command to verify what was selected for upgrade. View the "which" field in the output.
Example
To see which daemons you still need to upgrade, run the ceph orch upgrade check or ceph versions command:
Example
[ceph: root@host01 /]# ceph orch upgrade check --image registry.redhat.io/rhceph/rhceph-8-rhel9:latest
To complete the staggered upgrade, verify the upgrade of all remaining services:
Syntax
ceph orch upgrade start --image IMAGE_NAME
Example
[ceph: root@host01 /]# ceph orch upgrade start --image registry.redhat.io/rhceph/rhceph-8-rhel9:latest
Verification
Verify the new IMAGE_ID and VERSION of the Ceph cluster:
Example
[ceph: root@host01 /]# ceph versions
[ceph: root@host01 /]# ceph orch ps
When the upgrade is complete, unset the noout, noscrub, and nodeep-scrub flags:
Example
[ceph: root@host01 /]# ceph osd unset noout
[ceph: root@host01 /]# ceph osd unset noscrub
[ceph: root@host01 /]# ceph osd unset nodeep-scrub
4.1.2. Performing a staggered upgrade from previous releases
You can perform a staggered upgrade on your storage cluster by providing the necessary arguments. If you want to upgrade from a version that does not support staggered upgrades, you must first manually upgrade the Ceph Manager (ceph-mgr
) daemons. Once you have upgraded the Ceph Manager daemons, you can pass the limiting parameters to complete the staggered upgrade.
Verify you have at least two running Ceph Manager daemons before attempting this procedure.
Prerequisites
- A cluster running Red Hat Ceph Storage 7.1 or earlier.
- At least two Ceph Manager nodes in the storage cluster: one active and one standby.
Procedure
Log into the Cephadm shell:
Example
[root@host01 ~]# cephadm shell
Determine which Ceph Manager is active and which are standby:
Example
Manually upgrade each standby Ceph Manager daemon:
Syntax
ceph orch daemon redeploy mgr.ceph-HOST.MANAGER_ID --image IMAGE_ID
Example
[ceph: root@host01 /]# ceph orch daemon redeploy mgr.ceph-host02.pzgrhz --image registry.redhat.io/rhceph/rhceph-8-rhel9:latest
Fail over to the upgraded standby Ceph Manager:
Example
[ceph: root@host01 /]# ceph mgr fail
Check that the standby Ceph Manager is now active:
Example
Verify that the active Ceph Manager is upgraded to the new version:
Syntax
ceph tell mgr.ceph-HOST.MANAGER_ID version
Copy to Clipboard Copied! Toggle word wrap Toggle overflow Example
- Repeat steps 2 - 6 to upgrade the remaining Ceph Managers to the new version.
Check that all Ceph Managers are upgraded to the new version:
Example
[ceph: root@host01 /]# ceph mgr versions
{
    "ceph version 18.2.0-128.el9cp (600e227816517e2da53d85f2fab3cd40a7483372) reef (stable)": 2
}
- Once you upgrade all your Ceph Managers, you can specify the limiting parameters and complete the remainder of the staggered upgrade, as sketched below.
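As a hedged illustration of that last step, once every Ceph Manager is on the new version you could continue through the remaining daemon types in the enforced order, for example:
Example
[ceph: root@host01 /]# ceph orch upgrade start --image registry.redhat.io/rhceph/rhceph-8-rhel9:latest --daemon-types mon,crash,osd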
Chapter 5. Monitoring and managing upgrade of the storage cluster
After running the ceph orch upgrade start
command to upgrade the Red Hat Ceph Storage cluster, you can check the status, pause, resume, or stop the upgrade process. The health of the cluster changes to HEALTH_WARNING
during an upgrade. If a host in the cluster is offline, the upgrade is paused.
You have to upgrade one daemon type after the other. If a daemon cannot be upgraded, the upgrade is paused.
Prerequisites
- A running Red Hat Ceph Storage cluster.
- Root-level access to all the nodes.
- At least two Ceph Manager nodes in the storage cluster: one active and one standby.
- Upgrade for the storage cluster initiated.
Procedure
Determine whether an upgrade is in process and the version to which the cluster is upgrading:
Example
[ceph: root@node0 /]# ceph orch upgrade status
Note: You do not receive a message when the upgrade completes successfully. Run the ceph versions and ceph orch ps commands to verify the new image ID and the version of the storage cluster.
Optional: Pause the upgrade process:
Example
[ceph: root@node0 /]# ceph orch upgrade pause
Optional: Resume a paused upgrade process:
Example
[ceph: root@node0 /]# ceph orch upgrade resume
Optional: Stop the upgrade process:
Example
[ceph: root@node0 /]# ceph orch upgrade stop
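To follow the upgrade messages as cephadm emits them, the cluster log can also be watched from the cephadm shell; this is an optional suggestion rather than part of the documented procedure:
Example
[ceph: root@node0 /]# ceph -W cephadm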
Chapter 6. Troubleshooting upgrade error messages
The following table shows some cephadm
upgrade error messages. If the cephadm
upgrade fails for any reason, an error message appears in the storage cluster health status.
Error Message | Description |
---|---|
UPGRADE_NO_STANDBY_MGR | Ceph requires both active and standby manager daemons to proceed, but there is currently no standby. |
UPGRADE_FAILED_PULL | Ceph was unable to pull the container image for the target version. This can happen if you specify a version or container image that does not exist (e.g., 1.2.3), or if the container registry is not reachable from one or more hosts in the cluster. |
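To read the full text of these warnings on a live cluster, the health detail output can be inspected (a hedged example):
Example
[ceph: root@host01 /]# ceph health detail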