Chapter 1. Upgrade a Red Hat Ceph Storage cluster using cephadm
As a storage administrator, you can use the cephadm Orchestrator to upgrade to Red Hat Ceph Storage 8 with the ceph orch upgrade command.
Upgrading directly from Red Hat Ceph Storage 6 to Red Hat Ceph Storage 8 is supported.
The automated upgrade process follows Ceph best practices. For example:
- The upgrade order starts with Ceph Managers, Ceph Monitors, then other daemons.
- Each daemon is restarted only after Ceph indicates that the cluster will remain available.
The storage cluster health status is likely to switch to HEALTH_WARNING
during the upgrade. When the upgrade is complete, the health status should switch back to HEALTH_OK.
You do not get a message when the upgrade is successful. Run the ceph versions and ceph orch ps commands to verify the new image ID and the version of the storage cluster.
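While an upgrade is in progress, you can also query the orchestrator for its current state from the cephadm shell. This is a minimal check; the output fields can vary between releases.
Example
[ceph: root@host01 /]# ceph orch upgrade status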
1.1. Compatibility considerations between RHCS and podman versions
podman and Red Hat Ceph Storage have different end-of-life strategies that might make it challenging to find compatible versions.
Red Hat recommends using the podman version shipped with the corresponding Red Hat Enterprise Linux version for Red Hat Ceph Storage. See the Red Hat Ceph Storage: Supported configurations knowledge base article for more details. See the Contacting Red Hat support for service section in the Red Hat Ceph Storage Troubleshooting Guide for additional assistance.
The following table shows version compatibility between Red Hat Ceph Storage 8 and versions of podman.
Ceph | Podman 1.9 | Podman 2.0 | Podman 2.1 | Podman 2.2 | Podman 3.0 | Podman >3.0
---|---|---|---|---|---|---
Red Hat Ceph Storage 8 | false | true | true | false | true | true
You must use a version of Podman that is 2.0.0 or higher.
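As a quick preliminary check, you can confirm the installed Podman version on each host before starting the upgrade:
Example
[root@host01 ~]# podman --version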
1.2. Upgrading the Red Hat Ceph Storage cluster
You can use the ceph orch upgrade command to upgrade a Red Hat Ceph Storage cluster.
Prerequisites
- A running Red Hat Ceph Storage cluster on the latest version.
- Root-level access to all the nodes.
- An Ansible user with sudo and passwordless ssh access to all nodes in the storage cluster.
- At least two Ceph Manager nodes in the storage cluster: one active and one standby.
Procedure
Register the node, and when prompted, enter your Red Hat Customer Portal credentials:
Syntax
subscription-manager register
Pull the latest subscription data from the CDN:
Syntax
subscription-manager refresh
Note: Red Hat customers now use Simple Content Access (SCA). As a result, attaching a subscription to the system is no longer necessary. Registering the system and enabling the required repositories is sufficient. For more information, see How to register and subscribe a RHEL system to the Red Hat Customer Portal using Red Hat Subscription-Manager? on the Red Hat Customer Portal.
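As an optional check, you can confirm the registration status and the currently enabled repositories before updating packages. The repository IDs that appear depend on your Red Hat Enterprise Linux release.
Example
[root@admin ~]# subscription-manager status
[root@admin ~]# subscription-manager repos --list-enabled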
Update the system to receive the latest packages for Red Hat Enterprise Linux:
Syntax
dnf update
Enable the Ceph Ansible repositories on the Ansible administration node:
Red Hat Enterprise Linux 9
subscription-manager repos --enable=rhceph-8-tools-for-rhel-9-x86_64-rpms
Update the cephadm, cephadm-ansible, and crun packages:
Example
[root@admin ~]# dnf update cephadm
[root@admin ~]# dnf update cephadm-ansible
[root@admin ~]# dnf update crun
Navigate to the /usr/share/cephadm-ansible/ directory:
Example
[root@admin ~]# cd /usr/share/cephadm-ansible
Run the preflight playbook with the upgrade_ceph_packages parameter set to true on the bootstrapped host in the storage cluster:
Syntax
ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars "ceph_origin=rhcs upgrade_ceph_packages=true"
Example
[ceph-admin@admin cephadm-ansible]$ ansible-playbook -i hosts cephadm-preflight.yml --extra-vars "ceph_origin=rhcs upgrade_ceph_packages=true"
This playbook upgrades the cephadm package on all the nodes.
Log into the cephadm shell:
Example
[root@host01 ~]# cephadm shell
Ensure all the hosts are online and that the storage cluster is healthy:
Example
[ceph: root@host01 /]# ceph -s
Set the OSD noout, noscrub, and nodeep-scrub flags to prevent OSDs from getting marked out during upgrade and to avoid unnecessary load on the cluster:
Example
[ceph: root@host01 /]# ceph osd set noout
[ceph: root@host01 /]# ceph osd set noscrub
[ceph: root@host01 /]# ceph osd set nodeep-scrub
Check service versions and the available target containers:
Syntax
ceph orch upgrade check IMAGE_NAME
Example
[ceph: root@host01 /]# ceph orch upgrade check registry.redhat.io/rhceph/rhceph-8-rhel9:latest
Note: The image name is applicable for both Red Hat Enterprise Linux 8 and Red Hat Enterprise Linux 9.
Upgrade the storage cluster:
Syntax
ceph orch upgrade start IMAGE_NAME
Example
[ceph: root@host01 /]# ceph orch upgrade start registry.redhat.io/rhceph/rhceph-8-rhel9:latest
Note: To perform a staggered upgrade, see Performing a staggered upgrade.
While the upgrade is underway, a progress bar appears in the ceph status output.
Example
[ceph: root@host01 /]# ceph status
[...]
progress:
    Upgrade to 18.2.0-128.el9cp (1s)
      [............................]
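If you need to interrupt a running upgrade, for example to investigate a warning, the orchestrator can pause, resume, or stop it. This is a brief sketch of those controls; stopping an upgrade leaves the cluster partially upgraded until you start the upgrade again.
Example
[ceph: root@host01 /]# ceph orch upgrade pause
[ceph: root@host01 /]# ceph orch upgrade resume
[ceph: root@host01 /]# ceph orch upgrade stop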
Verify the new IMAGE_ID and VERSION of the Ceph cluster:
Example
[ceph: root@host01 /]# ceph versions
[ceph: root@host01 /]# ceph orch ps
Note: If you are not using the cephadm-ansible playbooks, after upgrading your Ceph cluster, you must upgrade the ceph-common package and client libraries on your client nodes.
Example
[root@client01 ~]# dnf update ceph-common
Verify you have the latest version:
Example
[root@client01 ~]# ceph --version
When the upgrade is complete, unset the noout, noscrub, and nodeep-scrub flags:
Example
[ceph: root@host01 /]# ceph osd unset noout
[ceph: root@host01 /]# ceph osd unset noscrub
[ceph: root@host01 /]# ceph osd unset nodeep-scrub
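Optionally, confirm that the flags are cleared by checking the OSD map; the noout, noscrub, and nodeep-scrub entries should no longer appear in the flags line.
Example
[ceph: root@host01 /]# ceph osd dump | grep flags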
1.3. Upgrading the Red Hat Ceph Storage cluster in a disconnected environment
You can upgrade the storage cluster in a disconnected environment by using the --image tag, as shown in the sketch below.
You can use the ceph orch upgrade command to upgrade a Red Hat Ceph Storage cluster.
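The --image option lets you pass the locally mirrored container image to the upgrade explicitly. The following is a sketch that assumes the registry and image path used in the examples later in this procedure (LOCAL_NODE_FQDN:5000/rhceph/rhceph-8-rhel9).
Example
[ceph: root@node0 /]# ceph orch upgrade start --image LOCAL_NODE_FQDN:5000/rhceph/rhceph-8-rhel9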
Prerequisites
- A running Red Hat Ceph Storage cluster on the latest version.
- Root-level access to all the nodes.
- An Ansible user with sudo and passwordless ssh access to all nodes in the storage cluster.
- At least two Ceph Manager nodes in the storage cluster: one active and one standby.
- Register the nodes to CDN and attach subscriptions.
- Check for the customer container images in a disconnected environment and change the configuration, if required. See the Changing configurations of custom container images for disconnected installations section in the Red Hat Ceph Storage Installation Guide for more details.
By default, the monitoring stack components are deployed based on the primary Ceph image. For a storage cluster in a disconnected environment, you must use the latest available monitoring stack component images; one way to point cephadm at locally mirrored copies is sketched after the following table.
Monitoring stack component | Image details |
---|---|
Prometheus | registry.redhat.io/openshift4/ose-prometheus:v4.15 |
Grafana | registry.redhat.io/rhceph/grafana-rhel9:latest |
Node-exporter | registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.15 |
AlertManager | registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.15 |
HAProxy | registry.redhat.io/rhceph/rhceph-haproxy-rhel9:latest |
Keepalived | registry.redhat.io/rhceph/keepalived-rhel9:latest |
SNMP Gateway | registry.redhat.io/rhceph/snmp-notifier-rhel9:latest |
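If cephadm should pull these monitoring stack components from a local registry rather than registry.redhat.io, one way is to point the corresponding cephadm container image settings at your mirrored copies before starting the upgrade. This is a minimal sketch under the assumption that your registry is reachable at LOCAL_NODE_FQDN:5000 and mirrors the images under the same paths; adjust the paths to match your mirror.
Example
[ceph: root@node0 /]# ceph config set mgr mgr/cephadm/container_image_prometheus LOCAL_NODE_FQDN:5000/openshift4/ose-prometheus:v4.15
[ceph: root@node0 /]# ceph config set mgr mgr/cephadm/container_image_grafana LOCAL_NODE_FQDN:5000/rhceph/grafana-rhel9:latest
[ceph: root@node0 /]# ceph config set mgr mgr/cephadm/container_image_node_exporter LOCAL_NODE_FQDN:5000/openshift4/ose-prometheus-node-exporter:v4.15
[ceph: root@node0 /]# ceph config set mgr mgr/cephadm/container_image_alertmanager LOCAL_NODE_FQDN:5000/openshift4/ose-prometheus-alertmanager:v4.15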
Procedure
Enable the Ceph Ansible repositories on the Ansible administration node:
Red Hat Enterprise Linux 9
subscription-manager repos --enable=rhceph-8-tools-for-rhel-9-x86_64-rpms
Update the cephadm and cephadm-ansible packages:
Example
[root@admin ~]# dnf update cephadm
[root@admin ~]# dnf update cephadm-ansible
Navigate to the /usr/share/cephadm-ansible/ directory:
Example
[root@admin ~]# cd /usr/share/cephadm-ansible
Run the preflight playbook with the upgrade_ceph_packages parameter set to true and the ceph_origin parameter set to custom on the bootstrapped host in the storage cluster:
Syntax
ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars "ceph_origin=custom upgrade_ceph_packages=true"
Example
[ceph-admin@admin ~]$ ansible-playbook -i /etc/ansible/hosts cephadm-preflight.yml --extra-vars "ceph_origin=custom upgrade_ceph_packages=true"
This playbook upgrades the cephadm package on all the nodes.
Log into the cephadm shell:
Example
[root@node0 ~]# cephadm shell
Ensure all the hosts are online and that the storage cluster is healthy:
Example
[ceph: root@node0 /]# ceph -s
Set the OSD noout, noscrub, and nodeep-scrub flags to prevent OSDs from getting marked out during upgrade and to avoid unnecessary load on the cluster:
Example
[ceph: root@host01 /]# ceph osd set noout
[ceph: root@host01 /]# ceph osd set noscrub
[ceph: root@host01 /]# ceph osd set nodeep-scrub
Check service versions and the available target containers:
Syntax
ceph orch upgrade check IMAGE_NAME
Example
[ceph: root@node0 /]# ceph orch upgrade check LOCAL_NODE_FQDN:5000/rhceph/rhceph-8-rhel9
Upgrade the storage cluster:
Syntax
ceph orch upgrade start IMAGE_NAME
Example
[ceph: root@node0 /]# ceph orch upgrade start LOCAL_NODE_FQDN:5000/rhceph/rhceph-8-rhel9
While the upgrade is underway, a progress bar appears in the ceph status output.
Example
[ceph: root@node0 /]# ceph status
[...]
progress:
    Upgrade to 18.2.0-128.el9cp (1s)
      [............................]
Verify the new IMAGE_ID and VERSION of the Ceph cluster:
Example
[ceph: root@node0 /]# ceph version
[ceph: root@node0 /]# ceph versions
[ceph: root@node0 /]# ceph orch ps
When the upgrade is complete, unset the noout, noscrub, and nodeep-scrub flags:
Example
[ceph: root@host01 /]# ceph osd unset noout
[ceph: root@host01 /]# ceph osd unset noscrub
[ceph: root@host01 /]# ceph osd unset nodeep-scrub
1.4. Updating Ceph Object Gateway to use cephadm self-signed certificates in a multi-site deployment
Red Hat Ceph Storage 8.1 introduced a new root certificate authority (CA) format. As a result, when updating the Ceph Object Gateway in multi-site deployments from Red Hat Ceph Storage 8.0 or earlier, you must complete additional steps to enable cephadm self-signed certificates.
Use this procedure in any of the following scenarios:
- You are upgrading from Red Hat Ceph Storage versions 8.0 or earlier and are planning on using cephadm self-signed certificates.
- The cephadm self-signed certificates have expired.
If you do not plan on using cephadm self-signed certificates, or if the root CA commonName has the cephadm-root-<FSID> format and the certificates are valid, this procedure is not required.
Prerequisites
Before you begin, make sure that your Red Hat Ceph Storage cluster is using the earlier, old-style root CA. An optional openssl cross-check is shown after the following list.
ceph orch certmgr cert ls
- If the root CA commonName has the cephadm-root format, it is in the old style. Continue with the procedure documented here.
- If the root CA commonName has the cephadm-root-<FSID> format, your cluster has already been upgraded to the new format and no further actions are required.
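If you want to cross-check the commonName outside of the orchestrator output, one optional approach is to inspect the certificate chain presented by a service that uses the cephadm self-signed certificates, such as the Ceph Dashboard. HOST_FQDN and port 8443 are placeholders for your active Ceph Manager host and dashboard HTTPS port.
Example
[root@host01 ~]# echo | openssl s_client -connect HOST_FQDN:8443 -showcerts 2>/dev/null | openssl x509 -noout -issuer -subject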
Procedure
Remove the previous root CA.
Syntax
ceph orch certmgr rm cephadm_root_ca_cert
Reload the certmgr to force creating the new root CA.
Syntax
ceph orch certmgr reload
Verify that the certmgr has been loaded correctly.
Syntax
ceph orch certmgr cert ls
Ensure that the cephadm_root_ca_cert subject has the following format: cephadm-root-<FSID>
The root CA is upgraded.
- Reissue new certificates for all services that use cephadm self-signed certificates.
Refresh any existing Grafana services.
Remove all Grafana certificates.
This step must be done for each Grafana service that uses self-signed certificates.
Syntax
ceph orch certmgr cert rm grafana_cert --hostname=HOSTNAME
Only continue once all certificates are removed.
Redeploy Grafana.
Syntax
ceph orch redeploy grafana
Verify that all Grafana services are running correctly.
Syntax
ceph orch ps --service-name grafana
Verify that new cephadm-signed certificates have been created for Grafana.
Syntax
ceph orch certmgr cert ls
Refresh any existing Ceph Object Gateway (RGW) services.
This step must be done for each Ceph Object Gateway service that uses self-signed certificates.
Syntax
ceph orch certmgr cert rm rgw_frontend_ssl_cert --service-name RGW-SERVICE-NAME
ceph orch redeploy RGW-SERVICE-NAME
Verify that the Ceph Object Gateway (RGW) service is deployed and running as expected.
Syntax
ceph orch ps --service-name RGW_SERVICE_NAME
Verify the certificates.
Syntax
ceph orch certmgr cert ls
Upgrade the Ceph Dashboard certificates.
Generate the cephadm-signed certificates for the dashboard.
From the cephadm command-line interface (CLI), run the following commands:
Syntax
json="$(ceph orch certmgr generate-certificates dashboard)"
printf '%s' "$json" | jq -r '.cert' > dashboard.crt
printf '%s' "$json" | jq -r '.key' > dashboard.key
Set the newly generated certificate and key.
From the Ceph command-line interface (CLI), run the following commands:
Syntax
ceph dashboard set-ssl-certificate -i dashboard.crt
ceph dashboard set-ssl-certificate-key -i dashboard.key
The dashboard.crt and dashboard.key files are the pair generated in the previous step.
Disable and then re-enable the Ceph Dashboard to load the new certificates.
Syntax
ceph mgr module disable dashboard
ceph mgr module enable dashboard