Chapter 4. Staggered upgrade
As a storage administrator, you can upgrade Red Hat Ceph Storage components in phases rather than all at once. The ceph orch upgrade command enables you to specify options to limit which daemons are upgraded by a single upgrade command.
If you want to upgrade from a version that does not support staggered upgrades, you must first manually upgrade the Ceph Manager (ceph-mgr) daemons. For more information on performing a staggered upgrade from previous releases, see Performing a staggered upgrade from previous releases.
4.1. Staggered upgrade options
The ceph orch upgrade command supports several options to upgrade cluster components in phases. The staggered upgrade options include:
--daemon_types
    The --daemon_types option takes a comma-separated list of daemon types and only upgrades daemons of those types. Valid daemon types for this option include mgr, mon, crash, osd, mds, rgw, rbd-mirror, cephfs-mirror, and nfs.
--services
    The --services option is mutually exclusive with --daemon-types, takes services of only one type at a time, and only upgrades daemons belonging to those services. For example, you cannot provide an OSD service and an RGW service simultaneously.
--hosts
    You can combine the --hosts option with --daemon_types, --services, or use it on its own. The --hosts option parameter follows the same format as the command-line options for the orchestrator CLI placement specification.
--limit
    The --limit option takes an integer greater than zero and provides a numerical limit on the number of daemons cephadm will upgrade. You can combine the --limit option with --daemon_types, --services, or --hosts. For example, if you specify to upgrade daemons of type osd on host01 with a limit set to 3, cephadm upgrades up to three OSD daemons on host01.
--topological-labels
    The --topological-labels option upgrades a specific set of hosts that have the specified topological label. For example, if the label datacenter=A is used on a set of hosts, you can target those hosts for upgrade without manually entering each host name.

    Note: --topological-labels is valid on Red Hat Ceph Storage 8.1 or later.
4.1.1. Performing a staggered upgrade
As a storage administrator, you can use the ceph orch upgrade options to limit which daemons are upgraded by a single upgrade command.
Cephadm strictly enforces an upgrade order for daemons, and this order still applies in staggered upgrade scenarios. The current upgrade order is:
- Ceph Manager nodes
- Ceph Monitor nodes
- Ceph-crash daemons
- Ceph OSD nodes
- Ceph Metadata Server (MDS) nodes
- Ceph Object Gateway (RGW) nodes
- Ceph RBD-mirror node
- CephFS-mirror node
- Ceph NFS nodes
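As a quick illustration of how this ordering constrains a staggered upgrade, the following standalone shell sketch lists the daemon types that must already be upgraded before a given type. It is not part of cephadm; the `upgrade_order` array and the `types_before` helper are illustrative names that simply encode the order listed above.

```shell
#!/usr/bin/env bash
# Enforced cephadm upgrade order, earliest first (from the list above).
upgrade_order=(mgr mon crash osd mds rgw rbd-mirror cephfs-mirror nfs)

# Print every daemon type that must be upgraded before the given type.
types_before() {
    local target=$1 t
    for t in "${upgrade_order[@]}"; do
        [ "$t" = "$target" ] && return 0
        printf '%s\n' "$t"
    done
}

# For example, OSD daemons may only be upgraded once the mgr, mon,
# and crash daemons are already running the new version:
types_before osd
```

This mirrors the validation that cephadm itself performs: a staggered upgrade targeting osd daemons is rejected while any mgr, mon, or crash daemon still runs the old version.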
If you specify parameters that upgrade daemons out of order, the upgrade command blocks and notes which daemons you need to upgrade before you proceed.
Example
[ceph: root@host01 /]# ceph orch upgrade start --image registry.redhat.io/rhceph/rhceph-8-rhel9:latest --hosts host02
Error EINVAL: Cannot start upgrade. Daemons with types earlier in upgrade order than daemons on given host need upgrading.
Please first upgrade mon.ceph-host01
NOTE: Enforced upgrade order is: mgr -> mon -> crash -> osd -> mds -> rgw -> rbd-mirror -> cephfs-mirror
There is no required order for restarting the instances. Red Hat recommends restarting the instance pointing to the pool with primary images followed by the instance pointing to the mirrored pool.
Prerequisites
Before you begin, make sure that you have the following prerequisites in place:
- Latest version of a supported Red Hat Ceph Storage cluster.
- Root-level access to all the nodes.
- At least two Ceph Manager nodes in the storage cluster: one active and one standby.
- From Red Hat Ceph Storage 8.1 or later: topological labels set on hosts. Use topological labels to upgrade specific hosts. For more information, see Adding topological labels to a host.
Procedure
Log into the cephadm shell:

Example

[root@host01 ~]# cephadm shell

Ensure that all hosts are online and that the storage cluster is healthy:

Example

[ceph: root@host01 /]# ceph -s

Set the OSD noout, noscrub, and nodeep-scrub flags to prevent OSDs from getting marked out during the upgrade and to avoid unnecessary load on the cluster:

Example

[ceph: root@host01 /]# ceph osd set noout
[ceph: root@host01 /]# ceph osd set noscrub
[ceph: root@host01 /]# ceph osd set nodeep-scrub

Check service versions and the available target containers:

Syntax

ceph orch upgrade check IMAGE_NAME

Example

[ceph: root@host01 /]# ceph orch upgrade check registry.redhat.io/rhceph/rhceph-8-rhel9:latest

Upgrade the storage cluster in one of the following ways:
- Upgrade specific daemon types on specific hosts:

  Syntax

  ceph orch upgrade start --image IMAGE_NAME --daemon-types DAEMON_TYPE1,DAEMON_TYPE2 --hosts HOST1,HOST2

  Example

  [ceph: root@host01 /]# ceph orch upgrade start --image registry.redhat.io/rhceph/rhceph-8-rhel9:latest --daemon-types mgr,mon --hosts host02,host03

- Specify specific services and limit the number of daemons to upgrade:

  Note: In staggered upgrade scenarios, if you use a limiting parameter, the monitoring stack daemons, including Prometheus and node-exporter, are refreshed after the upgrade of the Ceph Manager daemons. As a result of the limiting parameter, Ceph Manager upgrades take longer to complete. The versions of monitoring stack daemons might not change between Ceph releases, in which case they are only redeployed.

  Note: Upgrade commands with limiting parameters validate the options before beginning the upgrade, which can require pulling the new container image. As a result, the upgrade start command might take a while to return when you provide limiting parameters.

  Syntax

  ceph orch upgrade start --image IMAGE_NAME --services SERVICE1,SERVICE2 --limit LIMIT_NUMBER

  Example

  [ceph: root@host01 /]# ceph orch upgrade start --image registry.redhat.io/rhceph/rhceph-8-rhel9:latest --services rgw.example1,rgw1.example2 --limit 2

- Valid on Red Hat Ceph Storage 8.1 or later only. Specify topological labels to run the upgrade on specific hosts:

  Note: Using topological labels does not allow changing the required daemon upgrade order.

  Syntax

  ceph orch upgrade start --topological-labels TOPOLOGICAL_LABEL

  Example

  [ceph: root@host01 /]# ceph orch upgrade start --topological-labels datacenter=A

- Valid on Red Hat Ceph Storage 8.1 or later only, with topological labels. Specify topological labels together with the --daemon-types option to run the upgrade on specific hosts.

  Use the --daemon-types option for hosts in a scenario similar to the following: a host with both Monitor and OSD daemons matches datacenter=A, but there are non-upgraded Monitor daemons on hosts that do not match datacenter=A. In this case, you want to upgrade only the Monitor daemons on the datacenter=A host, so that you do not break the upgrade order. Use the following command to upgrade only the Monitor daemons on datacenter=A, but not the OSD daemons:

  Syntax

  ceph orch upgrade start IMAGE_NAME --daemon-types DAEMON_TYPE1 --topological-labels TOPOLOGICAL_LABEL

  Example

  [ceph: root@host01 /]# ceph orch upgrade start registry.redhat.io/rhceph/rhceph-8-rhel9:latest --daemon-types mon --topological-labels datacenter=A

Use the ceph orch upgrade status command to verify what was selected for upgrade. View the "which" field in the output:

Example

[ceph: root@host01 /]# ceph orch upgrade status
{
    "in_progress": true,
    "target_image": "cp.icr.io/cp/ibm-ceph/ceph-8-rhel9:latest",
    "services_complete": [
        "mon"
    ],
    "which": "Upgrading daemons of type(s) mon on host(s) host01",
    "progress": "1/1 daemons upgraded",
    "message": "Currently upgrading mon daemons",
    "is_paused": false
}
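If you want to pull just the "which" field out of the status output from a script, one possible approach is to parse the JSON with python3. This is an illustrative sketch, not part of the product: it assumes python3 is available on the host, and it inlines a trimmed copy of the example output above instead of calling a live cluster.

```shell
# Illustrative only: extract the "which" field from saved
# `ceph orch upgrade status` output using python3's json module.
# In practice you would capture the command output instead of
# inlining sample JSON.
status_json='{
    "in_progress": true,
    "services_complete": ["mon"],
    "which": "Upgrading daemons of type(s) mon on host(s) host01",
    "progress": "1/1 daemons upgraded",
    "is_paused": false
}'

printf '%s' "$status_json" \
    | python3 -c 'import json, sys; print(json.load(sys.stdin)["which"])'
```

On a live cluster you would pipe `ceph orch upgrade status` directly into the same python3 one-liner.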
To see which daemons you still need to upgrade, run the ceph orch upgrade check or ceph versions command:

Example

[ceph: root@host01 /]# ceph orch upgrade check --image registry.redhat.io/rhceph/rhceph-8-rhel9:latest

To complete the staggered upgrade, upgrade all remaining services:

Syntax

ceph orch upgrade start --image IMAGE_NAME

Example

[ceph: root@host01 /]# ceph orch upgrade start --image registry.redhat.io/rhceph/rhceph-8-rhel9:latest
Verification
Verify the new IMAGE_ID and VERSION of the Ceph cluster:

Example

[ceph: root@host01 /]# ceph versions
[ceph: root@host01 /]# ceph orch ps

When the upgrade is complete, unset the noout, noscrub, and nodeep-scrub flags:

Example

[ceph: root@host01 /]# ceph osd unset noout
[ceph: root@host01 /]# ceph osd unset noscrub
[ceph: root@host01 /]# ceph osd unset nodeep-scrub
4.1.2. Performing a staggered upgrade from previous releases
You can perform a staggered upgrade on your storage cluster by providing the necessary arguments. If you want to upgrade from a version that does not support staggered upgrades, you must first manually upgrade the Ceph Manager (ceph-mgr) daemons. Once you have upgraded the Ceph Manager daemons, you can pass the limiting parameters to complete the staggered upgrade.
Verify you have at least two running Ceph Manager daemons before attempting this procedure.
Prerequisites
- A cluster running Red Hat Ceph Storage 7.1 or earlier.
- At least two Ceph Manager nodes in the storage cluster: one active and one standby.
Procedure
1. Log into the cephadm shell:

   Example

   [root@host01 ~]# cephadm shell

2. Determine which Ceph Manager is active and which are standby:

   Example

   [ceph: root@host01 /]# ceph -s
     cluster:
       id:     266ee7a8-2a05-11eb-b846-5254002d4916
       health: HEALTH_OK
     services:
       mon: 2 daemons, quorum host01,host02 (age 92s)
       mgr: host01.ndtpjh(active, since 16h), standbys: host02.pzgrhz

3. Manually upgrade each standby Ceph Manager daemon:

   Syntax

   ceph orch daemon redeploy mgr.ceph-HOST.MANAGER_ID --image IMAGE_ID

   Example

   [ceph: root@host01 /]# ceph orch daemon redeploy mgr.ceph-host02.pzgrhz --image registry.redhat.io/rhceph/rhceph-8-rhel9:latest

4. Fail over to the upgraded standby Ceph Manager:

   Example

   [ceph: root@host01 /]# ceph mgr fail

5. Check that the standby Ceph Manager is now active:

   Example

   [ceph: root@host01 /]# ceph -s
     cluster:
       id:     266ee7a8-2a05-11eb-b846-5254002d4916
       health: HEALTH_OK
     services:
       mon: 2 daemons, quorum host01,host02 (age 1h)
       mgr: host02.pzgrhz(active, since 25s), standbys: host01.ndtpjh

6. Verify that the active Ceph Manager is upgraded to the new version:

   Syntax

   ceph tell mgr.ceph-HOST.MANAGER_ID version

   Example

   [ceph: root@host01 /]# ceph tell mgr.host02.pzgrhz version
   {
       "version": "18.2.0-128.el8cp",
       "release": "reef",
       "release_type": "stable"
   }

7. Repeat steps 2 - 6 to upgrade the remaining Ceph Managers to the new version.

8. Check that all Ceph Managers are upgraded to the new version:

   Example

   [ceph: root@host01 /]# ceph mgr versions
   {
       "ceph version 18.2.0-128.el8cp (600e227816517e2da53d85f2fab3cd40a7483372) reef (stable)": 2
   }

9. After you upgrade all of the Ceph Managers, you can specify the limiting parameters and complete the remainder of the staggered upgrade.