Chapter 13. Upgrading the control plane operating system
Upgrade the operating system on your control plane nodes. The upgrade includes the following tasks:
- Running the overcloud upgrade prepare command with the system upgrade parameters
- Running the overcloud system upgrade, which uses Leapp to upgrade RHEL in-place
- Rebooting the nodes
If you are using Red Hat Ceph Storage, before you perform the Leapp upgrade, verify whether the ceph-common package is present on your control plane nodes. If the ceph-common package is present on a node, take the precautions described in Upgrading RHCS 5 hosts from RHEL 8 to RHEL 9 removes ceph-common package. Services fail to start. to ensure that the Red Hat Ceph Storage services restart after the control plane nodes reboot during the Leapp upgrade.
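As a quick pre-check, a loop like the following prints the per-node verification commands. The node names and the heat-admin user are illustrative assumptions; adjust them for your environment, then remove the echo to run the checks:

```shell
# Print the ceph-common check command for each control plane node.
# Node names and the heat-admin user are illustrative; remove "echo"
# to actually execute the checks over SSH.
for node in controller-0 controller-1 controller-2; do
  echo ssh heat-admin@"${node}" rpm -q ceph-common
done
```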
13.1. Upgrading the control plane nodes
To upgrade the control plane nodes in your environment to Red Hat Enterprise Linux 9.2, you must upgrade one-third of your control plane nodes at a time, starting with the bootstrap nodes.
You upgrade your control plane nodes by using the openstack overcloud upgrade run command. This command performs the following actions:
- Performs a Leapp upgrade of the operating system.
- Performs a reboot as a part of the Leapp upgrade.
Each node is rebooted during the system upgrade. The performance of the Pacemaker cluster and the Red Hat Ceph Storage cluster is degraded during this downtime, but there is no outage.
This example includes the following nodes with composable roles:
- controller-0
- controller-1
- controller-2
- database-0
- database-1
- database-2
- networker-0
- networker-1
- networker-2
- ceph-0
- ceph-1
- ceph-2
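The one-third batching described above can be sketched as a small shell loop that groups the example nodes by their numeric index suffix. The node names are the illustrative ones from this example; substitute your own:

```shell
# Group the example composable-role nodes into three upgrade batches
# by their numeric suffix (0, 1, 2). Node names are illustrative.
nodes="controller-0 controller-1 controller-2 \
database-0 database-1 database-2 \
networker-0 networker-1 networker-2 \
ceph-0 ceph-1 ceph-2"

for batch in 0 1 2; do
  # Build the comma-separated --limit value for this batch.
  limit=$(printf '%s\n' $nodes | grep -e "-${batch}\$" | paste -sd, -)
  echo "Batch ${batch}: --limit ${limit}"
done
```

Each printed `--limit` value corresponds to one invocation of the `openstack overcloud upgrade run` command in the procedure below.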
For information about the duration and impact of this upgrade procedure, see Upgrade duration and impact.
Prerequisites
If your environment includes Red Hat Ceph Storage nodes, check whether the nodes have a version lock. Run the following command on each Red Hat Ceph Storage node:

    $ yum versionlock list

Clear any version locks that are listed:

    $ yum versionlock clear
Procedure
- Log in to the undercloud host as the stack user.
- Source the stackrc undercloud credentials file:

      $ source ~/stackrc

- Run the following script without the CONTROL_PLANE_ROLES parameter. Ensure that you include the variables that you used to prepare the containers in Running the overcloud upgrade preparation:

      python3 \
      /usr/share/openstack-tripleo-heat-templates/tools/multi-rhel-container-image-prepare.py \
      ${COMPUTE_ROLES} \
      --enable-multi-rhel \
      --excludes collectd \
      --excludes nova-libvirt \
      --minor-override \
      "{${EL8_TAGS}${EL8_NAMESPACE}${CEPH_OVERRIDE}${NEUTRON_DRIVER}\"no_tag\":\"not_used\"}" \
      --major-override \
      "{${EL9_TAGS}${NAMESPACE}${CEPH_OVERRIDE}${NEUTRON_DRIVER}\"no_tag\":\"not_used\"}" \
      --output-env-file \
      /home/stack/containers-prepare-parameter.yaml

  Note: The CONTROL_PLANE_ROLES parameter defines the list of your control plane roles. Removing this parameter from the script prepares the control plane roles for an upgrade to RHEL 9.2. If the CONTROL_PLANE_ROLES parameter is included in the script, the control plane roles remain on RHEL 8.4.

- In the skip_rhel_release.yaml file, set the SkipRhelEnforcement parameter to false:

      parameter_defaults:
        SkipRhelEnforcement: false

- Update the overcloud_upgrade_prepare.sh file:

      $ openstack overcloud upgrade prepare --yes \
        ...
        -e /home/stack/system_upgrade.yaml \
        -e /home/stack/containers-prepare-parameter.yaml \
        -e /home/stack/skip_rhel_release.yaml \
        ...

  - Include the system_upgrade.yaml file with the upgrade-specific parameters (-e).
  - Include the containers-prepare-parameter.yaml file with the control plane roles removed (-e).
  - Include the skip_rhel_release.yaml file with the release parameters (-e).
- Run the overcloud_upgrade_prepare.sh script:

      $ sh /home/stack/overcloud_upgrade_prepare.sh

- Fetch any new or modified containers that you require for the system upgrade:

      $ openstack overcloud external-upgrade run \
        --stack <stack> \
        --tags container_image_prepare 2>&1

- Upgrade the first one-third of the control plane nodes:

      $ openstack overcloud upgrade run --yes \
        --stack <stack> \
        --tags system_upgrade \
        --limit <controller-0>,<database-0>,<messaging-0>,<networker-0>,<ceph-0>

  - Replace <stack> with the name of your stack.
  - Replace <controller-0>,<database-0>,<messaging-0>,<networker-0>,<ceph-0> with your own node names.
- Log in to each upgraded node and verify that the cluster on each node is running:

      $ sudo pcs status

  Repeat this verification step after you upgrade the second one-third of your control plane nodes, and after you upgrade the last one-third of your control plane nodes.
- Upgrade the second one-third of the control plane nodes:

      $ openstack overcloud upgrade run --yes \
        --stack <stack> \
        --tags system_upgrade \
        --limit <controller-1>,<database-1>,<messaging-1>,<networker-1>,<ceph-1>

  - Replace <controller-1>,<database-1>,<messaging-1>,<networker-1>,<ceph-1> with your own node names.
- Upgrade the last one-third of the control plane nodes:

      $ openstack overcloud upgrade run --yes \
        --stack <stack> \
        --tags system_upgrade \
        --limit <controller-2>,<database-2>,<messaging-2>,<networker-2>,<ceph-2>

  - Replace <controller-2>,<database-2>,<messaging-2>,<networker-2>,<ceph-2> with your own node names.
- If you enabled STF, you must update the collectd container on all nodes after the operating system upgrade. Run the upgrade command with no tags:

      $ openstack overcloud upgrade run --yes \
        --stack <stack> \
        --limit <undercloud>,<controller-0>,<controller-1>,<controller-2>,<database-0>,<database-1>,<database-2>,<networker-0>,<networker-1>,<networker-2>,<ceph-0>,<ceph-1>,<ceph-2>

  - Replace <undercloud>,<controller-0>,<controller-1>,<controller-2>,<database-0>,<database-1>,<database-2>,<networker-0>,<networker-1>,<networker-2>,<ceph-0>,<ceph-1>,<ceph-2> with your own node names.
- When the upgrade process is complete, check the status of the collectd container on each overcloud node:

      $ sudo podman ps | grep collectd

- If the collectd container is not running on the overcloud nodes, manually start the collectd container on all the overcloud nodes:

      $ ansible -i overcloud-deploy/overcloud/tripleo-ansible-inventory.yaml allovercloud -m shell -a "sudo podman restart collectd"