Chapter 11. Performing a minor update of the RHOSP overcloud with director Operator
After you update the openstackclient pod, update the overcloud by running the overcloud and container image preparation deployments, updating your nodes, and running the overcloud update converge deployment. During a minor update, the control plane API is available.
A minor update of your Red Hat OpenStack Platform (RHOSP) environment involves updating the RPM packages and containers on the overcloud nodes. You might also need to update the configuration of some services. The data plane and control plane are fully available during the minor update. You must complete each of the following steps to update your RHOSP environment:
- Prepare your RHOSP environment for the minor update.
- Optional: Update the ovn-controller container.
- Update Controller nodes and composable nodes that contain Pacemaker services.
- Update Compute nodes.
- Update Red Hat Ceph Storage nodes.
- Update the Red Hat Ceph Storage cluster.
- Reboot the overcloud nodes.
Prerequisites
- You have a backup of your RHOSP deployment. For more information, see Backing up and restoring a director Operator deployed overcloud.
11.1. Preparing director Operator for a minor update
To prepare your Red Hat OpenStack Platform (RHOSP) environment to perform a minor update with director Operator (OSPdO), complete the following tasks:
- Lock the RHOSP environment to a Red Hat Enterprise Linux (RHEL) release.
- Update RHOSP repositories.
- Update the container image preparation file.
- Disable fencing in the overcloud.
11.1.1. Locking the RHOSP environment to a RHEL release
Red Hat OpenStack Platform (RHOSP) 17.1 is supported on Red Hat Enterprise Linux (RHEL) 9.2. Before you perform the update, lock the overcloud repositories to the RHEL 9.2 release to avoid upgrading the operating system to a newer minor release.
Procedure
Copy the overcloud subscription management environment file, rhsm.yaml, to the openstackclient pod:
$ oc cp rhsm.yaml openstackclient:/home/cloud-admin/rhsm.yaml
Access the remote shell for the openstackclient pod:
$ oc rsh openstackclient
Open the rhsm.yaml file and check if your subscription management configuration includes the rhsm_release parameter. If the rhsm_release parameter is not present, add it and set it to 9.2:
parameter_defaults:
  RhsmVars:
    …
    rhsm_username: "myusername"
    rhsm_password: "p@55w0rd!"
    rhsm_org_id: "1234567"
    rhsm_pool_ids: "1a85f9223e3d5e43013e3d6e8ff506fd"
    rhsm_method: "portal"
    rhsm_release: "9.2"
Save the rhsm.yaml file.
Create a playbook named set_release.yaml that contains a task to lock the operating system version to RHEL 9.2 on all nodes:
- hosts: all
  gather_facts: false
  tasks:
    - name: set release to 9.2
      command: subscription-manager release --set=9.2
      become: true
Run the set_release.yaml playbook on the openstackclient pod:
$ ansible-playbook -i /home/cloud-admin/ctlplane-ansible-inventory /home/cloud-admin/set_release.yaml --limit Controller,Compute
Use the --limit option to apply the content to all RHOSP nodes. Do not run this playbook against Red Hat Ceph Storage nodes because you might have a different subscription for these nodes.
Note: To manually lock a node to a version, log in to the node and run the subscription-manager release command:
$ sudo subscription-manager release --set=9.2
Exit the remote shell for the openstackclient pod:
$ exit
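Optional: To confirm that the release lock is applied, you can run an ad hoc Ansible check from the openstackclient pod. This is a minimal sketch that assumes the same inventory path used in the previous steps:
$ oc rsh openstackclient ansible -i /home/cloud-admin/ctlplane-ansible-inventory Controller,Compute -b -m command -a "subscription-manager release"
Each node should report Release: 9.2.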
11.1.2. Updating RHOSP repositories
Update your repositories to use Red Hat OpenStack Platform (RHOSP) 17.1.
Procedure
Open the rhsm.yaml file and update the rhsm_repos parameter to the correct repository versions:
parameter_defaults:
  RhsmVars:
    rhsm_repos:
      - rhel-9-for-x86_64-baseos-eus-rpms
      - rhel-9-for-x86_64-appstream-eus-rpms
      - rhel-9-for-x86_64-highavailability-eus-rpms
      - openstack-17.1-for-rhel-9-x86_64-rpms
      - fast-datapath-for-rhel-9-x86_64-rpms
Save the rhsm.yaml file.
Access the remote shell for the openstackclient pod:
$ oc rsh openstackclient
Create a playbook named update_rhosp_repos.yaml that contains a task to set the repositories to RHOSP 17.1 on all nodes:
- hosts: all
  gather_facts: false
  tasks:
    - name: change osp repos
      command: subscription-manager repos --enable=openstack-17.1-for-rhel-9-x86_64-rpms
      become: true
Run the update_rhosp_repos.yaml playbook on the openstackclient pod:
$ ansible-playbook -i /home/cloud-admin/ctlplane-ansible-inventory /home/cloud-admin/update_rhosp_repos.yaml --limit Controller,Compute
Use the --limit option to apply the content to all RHOSP nodes. Do not run this playbook against Red Hat Ceph Storage nodes because they use a different subscription.
Create a playbook named update_ceph_repos.yaml that contains a task to set the repositories to RHOSP 17.1 on all Red Hat Ceph Storage nodes:
- hosts: all
  gather_facts: false
  tasks:
    - name: change ceph repos
      command: subscription-manager repos --enable=openstack-17.1-deployment-tools-for-rhel-9-x86_64-rpms
      become: true
Run the update_ceph_repos.yaml playbook on the openstackclient pod:
$ ansible-playbook -i /home/cloud-admin/ctlplane-ansible-inventory /home/cloud-admin/update_ceph_repos.yaml --limit CephStorage
Use the --limit option to apply the content to Red Hat Ceph Storage nodes.
Exit the remote shell for the openstackclient pod:
$ exit
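Optional: To confirm the enabled repositories on a node, you can run a similar ad hoc check from the openstackclient pod. This is a sketch that assumes the same inventory path as the playbooks above:
$ oc rsh openstackclient ansible -i /home/cloud-admin/ctlplane-ansible-inventory Controller -b -m command -a "subscription-manager repos --list-enabled"
The output should include the RHOSP 17.1 repositories that you enabled.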
11.1.3. Updating the container image preparation file
The container preparation file is the file that contains the ContainerImagePrepare parameter. You use this file to define the rules for obtaining container images for the overcloud.
Before you update your environment, check the file to ensure that you obtain the correct image versions.
Procedure
Edit the container preparation file. The default name for this file is containers-prepare-parameter.yaml.
Ensure the tag parameter is set to 17.1 for each rule set:
parameter_defaults:
  ContainerImagePrepare:
  - push_destination: false
    set:
      ...
      tag: '17.1'
      tag_from_label: '{version}-{release}'
Note: If you do not want to use a specific tag for the update, such as 17.1 or 17.1.1, remove the tag key-value pair and specify tag_from_label only. The tag_from_label value uses the installed Red Hat OpenStack Platform (RHOSP) version to determine the tag to use as part of the update process.
Save the containers-prepare-parameter.yaml file.
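If director Operator consumes your container preparation file through a ConfigMap, refresh that ConfigMap so that the update uses the new file. The following is a sketch that assumes your custom environment files, including containers-prepare-parameter.yaml, are collected in the heat-env-config-update ConfigMap that is referenced later in this chapter; adjust the ConfigMap name and the directory to match your environment:
$ oc create configmap -n openstack heat-env-config-update --from-file=<custom_environment_files_dir>/ --dry-run=client -o yaml | oc apply -f -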
11.1.4. Disabling fencing in the overcloud
Before you update the overcloud, ensure that fencing is disabled.
If fencing is deployed in your environment, the overcloud might detect certain nodes as disabled during the Controller node update process and attempt fencing operations, which can cause unintended results.
If you have enabled fencing in the overcloud, you must temporarily disable fencing for the duration of the update.
Procedure
Access the remote shell for the openstackclient pod:
$ oc rsh openstackclient
Log in to a Controller node and run the Pacemaker command to disable fencing:
$ ssh <controller-0.ctlplane> "sudo pcs property set stonith-enabled=false"
Replace <controller-0.ctlplane> with the name of your Controller node.
Exit the remote shell for the openstackclient pod:
$ exit
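Optional: To confirm that fencing is disabled before you start the update, check the stonith-enabled property from the same Controller node. This is a sketch; on older pcs versions use pcs property show instead of pcs property config:
$ oc rsh openstackclient ssh <controller-0.ctlplane> "sudo pcs property config stonith-enabled"
The property should report stonith-enabled: false.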
11.2. Running the overcloud update preparation for director Operator
To prepare the overcloud for the update process, generate an update prepare configuration, which creates updated Ansible playbooks and prepares the nodes for the update.
Procedure
Create an OpenStackConfigGenerator resource called osconfiggenerator-update-prepare.yaml:
$ cat <<EOF > osconfiggenerator-update-prepare.yaml
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackConfigGenerator
metadata:
  name: "update"
  namespace: openstack
spec:
  gitSecret: git-secret
  enableFencing: false
  heatEnvs:
    - lifecycle/update-prepare.yaml
  heatEnvConfigMap: heat-env-config-update
  tarballConfigMap: tripleo-tarball-config-update
EOF
Apply the configuration:
$ oc apply -f osconfiggenerator-update-prepare.yaml
- Wait until the update preparation process completes.
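To monitor the preparation and to find the configuration version hash that you use as <config_version> in the OpenStackDeploy resources later in this chapter, you can inspect the generator and the generated configuration versions. This is a sketch; the exact status output can differ between director Operator versions:
$ oc get -n openstack openstackconfiggenerator update
$ oc get -n openstack openstackconfigversion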
11.3. Updating the ovn-controller container on all overcloud servers
If you deployed your overcloud with the Modular Layer 2 Open Virtual Network mechanism driver (ML2/OVN), update the ovn-controller container to the latest Red Hat OpenStack Platform (RHOSP) 17.1 version. The update occurs on every overcloud server that runs the ovn-controller container.
The following procedure updates the ovn-controller containers on Compute nodes before it updates the ovn-northd service on Controller nodes. If you accidentally update the ovn-northd service before following this procedure, you might not be able to reach your virtual machine instances or create new instances or virtual networks. The following procedure restores connectivity.
Procedure
Create an OpenStackDeploy custom resource (CR) named osdeploy-ovn-update.yaml:
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackDeploy
metadata:
  name: ovn-update
spec:
  configVersion: <config_version>
  configGenerator: update
  mode: externalUpdate
  advancedSettings:
    tags:
      - ovn
Apply the updated configuration:
$ oc apply -f osdeploy-ovn-update.yaml
- Wait until the ovn-controller container update completes.
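To check progress, you can watch the OpenStackDeploy resource until its status reports that the deployment finished. A minimal sketch:
$ oc get -n openstack openstackdeploy ovn-update -w
You can apply the same check to the other OpenStackDeploy resources that you create in the following sections.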
11.4. Updating all Controller nodes
Update all the Controller nodes to the latest Red Hat OpenStack Platform (RHOSP) 17.1 version.
Procedure
Create an OpenStackDeploy custom resource (CR) named osdeploy-controller-update.yaml:
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackDeploy
metadata:
  name: controller-update
spec:
  configVersion: <config_version>
  configGenerator: update
  mode: update
  advancedSettings:
    limit: Controller
Apply the updated configuration:
$ oc apply -f osdeploy-controller-update.yaml
- Wait until the Controller node update completes.
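After the update completes, you can verify that the Pacemaker cluster on the Controller nodes is healthy before you continue. A sketch, run from the openstackclient pod:
$ oc rsh openstackclient ssh <controller-0.ctlplane> "sudo pcs status"
All resources should be started and no nodes should be reported offline.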
11.5. Updating all Compute nodes
Update all Compute nodes to the latest Red Hat OpenStack Platform (RHOSP) 17.1 version. To update Compute nodes, create an OpenStackDeploy custom resource (CR) with the limit: Compute option to restrict operations only to the Compute nodes.
Procedure
Create an OpenStackDeploy CR named osdeploy-compute-update.yaml:
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackDeploy
metadata:
  name: compute-update
spec:
  configVersion: <config_version>
  configGenerator: update
  mode: update
  advancedSettings:
    limit: Compute
Apply the updated configuration:
$ oc apply -f osdeploy-compute-update.yaml
- Wait until the Compute node update completes.
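Optional: To verify that the Compute services are up after the update, you can query the overcloud from the openstackclient pod. This is a sketch that assumes the pod's default cloud configuration is already in place:
$ oc rsh openstackclient openstack compute service list
All nova-compute services should report a state of up.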
11.6. Updating all HCI Compute nodes
Update the Hyperconverged Infrastructure (HCI) Compute nodes to the latest Red Hat OpenStack Platform (RHOSP) 17.1 version. To update the HCI Compute nodes, create an OpenStackDeploy custom resource (CR) with the limit: ComputeHCI option to restrict operations to only the HCI nodes. You must also create an OpenStackDeploy CR with the mode: external-update and tags: ["ceph"] options to perform an update to the containerized Red Hat Ceph Storage cluster.
Procedure
Create an OpenStackDeploy CR named osdeploy-computehci-update.yaml:
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackDeploy
metadata:
  name: computehci-update
spec:
  configVersion: <config_version>
  configGenerator: update
  mode: update
  advancedSettings:
    limit: ComputeHCI
Apply the updated configuration:
$ oc apply -f osdeploy-computehci-update.yaml
- Wait until the ComputeHCI node update completes.
Create an OpenStackDeploy CR named osdeploy-ceph-update.yaml:
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackDeploy
metadata:
  name: ceph-update
spec:
  configVersion: <config_version>
  configGenerator: update
  mode: external-update
  advancedSettings:
    tags:
      - ceph
Apply the updated configuration:
$ oc apply -f osdeploy-ceph-update.yaml
- Wait until the Red Hat Ceph Storage node update completes.
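Optional: After the Ceph external update completes, you can check the cluster health from a Controller node before you continue. A sketch:
$ oc rsh openstackclient ssh <controller-0.ctlplane> "sudo cephadm shell -- ceph -s"
The cluster should report HEALTH_OK.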
11.7. Updating all Red Hat Ceph Storage nodes
Update the Red Hat Ceph Storage nodes to the latest Red Hat OpenStack Platform (RHOSP) 17.1 version.
RHOSP 17.1 is supported on RHEL 9.2. However, hosts that are mapped to the CephStorage role update to the latest major RHEL release. For more information, see Red Hat Ceph Storage: Supported configurations.
Procedure
Create an OpenStackDeploy custom resource (CR) named osdeploy-cephstorage-update.yaml:
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackDeploy
metadata:
  name: cephstorage-update
spec:
  configVersion: <config_version>
  configGenerator: update
  mode: externalUpdate
  advancedSettings:
    limit: CephStorage
Apply the updated configuration:
$ oc apply -f osdeploy-cephstorage-update.yaml
- Wait until the Red Hat Ceph Storage node update completes.
Create an OpenStackDeploy CR named osdeploy-ceph-update.yaml:
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackDeploy
metadata:
  name: ceph-update
spec:
  configVersion: <config_version>
  configGenerator: update
  mode: externalUpdate
  advancedSettings:
    tags:
      - ceph
Apply the updated configuration:
$ oc apply -f osdeploy-ceph-update.yaml
- Wait until the Red Hat Ceph Storage node update completes.
11.8. Updating the Red Hat Ceph Storage cluster
Update the director-deployed Red Hat Ceph Storage cluster to the latest version that is compatible with Red Hat OpenStack Platform (RHOSP) 17.1 by using the cephadm Orchestrator.
This procedure uses cephadm to upgrade your deployment. If you are using pre-provisioned nodes, cephadm is available by default on the first Controller node. You can manually install it on the other Controller nodes to access the cephadm shell.
For more information about installing cephadm, see the Red Hat Ceph Storage 6 Installation Guide.
Procedure
Access the remote shell for the openstackclient pod:
$ oc rsh openstackclient
Log in to the first Controller node:
$ ssh <controller-0.ctlplane>
Replace <controller-0.ctlplane> with the name of the first Controller node in your deployment.
Log in to the cephadm shell:
[cloud-admin@controller-0 ~]$ sudo cephadm shell
Upgrade your Red Hat Ceph Storage cluster by using cephadm. For more information, see Upgrade a Red Hat Ceph Storage cluster using cephadm in the Red Hat Ceph Storage 6 Upgrade Guide.
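The upgrade itself is driven by the cephadm Orchestrator. The following is a minimal sketch of the commands that you run inside the cephadm shell; the container image reference is an example and must match the Red Hat Ceph Storage release documented in the upgrade guide:
[ceph: root@controller-0 /]# ceph orch upgrade start --image <rhcs_container_image>
[ceph: root@controller-0 /]# ceph orch upgrade status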
Exit the remote shell for the openstackclient pod:
$ exit
11.9. Performing online database updates
Some overcloud components require an online update or migration of their database tables. Online database updates apply to the following components:
- Block Storage service (cinder)
- Compute service (nova)
Procedure
Create an OpenStackDeploy custom resource (CR) named osdeploy-online-migration.yaml:
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackDeploy
metadata:
  name: online-migration
spec:
  configVersion: <config_version>
  configGenerator: update
  mode: external-update
  advancedSettings:
    tags:
      - online_upgrade
Apply the updated configuration:
$ oc apply -f osdeploy-online-migration.yaml
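Optional: To confirm that no online data migrations remain for the Compute service, you can run the nova-manage command inside the nova_api container on a Controller node. This is a sketch; the container name reflects the default TripleO naming in RHOSP 17.1:
$ oc rsh openstackclient ssh <controller-0.ctlplane> "sudo podman exec nova_api nova-manage db online_data_migrations"
The command reports how many migrations are still needed; after a successful run, all rows should report 0 needed.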
11.10. Re-enabling fencing in the overcloud
To update to the latest Red Hat OpenStack Platform (RHOSP) 17.1, you must re-enable fencing in the overcloud.
Procedure
Access the remote shell for the openstackclient pod:
$ oc rsh openstackclient
Log in to a Controller node and run the Pacemaker command to enable fencing:
$ ssh <controller-0.ctlplane> "sudo pcs property set stonith-enabled=true"
Replace <controller-0.ctlplane> with the name of your Controller node.
Exit the remote shell for the openstackclient pod:
$ exit
11.11. Rebooting the overcloud
After you perform a minor Red Hat OpenStack Platform (RHOSP) update to the latest 17.1 version, reboot your overcloud. The reboot refreshes the nodes with any associated kernel, system-level, and container component updates. These updates provide performance and security benefits. Plan downtime to perform the reboot procedures.
Use the following guidance to understand how to reboot different node types:
- If you reboot all nodes in one role, reboot each node individually. If you reboot all nodes in a role simultaneously, service downtime can occur during the reboot operation.
Complete the reboot procedures on the nodes in the following order:
- Controller and composable nodes
- Ceph Storage (OSD) nodes
- Compute nodes
11.11.1. Rebooting Controller and composable nodes
Reboot Controller nodes and standalone nodes based on composable roles, and exclude Compute nodes and Ceph Storage nodes.
Procedure
- Log in to the node that you want to reboot.
Optional: If the node uses Pacemaker resources, stop the cluster:
[tripleo-admin@overcloud-controller-0 ~]$ sudo pcs cluster stop
Reboot the node:
[tripleo-admin@overcloud-controller-0 ~]$ sudo reboot
- Wait until the node boots.
Verification
Verify that the services are enabled.
If the node uses Pacemaker services, check that the node has rejoined the cluster:
[tripleo-admin@overcloud-controller-0 ~]$ sudo pcs status
If the node uses Systemd services, check that all services are enabled:
[tripleo-admin@overcloud-controller-0 ~]$ sudo systemctl status
If the node uses containerized services, check that all containers on the node are active:
[tripleo-admin@overcloud-controller-0 ~]$ sudo podman ps
11.11.2. Rebooting a Ceph Storage (OSD) cluster
Complete the following steps to reboot a cluster of Ceph Storage (OSD) nodes.
Prerequisites
On a Ceph Monitor or Controller node that is running the ceph-mon service, check that the Red Hat Ceph Storage cluster status is healthy and the pg status is active+clean:
$ sudo cephadm shell -- ceph status
If the Ceph cluster is healthy, it returns a status of HEALTH_OK.
If the Ceph cluster status is unhealthy, it returns a status of HEALTH_WARN or HEALTH_ERR. For troubleshooting guidance, see the Red Hat Ceph Storage 5 Troubleshooting Guide or the Red Hat Ceph Storage 6 Troubleshooting Guide.
Procedure
Log in to a Ceph Monitor or Controller node that is running the ceph-mon service, and disable Ceph Storage cluster rebalancing temporarily:
$ sudo cephadm shell -- ceph osd set noout
$ sudo cephadm shell -- ceph osd set norebalance
Note: If you have a multistack or distributed compute node (DCN) architecture, you must specify the Ceph cluster name when you set the noout and norebalance flags. For example:
$ sudo cephadm shell -c /etc/ceph/<cluster>.conf -k /etc/ceph/<cluster>.client.keyring
- Select the first Ceph Storage node that you want to reboot and log in to the node.
Reboot the node:
$ sudo reboot
- Wait until the node boots.
Log in to the node and check the Ceph cluster status:
$ sudo cephadm shell -- ceph status
Check that the pgmap reports all pgs as normal (active+clean).
- Log out of the node, reboot the next node, and check its status. Repeat this process until you have rebooted all Ceph Storage nodes.
When complete, log in to a Ceph Monitor or Controller node that is running the ceph-mon service and enable Ceph cluster rebalancing:
$ sudo cephadm shell -- ceph osd unset noout
$ sudo cephadm shell -- ceph osd unset norebalance
Note: If you have a multistack or distributed compute node (DCN) architecture, you must specify the Ceph cluster name when you unset the noout and norebalance flags. For example:
$ sudo cephadm shell -c /etc/ceph/<cluster>.conf -k /etc/ceph/<cluster>.client.keyring
Perform a final status check to verify that the cluster reports HEALTH_OK:
$ sudo cephadm shell -- ceph status
11.11.3. Rebooting Compute nodes
To ensure minimal downtime of instances in your Red Hat OpenStack Platform environment, the Migrating instances workflow outlines the steps you must complete to migrate instances from the Compute node that you want to reboot.
Migrating instances workflow
- Decide whether to migrate instances to another Compute node before rebooting the node.
- Select and disable the Compute node that you want to reboot so that it does not provision new instances.
- Migrate the instances to another Compute node.
- Reboot the empty Compute node.
- Enable the empty Compute node.
Prerequisites
Before you reboot the Compute node, you must decide whether to migrate instances to another Compute node while the node is rebooting.
Review the list of migration constraints that you might encounter when you migrate virtual machine instances between Compute nodes. For more information, see Migration constraints in Configuring the Compute service for instance creation.
Note: If you have a Multi-RHEL environment, and you want to migrate virtual machines from a Compute node that is running RHEL 9.2 to a Compute node that is running RHEL 8.4, only cold migration is supported. For more information about cold migration, see Cold migrating an instance in Configuring the Compute service for instance creation.
If you cannot migrate the instances, you can set the following core template parameters to control the state of the instances after the Compute node reboots:
NovaResumeGuestsStateOnHostBoot
Determines whether to return instances to the same state on the Compute node after reboot. When set to False, the instances remain down and you must start them manually. The default value is False.
NovaResumeGuestsShutdownTimeout
Number of seconds to wait for an instance to shut down before rebooting. It is not recommended to set this value to 0. The default value is 300.
For more information about overcloud parameters and their usage, see Overcloud parameters.
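If you cannot migrate the instances, you can set these parameters in a custom environment file that you include in your overcloud configuration. The following is a minimal sketch; the file name and values are examples:
$ cat <<EOF > nova-reboot-parameters.yaml
parameter_defaults:
  NovaResumeGuestsStateOnHostBoot: true
  NovaResumeGuestsShutdownTimeout: 300
EOF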
Procedure
Log in to the undercloud as the stack user.
Retrieve a list of your Compute nodes to identify the host name of the node that you want to reboot:
(undercloud)$ source ~/overcloudrc
(overcloud)$ openstack compute service list
Identify the host name of the Compute node that you want to reboot.
Disable the Compute service on the Compute node that you want to reboot:
(overcloud)$ openstack compute service list
(overcloud)$ openstack compute service set <hostname> nova-compute --disable
Replace <hostname> with the host name of your Compute node.
List all instances on the Compute node:
(overcloud)$ openstack server list --host <hostname> --all-projects
Optional: To migrate the instances to another Compute node, complete the following steps:
If you decide to migrate the instances to another Compute node, use one of the following commands:
To migrate the instance to a different host, run the following command:
(overcloud) $ openstack server migrate <instance_id> --live <target_host> --wait
Replace <instance_id> with your instance ID.
Replace <target_host> with the host that you are migrating the instance to.
Let nova-scheduler automatically select the target host:
(overcloud) $ nova live-migration <instance_id>
Live migrate all instances at once:
$ nova host-evacuate-live <hostname>
Note: The nova command might cause some deprecation warnings, which are safe to ignore.
- Wait until migration completes.
Confirm that the migration was successful:
(overcloud) $ openstack server list --host <hostname> --all-projects
- Continue to migrate instances until none remain on the Compute node.
Log in to the Compute node and reboot the node:
[tripleo-admin@overcloud-compute-0 ~]$ sudo reboot
- Wait until the node boots.
Re-enable the Compute node:
$ source ~/overcloudrc
(overcloud) $ openstack compute service set <hostname> nova-compute --enable
Check that the Compute node is enabled:
(overcloud) $ openstack compute service list
11.11.4. Validating RHOSP after the overcloud update
After you update your Red Hat OpenStack Platform (RHOSP) environment, validate your overcloud with the tripleo-validations playbooks.
For more information about validations, see Using the validation framework in Installing and managing Red Hat OpenStack Platform with director.
Procedure
Log in to the undercloud host as the stack user.
Source the stackrc undercloud credentials file:
$ source ~/stackrc
Run the validation:
$ validation run -i ~/overcloud-deploy/<stack>/config-download/<stack>/tripleo-ansible-inventory.yaml --group post-update
- Replace <stack> with the name of the stack.
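Optional: To review which validations belong to the post-update group, you can list them with the validation CLI. A minimal sketch:
$ validation list --group post-update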
Verification
- To view the results of the validation report, see Viewing validation history in Installing and managing Red Hat OpenStack Platform with director.
If a host is not found when you run a validation, the command reports the status as SKIPPED. A status of SKIPPED means that the validation is not executed, which is expected. Additionally, if a validation’s pass criteria is not met, the command reports the status as FAILED. A FAILED validation does not prevent you from using your updated RHOSP environment. However, a FAILED validation can indicate an issue with your environment.