Chapter 3. Performing pre-upgrade actions
Perform pre-upgrade actions to ensure that your Red Hat OpenStack Platform environment is ready to be upgraded and to avoid potential issues during the upgrade.
For more information about additional pre-upgrade checks to perform on the undercloud, see the Pre-upgrade Checks section in the Red Hat Knowledgebase article FFU 16.2 - 17.1. Undercloud Upgrade Checks.
3.1. Checking the health of the OVN cluster
Before you upgrade your environment, validate that the OVN cluster is healthy. If the output from the overcloud Controller node shows the ovndb_servers resource in Failed Resource Actions, you must resolve this issue to avoid a data plane failure during the upgrade.
Procedure
Check the status of the OVN cluster resources and fix any issues:
$ sudo pcs status

Note: If you need help resolving any issues, contact Red Hat Support before you proceed with the upgrade.
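If you are checking many Controller nodes, it can help to filter the `pcs status` output programmatically. The following sketch is illustrative, not part of the product: it assumes you capture the plain-text output of `sudo pcs status` yourself and pass it in, and it only looks for `ovndb_servers` entries under the Failed Resource Actions header.

```python
def failed_ovndb_actions(pcs_status: str) -> list[str]:
    """Collect Failed Resource Actions entries that mention ovndb_servers.

    Operates on the plain-text output of `sudo pcs status`; entries in the
    "Failed Resource Actions" section normally start with "* ".
    """
    hits = []
    in_section = False
    for line in pcs_status.splitlines():
        stripped = line.strip()
        if stripped.startswith("Failed Resource Actions"):
            in_section = True
            # An entry may follow the header on the same line.
            rest = stripped.split(":", 1)[1].strip()
            if "ovndb_servers" in rest:
                hits.append(rest.lstrip("* "))
            continue
        if in_section:
            if stripped.startswith("*"):
                if "ovndb_servers" in stripped:
                    hits.append(stripped.lstrip("* "))
            elif stripped:
                # A non-entry line ends the section.
                in_section = False
    return hits
```

An empty result means no failed `ovndb_servers` actions were reported, so you can continue with the next step.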
Check the Failed Resource Actions section of the OVN cluster resources status output for the following error:

Failed Resource Actions: * ovndb_servers_monitor_30000 on ovn-dbs-bundle-1 'not running' (7): call=24, status='complete', exitreason='', last-rc-change='2025-08-08 09:21:30Z', queued=0ms, exec=159ms

If you see the error, check the status of the ovndb_servers resource on controller-0:

$ sudo podman exec <ovn-dbs container> ovs-appctl -t /var/run/openvswitch/ovnnb_db.ctl ovsdb-server/sync-status

Replace <ovn-dbs container> with the name of the ovn-dbs container that you want to check.
If the output includes a reference to the IP address 192.0.2.254 instead of the virtual IP address, you must restart the ovn-dbs-bundle resource on the overcloud Controller node:

$ sudo pcs resource restart ovn-dbs-bundle

Note: 192.0.2.254 is a fallback IP address. Its presence in the output indicates that there is an issue.
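Because the decision in this step reduces to "does the sync-status output reference the fallback IP address", it can be expressed as a small check. This is a minimal sketch under the assumptions of this procedure; the sample output format in the test is illustrative, and only the substring test matters.

```python
# Fallback IP address called out in this procedure; its presence in the
# sync-status output means the ovn-dbs-bundle resource needs a restart.
FALLBACK_IP = "192.0.2.254"

def needs_bundle_restart(sync_status: str) -> bool:
    """Return True if the ovsdb-server/sync-status output references the
    fallback IP address instead of the virtual IP address."""
    return FALLBACK_IP in sync_status
```

If the function returns True for the captured output, run `sudo pcs resource restart ovn-dbs-bundle` as shown in this step, then re-check.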
Verification
Check the status of the OVN cluster resources and confirm that the Failed Resource Actions error is gone:

$ sudo pcs status
$ sudo podman exec <ovn-dbs container> ovs-appctl -t /var/run/openvswitch/ovnnb_db.ctl ovsdb-server/sync-status

If the error persists, contact Red Hat Support for assistance.
3.2. Network configuration file conversion
If your network configuration templates include the following functions, you must manually convert your NIC templates to Jinja2 Ansible format before you upgrade the undercloud. The following functions are not supported with automatic conversion:
- get_file
- get_resource
- digest
- repeat
- resource_facade
- str_replace
- str_replace_strict
- str_split
- map_merge
- map_replace
- yaql
- equals
- if
- not
- and
- or
- filter
- make_url
- contains
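Before starting the upgrade, you may want to find out quickly whether any of your NIC templates use these functions. The following sketch is a rough pre-flight scan, not a validator: it assumes you read each template file into a string yourself, and it only matches a function used as a YAML mapping key (for example `str_replace:`), so it can miss or over-match unusual layouts.

```python
import re

# Heat intrinsic functions that are not supported with automatic
# conversion (list taken from this section).
UNSUPPORTED_FUNCTIONS = {
    "get_file", "get_resource", "digest", "repeat", "resource_facade",
    "str_replace", "str_replace_strict", "str_split", "map_merge",
    "map_replace", "yaql", "equals", "if", "not", "and", "or",
    "filter", "make_url", "contains",
}

def unsupported_functions_used(template_text: str) -> set[str]:
    """Return the unsupported functions that appear as YAML mapping keys
    in the given heat template text."""
    found = set()
    for name in UNSUPPORTED_FUNCTIONS:
        # Match the function name used as a key, e.g. "  str_replace:".
        if re.search(rf"^\s*{re.escape(name)}\s*:", template_text, re.MULTILINE):
            found.add(name)
    return found
```

A non-empty result indicates templates that need manual conversion to Jinja2 Ansible format before the undercloud upgrade.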
For more information about manually converting your NIC templates, see Manually converting NIC templates to Jinja2 Ansible format in Customizing your Red Hat OpenStack Platform deployment.
3.3. Deployment file configuration
Before you run the undercloud upgrade, extract the following files and check that there are no issues. If there are issues, the files might not generate during the undercloud upgrade. For more information about extracting the files, see Files are not generated after undercloud upgrade during RHOSP upgrade from 16.2 to 17.1.
- tripleo-<stack>-passwords.yaml
- tripleo-<stack>-network-data.yaml
- tripleo-<stack>-virtual-ips.yaml
- tripleo-<stack>-baremetal-deployment.yaml
3.4. Setting bare-metal provisioned nodes to the active state
Before you upgrade your environment, you must confirm that all of your bare-metal provisioned nodes are in the ACTIVE state. If any nodes are in the MAINTENANCE state, you must unset the MAINTENANCE state; you cannot proceed with the upgrade while any node remains in the MAINTENANCE state.
Procedure
Confirm that all bare-metal provisioned nodes are in the ACTIVE state:

$ openstack baremetal node list

If any nodes are in the MAINTENANCE state, identify and troubleshoot the root cause by running the following command and checking the last_error field:

$ openstack baremetal node show <node_uuid>

Replace <node_uuid> with the UUID of the node.
Unset the MAINTENANCE state:

$ openstack baremetal node maintenance unset <node_uuid>

Wait three to five minutes to see if the node returns to the MAINTENANCE state.

Important: If you are unable to remove the nodes from the MAINTENANCE state, contact Red Hat Support.
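In environments with many nodes, you can collect the nodes that need attention with machine-readable output. The sketch below assumes you capture `openstack baremetal node list -f json` output yourself and that the JSON keys match the standard column headers produced by the OpenStack client ("UUID", "Maintenance"); verify the keys against your client version.

```python
import json

def nodes_in_maintenance(node_list_json: str) -> list[str]:
    """Given the output of `openstack baremetal node list -f json`,
    return the UUIDs of nodes that have the Maintenance flag set."""
    nodes = json.loads(node_list_json)
    return [n["UUID"] for n in nodes if n.get("Maintenance")]
```

Each returned UUID is a node to inspect with `openstack baremetal node show <node_uuid>` and, once the root cause is resolved, to clear with `openstack baremetal node maintenance unset <node_uuid>`.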