Chapter 11. Upgrading the overcloud
Upgrade Red Hat OpenStack Platform content across the whole overcloud on each stack in your environment.
11.1. Upgrading RHOSP on all nodes in each stack
Upgrade all overcloud nodes to Red Hat OpenStack Platform (RHOSP) 17.1 for each stack, starting with the main stack.
Ensure that Pacemaker is running on all Controller nodes before you upgrade the overcloud nodes.
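For example, you can confirm the cluster state from the undercloud before you start. This is a minimal check, assuming a Controller node with the hypothetical name controller-0 that you can reach over SSH as the tripleo-admin user; substitute a node name and user from your own environment:
$ ssh tripleo-admin@controller-0 'sudo pcs status'
Every Controller node should be listed as Online, with no stopped or failed Pacemaker resources.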
For information about the duration and impact of this upgrade procedure, see Upgrade duration and impact.
Procedure
- Log in to the undercloud host as the stack user.
- Source the stackrc undercloud credentials file:
$ source ~/stackrc
- Run the overcloud upgrade. Choose one of the following upgrade methods based on your specific environment:
If you do not have a DCN or multi-cell deployment that includes ML2/OVN, upgrade RHOSP on all nodes in your main stack:
$ openstack overcloud upgrade run --yes --stack <stack> --debug --limit allovercloud,undercloud --playbook all
Important
Do not modify the --limit option. You must upgrade all nodes in the stack at once to avoid workload disruption. For more information about the importance of upgrading all of your overcloud nodes at the same time, see The importance of upgrading Red Hat OpenStack Platform on all overcloud nodes at once.
Replace <stack> with the name of the overcloud stack whose nodes you want to upgrade.
Repeat this step for each stack in your RHOSP deployment.
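If you are unsure of the stack names in your deployment, you can list them from the undercloud. A quick sketch, assuming the stackrc credentials file is still sourced:
$ openstack stack list -c 'Stack Name' -c 'Stack Status'
Each row is one overcloud stack to upgrade with the command above.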
If you have a DCN or multi-cell deployment that includes ML2/OVN, complete the following steps:
Upgrade the OVN containers and all host packages on each stack. You must upgrade the central stack first in a DCN deployment, and the control stack first in a multi-cell deployment:
$ openstack overcloud upgrade run --stack <stack> --tags setup_packages,ovn --limit allovercloud --yes
Replace <stack> with the name of the stack that you are upgrading.
Important
After you run the RHOSP upgrade on the central or control stack, the OVN DBs are migrated from a Pacemaker deployment to an Ansible-controlled cluster. As a result, the endpoint that the OVN controllers connect to changes. This connection is updated during the RHOSP upgrade on each additional stack. Depending on the size of your environment, some stacks might be updated later, which can result in a data plane outage. To avoid this issue, update the connection to use the previous endpoint and new endpoint at the same time. The following example shows the workaround for a multi-cell environment. The workaround is the same for DCN environments that include OVN. Ensure that you export the overcloud-export.yaml file first:
$ cat <<'EOF' > ~/ovn_workaround.yaml
# Playbook
- become: true
  hosts: '{{ ovn_compute_role }}'
  strategy: tripleo_free
  name: OVN workaround playbook
  tasks:
    - name: Read ovn southbound port
      command: puppet lookup --facts /etc/puppet/hieradata/service_configs.json ovn::southbound::port --render-as s
      register: ovn_southbound_port
      failed_when: false
    - name: Read ovn vip
      command: puppet lookup --facts /etc/puppet/hieradata/all_nodes.json ovn_dbs_vip --render-as s
      register: ovn_dbs_vip
    - name: Create the new connection
      set_fact:
        new_ovn_connection: "ssl:{{ovn_dbs_vip.stdout}}:{{ovn_southbound_port.stdout}},ssl:{{parameter_defaults.AllNodesExtraMapData.ovn_dbs_node_ips[0]}}:{{ovn_southbound_port.stdout}},ssl:{{parameter_defaults.AllNodesExtraMapData.ovn_dbs_node_ips[1]}}:{{ovn_southbound_port.stdout}},ssl:{{parameter_defaults.AllNodesExtraMapData.ovn_dbs_node_ips[2]}}:{{ovn_southbound_port.stdout}}"
    - name: showing the new connection
      debug:
        msg: "the new connection is {{ new_ovn_connection }}"
      tags:
        - debug_msg
    - name: Get ovn connection
      command: ovs-vsctl get Open_vSwitch . external_ids:ovn-remote
      register: ovn_connection
    - set_fact:
        change_needed: "{{ ovn_connection.stdout.split(',') | length == 1 }}"
    - name: Reconfigure ovn connection
      when: change_needed|bool
      command: ovs-vsctl set Open_vSwitch . external_ids:ovn-remote="{{new_ovn_connection}}"
      tags:
        - apply_conf
    - name: Reconfigure ovn connection for metadata agents on computes
      when: change_needed|bool
      shell: |
        crudini --set /var/lib/config-data/puppet-generated/neutron/etc/neutron/plugins/networking-ovn/networking-ovn-metadata-agent.ini ovn ovn_sb_connection "{{ new_ovn_connection }}"
        systemctl restart tripleo_ovn_metadata_agent
      tags:
        - apply_conf
EOF
$ ansible-playbook -i overcloud-deploy/$CELLSTACK/tripleo-ansible-inventory.yaml -e @~/overcloud-deploy/overcloud/overcloud-export.yaml -e ovn_compute_role=Compute ovn_workaround.yaml
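In a deployment with several cell or edge stacks, you must apply the workaround to each additional stack. A minimal sketch of such a loop, assuming hypothetical cell stack names cell1 and cell2 and the default inventory paths shown above:
$ for CELLSTACK in cell1 cell2; do
    ansible-playbook -i overcloud-deploy/$CELLSTACK/tripleo-ansible-inventory.yaml \
      -e @~/overcloud-deploy/overcloud/overcloud-export.yaml \
      -e ovn_compute_role=Compute ~/ovn_workaround.yaml
  done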
Verify that the ovn-controller container is updated on all overcloud nodes:
$ sudo podman ps | grep ovn_controller
Sample output
5ddc21ef9056 undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp17-openstack-ovn-controller:17.1_20230905.1 kolla_start 20 hours ago Up 20 hours (healthy) ovn_controller
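To run the same check on every node in a stack at once instead of logging in to each host, you can use an ad hoc Ansible command. This is a sketch, assuming the default tripleo-ansible inventory path and the allovercloud group that this procedure already uses with --limit:
$ ansible -i overcloud-deploy/<stack>/tripleo-ansible-inventory.yaml allovercloud -b -m shell -a 'podman ps | grep ovn_controller'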
Upgrade the service containers and update the host packages on each stack, starting with the central stack or control stack:
$ openstack overcloud upgrade run --stack <stack> --skip-tags ovn --limit allovercloud --yes
Note
To avoid a long maintenance window, you can run the upgrade only on the CellController role in your cell stack first, and then run the upgrade on all the nodes in the cell stack in the next maintenance window. For example:
First maintenance window:
$ openstack overcloud upgrade run --stack AZ2 --skip-tags ovn --limit CellController --yes
Second maintenance window:
$ openstack overcloud upgrade run --stack AZ2 --skip-tags ovn --limit allovercloud --yes
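After the upgrade completes on all stacks, you can confirm that the OVN controllers connect to the expected endpoints by reusing the same query that the workaround playbook runs. A minimal check, assuming a Compute node with the hypothetical name compute-0 that is reachable as the tripleo-admin user:
$ ssh tripleo-admin@compute-0 'sudo ovs-vsctl get Open_vSwitch . external_ids:ovn-remote'
The output lists the ssl endpoints that ovn-controller uses to reach the OVN southbound database.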