Chapter 18. Upgrading an overcloud on a Red Hat OpenShift Container Platform cluster with director Operator (16.2 to 17.1)
You can upgrade your Red Hat OpenStack Platform (RHOSP) 16.2 overcloud to a RHOSP 17.1 overcloud with director Operator (OSPdO) by using the in-place framework for upgrades (FFU) workflow.
To perform an upgrade, complete the following tasks:

- Prepare your environment for the upgrade.
- Update custom `roles_data` files to the composable services supported by RHOSP 17.1.
- Optional: Upgrade Red Hat Ceph Storage and adopt `cephadm`.
- Upgrade the overcloud nodes to run RHOSP 17.1 containers on RHEL 8.
- Upgrade the overcloud nodes to run RHOSP 17.1 containers on RHEL 9.
- Perform post-upgrade tasks.
18.1. Prerequisites
- You are using the latest version of OSPdO.
- The overcloud deployment is running RHOSP version 16.2.4 or later. If your overcloud deployment is running a RHOSP version that is earlier than 16.2.4, you must update the environment to the latest minor version of your current release. For information about how to perform a minor update, see Performing a minor update of the RHOSP overcloud with director Operator.
- The minimum kernel version running on the overcloud nodes is `kernel-4.18.0-305.41.1.el8`.
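You can check the kernel prerequisite per node before you start. The following is a minimal sketch using version sort; the `node_kernel` value is illustrative, so on a real node substitute the output of `uname -r` (for example, `ssh controller-0.ctlplane uname -r` from the `openstackclient` pod):

```shell
# Compare a node's kernel against the required minimum using version sort.
# The node_kernel value is illustrative; on a real node use: uname -r
MINIMUM="4.18.0-305.41.1.el8"
node_kernel="4.18.0-305.49.1.el8"
# sort -V orders RPM-style versions; if the minimum sorts first (or equal),
# the node kernel is new enough.
lowest=$(printf '%s\n%s\n' "$MINIMUM" "$node_kernel" | sort -V | head -n1)
if [ "$lowest" = "$MINIMUM" ]; then
  echo "kernel OK: $node_kernel"
else
  echo "kernel too old: $node_kernel (minimum $MINIMUM)"
fi
```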
18.2. Updating director Operator
You must update your director Operator (OSPdO) to the latest 17.1 version before performing the overcloud upgrade. To update OSPdO, you delete the OSPdO subscription and cluster service version (CSV), and then reinstall the Operator.
Procedure
1. Check the current version of director Operator in the `currentCSV` field:

       $ oc get subscription osp-director-operator-subscription -n openstack -o yaml | grep currentCSV

2. Delete the CSV for director Operator in the target namespace:

       $ oc delete clusterserviceversion <current_CSV> -n openstack

   Replace `<current_CSV>` with the `currentCSV` value from step 1.

3. Delete the subscription:

       $ oc delete subscription osp-director-operator.openstack -n openstack

4. Install the latest 17.1 director Operator. For more information, see Installing director Operator.
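Before deleting the CSV, you can sanity-check the value you extracted from the subscription. A minimal sketch; the inline manifest and the `v1.3.0` version string are hypothetical stand-ins for the live `oc get subscription … -o yaml` output:

```shell
# Parse the currentCSV value from a Subscription manifest.
# The sample manifest below is hypothetical; on a live cluster, pipe in:
#   oc get subscription osp-director-operator-subscription -n openstack -o yaml
manifest='status:
  currentCSV: osp-director-operator.v1.3.0'
# Take the value after the currentCSV: key.
current_csv=$(printf '%s\n' "$manifest" | awk '$1 == "currentCSV:" {print $2}')
echo "$current_csv"
```

The extracted name is what you substitute for `<current_CSV>` in the delete command.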
18.3. Preparing your director Operator environment for upgrade
You must prepare your director Operator (OSPdO) deployed Red Hat OpenStack Platform (RHOSP) environment for the upgrade to RHOSP 17.1.
Procedure
1. Set `openStackRelease` to 17.1 on the `openstackcontrolplane` CR:

       $ oc patch openstackcontrolplane -n openstack overcloud --type=json -p="[{'op': 'replace', 'path': '/spec/openStackRelease', 'value': '17.1'}]"

2. Retrieve the OSPdO `ClusterServiceVersion` (csv) CR:

       $ oc get csv -n openstack

3. Delete all instances of the `OpenStackConfigGenerator` CR:

       $ oc delete -n openstack openstackconfiggenerator --all

4. If your deployment includes HCI, the adoption from `ceph-ansible` to `cephadm` must be performed by using the RHOSP 17.1 on RHEL 8 `openstackclient` image:

       $ oc patch openstackclient -n openstack openstackclient --type=json -p="[{'op': 'replace', 'path': '/spec/imageURL', 'value': 'registry.redhat.io/rhosp-rhel8/openstack-tripleoclient:17.1'}]"

5. If your deployment does not include HCI, or the `cephadm` adoption is already complete, switch to the 17.1 OSPdO default `openstackclient` image by removing the current `imageURL` from the `openstackclient` CR:

       $ oc patch openstackclient -n openstack openstackclient --type=json -p="[{'op': 'remove', 'path': '/spec/imageURL'}]"

6. If you have enabled fencing in the overcloud, temporarily disable fencing on one of the Controller nodes for the duration of the upgrade:

       $ oc rsh -n openstack openstackclient
       $ ssh controller-0.ctlplane "sudo pcs property set stonith-enabled=false"
18.4. Updating composable services in custom roles_data files
If your deployment includes custom roles_data files that you created, you must update them to the supported Red Hat OpenStack Platform (RHOSP) 17.1 composable services. For more information about the supported RHOSP 17.1 composable services, see Updating composable services in custom roles_data files in the Framework for Upgrades (16.2 to 17.1) guide.
Procedure
1. Remove the following services from all roles:

   - `OS::TripleO::Services::CinderBackendDellEMCXTREMIOISCSI`
   - `OS::TripleO::Services::CinderBackendDellPs`
   - `OS::TripleO::Services::CinderBackendVRTSHyperScale`
   - `OS::TripleO::Services::Ec2Api`
   - `OS::TripleO::Services::Fluentd`
   - `OS::TripleO::Services::FluentdAlt`
   - `OS::TripleO::Services::Keepalived`
   - `OS::TripleO::Services::MistralApi`
   - `OS::TripleO::Services::MistralEngine`
   - `OS::TripleO::Services::MistralEventEngine`
   - `OS::TripleO::Services::MistralExecutor`
   - `OS::TripleO::Services::NeutronLbaasv2Agent`
   - `OS::TripleO::Services::NeutronLbaasv2Api`
   - `OS::TripleO::Services::NeutronML2FujitsuCfab`
   - `OS::TripleO::Services::NeutronML2FujitsuFossw`
   - `OS::TripleO::Services::NeutronSriovHostConfig`
   - `OS::TripleO::Services::NovaConsoleauth`
   - `OS::TripleO::Services::Ntp`
   - `OS::TripleO::Services::OpenDaylightApi`
   - `OS::TripleO::Services::OpenDaylightOvs`
   - `OS::TripleO::Services::OpenShift::GlusterFS`
   - `OS::TripleO::Services::OpenShift::Infra`
   - `OS::TripleO::Services::OpenShift::Master`
   - `OS::TripleO::Services::OpenShift::Worker`
   - `OS::TripleO::Services::PankoApi`
   - `OS::TripleO::Services::Rear`
   - `OS::TripleO::Services::SaharaApi`
   - `OS::TripleO::Services::SaharaEngine`
   - `OS::TripleO::Services::SensuClient`
   - `OS::TripleO::Services::SensuClientAlt`
   - `OS::TripleO::Services::SkydiveAgent`
   - `OS::TripleO::Services::SkydiveAnalyzer`
   - `OS::TripleO::Services::Tacker`
   - `OS::TripleO::Services::TripleoUI`
   - `OS::TripleO::Services::UndercloudMinionMessaging`
   - `OS::TripleO::Services::UndercloudUpgradeEphemeralHeat`
   - `OS::TripleO::Services::Zaqar`

2. Add the `OS::TripleO::Services::GlanceApiInternal` service to your Controller role.
3. Update the `OS::TripleO::Services::NovaLibvirt` service on the Compute roles to `OS::TripleO::Services::NovaLibvirtLegacy`.
4. If your environment includes Red Hat Ceph Storage, set the `DeployedCeph` parameter to `false` to enable director-managed `cephadm` deployments.
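Removing a long list of services by hand is error-prone. The following is a hedged sketch using `sed`; the sample file and the two services shown are illustrative only, so run the loop over the full removal list against your real `roles_data` file after taking a backup:

```shell
# Delete deprecated composable services from a roles_data file.
# The sample file and the two services shown are illustrative only.
cat > /tmp/roles_data.yaml <<'EOF'
- name: Controller
  ServicesDefault:
    - OS::TripleO::Services::Keystone
    - OS::TripleO::Services::PankoApi
    - OS::TripleO::Services::MistralApi
EOF
for svc in OS::TripleO::Services::PankoApi OS::TripleO::Services::MistralApi; do
  # Use | as the sed address delimiter because the service names contain ::
  sed -i "\|- ${svc}\$|d" /tmp/roles_data.yaml
done
cat /tmp/roles_data.yaml
```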
18.5. Converting your NIC templates to Jinja2 Ansible format
If your network configuration templates include the following functions, you must manually convert your NIC templates to Jinja2 Ansible format before you upgrade the overcloud. The following functions are not supported with automatic conversion:
- `get_file`
- `get_resource`
- `digest`
- `repeat`
- `resource_facade`
- `str_replace`
- `str_replace_strict`
- `str_split`
- `map_merge`
- `map_replace`
- `yaql`
- `equals`
- `if`
- `not`
- `and`
- `or`
- `filter`
- `make_url`
- `contains`
For more information about converting your NIC templates, see Updating the format of your network configuration files in Customizing your Red Hat OpenStack Platform deployment.
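To find out whether any of your templates need manual conversion, you can grep for these functions. The following is a minimal sketch; the sample template and the `/tmp/nic-scan` path are illustrative, so point the scan at your real NIC template directory:

```shell
# Flag NIC templates that use heat functions unsupported by automatic
# conversion. Sample data; replace /tmp/nic-scan with your template dir.
scan_dir=/tmp/nic-scan
mkdir -p "$scan_dir"
cat > "$scan_dir/controller.yaml" <<'EOF'
vlan_id: {get_param: InternalApiNetworkVlanID}
name: {yaql: {expression: $.data.name, data: {name: ctlplane}}}
EOF
# -w matches whole words only, so short names like "if" do not match
# inside longer identifiers.
unsupported='get_file|get_resource|digest|repeat|resource_facade|str_replace|str_replace_strict|str_split|map_merge|map_replace|yaql|equals|if|not|and|or|filter|make_url|contains'
if grep -rnwE "$unsupported" "$scan_dir"; then
  echo "manual conversion required"
else
  echo "automatic conversion possible"
fi
```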
Procedure
1. Create a Jinja2 template. You can create a new template by copying an example template from the `/usr/share/ansible/roles/tripleo_network_config/templates/` directory on the undercloud node.
2. Replace the heat intrinsic functions with Jinja2 filters. For example, use the following filter to calculate `min_viable_mtu`:

       {% set mtu_list = [ctlplane_mtu] %}
       {% for network in role_networks %}
       {{ mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) }}
       {%- endfor %}
       {% set min_viable_mtu = mtu_list | max %}

3. Use Ansible variables to configure the network properties for your deployment. You can configure each individual network manually, or programmatically configure each network by iterating over `role_networks`:

   - To manually configure each network, replace each `get_param` function with the equivalent Ansible variable. For example, if your current deployment configures `vlan_id` by using `get_param: InternalApiNetworkVlanID`, add the following configuration to your template:

         vlan_id: {{ internal_api_vlan_id }}

     Table 18.1. Example network property mapping from heat parameters to Ansible vars

     yaml file format:

         - type: vlan
           device: nic2
           vlan_id:
             get_param: InternalApiNetworkVlanID
           addresses:
           - ip_netmask:
               get_param: InternalApiIpSubnet

     Jinja2 Ansible format (j2):

         - type: vlan
           device: nic2
           vlan_id: {{ internal_api_vlan_id }}
           addresses:
           - ip_netmask: {{ internal_api_ip }}/{{ internal_api_cidr }}

   - To programmatically configure each network, add a Jinja2 for-loop structure to your template that retrieves the available networks by their role name by using `role_networks`:

         {% for network in role_networks %}
         - type: vlan
           mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
           vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
           addresses:
           - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
           routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
         {%- endfor %}

   For a full list of the mappings from the heat parameter to the Ansible vars equivalent, see Heat parameter to Ansible variable mappings.

4. Configure the `*NetworkConfigTemplate` parameters in your `network-environment.yaml` file to point to the generated `.j2` files:

       parameter_defaults:
         ControllerNetworkConfigTemplate: '/home/stack/templates/custom-nics/controller.j2'
         ComputeNetworkConfigTemplate: '/home/stack/templates/custom-nics/compute.j2'

5. Delete the `resource_registry` mappings from your `network-environment.yaml` file for the old network configuration files:

       resource_registry:
         OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/templates/nic-configs/compute.yaml
         OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/nic-configs/controller.yaml
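The `min_viable_mtu` filter shown above simply takes the largest MTU across the role's networks. The same computation in plain shell, with illustrative MTU values:

```shell
# min_viable_mtu is the maximum MTU across all of a role's networks.
# The three MTU values below are illustrative.
ctlplane_mtu=1500
internal_api_mtu=1500
storage_mtu=9000
# Numeric sort, then take the last (largest) value.
min_viable_mtu=$(printf '%s\n' "$ctlplane_mtu" "$internal_api_mtu" "$storage_mtu" | sort -n | tail -n1)
echo "min_viable_mtu=$min_viable_mtu"
```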
18.6. Upgrading Red Hat Ceph Storage and adopting cephadm
If your environment includes Red Hat Ceph Storage, you must upgrade your deployment to Red Hat Ceph Storage 5. After the upgrade to version 5, `cephadm` manages Red Hat Ceph Storage instead of `ceph-ansible`.
Procedure
1. Create an Ansible playbook file named `ceph-admin-user-playbook.yml` to create a `ceph-admin` user on the overcloud nodes.
2. Add the following configuration to the `ceph-admin-user-playbook.yml` file:

       - hosts: localhost
         gather_facts: false
         tasks:
         - name: set ssh key path facts
           set_fact:
             private_key: "{{ lookup('env', 'HOME') }}/.ssh/{{ tripleo_admin_user }}-id_rsa"
             public_key: "{{ lookup('env', 'HOME') }}/.ssh/{{ tripleo_admin_user }}-id_rsa.pub"
           run_once: true
         - name: stat private key
           stat:
             path: "{{ private_key }}"
           register: private_key_stat
         - name: create private key if it does not exist
           shell: "ssh-keygen -t rsa -q -N '' -f {{ private_key }}"
           no_log: true
           when:
           - not private_key_stat.stat.exists
         - name: stat public key
           stat:
             path: "{{ public_key }}"
           register: public_key_stat
         - name: create public key if it does not exist
           shell: "ssh-keygen -y -f {{ private_key }} > {{ public_key }}"
           when:
           - not public_key_stat.stat.exists
       - hosts: overcloud
         gather_facts: false
         become: true
         pre_tasks:
         - name: Get local private key
           slurp:
             src: "{{ hostvars['localhost']['private_key'] }}"
           register: private_key_get
           delegate_to: localhost
           no_log: true
         - name: Get local public key
           slurp:
             src: "{{ hostvars['localhost']['public_key'] }}"
           register: public_key_get
           delegate_to: localhost
         roles:
         - role: tripleo_create_admin
           tripleo_admin_user: "{{ tripleo_admin_user }}"
           tripleo_admin_pubkey: "{{ public_key_get['content'] | b64decode }}"
           tripleo_admin_prikey: "{{ private_key_get['content'] | b64decode }}"
           no_log: true

3. Copy the playbook to the `openstackclient` container:

       $ oc cp -n openstack ceph-admin-user-playbook.yml openstackclient:/home/cloud-admin/ceph-admin-user-playbook.yml

4. Run the playbook on the `openstackclient` container:

       $ oc rsh -n openstack openstackclient
       $ ansible-playbook -i /home/cloud-admin/ctlplane-ansible-inventory -e tripleo_admin_user=ceph-admin -e distribute_private_key=true /home/cloud-admin/ceph-admin-user-playbook.yml

5. Update the Red Hat Ceph Storage container image parameters in the `containers-prepare-parameter.yaml` file for the version of Red Hat Ceph Storage that your deployment uses:

       ceph_namespace: registry.redhat.io/rhceph
       ceph_image: <ceph_image_file>
       ceph_tag: latest
       ceph_grafana_image: <grafana_image_file>
       ceph_grafana_namespace: registry.redhat.io/rhceph
       ceph_grafana_tag: latest

   - Replace `<ceph_image_file>` with the name of the image file for the version of Red Hat Ceph Storage that your deployment uses:
     - Red Hat Ceph Storage 5: `rhceph-5-rhel8`
   - Replace `<grafana_image_file>` with the name of the image file for the version of Red Hat Ceph Storage that your deployment uses:
     - Red Hat Ceph Storage 5: `rhceph-5-dashboard-rhel8`

6. If your deployment includes HCI, update the `CephAnsibleRepo` parameter in `compute-hci.yaml` to `"rhelosp-ceph-5-tools"`.
7. Create an environment file named `upgrade.yaml` and add the following configuration to it:

       parameter_defaults:
         UpgradeInitCommand: |
           sudo subscription-manager repos --disable=*
           if $( grep -q 9.2 /etc/os-release )
           then
             sudo subscription-manager repos --enable=rhel-9-for-x86_64-baseos-e4s-rpms --enable=rhel-9-for-x86_64-appstream-e4s-rpms --enable=rhel-9-for-x86_64-highavailability-e4s-rpms --enable=openstack-17.1-for-rhel-9-x86_64-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms
             sudo podman ps | grep -q ceph && subscription-manager repos --enable=rhceph-5-tools-for-rhel-9-x86_64-rpms
             sudo subscription-manager release --set=9.2
           else
             sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-aus-rpms --enable=rhel-8-for-x86_64-appstream-aus-rpms --enable=rhel-8-for-x86_64-highavailability-aus-rpms --enable=openstack-17.1-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms
             sudo podman ps | grep -q ceph && subscription-manager repos --enable=rhceph-5-tools-for-rhel-8-x86_64-rpms
             sudo subscription-manager release --set=8.4
           fi
           sudo dnf -y install cephadm

8. Create a new `OpenStackConfigGenerator` CR named `ceph-upgrade` that includes the updated environment file and tripleo-tarball ConfigMaps.
9. Create a file named `openstack-ceph-upgrade.yaml` on your workstation to define an `OpenStackDeploy` CR for the upgrade from Red Hat Ceph Storage 4 to 5:

       apiVersion: osp-director.openstack.org/v1beta1
       kind: OpenStackDeploy
       metadata:
         name: ceph-upgrade
       spec:
         configVersion: <config_version>
         configGenerator: ceph-upgrade
         mode: externalUpgrade
         advancedSettings:
           skipTags:
           - ceph_health
           - opendev-validation
           - ceph_ansible_remote_tmp
           tags:
           - ceph
           - facts

10. Save the `openstack-ceph-upgrade.yaml` file.
11. Create the `OpenStackDeploy` resource:

        $ oc create -f openstack-ceph-upgrade.yaml -n openstack

    Wait for the deployment to finish.

12. Create a file named `openstack-ceph-upgrade-packages.yaml` on your workstation to define an `OpenStackDeploy` CR that upgrades the Red Hat Ceph Storage packages:

        apiVersion: osp-director.openstack.org/v1beta1
        kind: OpenStackDeploy
        metadata:
          name: ceph-upgrade-packages
        spec:
          configVersion: <config_version>
          configGenerator: ceph-upgrade
          mode: upgrade
          advancedSettings:
            limit: ceph_osd,ceph_mon,Undercloud
            playbook:
            - upgrade_steps_playbook.yaml
            skipTags:
            - ceph_health
            - opendev-validation
            - ceph_ansible_remote_tmp
            tags:
            - setup_packages

13. Save the `openstack-ceph-upgrade-packages.yaml` file.
14. Create the `OpenStackDeploy` resource:

        $ oc create -f openstack-ceph-upgrade-packages.yaml -n openstack

    Wait for the deployment to finish.

15. Create a file named `openstack-ceph-upgrade-to-cephadm.yaml` on your workstation to define an `OpenStackDeploy` CR that runs the `cephadm` adoption:

        apiVersion: osp-director.openstack.org/v1beta1
        kind: OpenStackDeploy
        metadata:
          name: ceph-upgrade-to-cephadm
        spec:
          configVersion: <config_version>
          configGenerator: ceph-upgrade
          mode: externalUpgrade
          advancedSettings:
            skipTags:
            - ceph_health
            - opendev-validation
            - ceph_ansible_remote_tmp
            tags:
            - cephadm_adopt

16. Save the `openstack-ceph-upgrade-to-cephadm.yaml` file.
17. Create the `OpenStackDeploy` resource:

        $ oc create -f openstack-ceph-upgrade-to-cephadm.yaml -n openstack

    Wait for the deployment to finish.

18. Update the `openstackclient` image to the RHEL 9 container image by removing the current `imageURL` from the `openstackclient` CR:

        $ oc patch openstackclient -n openstack openstackclient --type=json -p="[{'op': 'remove', 'path': '/spec/imageURL'}]"
18.7. Upgrading the overcloud to RHOSP 17.1 on RHEL 8
To upgrade the overcloud nodes to run RHOSP 17.1 containers on RHEL 8, you must update the container preparation file, which is the file that contains the `ContainerImagePrepare` parameter. You use this file to define the rules for obtaining container images for the overcloud.

You must update your container preparation file for both RHEL 8 and RHEL 9 hosts:

- RHEL 9 hosts: All containers are based on RHEL 9.
- RHEL 8 hosts: All containers are based on RHEL 9 except for `libvirt` and `collectd`. The `libvirt` and `collectd` containers must use the same base as the host.
You must then generate a new OpenStackConfigGenerator CR before deploying the updates.
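The RHEL 8 host rule can be spot-checked mechanically: `libvirt` and `collectd` images must come from the `rhosp-rhel8` registry namespace while everything else uses `rhosp-rhel9`. A hedged sketch; the image URLs below are illustrative examples, not values pulled from a live cluster:

```shell
# Check that an image URL comes from the expected rhosp namespace, per the
# rule that collectd/libvirt on RHEL 8 hosts must use rhel8-based images.
check_image() {
  image="$1"; expected="$2"
  case "$image" in
    registry.redhat.io/rhosp-"$expected"/*) echo "ok: $image" ;;
    *) echo "MISMATCH: $image (expected rhosp-$expected)" ;;
  esac
}
# Illustrative examples: libvirt pinned to rhel8, everything else rhel9.
check_image registry.redhat.io/rhosp-rhel8/openstack-nova-libvirt:17.1 rhel8
check_image registry.redhat.io/rhosp-rhel9/openstack-keystone:17.1 rhel9
```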
Procedure
1. Open the container preparation file, `containers-prepare-parameter.yaml`, and check that it obtains the correct image versions.
2. Add the `ContainerImagePrepareRhel8` parameter to `containers-prepare-parameter.yaml`:

       parameter_defaults:
         # default container image configuration for RHEL 9 hosts
         ContainerImagePrepare:
         - push_destination: false
           set: &container_image_prepare_rhel9_contents
             tag: 17.1.2
             name_prefix: openstack-
             namespace: registry.redhat.io/rhosp-rhel9
             ceph_namespace: registry.redhat.io/rhceph
             ceph_image: rhceph-5-rhel8
             ceph_tag: latest
             ceph_alertmanager_image: ose-prometheus-alertmanager
             ceph_alertmanager_namespace: registry.redhat.io/openshift4
             ceph_alertmanager_tag: v4.10
             ceph_grafana_image: rhceph-5-dashboard-rhel8
             ceph_grafana_namespace: registry.redhat.io/rhceph
             ceph_grafana_tag: latest
             ceph_node_exporter_image: ose-prometheus-node-exporter
             ceph_node_exporter_namespace: registry.redhat.io/openshift4
             ceph_node_exporter_tag: v4.10
             ceph_prometheus_image: ose-prometheus
             ceph_prometheus_namespace: registry.redhat.io/openshift4
             ceph_prometheus_tag: v4.10
         # RHEL 8 hosts pin the collectd and libvirt containers to rhosp-rhel8.
         # To apply the following configuration, reference the following parameter
         # in the role-specific parameters below: <Role>ContainerImagePrepare
         ContainerImagePrepareRhel8: &container_image_prepare_rhel8
         - push_destination: false
           set: *container_image_prepare_rhel9_contents
           excludes:
           - collectd
           - nova-libvirt
         - push_destination: false
           set:
             tag: 17.1.2
             name_prefix: openstack-
             namespace: registry.redhat.io/rhosp-rhel8
             ceph_namespace: registry.redhat.io/rhceph
             ceph_image: rhceph-5-rhel8
             ceph_tag: latest
             ceph_alertmanager_image: ose-prometheus-alertmanager
             ceph_alertmanager_namespace: registry.redhat.io/openshift4
             ceph_alertmanager_tag: v4.10
             ceph_grafana_image: rhceph-5-dashboard-rhel8
             ceph_grafana_namespace: registry.redhat.io/rhceph
             ceph_grafana_tag: latest
             ceph_node_exporter_image: ose-prometheus-node-exporter
             ceph_node_exporter_namespace: registry.redhat.io/openshift4
             ceph_node_exporter_tag: v4.10
             ceph_prometheus_image: ose-prometheus
             ceph_prometheus_namespace: registry.redhat.io/openshift4
             ceph_prometheus_tag: v4.10
           includes:
           - collectd
           - nova-libvirt
         # Initially all hosts are RHEL 8 so set the role-specific container
         # image prepare parameter to the RHEL 8 configuration
         ControllerContainerImagePrepare: *container_image_prepare_rhel8
         ComputeContainerImagePrepare: *container_image_prepare_rhel8
         ...

3. Create an environment file named `upgrade.yaml` and add the following configuration to it:

       parameter_defaults:
         UpgradeInitCommand: |
           sudo subscription-manager repos --disable=*
           if $( grep -q 9.2 /etc/os-release )
           then
             sudo subscription-manager repos --enable=rhel-9.2-for-x86_64-baseos-eus-rpms --enable=rhel-9.2-for-x86_64-appstream-eus-rpms --enable=rhel-9.2-for-x86_64-highavailability-eus-rpms --enable=openstack-17.1-for-rhel-9-x86_64-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms
           else
             sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-eus-rpms --enable=rhel-8-for-x86_64-appstream-eus-rpms --enable=rhel-8-for-x86_64-highavailability-eus-rpms --enable=openstack-17.1-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms
           fi

4. Create an environment file named `disable_compute_service_check.yaml` and add the following configuration to it:

       parameter_defaults:
         ExtraConfig:
           nova::workarounds::disable_compute_service_check_for_ffu: true
       parameter_merge_strategies:
         ExtraConfig: merge

5. If your deployment includes HCI, update the Red Hat Ceph Storage and HCI parameters from `ceph-ansible` values in RHOSP 16.2 to `cephadm` values in RHOSP 17.1. For more information, see Custom environment file for configuring Hyperconverged Infrastructure (HCI) storage in director Operator.
6. Create a file named `openstack-configgen-upgrade.yaml` on your workstation that defines a new `OpenStackConfigGenerator` CR named "upgrade":

       apiVersion: osp-director.openstack.org/v1beta1
       kind: OpenStackConfigGenerator
       metadata:
         name: "upgrade"
         namespace: openstack
       spec:
         enableFencing: False
         gitSecret: git-secret
         heatEnvs:
         - ssl/tls-endpoints-public-dns.yaml
         - ssl/enable-tls.yaml
         - nova-hw-machine-type-upgrade.yaml
         - lifecycle/upgrade-prepare.yaml
         heatEnvConfigMap: heat-env-config-upgrade
         tarballConfigMap: tripleo-tarball-config-upgrade

7. Create a file named `openstack-upgrade.yaml` on your workstation to define an `OpenStackDeploy` CR for the overcloud upgrade:

       apiVersion: osp-director.openstack.org/v1beta1
       kind: OpenStackDeploy
       metadata:
         name: upgrade
       spec:
         configVersion: <config_version>
         configGenerator: upgrade
         mode: upgrade

8. Save the `openstack-upgrade.yaml` file.
9. Create the `OpenStackDeploy` resource:

       $ oc create -f openstack-upgrade.yaml -n openstack

   Wait for the deployment to finish. The overcloud nodes are now running 17.1 containers on RHEL 8.
18.8. Upgrading the overcloud to RHEL 9
To upgrade the overcloud nodes to run RHOSP 17.1 containers on RHEL 9, you must update the container preparation file, which is the file that contains the ContainerImagePrepare parameter. You use this file to define the rules for obtaining container images for the overcloud. You must then generate a new OpenStackConfigGenerator CR before deploying the updates.
Procedure
1. Open the container preparation file, `containers-prepare-parameter.yaml`, and check that it obtains the correct image versions.
2. Remove the following role-specific overrides from the `containers-prepare-parameter.yaml` file:

       ControllerContainerImagePrepare: *container_image_prepare_rhel8
       ComputeContainerImagePrepare: *container_image_prepare_rhel8

3. Open the `roles_data.yaml` file and replace `OS::TripleO::Services::NovaLibvirtLegacy` with `OS::TripleO::Services::NovaLibvirt`.
4. Create an environment file named `skip_rhel_release.yaml`, and add the following configuration:

       parameter_defaults:
         SkipRhelEnforcement: false

5. Create an environment file named `system_upgrade.yaml` and add the following configuration:

       parameter_defaults:
         NICsPrefixesToUdev: ['en']
         UpgradeLeappDevelSkip: "LEAPP_UNSUPPORTED=1 LEAPP_DEVEL_SKIP_CHECK_OS_RELEASE=1 LEAPP_NO_NETWORK_RENAMING=1 LEAPP_DEVEL_TARGET_RELEASE=9.2"
         UpgradeLeappDebug: false
         UpgradeLeappEnabled: true
         LeappActorsToRemove: ['checkifcfg','persistentnetnamesdisable','checkinstalledkernels','biosdevname']
         LeappRepoInitCommand: |
           sudo subscription-manager repos --disable=*
           subscription-manager repos --enable rhel-8-for-x86_64-baseos-aus-rpms --enable rhel-8-for-x86_64-appstream-aus-rpms --enable openstack-17.1-for-rhel-8-x86_64-rpms
           subscription-manager release --set=8.4
         UpgradeLeappCommandOptions: "--enablerepo=rhel-9-for-x86_64-baseos-e4s-rpms --enablerepo=rhel-9-for-x86_64-appstream-e4s-rpms --enablerepo=rhel-9-for-x86_64-highavailability-e4s-rpms --enablerepo=openstack-17.1-for-rhel-9-x86_64-rpms --enablerepo=fast-datapath-for-rhel-9-x86_64-rpms"
         LeappInitCommand: |
           sudo subscription-manager repos --disable=*
           sudo subscription-manager repos --enable=rhel-9-for-x86_64-baseos-e4s-rpms --enable=rhel-9-for-x86_64-appstream-e4s-rpms --enable=rhel-9-for-x86_64-highavailability-e4s-rpms --enable=openstack-17.1-for-rhel-9-x86_64-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms
           leapp answer --add --section check_vdo.confirm=True
           dnf -y remove ruby-irb

   For more information about the recommended Leapp parameters, see Upgrade parameters in the Framework for upgrades (16.2 to 17.1) guide.

6. Create a new `OpenStackConfigGenerator` CR named `system-upgrade` that includes the updated heat environment and tripleo tarball ConfigMaps.
7. Create a file named `openstack-controller0-upgrade.yaml` on your workstation to define an `OpenStackDeploy` CR for the first Controller node:

       apiVersion: osp-director.openstack.org/v1beta1
       kind: OpenStackDeploy
       metadata:
         name: system-upgrade-controller-0
       spec:
         configVersion: <config_version>
         configGenerator: system-upgrade
         mode: upgrade
         advancedSettings:
           limit: Controller[0]
           tags:
           - system_upgrade

8. Save the `openstack-controller0-upgrade.yaml` file.
9. Create the `OpenStackDeploy` resource to run the system upgrade on Controller 0:

       $ oc create -f openstack-controller0-upgrade.yaml -n openstack

   Wait for the deployment to finish.

10. Create a file named `openstack-controller1-upgrade.yaml` on your workstation to define an `OpenStackDeploy` CR for the second Controller node:

        apiVersion: osp-director.openstack.org/v1beta1
        kind: OpenStackDeploy
        metadata:
          name: system-upgrade-controller-1
        spec:
          configVersion: <config_version>
          configGenerator: system-upgrade
          mode: upgrade
          advancedSettings:
            limit: Controller[1]
            tags:
            - system_upgrade

11. Save the `openstack-controller1-upgrade.yaml` file.
12. Create the `OpenStackDeploy` resource to run the system upgrade on Controller 1:

        $ oc create -f openstack-controller1-upgrade.yaml -n openstack

    Wait for the deployment to finish.

13. Create a file named `openstack-controller2-upgrade.yaml` on your workstation to define an `OpenStackDeploy` CR for the third Controller node:

        apiVersion: osp-director.openstack.org/v1beta1
        kind: OpenStackDeploy
        metadata:
          name: system-upgrade-controller-2
        spec:
          configVersion: <config_version>
          configGenerator: system-upgrade
          mode: upgrade
          advancedSettings:
            limit: Controller[2]
            tags:
            - system_upgrade

14. Save the `openstack-controller2-upgrade.yaml` file.
15. Create the `OpenStackDeploy` resource to run the system upgrade on Controller 2:

        $ oc create -f openstack-controller2-upgrade.yaml -n openstack

    Wait for the deployment to finish.
16. Create a file named `openstack-computes-upgrade.yaml` on your workstation to define an `OpenStackDeploy` CR that upgrades all Compute nodes:

        apiVersion: osp-director.openstack.org/v1beta1
        kind: OpenStackDeploy
        metadata:
          name: system-upgrade-computes
        spec:
          configVersion: <config_version>
          configGenerator: system-upgrade
          mode: upgrade
          advancedSettings:
            limit: <nodes_upgrade>
            tags:
            - system_upgrade

    Replace `<nodes_upgrade>` with the name of the Compute role, or a comma-separated list of the nodes that you want to upgrade.

17. Save the `openstack-computes-upgrade.yaml` file.
18. Create the `OpenStackDeploy` resource to run the system upgrade on the Compute nodes:

        $ oc create -f openstack-computes-upgrade.yaml -n openstack

    Wait for the deployment to finish.

19. Create a file named `openstack-computes-containers-upgrade.yaml` on your workstation to define an `OpenStackDeploy` CR that upgrades the containers on the Compute nodes to RHEL 9.2:

        apiVersion: osp-director.openstack.org/v1beta1
        kind: OpenStackDeploy
        metadata:
          name: upgrade-containers
        spec:
          configVersion: <config_version>
          configGenerator: system-upgrade
          mode: upgrade
          advancedSettings:
            limit: <nodes_upgrade>

    Replace `<nodes_upgrade>` with the name of the Compute role, or a comma-separated list of the nodes that you want to upgrade.

20. Save the `openstack-computes-containers-upgrade.yaml` file.
21. Create the `OpenStackDeploy` resource to run the container upgrade:

        $ oc create -f openstack-computes-containers-upgrade.yaml -n openstack

    Wait for the deployment to finish.
18.9. Performing post-upgrade tasks
After the overcloud upgrade completes successfully, you must perform some post-upgrade tasks to finalize your environment.
Procedure
1. Update the `baseImageUrl` parameter to a RHEL 9.2 guest image in your `OpenStackProvisionServer` CR and `OpenStackBaremetalSet` CR.
2. Re-enable fencing on the Controller nodes:

       $ oc rsh -n openstack openstackclient
       $ ssh controller-0.ctlplane "sudo pcs property set stonith-enabled=true"

3. Perform any other post-upgrade actions relevant to your environment. For more information, see Performing post-upgrade actions in the Framework for upgrades (16.2 to 17.1) guide.