Chapter 18. Upgrading an overcloud on a Red Hat OpenShift Container Platform cluster with director Operator (16.2 to 17.1)

You can upgrade your Red Hat OpenStack Platform (RHOSP) 16.2 overcloud to a RHOSP 17.1 overcloud with director Operator (OSPdO) by using the in-place framework for upgrades (FFU) workflow.

To perform the upgrade, complete the following tasks:

  1. Prepare your environment for the upgrade.
  2. Update custom roles_data files to the composable services supported by RHOSP 17.1.
  3. Optional: Upgrade Red Hat Ceph Storage and adopt cephadm.
  4. Upgrade the overcloud nodes to run RHOSP 17.1 containers on RHEL 8.
  5. Upgrade the overcloud nodes to run RHOSP 17.1 containers on RHEL 9.
  6. Perform post-upgrade tasks.

18.1. Prerequisites

18.2. Updating director Operator

You must update director Operator (OSPdO) to the latest 17.1 version before you perform the overcloud upgrade. To update OSPdO, delete the current OSPdO subscription and ClusterServiceVersion (CSV), and then reinstall OSPdO.

Procedure

  1. Check the current version of the director Operator in the currentCSV field:

    $ oc get subscription osp-director-operator.openstack -n openstack -o yaml | grep currentCSV
  2. Delete the CSV for the director Operator in the target namespace:

    $ oc delete clusterserviceversion <current_CSV> -n openstack
    • Replace <current_CSV> with the currentCSV value from step 1.
  3. Delete the subscription:

    $ oc delete subscription osp-director-operator.openstack -n openstack
  4. Install the latest 17.1 director Operator. For information, see Installing director Operator.
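
    The reinstallation in step 4 follows the standard Operator Lifecycle Manager flow. The following is a rough sketch only; the source and sourceNamespace values are assumptions for a catalog index deployed in the openstack namespace, so take the exact values from the Installing director Operator procedure for your catalog:

    ```yaml
    # Sketch of a Subscription for reinstalling OSPdO. The source and
    # sourceNamespace values are assumptions; confirm them against the
    # Installing director Operator procedure for your environment.
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: osp-director-operator
      namespace: openstack
    spec:
      name: osp-director-operator
      source: osp-director-operator-index
      sourceNamespace: openstack
    ```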

18.3. Preparing your director Operator environment for upgrade

You must prepare your director Operator (OSPdO) deployed Red Hat OpenStack Platform (RHOSP) environment for the upgrade to RHOSP 17.1.

Procedure

  1. Set openStackRelease to 17.1 on the openstackcontrolplane CR:

    $ oc patch openstackcontrolplane -n openstack overcloud --type=json -p="[{'op': 'replace', 'path': '/spec/openStackRelease', 'value': '17.1'}]"
  2. Retrieve the OSPdO ClusterServiceVersion (csv) CR:

    $ oc get csv -n openstack
  3. Delete all instances of the OpenStackConfigGenerator CR:

    $ oc delete -n openstack openstackconfiggenerator --all
  4. If your deployment includes HCI, you must perform the adoption from ceph-ansible to cephadm by using the RHOSP 17.1 on RHEL 8 openstackclient image:

    $ oc patch openstackclient -n openstack openstackclient --type=json -p="[{'op': 'replace', 'path': '/spec/imageURL', 'value': 'registry.redhat.io/rhosp-rhel8/openstack-tripleoclient:17.1'}]"

    If your deployment does not include HCI, or the cephadm adoption has already been completed, then switch to the 17.1 OSPdO default openstackclient image by removing the current imageURL from the openstackclient CR:

    $ oc patch openstackclient -n openstack openstackclient --type=json -p="[{'op': 'remove', 'path': '/spec/imageURL'}]"
  5. If you have enabled fencing in the overcloud, you must temporarily disable fencing on one of the Controller nodes for the duration of the upgrade:

    $ oc rsh -n openstack openstackclient
    $ ssh controller-0.ctlplane "sudo pcs property set stonith-enabled=false"

18.4. Updating composable services in custom roles_data files

You must update your roles_data files to the supported Red Hat OpenStack Platform (RHOSP) 17.1 composable services. For more information, see Updating composable services in custom roles_data files in the Framework for Upgrades (16.2 to 17.1) guide.

Procedure

  1. Remove the following services from all roles:

    `OS::TripleO::Services::CinderBackendDellEMCXTREMIOISCSI`
    `OS::TripleO::Services::CinderBackendDellPs`
    `OS::TripleO::Services::CinderBackendVRTSHyperScale`
    `OS::TripleO::Services::Ec2Api`
    `OS::TripleO::Services::Fluentd`
    `OS::TripleO::Services::FluentdAlt`
    `OS::TripleO::Services::Keepalived`
    `OS::TripleO::Services::MistralApi`
    `OS::TripleO::Services::MistralEngine`
    `OS::TripleO::Services::MistralEventEngine`
    `OS::TripleO::Services::MistralExecutor`
    `OS::TripleO::Services::NeutronLbaasv2Agent`
    `OS::TripleO::Services::NeutronLbaasv2Api`
    `OS::TripleO::Services::NeutronML2FujitsuCfab`
    `OS::TripleO::Services::NeutronML2FujitsuFossw`
    `OS::TripleO::Services::NeutronSriovHostConfig`
    `OS::TripleO::Services::NovaConsoleauth`
    `OS::TripleO::Services::Ntp`
    `OS::TripleO::Services::OpenDaylightApi`
    `OS::TripleO::Services::OpenDaylightOvs`
    `OS::TripleO::Services::OpenShift::GlusterFS`
    `OS::TripleO::Services::OpenShift::Infra`
    `OS::TripleO::Services::OpenShift::Master`
    `OS::TripleO::Services::OpenShift::Worker`
    `OS::TripleO::Services::PankoApi`
    `OS::TripleO::Services::Rear`
    `OS::TripleO::Services::SaharaApi`
    `OS::TripleO::Services::SaharaEngine`
    `OS::TripleO::Services::SensuClient`
    `OS::TripleO::Services::SensuClientAlt`
    `OS::TripleO::Services::SkydiveAgent`
    `OS::TripleO::Services::SkydiveAnalyzer`
    `OS::TripleO::Services::Tacker`
    `OS::TripleO::Services::TripleoUI`
    `OS::TripleO::Services::UndercloudMinionMessaging`
    `OS::TripleO::Services::UndercloudUpgradeEphemeralHeat`
    `OS::TripleO::Services::Zaqar`
  2. Add the OS::TripleO::Services::GlanceApiInternal service to your Controller role.
  3. Update the OS::TripleO::Services::NovaLibvirt service on the Compute roles to OS::TripleO::Services::NovaLibvirtLegacy.
  4. If your environment includes Red Hat Ceph Storage, set the DeployedCeph parameter to false to enable director-managed cephadm deployments.
  5. If your network configuration templates include the following functions, you must manually convert your NIC templates to Jinja2 Ansible format before you upgrade the overcloud. The following functions are not supported with automatic conversion:

    'get_file'
    'get_resource'
    'digest'
    'repeat'
    'resource_facade'
    'str_replace'
    'str_replace_strict'
    'str_split'
    'map_merge'
    'map_replace'
    'yaql'
    'equals'
    'if'
    'not'
    'and'
    'or'
    'filter'
    'make_url'
    'contains'

    For more information about manually converting your NIC templates, see Manually converting NIC templates to Jinja2 Ansible format in Installing and managing Red Hat OpenStack Platform with director.
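
    The DeployedCeph setting from step 4 is an ordinary parameter_defaults entry. A minimal sketch, assuming you add it to one of your existing custom environment files:

    ```yaml
    # Illustrative environment file entry: with DeployedCeph set to false,
    # director deploys and manages the Ceph cluster through cephadm rather
    # than treating it as externally pre-deployed.
    parameter_defaults:
      DeployedCeph: false
    ```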

18.5. Upgrading Red Hat Ceph Storage and adopting cephadm

If your environment includes Red Hat Ceph Storage deployments, you must upgrade your deployment to Red Hat Ceph Storage 5. After the upgrade to version 5, cephadm manages Red Hat Ceph Storage instead of ceph-ansible.

Procedure

  1. Create an Ansible playbook file named ceph-admin-user-playbook.yaml to create a ceph-admin user on the overcloud nodes.
  2. Add the following configuration to the ceph-admin-user-playbook.yaml file:

    - hosts: localhost
      gather_facts: false
      tasks:
        - name: set ssh key path facts
          set_fact:
            private_key: "{{ lookup('env', 'HOME') }}/.ssh/{{ tripleo_admin_user }}-id_rsa"
            public_key: "{{ lookup('env', 'HOME') }}/.ssh/{{ tripleo_admin_user }}-id_rsa.pub"
          run_once: true
        - name: stat private key
          stat:
            path: "{{ private_key }}"
          register: private_key_stat
        - name: create private key if it does not exist
          shell: "ssh-keygen -t rsa -q -N '' -f {{ private_key }}"
          no_log: true
          when:
            - not private_key_stat.stat.exists
        - name: stat public key
          stat:
            path: "{{ public_key }}"
          register: public_key_stat
        - name: create public key if it does not exist
          shell: "ssh-keygen -y -f {{ private_key }} > {{ public_key }}"
          when:
            - not public_key_stat.stat.exists
    
    - hosts: overcloud
      gather_facts: false
      become: true
      pre_tasks:
        - name: Get local private key
          slurp:
            src: "{{ hostvars['localhost']['private_key'] }}"
          register: private_key_get
          delegate_to: localhost
          no_log: true
        - name: Get local public key
          slurp:
            src: "{{ hostvars['localhost']['public_key'] }}"
          register: public_key_get
          delegate_to: localhost
      roles:
        - role: tripleo_create_admin
          tripleo_admin_user: "{{ tripleo_admin_user }}"
          tripleo_admin_pubkey: "{{ public_key_get['content'] | b64decode }}"
          tripleo_admin_prikey: "{{ private_key_get['content'] | b64decode }}"
          no_log: true
  3. Copy the playbook to the openstackclient container:

    $ oc cp -n openstack ceph-admin-user-playbook.yaml openstackclient:/home/cloud-admin/ceph-admin-user-playbook.yaml
  4. Run the playbook on the openstackclient container:

    $ oc rsh -n openstack openstackclient
    $ ansible-playbook -i /home/cloud-admin/ctlplane-ansible-inventory -e tripleo_admin_user=ceph-admin -e distribute_private_key=true /home/cloud-admin/ceph-admin-user-playbook.yaml
  5. Update the Red Hat Ceph Storage container image parameters in the containers-prepare-parameter.yaml file for the version of Red Hat Ceph Storage that your deployment uses:

    ceph_namespace: registry.redhat.io/rhceph
    ceph_image: <ceph_image_file>
    ceph_tag: latest
    ceph_grafana_image: <grafana_image_file>
    ceph_grafana_namespace: registry.redhat.io/rhceph
    ceph_grafana_tag: latest
    • Replace <ceph_image_file> with the name of the image file for the version of Red Hat Ceph Storage that your deployment uses:

      • Red Hat Ceph Storage 5: rhceph-5-rhel8
    • Replace <grafana_image_file> with the name of the image file for the version of Red Hat Ceph Storage that your deployment uses:

      • Red Hat Ceph Storage 5: rhceph-5-dashboard-rhel8
  6. If your deployment includes HCI, update the CephAnsibleRepo parameter in compute-hci.yaml to "rhceph-5-tools-for-rhel-8-x86_64-rpms".
  7. Create an environment file named upgrade.yaml and add the following configuration to it:

    parameter_defaults:
      UpgradeInitCommand: |
        sudo subscription-manager repos --disable=*
        if $( grep -q 9.2 /etc/os-release )
        then
          sudo subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms --enable=rhel-9-for-x86_64-highavailability-eus-rpms --enable=openstack-17.1-for-rhel-9-x86_64-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms
          sudo podman ps | grep -q ceph && sudo subscription-manager repos --enable=rhceph-5-tools-for-rhel-9-x86_64-rpms
          sudo subscription-manager release --set=9.2
        else
          sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-tus-rpms --enable=rhel-8-for-x86_64-appstream-tus-rpms --enable=rhel-8-for-x86_64-highavailability-tus-rpms --enable=openstack-17.1-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms
          sudo podman ps | grep -q ceph && sudo subscription-manager repos --enable=rhceph-5-tools-for-rhel-8-x86_64-rpms
          sudo subscription-manager release --set=8.4
        fi
        sudo dnf -y install cephadm
  8. Create a new OpenStackConfigGenerator CR named ceph-upgrade that includes the updated environment file and tripleo-tarball ConfigMaps.
  9. Create a file named openstack-ceph-upgrade.yaml on your workstation to define an OpenStackDeploy CR for the upgrade from Red Hat Ceph Storage 4 to 5:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackDeploy
    metadata:
      name: ceph-upgrade
    spec:
      configVersion: <config_version>
      configGenerator: ceph-upgrade
      mode: externalUpgrade
      advancedSettings:
        skipTags:
        - ceph_health
        - opendev-validation
        - ceph_ansible_remote_tmp
        tags:
        - ceph
        - facts
  10. Save the openstack-ceph-upgrade.yaml file.
  11. Create the OpenStackDeploy resource:

    $ oc create -f openstack-ceph-upgrade.yaml -n openstack
  12. Wait for the deployment to finish.
  13. Create a file named openstack-ceph-upgrade-packages.yaml on your workstation to define an OpenStackDeploy CR that upgrades the Red Hat Ceph Storage packages:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackDeploy
    metadata:
      name: ceph-upgrade-packages
    spec:
      configVersion: <config_version>
      configGenerator: ceph-upgrade
      mode: upgrade
      advancedSettings:
        limit: ceph_osd,ceph_mon,Undercloud
        playbook:
        - upgrade_steps_playbook.yaml
        skipTags:
        - ceph_health
        - opendev-validation
        - ceph_ansible_remote_tmp
        tags:
        - setup_packages
  14. Save the openstack-ceph-upgrade-packages.yaml file.
  15. Create the OpenStackDeploy resource:

    $ oc create -f openstack-ceph-upgrade-packages.yaml -n openstack
  16. Wait for the deployment to finish.
  17. Create a file named openstack-ceph-upgrade-to-cephadm.yaml on your workstation to define an OpenStackDeploy CR that runs the cephadm adoption:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackDeploy
    metadata:
      name: ceph-upgrade-to-cephadm
    spec:
      configVersion: <config_version>
      configGenerator: ceph-upgrade
      mode: externalUpgrade
      advancedSettings:
        skipTags:
        - ceph_health
        - opendev-validation
        - ceph_ansible_remote_tmp
        tags:
        - cephadm_adopt
  18. Save the openstack-ceph-upgrade-to-cephadm.yaml file.
  19. Create the OpenStackDeploy resource:

    $ oc create -f openstack-ceph-upgrade-to-cephadm.yaml -n openstack
  20. Wait for the deployment to finish.
  21. Update the openstackclient image to the RHEL 9 container image by removing the current imageURL from the openstackclient CR:

    $ oc patch openstackclient -n openstack openstackclient --type=json -p="[{'op': 'remove', 'path': '/spec/imageURL'}]"
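
Step 8 of this procedure does not show the ceph-upgrade OpenStackConfigGenerator CR itself. The following is a sketch only, modeled on the OpenStackConfigGenerator example later in this chapter; the gitSecret and ConfigMap names are assumptions and must match the ConfigMaps that contain your updated upgrade.yaml environment file and tarball contents:

```yaml
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackConfigGenerator
metadata:
  name: ceph-upgrade
  namespace: openstack
spec:
  gitSecret: git-secret                            # assumed secret name
  heatEnvConfigMap: heat-env-config-upgrade        # assumed; contains upgrade.yaml
  tarballConfigMap: tripleo-tarball-config-upgrade # assumed tarball ConfigMap
```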

18.6. Upgrading the overcloud to RHOSP 17.1 on RHEL 8

To upgrade the overcloud nodes to run RHOSP 17.1 containers on RHEL 8, you must update the container preparation file, which is the file that contains the ContainerImagePrepare parameter. You use this file to define the rules for obtaining container images for the overcloud.

You must update your container preparation file for both RHEL 8 and RHEL 9 hosts:

  • RHEL 9 hosts: All containers are based on RHEL 9.
  • RHEL 8 hosts: All containers are based on RHEL 9 except for libvirt and collectd. The libvirt and collectd containers must use the same base as the host.

You must then generate a new OpenStackConfigGenerator CR before deploying the updates.

Procedure

  1. Open the container preparation file, containers-prepare-parameter.yaml, and check that it obtains the correct image versions.
  2. Add the ContainerImagePrepareRhel8 parameter to containers-prepare-parameter.yaml:

    parameter_defaults:
      # default container image configuration for RHEL 9 hosts
      ContainerImagePrepare:
      - push_destination: false
        set: &container_image_prepare_rhel9_contents
          tag: 17.1.2
          name_prefix: openstack-
          namespace: registry.redhat.io/rhosp-rhel9
          ceph_namespace: registry.redhat.io/rhceph
          ceph_image: rhceph-5-rhel8
          ceph_tag: latest
          ceph_alertmanager_image: ose-prometheus-alertmanager
          ceph_alertmanager_namespace: registry.redhat.io/openshift4
          ceph_alertmanager_tag: v4.10
          ceph_grafana_image: rhceph-5-dashboard-rhel8
          ceph_grafana_namespace: registry.redhat.io/rhceph
          ceph_grafana_tag: latest
          ceph_node_exporter_image: ose-prometheus-node-exporter
          ceph_node_exporter_namespace: registry.redhat.io/openshift4
          ceph_node_exporter_tag: v4.10
          ceph_prometheus_image: ose-prometheus
          ceph_prometheus_namespace: registry.redhat.io/openshift4
          ceph_prometheus_tag: v4.10
    
      # RHEL 8 hosts pin the collectd and libvirt containers to rhosp-rhel8.
      # To apply the following configuration, reference the following parameter
      # in the role-specific parameters below: <Role>ContainerImagePrepare
      ContainerImagePrepareRhel8: &container_image_prepare_rhel8
      - push_destination: false
        set: *container_image_prepare_rhel9_contents
        excludes:
        - collectd
        - nova-libvirt
      - push_destination: false
        set:
          tag: 17.1.2
          name_prefix: openstack-
          namespace: registry.redhat.io/rhosp-rhel8
          ceph_namespace: registry.redhat.io/rhceph
          ceph_image: rhceph-5-rhel8
          ceph_tag: latest
          ceph_alertmanager_image: ose-prometheus-alertmanager
          ceph_alertmanager_namespace: registry.redhat.io/openshift4
          ceph_alertmanager_tag: v4.10
          ceph_grafana_image: rhceph-5-dashboard-rhel8
          ceph_grafana_namespace: registry.redhat.io/rhceph
          ceph_grafana_tag: latest
          ceph_node_exporter_image: ose-prometheus-node-exporter
          ceph_node_exporter_namespace: registry.redhat.io/openshift4
          ceph_node_exporter_tag: v4.10
          ceph_prometheus_image: ose-prometheus
          ceph_prometheus_namespace: registry.redhat.io/openshift4
          ceph_prometheus_tag: v4.10
        includes:
        - collectd
        - nova-libvirt
      # Initially all hosts are RHEL 8, so set the role-specific container
      # image prepare parameters to the RHEL 8 configuration
      ControllerContainerImagePrepare: *container_image_prepare_rhel8
      ComputeContainerImagePrepare: *container_image_prepare_rhel8
    ...
  3. Create an environment file named upgrade.yaml.
  4. Add the following configuration to the upgrade.yaml file:

    parameter_defaults:
      UpgradeInitCommand: |
        sudo subscription-manager repos --disable=*
        if $( grep -q 9.2 /etc/os-release )
        then
          sudo subscription-manager repos --enable=rhel-9.2-for-x86_64-baseos-eus-rpms --enable=rhel-9.2-for-x86_64-appstream-eus-rpms --enable=rhel-9.2-for-x86_64-highavailability-eus-rpms --enable=openstack-17.1-for-rhel-9-x86_64-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms
        else
          sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-eus-rpms --enable=rhel-8-for-x86_64-appstream-eus-rpms --enable=rhel-8-for-x86_64-highavailability-eus-rpms --enable=openstack-17.1-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms
        fi
  5. Create an environment file named disable_compute_service_check.yaml.
  6. Add the following configuration to the disable_compute_service_check.yaml file:

    parameter_defaults:
      ExtraConfig:
        nova::workarounds::disable_compute_service_check_for_ffu: true
    
    parameter_merge_strategies:
      ExtraConfig: merge
  7. If your deployment includes HCI, update the Red Hat Ceph Storage and HCI parameters from ceph-ansible values in RHOSP 16.2 to cephadm values in RHOSP 17.1. For more information, see Custom environment file for configuring Hyperconverged Infrastructure (HCI) storage in director Operator.
  8. Create a file named openstack-configgen-upgrade.yaml on your workstation that defines a new OpenStackConfigGenerator CR named "upgrade":

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackConfigGenerator
    metadata:
      name: "upgrade"
      namespace: openstack
    spec:
      enableFencing: False
      gitSecret: git-secret
      heatEnvs:
        - ssl/tls-endpoints-public-dns.yaml
        - ssl/enable-tls.yaml
        - nova-hw-machine-type-upgrade.yaml
        - lifecycle/upgrade-prepare.yaml
      heatEnvConfigMap: heat-env-config-upgrade
      tarballConfigMap: tripleo-tarball-config-upgrade
  9. Create a file named openstack-upgrade.yaml on your workstation to create an OpenStackDeploy CR for the overcloud upgrade:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackDeploy
    metadata:
      name: upgrade
    spec:
      configVersion: <config_version>
      configGenerator: upgrade
      mode: upgrade
  10. Save the openstack-upgrade.yaml file.
  11. Create the OpenStackDeploy resource:

    $ oc create -f openstack-upgrade.yaml -n openstack
  12. Wait for the deployment to finish. The overcloud nodes are now running 17.1 containers on RHEL 8.

18.7. Upgrading the overcloud to RHEL 9

To upgrade the overcloud nodes to run RHOSP 17.1 containers on RHEL 9, you must update the container preparation file, which is the file that contains the ContainerImagePrepare parameter. You use this file to define the rules for obtaining container images for the overcloud. You must then generate a new OpenStackConfigGenerator CR before deploying the updates.

Procedure

  1. Open the container preparation file, containers-prepare-parameter.yaml, and check that it obtains the correct image versions.
  2. Remove the following role-specific overrides from the containers-prepare-parameter.yaml file:

      ControllerContainerImagePrepare: *container_image_prepare_rhel8
      ComputeContainerImagePrepare: *container_image_prepare_rhel8
  3. Open the roles_data.yaml file and replace OS::TripleO::Services::NovaLibvirtLegacy with OS::TripleO::Services::NovaLibvirt.
  4. Create an environment file named skip_rhel_release.yaml, and add the following configuration:

    parameter_defaults:
      SkipRhelEnforcement: true
  5. Create an environment file named system_upgrade.yaml and add the following configuration:

    parameter_defaults:
      NICsPrefixesToUdev: ['en']
      UpgradeLeappDevelSkip: "LEAPP_UNSUPPORTED=1 LEAPP_DEVEL_SKIP_CHECK_OS_RELEASE=1 LEAPP_NO_NETWORK_RENAMING=1 LEAPP_DEVEL_TARGET_RELEASE=9.2"
      UpgradeLeappDebug: false
      UpgradeLeappEnabled: true
      LeappActorsToRemove: ['checkifcfg','persistentnetnamesdisable','checkinstalledkernels','biosdevname']
      LeappRepoInitCommand: |
        sudo subscription-manager repos --disable=*
        subscription-manager repos --enable rhel-8-for-x86_64-baseos-tus-rpms --enable rhel-8-for-x86_64-appstream-tus-rpms --enable openstack-17.1-for-rhel-8-x86_64-rpms
        subscription-manager release --set=8.4
      UpgradeLeappCommandOptions: "--enablerepo=rhel-9-for-x86_64-baseos-eus-rpms --enablerepo=rhel-9-for-x86_64-appstream-eus-rpms --enablerepo=rhel-9-for-x86_64-highavailability-eus-rpms --enablerepo=openstack-17.1-for-rhel-9-x86_64-rpms --enablerepo=fast-datapath-for-rhel-9-x86_64-rpms"
      LeappInitCommand: |
        sudo subscription-manager repos --disable=*
        sudo subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms --enable=rhel-9-for-x86_64-highavailability-eus-rpms --enable=openstack-17.1-for-rhel-9-x86_64-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms

        leapp answer --add --section check_vdo.confirm=True

        dnf -y remove irb

    For more information on the recommended Leapp parameters, see Upgrade parameters in the Framework for upgrades (16.2 to 17.1) guide.

  6. Create a new OpenStackConfigGenerator CR named system-upgrade that includes the updated heat environment and tripleo tarball ConfigMaps.
  7. Create a file named openstack-controller0-upgrade.yaml on your workstation to define an OpenStackDeploy CR for the first controller node:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackDeploy
    metadata:
      name: system-upgrade-controller-0
    spec:
      configVersion: <config_version>
      configGenerator: system-upgrade
      mode: upgrade
      advancedSettings:
        limit: Controller[0]
        tags:
        - system_upgrade
  8. Save the openstack-controller0-upgrade.yaml file.
  9. Create the OpenStackDeploy resource to run the system upgrade on Controller 0:

    $ oc create -f openstack-controller0-upgrade.yaml -n openstack
  10. Wait for the deployment to finish.
  11. Create a file named openstack-controller1-upgrade.yaml on your workstation to define an OpenStackDeploy CR for the second controller node:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackDeploy
    metadata:
      name: system-upgrade-controller-1
    spec:
      configVersion: <config_version>
      configGenerator: system-upgrade
      mode: upgrade
      advancedSettings:
        limit: Controller[1]
        tags:
        - system_upgrade
  12. Save the openstack-controller1-upgrade.yaml file.
  13. Create the OpenStackDeploy resource to run the system upgrade on Controller 1:

    $ oc create -f openstack-controller1-upgrade.yaml -n openstack
  14. Wait for the deployment to finish.
  15. Create a file named openstack-controller2-upgrade.yaml on your workstation to define an OpenStackDeploy CR for the third controller node:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackDeploy
    metadata:
      name: system-upgrade-controller-2
    spec:
      configVersion: <config_version>
      configGenerator: system-upgrade
      mode: upgrade
      advancedSettings:
        limit: Controller[2]
        tags:
        - system_upgrade
  16. Save the openstack-controller2-upgrade.yaml file.
  17. Create the OpenStackDeploy resource to run the system upgrade on Controller 2:

    $ oc create -f openstack-controller2-upgrade.yaml -n openstack
  18. Wait for the deployment to finish.
  19. Create a file named openstack-computes-upgrade.yaml on your workstation to define an OpenStackDeploy CR that upgrades all Compute nodes:

    apiVersion: osp-director.openstack.org/v1beta1
    kind: OpenStackDeploy
    metadata:
      name: system-upgrade-computes
    spec:
      configVersion: <config_version>
      configGenerator: system-upgrade
      mode: upgrade
      advancedSettings:
        limit: Compute
        tags:
        - system_upgrade
  20. Save the openstack-computes-upgrade.yaml file.
  21. Create the OpenStackDeploy resource to run the system upgrade on the Compute nodes:

    $ oc create -f openstack-computes-upgrade.yaml -n openstack
  22. Wait for the deployment to finish.
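
Step 6 of this procedure does not show the system-upgrade OpenStackConfigGenerator CR itself. The following is a sketch that follows the same pattern as the upgrade CR in the previous section; the gitSecret and ConfigMap names are assumptions and must match the ConfigMaps that contain your updated environment files and tarball contents:

```yaml
apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackConfigGenerator
metadata:
  name: system-upgrade
  namespace: openstack
spec:
  gitSecret: git-secret                            # assumed secret name
  heatEnvConfigMap: heat-env-config-upgrade        # assumed; must include system_upgrade.yaml and skip_rhel_release.yaml
  tarballConfigMap: tripleo-tarball-config-upgrade # assumed tarball ConfigMap
```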

18.8. Performing post-upgrade tasks

After the overcloud upgrade completes successfully, you must perform some post-upgrade tasks to complete the upgrade.

Procedure

  1. Update the baseImageUrl parameter to a RHEL 9.2 guest image in your OpenStackProvisionServer CR and OpenStackBaremetalSet CR.
  2. Re-enable fencing on the controllers:

    $ oc rsh -n openstack openstackclient
    $ ssh controller-0.ctlplane "sudo pcs property set stonith-enabled=true"
  3. Perform any other post-upgrade actions relevant to your environment. For more information, see Performing post-upgrade actions in the Framework for upgrades (16.2 to 17.1) guide.
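
The baseImageUrl update in step 1 is a single field change in each CR. An illustrative excerpt, in which the URL is a placeholder for your own image location:

```yaml
# Excerpt only; apply the equivalent change to both the
# OpenStackProvisionServer CR and the OpenStackBaremetalSet CR.
spec:
  baseImageUrl: http://<image_server>/rhel-guest-image-9.2-x86_64.qcow2
```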
© 2024 Red Hat, Inc.