
Chapter 15. Performing post-upgrade actions


After you have completed the overcloud upgrade, you must perform some post-upgrade configuration to ensure that your environment is fully supported and ready for future operations.

Important

If you run additional overcloud commands after the upgrade from Red Hat OpenStack Platform 16.2 to 17.1, you must consider the following:

  • Overcloud commands that you run after the upgrade must include the YAML files that you created or updated during the upgrade process. For example, to provision overcloud nodes during a scale-up operation, use the /home/stack/tripleo-<stack>-baremetal-deployment.yaml file instead of the /home/stack/templates/overcloud-baremetal-deployed.yaml file, as shown in the sketch after this list.
  • Include all the options that you passed to the last run of the openstack overcloud upgrade prepare command, except for the system_upgrade.yaml file and the upgrades-environment.yaml file.
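
For example, a node provisioning command after the upgrade might look like the following minimal sketch. The stack name overcloud and the output path are illustrative; adjust them to match your deployment:

    (undercloud) $ openstack overcloud node provision \
        --stack overcloud \
        --output /home/stack/templates/overcloud-baremetal-deployed.yaml \
        /home/stack/tripleo-overcloud-baremetal-deployment.yaml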

15.1. Performing post-upgrade tasks on the operating system

If you upgraded the operating system on your hosts to Red Hat Enterprise Linux 9.2, you must perform post-upgrade tasks such as removing any remaining Leapp packages. For more information on these tasks, see Performing post-upgrade tasks in Upgrading from RHEL 8 to RHEL 9.
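
For example, you can list and remove leftover Leapp packages with commands similar to the following sketch. The exact package names can vary by release, so treat the glob as illustrative and follow the linked RHEL documentation for the authoritative steps:

    $ sudo dnf list installed "leapp*"
    $ sudo dnf -y remove "leapp*"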

15.2. Upgrading the overcloud images

You must replace your current overcloud images with new versions. The new images ensure that director can introspect and provision your nodes using the latest version of Red Hat OpenStack Platform (RHOSP) software.

Prerequisites

  • You have upgraded the undercloud to the latest version.
Note

You must use the new version of the overcloud images if you redeploy your overcloud. For more information on installing overcloud images, see Installing the overcloud images in Installing and managing Red Hat OpenStack Platform with director.

Procedure

  1. Check the list of RPMs that you installed and ensure that there are no RHOSP 16.2 or legacy 17.1 images, for example, rhosp-director-images-x86_64-17.1:

    $ rpm -qa | egrep "rhosp-director-images-.*-16.2|rhosp-director-images-x86_64-17.1"

    If the list displays these images, remove them by running the following command:

    $ sudo dnf -y remove rhosp-director-images-*-16.2 rhosp-director-images-x86_64-17.1
  2. Remove any existing images from the images directory on the stack user’s home (/home/stack/images):

    $ rm -rf ~/images/*
  3. Extract the archives:

    $ cd ~/images
    $ for i in /usr/share/rhosp-director-images/overcloud-hardened-uefi-full.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-17.1.tar; do tar -xvf $i; done
    $ cd ~
  4. Import the images into director:

    (undercloud) [stack@director ~]$ openstack overcloud image upload --image-path /home/stack/images/ --update-existing

    The command completes the following tasks:

    • Converts the image format from QCOW to RAW.
    • Provides status updates about the upload of the image.
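
    Optionally, you can verify the result. In RHOSP 17.1, the undercloud no longer stores overcloud images in the Image service; the upload command places them on disk for the Bare Metal service to serve directly. A check similar to the following confirms that the new files are in place (the path shown is the usual default and might differ in your environment):

    $ ls -l /var/lib/ironic/images/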

15.3. Updating CPU pinning parameters

You must migrate the CPU pinning configuration from the NovaVcpuPinSet parameter to the following parameters after completing the upgrade to Red Hat OpenStack Platform 17.1:

NovaComputeCpuDedicatedSet
Sets the dedicated (pinned) CPUs.
NovaComputeCpuSharedSet
Sets the shared (unpinned) CPUs.

Procedure

  1. Log in to the undercloud as the stack user.
  2. If your Compute nodes support simultaneous multithreading (SMT) but you created instances with the hw:cpu_thread_policy=isolated policy, you must perform one of the following actions:

    • Create a new flavor that does not set the hw:cpu_thread_policy thread policy and resize the instances with that flavor:

      1. Source your overcloud authentication file:

        $ source ~/overcloudrc
      2. Create a flavor with the default thread policy, prefer:

        (overcloud) $ openstack flavor create <flavor>
        Note

        When you resize an instance, you must use a new flavor. You cannot reuse the current flavor. For more information, see Resizing an instance in the Creating and managing instances guide.

      3. Convert the instances to use the new flavor:

        (overcloud) $ openstack server resize --flavor <flavor> <server>
        (overcloud) $ openstack server resize confirm <server>
      4. Repeat this step for all pinned instances that use the hw:cpu_thread_policy=isolated policy.
    • Migrate instances from the Compute node and disable SMT on the Compute node:

      1. Source your overcloud authentication file:

        $ source ~/overcloudrc
      2. Disable the Compute node so that it does not accept new virtual machines:

        (overcloud) $ openstack compute service list
        (overcloud) $ openstack compute service set <hostname> nova-compute --disable
      3. Migrate all instances from the Compute node. For more information on instance migration, see Migrating virtual machine instances between Compute nodes.
      4. Reboot the Compute node and disable SMT in the BIOS of the Compute node.
      5. Boot the Compute node.
      6. Re-enable the Compute node:

        (overcloud) $ openstack compute service set <hostname> nova-compute --enable
  3. Source the stackrc file:

    $ source ~/stackrc
  4. Edit the environment file that contains the NovaVcpuPinSet parameter.
  5. Migrate the CPU pinning configuration from the NovaVcpuPinSet parameter to NovaComputeCpuDedicatedSet and NovaComputeCpuSharedSet:

    • Migrate the value of NovaVcpuPinSet to NovaComputeCpuDedicatedSet for hosts that were previously used for pinned instances.
    • Migrate the value of NovaVcpuPinSet to NovaComputeCpuSharedSet for hosts that were previously used for unpinned instances.
    • If there is no value set for NovaVcpuPinSet, assign all Compute node cores to either NovaComputeCpuDedicatedSet or NovaComputeCpuSharedSet, depending on the type of instances that you intend to host on the nodes. A node can also set both parameters; see the combined sketch after the following examples.

    For example, your previous environment file might contain the following pinning configuration:

    parameter_defaults:
      ...
      NovaVcpuPinSet: 1,2,3,5,6,7
      ...

    To migrate the configuration to a pinned configuration, set the NovaComputeCpuDedicatedSet parameter and unset the NovaVcpuPinSet parameter:

    parameter_defaults:
      ...
      NovaComputeCpuDedicatedSet: 1,2,3,5,6,7
      NovaVcpuPinSet: ""
      ...

    To migrate the configuration to an unpinned configuration, set the NovaComputeCpuSharedSet parameter and unset the NovaVcpuPinSet parameter:

    parameter_defaults:
      ...
      NovaComputeCpuSharedSet: 1,2,3,5,6,7
      NovaVcpuPinSet: ""
      ...
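
    If a node hosts both pinned and unpinned instances, you can set both parameters and split the cores between them. The following is a minimal sketch, assuming an illustrative split of the same cores; choose a split that matches your workloads:

    parameter_defaults:
      ...
      NovaComputeCpuDedicatedSet: 2,3,6,7
      NovaComputeCpuSharedSet: 1,5
      NovaVcpuPinSet: ""
      ...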
    Important

    Ensure that the configuration of either NovaComputeCpuDedicatedSet or NovaComputeCpuSharedSet matches the configuration that was defined in NovaVcpuPinSet. To change the configuration of either parameter, or to configure both NovaComputeCpuDedicatedSet and NovaComputeCpuSharedSet, ensure that the Compute nodes with the pinning configuration are not running any instances before you update the configuration.

  6. Save the file.
  7. Run the deployment command to update the overcloud with the new CPU pinning parameters:

    (undercloud) $ openstack overcloud deploy \
        --stack <stack_name> \
        --templates \
        ...
        -e /home/stack/templates/<compute_environment_file>.yaml
        ...

15.4. Updating the default machine type for hosts after an upgrade to RHOSP 17

The machine type of an instance is a virtual chipset that provides certain default devices, such as a PCIe graphics card or Ethernet controller. Cloud users can specify the machine type for their instances by using an image with the hw_machine_type metadata property that they require.

Cloud administrators can use the Compute parameter NovaHWMachineType to configure each Compute node architecture with a default machine type to apply to instances hosted on that architecture. If the hw_machine_type image property is not provided when launching the instance, the default machine type for the host architecture is applied to the instance. Red Hat OpenStack Platform (RHOSP) 17 is based on RHEL 9. The pc-i440fx QEMU machine type is deprecated in RHEL 9; therefore, the default machine type for x86_64 instances that run on RHEL 9 has changed from pc to q35. As a result, the default x86_64 machine type has also changed from pc in RHOSP 16 to q35 in RHOSP 17.

In RHOSP 16.2 and later, the Compute service records the machine type of an instance in the system metadata of the instance when it launches the instance. This means that you can change the NovaHWMachineType parameter during the lifetime of a RHOSP deployment without affecting the machine type of existing instances.

The Compute service records the machine type of instances that are not in a SHELVED_OFFLOADED state. Therefore, after an upgrade to RHOSP 17 you must manually record the machine type of instances that are in SHELVED_OFFLOADED state, and verify that all instances within the environment or specific cell have had a machine type recorded. After you have updated the system metadata for each instance with the machine types, you can update the NovaHWMachineType parameter to the RHOSP 17 default, q35, without affecting the machine type of existing instances.

Note

From RHOSP 17.0 onward, Q35 is the default machine type. The Q35 machine type uses PCIe ports. You can manage the number of PCIe port devices by configuring the heat parameter NovaLibvirtNumPciePorts. The number of devices that can attach to a PCIe port is fewer than on instances running on previous versions. If you want to use more devices, you must use the hw_disk_bus=scsi or hw_scsi_model=virtio-scsi image property. For more information, see Metadata properties for virtual hardware.
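
For example, a minimal heat environment entry that raises the PCIe port count might look like the following sketch; the value 24 is illustrative:

    parameter_defaults:
      NovaLibvirtNumPciePorts: 24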


Procedure

  1. Log in to the undercloud as the stack user.
  2. Source the stackrc file:

    $ source ~/stackrc
  3. Log in to a Controller node as the heat-admin user:

    (undercloud)$ metalsmith list
    $ ssh heat-admin@<controller_ip>

    Replace <controller_ip> with the IP address of the Controller node.

  4. Retrieve the list of instances that have no machine type set:

    [heat-admin@<controller_ip> ~]$ sudo podman exec -i -u root nova_api \
      nova-manage libvirt list_unset_machine_type
  5. Check the NovaHWMachineType parameter in the nova-hw-machine-type-upgrade.yaml file to find the default machine type for the instance host. The default value of the NovaHWMachineType parameter in RHOSP 16.2 is as follows:

    x86_64=pc-i440fx-rhel7.6.0,aarch64=virt-rhel7.6.0,ppc64=pseries-rhel7.6.0,ppc64le=pseries-rhel7.6.0

  6. Update the system metadata of each instance with the default instance machine type:

    [heat-admin@<controller_ip> ~]$ sudo podman exec -i -u root nova_api \
      nova-manage libvirt update_machine_type <instance_uuid> <machine_type>
    • Replace <instance_uuid> with the UUID of the instance.
    • Replace <machine_type> with the machine type to record for the instance.

      Warning

      If you set the machine type to something other than the machine type of the image on which the instance was booted, the existing instance might fail to boot.

  7. Confirm that the machine type is recorded for all instances:

    [heat-admin@<controller_ip> ~]$ sudo podman exec -i -u root nova_api \
      nova-status upgrade check

    This command returns a warning if an instance is found without a machine type. If you get this warning, repeat this procedure from step 4.

  8. Change the default value of NovaHWMachineType in a Compute environment file to x86_64=q35 and deploy the overcloud.
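
    For example, a minimal sketch of the environment file entry; include the file with -e in the same deployment command that you used for your last overcloud deployment:

    parameter_defaults:
      NovaHWMachineType: x86_64=q35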

Verification

  1. Create an instance that has the default machine type:

    (overcloud)$ openstack server create --flavor <flavor> \
      --image <image> --network <network> \
      --wait defaultMachineTypeInstance
    • Replace <flavor> with the name or ID of a flavor for the instance.
    • Replace <image> with the name or ID of an image that does not set hw_machine_type.
    • Replace <network> with the name or ID of the network to connect the instance to.
  2. Verify that the instance machine type is set to the default value:

    [heat-admin@<controller_ip> ~]$ sudo podman exec -i -u root nova_api \
      nova-manage libvirt get_machine_type <instance_uuid>

    Replace <instance_uuid> with the UUID of the instance.

  3. Hard reboot an instance with a machine type of pc-i440fx:

    (overcloud)$ openstack server reboot --hard <instance_uuid>

    Replace <instance_uuid> with the UUID of the instance.

  4. Verify that the instance machine type has not been changed:

    [heat-admin@<controller_ip> ~]$ sudo podman exec -i -u root nova_api \
      nova-manage libvirt get_machine_type <instance_uuid>

    Replace <instance_uuid> with the UUID of the instance.

15.5. Re-enabling fencing in the overcloud

Before you upgraded the overcloud, you disabled fencing as described in Disabling fencing in the overcloud. After you upgrade your environment, re-enable fencing to protect your data if a node fails.

Procedure

  1. Log in to the undercloud host as the stack user.
  2. Source the stackrc undercloud credentials file:

    $ source ~/stackrc
  3. Log in to a Controller node and run the Pacemaker command to re-enable fencing:

    $ ssh tripleo-admin@<controller_ip> "sudo pcs property set stonith-enabled=true"
    • Replace <controller_ip> with the IP address of a Controller node. You can find the IP addresses of your Controller nodes with the openstack server list command.
  4. If you use SBD fencing, reset the watchdog timer device interval to the value that it had before you disabled fencing:

    # pcs property set stonith-watchdog-timeout=<interval>
    • Replace <interval> with the original value of the watchdog timer device, for example, 10.
  5. In the fencing.yaml environment file, set the EnableFencing parameter to true.
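
    A minimal sketch of that setting in fencing.yaml:

    parameter_defaults:
      EnableFencing: true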

15.6. Compressing Red Hat OpenStack Platform dashboard files

After the Red Hat OpenStack Platform (RHOSP) upgrade, if your RHOSP dashboard (horizon) reports errors similar to the following example, you must compress the static files manually because static file compression does not run automatically during the upgrade. Repeat this procedure on every horizon container that you upgraded.

compressor.exceptions.OfflineGenerationError: You have offline compression enabled but key "dbf52fe9eafa4b50d57c151a16962bcb02dfc37de3ae4fde450231af213e84a9" is missing from offline manifest. You may need to run "python manage.py compress". Here is the original content:

Procedure

  1. Enter the shell for the horizon container:

    $ podman exec -it horizon /bin/bash
  2. Navigate to the directory that contains the files to compress:

    $ cd /usr/bin/
  3. Run the compression:

    $ python3 manage.py compress
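
Because you must repeat the compression in every upgraded horizon container, you can script the steps across your Controller nodes. The following sketch assumes illustrative host names and the tripleo-admin user; adjust both to match your environment:

    $ for ctrl in controller-0 controller-1 controller-2; do
        ssh tripleo-admin@"$ctrl" \
          'sudo podman exec -w /usr/bin horizon python3 manage.py compress'
      done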