Chapter 15. Performing post-upgrade actions
After you have completed the overcloud upgrade, you must perform some post-upgrade configuration to ensure that your environment is fully supported and ready for future operations.
If you run additional overcloud commands after the upgrade from Red Hat OpenStack Platform 16.2 to 17.1, you must consider the following:
- Overcloud commands that you run after the upgrade must include the YAML files that you created or updated during the upgrade process. For example, to provision overcloud nodes during a scale-up operation, use the /home/stack/tripleo-[stack]-baremetal-deployment.yaml file instead of the /home/stack/templates/overcloud-baremetal-deployed.yaml file, as shown in the sketch after this list.
- Include all the options that you passed to the last run of the openstack overcloud upgrade prepare command, except for the system_upgrade.yaml file and the upgrades-environment.yaml file.
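For example, a post-upgrade scale-up provisioning run might look like the following. This is a hedged sketch, not a command from this guide: the stack name overcloud and the file paths are assumptions, so substitute the values for your environment and append any other options from your previous runs.

$ source ~/stackrc
$ openstack overcloud node provision \
    --stack overcloud \
    --output /home/stack/templates/overcloud-baremetal-deployed.yaml \
    /home/stack/tripleo-overcloud-baremetal-deployment.yaml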
15.1. Performing post-upgrade tasks on the operating system
If you upgraded the operating system on your hosts to Red Hat Enterprise Linux 9.2, you must perform post-upgrade tasks such as removing any remaining Leapp packages. For more information on these tasks, see Performing post-upgrade tasks in Upgrading from RHEL 8 to RHEL 9.
15.2. Upgrading the overcloud images
You must replace your current overcloud images with new versions. The new images ensure that director can introspect and provision your nodes using the latest version of Red Hat OpenStack Platform (RHOSP) software.
Prerequisites
- You have upgraded the undercloud to the latest version.
You must use the new version of the overcloud images if you redeploy your overcloud. For more information on installing overcloud images, see Installing the overcloud images in Installing and managing Red Hat OpenStack Platform with director.
Procedure
1. Check the list of RPMs that you installed and ensure that there are no RHOSP 16.2 or legacy 17.1 images, for example, rhosp-director-images-x86_64-17.1:

   $ rpm -qa | egrep "rhosp-director-images-*-16.2|rhosp-director-images-x86_64-17.1"

   If the list displays these images, remove them by running the following command:

   $ sudo dnf -y remove rhosp-director-images-*-16.2 rhosp-director-images-x86_64-17.1

2. Remove any existing images from the images directory on the stack user's home (/home/stack/images):

   $ rm -rf ~/images/*

3. Extract the archives:

   $ cd ~/images
   $ for i in /usr/share/rhosp-director-images/overcloud-hardened-uefi-full.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-17.1.tar; do tar -xvf $i; done
   $ cd ~

4. Import the images into director:

   (undercloud) [stack@director images]$ openstack overcloud image upload --image-path /home/stack/images/ --update-existing

   The command completes the following tasks:
- Converts the image format from QCOW to RAW.
- Provides status updates about the upload of the image.
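To confirm that director now serves the new images, you can list them from the undercloud. This is a quick check rather than part of the official procedure; the exact image names vary by release:

$ source ~/stackrc
$ openstack image list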
15.3. Updating CPU pinning parameters
You must migrate the CPU pinning configuration from the NovaVcpuPinSet parameter to the following parameters after completing the upgrade to Red Hat OpenStack Platform 17.1:
- NovaComputeCpuDedicatedSet - Sets the dedicated (pinned) CPUs.
- NovaComputeCpuSharedSet - Sets the shared (unpinned) CPUs.
Procedure
1. Log in to the undercloud as the stack user.
2. If your Compute nodes support simultaneous multithreading (SMT) but you created instances with the hw:cpu_thread_policy=isolated policy, you must perform one of the following options:

   - Create a new flavor that does not set the hw:cpu_thread_policy thread policy and resize the instances with that flavor:

     1. Source your overcloud authentication file:

        $ source ~/overcloudrc

     2. Create a flavor with the default thread policy, prefer:

        (overcloud) $ openstack flavor create <flavor>

        Note: When you resize an instance, you must use a new flavor. You cannot reuse the current flavor. For more information, see Resizing an instance in the Creating and managing instances guide.

     3. Convert the instances to use the new flavor:

        (overcloud) $ openstack server resize --flavor <flavor> <server>
        (overcloud) $ openstack server resize confirm <server>

     4. Repeat this step for all pinned instances that use the hw:cpu_thread_policy=isolated policy.

   - Migrate instances from the Compute node and disable SMT on the Compute node:

     1. Source your overcloud authentication file:

        $ source ~/overcloudrc

     2. Disable the Compute node from accepting new virtual machines:

        (overcloud) $ openstack compute service list
        (overcloud) $ openstack compute service set <hostname> nova-compute --disable

     3. Migrate all instances from the Compute node. For more information on instance migration, see Migrating virtual machine instances between Compute nodes.
     4. Reboot the Compute node and disable SMT in the BIOS of the Compute node.
     5. Boot the Compute node.
     6. Re-enable the Compute node:

        (overcloud) $ openstack compute service set <hostname> nova-compute --enable
3. Source the stackrc file:

   $ source ~/stackrc

4. Edit the environment file that contains the NovaVcpuPinSet parameter.
5. Migrate the CPU pinning configuration from the NovaVcpuPinSet parameter to NovaComputeCpuDedicatedSet and NovaComputeCpuSharedSet:

   - Migrate the value of NovaVcpuPinSet to NovaComputeCpuDedicatedSet for hosts that were previously used for pinned instances.
   - Migrate the value of NovaVcpuPinSet to NovaComputeCpuSharedSet for hosts that were previously used for unpinned instances.
   - If there is no value set for NovaVcpuPinSet, assign all Compute node cores to either NovaComputeCpuDedicatedSet or NovaComputeCpuSharedSet, depending on the type of instances that you intend to host on the nodes.

   For example, your previous environment file might contain the following pinning configuration:

   parameter_defaults:
     ...
     NovaVcpuPinSet: 1,2,3,5,6,7
     ...

   To migrate the configuration to a pinned configuration, set the NovaComputeCpuDedicatedSet parameter and unset the NovaVcpuPinSet parameter:

   parameter_defaults:
     ...
     NovaComputeCpuDedicatedSet: 1,2,3,5,6,7
     NovaVcpuPinSet: ""
     ...

   To migrate the configuration to an unpinned configuration, set the NovaComputeCpuSharedSet parameter and unset the NovaVcpuPinSet parameter:

   parameter_defaults:
     ...
     NovaComputeCpuSharedSet: 1,2,3,5,6,7
     NovaVcpuPinSet: ""
     ...

   Important: Ensure that the configuration of either NovaComputeCpuDedicatedSet or NovaComputeCpuSharedSet matches the configuration that was defined in NovaVcpuPinSet. To change the configuration for either of these parameters, or to configure both NovaComputeCpuDedicatedSet and NovaComputeCpuSharedSet, ensure that the Compute nodes with the pinning configuration are not running any instances before you update the configuration.

6. Save the file.
7. Run the deployment command to update the overcloud with the new CPU pinning parameters.
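The exact deployment command depends on your environment and is not reproduced in this guide. As a hedged sketch, assuming you added the pinning parameters to a hypothetical /home/stack/templates/cpu_pinning.yaml file, the update might look like the following; include all other environment files and options from your previous deployment:

$ source ~/stackrc
$ openstack overcloud deploy --templates \
    -e /home/stack/templates/cpu_pinning.yaml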
15.4. Updating the default machine type for hosts after an upgrade to RHOSP 17
The machine type of an instance is a virtual chipset that provides certain default devices, such as a PCIe graphics card or Ethernet controller. Cloud users can specify the machine type for their instances by using an image with the hw_machine_type metadata property that they require.
Cloud administrators can use the Compute parameter NovaHWMachineType to configure each Compute node architecture with a default machine type to apply to instances hosted on that architecture. If the hw_machine_type image property is not provided when launching the instance, the default machine type for the host architecture is applied to the instance. Red Hat OpenStack Platform (RHOSP) 17 is based on RHEL 9. The pc-i440fx QEMU machine type is deprecated in RHEL 9, therefore the default machine type for x86_64 instances that run on RHEL 9 has changed from pc to q35. Based on this change in RHEL 9, the default value for machine type x86_64 has also changed from pc in RHOSP 16 to q35 in RHOSP 17.
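For example, a cloud user can request a specific machine type by setting the property on an image. This is an illustration; the image name my-rhel9-image is hypothetical:

$ openstack image set --property hw_machine_type=q35 my-rhel9-image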
In RHOSP 16.2 and later, the Compute service records the machine type within the system metadata of an instance when it launches the instance. This means that you can change the NovaHWMachineType value during the lifetime of a RHOSP deployment without affecting the machine type of existing instances.
The Compute service records the machine type of instances that are not in a SHELVED_OFFLOADED state. Therefore, after an upgrade to RHOSP 17 you must manually record the machine type of instances that are in SHELVED_OFFLOADED state, and verify that all instances within the environment or specific cell have had a machine type recorded. After you have updated the system metadata for each instance with the machine types, you can update the NovaHWMachineType parameter to the RHOSP 17 default, q35, without affecting the machine type of existing instances.
From RHOSP 17.0 onwards, Q35 is the default machine type. The Q35 machine type uses PCIe ports. You can manage the number of PCIe port devices by configuring the heat parameter NovaLibvirtNumPciePorts. Fewer devices can attach through PCIe ports than on instances that run on previous versions. If you want to use more devices, you must use the hw_disk_bus=scsi or hw_scsi_model=virtio-scsi image property, as illustrated after this paragraph. For more information, see Metadata properties for virtual hardware.
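For example, to make instances booted from an image attach their disks through virtio-scsi instead of consuming PCIe ports, you can set the image properties as follows. A hedged illustration; the image name my-rhel9-image is hypothetical:

$ openstack image set \
    --property hw_disk_bus=scsi \
    --property hw_scsi_model=virtio-scsi \
    my-rhel9-image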
Prerequisites
- Upgrade all Compute nodes to RHEL 9.2. For more information about upgrading Compute nodes, see Upgrading all Compute nodes to RHEL 9.2.
Procedure
1. Log in to the undercloud as the stack user.
2. Source the stackrc file:

   $ source ~/stackrc

3. Log in to a Controller node as the heat-admin user:

   (undercloud)$ metalsmith list
   $ ssh heat-admin@<controller_ip>

   Replace <controller_ip> with the IP address of the Controller node.

4. Retrieve the list of instances that have no machine type set:

   [heat-admin@<controller_ip> ~]$ sudo podman exec -i -u root nova_api \
     nova-manage libvirt list_unset_machine_type

5. Check the NovaHWMachineType parameter in the nova-hw-machine-type-upgrade.yaml file for the default machine type for the instance host. The default value for the NovaHWMachineType parameter in RHOSP 16.2 is as follows:

   x86_64=pc-i440fx-rhel7.6.0,aarch64=virt-rhel7.6.0,ppc64=pseries-rhel7.6.0,ppc64le=pseries-rhel7.6.0

6. Update the system metadata of each instance with the default instance machine type. If many instances need updating, you can script this step; see the hedged loop after this procedure.

   [heat-admin@<controller_ip> ~]$ sudo podman exec -i -u root nova_api \
     nova-manage libvirt update_machine_type <instance_uuid> <machine_type>

   - Replace <instance_uuid> with the UUID of the instance.
   - Replace <machine_type> with the machine type to record for the instance.

   Warning: If you set the machine type to something other than the machine type of the image on which the instance was booted, the existing instance might fail to boot.

7. Confirm that the machine type is recorded for all instances:

   [heat-admin@<controller_ip> ~]$ sudo podman exec -i -u root nova_api \
     nova-status upgrade check

   This command returns a warning if an instance is found without a machine type. If you get this warning, repeat this procedure from step 4.

8. Change the default value of NovaHWMachineType in a Compute environment file to x86_64=q35 and deploy the overcloud.
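The following loop is a hedged sketch of scripting step 6. It assumes that you copied the instance UUIDs reported by list_unset_machine_type, one per line, into a hypothetical /tmp/unset_uuids.txt file, and that the RHOSP 16.2 x86_64 default machine type applies to all of them; verify both assumptions before you run it:

[heat-admin@<controller_ip> ~]$ while read -r uuid; do
    sudo podman exec -i -u root nova_api \
      nova-manage libvirt update_machine_type "$uuid" pc-i440fx-rhel7.6.0
  done < /tmp/unset_uuids.txt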
Verification
1. Create an instance that has the default machine type:

   (overcloud)$ openstack server create --flavor <flavor> \
     --image <image> --network <network> \
     --wait defaultMachineTypeInstance

   - Replace <flavor> with the name or ID of a flavor for the instance.
   - Replace <image> with the name or ID of an image that does not set hw_machine_type.
   - Replace <network> with the name or ID of the network to connect the instance to.

2. Verify that the instance machine type is set to the default value:

   [heat-admin@<controller_ip> ~]$ sudo podman exec -i -u root nova_api \
     nova-manage libvirt get_machine_type <instance_uuid>

   Replace <instance_uuid> with the UUID of the instance.

3. Hard reboot an instance with a machine type of x86_64=pc-i440fx:

   (overcloud)$ openstack server reboot --hard <instance_uuid>

   Replace <instance_uuid> with the UUID of the instance.

4. Verify that the instance machine type has not been changed:

   [heat-admin@<controller_ip> ~]$ sudo podman exec -i -u root nova_api \
     nova-manage libvirt get_machine_type <instance_uuid>

   Replace <instance_uuid> with the UUID of the instance.
15.5. Re-enabling fencing in the overcloud
Before you upgraded the overcloud, you disabled fencing in Disabling fencing in the overcloud. After you upgrade your environment, re-enable fencing to protect your data if a node fails.
Procedure
1. Log in to the undercloud host as the stack user.
2. Source the stackrc undercloud credentials file:

   $ source ~/stackrc

3. Log in to a Controller node and run the Pacemaker command to re-enable fencing:

   $ ssh tripleo-admin@<controller_ip> "sudo pcs property set stonith-enabled=true"

   Replace <controller_ip> with the IP address of a Controller node. You can find the IP addresses of your Controller nodes with the openstack server list command.

4. If you use SBD fencing, reset the watchdog timer device interval to the value that it had before you disabled fencing:

   # pcs property set stonith-watchdog-timeout=<interval>

   Replace <interval> with the original value of the watchdog timer device, for example, 10.

5. In the fencing.yaml environment file, set the EnableFencing parameter to true.
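To confirm that the change took effect, you can query the cluster property from a Controller node. A hedged check: on RHEL 9 controllers the pcs property subcommand is config, while older pcs versions use show instead:

$ ssh tripleo-admin@<controller_ip> "sudo pcs property config stonith-enabled"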
15.6. Compressing Red Hat OpenStack Platform dashboard files
After the Red Hat OpenStack Platform (RHOSP) upgrade, if your RHOSP dashboard (horizon) has errors that are similar to the following example, you must compress your files manually. Static file compression does not run automatically. You must repeat this procedure on every horizon container that you upgraded.
compressor.exceptions.OfflineGenerationError: You have offline compression enabled but key "dbf52fe9eafa4b50d57c151a16962bcb02dfc37de3ae4fde450231af213e84a9" is missing from offline manifest. You may need to run "python manage.py compress". Here is the original content:
Procedure
1. Enter the shell for the horizon container:

   $ podman exec -it horizon /bin/bash

2. Navigate to the directory that contains the files to compress:

   $ cd /usr/bin/

3. Run the compression:

   $ python3 manage.py compress
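Alternatively, you can run the same compression without entering an interactive shell. A hedged one-liner that assumes podman's -w (workdir) option and the same /usr/bin location as the procedure above:

$ podman exec -w /usr/bin horizon python3 manage.py compress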