Chapter 25. Performing post-upgrade actions
After you have completed the overcloud upgrade, you must perform some post-upgrade configuration to ensure that your environment is fully supported and ready for future operations.
25.1. Removing unnecessary packages and directories from the undercloud
After the Leapp upgrade, remove the unnecessary packages and directories that remain on the undercloud.
Procedure
- Remove the unnecessary packages:
  $ sudo dnf -y remove --exclude=python-pycadf-common python2*
- Remove the content from the /httpboot and /tftpboot directories, which contain old images used in Red Hat OpenStack Platform 13:
  $ sudo rm -rf /httpboot /tftpboot
25.2. Validating the post-upgrade functionality
Run the post-upgrade validation group to check the post-upgrade functionality.
Procedure
- Source the stackrc file:
  $ source ~/stackrc
- Run the openstack tripleo validator run command with the --group post-upgrade option:
  $ openstack tripleo validator run --group post-upgrade
- Review the results of the validation report. To view detailed output from a specific validation, run the openstack tripleo validator show run --full command against the UUID of the specific validation from the report:
  $ openstack tripleo validator show run --full <UUID>
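If you no longer have the report output and need the UUID of an earlier validation run, the validation history subcommand lists previous runs. This is an optional step, and the exact output columns depend on your tripleoclient version:
  $ openstack tripleo validator show history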
A FAILED validation does not prevent you from deploying or running Red Hat OpenStack Platform. However, a FAILED validation can indicate a potential issue with a production environment.
25.3. Upgrading the overcloud images
You must replace your current overcloud images with new versions. The new images ensure that the director can introspect and provision your nodes using the latest version of OpenStack Platform software.
Prerequisites
- You have upgraded the undercloud to the latest version.
Procedure
- Log in to the undercloud as the stack user.
- Source the stackrc file:
  $ source ~/stackrc
- Install the packages containing the overcloud QCOW2 archives:
  $ sudo dnf install rhosp-director-images rhosp-director-images-ipa
- Remove any existing images from the images directory in the stack user's home directory (/home/stack/images):
  $ rm -rf ~/images/*
- Extract the archives:
  $ cd ~/images
  $ for i in /usr/share/rhosp-director-images/overcloud-full-latest-16.1.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-16.1.tar; do tar -xvf $i; done
  $ cd ~
- Import the latest images into the director:
  $ openstack overcloud image upload --update-existing --image-path /home/stack/images/
- Configure your nodes to use the new images:
  $ openstack overcloud node configure $(openstack baremetal node list -c UUID -f value)
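As an optional verification, you can confirm that the refreshed images are registered in the Image service and that a node now references the updated deploy kernel and ramdisk. The field names shown here follow the default Bare Metal service layout and might differ in a customized environment:
  # Optional check: list the uploaded images and inspect one node's deploy image references.
  $ openstack image list
  $ openstack baremetal node show <node_uuid> --fields driver_info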
When you deploy overcloud nodes, ensure that the overcloud image version corresponds to the respective heat template version. For example, use the OpenStack Platform 16.1 images only with the OpenStack Platform 16.1 heat templates.
The new overcloud-full image replaces the old overcloud-full image. If you made changes to the old image, you must repeat the changes in the new image, especially if you want to deploy new nodes in the future.
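If your previous image carried customizations, one common way to reapply simple changes to the new image is virt-customize from libguestfs-tools; run the openstack overcloud image upload --update-existing command again afterwards so that the customized image replaces the uploaded one. This is only an illustrative sketch, not the official image customization procedure, and the command shown is a placeholder for your own changes:
  $ sudo dnf install -y libguestfs-tools
  # Placeholder customization: replace with the changes you applied to the old image.
  $ virt-customize -a ~/images/overcloud-full.qcow2 \
      --run-command 'echo "custom change" >> /etc/motd'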
25.4. Updating CPU pinning parameters
Red Hat OpenStack Platform 16.1 uses new parameters for CPU pinning:

NovaComputeCpuDedicatedSet - Sets the dedicated (pinned) CPUs.
NovaComputeCpuSharedSet - Sets the shared (unpinned) CPUs.

You must migrate the CPU pinning configuration from the NovaVcpuPinSet parameter to the NovaComputeCpuDedicatedSet and NovaComputeCpuSharedSet parameters after completing the upgrade to Red Hat OpenStack Platform 16.1.
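For orientation, a Compute environment file that sets both new parameters on the same hosts might look like the following sketch. The CPU ranges are purely illustrative assumptions and must be replaced with values that match your own host topology and isolation scheme:
  parameter_defaults:
    # Illustrative values only: keep a few host CPUs for unpinned workloads
    # and dedicate the remaining CPUs to pinned instances.
    NovaComputeCpuSharedSet: "0-3"
    NovaComputeCpuDedicatedSet: "4-23"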
Procedure
- Log in to the undercloud as the stack user.
- If your Compute nodes support simultaneous multithreading (SMT) but you created instances with the hw:cpu_thread_policy=isolate policy, you must perform one of the following options:
  - Unset the hw:cpu_thread_policy thread policy and resize the instances:
    - Source your overcloud authentication file:
      $ source ~/overcloudrc
    - Unset the hw:cpu_thread_policy property of the flavor:
      (overcloud) $ openstack flavor unset --property hw:cpu_thread_policy <flavor>
      Note:
      - Unsetting the hw:cpu_thread_policy attribute sets the policy to the default prefer policy, which sets the instance to use an SMT-enabled Compute node if available. You can also set the hw:cpu_thread_policy attribute to require, which sets a hard requirement for an SMT-enabled Compute node.
      - If the Compute node does not have an SMT architecture or enough CPU cores with available thread siblings, scheduling fails. To prevent this, set hw:cpu_thread_policy to prefer instead of require. The default prefer policy ensures that thread siblings are used when available.
      - If you use hw:cpu_thread_policy=isolate, you must have SMT disabled or use a platform that does not support SMT.
    - Convert the instances to use the new thread policy:
      (overcloud) $ openstack server resize --flavor <flavor> <server>
      (overcloud) $ openstack server resize confirm <server>
      Repeat this step for all pinned instances that use the hw:cpu_thread_policy=isolate policy.
  - Migrate instances from the Compute node and disable SMT on the Compute node:
    - Source your overcloud authentication file:
      $ source ~/overcloudrc
    - Disable the Compute node from accepting new virtual machines:
      (overcloud) $ openstack compute service list
      (overcloud) $ openstack compute service set <hostname> nova-compute --disable
    - Migrate all instances from the Compute node. For more information on instance migration, see Migrating virtual machine instances between Compute nodes.
    - Reboot the Compute node and disable SMT in the BIOS of the Compute node.
    - Boot the Compute node.
    - Re-enable the Compute node:
      (overcloud) $ openstack compute service set <hostname> nova-compute --enable
- Source the stackrc file:
  $ source ~/stackrc
- Edit the environment file that contains the NovaVcpuPinSet parameter.
- Migrate the CPU pinning configuration from the NovaVcpuPinSet parameter to NovaComputeCpuDedicatedSet and NovaComputeCpuSharedSet:
  - Migrate the value of NovaVcpuPinSet to NovaComputeCpuDedicatedSet for hosts that were previously used for pinned instances.
  - Migrate the value of NovaVcpuPinSet to NovaComputeCpuSharedSet for hosts that were previously used for unpinned instances.
  - If there is no value set for NovaVcpuPinSet, assign all Compute node cores to either NovaComputeCpuDedicatedSet or NovaComputeCpuSharedSet, depending on the type of instances that you intend to host on the nodes.
  For example, your previous environment file might contain the following pinning configuration:
    parameter_defaults:
      ...
      NovaVcpuPinSet: 1,2,3,5,6,7
      ...
  To migrate the configuration to a pinned configuration, set the NovaComputeCpuDedicatedSet parameter and unset the NovaVcpuPinSet parameter:
    parameter_defaults:
      ...
      NovaComputeCpuDedicatedSet: 1,2,3,5,6,7
      NovaVcpuPinSet: ""
      ...
  To migrate the configuration to an unpinned configuration, set the NovaComputeCpuSharedSet parameter and unset the NovaVcpuPinSet parameter:
    parameter_defaults:
      ...
      NovaComputeCpuSharedSet: 1,2,3,5,6,7
      NovaVcpuPinSet: ""
      ...
  Important: Ensure that the configuration of either NovaComputeCpuDedicatedSet or NovaComputeCpuSharedSet matches the configuration that was defined in NovaVcpuPinSet. To change the configuration for either of these parameters, or to configure both NovaComputeCpuDedicatedSet and NovaComputeCpuSharedSet, ensure that the Compute nodes with the pinning configuration are not running any instances before you update the configuration.
- Save the file.
- Run the deployment command to update the overcloud with the new CPU pinning parameters:
  (undercloud) $ openstack overcloud deploy \
    --stack <stack_name> \
    --templates \
    ...
    -e /home/stack/templates/<compute_environment_file>.yaml
    ...
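After the stack update completes, you can optionally confirm that the new options reached the Compute service configuration. The following check is a sketch that assumes the default containerized layout in Red Hat OpenStack Platform 16.1, where the rendered Compute configuration is stored under /var/lib/config-data/puppet-generated on each Compute host; adjust the path if your deployment differs:
  # Optional check on a Compute node: look for the migrated CPU pinning options.
  $ sudo grep -E 'cpu_(dedicated|shared)_set' \
      /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf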
Additional resources
25.5. Migrating users to the member role
In Red Hat OpenStack Platform 13, the default member role is called _member_.
In Red Hat OpenStack Platform 16.1, the default member role is called member.
When you complete the upgrade from Red Hat OpenStack Platform 13 to Red Hat OpenStack Platform 16.1, users that you assigned to the _member_ role still have that role. You can migrate all of the users to the member role by completing the following steps.
Prerequisites
- You have upgraded the overcloud to the latest version.
Procedure
- List all of the users on your cloud that have the _member_ role:
  $ openstack role assignment list --names --role _member_ --sort-column project
- For each user, remove the _member_ role, and apply the member role:
  $ openstack role remove --user <user> --project <project> _member_
  $ openstack role add --user <user> --project <project> member