Chapter 25. Performing post-upgrade actions
After you have completed the overcloud upgrade, you must perform some post-upgrade configuration to ensure that your environment is fully supported and ready for future operations.
25.1. Removing unnecessary packages and directories from the undercloud
After the Leapp upgrade, remove the unnecessary packages and directories that remain on the undercloud.
Procedure
1. Remove the unnecessary packages:

   $ sudo dnf -y remove --exclude=python-pycadf-common python2*

2. Remove the content of the /httpboot and /tftpboot directories, which includes old images used in Red Hat OpenStack Platform 13:

   $ sudo rm -rf /httpboot /tftpboot
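If you want to confirm the cleanup, the following sketch checks that the old image directories are gone; it is an optional sanity check, not part of the product tooling:

```shell
# Sanity check (a sketch): verify the old RHOSP 13 image directories
# were removed by the cleanup step above.
for dir in /httpboot /tftpboot; do
  if [ -e "$dir" ]; then
    echo "WARNING: $dir still exists"
  else
    echo "OK: $dir removed"
  fi
done
```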
25.2. Validating the post-upgrade functionality
Run the post-upgrade validation group to check the post-upgrade functionality.
Procedure
1. Source the stackrc file:

   $ source ~/stackrc

2. Run the openstack tripleo validator run command with the --group post-upgrade option:

   $ openstack tripleo validator run --group post-upgrade

3. Review the results of the validation report. To view detailed output from a specific validation, run the openstack tripleo validator show run --full command against the UUID of the specific validation from the report:

   $ openstack tripleo validator show run --full <UUID>
A FAILED validation does not prevent you from deploying or running Red Hat OpenStack Platform. However, a FAILED validation can indicate a potential issue with a production environment.
25.3. Upgrading the overcloud images
You must replace your current overcloud images with new versions. The new images ensure that the director can introspect and provision your nodes using the latest version of OpenStack Platform software.
Prerequisites
- You have upgraded the undercloud to the latest version.
Procedure
1. Log in to the undercloud as the stack user.
2. Source the stackrc file:

   $ source ~/stackrc

3. Install the packages containing the overcloud QCOW2 archives:

   $ sudo dnf install rhosp-director-images rhosp-director-images-ipa

4. Remove any existing images from the images directory in the stack user's home directory (/home/stack/images):

   $ rm -rf ~/images/*

5. Extract the archives:

   $ cd ~/images
   $ for i in /usr/share/rhosp-director-images/overcloud-full-latest-16.1.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-16.1.tar; do tar -xvf $i; done
   $ cd ~

6. Import the latest images into the director:

   $ openstack overcloud image upload --update-existing --image-path /home/stack/images/

7. Configure your nodes to use the new images:

   $ openstack overcloud node configure $(openstack baremetal node list -c UUID -f value)
When you deploy overcloud nodes, ensure that the overcloud image version corresponds to the respective heat template version. For example, use the OpenStack Platform 16.1 images only with the OpenStack Platform 16.1 heat templates.
The new overcloud-full image replaces the old overcloud-full image. If you made changes to the old image, you must repeat the changes in the new image, especially if you want to deploy new nodes in the future.
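Before you run the image upload step, you can optionally confirm that the archives extracted as expected. The following is a sketch; the file names assume the default contents of the 16.1 archives, and the IMAGES_DIR variable is illustrative:

```shell
# Optional check (a sketch): confirm the expected image files are present
# before uploading. File names assume the default 16.1 archive contents.
IMAGES_DIR="${IMAGES_DIR:-$HOME/images}"
for f in overcloud-full.qcow2 ironic-python-agent.kernel ironic-python-agent.initramfs; do
  if [ -f "$IMAGES_DIR/$f" ]; then
    echo "found: $f"
  else
    echo "missing: $f"
  fi
done
```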
25.4. Updating CPU pinning parameters
Red Hat OpenStack Platform 16.1 uses new parameters for CPU pinning:
- NovaComputeCpuDedicatedSet - Sets the dedicated (pinned) CPUs.
- NovaComputeCpuSharedSet - Sets the shared (unpinned) CPUs.
You must migrate the CPU pinning configuration from the NovaVcpuPinSet parameter to the NovaComputeCpuDedicatedSet and NovaComputeCpuSharedSet parameters after completing the upgrade to Red Hat OpenStack Platform 16.1.
Procedure
1. Log in to the undercloud as the stack user.
2. If your Compute nodes support simultaneous multithreading (SMT) but you created instances with the hw:cpu_thread_policy=isolate policy, perform one of the following options:

   Option 1: Unset the hw:cpu_thread_policy thread policy and resize the instances:

   a. Source your overcloud authentication file:

      $ source ~/overcloudrc

   b. Unset the hw:cpu_thread_policy property of the flavor:

      (overcloud) $ openstack flavor unset --property hw:cpu_thread_policy <flavor>

      Note:
      - Unsetting the hw:cpu_thread_policy attribute sets the policy to the default prefer policy, which sets the instance to use an SMT-enabled Compute node if available. You can also set the hw:cpu_thread_policy attribute to require, which sets a hard requirement for an SMT-enabled Compute node.
      - If the Compute node does not have an SMT architecture or enough CPU cores with available thread siblings, scheduling fails. To prevent this, set hw:cpu_thread_policy to prefer instead of require. The default prefer policy ensures that thread siblings are used when available.
      - If you use hw:cpu_thread_policy=isolate, you must have SMT disabled or use a platform that does not support SMT.

   c. Convert the instances to use the new thread policy:

      (overcloud) $ openstack server resize --flavor <flavor> <server>
      (overcloud) $ openstack server resize confirm <server>

      Repeat this step for all pinned instances that use the hw:cpu_thread_policy=isolate policy.

   Option 2: Migrate instances from the Compute node and disable SMT on the Compute node:

   a. Source your overcloud authentication file:

      $ source ~/overcloudrc

   b. Disable the Compute node from accepting new virtual machines:

      (overcloud) $ openstack compute service list
      (overcloud) $ openstack compute service set <hostname> nova-compute --disable

   c. Migrate all instances from the Compute node. For more information about instance migration, see Migrating virtual machine instances between Compute nodes.
   d. Reboot the Compute node and disable SMT in the BIOS of the Compute node.
   e. Boot the Compute node.
   f. Re-enable the Compute node:

      (overcloud) $ openstack compute service set <hostname> nova-compute --enable
3. Source the stackrc file:

   $ source ~/stackrc

4. Edit the environment file that contains the NovaVcpuPinSet parameter.
5. Migrate the CPU pinning configuration from the NovaVcpuPinSet parameter to the NovaComputeCpuDedicatedSet and NovaComputeCpuSharedSet parameters:

   - Migrate the value of NovaVcpuPinSet to NovaComputeCpuDedicatedSet for hosts that were previously used for pinned instances.
   - Migrate the value of NovaVcpuPinSet to NovaComputeCpuSharedSet for hosts that were previously used for unpinned instances.
   - If there is no value set for NovaVcpuPinSet, assign all Compute node cores to either NovaComputeCpuDedicatedSet or NovaComputeCpuSharedSet, depending on the type of instances that you intend to host on the nodes.
   For example, your previous environment file might contain the following pinning configuration:

   parameter_defaults:
     ...
     NovaVcpuPinSet: 1,2,3,5,6,7
     ...

   To migrate the configuration to a pinned configuration, set the NovaComputeCpuDedicatedSet parameter and unset the NovaVcpuPinSet parameter:

   parameter_defaults:
     ...
     NovaComputeCpuDedicatedSet: 1,2,3,5,6,7
     NovaVcpuPinSet: ""
     ...

   To migrate the configuration to an unpinned configuration, set the NovaComputeCpuSharedSet parameter and unset the NovaVcpuPinSet parameter:

   parameter_defaults:
     ...
     NovaComputeCpuSharedSet: 1,2,3,5,6,7
     NovaVcpuPinSet: ""
     ...

   Important: Ensure that the configuration of either NovaComputeCpuDedicatedSet or NovaComputeCpuSharedSet matches the configuration that was previously defined in NovaVcpuPinSet. To change the configuration of either of these parameters, or to configure both NovaComputeCpuDedicatedSet and NovaComputeCpuSharedSet, ensure that the Compute nodes with the pinning configuration are not running any instances before you update the configuration.

6. Save the file.
7. Run the deployment command to update the overcloud with the new CPU pinning parameters.
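When you split a NovaVcpuPinSet value across both new parameters, the dedicated and shared sets must not share any CPUs. The following is a local sanity-check sketch; the expand_cpuset helper and the example values are illustrative, not part of any product tooling:

```shell
# Sketch: expand Nova CPU set strings (e.g. "1,2,3,5-7") and check that
# the dedicated and shared sets do not overlap.
expand_cpuset() {
  local item
  for item in $(printf '%s' "$1" | tr ',' ' '); do
    case "$item" in
      *-*) seq "${item%%-*}" "${item#*-}" ;;  # expand a range like 5-7
      *)   echo "$item" ;;                    # single CPU number
    esac
  done
}

dedicated="1,2,3,5,6,7"   # example NovaComputeCpuDedicatedSet value
shared="0,4"              # example NovaComputeCpuSharedSet value

overlap=$(comm -12 <(expand_cpuset "$dedicated" | sort) \
                   <(expand_cpuset "$shared" | sort))
if [ -n "$overlap" ]; then
  echo "Overlapping CPUs: $overlap"
else
  echo "No overlap between dedicated and shared sets"
fi
```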
25.5. Migrating users to the member role
In Red Hat OpenStack Platform 13, the default member role is called _member_.
In Red Hat OpenStack Platform 16.1, the default member role is called member.
When you complete the upgrade from Red Hat OpenStack Platform 13 to Red Hat OpenStack Platform 16.1, users that you assigned to the _member_ role still have that role. You can migrate all of the users to the member role by using the following steps.
Prerequisites
- You have upgraded the overcloud to the latest version.
Procedure
1. List all of the users on your cloud that have the _member_ role:

   $ openstack role assignment list --names --role _member_ --sort-column project

2. For each user, remove the _member_ role and apply the member role:

   $ openstack role remove --user <user> --project <project> _member_
   $ openstack role add --user <user> --project <project> member
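On a cloud with many users, the per-user commands can be scripted. The following sketch generates the migration commands from a saved listing; the assignments.txt file name and its two-column "user project" format are assumptions for illustration, and emitting the commands as text lets you review them before running anything:

```shell
# Sketch: generate role-migration commands from a saved assignment listing.
# Assumes assignments.txt holds "user project" pairs, one per line
# (hypothetical format, e.g. derived from the role assignment list above).
printf 'alice projectA\nbob projectB\n' > assignments.txt  # sample data

while read -r user project; do
  echo "openstack role remove --user $user --project $project _member_"
  echo "openstack role add --user $user --project $project member"
done < assignments.txt
```

Review the generated commands, then run them against your overcloud once you have sourced your authentication file.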