Upgrading Red Hat OpenStack Platform
Upgrading a Red Hat OpenStack Platform environment
Abstract
Chapter 1. Introduction
This document provides a workflow to help upgrade your Red Hat OpenStack Platform environment to the latest major version and keep it updated with minor releases of that version.
This guide provides an upgrade path through the following versions:
Old Overcloud Version | New Overcloud Version |
---|---|
Red Hat OpenStack Platform 13 | Red Hat OpenStack Platform 15 |
1.1. High level workflow
The following table provides an outline of the steps required for the upgrade process:
Step | Description |
---|---|
Preparing your environment | Perform a backup of the database and configuration of the undercloud and overcloud Controller nodes. Update to the latest minor release. Validate the environment. |
Preparing container images | Create an environment file containing the parameters to prepare container images for OpenStack Platform 15 services. |
Upgrading the undercloud | Upgrade the undercloud from OpenStack Platform 13 to OpenStack Platform 15. |
Preparing the overcloud | Perform relevant steps to transition your overcloud configuration files to OpenStack Platform 15. |
Upgrading your Controller nodes | Upgrade all Controller nodes simultaneously to OpenStack Platform 15. |
Upgrading your Compute nodes | Test the upgrade on selected Compute nodes. If the test succeeds, upgrade all Compute nodes. |
Upgrading your Ceph Storage nodes | Upgrade all Ceph Storage nodes. This includes an upgrade to a containerized version of Red Hat Ceph Storage 3. |
Finalizing the upgrade | Run the convergence command to refresh your overcloud stack. |
1.2. Major changes
The following is a high-level list of major changes that occur during the upgrade.
- The undercloud now uses containers to run services, and uses the same architecture that configures the overcloud.
- The director now uses a container preparation method to automatically obtain container images for the undercloud and overcloud.
- The overcloud now uses an Ansible-based Red Hat Subscription Management method, which replaces the previous rhel-registration method.
- Composable networks now define a list of routes.
- OpenStack Telemetry (ceilometer) has been removed completely from OpenStack Platform 15.
1.3. Before starting the upgrade
- Apply any firmware updates to your hardware before performing the upgrade.
During updates, if the Open vSwitch (OVS) major version changes (for example, 2.9 to 2.11), director renames any user-customized configuration files with an .rpmsave extension, and installs the default OVS configuration.
If you want to retain your earlier OVS customizations, you must manually reapply your modifications contained in the renamed files (for example, logrotate configurations in /etc/logrotate.d/openvswitch). This two-step update method avoids data plane interruptions that would be triggered by an automatic RPM package update.
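For example, after the update you can locate any renamed configuration files and compare them against the newly installed defaults with standard tools (illustrative commands):
$ sudo find /etc -name '*.rpmsave'
$ sudo diff /etc/logrotate.d/openvswitch.rpmsave /etc/logrotate.d/openvswitch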
Chapter 2. Preparing for an OpenStack Platform Upgrade
This process prepares your OpenStack Platform environment for the upgrade. It involves the following steps:
- Back up both the undercloud and overcloud
- Update the undercloud packages and run the upgrade command
- Reboot the undercloud in case a newer kernel or newer system packages are installed
- Update the overcloud using the overcloud upgrade command
- Reboot the overcloud nodes in case a newer kernel or newer system packages are installed
- Perform a validation check on both the undercloud and overcloud
These procedures ensure your OpenStack Platform environment is in the best possible state before proceeding with the upgrade.
2.1. Updating the current OpenStack Platform version
Before performing the upgrade to the next version of OpenStack Platform, it is recommended to perform a minor version update to your existing undercloud and overcloud. This means performing a minor version update for OpenStack Platform 13.
Follow the instructions in the Keeping Red Hat OpenStack Platform Updated guide for OpenStack Platform 13.
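The general shape of the OpenStack Platform 13 minor update flow is sketched below; consult that guide for the authoritative steps, environment files, and options for your deployment:
$ sudo yum update -y python-tripleoclient
$ openstack undercloud upgrade
$ openstack overcloud update prepare --templates -e <your environment files>
$ openstack overcloud update run --nodes Controller
$ openstack overcloud update run --nodes Compute
$ openstack overcloud update converge --templates -e <your environment files>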
2.2. Backing up a baremetal undercloud
A full undercloud backup includes the following databases and files:
- All MariaDB databases on the undercloud node
- MariaDB configuration file on the undercloud (so that you can accurately restore databases)
- The configuration data: /etc
- Log data: /var/log
- Image data: /var/lib/glance
- Certificate generation data if using SSL: /var/lib/certmonger
- Any container image data: /var/lib/docker and /var/lib/registry
- All swift data: /srv/node
- All data in the stack user home directory: /home/stack
Confirm that you have sufficient disk space available on the undercloud before performing the backup process. Expect the archive file to be at least 3.5 GB, if not larger.
Procedure
- Log into the undercloud as the root user.
- Back up the database:

  [root@director ~]# mysqldump --opt --all-databases > /root/undercloud-all-databases.sql

- Create a backup directory and change the user ownership of the directory to the stack user:

  [root@director ~]# mkdir /backup
  [root@director ~]# chown stack: /backup

  You will use this directory to store the archive containing the undercloud database and file system.
- Change to the backup directory:

  [root@director ~]# cd /backup

- Archive the database backup and the configuration files:

  [root@director ~]# tar --xattrs --ignore-failed-read -cf \
    undercloud-backup-`date +%F`.tar \
    /etc \
    /var/log \
    /var/lib/glance \
    /var/lib/certmonger \
    /srv/node \
    /root \
    /home/stack

  - The --ignore-failed-read option skips any directory that does not apply to your undercloud.
  - The --xattrs option includes extended attributes, which are required to store metadata for Object Storage (swift).

  This creates a file named undercloud-backup-<date>.tar, where <date> is the system date. Copy this tar file to a secure location.
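To confirm that the archive is readable before you move it, you can list its contents (illustrative check):
[root@director ~]# tar -tf undercloud-backup-<date>.tar | head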
2.3. Backing up containerized overcloud control plane services
The following procedure creates a backup of the containerized overcloud databases and configuration. A backup of the overcloud database and services gives you a snapshot of a working environment, which helps if you need to restore the overcloud to its original state after an operational failure.
This procedure includes only crucial control plane services. It does not include backups of Compute node workloads, data on Ceph Storage nodes, or any additional services.
Procedure
Perform the database backup:
- Log into a Controller node. You can access the overcloud from the undercloud:

  $ ssh heat-admin@192.0.2.100

- Change to the root user:

  $ sudo -i

- Install the mariadb client tools if they are not installed already:

  # dnf install mariadb

- Create a temporary directory to store the backups:

  # mkdir -p /var/tmp/mysql_backup/

- Obtain the database password and store it in the MYSQLDBPASS environment variable. The password is stored in the mysql::server::root_password variable within the /etc/puppet/hieradata/service_configs.json file. Use the following command to store the password:

  # MYSQLDBPASS=$(sudo hiera -c /etc/puppet/hiera.yaml mysql::server::root_password)

- Back up the database:

  # mysql -uroot -p$MYSQLDBPASS -s -N -e "select distinct table_schema from information_schema.tables where engine='innodb' and table_schema != 'mysql';" | xargs mysqldump -uroot -p$MYSQLDBPASS --single-transaction --databases > /var/tmp/mysql_backup/openstack_databases-`date +%F`-`date +%T`.sql

  This dumps a database backup called /var/tmp/mysql_backup/openstack_databases-<date>.sql, where <date> is the system date and time. Copy this database dump to a secure location.
- Back up all the users and permissions information:

  # mysql -uroot -p$MYSQLDBPASS -s -N -e "SELECT CONCAT('\"SHOW GRANTS FOR ''',user,'''@''',host,''';\"') FROM mysql.user where (length(user) > 0 and user NOT LIKE 'root')" | xargs -n1 mysql -uroot -p$MYSQLDBPASS -s -N -e | sed 's/$/;/' > /var/tmp/mysql_backup/openstack_databases_grants-`date +%F`-`date +%T`.sql

  This dumps a database backup called /var/tmp/mysql_backup/openstack_databases_grants-<date>.sql, where <date> is the system date and time. Copy this database dump to a secure location.
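Before copying the dumps off the node, you can confirm that both dump files exist and are non-empty (illustrative check):
# ls -lh /var/tmp/mysql_backup/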
Back up the Pacemaker configuration:
- Log into a Controller node.
- Run the following command to create an archive of the current Pacemaker configuration:

  # sudo pcs config backup pacemaker_controller_backup

- Copy the resulting archive (pacemaker_controller_backup.tar.bz2) to a secure location.
Back up the Redis cluster:
- Obtain the Redis endpoint from HAProxy:

  # REDISIP=$(sudo hiera -c /etc/puppet/hiera.yaml redis_vip)

- Obtain the master password for the Redis cluster:

  # REDISPASS=$(sudo hiera -c /etc/puppet/hiera.yaml redis::masterauth)

- Check connectivity to the Redis cluster:

  # redis-cli -a $REDISPASS -h $REDISIP ping

- Dump the Redis database:

  # redis-cli -a $REDISPASS -h $REDISIP bgsave

  This stores the database backup in the default /var/lib/redis/ directory. Copy this database dump to a secure location.
Back up the filesystem on each Controller node:
- Create a directory for the backup:

  # mkdir -p /var/tmp/filesystem_backup/

- Run the following tar command:

  # tar --ignore-failed-read --xattrs \
    -zcvf /var/tmp/filesystem_backup/fs_backup-`date '+%Y-%m-%d-%H-%M-%S'`.tar.gz \
    /var/lib/kolla \
    /var/lib/config-data \
    /var/log/containers \
    /var/log/openvswitch \
    /etc \
    /srv/node \
    /root \
    /home/heat-admin

  The --ignore-failed-read option ignores any missing directories, which is useful if certain services are not used or are separated on their own custom roles.
- Copy the resulting tar file to a secure location.
2.4. Validating the undercloud
The following is a set of steps to check the functionality of your undercloud.
Procedure
- Source the undercloud access details:

  $ source ~/stackrc

- Check for failed Systemd services:

  (undercloud) $ sudo systemctl list-units --state=failed 'tripleo_*'

- Check the undercloud free space:

  (undercloud) $ df -h

  Use the "Undercloud Requirements" as a basis to determine whether you have adequate free space.
- If you have NTP installed on the undercloud, check that clocks are synchronized:

  (undercloud) $ sudo chronyc -a 'burst 4/4'

- Check the undercloud network services:

  (undercloud) $ openstack network agent list

  All agents should be Alive and their state should be UP.
- Check the undercloud compute services:

  (undercloud) $ openstack compute service list

  All agents' status should be enabled and their state should be up.
Related Information
- The following solution article shows how to remove deleted stack entries in your OpenStack Orchestration (heat) database: https://access.redhat.com/solutions/2215131
2.5. Validating a containerized overcloud
The following is a set of steps to check the functionality of your containerized overcloud.
Procedure
- Source the undercloud access details:

  $ source ~/stackrc

- Check the status of your bare metal nodes:

  (undercloud) $ openstack baremetal node list

  All nodes should have a valid power state (on) and maintenance mode should be false.
- Check for failed Systemd services:

  (undercloud) $ for NODE in $(openstack server list -f value -c Networks | cut -d= -f2); do echo "=== $NODE ===" ; ssh heat-admin@$NODE "sudo systemctl list-units --state=failed 'tripleo_*' 'ceph*'" ; done

- Check for failed containerized services:

  (undercloud) $ for NODE in $(openstack server list -f value -c Networks | cut -d= -f2); do echo "=== $NODE ===" ; ssh heat-admin@$NODE "sudo podman ps -f 'exited=1' --all" ; done

- Check the HAProxy connection to all services. Obtain the Control Plane VIP address and authentication details for the haproxy.stats service:

  (undercloud) $ NODE=$(openstack server list --name controller-0 -f value -c Networks | cut -d= -f2); ssh heat-admin@$NODE sudo 'grep "listen haproxy.stats" -A 6 /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg'

  Use these details in the following cURL request:

  (undercloud) $ curl -s -u admin:<PASSWORD> "http://<IP ADDRESS>:1993/;csv" | egrep -vi "(frontend|backend)" | cut -d, -f 1,2,18,37,57 | column -s, -t

  Replace <PASSWORD> and <IP ADDRESS> with the actual details from the haproxy.stats service. The resulting list shows the OpenStack Platform services on each node and their connection status.

  Note: If the nodes run Redis services, only one node displays an ON status for that service. This is because Redis is an active-passive service, which runs on only one node at a time.
- Check overcloud database replication health:

  (undercloud) $ for NODE in $(openstack server list --name controller -f value -c Networks | cut -d= -f2); do echo "=== $NODE ===" ; ssh heat-admin@$NODE "sudo podman exec clustercheck clustercheck" ; done

- Check Pacemaker resource health:

  (undercloud) $ NODE=$(openstack server list --name controller-0 -f value -c Networks | cut -d= -f2); ssh heat-admin@$NODE "sudo pcs status"

  Look for:

  - All cluster nodes online.
  - No resources stopped on any cluster nodes.
  - No failed pacemaker actions.
- Check the disk space on each overcloud node:

  (undercloud) $ for NODE in $(openstack server list -f value -c Networks | cut -d= -f2); do echo "=== $NODE ===" ; ssh heat-admin@$NODE "sudo df -h --output=source,fstype,avail -x overlay -x tmpfs -x devtmpfs" ; done

- Check overcloud Ceph Storage cluster health. The following command runs the ceph tool on a Controller node to check the cluster:

  (undercloud) $ NODE=$(openstack server list --name controller-0 -f value -c Networks | cut -d= -f2); ssh heat-admin@$NODE "sudo ceph -s"

- Check Ceph Storage OSD for free space. The following command runs the ceph tool on a Controller node to check the free space:

  (undercloud) $ NODE=$(openstack server list --name controller-0 -f value -c Networks | cut -d= -f2); ssh heat-admin@$NODE "sudo ceph df"

- Check that clocks are synchronized on overcloud nodes:

  (undercloud) $ for NODE in $(openstack server list -f value -c Networks | cut -d= -f2); do echo "=== $NODE ===" ; ssh heat-admin@$NODE "sudo chronyc -a 'burst 4/4'" ; done

- Source the overcloud access details:

  (undercloud) $ source ~/overcloudrc

- Check the overcloud network services:

  (overcloud) $ openstack network agent list

  All agents should be Alive and their state should be UP.
- Check the overcloud compute services:

  (overcloud) $ openstack compute service list

  All agents' status should be enabled and their state should be up.
- Check the overcloud volume services:

  (overcloud) $ openstack volume service list

  All agents' status should be enabled and their state should be up.
Related Information
- Review the article "How can I verify my OpenStack environment is deployed with Red Hat recommended configurations?". This article provides some information on how to check your Red Hat OpenStack Platform environment and tune the configuration to Red Hat’s recommendations.
Chapter 3. Upgrading the Undercloud
This process upgrades the undercloud and its overcloud images to Red Hat OpenStack Platform 15.
3.1. Converting to next generation power management drivers
Red Hat OpenStack Platform now uses next generation drivers, also known as hardware types, that replace older drivers.
The following table compares the older drivers with their next generation hardware type equivalents:
Old Driver | New Hardware Type |
---|---|
pxe_ipmitool | ipmi |
pxe_drac | idrac |
pxe_ilo | ilo |
pxe_irmc | irmc |
pxe_ucs | cisco-ucs-managed |
VBMC (pxe_ipmitool) | ipmi |
fake_pxe | fake-hardware |
In OpenStack Platform 15, these older drivers have been removed and are no longer accessible. You must change to hardware types before upgrading to OpenStack Platform 15.
Procedure
- Check the current list of hardware types enabled:

  $ source ~/stackrc
  $ openstack baremetal driver list --type dynamic

- If you use a hardware type driver that is not enabled, enable the driver using the enabled_hardware_types parameter in the undercloud.conf file:

  enabled_hardware_types = ipmi,redfish,idrac

- Save the file and refresh the undercloud:

  $ openstack undercloud install

- Run the following commands, substituting the OLDDRIVER and NEWDRIVER variables for your power management type:

  $ source ~/stackrc
  $ OLDDRIVER="pxe_ipmitool"
  $ NEWDRIVER="ipmi"
  $ for NODE in $(openstack baremetal node list --driver $OLDDRIVER -c UUID -f value) ; do openstack baremetal node set $NODE --driver $NEWDRIVER; done
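To confirm that no nodes still use an old driver after the conversion, you can list each node's driver again (illustrative check):
$ openstack baremetal node list --fields uuid name driver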
3.2. Upgrading the director packages to OpenStack Platform 15
This procedure upgrades the director toolset and the core Heat template collection to the OpenStack Platform 15 release.
Procedure
- Log in to the director as the stack user.
- Disable the current OpenStack Platform repository:

  $ sudo subscription-manager repos --disable=rhel-7-server-openstack-13-rpms

- Enable the new OpenStack Platform repository:

  $ sudo subscription-manager repos --enable=openstack-15-for-rhel-8-x86_64-rpms

- Run dnf to upgrade the director's main packages:

  $ sudo dnf update -y python-tripleoclient* openstack-tripleo-common openstack-tripleo-heat-templates
3.3. Preparing container images
The overcloud configuration requires initial registry configuration to determine where to obtain images and how to store them. Complete the following steps to generate and customize an environment file for preparing your container images.
Procedure
- Log in to your undercloud host as the stack user.
- Generate the default container image preparation file:

  $ openstack tripleo container image prepare default \
    --local-push-destination \
    --output-env-file containers-prepare-parameter.yaml

  This command includes the following additional options:

  - --local-push-destination sets the registry on the undercloud as the location for container images. This means the director pulls the necessary images from the Red Hat Container Catalog and pushes them to the registry on the undercloud. The director uses this registry as the container image source. To pull directly from the Red Hat Container Catalog, omit this option.
  - --output-env-file is an environment file name. The contents of this file include the parameters for preparing your container images. In this case, the name of the file is containers-prepare-parameter.yaml.

  Note: You can also use the same containers-prepare-parameter.yaml file to define a container image source for both the undercloud and the overcloud.

- Edit the containers-prepare-parameter.yaml and make the modifications to suit your requirements.
3.4. Container image preparation parameters
The default file for preparing your containers (containers-prepare-parameter.yaml) contains the ContainerImagePrepare Heat parameter. This parameter defines a list of strategies for preparing a set of images:
parameter_defaults:
ContainerImagePrepare:
- (strategy one)
- (strategy two)
- (strategy three)
...
Each strategy accepts a set of sub-parameters that define which images to use and what to do with them. The following table contains information about the sub-parameters you can use with each ContainerImagePrepare strategy:
Parameter | Description |
---|---|
excludes | List of image name substrings to exclude from a strategy. |
includes | List of image name substrings to include in a strategy. At least one image name must match an existing image. All excludes are ignored if includes is specified. |
modify_append_tag | String to append to the tag for the destination image. For example, if you pull an image with the tag 15.0-89 and set modify_append_tag to -hotfix, the director tags the final image as 15.0-89-hotfix. |
modify_only_with_labels | A dictionary of image labels that filter the images to modify. If an image matches the labels defined, the director includes the image in the modification process. |
modify_role | String of ansible role names to run during upload but before pushing the image to the destination registry. |
modify_vars | Dictionary of variables to pass to modify_role. |
push_destination | The namespace of the registry to push images during the upload process. When you specify a namespace for this parameter, all image parameters use this namespace too. If set to true, images are pushed to the undercloud registry namespace. |
pull_source | The source registry from where to pull the original container images. |
set | A dictionary of key: value definitions that define where to obtain the initial images. |
tag_from_label | Defines the label pattern to tag the resulting images. Usually sets to {version}-{release}. |
The set parameter accepts a set of key: value definitions. The following table contains information about the keys:
Key | Description |
---|---|
ceph_image | The name of the Ceph Storage container image. |
ceph_namespace | The namespace of the Ceph Storage container image. |
ceph_tag | The tag of the Ceph Storage container image. |
name_prefix | A prefix for each OpenStack service image. |
name_suffix | A suffix for each OpenStack service image. |
namespace | The namespace for each OpenStack service image. |
neutron_driver | The driver to use to determine which OpenStack Networking (neutron) container to use. Use a null value to set to the standard neutron-server container. |
tag | The tag that the director uses to identify the images to pull from the source registry. You usually keep this key set to latest. |
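Putting these sub-parameters and keys together, a single strategy typically looks like the following sketch. The namespace, prefix, and tag values shown are illustrative; use the values that openstack tripleo container image prepare default generates for your release:

parameter_defaults:
  ContainerImagePrepare:
  - push_destination: true
    set:
      namespace: registry.redhat.io/rhosp15-rhel8
      name_prefix: openstack-
      name_suffix: ''
      tag: latest
    tag_from_label: '{version}-{release}'
    excludes:
    - nova-api

This sketch pulls every image from the source namespace except those whose names contain nova-api, pushes them to the undercloud registry, and tags them based on each image's version and release labels.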
The ContainerImageRegistryCredentials parameter maps a container registry to a username and password to authenticate to that registry.
If a container registry requires a username and password, you can use ContainerImageRegistryCredentials to include their values with the following syntax:
ContainerImagePrepare:
- push_destination: 192.168.24.1:8787
set:
namespace: registry.redhat.io/...
...
ContainerImageRegistryCredentials:
registry.redhat.io:
my_username: my_password
In the example, replace my_username and my_password with your authentication credentials. Instead of using your individual user credentials, Red Hat recommends creating a registry service account and using those credentials to access registry.redhat.io content. For more information, see "Red Hat Container Registry Authentication".
The ContainerImageRegistryLogin parameter controls the registry login on the systems being deployed. You must set this parameter to true if push_destination is set to false or not used.
ContainerImagePrepare:
- set:
namespace: registry.redhat.io/...
...
ContainerImageRegistryCredentials:
registry.redhat.io:
my_username: my_password
ContainerImageRegistryLogin: true
3.5. Checking the director configuration
Check the /usr/share/python-tripleoclient/undercloud.conf.sample file for new or deprecated parameters that might be applicable to your environment. Modify these parameters in your current /home/stack/undercloud.conf file. In particular, note the following parameters, as shown in the sketch after this list:
- container_images_file, which you should set to the absolute location of your containers-prepare-parameter.yaml file.
- enabled_drivers, which you should remove. The older drivers have now been replaced by hardware_types.
- generate_service_certificate, which now defaults to true. Switch it to false if your undercloud did not originally use SSL and you have no intention of enabling SSL. Note that enabling SSL on the undercloud requires providing extra environment files during the upgrade to establish trust between the undercloud and overcloud nodes.
3.6. Director configuration parameters
The following list contains information about parameters for configuring the undercloud.conf file. Keep all parameters within their relevant sections to avoid errors.
Defaults
The following parameters are defined in the [DEFAULT] section of the undercloud.conf file:
- additional_architectures
- A list of additional (kernel) architectures that an overcloud supports. Currently the overcloud supports the ppc64le architecture. Note: When enabling support for ppc64le, you must also set ipxe_enabled to False.
- certificate_generation_ca
- The certmonger nickname of the CA that signs the requested certificate. Use this option only if you have set the generate_service_certificate parameter. If you select the local CA, certmonger extracts the local CA certificate to /etc/pki/ca-trust/source/anchors/cm-local-ca.pem and adds the certificate to the trust chain.
- clean_nodes
- Defines whether to wipe the hard drive between deployments and after introspection.
- cleanup
- Cleanup temporary files. Set this to False to leave the temporary files used during deployment in place after the command is run. This is useful for debugging the generated files or if errors occur.
- container_cli
- The CLI tool for container management. Leave this parameter set to podman since Red Hat Enterprise Linux 8 only supports podman.
- container_healthcheck_disabled
- Disables containerized service health checks. It is recommended to keep health checks enabled and leave this option set to false.
- container_images_file
- Heat environment file with container image information. This can contain either parameters for all required container images, or the ContainerImagePrepare parameter to drive the required image preparation. Usually the file containing this parameter is named containers-prepare-parameter.yaml.
- container_insecure_registries
- A list of insecure registries for podman to use. Use this parameter if you want to pull images from another source, such as a private container registry. In most cases, podman has the certificates to pull container images from either the Red Hat Container Catalog or from your Satellite server if the undercloud is registered to Satellite.
- container_registry_mirror
- An optional registry mirror that podman uses.
- custom_env_files
- Additional environment files to add to the undercloud installation.
- deployment_user
- The user installing the undercloud. Leave this parameter unset to use the current default user (stack).
- discovery_default_driver
- Sets the default driver for automatically enrolled nodes. Requires enable_node_discovery to be enabled, and you must include the driver in the enabled_hardware_types list.
- enable_ironic; enable_ironic_inspector; enable_mistral; enable_tempest; enable_validations; enable_zaqar
- Defines the core services to enable for director. Leave these parameters set to true.
- enable_node_discovery
- Automatically enroll any unknown node that PXE-boots the introspection ramdisk. New nodes use the fake_pxe driver as a default, but you can set discovery_default_driver to override this. You can also use introspection rules to specify driver information for newly enrolled nodes.
- enable_novajoin
- Defines whether to install the novajoin metadata service in the undercloud.
- enable_routed_networks
- Defines whether to enable support for routed control plane networks.
- enable_swift_encryption
- Defines whether to enable Swift encryption at-rest.
- enable_telemetry
- Defines whether to install OpenStack Telemetry services (gnocchi, aodh, panko) in the undercloud. Set the enable_telemetry parameter to true if you want to install and configure telemetry services automatically. The default value is false, which disables telemetry on the undercloud. This parameter is required if you use other products that consume metrics data, such as Red Hat CloudForms.
- enabled_hardware_types
- A list of hardware types to enable for the undercloud.
- generate_service_certificate
- Defines whether to generate an SSL/TLS certificate during the undercloud installation, which is used for the undercloud_service_certificate parameter. The undercloud installation saves the resulting certificate to /etc/pki/tls/certs/undercloud-[undercloud_public_vip].pem. The CA defined in the certificate_generation_ca parameter signs this certificate.
- heat_container_image
- URL for the heat container image to use. Leave unset.
- heat_native
- Use native heat templates. Leave as true.
- hieradata_override
- Path to a hieradata override file that configures Puppet hieradata on the director, providing custom configuration to services beyond the undercloud.conf parameters. If set, the undercloud installation copies this file to the /etc/puppet/hieradata directory and sets it as the first file in the hierarchy. See Configuring hieradata on the undercloud for details on using this feature.
- inspection_extras
- Defines whether to enable extra hardware collection during the inspection process. This parameter requires the python-hardware or python-hardware-detect package on the introspection image.
- inspection_interface
- The bridge that the director uses for node introspection. This is a custom bridge that the director configuration creates. The LOCAL_INTERFACE attaches to this bridge. Leave this as the default br-ctlplane.
- inspection_runbench
- Runs a set of benchmarks during node introspection. Set this parameter to true to enable the benchmarks. This option is necessary if you intend to perform benchmark analysis when inspecting the hardware of registered nodes.
- ipa_otp
- Defines the one-time password to register the undercloud node to an IPA server. This is required when enable_novajoin is enabled.
- ipxe_enabled
- Defines whether to use iPXE or standard PXE. The default is true, which enables iPXE. Set to false to use standard PXE.
- local_interface
- The chosen interface for the director's Provisioning NIC. This is also the device that the director uses for DHCP and PXE boot services. Change this value to your chosen device. To see which device is connected, use the ip addr command. For example, this is the result of an ip addr command:

  2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
      link/ether 52:54:00:75:24:09 brd ff:ff:ff:ff:ff:ff
      inet 192.168.122.178/24 brd 192.168.122.255 scope global dynamic eth0
         valid_lft 3462sec preferred_lft 3462sec
      inet6 fe80::5054:ff:fe75:2409/64 scope link
         valid_lft forever preferred_lft forever
  3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noop state DOWN
      link/ether 42:0b:c2:a5:c1:26 brd ff:ff:ff:ff:ff:ff

  In this example, the External NIC uses eth0 and the Provisioning NIC uses eth1, which is currently not configured. In this case, set the local_interface to eth1. The configuration script attaches this interface to a custom bridge defined with the inspection_interface parameter.
- local_ip
- The IP address defined for the director's Provisioning NIC. This is also the IP address that the director uses for DHCP and PXE boot services. Leave this value as the default 192.168.24.1/24 unless you use a different subnet for the Provisioning network, for example, if it conflicts with an existing IP address or subnet in your environment.
- local_mtu
- The MTU to use for the local_interface. Do not exceed 1500 for the undercloud.
- local_subnet
- The local subnet to use for PXE boot and DHCP interfaces. The local_ip address should reside in this subnet. The default is ctlplane-subnet.
- net_config_override
- Path to a network configuration override template. If you set this parameter, the undercloud uses a JSON format template to configure the networking with os-net-config and ignores the network parameters set in undercloud.conf. See /usr/share/python-tripleoclient/undercloud.conf.sample for an example.
- networks_file
- Networks file to override for heat.
- output_dir
- Directory to output state, processed heat templates, and Ansible deployment files.
- overcloud_domain_name
- The DNS domain name to use when deploying the overcloud. Note: When configuring the overcloud, the CloudDomain parameter must be set to a matching value. Set this parameter in an environment file when you configure your overcloud.
- roles_file
- The roles file to override for undercloud installation. It is highly recommended to leave this unset so that the director installation uses the default roles file.
- scheduler_max_attempts
- The maximum number of times the scheduler attempts to deploy an instance. This value must be greater than or equal to the number of bare metal nodes that you expect to deploy at once, to work around a potential race condition when scheduling.
- service_principal
- The Kerberos principal for the service using the certificate. Use this parameter only if your CA requires a Kerberos principal, such as in FreeIPA.
- subnets
- List of routed network subnets for provisioning and introspection. See Subnets for more information. The default value includes only the ctlplane-subnet subnet.
- templates
- Heat templates file to override.
- undercloud_admin_host
- The IP address or hostname defined for director Admin API endpoints over SSL/TLS. The director configuration attaches the IP address to the director software bridge as a routed IP address, which uses the /32 netmask.
- undercloud_debug
- Sets the log level of undercloud services to DEBUG. Set this value to true to enable.
- undercloud_enable_selinux
- Enable or disable SELinux during the deployment. It is highly recommended to leave this value set to true unless you are debugging an issue.
- undercloud_hostname
- Defines the fully qualified host name for the undercloud. If set, the undercloud installation configures all system host name settings. If left unset, the undercloud uses the current host name, but the user must configure all system host name settings appropriately.
- undercloud_log_file
- The path to a log file to store the undercloud install and upgrade logs. By default, the log file is install-undercloud.log in the home directory, for example, /home/stack/install-undercloud.log.
- undercloud_nameservers
- A list of DNS nameservers to use for the undercloud hostname resolution.
- undercloud_ntp_servers
- A list of network time protocol servers to help synchronize the undercloud date and time.
- undercloud_public_host
- The IP address or hostname defined for director Public API endpoints over SSL/TLS. The director configuration attaches the IP address to the director software bridge as a routed IP address, which uses the /32 netmask.
- undercloud_service_certificate
- The location and filename of the certificate for OpenStack SSL/TLS communication. Ideally, you obtain this certificate from a trusted certificate authority. Otherwise, generate your own self-signed certificate.
- undercloud_timezone
- Host timezone for the undercloud. If you specify no timezone, director uses the existing timezone configuration.
- undercloud_update_packages
- Defines whether to update packages during the undercloud installation.
Subnets
Each provisioning subnet is a named section in the undercloud.conf file. For example, to create a subnet called ctlplane-subnet, use the following sample in your undercloud.conf file:
[ctlplane-subnet]
cidr = 192.168.24.0/24
dhcp_start = 192.168.24.5
dhcp_end = 192.168.24.24
inspection_iprange = 192.168.24.100,192.168.24.120
gateway = 192.168.24.1
masquerade = true
You can specify as many provisioning networks as necessary to suit your environment.
- gateway
- The gateway for the overcloud instances. This is the undercloud host, which forwards traffic to the External network. Leave this as the default 192.168.24.1 unless you use a different IP address for the director or want to use an external gateway directly. The director configuration also enables IP forwarding automatically using the relevant sysctl kernel parameter.
- cidr
- The network that the director uses to manage overcloud instances. This is the Provisioning network, which the undercloud neutron service manages. Leave this as the default 192.168.24.0/24 unless you use a different subnet for the Provisioning network.
- masquerade
- Defines whether to masquerade the network defined in the cidr for external access. This provides the Provisioning network with a degree of network address translation (NAT) so that the Provisioning network has external access through the director.
- dhcp_start; dhcp_end
- The start and end of the DHCP allocation range for overcloud nodes. Ensure this range contains enough IP addresses to allocate to your nodes.
- dhcp_exclude
- IP addresses to exclude from the DHCP allocation range.
- host_routes
- Host routes for the Neutron-managed subnet for the overcloud instances on this network. This also configures the host routes for the local_subnet on the undercloud.
- inspection_iprange
- A range of IP addresses that the director's introspection service uses during the PXE boot and provisioning process. Use comma-separated values to define the start and end of this range. For example, 192.168.24.100,192.168.24.120. Make sure this range contains enough IP addresses for your nodes and does not conflict with the range for dhcp_start and dhcp_end.
3.7. Upgrading the director
Complete the following steps to upgrade the director.
Procedure
- Run the following command to upgrade the director on the undercloud:

  $ openstack undercloud upgrade

  This command launches the director configuration script. The director upgrades its packages and configures its services to suit the settings in the undercloud.conf. This script takes several minutes to complete.

  Note: The director configuration script prompts for confirmation before proceeding. Bypass this confirmation using the -y option:

  $ openstack undercloud upgrade -y

- The script starts all OpenStack Platform service containers on the undercloud automatically. Check the enabled containers using the following command:

  [stack@director ~]$ sudo podman ps

- The script ensures that the stack user has access to container management commands. Refresh the stack user permissions with the following command:

  [stack@director ~]$ exec su -l stack

  The command prompts you to log in again. Enter the stack user password.
- To initialize the stack user to use the command line tools, run the following command:

  [stack@director ~]$ source ~/stackrc

  The prompt now indicates that OpenStack commands authenticate and execute against the undercloud:

  (undercloud) [stack@director ~]$
The director upgrade is complete.
3.8. Upgrading the overcloud images
You must replace your current overcloud images with new versions. The new images ensure that the director can introspect and provision your nodes using the latest version of OpenStack Platform software.
Prerequisites
- You have upgraded the undercloud to the latest version.
Procedure
- Remove any existing images from the images directory on the stack user's home (/home/stack/images):

  $ rm -rf ~/images/*

- Extract the archives:

  $ cd ~/images
  $ for i in /usr/share/rhosp-director-images/overcloud-full-latest-15.0.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-15.0.tar; do tar -xvf $i; done
  $ cd ~

- Import the latest images into the director:

  $ openstack overcloud image upload --update-existing --image-path /home/stack/images/

- Configure your nodes to use the new images:

  $ openstack overcloud node configure $(openstack baremetal node list -c UUID -f value)

- Verify the existence of the new images:

  $ openstack image list
  $ ls -l /var/lib/ironic/httpboot/
When deploying overcloud nodes, ensure that the overcloud image version corresponds to the respective Heat template version. For example, use only the OpenStack Platform 15 images with the OpenStack Platform 15 Heat templates.
3.9. Undercloud Post-Upgrade Notes
- If you use a local set of core templates in your stack user home directory, ensure that you update the templates using the recommended workflow in Using Customized Core Heat Templates. You must update the local copy before upgrading the overcloud.
3.10. Next Steps
The undercloud upgrade is complete. You can now prepare the overcloud for the upgrade.
Chapter 4. Preparing for the Overcloud upgrade
This process prepares the overcloud for the upgrade process.
Prerequisites
- You have upgraded the undercloud to the latest version.
4.1. Red Hat Subscription Manager (RHSM) composable service
The rhsm composable service provides a method to register overcloud nodes through Ansible. Each role in the default roles_data file contains an OS::TripleO::Services::Rhsm resource, which is disabled by default. To enable the service, register the resource to the rhsm composable service file:
resource_registry:
OS::TripleO::Services::Rhsm: /usr/share/openstack-tripleo-heat-templates/extraconfig/services/rhsm.yaml
The rhsm composable service accepts a RhsmVars parameter, which allows you to define multiple sub-parameters relevant to your registration. For example:
parameter_defaults:
RhsmVars:
rhsm_repos:
- rhel-8-for-x86_64-baseos-rpms
- rhel-8-for-x86_64-appstream-rpms
- rhel-8-for-x86_64-highavailability-rpms
- ansible-2.8-for-rhel-8-x86_64-rpms
- advanced-virt-for-rhel-8-x86_64-rpms
- openstack-15-for-rhel-8-x86_64-rpms
- rhceph-4-osd-for-rhel-8-x86_64-rpms
- rhceph-4-mon-for-rhel-8-x86_64-rpms
- rhceph-4-tools-for-rhel-8-x86_64-rpms
- fast-datapath-for-rhel-8-x86_64-rpms
rhsm_username: "myusername"
rhsm_password: "p@55w0rd!"
rhsm_org_id: "1234567"
You can also use the RhsmVars parameter in combination with role-specific parameters, for example ControllerParameters, to provide flexibility when enabling specific repositories for different node types, as shown in the sketch below.
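A minimal sketch of this role-specific pattern, reusing the repository names and credential placeholders from the example above (trim the repository list to what your Controller nodes actually need):

parameter_defaults:
  ControllerParameters:
    RhsmVars:
      rhsm_repos:
        - rhel-8-for-x86_64-baseos-rpms
        - rhel-8-for-x86_64-appstream-rpms
        - openstack-15-for-rhel-8-x86_64-rpms
      rhsm_username: "myusername"
      rhsm_password: "p@55w0rd!"
      rhsm_org_id: "1234567"

Parameters nested under ControllerParameters apply only to nodes in the Controller role, so other roles can keep a different RhsmVars definition.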
4.2. Switching to the rhsm composable service
The previous rhel-registration method runs a bash script to handle the overcloud registration. The scripts and environment files for this method are located in the core Heat template collection at /usr/share/openstack-tripleo-heat-templates/extraconfig/pre_deploy/rhel-registration/.
Complete the following steps to switch from the rhel-registration method to the rhsm composable service.
Procedure
- Exclude the rhel-registration environment files from future deployment operations. In most cases, exclude the following files:

  - rhel-registration/environment-rhel-registration.yaml
  - rhel-registration/rhel-registration-resource-registry.yaml

- If you use a custom roles_data file, ensure that each role in your roles_data file contains the OS::TripleO::Services::Rhsm composable service. For example:
  - name: Controller
    description: |
      Controller role that has all the controller services loaded and handles
      Database, Messaging and Network functions.
    CountDefault: 1
    ...
    ServicesDefault:
      ...
      - OS::TripleO::Services::Rhsm
      ...

- Add the environment file for rhsm composable service parameters to future deployment operations.
This method replaces the rhel-registration parameters with the rhsm service parameters and changes the Heat resource that enables the service from:
resource_registry:
OS::TripleO::NodeExtraConfig: rhel-registration.yaml
To:
resource_registry:
OS::TripleO::Services::Rhsm: /usr/share/openstack-tripleo-heat-templates/extraconfig/services/rhsm.yaml
You can also include the /usr/share/openstack-tripleo-heat-templates/environments/rhsm.yaml environment file with your deployment to enable the service.
4.3. rhel-registration to rhsm mappings
rhel-registration | rhsm / RhsmVars |
---|---|
rhel_reg_method | rhsm_method |
rhel_reg_org | rhsm_org_id |
rhel_reg_pool_id | rhsm_pool_ids |
rhel_reg_activation_key | rhsm_activation_key |
rhel_reg_auto_attach | rhsm_autosubscribe |
rhel_reg_sat_url | rhsm_satellite_url |
rhel_reg_repos | rhsm_repos |
rhel_reg_user | rhsm_username |
rhel_reg_password | rhsm_password |
rhel_reg_http_proxy_host | rhsm_rhsm_proxy_hostname |
rhel_reg_http_proxy_port | rhsm_rhsm_proxy_port |
rhel_reg_http_proxy_username | rhsm_rhsm_proxy_user |
rhel_reg_http_proxy_password | rhsm_rhsm_proxy_password |
4.4. Updating composable services
This section contains information about new and deprecated composable services.
- If you use the default roles_data file, these services are included automatically.
- If you use a custom roles_data file, add the new services and remove the deprecated services for each relevant role.
Controller Nodes
The following services have been deprecated for Controller nodes. Remove them from your Controller role.
Service | Reason |
---|---|
| OpenStack Platform no longer includes Ceilometer services. |
| This service has been substituted for two new services:
|
The following services are new for Controller nodes. Add them to your Controller role.
Service | Reason |
---|---|
| Only required if enabling the Block Storage (cinder) NVMeOF backend, |
| Run the commands to automatically pull and prepare container images relevant to the services in your overcloud. |
| Services for DNS-as-a-Service (designate). |
| Service for Bare Metal Introspection for the overcloud. |
| The networking agent for OpenStack Bare Metal (ironic). |
|
Replacement services for the |
|
Service to remove |
Compute Nodes
The following services are new for Compute nodes. Add them to your Compute role.
Service | Reason |
---|---|
|
Service to enable |
All Nodes
The following services are new for all nodes. Add them to all roles.
Service | Reason |
---|---|
| Service to enable Qpid Dispatch Router service for metrics and monitoring. |
| Service to enable Ansible-based Red Hat Subscription Management. |
4.5. Deprecated parameters
The following parameters are deprecated and have been replaced with role-specific parameters:
Old Parameter | New Parameter |
---|---|
controllerExtraConfig | ControllerExtraConfig |
OvercloudControlFlavor | OvercloudControllerFlavor |
controllerImage | ControllerImage |
NovaImage | ComputeImage |
NovaComputeExtraConfig | ComputeExtraConfig |
NovaComputeServerMetadata | ComputeServerMetadata |
NovaComputeSchedulerHints | ComputeSchedulerHints |
NovaComputeIPs | ComputeIPs |
SwiftStorageImage | ObjectStorageImage |
OvercloudSwiftStorageFlavor | OvercloudObjectStorageFlavor |
SwiftStorageIPs | ObjectStorageIPs |
SwiftStorageServerMetadata | ObjectStorageServerMetadata |
Update these parameters in your custom environment files.
If your OpenStack Platform environment still requires these deprecated parameters, the default roles_data file allows their use. However, if you are using a custom roles_data file and your overcloud still requires these deprecated parameters, you can allow access to them by editing the roles_data file and adding the following to each role:
Controller Role
- name: Controller
uses_deprecated_params: True
deprecated_param_extraconfig: 'controllerExtraConfig'
deprecated_param_flavor: 'OvercloudControlFlavor'
deprecated_param_image: 'controllerImage'
...
Compute Role
- name: Compute
uses_deprecated_params: True
deprecated_param_image: 'NovaImage'
deprecated_param_extraconfig: 'NovaComputeExtraConfig'
deprecated_param_metadata: 'NovaComputeServerMetadata'
deprecated_param_scheduler_hints: 'NovaComputeSchedulerHints'
deprecated_param_ips: 'NovaComputeIPs'
deprecated_server_resource_name: 'NovaCompute'
...
Object Storage Role
- name: ObjectStorage
uses_deprecated_params: True
deprecated_param_metadata: 'SwiftStorageServerMetadata'
deprecated_param_ips: 'SwiftStorageIPs'
deprecated_param_image: 'SwiftStorageImage'
deprecated_param_flavor: 'OvercloudSwiftStorageFlavor'
...
4.6. Deprecated CLI options
Some command line options are outdated or deprecated in favor of using Heat template parameters, which you include in the parameter_defaults section of an environment file. The following table maps deprecated options to their Heat template equivalents.
Option | Description | Heat Template Parameter |
---|---|---|
--control-scale | The number of Controller nodes to scale out | ControllerCount |
--compute-scale | The number of Compute nodes to scale out | ComputeCount |
--ceph-storage-scale | The number of Ceph Storage nodes to scale out | CephStorageCount |
--block-storage-scale | The number of Cinder nodes to scale out | BlockStorageCount |
--swift-storage-scale | The number of Swift nodes to scale out | ObjectStorageCount |
--control-flavor | The flavor to use for Controller nodes | OvercloudControllerFlavor |
--compute-flavor | The flavor to use for Compute nodes | OvercloudComputeFlavor |
--ceph-storage-flavor | The flavor to use for Ceph Storage nodes | OvercloudCephStorageFlavor |
--block-storage-flavor | The flavor to use for Cinder nodes | OvercloudBlockStorageFlavor |
--swift-storage-flavor | The flavor to use for Swift storage nodes | OvercloudSwiftStorageFlavor |
--validation-errors-fatal | The overcloud creation process performs a set of pre-deployment checks. This option exits if any fatal errors occur from the pre-deployment checks. It is advisable to use this option as any errors can cause your deployment to fail. | No parameter mapping |
--disable-validations | Disable the pre-deployment validations entirely. These validations were built-in pre-deployment validations, which have been replaced with external validations from the openstack-tripleo-validations package. | No parameter mapping |
--config-download | Run deployment using the config-download mechanism. This is now the standard and the option is redundant. | No parameter mapping |
--ntp-server | Sets the NTP server to use to synchronize time | NtpServer |
These parameters have been removed from Red Hat OpenStack Platform. It is recommended that you convert your CLI options to Heat parameters and add them to an environment file.
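For example, a minimal environment file that replaces the --control-scale, --compute-scale, and --ntp-server options might look like this (values illustrative):

parameter_defaults:
  ControllerCount: 3
  ComputeCount: 3
  NtpServer: pool.ntp.org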
4.7. Composable networks
This version of Red Hat OpenStack Platform introduces a new feature for composable networks. If you use a custom roles_data file, edit the file to add the composable networks to each role. For example, for Controller nodes:
- name: Controller
networks:
- External
- InternalApi
- Storage
- StorageMgmt
- Tenant
Check the default /usr/share/openstack-tripleo-heat-templates/roles_data.yaml file for further examples of syntax. Also check the example role snippets in /usr/share/openstack-tripleo-heat-templates/roles.
The following table contains a mapping of composable networks to custom standalone roles:
Role | Networks Required |
---|---|
Ceph Storage Monitor | `Storage`, `StorageMgmt` |
Ceph Storage OSD | `Storage`, `StorageMgmt` |
Ceph Storage RadosGW | `Storage`, `StorageMgmt` |
Cinder API | `InternalApi` |
Compute | `InternalApi`, `Tenant`, `Storage` |
Controller | `External`, `InternalApi`, `Storage`, `StorageMgmt`, `Tenant` |
Database | `InternalApi` |
Glance | `InternalApi` |
Heat | `InternalApi` |
Horizon | `InternalApi` |
Ironic | None required. Uses the Provisioning/Control Plane network for API. |
Keystone | `InternalApi` |
Load Balancer | `External`, `InternalApi`, `Storage`, `StorageMgmt`, `Tenant` |
Manila | `InternalApi` |
Message Bus | `InternalApi` |
Networker | `InternalApi`, `Tenant` |
Neutron API | `InternalApi` |
Nova | `InternalApi` |
OpenDaylight | `External`, `InternalApi`, `Tenant` |
Redis | `InternalApi` |
Sahara | `InternalApi` |
Swift API | `Storage` |
Swift Storage | `Storage`, `StorageMgmt` |
Telemetry | `InternalApi` |
In previous versions, the `*NetName` parameters (for example, `InternalApiNetName`) changed the names of the default networks. This is no longer supported. Use a custom composable network file instead. For more information, see "Using Composable Networks" in the Advanced Overcloud Customization guide.
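For example, to rename the default InternalApi network, a custom network_data entry might look like the following sketch (the name_lower value and subnet details are illustrative, and only the relevant keys are shown):

- name: InternalApi
  # Custom lower-case network name; the default is internal_api
  name_lower: internal_api_cloud_0
  vip: true
  ip_subnet: '172.16.2.0/24'
  allocation_pools: [{'start': '172.16.2.4', 'end': '172.16.2.250'}]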
4.8. Updating network interface templates
A new feature in OpenStack Platform 15 allows you to specify routes for each network in the overcloud's `network_data` file. To accommodate this new feature, the network interface templates now require parameters for the route list in each network. These parameters use the following format:
[network-name]InterfaceRoutes
Even if your overcloud does not use a routing list, you must still include these parameters for each network interface template.
- If you use one of the default NIC template sets, these parameters are included automatically.
- If you use a custom set of static NIC templates, add these new parameters to the `parameters` section of each role's template.
Red Hat OpenStack Platform includes a script to automatically add the missing parameters to your template files.
Procedure
Change to the director’s core template collection:
$ cd /usr/share/openstack-tripleo-heat-templates
Run the `merge-new-params-nic-config-script.py` script in the `tools` directory. For example, to update a custom Controller node NIC template, run the script with the following options:
$ python ./tools/merge-new-params-nic-config-script.py --role-name Controller -t /home/stack/ccsosp-templates/custom-nics/controller.yaml
Note the following options used with this script:
- `--role-name` defines the name of the role to use as a basis for the template update.
- `-t, --template` defines the filename of the NIC template to update.
- `-n, --network-data` defines the relative path to the `network_data` file. Use this option for custom `network_data` files. If omitted, the script uses the default file.
- `-r, --roles-data` defines the relative path to the `roles_data.yaml` file. Use this option for custom `roles_data` files. If omitted, the script uses the default file.
The script saves a copy of the original template and adds a timestamp extension to the copy’s filename. To compare the differences between the original and updated template, run the following command:
$ diff /home/stack/ccsosp-templates/custom-nics/controller.yaml.[TIMESTAMP] /home/stack/ccsosp-templates/custom-nics/controller.yaml
Replace `[TIMESTAMP]` with the timestamp on the original filename. The output displays the new route parameters for that role:
StorageMgmtInterfaceRoutes:
  default: []
  description: >
    Routes for the storage_mgmt network traffic.
    JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
    Unless the default is changed, the parameter is automatically resolved
    from the subnet host_routes attribute.
  type: json
TenantInterfaceRoutes:
  default: []
  description: >
    Routes for the tenant network traffic.
    JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
    Unless the default is changed, the parameter is automatically resolved
    from the subnet host_routes attribute.
  type: json
ExternalInterfaceRoutes:
  default: []
  description: >
    Routes for the external network traffic.
    JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
    Unless the default is changed, the parameter is automatically resolved
    from the subnet host_routes attribute.
  type: json
InternalApiInterfaceRoutes:
  default: []
  description: >
    Routes for the internal_api network traffic.
    JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
    Unless the default is changed, the parameter is automatically resolved
    from the subnet host_routes attribute.
  type: json
ManagementInterfaceRoutes:
  default: []
  description: >
    Routes for the management network traffic.
    JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
    Unless the default is changed, the parameter is automatically resolved
    from the subnet host_routes attribute.
  type: json
StorageInterfaceRoutes:
  default: []
  description: >
    Routes for the storage network traffic.
    JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
    Unless the default is changed, the parameter is automatically resolved
    from the subnet host_routes attribute.
  type: json
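Within a custom NIC template, each route parameter feeds the routes property of the corresponding interface in the os-net-config network_config section. A minimal sketch for the internal_api network, assuming a VLAN-based interface (the surrounding interface definition is illustrative, not a complete template):

network_config:
  - type: vlan
    vlan_id:
      get_param: InternalApiNetworkVlanID
    addresses:
      - ip_netmask:
          get_param: InternalApiIpSubnet
    # Routes resolved from the subnet host_routes attribute unless overridden
    routes:
      get_param: InternalApiInterfaceRoutes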
For more information, see "Isolating Networks".
4.9. Preparing Block Storage service to receive custom configuration files
When upgrading to the containerized environment, use the `CinderVolumeOptVolumes` parameter to add docker volume mounts. This makes custom configuration files on the host available to the cinder-volume service when it runs in a container.
For example:
parameter_defaults:
  CinderVolumeOptVolumes:
    - /etc/cinder/nfs_shares1:/etc/cinder/nfs_shares1
    - /etc/cinder/nfs_shares2:/etc/cinder/nfs_shares2
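Each entry in the list uses the docker bind-mount syntax <host path>:<container path>. In the example above the container sees the NFS shares files at the same paths as the host, so a backend setting that references those paths, such as the NFS driver's nfs_shares_config option, continues to work unchanged inside the container.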
4.10. Next Steps
The overcloud preparation stage is complete. You can now perform an upgrade of the overcloud to OpenStack Platform 15 using the steps in Chapter 5, Upgrading the Overcloud.
Chapter 5. Upgrading the Overcloud
This process upgrades the overcloud.
Prerequisites
- You have upgraded the undercloud to the latest version.
- You have prepared your custom environment files to accommodate the changes in the upgrade.
5.1. Relevant files for upgrade
The following is a list of new and modified files for the overcloud upgrade.
Roles
- If you use custom roles, include the updated `roles_data` file with new and deprecated services.
Network
- If you use isolated networks, include the `network_data` file.
- If you use custom NIC templates, include the new versions.
Environment File
- Include the `containers-prepare-parameter.yaml` file created during the undercloud upgrade.
- Replace the `rhel-registration` environment files with the environment file that configures the Ansible-based Red Hat Subscription Management service.
- Include any additional environment files relevant to your overcloud configuration.
5.2. Running the overcloud upgrade preparation
The upgrade requires running the `openstack overcloud upgrade prepare` command, which performs the following tasks:
- Updates the overcloud plan to OpenStack Platform 15
- Prepares the nodes for the upgrade
Procedure
Source the `stackrc` file:
$ source ~/stackrc
Run the upgrade preparation command:
$ openstack overcloud upgrade prepare \
    --templates \
    -e /home/stack/containers-prepare-parameter.yaml \
    -e <ENVIRONMENT FILE>
Include the following options relevant to your environment:
- Custom configuration environment files (`-e`).
- The environment file (`containers-prepare-parameter.yaml`) with your new container image locations (`-e`). In most cases, this is the same environment file that the undercloud uses.
- If applicable, your custom roles (`roles_data`) file using `--roles-file`.
- If applicable, your composable network (`network_data`) file using `--networks-file`.
- If you use a custom stack name, pass the name with the `--stack` option.
- Wait until the upgrade preparation completes.
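As a sketch, a fully assembled preparation command for an environment with custom roles, custom networks, and additional environment files might look like the following (the file paths and stack name are illustrative):

$ openstack overcloud upgrade prepare \
    --templates \
    --stack overcloud \
    --roles-file /home/stack/templates/roles_data.yaml \
    --networks-file /home/stack/templates/network_data.yaml \
    -e /home/stack/containers-prepare-parameter.yaml \
    -e /home/stack/templates/custom-config.yaml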
5.3. Running the container image preparation
The overcloud requires the OpenStack Platform 15 container images before performing the upgrade. This involves executing the `container_image_prepare` external upgrade process. To execute this process, run the `openstack overcloud external-upgrade run` command against tasks tagged with the `container_image_prepare` tag. These tasks perform the following operations:
- Automatically prepare all container image configuration relevant to your environment.
- Pull the relevant container images to your undercloud, unless you have previously disabled this option.
Procedure
Source the `stackrc` file:
$ source ~/stackrc
Run the `openstack overcloud external-upgrade run` command against tasks tagged with the `container_image_prepare` tag:
$ openstack overcloud external-upgrade run --tags container_image_prepare
- If you use a custom stack name, pass the name with the `--stack` option.
5.4. Upgrading Controller and custom role nodes
Use the following process to upgrade all the Controller nodes, split Controller services, and other custom nodes to OpenStack Platform 15. The process involves running the `openstack overcloud upgrade run` command and including the `--nodes` option to restrict operations to only the selected nodes:
$ openstack overcloud upgrade run --nodes [ROLE]
Substitute `[ROLE]` with the name of a role, or with a comma-separated list of roles.
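For example, to upgrade two roles in a single run, pass them as a comma-separated list:
$ openstack overcloud upgrade run --nodes Database,Messaging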
If your overcloud uses monolithic Controller nodes, run this command against the `Controller` role.
If your overcloud uses Controller services split across multiple roles, upgrade the node roles in the following order:
- All roles that use Pacemaker, for example: `ControllerOpenStack`, `Database`, `Messaging`, and `Telemetry`
- `Networker` nodes
- Any other custom roles
Do not upgrade the following nodes yet:
- `Compute` nodes
- `CephStorage` nodes
You will upgrade these nodes at a later stage.
Procedure
Source the `stackrc` file:
$ source ~/stackrc
If you use monolithic Controller nodes, run the upgrade command against the `Controller` role:
$ openstack overcloud upgrade run --nodes Controller
- If you use a custom stack name, pass the name with the `--stack` option.
If you use Controller services split across multiple roles:
- Run the upgrade command for roles with Pacemaker services:
$ openstack overcloud upgrade run --nodes ControllerOpenStack
$ openstack overcloud upgrade run --nodes Database
$ openstack overcloud upgrade run --nodes Messaging
$ openstack overcloud upgrade run --nodes Telemetry
If you use a custom stack name, pass the name with the `--stack` option.
- Run the upgrade command for the `Networker` role:
$ openstack overcloud upgrade run --nodes Networker
If you use a custom stack name, pass the name with the `--stack` option.
- Run the upgrade command for any remaining custom roles, except for `Compute` or `CephStorage` roles:
$ openstack overcloud upgrade run --nodes ObjectStorage
- If you use a custom stack name, pass the name with the `--stack` option.
5.5. Upgrading all Compute nodes
This process upgrades all remaining Compute nodes to OpenStack Platform 15. The process involves running the `openstack overcloud upgrade run` command and including the `--nodes Compute` option to restrict operations to the Compute nodes only.
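If you first want to test the upgrade on a few Compute nodes before committing the whole role, you can pass individual node names instead of the role name (the node names here are illustrative):
$ openstack overcloud upgrade run --nodes compute-0,compute-1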
Procedure
Source the `stackrc` file:
$ source ~/stackrc
Run the upgrade command:
$ openstack overcloud upgrade run --nodes Compute
- If you use a custom stack name, pass the name with the `--stack` option.
- Wait until the Compute node upgrade completes.
5.6. Upgrading all Ceph Storage nodes
This process upgrades the Ceph Storage nodes. The process involves:
- Running the `openstack overcloud upgrade run` command and including the `--nodes CephStorage` option to restrict operations to the Ceph Storage nodes.
- Running the `openstack overcloud external-upgrade run --tags ceph` command to perform an upgrade to a containerized Red Hat Ceph Storage 3 cluster.
Procedure
Source the `stackrc` file:
$ source ~/stackrc
Run the upgrade command:
$ openstack overcloud upgrade run --nodes CephStorage
- If you use a custom stack name, pass the name with the `--stack` option.
- Wait until the node upgrade completes.
Run the Ceph Storage external upgrade process. For example:
$ openstack overcloud external-upgrade run --tags ceph
- If you use a custom stack name, pass the name with the `--stack` option.
- To pass any additional overrides, see Section 5.6.1, “Custom parameters for upgrades”.
- Wait until the Ceph Storage node upgrade completes.
5.6.1. Custom parameters for upgrades
When migrating Ceph to containers, each Ceph monitor and OSD is stopped sequentially. The migration does not continue until the same service that was stopped is successfully restarted. Ansible waits 15 seconds (the delay) and checks 5 times for the service to start (the retries). If the service does not restart, the migration stops so the operator can intervene.
Depending on the size of the Ceph cluster, you may need to increase the retry or delay values. The exact names of these parameters and their defaults are as follows:
health_mon_check_retries: 5
health_mon_check_delay: 15
health_osd_check_retries: 5
health_osd_check_delay: 15
To change the default values and make the cluster check 30 times and wait 40 seconds between each check, pass the following parameters in a yaml file with `-e` using the `openstack overcloud deploy` command:
parameter_defaults:
CephAnsibleExtraConfig:
health_osd_check_delay: 40
health_osd_check_retries: 30
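For example, assuming you save the overrides above to a file named /home/stack/templates/ceph-upgrade-overrides.yaml (an illustrative path), include it alongside your other environment files:

$ openstack overcloud deploy --templates \
    -e <OTHER ENVIRONMENT FILES> \
    -e /home/stack/templates/ceph-upgrade-overrides.yaml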
5.7. Performing online database upgrades
Some overcloud components require an online upgrade (or migration) of their database tables. This involves executing the `online_upgrade` external upgrade process. To execute this process, run the `openstack overcloud external-upgrade run` command against tasks tagged with the `online_upgrade` tag. This performs online database upgrades to the following components:
- OpenStack Block Storage (cinder)
- OpenStack Compute (nova)
- OpenStack Bare Metal (ironic) if enabled in the overcloud
Procedure
Source the `stackrc` file:
$ source ~/stackrc
Run the `openstack overcloud external-upgrade run` command against tasks tagged with the `online_upgrade` tag:
$ openstack overcloud external-upgrade run --tags online_upgrade
- If you use a custom stack name, pass the name with the `--stack` option.
5.8. Finalizing the upgrade
The upgrade requires a final step to update the overcloud stack. This ensures that the stack's resource structure aligns with a regular deployment of OpenStack Platform 15 and allows you to perform standard `openstack overcloud deploy` functions in the future.
Procedure
Source the `stackrc` file:
$ source ~/stackrc
Run the upgrade finalization command:
$ openstack overcloud upgrade converge \
    --templates \
    -e <ENVIRONMENT FILE>
Include the following options relevant to your environment:
- Custom configuration environment files (`-e`).
- If you use a custom stack name, pass the name with the `--stack` option.
- If applicable, your custom roles (`roles_data`) file using `--roles-file`.
- If applicable, your composable network (`network_data`) file using `--networks-file`.
- Wait until the upgrade finalization completes.
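As a sketch, a fully assembled finalization command for an environment with custom roles and networks might look like the following (the file paths and stack name are illustrative):

$ openstack overcloud upgrade converge \
    --templates \
    --stack overcloud \
    --roles-file /home/stack/templates/roles_data.yaml \
    --networks-file /home/stack/templates/network_data.yaml \
    -e /home/stack/templates/custom-config.yaml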