Appendix A. Deployment migration options
This section includes topics related to the validation of DCN storage, as well as migrating or changing architectures.
A.1. Validating edge storage
Ensure that the deployments at the central and edge sites are working by testing glance multi-store and instance creation.
You can import images into glance that are available on the local filesystem or available on a web server.
Always store an image copy in the central site, even if there are no instances using the image at the central location.
Prerequisites
Check the stores that are available through the Image service by using the `glance stores-info` command. In the following example, three stores are available: central, dcn0, and dcn1. These correspond to the glance stores at the central location and the edge sites, respectively:

```
$ glance stores-info
+----------+----------------------------------------------------------------------------------+
| Property | Value                                                                            |
+----------+----------------------------------------------------------------------------------+
| stores   | [{"default": "true", "id": "central", "description": "central rbd glance        |
|          | store"}, {"id": "dcn0", "description": "dcn0 rbd glance store"},                 |
|          | {"id": "dcn1", "description": "dcn1 rbd glance store"}]                          |
+----------+----------------------------------------------------------------------------------+
```
A.1.1. Importing from a local file
You must upload the image to the central location’s store first, then copy the image to remote sites.
Ensure that your image file is in RAW format. If the image is not in raw format, you must convert the image before importing it into the Image service:
```
$ file cirros-0.5.1-x86_64-disk.img
cirros-0.5.1-x86_64-disk.img: QEMU QCOW2 Image (v3), 117440512 bytes
$ qemu-img convert -f qcow2 -O raw cirros-0.5.1-x86_64-disk.img cirros-0.5.1-x86_64-disk.raw
```
Import the image into the default back end at the central site:
```
$ glance image-create \
    --disk-format raw --container-format bare \
    --name cirros --file cirros-0.5.1-x86_64-disk.raw \
    --store central
```
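Whether a conversion is needed can also be determined from the file's magic bytes instead of parsing `file` output. The following is a minimal sketch using throwaway demo files; the filenames are illustrative and the `qemu-img` command is only printed, nothing is converted:

```shell
# Return 0 if the file starts with the QCOW magic bytes ("QFI"), 1 otherwise.
is_qcow2() {
    [ "$(head -c 3 "$1")" = "QFI" ]
}

# Throwaway demo files: one with a QCOW2 header, one without.
printf 'QFI\373\000\000\000\003' > /tmp/demo-qcow2.img
printf 'not-qcow2'               > /tmp/demo-raw.img

for img in /tmp/demo-qcow2.img /tmp/demo-raw.img; do
    if is_qcow2 "$img"; then
        # On a real system you would run the conversion; here it is only printed.
        echo "$img: QCOW2 detected, convert with: qemu-img convert -f qcow2 -O raw $img ${img%.img}.raw"
    else
        echo "$img: no QCOW2 magic, treating as raw"
    fi
done
```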
A.1.2. Importing an image from a web server
If the image is hosted on a web server, you can use the `GlanceImageImportPlugins` parameter to upload the image to multiple stores.
This procedure assumes that the default image conversion plugin is enabled in glance. This feature automatically converts QCOW2 file formats into RAW images, which are optimal for Ceph RBD. You can confirm that a glance image is in RAW format by running the `glance image-show <image-ID> | grep disk_format` command.
Procedure
Use the `image-create-via-import` parameter of the `glance` command to import an image from a web server, and specify the target stores with the `--stores` parameter:

```
# glance image-create-via-import \
    --disk-format qcow2 \
    --container-format bare \
    --name cirros \
    --uri http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img \
    --import-method web-download \
    --stores central,dcn1
```
In this example, the qcow2 cirros image is downloaded from the official Cirros site, converted to RAW by glance, and imported into the central site and edge site dcn1 as specified by the `--stores` parameter.
Alternatively, you can replace `--stores` with `--all-stores True` to upload the image to all of the stores.
A.1.3. Copying an image to a new site
You can copy existing images from the central location to edge sites, which gives you access to previously created images at newly established locations.
Use the UUID of the glance image for the copy operation:
```
$ ID=$(openstack image show cirros -c id -f value)
$ glance image-import $ID --stores dcn0,dcn1 --import-method copy-image
```
Note: In this example, the `--stores` option specifies that the `cirros` image is copied from the central site to edge sites dcn0 and dcn1. Alternatively, you can use the `--all-stores True` option, which uploads the image to all the stores that do not currently have the image.

Confirm that a copy of the image is in each store. Note that the `stores` key, which is the last item in the properties map, is set to `central,dcn0,dcn1`:

```
$ openstack image show $ID | grep properties
| properties | direct_url=rbd://d25504ce-459f-432d-b6fa-79854d786f2b/images/8083c7e7-32d8-4f7a-b1da-0ed7884f1076/snap,
locations=[{u'url': u'rbd://d25504ce-459f-432d-b6fa-79854d786f2b/images/8083c7e7-32d8-4f7a-b1da-0ed7884f1076/snap', u'metadata': {u'store': u'central'}},
{u'url': u'rbd://0c10d6b5-a455-4c4d-bd53-8f2b9357c3c7/images/8083c7e7-32d8-4f7a-b1da-0ed7884f1076/snap', u'metadata': {u'store': u'dcn0'}},
{u'url': u'rbd://8649d6c3-dcb3-4aae-8c19-8c2fe5a853ac/images/8083c7e7-32d8-4f7a-b1da-0ed7884f1076/snap', u'metadata': {u'store': u'dcn1'}}],
os_glance_failed_import='', os_glance_importing_to_stores='', os_hash_algo='sha512',
os_hash_value='b795f047a1b10ba0b7c95b43b2a481a59289dc4cf2e49845e60b194a911819d3ada03767bbba4143b44c93fd7f66c96c5a621e28dff51d1196dae64974ce240e',
os_hidden=False, stores=central,dcn0,dcn1 |
```
Always store an image copy in the central site even if there is no VM using it on that site.
A.1.4. Confirming that an instance at an edge site can boot with image based volumes
You can use an image at the edge site to create a persistent root volume.
Procedure
Identify the ID of the image to create as a volume, and pass that ID to the `openstack volume create` command:

```
$ IMG_ID=$(openstack image show cirros -c id -f value)
$ openstack volume create --size 8 --availability-zone dcn0 pet-volume-dcn0 --image $IMG_ID
```
Identify the volume ID of the newly created volume and pass it to the `openstack server create` command:

```
$ VOL_ID=$(openstack volume show -f value -c id pet-volume-dcn0)
$ openstack server create --flavor tiny --key-name dcn0-key --network dcn0-network \
    --security-group basic --availability-zone dcn0 --volume $VOL_ID pet-server-dcn0
```
You can verify that the volume is based on the image by running the `rbd` command within a `ceph-mon` container at the dcn0 edge site to list the volumes pool:

```
$ sudo podman exec ceph-mon-$HOSTNAME rbd --cluster dcn0 -p volumes ls -l
NAME                                         SIZE   PARENT                                            FMT  PROT  LOCK
volume-28c6fc32-047b-4306-ad2d-de2be02716b7  8 GiB  images/8083c7e7-32d8-4f7a-b1da-0ed7884f1076@snap  2          excl
```
Confirm that you can create a cinder snapshot of the root volume of the instance. Ensure that the server is stopped to quiesce data and create a clean snapshot. Use the `--force` option, because the volume status remains `in-use` when the instance is off:

```
$ openstack server stop pet-server-dcn0
$ openstack volume snapshot create pet-volume-dcn0-snap --volume $VOL_ID --force
$ openstack server start pet-server-dcn0
```
List the contents of the volumes pool on the dcn0 Ceph cluster to show the newly created snapshot.
```
$ sudo podman exec ceph-mon-$HOSTNAME rbd --cluster dcn0 -p volumes ls -l
NAME                                                                                       SIZE   PARENT                                            FMT  PROT  LOCK
volume-28c6fc32-047b-4306-ad2d-de2be02716b7                                                8 GiB  images/8083c7e7-32d8-4f7a-b1da-0ed7884f1076@snap  2          excl
volume-28c6fc32-047b-4306-ad2d-de2be02716b7@snapshot-a1ca8602-6819-45b4-a228-b4cd3e5adf60  8 GiB  images/8083c7e7-32d8-4f7a-b1da-0ed7884f1076@snap  2    yes
```
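The procedure above uses `--force` because the volume remains `in-use`. If you instead script a wait for a status transition, a generic polling helper can be sketched as below; `get_status` is a stub standing in for a call such as `openstack volume show -f value -c status <volume>`:

```shell
# Poll until get_status reports the wanted value, or give up after N tries.
wait_for_status() {
    want=$1; tries=$2; i=0
    while [ "$i" -lt "$tries" ]; do
        [ "$(get_status)" = "$want" ] && return 0
        i=$((i + 1))    # in real use, add a delay here, e.g. 'sleep 5'
    done
    return 1
}

# Stub for 'openstack volume show -f value -c status <volume>':
# reports 'in-use' twice, then 'available'.
get_status() {
    count=$(cat /tmp/status_count 2>/dev/null || echo 0)
    echo $((count + 1)) > /tmp/status_count
    if [ "$count" -ge 2 ]; then echo available; else echo in-use; fi
}

rm -f /tmp/status_count
wait_for_status available 5 && echo "volume is available"
```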
A.1.5. Confirming image snapshots can be created and copied between sites
Verify that you can create a new image at the dcn0 site. Ensure that the server is stopped to quiesce data and create a clean snapshot:

```
$ NOVA_ID=$(openstack server show pet-server-dcn0 -f value -c id)
$ openstack server stop $NOVA_ID
$ openstack server image create --name cirros-snapshot $NOVA_ID
$ openstack server start $NOVA_ID
```
Copy the image from the `dcn0` edge site back to the hub location, which is the default back end for glance:

```
$ IMAGE_ID=$(openstack image show cirros-snapshot -f value -c id)
$ glance image-import $IMAGE_ID --stores central --import-method copy-image
```
For more information on glance multistore operations, see Image service with multiple stores.
A.2. Migrating to a spine and leaf deployment
You can migrate an existing cloud with a pre-existing network configuration to a spine leaf architecture if the following conditions are met:
- All bare metal ports must have their `physical-network` property value set to `ctlplane`.
- The `enable_routed_networks` parameter must be added and set to `true` in `undercloud.conf`, followed by a re-run of the undercloud installation command, `openstack undercloud install`.
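A minimal `undercloud.conf` fragment for the second condition might look like the following; the `[DEFAULT]` section header is assumed, as in a standard undercloud.conf:

```ini
[DEFAULT]
enable_routed_networks = true
```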
After the undercloud is re-deployed, the overcloud is considered a spine leaf deployment with a single leaf, `leaf0`. You can add additional provisioning leaves to the deployment through the following steps.
- Add the desired subnets to `undercloud.conf` as shown in Configuring routed spine-leaf in the undercloud.
- Re-run the undercloud installation command, `openstack undercloud install`.
- Add the desired additional networks and roles to the overcloud templates, `network_data.yaml` and `roles_data.yaml` respectively.

  Note: If you use the `{{network.name}}InterfaceRoutes` parameter in the network configuration file, ensure that the `NetworkDeploymentActions` parameter includes the value `UPDATE`:

  ```
  NetworkDeploymentActions: ['CREATE','UPDATE']
  ```

- Finally, re-run the overcloud installation script that includes all relevant heat templates for your cloud deployment.
A.3. Migrating to a multistack deployment
You can migrate from a single stack deployment to a multistack deployment by treating the existing deployment as the central site, and adding additional edge sites.
You cannot split the existing stack. You can scale down the existing stack to remove compute nodes if needed. These compute nodes can then be added to edge sites.
This action creates workload interruptions if all compute nodes are removed.
A.4. Backing up and restoring across edge sites
You can back up and restore Block Storage service (cinder) volumes across distributed compute node (DCN) architectures in edge sites and availability zones. The `cinder-backup` service runs in the central availability zone (AZ), and backups are stored in the central AZ. The Block Storage service does not store backups at DCN sites.
Prerequisites
- Deploy the optional Block Storage backup service. For more information, see Block Storage backup service deployment in Backing up Block Storage volumes.
- Block Storage (cinder) REST API microversion 3.51 or later.
- All sites must use a common `openstack` cephx client name. For more information, see Creating a Ceph key for external access in Deploying a Distributed Compute Node (DCN) architecture.
Procedure
Create a backup of a volume in the first DCN site:
```
$ cinder --os-volume-api-version 3.51 backup-create --name <volume_backup> \
    --availability-zone <az_central> <edge_volume>
```
- Replace `<volume_backup>` with a name for the volume backup.
- Replace `<az_central>` with the name of the central availability zone that hosts the `cinder-backup` service.
- Replace `<edge_volume>` with the name of the volume that you want to back up.

Note: If you experience issues with Ceph keyrings, you might need to restart the `cinder-backup` container so that the keyrings copy from the host to the container successfully.
Restore the backup to a new volume in the second DCN site:
```
$ cinder --os-volume-api-version 3.51 create --availability-zone <az_2> \
    --name <new_volume> --backup-id <volume_backup> <volume_size>
```
- Replace `<az_2>` with the name of the availability zone where you want to restore the backup.
- Replace `<new_volume>` with a name for the new volume.
- Replace `<volume_backup>` with the name of the volume backup that you created in the previous step.
- Replace `<volume_size>` with a value in GB equal to or greater than the size of the original volume.
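When scripting the restore, you can validate the `<volume_size>` constraint before calling `cinder create`. The following is a minimal sketch with hypothetical sizes:

```shell
# Hypothetical sizes in GB; on a real system you would read the original size
# with 'openstack volume show -f value -c size <edge_volume>'.
orig_size=8
new_size=10

if [ "$new_size" -ge "$orig_size" ]; then
    echo "ok: restoring into a ${new_size} GB volume (original is ${orig_size} GB)"
else
    echo "error: <volume_size> must be at least ${orig_size} GB" >&2
fi
```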
A.5. Overcloud adoption and preparation in a DCN environment
You must perform the following tasks for overcloud adoption:
- Each site is fully upgraded separately, one by one, starting with the central location.
- Adopt the network and host provisioning configuration exports into the overcloud, for the central location stack.
- Define new containers and additional compatibility configuration.
After adoption, you must run the upgrade preparation script, which performs the following tasks:
- Updates the overcloud plan to OpenStack Platform 17.1
- Prepares the nodes for the upgrade
For information about the duration and impact of this upgrade procedure, see Upgrade duration and impact.
Prerequisites
- All nodes are in the `ACTIVE` state:

  ```
  $ openstack baremetal node list
  ```
- If any nodes are in the `MAINTENANCE` state, set them to `ACTIVE`:

  ```
  $ openstack baremetal node maintenance unset <node_uuid>
  ```
- Replace `<node_uuid>` with the UUID of the node.
Procedure
- Log in to the undercloud host as the `stack` user.
- Source the `stackrc` undercloud credentials file:

  ```
  $ source ~/stackrc
  ```
Verify that the following files, which were exported during the undercloud upgrade, contain the expected configuration for the overcloud upgrade. You can find the files in the `~/overcloud-deploy` directory:

- `tripleo-<stack>-passwords.yaml`
- `tripleo-<stack>-network-data.yaml`
- `tripleo-<stack>-virtual-ips.yaml`
- `tripleo-<stack>-baremetal-deployment.yaml`

Note: If the files were not generated after the undercloud upgrade, contact Red Hat Support.
Important: If you have a multi-cell environment, review Overcloud adoption for multi-cell environments for an example of copying the files to each cell stack.
- On the main stack, copy the `passwords.yaml` file to the `~/overcloud-deploy/<stack>` directory. Repeat this step on each stack in your environment:

  ```
  $ cp ~/overcloud-deploy/<stack>/tripleo-<stack>-passwords.yaml ~/overcloud-deploy/<stack>/<stack>-passwords.yaml
  ```

  Replace `<stack>` with the name of your stack.
If you are performing the preparation and adoption at the central location, copy the `network-data.yaml` file to the stack user's home directory and deploy the networks. Do this only for the central location:

```
$ cp /home/stack/overcloud-deploy/central/tripleo-central-network-data.yaml ~/
$ mkdir /home/stack/overcloud_adopt
$ openstack overcloud network provision --debug \
  --output /home/stack/overcloud_adopt/generated-networks-deployed.yaml tripleo-central-network-data.yaml
```
For more information, see Provisioning and deploying your overcloud in Installing and managing Red Hat OpenStack Platform with director.
If you are performing the preparation and adoption at the central location, copy the `virtual-ips.yaml` file to the stack user's home directory and provision the network VIPs. Do this only for the central location:

```
$ cp /home/stack/overcloud-deploy/central/tripleo-central-virtual-ips.yaml ~/
$ openstack overcloud network vip provision --debug \
  --stack <stack> --output \
  /home/stack/overcloud_adopt/generated-vip-deployed.yaml tripleo-central-virtual-ips.yaml
```
On the main stack, copy the `baremetal-deployment.yaml` file to the stack user's home directory and provision the overcloud nodes. Repeat this step on each stack in your environment:

```
$ cp ~/overcloud-deploy/<stack>/tripleo-<stack>-baremetal-deployment.yaml ~/
$ openstack overcloud node provision --debug --stack <stack> \
  --output /home/stack/overcloud_adopt/baremetal-central-deployment.yaml \
  tripleo-<stack>-baremetal-deployment.yaml
```
Note: This is the final step of the overcloud adoption. If your overcloud adoption takes longer than 10 minutes to complete, contact Red Hat Support.
Complete the following steps to prepare the containers:
Back up the `containers-prepare-parameter.yaml` file that you used for the undercloud upgrade:

```
$ cp containers-prepare-parameter.yaml \
    containers-prepare-parameter.yaml.orig
```
Define the following environment variables before you run the script to update the `containers-prepare-parameter.yaml` file:

- `NAMESPACE`: The namespace for the UBI9 images. For example, `NAMESPACE='"namespace":"example.redhat.com:5002",'`
- `EL8_NAMESPACE`: The namespace for the UBI8 images.
- `NEUTRON_DRIVER`: The driver that determines which OpenStack Networking (neutron) container to use. Set it to the type of containers you used to deploy the original stack. For example, set `NEUTRON_DRIVER='"neutron_driver":"ovn",'` to use OVN-based containers.
- `EL8_TAGS`: The tags of the UBI8 images, for example, `EL8_TAGS='"tag":"17.1",'`. Replace `"17.1",` with the tag that you use in your content view.
- `EL9_TAGS`: The tags of the UBI9 images, for example, `EL9_TAGS='"tag":"17.1",'`. Replace `"17.1",` with the tag that you use in your content view.

  For more information about the `tag` parameter, see Container image preparation parameters in Customizing your Red Hat OpenStack Platform deployment.
- `CONTROL_PLANE_ROLES`: The list of control plane roles using the `--role` option, for example, `--role ControllerOpenstack, --role Database, --role Messaging, --role Networker, --role CephStorage`. To view the list of control plane roles in your environment, run the following command:

  ```
  $ export STACK=<stack>
  $ sudo awk '/tripleo_role_name/ {print "--role " $2}' \
    /var/lib/mistral/${STACK}/tripleo-ansible-inventory.yaml \
    | grep -vi compute
  ```

  Replace `<stack>` with the name of your stack.
- `COMPUTE_ROLES`: The list of Compute roles using the `--role` option, for example, `--role Compute`. To view the list of Compute roles in your environment, run the following command:

  ```
  $ sudo awk '/tripleo_role_name/ {print "--role " $2}' \
    /var/lib/mistral/${STACK}/tripleo-ansible-inventory.yaml \
    | grep -i compute
  ```
- `CEPH_OVERRIDE`: If you deployed Red Hat Ceph Storage, specify the Red Hat Ceph Storage 5 container images. For example:

  ```
  CEPH_OVERRIDE='"ceph_image":"rhceph-5-rhel8","ceph_tag":"<latest>",'
  ```

  Replace `<latest>` with the latest `ceph_tag` version, for example, `5-499`.
The following is an example of the `containers-prepare-parameter.yaml` file configuration:

```
NAMESPACE='"namespace":"registry.redhat.io/rhosp-rhel9",'
EL8_NAMESPACE='"namespace":"registry.redhat.io/rhosp-rhel8",'
NEUTRON_DRIVER='"neutron_driver":"ovn",'
EL8_TAGS='"tag":"17.1",'
EL9_TAGS='"tag":"17.1",'
CONTROL_PLANE_ROLES="--role Controller"
COMPUTE_ROLES="--role Compute"
CEPH_TAGS='"ceph_tag":"5",'
```
- Run the following script to update the `containers-prepare-parameter.yaml` file:

  Warning: If you deployed Red Hat Ceph Storage, ensure that the `CEPH_OVERRIDE` environment variable is set to the correct values before you run the following command. Failure to do so results in issues when upgrading Red Hat Ceph Storage.

  ```
  $ python3 /usr/share/openstack-tripleo-heat-templates/tools/multi-rhel-container-image-prepare.py \
    ${COMPUTE_ROLES} \
    ${CONTROL_PLANE_ROLES} \
    --enable-multi-rhel \
    --excludes collectd \
    --excludes nova-libvirt \
    --minor-override "{${EL8_TAGS}${EL8_NAMESPACE}${CEPH_OVERRIDE}${NEUTRON_DRIVER}\"no_tag\":\"not_used\"}" \
    --major-override "{${EL9_TAGS}${NAMESPACE}${CEPH_OVERRIDE}${NEUTRON_DRIVER}\"no_tag\":\"not_used\"}" \
    --output-env-file \
    /home/stack/containers-prepare-parameter.yaml
  ```
The `multi-rhel-container-image-prepare.py` script supports the following parameters:

- `--output-env-file`: Writes the environment file that contains the default `ContainerImagePrepare` value.
- `--local-push-destination`: Triggers an upload to a local registry.
- `--enable-registry-login`: Enables the flag that allows the system to attempt to log in to a remote registry before pulling the containers. Use this flag when `--local-push-destination` is not used and the target systems have network connectivity to remote registries. Do not use this flag for an overcloud that might not have network connectivity to a remote registry.
- `--enable-multi-rhel`: Enables multi-rhel.
- `--excludes`: Lists the services to exclude.
- `--major-override`: Lists the override parameters for a major release.
- `--minor-override`: Lists the override parameters for a minor release.
- `--role`: The list of roles.
- `--role-file`: The `role_data.yaml` file.
- If you deployed Red Hat Ceph Storage, open the `containers-prepare-parameter.yaml` file to confirm that the Red Hat Ceph Storage 5 container images are specified and that there are no references to Red Hat Ceph Storage 6 container images.
If you have a director-deployed Red Hat Ceph Storage deployment, create a file called `ceph_params.yaml` and include the following content:

```
parameter_defaults:
  CephSpecFqdn: true
  CephConfigPath: "/etc/ceph"
  CephAnsibleRepo: "rhceph-5-tools-for-rhel-8-x86_64-rpms"
  DeployedCeph: true
```
Important: Do not remove the `ceph_params.yaml` file after the RHOSP upgrade is complete. This file must be present in director-deployed Red Hat Ceph Storage environments. Additionally, any time you run `openstack overcloud deploy`, you must include the `ceph_params.yaml` file, for example, `-e ceph_params.yaml`.

Note: If your Red Hat Ceph Storage deployment includes short names, you must set the `CephSpecFqdn` parameter to `false`. If set to `true`, the inventory generates with both the short names and domain names, causing the Red Hat Ceph Storage upgrade to fail.

Create an environment file called `upgrades-environment.yaml` in your templates directory and include the following content:

```
parameter_defaults:
  ExtraConfig:
    nova::workarounds::disable_compute_service_check_for_ffu: true
  DnsServers: ["<dns_servers>"]
  DockerInsecureRegistryAddress: <undercloud_FQDN>
  UpgradeInitCommand: |
    sudo subscription-manager repos --disable=*
    if $( grep -q 9.2 /etc/os-release )
    then
      sudo subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms --enable=rhel-9-for-x86_64-highavailability-eus-rpms --enable=openstack-17.1-for-rhel-9-x86_64-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms
      sudo podman ps | grep -q ceph && subscription-manager repos --enable=rhceph-5-tools-for-rhel-9-x86_64-rpms
      sudo subscription-manager release --set=9.2
    else
      sudo subscription-manager repos --enable=rhel-8-for-x86_64-baseos-tus-rpms --enable=rhel-8-for-x86_64-appstream-tus-rpms --enable=rhel-8-for-x86_64-highavailability-tus-rpms --enable=openstack-17.1-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms
      sudo podman ps | grep -q ceph && subscription-manager repos --enable=rhceph-5-tools-for-rhel-8-x86_64-rpms
      sudo subscription-manager release --set=8.4
    fi
    if $(sudo podman ps | grep -q ceph )
    then
      sudo dnf -y install cephadm
    fi
```
- Replace `<dns_servers>` with a comma-separated list of your DNS server IP addresses, for example, `["10.0.0.36", "10.0.0.37"]`.
- Replace `<undercloud_FQDN>` with the fully qualified domain name (FQDN) of the undercloud host, for example, `"undercloud-0.ctlplane.redhat.local:8787"`.

For more information about the upgrade parameters that you can configure in the environment file, see Upgrade parameters.
If you are performing the preparation and adoption at an edge location, set the `AuthCloudName` parameter to the name of the central location:

```
parameter_defaults:
  AuthCloudName: central
```
If multiple Image service (glance) stores are deployed, set the Image service API policy for copy-image to allow all rules:

```
parameter_defaults:
  GlanceApiPolicies: {glance-copy_image: {key: 'copy_image', value: ""}}
```
On the undercloud, create a file called `overcloud_upgrade_prepare.sh` in your templates directory. You must create this file for each stack in your environment. This file includes the original content of your overcloud deploy file and the environment files that are relevant to your environment.

If you are creating the `overcloud_upgrade_prepare.sh` file for a DCN edge location, you must include the following templates:

- An environment template that contains exported central site parameters. You can find this file in `/home/stack/overcloud-deploy/central/central-export.yaml`.
- `generated-networks-deployed.yaml`, the resulting file from running the `openstack overcloud network provision` command at the central location.
- `generated-vip-deployed.yaml`, the resulting file from running the `openstack overcloud network vip provision` command at the central location.

For example:

```
#!/bin/bash
openstack overcloud upgrade prepare --yes \
  --timeout 460 \
  --templates /usr/share/openstack-tripleo-heat-templates \
  --ntp-server 192.168.24.1 \
  --stack <stack> \
  -r /home/stack/roles_data.yaml \
  -e /home/stack/templates/internal.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml \
  -e /home/stack/templates/network/network-environment.yaml \
  -e /home/stack/templates/inject-trust-anchor.yaml \
  -e /home/stack/templates/hostnames.yml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
  -e /home/stack/templates/nodes_data.yaml \
  -e /home/stack/templates/debug.yaml \
  -e /home/stack/templates/firstboot.yaml \
  -e /home/stack/templates/upgrades-environment.yaml \
  -e /home/stack/overcloud-params.yaml \
  -e /home/stack/overcloud-deploy/<stack>/overcloud-network-environment.yaml \
  -e /home/stack/overcloud-adopt/<stack>-passwords.yaml \
  -e /home/stack/overcloud_adopt/<stack>-baremetal-deployment.yaml \
  -e /home/stack/overcloud_adopt/generated-networks-deployed.yaml \
  -e /home/stack/overcloud_adopt/generated-vip-deployed.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/nova-hw-machine-type-upgrade.yaml \
  -e /home/stack/skip_rhel_release.yaml \
  -e ~/containers-prepare-parameter.yaml
```
Note: If you have a multi-cell environment, review Overcloud adoption for multi-cell environments for an example of creating the `overcloud_upgrade_prepare.sh` file for each cell stack.
- In the original `network-environment.yaml` file (`/home/stack/templates/network/network-environment.yaml`), remove all the resource_registry resources that point to `OS::TripleO::*::Net::SoftwareConfig`.
- In the `overcloud_upgrade_prepare.sh` file, include the following options relevant to your environment:
- The environment file (`upgrades-environment.yaml`) with the upgrade-specific parameters (`-e`).
- The environment file (`containers-prepare-parameter.yaml`) with your new container image locations (`-e`). In most cases, this is the same environment file that the undercloud uses.
- The environment file (`skip_rhel_release.yaml`) with the release parameters (`-e`).
- Any custom configuration environment files (`-e`) relevant to your deployment.
- If applicable, your custom roles (`roles_data`) file by using `--roles-file`.
- For Ceph deployments, the environment file (`ceph_params.yaml`) with the Ceph parameters (`-e`).
- The files that were generated during overcloud adoption (`networks-deployed.yaml`, `vip-deployed.yaml`, `baremetal-deployment.yaml`) (`-e`).
- If applicable, the environment file (`ipa-environment.yaml`) with your IPA service (`-e`).
- If you are using composable networks, the (`network_data`) file by using `--network-file`.

Note: Do not include the `network-isolation.yaml` file in your overcloud deploy file or the `overcloud_upgrade_prepare.sh` file. Network isolation is defined in the `network_data.yaml` file.

If you use a custom stack name, pass the name with the `--stack` option.

Note: You must include the `nova-hw-machine-type-upgrade.yaml` file in your templates until all of your RHEL 8 Compute nodes are upgraded to RHEL 9 in the environment. If this file is excluded, an error appears in the `nova_compute.log` in the `/var/log/containers/nova` directory. After you upgrade all of your RHEL 8 Compute nodes to RHEL 9, you can remove this file from your configuration and update the stack.

In the director-deployed Red Hat Ceph Storage use case, if you enabled the Shared File Systems service (manila) with CephFS through NFS on the deployment that you are upgrading, you must specify an additional environment file at the end of the `overcloud_upgrade_prepare.sh` script file. You must add the environment file at the end of the script because it overrides another environment file that is specified earlier in the script:

```
-e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/manila-cephfsganesha-config.yaml
```
In the external Red Hat Ceph Storage use case, if you enabled the Shared File Systems service (manila) with CephFS through NFS on the deployment that you are upgrading, you must check that the associated environment file in the `overcloud_upgrade_prepare.sh` script points to the tripleo-based `ceph-nfs` role. If present, remove the following environment file:

```
-e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/manila-cephfsganesha-config.yaml
```

And add the following environment file:

```
-e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsganesha-config.yaml
```
Run the upgrade preparation script for each stack in your environment:
```
$ source stackrc
$ chmod 755 /home/stack/overcloud_upgrade_prepare.sh
$ sh /home/stack/overcloud_upgrade_prepare.sh
```
Note: If you have a multi-cell environment, you must run each `overcloud_upgrade_prepare.sh` script that you created for the cell stacks. For an example, see Overcloud adoption for multi-cell environments.

- Wait until the upgrade preparation completes.
Download the container images:
```
$ openstack overcloud external-upgrade run --stack <stack> --tags container_image_prepare
```