Ansible Automation Platform migration
Migrate your deployment of Ansible Automation Platform from one installation type to another
Abstract
Providing feedback on Red Hat documentation
If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request.
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Chapter 1. Introduction and objectives
This document outlines the necessary steps and considerations for migrating between different Ansible Automation Platform deployment types for Ansible Automation Platform 2.5. Specifically, it focuses on these migration paths:
Source environment | Target environment |
---|---|
RPM-based Ansible Automation Platform | Container-based Ansible Automation Platform |
RPM-based Ansible Automation Platform | OpenShift Container Platform |
RPM-based Ansible Automation Platform | Managed Ansible Automation Platform |
Container-based Ansible Automation Platform | OpenShift Container Platform |
Container-based Ansible Automation Platform | Managed Ansible Automation Platform |
Migrations outside of those listed are not supported at this time.
The primary goals of this document are to:
- Document all components and configurations that must be migrated between Ansible Automation Platform deployment types
- Give step-by-step migration workflows for different deployment scenarios
- Identify potential challenges and unknowns that require further investigation
Chapter 2. Out of scope
This guide is focused on the core components of Ansible Automation Platform. The following items are currently out of scope for the migration processes described in this document:
- Event-Driven Ansible: Configuration and content for Event-Driven Ansible must be manually recreated in the target environment.
- Instance groups: Instance group configurations must be manually recreated after migration.
- Hub content: Content hosted in automation hub must be manually reimported or reconfigured.
- Custom Certificate Authority (CA) for receptor mesh: Custom CA configurations for receptor mesh must be manually reconfigured.
- Disconnected environments: The migration process for disconnected environments is not covered in this guide.
- Execution environments (other than the default one): Custom execution environments must be rebuilt or reimported manually.
At the time of writing, you must re-create, import, or configure the content and configuration for these items manually in the target environment. These out-of-scope items might be added as supported components in future updates to this migration guide.
Chapter 3. Migration process overview
The migration between Ansible Automation Platform installation types follows this general workflow:
- Prepare and assess the source environment - Prepare and assess the existing source environment for migration.
- Export the source environment - Extract the necessary data and configurations from the source environment.
- Create and verify the migration artifact - Package all collected data and configurations into a migration artifact.
- Prepare and assess the target environment - Prepare and assess the new target environment for migration.
- Import the migration content to the target environment - Transfer the migration artifact into the prepared target environment.
- Reconcile the target environment post-import - Address any inconsistencies and reconfigure services in the target environment after import.
- Validate the target environment - Confirm the migrated environment is fully operational.
Chapter 4. Migration prerequisites
Review the prerequisites for migrating your Ansible Automation Platform deployment. Before proceeding, ensure that you meet all conditions for your specific migration path.
4.1. Prerequisites for migrating from an RPM deployment to a containerized deployment
Before migrating from an RPM-based deployment to a container-based deployment, ensure you meet the following prerequisites:
- You have a source RPM-based deployment of Ansible Automation Platform.
- The source RPM-based deployment is on the latest async release of the version you are on.
- You have a target environment prepared for a container-based deployment of Ansible Automation Platform.
- The target deployment is on the latest release of the Ansible Automation Platform version you are on.
- You have downloaded the containerized installer.
- You have enough storage for database dumps and backups.
- There is network connectivity between source and target environments.
4.2. Prerequisites for migrating from an RPM-based deployment to an OpenShift Container Platform deployment
Before migrating from an RPM-based deployment to an OpenShift Container Platform deployment, ensure you meet the following prerequisites:
- You have a source RPM-based deployment of Ansible Automation Platform.
- The source RPM-based deployment is on the latest async release of the version you are on.
- You have a target OpenShift Container Platform environment ready.
- The target deployment is on the latest release of the Ansible Automation Platform version you are on.
- You have Ansible Automation Platform Operator available.
- You have made a decision on internal or external database configuration.
- You have made a decision on internal or external Redis configuration.
- There is network connectivity between source and target environments.
4.3. Prerequisites for migrating from an RPM-based deployment to a Managed Ansible Automation Platform deployment
Before migrating from an RPM-based deployment to a Managed Ansible Automation Platform deployment, ensure you meet the following prerequisites:
- You have a source RPM-based deployment of Ansible Automation Platform.
- The source deployment is on the latest release of the Ansible Automation Platform version you are on.
- You have a target Managed Ansible Automation Platform deployment.
- You have enabled local authentication on the source deployment before the migration.
- A local administrator account must be functional on the source deployment before migration. Verify this by performing a successful login to the source deployment.
- You have a plan to retain a backup throughout the migration process and to ensure that your existing Ansible Automation Platform deployment remains active until your migration has completed successfully.
You have a plan for any environment changes based on the migration from a self-hosted Ansible Automation Platform deployment to a Managed Ansible Automation Platform deployment:
- Job log retention changes from a customer-configured option to 30 days.
- Network changes occur when moving the control plane to the managed service.
- Automation mesh requires reconfiguration.
- You must reconfigure or re-create SSO identity providers post-migration to account for URL changes.
4.4. Prerequisites for migrating from a container-based deployment to an OpenShift Container Platform deployment
Before migrating from a container-based deployment to an OpenShift Container Platform deployment, ensure that you meet the following prerequisites:
- You have a source container-based deployment of Ansible Automation Platform.
- The source deployment is on the latest async release of the version you are on.
- You have a target OpenShift Container Platform environment ready.
- The target deployment is on the latest release of the Ansible Automation Platform version you are on.
- You have an Ansible Automation Platform Operator available.
- You have decided between internal or external database configuration.
- You have decided between internal or external Redis configuration.
- There is network connectivity between source and target environments.
4.5. Prerequisites for migrating from a container-based deployment to a Managed Ansible Automation Platform deployment
Before migrating from a container-based deployment to a Managed Ansible Automation Platform deployment, ensure that you meet the following prerequisites:
- You have a source container-based deployment of Ansible Automation Platform.
- The source deployment is on the latest release of the Ansible Automation Platform version you are on.
- You have a target Managed Ansible Automation Platform deployment.
- You have enabled local authentication on the source deployment before the migration.
- A local administrator account must be functional on the source deployment before migration. Verify this by performing a successful login to the source deployment.
- You have a plan to retain a backup throughout the migration process and to ensure that your existing Ansible Automation Platform deployment remains active until your migration has completed successfully.
You have a plan for any environment changes based on the migration from a self-hosted Ansible Automation Platform deployment to a Managed Ansible Automation Platform deployment:
- Job log retention changes from a customer-configured option to 30 days.
- Network changes occur when moving the control plane to the managed service.
- Automation mesh requires reconfiguration.
- You must reconfigure or re-create SSO identity providers post-migration to account for URL changes.
Chapter 5. Migration artifact structure and verification
The migration artifact is a critical component for successfully transferring your Ansible Automation Platform deployment. It packages all necessary data and configurations from your source environment.
This section details the structure of the migration artifact and includes a migration checklist for artifact verification.
5.1. Artifact structure
The migration artifact serves as a comprehensive package containing all necessary components to successfully transfer your Ansible Automation Platform deployment.
Structure the artifact as follows:
/
  manifest.yml
  secrets.yml
  sha256sum.txt
  controller/
    controller.pgc
  custom_configs/
    foo.py
    bar.py
  gateway/
    gateway.pgc
  hub/
    hub.pgc
5.2. Manifest file
The manifest.yml file serves as the primary metadata document for the migration artifact, containing critical versioning and component information from your source environment.
Structure the manifest as follows:
aap_version: X.Y   # The version being migrated
platform: rpm      # The source platform type
components:
  - name: controller
    version: x.y.z
  - name: hub
    version: x.y.z
  - name: gateway
    version: x.y.z
5.3. Secrets file
The secrets.yml file in the migration artifact includes essential Django SECRET_KEY values and other sensitive data required for authentication between services.
Structure the secrets file as follows:
controller_pg_database: <redacted>
controller_secret_key: <redacted>
gateway_pg_database: <redacted>
gateway_secret_key: <redacted>
hub_pg_database: <redacted>
hub_secret_key: <redacted>
hub_db_fields_encryption_key: <redacted>
Ensure the secrets.yml file is encrypted and kept in a secure location.
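One option for the encryption is Ansible Vault, which is available wherever Ansible is installed; a minimal sketch:
$ ansible-vault encrypt secrets.yml
$ ansible-vault view secrets.yml
If you encrypt the file this way, decrypt it or provide the vault password before using it as an extra variables file during the import.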
5.4. Migration artifact creation checklist
Use this checklist to verify the migration artifact.
Database dumps: Include complete database dumps for each component.
- Ensure the automation controller database (controller.pgc) is present in the artifact.
- Ensure the automation hub database (hub.pgc) is present in the artifact.
- Ensure the platform gateway database (gateway.pgc) is present in the artifact.
Secret dumps: Export and include all security-related information.
- Validate that all secret values are present in the secrets.yml file.
Custom configurations: Package all customizations from the source environment.
- Validate that any custom Python scripts or modules (for example foo.py, bar.py) are present in the artifact.
- Document any non-standard configurations or environment-specific settings.
Database information: Document database details.
- Include the database names for all components.
- Document database users and required permissions.
- Note any database-specific configurations or optimizations.
Verification: Ensure artifact integrity and completeness (see the sketch after this checklist).
- Verify that all required files are included in the artifact.
- Verify that checksums exist for all included database files.
- Test the artifact’s structure and accessibility.
- Consider encrypting the artifact for secure transfer to the target environment.
- Document any known limitations or special considerations.
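To support the verification items above, the following minimal sketch checks the artifact before transfer, assuming the artifact layout and checksum files described in this chapter:
$ tar tf artifact.tar                      # confirm the expected structure is present
$ sha256sum --check artifact.tar.sha256    # verify the archive checksum
$ tar xf artifact.tar && cd artifact && sha256sum --check sha256sum.txt   # verify the database dump checksums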
Chapter 6. Source environment
Prepare and export data from your existing Ansible Automation Platform deployment. The exported data forms a critical migration artifact, which you use to configure your new environment.
6.1. RPM-based Ansible Automation Platform
Prepare and export data from your RPM-based Ansible Automation Platform deployment.
6.1.1. Preparing and assessing the source environment
Before beginning your migration, document your current RPM deployment. This documentation serves as a reference throughout the migration process and is critical for properly configuring your target environment.
Procedure
Document the full topology of your current RPM deployment:
- Map out all servers, nodes, and their roles (for example control nodes, execution nodes, database servers).
- Note the hostname, IP address, and function of each server in your deployment.
- Document the network configuration between components.
Ansible Automation Platform version information:
- Record the exact Ansible Automation Platform version (X.Y) currently deployed.
Document the specific version of each component:
- Automation controller version
- Automation hub version
- Platform gateway version
Database configuration:
- Database names for each component
- Database users and roles
- Connection parameters and authentication methods
- Any custom PostgreSQL configurations or optimizations
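There is no required format for these notes, but keeping them in a single file makes the later steps easier to cross-check. A minimal sketch, with hypothetical hostnames and roles only:
$ cat > ~/aap-source-topology.txt <<'EOF'
aap_version: 2.5 (RPM-based)
gateway:    gateway01.example.org
controller: controller01.example.org (control node), exec01.example.org (execution node)
hub:        hub01.example.org
database:   db01.example.org (PostgreSQL 15, installer-managed)
EOF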
6.1.2. Exporting the source environment
From your source environment, export the data and configurations needed for migration.
Procedure
Verify the PostgreSQL database version is PostgreSQL version 15.
You can verify your current PostgreSQL version by connecting to your database server and running the following command as the postgres user:
$ psql -c 'SELECT version();'
Important: PostgreSQL version 15 is a strict requirement for the migration process to succeed. If running PostgreSQL 13 or earlier, upgrade to version 15 before proceeding with the migration.
If using an Ansible Automation Platform managed database, re-run the installation program to upgrade the PostgreSQL version. If using a customer provided (external) database, contact your database administrator or service provider to confirm the version and arrange for an upgrade if required.
Create a complete backup of the source environment:
$ ./setup.sh -e 'backup_dest=/path/to/backup_dir/' -b
Get the connection settings from one node in each of the component groups.
For each command, access the host and become the root user.
Access the automation controller node and run:
# awx-manage print_settings | grep '^DATABASES'
Access the automation hub node and run:
# grep '^DATABASES' /etc/pulp/settings.py
Access the platform gateway node and run:
# aap-gateway-manage print_settings | grep '^DATABASES'
Stage the manually created artifact on the platform gateway node.
# mkdir -p /tmp/backups/artifact/{controller,gateway,hub}
# mkdir -p /tmp/backups/artifact/controller/custom_configs
# touch /tmp/backups/artifact/secrets.yml
# cd /tmp/backups/artifact/
Validate the database size and make sure you have enough space on the filesystem for the pg_dump.
You can verify the database sizes by connecting to your database server and running the following command as the postgres user:
$ psql -c '\l+'
Adjust the filesystem size or mount an external filesystem as needed before performing the next step.
Note: This procedure assumes that all target files will be sent to the /tmp filesystem. You must adjust the commands to match your environment's needs.
Perform database dumps of all components on the platform gateway node within the artifact you created.
# psql -h <pg_hostname> -U <component_pg_user> -d <database_name> -t -c 'SHOW server_version;' # ensure connectivity to the database
# pg_dump -h <pg_hostname> -U <component_pg_user> -d <component_pg_name> --clean --create -Fc -f <component>/<component>.pgc
# ls -ld <component>/<component>.pgc
# echo "<component>_pg_database: <database_name>" >> secrets.yml ## Add the database name for the component to the secrets file
Export secrets from the RPM environment from one node of each component group.
For each of the following steps, use the root user to run the commands.
Access the automation controller node, gather the secret key, and add it to the controller_secret_key value in the secrets.yml file.
# cat /etc/tower/SECRET_KEY
Access the automation hub node, gather the secret key, and add it to the hub_secret_key value in the secrets.yml file.
# grep 'SECRET_KEY' /etc/pulp/settings.py | awk -F'=' '{ print $2}'
Access the automation hub node, gather the database_fields.symmetric.key value, and add it to the hub_db_fields_encryption_key value in the secrets.yml file.
# cat /etc/pulp/certs/database_fields.symmetric.key
Access the platform gateway node, gather the secret key, and add it to the gateway_secret_key value in the secrets.yml file.
# cat /etc/ansible-automation-platform/gateway/SECRET_KEY
Export automation controller custom configurations.
If any custom settings exist in /etc/tower/conf.d, copy them to /tmp/backups/artifact/controller/custom_configs, as shown in the sketch after this list.
Configuration files on automation controller that are managed by the installation program and not considered custom:
- /etc/tower/conf.d/postgres.py
- /etc/tower/conf.d/channels.py
- /etc/tower/conf.d/caching.py
- /etc/tower/conf.d/cluster_host_id.py
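A minimal sketch of one way to collect only the non-managed files, run on the automation controller node; the destination path assumes the artifact staging directory created in this procedure, and the final scp line is only needed when the platform gateway node is a separate host:
mkdir -p /tmp/custom_configs
cd /etc/tower/conf.d
for f in *.py; do
  [ -e "$f" ] || continue
  case "$f" in
    postgres.py|channels.py|caching.py|cluster_host_id.py) ;;   # managed by the installer, skip
    *) cp -p "$f" /tmp/custom_configs/ ;;
  esac
done
# Only needed when the gateway node is a separate host:
scp /tmp/custom_configs/*.py <user>@<gateway_node>:/tmp/backups/artifact/controller/custom_configs/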
Package the artifact.
# cd /tmp/backups/artifact/
# [ -f sha256sum.txt ] && rm -f sha256sum.txt; find . -type f -name "*.pgc" -exec sha256sum {} \; >> sha256sum.txt
# cat sha256sum.txt
# cd /tmp/backups/
# tar cf artifact.tar artifact
# sha256sum artifact.tar > artifact.tar.sha256
# sha256sum --check artifact.tar.sha256
# tar tvf artifact.tar
Example output of tar tvf artifact.tar:
drwxr-xr-x ansible/ansible        0 2025-05-08 16:48 artifact/
drwxr-xr-x ansible/ansible        0 2025-05-08 16:33 artifact/controller/
-rw-r--r-- ansible/ansible   732615 2025-05-08 16:26 artifact/controller/controller.pgc
drwxr-xr-x ansible/ansible        0 2025-05-08 16:33 artifact/controller/custom_configs/
drwxr-xr-x ansible/ansible        0 2025-05-08 16:11 artifact/gateway/
-rw-r--r-- ansible/ansible   231155 2025-05-08 16:28 artifact/gateway/gateway.pgc
drwxr-xr-x ansible/ansible        0 2025-05-08 16:26 artifact/hub/
-rw-r--r-- ansible/ansible 29252002 2025-05-08 16:26 artifact/hub/hub.pgc
-rw-r--r-- ansible/ansible      614 2025-05-08 16:24 artifact/secrets.yml
-rw-r--r-- ansible/ansible      338 2025-05-08 16:48 artifact/sha256sum.txt
Download the artifact.tar and artifact.tar.sha256 to your local machine or transfer them to the target node with the scp command.
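For example, a transfer over scp might look like the following; the user and host names are placeholders:
$ scp /tmp/backups/artifact.tar /tmp/backups/artifact.tar.sha256 <user>@<target_node>:~/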
6.1.3. Creating and verifying the migration artifact
To create and verify the migration artifact, follow the instructions in Migration artifact structure and verification.
6.2. Container-based Ansible Automation Platform
Prepare and export data from your container-based Ansible Automation Platform deployment.
6.2.1. Preparing and assessing the source environment
Before beginning your migration, document your current containerized deployment. This documentation serves as a reference throughout the migration process and is critical for properly configuring your target environment.
Procedure
Document the full topology of your current containerized deployment:
- Map out all servers, nodes, and their roles (for example control nodes, execution nodes, database servers).
- Note the hostname, IP address, and function of each server in your deployment.
- Document the network configuration between components.
Ansible Automation Platform version information:
- Record the exact Ansible Automation Platform version (X.Y) currently deployed.
Document the specific version of each component:
- Automation controller version
- Automation hub version
- Platform gateway version
Database configuration:
- Database names for each component
- Database users and roles
- Connection parameters and authentication methods
- Any custom PostgreSQL configurations or optimizations
- Identify all custom configurations and settings
- Document container resource allocations and volumes
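To capture the container allocations and volumes, the standard podman listing commands, run as the user that owns the Ansible Automation Platform containers, are usually sufficient; a minimal sketch:
$ podman ps --format "{{.Names}}\t{{.Image}}"    # running containers and their images
$ podman volume ls                               # named volumes backing persistent data
$ podman secret ls                               # secrets created by the containerized installer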
6.2.2. Exporting the source environment
From your source environment, export the data and configurations needed for migration.
Procedure
Verify the PostgreSQL database version is PostgreSQL version 15.
You can verify your current PostgreSQL version by connecting to your database server and running the following command as the postgres user:
$ psql -c 'SELECT version();'
Important: PostgreSQL version 15 is a strict requirement for the migration process to succeed. If running PostgreSQL 13 or earlier, upgrade to version 15 before proceeding with the migration.
If using an Ansible Automation Platform managed database, re-run the installation program to upgrade the PostgreSQL version. If using a customer provided (external) database, contact your database administrator or service provider to confirm the version and arrange for an upgrade if required.
Create a complete backup of the source environment:
$ ansible-playbook -i <path_to_inventory> ansible.containerized_installer.backup
Get the connection settings from one node in each of the component groups.
Access the automation controller node and run:
$ podman exec -it automation-controller-task bash -c 'awx-manage print_settings | grep DATABASES'
Access the automation hub node and run:
$ podman exec -it automation-hub-api bash -c "pulpcore-manager diffsettings | grep '^DATABASES'"
Access the platform gateway node and run:
$ podman exec -it automation-gateway bash -c "aap-gateway-manage print_settings | grep '^DATABASES'"
Validate the database size and make sure you have enough space on the filesystem for the pg_dump.
You can verify the database sizes by connecting to your database server and running the following command as the postgres user:
$ podman exec -it postgresql bash -c 'psql -c "\l+"'
Adjust the filesystem size or mount an external filesystem as needed before performing the next step.
Note: This procedure assumes that all target files will be sent to the /tmp filesystem. You might want to adjust the commands to match your environment's needs.
Stage the manually created artifact on the platform gateway node.
# mkdir -p /tmp/backups/artifact/{controller,gateway,hub}
# mkdir -p /tmp/backups/artifact/controller/custom_configs
# touch /tmp/backups/artifact/secrets.yml
# cd /tmp/backups/artifact/
Perform database dumps of all components on the platform gateway node within the artifact created previously.
To run the psql and pg_dump commands, you must create a temporary container and run the commands inside of it. This command must be run from the database node.
$ podman run -it --rm --name postgresql_restore_temp --network host --volume ~/aap/tls/extracted:/etc/pki/ca-trust/extracted:z --volume ~/aap/postgresql/server.crt:/var/lib/pgsql/server.crt:ro,z --volume ~/aap/postgresql/server.key:/var/lib/pgsql/server.key:ro,z --volume /tmp/backups/artifact:/var/lib/pgsql/backups:z registry.redhat.io/rhel8/postgresql-15:latest bash
Note: This command assumes the image registry.redhat.io/rhel8/postgresql-15:latest. If you are missing the image, check the available images for the user with podman images.
The command above opens a shell inside the container named postgresql_restore_temp and has the artifact mounted into /var/lib/pgsql/backups. Also, this command mounts the PostgreSQL certificates to ensure that you can resolve the correct certificates.
bash-4.4$ cd /var/lib/pgsql/backups
bash-4.4$ psql -h <pg_hostname> -U <component_pg_user> -d <database_name> -t -c 'SHOW server_version;' # ensure connectivity to db
bash-4.4$ pg_dump -h <pg_hostname> -U <component_pg_user> -d <component_pg_name> --clean --create -Fc -f <component>/<component>.pgc
bash-4.4$ ls -ld <component>/<component>.pgc
bash-4.4$ echo "<component>_pg_database: <database_name>" >> secrets.yml ## Add the DB name for the component to the secrets file
After collecting this data, exit from this temporary container.
Export the secrets from the containerized environment from one node of each component group.
For each of the following steps, use the same user that runs the Ansible Automation Platform containers to run the commands.
Access the automation controller node, gather the secret key, and add it to the controller_secret_key value in the secrets.yml file.
$ podman secret inspect --showsecret --format "{{.SecretData}}" controller_secret_key
Access the automation hub node, gather the secret key, and add it to the hub_secret_key value in the secrets.yml file.
$ podman secret inspect --showsecret --format "{{.SecretData}}" hub_secret_key
Access the automation hub node, gather the database_fields.symmetric.key value, and add it to the hub_db_fields_encryption_key value in the secrets.yml file.
$ podman secret inspect --showsecret --format "{{.SecretData}}" hub_database_fields
Access the platform gateway node, gather the secret key, and add it to the gateway_secret_key value in the secrets.yml file.
$ podman secret inspect --showsecret --format "{{.SecretData}}" gateway_secret_key
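If every component runs on the same node, the four values can be appended to the artifact's secrets.yml in one pass; a minimal sketch using the same podman secret names as above (in an enterprise topology, run the relevant line on each component node instead and copy the results into secrets.yml on the gateway node):
cd /tmp/backups/artifact
echo "controller_secret_key: $(podman secret inspect --showsecret --format '{{.SecretData}}' controller_secret_key)" >> secrets.yml
echo "hub_secret_key: $(podman secret inspect --showsecret --format '{{.SecretData}}' hub_secret_key)" >> secrets.yml
echo "hub_db_fields_encryption_key: $(podman secret inspect --showsecret --format '{{.SecretData}}' hub_database_fields)" >> secrets.yml
echo "gateway_secret_key: $(podman secret inspect --showsecret --format '{{.SecretData}}' gateway_secret_key)" >> secrets.yml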
Export automation controller custom configurations.
If any extra_settings exist in your containerized installation inventory, copy them into a new file and save it under /tmp/backups/artifact/controller/custom_configs.
Package the artifact.
# cd /tmp/backups/artifact/
# [ -f sha256sum.txt ] && rm -f sha256sum.txt; find . -type f -name "*.pgc" -exec sha256sum {} \; >> sha256sum.txt
# cat sha256sum.txt
# cd /tmp/backups/
# tar cf artifact.tar artifact
# sha256sum artifact.tar > artifact.tar.sha256
# sha256sum --check artifact.tar.sha256
# tar tvf artifact.tar
Example output of tar tvf artifact.tar:
drwxr-xr-x ansible/ansible        0 2025-05-08 16:48 artifact/
drwxr-xr-x ansible/ansible        0 2025-05-08 16:33 artifact/controller/
-rw-r--r-- ansible/ansible   732615 2025-05-08 16:26 artifact/controller/controller.pgc
drwxr-xr-x ansible/ansible        0 2025-05-08 16:33 artifact/controller/custom_configs/
drwxr-xr-x ansible/ansible        0 2025-05-08 16:11 artifact/gateway/
-rw-r--r-- ansible/ansible   231155 2025-05-08 16:28 artifact/gateway/gateway.pgc
drwxr-xr-x ansible/ansible        0 2025-05-08 16:26 artifact/hub/
-rw-r--r-- ansible/ansible 29252002 2025-05-08 16:26 artifact/hub/hub.pgc
-rw-r--r-- ansible/ansible      614 2025-05-08 16:24 artifact/secrets.yml
-rw-r--r-- ansible/ansible      338 2025-05-08 16:48 artifact/sha256sum.txt
Download the artifact.tar and artifact.tar.sha256 to your local machine or transfer them to the target node with the scp command.
6.2.3. Creating and verifying the migration artifact
To create and verify the migration artifact, follow the instructions in Migration artifact structure and verification.
Chapter 7. Target environment
Prepare, configure, and validate your target Ansible Automation Platform environment.
7.1. Container-based Ansible Automation Platform
Prepare and assess your target container-based Ansible Automation Platform environment, and import and reconcile your migrated content.
7.1.1. Preparing and assessing the target environment
To prepare your target environment, perform the following steps.
Procedure
- Validate the file system home folder size and make sure it has enough space to transfer the artifact.
- Transfer the artifact to the nodes where you will be working by using scp or any preferred file transfer method. It is recommended that you work from the platform gateway node as it will have access to most systems. However, if you have access or file system space limitations due to the PostgreSQL dumps, then work from the database node.
- Download the latest version of containerized Ansible Automation Platform from the Ansible Automation Platform download page.
- Validate the artifact checksum.
Extract the artifact in the home folder of the user running the containers.
$ cd ~
$ sha256sum --check artifact.tar.sha256
$ tar xf artifact.tar
$ cd artifact
$ sha256sum --check sha256sum.txt
Generate the inventory file for the containerized deployment.
Configure the inventory file to match the same topology as the source environment. Configure the component database names and the secret_key values seen in the secrets.yml file from the artifact. You can do this either by setting the extra variables in the inventory file or by using the secrets.yml file as an additional variables file when running the installation program.
Option 1: Extra variables in the inventory file
$ egrep 'pg_database|secret_key|_hub_database_fields' inventory
controller_pg_database=<redacted>
controller_secret_key=<redacted>
gateway_pg_database=<redacted>
gateway_secret_key=<redacted>
hub_pg_database=<redacted>
hub_secret_key=<redacted>
_hub_database_fields=<redacted>
Note: The _hub_database_fields value comes from the hub_db_fields_encryption_key value in your secret. A sketch of an inventory excerpt with these variables follows Option 2 below.
Option 2: Additional variables file
$ ansible-playbook -i inventory ansible.containerized_installer.install -e @~/artifact/secrets.yml -e "_hub_database_fields='{{ hub_db_fields_encryption_key }}'"
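For Option 1, a minimal sketch of how the extra variables might be laid out in the inventory file, assuming they are set under [all:vars]; every value shown is a placeholder to replace with the corresponding entry from the artifact's secrets.yml:
[all:vars]
controller_pg_database=<controller_pg_database from secrets.yml>
controller_secret_key=<controller_secret_key from secrets.yml>
gateway_pg_database=<gateway_pg_database from secrets.yml>
gateway_secret_key=<gateway_secret_key from secrets.yml>
hub_pg_database=<hub_pg_database from secrets.yml>
hub_secret_key=<hub_secret_key from secrets.yml>
_hub_database_fields=<hub_db_fields_encryption_key from secrets.yml>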
- Install and configure the containerized target environment.
- Verify that the PostgreSQL database version is version 15.
Create a backup of the initial containerized environment.
$ ansible-playbook -i <path_to_inventory> ansible.containerized_installer.backup
Ensure the fresh installation is functional.
7.1.2. Importing the migration content to the target environment
To import your migration content into the target environment, stop the containerized services, import the database dumps, and then restart the services.
Procedure
Stop the containerized services, except the database.
On all nodes, if Performance Co-Pilot is configured, run the following command:
$ systemctl --user stop pcp
Access the automation controller node and run:
$ systemctl --user stop automation-controller-task automation-controller-web automation-controller-rsyslog
$ systemctl --user stop receptor
Access the automation hub node and run:
$ systemctl --user stop automation-hub-api automation-hub-content automation-hub-web automation-hub-worker-1 automation-hub-worker-2
Access the Event-Driven Ansible node and run:
$ systemctl --user stop automation-eda-scheduler automation-eda-daphne automation-eda-web automation-eda-api automation-eda-worker-1 automation-eda-worker-2 automation-eda-activation-worker-1 automation-eda-activation-worker-2
Access the platform gateway node and run:
$ systemctl --user stop automation-gateway automation-gateway-proxy
Access the platform gateway node when using standalone Redis, or all nodes from the Redis group in your inventory file when using clustered Redis, and run:
$ systemctl --user stop redis-unix redis-tcp
Note: In an enterprise deployment, the components run on different nodes. Run the commands on each component node.
Import database dumps to the containerized environment.
If you are using an Ansible Automation Platform managed database, you must create a temporary container to run the psql and pg_restore commands. Run this command from the database node.
$ podman run -it --rm --name postgresql_restore_temp --network host --volume ~/aap/tls/extracted:/etc/pki/ca-trust/extracted:z --volume ~/aap/postgresql/server.crt:/var/lib/pgsql/server.crt:ro,z --volume ~/aap/postgresql/server.key:/var/lib/pgsql/server.key:ro,z --volume ~/artifact:/var/lib/pgsql/backups:ro,z registry.redhat.io/rhel8/postgresql-15:latest bash
Note: The command above opens a shell inside the container named postgresql_restore_temp with the artifact mounted at /var/lib/pgsql/backups. Additionally, it mounts the PostgreSQL certificates to ensure that you can resolve the correct certificates.
The command assumes the image registry.redhat.io/rhel8/postgresql-15:latest is available. If you are missing the image, check the available images for the user with podman images.
It also assumes that the artifact is located in the current user's home folder. If the artifact is located elsewhere, replace ~/artifact with the required path.
If you are using a customer-provided (external) database, you can run the psql and pg_restore commands from any node that has these commands installed and that has access to the database. Reach out to your database administrator if you are unsure.
From inside the container, access the database and ensure the users have the CREATEDB role.
bash-4.4$ psql -h <pg_hostname> -U postgres
postgres=# \l
                                    List of databases
          Name           |     Owner     | Encoding |   Collate   |    Ctype    | Access privileges
-------------------------+---------------+----------+-------------+-------------+-------------------
 automationedacontroller | eda           | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 automationhub           | automationhub | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 awx                     | awx           | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 gateway                 | gateway       | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
(4 rows)
For each component name, add the CREATEDB role to the Owner. For example:
postgres=# ALTER ROLE awx WITH CREATEDB;
postgres=# \q
Replace awx with the database owner.
With the CREATEDB role in place, access the path where the artifact is mounted, and run the pg_restore commands.
bash$ cd /var/lib/pgsql/backups
bash$ pg_restore --clean --create --no-owner -h <pg_hostname> -U <component_pg_user> -d template1 <component>/<component>.pgc
After the restore, remove the permissions from the user. For example:
postgres=# ALTER ROLE awx WITH NOCREATEDB;
postgres=# \q
Replace awx with each user containing the role.
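For illustration, the restore sequence repeated for all three components, assuming the default database users and names shown in the \l output above; the database host is a placeholder:
bash$ cd /var/lib/pgsql/backups
bash$ pg_restore --clean --create --no-owner -h db01.example.org -U awx -d template1 controller/controller.pgc
bash$ pg_restore --clean --create --no-owner -h db01.example.org -U automationhub -d template1 hub/hub.pgc
bash$ pg_restore --clean --create --no-owner -h db01.example.org -U gateway -d template1 gateway/gateway.pgc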
Start the containerized services, except the database.
On all nodes, if Performance Co-Pilot is configured, run the following command:
$ systemctl --user start pcp
Access the automation controller node and run:
$ systemctl --user start automation-controller-task automation-controller-web automation-controller-rsyslog
$ systemctl --user start receptor
Access the automation hub node and run:
$ systemctl --user start automation-hub-api automation-hub-content automation-hub-web automation-hub-worker-1 automation-hub-worker-2
Access the Event-Driven Ansible node and run:
$ systemctl --user start automation-eda-scheduler automation-eda-daphne automation-eda-web automation-eda-api automation-eda-worker-1 automation-eda-worker-2 automation-eda-activation-worker-1 automation-eda-activation-worker-2
Access the platform gateway node and run:
$ systemctl --user start automation-gateway automation-gateway-proxy
Access the platform gateway node when using standalone Redis, or all nodes from the Redis group in your inventory when using clustered Redis, and run:
$ systemctl --user start redis-unix redis-tcp
Note: In an enterprise deployment, the components run on different nodes. Run the commands on each component node.
7.1.3. Reconciling the target environment post-import
Perform the following post-import reconciliation steps to ensure your target environment is fully functional and correctly configured.
Procedure
Deprovision the platform gateway configuration.
SSH to the host serving a platform gateway container as the same rootless user used in the source environment export, and run the following commands to remove the platform gateway proxy configuration:
$ podman exec -it automation-gateway bash
$ aap-gateway-manage migrate
$ aap-gateway-manage shell_plus
>>> HTTPPort.objects.all().delete(); ServiceNode.objects.all().delete(); ServiceCluster.objects.all().delete()
Transfer custom configurations and settings.
Edit the inventory file and apply any relevant extra_settings to each component by using the component_extra_settings variables.
- Re-run the installation program on the target environment by using the same inventory from the installation.
Validate instances for automation execution.
SSH to the host serving an automation-controller-task container as the rootless user, and run the following commands to validate and remove instances that are orphaned from the source artifact:
$ podman exec -it automation-controller-task bash
$ awx-manage list_instances
Find nodes that are no longer part of this cluster. A good indicator is nodes with 0 capacity as they have failed their health checks:
[ungrouped capacity=0]
    [DISABLED] node1.example.org capacity=0 node_type=hybrid version=X.Y.Z heartbeat="..."
    [DISABLED] node2.example.org capacity=0 node_type=execution version ansible-runner-X.Y.Z heartbeat="..."
Remove those nodes with awx-manage, leaving only the aap-controller-task instance:
$ awx-manage deprovision_instance --host=node1.example.org
$ awx-manage deprovision_instance --host=node2.example.org
Repair orphaned automation hub content links for Pulp.
Run the following command from any host that has direct access to the automation hub address:
curl -d '{"verify_checksums": true}' -X POST -k https://<gateway_url>/api/galaxy/pulp/api/v3/repair/ -u <gateway_admin_user>:<gateway_admin_password>
$ curl -d '{"verify_checksums": true}' -X POST -k https://<gateway_url>/api/galaxy/pulp/api/v3/repair/ -u <gateway_admin_user>:<gateway_admin_password>
Copy to Clipboard Copied! Reconcile instance groups configuration:
- Go to the Instance Groups page.
- Select the Instance Group and then select the Instances tab.
- Associate or disassociate instances as required.
Reconcile decision environments and credentials:
- Go to the Decision Environments page.
- Edit each decision environment that references a registry URL that is unrelated to, or no longer accessible from, this new environment. For example, the automation hub decision environment might require modification for the target automation hub environment.
- Select each credential associated with these decision environments and ensure that their addresses align with the new environment.
Reconcile execution environments and credentials:
- Go to the Execution Environments page.
- Check each execution environment image and verify its address against the new environment.
- Go to the Credentials page.
- Edit each credential and ensure that all environment-specific information aligns with the new environment.
- Verify any further customizations or configurations after the migration, such as RBAC rules with instance groups.
7.1.4. Validating the target environment
After completing the migration, validate your target environment to ensure all components are functional and operating as expected.
Procedure
Verify all migrated components are functional.
To ensure that all components have been successfully migrated, verify that each component is operational and accessible:
- Platform gateway: Access the Ansible Automation Platform URL at https://<gateway_hostname>/ and verify that the dashboard loads correctly. Check that the platform gateway service is running and properly connected to automation controller.
- Automation controller: Under Automation Execution, check that projects, inventories, and job templates are present and properly configured.
- Automation hub: Under Automation Content, verify that collections, namespaces, and their contents are visible.
- Event-Driven Ansible (if applicable): Under Automation Decisions, verify that rule audits, rulebook activations, and projects are accessible.
For each component, check the logs to ensure there are no startup errors or warnings:
$ podman logs <container_name>
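Beyond the UI checks, a quick command-line sanity check can confirm that the containers are up and the controller API responds through the platform gateway; a minimal sketch, assuming the controller ping endpoint is proxied under /api/controller/ on the gateway:
$ podman ps --format "{{.Names}}\t{{.Status}}"
$ curl -k https://<gateway_hostname>/api/controller/v2/ping/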
Test workflows and automation processes.
After you have confirmed that all components are functional, test critical automation workflows to ensure they operate correctly in the containerized environment:
- Run job templates: Run several key job templates, including those with dependencies on various credential types.
- Test workflow templates: Run workflow templates to ensure that workflow nodes run in the correct order and that the workflow completes successfully.
- Verify execution environments: Ensure that jobs run in the appropriate execution environments and can access required dependencies.
- Check job artifacts: Verify that job artifacts are properly stored and accessible.
- Validate job scheduling: Test scheduled jobs to ensure they run at the expected times.
Validate user access and permissions.
Confirm that user accounts, teams, and roles were correctly migrated:
- User authentication: Test login functionality with various user accounts to ensure authentication works correctly.
- Role-based access controls: Verify that users have appropriate permissions for organizations, projects, inventories, and job templates.
- Team memberships: Confirm that team memberships and team-based permissions are intact.
- API access: Test API tokens and ensure that API access is functioning properly.
- SSO integration (if applicable): Verify that Single Sign-On authentication is working correctly.
Confirm content synchronization and availability.
Ensure that all content sources are properly configured and accessible:
- Collection synchronization: Check that you can synchronize collections from a remote.
- Collection Upload: Check that you can upload collections.
- Collection repositories: Verify that collections are available in automation hub and can be used in execution environments.
- Project synchronization: Check that projects can sync content from source control repositories.
- External content sources: Test synchronization from automation hub and Ansible Galaxy (if configured).
- Execution environment availability: Confirm that all required execution environments are available and can be accessed by the execution nodes.
- Content dependencies: Verify that content dependencies are correctly resolved when running jobs.
7.2. OpenShift Container Platform
Prepare and assess your target OpenShift Container Platform environment, and import and reconcile your migrated content.
7.2.1. Preparing and assessing the target environment
To prepare and assess your target environment, perform the following steps.
Procedure
- Configure Ansible Automation Platform Operator for an Ansible Automation Platform deployment.
- Set up the database configuration (internal or external).
- Set up the Redis configuration (internal or external).
- Install Ansible Automation Platform using Ansible Automation Platform Operator.
- Create a backup of the initial OpenShift Container Platform deployment.
- Verify the fresh installation is functional.
7.2.2. Importing the migration content to the target environment
To import your environment, scale down Ansible Automation Platform components, restore databases, replace encryption secrets, and scale services back up.
This guide assumes that you have the latest version of Ansible Automation Platform, deployed as an instance named 'aap' in the default 'aap' namespace, with all default database names and database users.
Procedure
Begin by scaling down the Ansible Automation Platform deployment using idle_aap.
oc patch ansibleautomationplatform aap --type merge -p '{"spec":{"idle_aap":true}}'
Wait for the component pods to stop. Only the 6 Operator pods will remain running.
NAME                                                               READY  STATUS     RESTARTS  AGE
pod/aap-controller-migration-4.6.13-5swc6                          0/1    Completed  0         160m
pod/aap-gateway-operator-controller-manager-6b75c95458-4zrxv       2/2    Running    0         26h
pod/ansible-lightspeed-operator-controller-manager-b674c55b8-qncjp 2/2    Running    0         45h
pod/automation-controller-operator-controller-manager-6b79d48d4cchn 2/2   Running    0         45h
pod/automation-hub-operator-controller-manager-5cd674c984-5njfj    2/2    Running    0         45h
pod/eda-server-operator-controller-manager-645f4db5-d2flt          2/2    Running    0         45h
pod/resource-operator-controller-manager-86b8f7bb54-cvz6d          2/2    Running    0         45h
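One way to watch the pods drain is with a watch on the namespace; a minimal sketch assuming the 'aap' namespace used throughout this section:
oc get pods -n aap --watch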
Scale down the Ansible Automation Platform Gateway Operator and Ansible Automation Platform Controller Operator.
oc scale --replicas=0 deployment aap-gateway-operator-controller-manager automation-controller-operator-controller-manager
Example output:
deployment.apps/aap-gateway-operator-controller-manager scaled
deployment.apps/automation-controller-operator-controller-manager scaled
Scale up the idled Postgres StatefulSet.
oc scale --replicas=1 statefulset.apps/aap-postgres-15
Create a temporary Persistent Volume Claim (PVC) with appropriate settings and sizing.
aap-temp-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: aap-temp-pvc
  namespace: aap
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi
oc create -f aap-temp-pvc.yaml
Obtain the existing PostgreSQL image to use for the temporary deployment.
echo $(oc get pod/aap-postgres-15-0 -o jsonpath="{.spec.containers[*].image}")
echo $(oc get pod/aap-postgres-15-0 -o jsonpath="{.spec.containers[*].image}")
Create a temporary PostgreSQL deployment with the temporary PVC mounted.
aap-temp-postgres.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: aap-temp-postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aap-temp-postgres
  template:
    metadata:
      labels:
        app: aap-temp-postgres
    spec:
      containers:
        - name: aap-temp-postgres
          image: <postgres image from previous step>
          command:
            - /bin/sh
            - '-c'
            - sleep infinity
          imagePullPolicy: Always
          securityContext:
            runAsNonRoot: true
            allowPrivilegeEscalation: false
          volumeMounts:
            - name: aap-temp-pvc
              mountPath: /tmp/aap-temp-pvc
      volumes:
        - name: aap-temp-pvc
          persistentVolumeClaim:
            claimName: aap-temp-pvc
oc create -f aap-temp-postgres.yaml
Copy the export artifact to the temporary PostgreSQL pod.
First, obtain the pod name and set it as an environment variable:
export AAP_TEMP_POSTGRES=$(oc get pods --no-headers -o custom-columns=":metadata.name" | grep aap-temp-postgres)
Test the environment variable:
echo $AAP_TEMP_POSTGRES
Example output:
aap-temp-postgres-7b6c57f87f-s2ldp
Copy the artifact and checksum to the PVC:
oc cp artifact.tar $AAP_TEMP_POSTGRES:/tmp/aap-temp-pvc/
oc cp artifact.tar.sha256 $AAP_TEMP_POSTGRES:/tmp/aap-temp-pvc/
Restore the databases to Ansible Automation Platform PostgreSQL by using the temporary PostgreSQL pod.
First, obtain the PostgreSQL passwords for all three databases and the PostgreSQL admin password:
echo
for secret in aap-controller-postgres-configuration aap-hub-postgres-configuration aap-gateway-postgres-configuration
do
  echo $secret
  echo "PASSWORD: `oc get secrets $secret -o jsonpath="{.data['password']}" | base64 -d`"
  echo "USER: `oc get secrets $secret -o jsonpath="{.data['username']}" | base64 -d`"
  echo "DATABASE: `oc get secrets $secret -o jsonpath="{.data['database']}" | base64 -d`"
  echo
done && echo "POSTGRES ADMIN PASSWORD: `oc get secrets aap-gateway-postgres-configuration -o jsonpath="{.data['postgres_admin_password']}" | base64 -d`"
Enter the temporary PostgreSQL deployment and change directory to the mounted PVC containing the copied artifact:
oc exec -it deployment.apps/aap-temp-postgres /bin/bash
Inside the pod, change directory to /tmp/aap-temp-pvc and list its contents:
cd /tmp/aap-temp-pvc && ls -l
Example output:
total 2240
-rw-r--r--  1 1000900000 1000900000 2273280 Jun 13 17:41 artifact.tar
-rw-r--r--  1 1000900000 1000900000      79 Jun 13 17:42 artifact.tar.sha256
drwxrws---. 2 root       1000900000   16384 Jun 13 17:40 lost+found
Verify the archive:
sha256sum --check artifact.tar.sha256
Example output:
artifact.tar: OK
Extract the artifact and verify its contents:
tar xf artifact.tar && cd artifact && sha256sum --check sha256sum.txt
Example output:
./controller/controller.pgc: OK
./gateway/gateway.pgc: OK
./hub/hub.pgc: OK
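The database commands in the next steps connect to the aap-postgres-15 host and prompt for a password. Optionally, you can export the relevant password obtained earlier in the PGPASSWORD environment variable so the client tools do not prompt. This is a sketch with a placeholder value:
# Hypothetical convenience: avoid interactive password prompts for the next commands
export PGPASSWORD='<password for the automationcontroller user from the earlier step>'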
Drop the automation controller database:
dropdb -h aap-postgres-15 automationcontroller
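The ALTER USER statements in the next steps are run from a psql session as the postgres superuser. A minimal sketch of opening that session, assuming the PostgreSQL admin password obtained earlier:
# Connect to the Ansible Automation Platform database host as the postgres superuser
psql -h aap-postgres-15 -U postgres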
Alter the user temporarily with the CREATEDB role:
postgres=# ALTER USER automationcontroller WITH CREATEDB;
Create the database:
createdb -h aap-postgres-15 -U automationcontroller automationcontroller
Revert the temporary user permission:
postgres=# ALTER USER automationcontroller WITH NOCREATEDB;
Restore the automation controller database:
pg_restore --clean --if-exists --no-owner -h aap-postgres-15 -U automationcontroller -d automationcontroller controller/controller.pgc
Restore the automation hub database:
pg_restore --clean --if-exists --no-owner -h aap-postgres-15 -U automationhub -d automationhub hub/hub.pgc
Restore the platform gateway database:
pg_restore --clean --if-exists --no-owner -h aap-postgres-15 -U gateway -d gateway gateway/gateway.pgc
Exit the pod:
exit
Replace the database field encryption secrets.
oc set data secret/aap-controller-secret-key secret_key="<unencoded controller_secret_key value from secrets.yml>"
oc set data secret/aap-db-fields-encryption-secret secret_key="<unencoded gateway_secret_key value from secrets.yml>"
oc set data secret/aap-hub-db-fields-encryption database_fields.symmetric.key="<unencoded hub_db_fields_encryption_key value from secrets.yml>"
Clean up the temporary PostgreSQL deployment and PVC.
oc delete -f aap-temp-postgres.yaml
oc delete -f aap-temp-pvc.yaml
Scale the platform gateway and automation controller Operators back up and wait for the platform gateway Operator reconciliation loop to complete.
The PostgreSQL StatefulSet returns to idle.
oc scale --replicas=1 deployment aap-gateway-operator-controller-manager automation-controller-operator-controller-manager
Example output:
deployment.apps/aap-gateway-operator-controller-manager scaled
deployment.apps/automation-controller-operator-controller-manager scaled
Monitor the platform gateway Operator logs:
oc logs -f $(oc get pods --no-headers -o custom-columns=":metadata.name" | grep aap-gateway-operator)
Wait for the reconciliation to stop.
Example output:
META: ending play
{"level":"info","ts":"2025-06-12T15:41:29Z","logger":"runner","msg":"Ansible-runner exited successfully","job":"5672263053238024330","name":"aap","namespace":"aap"}
PLAY RECAP ***********
localhost : ok=45 changed=0 unreachable=0 failed=0 skipped=63 rescued=0 ignored=0
Scale Ansible Automation Platform back up using idle_aap.
oc patch ansibleautomationplatform aap --type merge -p '{"spec":{"idle_aap":false}}'
Example output:
ansibleautomationplatform.aap.ansible.com/aap patched
Wait for the aap-gateway pod to be running, and then clean up the old service endpoints.
Wait for the pod to be running.
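You can check the pod status with a command such as the following (a sketch, assuming the default 'aap' namespace):
oc get pods -n aap | grep aap-gateway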
Example output:
pod/aap-gateway-6c989b846c-47b91 2/2 Running 0 45s
Clean up the old service endpoints:
for i in HTTPPort Route ServiceNode; do oc exec -it deployment.apps/aap-gateway aap-gateway-manage shell -c 'from aap_gateway_api.models import '$i'; print('$i'.objects.all().delete())'; done
Example output:
(23, {'aap_gateway_api.ServiceAPIRoute': 4, 'aap_gateway_api.AdditionalRoute': 7, 'aap_gateway_api.Route': 11, 'aap_gateway_api.HTTPPort': 1})
(0, {})
(4, {'aap_gateway_api.ServiceNode': 4})
Run awx-manage to deprovision instances.
Obtain the automation controller pod:
export AAP_CONTROLLER_POD=$(oc get pods --no-headers -o custom-columns=":metadata.name" | grep aap-controller-task)
Test the environment variable:
echo $AAP_CONTROLLER_POD
Example output:
aap-controller-task-759b6d9759-r59q9
Enter the automation controller pod and list the instances:
oc exec -it $AAP_CONTROLLER_POD /bin/bash
awx-manage list_instances
Example output:
[controlplane capacity=642 policy=100%]
    aap-controller-task-759b6d9759-r59q9 capacity=642 node_type=control version=4.6.15 heartbeat="2025-06-12 21:39:48"
    node1.example.org capacity=0 node_type=hybrid version=4.6.13 heartbeat="2025-05-30 17:22:11"
[default capacity=0 policy=100%]
    node1.example.org capacity=0 node_type=hybrid version=4.6.13 heartbeat="2025-05-30 17:22:11"
    node2.example.org capacity=0 node_type=execution version ansible-runner-2.4.1 heartbeat="2025-05-30 17:22:08"
Remove the old nodes with awx-manage, leaving only aap-controller-task:
awx-manage deprovision_instance --host=node1.example.org
awx-manage deprovision_instance --host=node2.example.org
Run the curl command to repair the automation hub filesystem data.
curl -d '{"verify_checksums": true}' -X POST -k https://<aap_url>/api/galaxy/pulp/api/v3/repair/ -u <admin_user>:<restored_admin_password>
7.2.3. Reconciling the target environment post-import
After importing your migration artifact, perform the following steps to reconcile your target environment.
Procedure
- Modify the Django SECRET_KEY secrets to match the source platform (see the verification sketch after this list).
- Deprovision and reconfigure platform gateway service nodes.
- Re-run the platform gateway node and service registration logic.
- Convert container-specific settings to OpenShift Container Platform-appropriate formats.
- Reconcile container resource allocations to OpenShift Container Platform resources.
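For instance, a quick way to confirm that the Django SECRET_KEY secrets now match the source platform is to print the in-cluster values and compare them with secrets.yml. This is a minimal sketch, assuming the default secret and key names used earlier in this guide:
# Print the controller SECRET_KEY currently stored in the cluster;
# compare it with the controller_secret_key value in secrets.yml
oc get secret/aap-controller-secret-key -n aap -o jsonpath="{.data.secret_key}" | base64 -d; echo

# Print the platform gateway database fields encryption key;
# compare it with the gateway_secret_key value in secrets.yml
oc get secret/aap-db-fields-encryption-secret -n aap -o jsonpath="{.data.secret_key}" | base64 -d; echo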
7.2.4. Validating the target environment
To validate your migrated environment, perform the following steps.
Procedure
- Verify all migrated components are functional (see the sketch after this list).
- Test workflows and automation processes.
- Validate user access and permissions.
- Confirm content synchronization and availability.
- Test integration with OpenShift Container Platform-specific features.
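As a starting point for these checks, you can probe the platform APIs from the command line. This is a sketch only; <aap_url> is the platform gateway URL, and the exact ping endpoints can vary between versions:
# Confirm the platform gateway API responds
curl -k https://<aap_url>/api/gateway/v1/ping/

# Confirm the automation controller API responds through the gateway
curl -k https://<aap_url>/api/controller/v2/ping/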
7.3. Managed Ansible Automation Platform
Prepare and migrate your source environment to a Managed Ansible Automation Platform deployment, and reconcile the target environment post-migration.
7.3.1. Migrating to Managed Ansible Automation Platform
Prerequisites
- You have a migration artifact from your source environment.
Procedure
Submit a support ticket on the Red Hat Customer Portal requesting a migration to Managed Ansible Automation Platform.
The support ticket should include:
- Source installation type (RPM, Containerized, OpenShift)
- Managed Ansible Automation Platform URL or deployment name
- Source version (installer or Operator version)
- The Ansible Site Reliability Engineering (SRE) team provides instructions in the support ticket on how to upload the resulting migration artifact to secure storage for processing.
- The Ansible SRE team imports the migration artifact into the identified target instance and notifies the customer through the support ticket.
- The Ansible SRE team notifies customers of successful migration.
7.3.2. Reconciling the target environment post-migration
After a successful migration, perform the following tasks:
Procedure
- Log in to the Managed Ansible Automation Platform instance by using the local administrator account to confirm that data was properly imported.
You might need to perform the following actions based on the configuration of the source deployment:
- Reconfigure SSO authenticators and mappings to reflect the new URLs.
- Update private automation hub content to reflect the new URLs.
Run the following command to update the automation hub repositories:
curl -d '{"verify_checksums": true}' -X POST -k https://<platform_url>/api/galaxy/pulp/api/v3/repair/ -u <admin_user>:<admin_password>
- Perform a sync on any repositories configured in automation hub.
- Push any custom execution environments from the source automation hub to the target automation hub (see the sketch after this list).
- Reconfigure automation mesh.
- Following migration, you can open support tickets to request standard SRE tasks, such as configuring custom certificates, a custom domain, or connectivity through private endpoints.
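For the execution environment step above, the transfer is typically done with podman. The following is a sketch only; the registry URLs, image name, and tag are placeholders for your environment:
# Log in to the source and target automation hub container registries
podman login <source_hub_url>
podman login <target_hub_url>

# Pull the custom execution environment from the source hub,
# retag it for the target hub, and push it
podman pull <source_hub_url>/custom-ee:latest
podman tag <source_hub_url>/custom-ee:latest <target_hub_url>/custom-ee:latest
podman push <target_hub_url>/custom-ee:latest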