Ansible Automation Platform migration
Migrate your deployment of Ansible Automation Platform from one installation type to another
Abstract
Providing feedback on Red Hat documentation
If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request.
Chapter 1. Introduction and objectives
Learn about supported migration paths between RPM-based, container-based, OpenShift Container Platform, and Managed Ansible Automation Platform deployments, including step-by-step workflows and migration requirements.
Migration between different Ansible Automation Platform deployment types for Ansible Automation Platform 2.6 requires specific steps and considerations.
The supported migration paths include:
| Source environment | Target environment |
|---|---|
| RPM-based Ansible Automation Platform | Container-based Ansible Automation Platform |
| RPM-based Ansible Automation Platform | OpenShift Container Platform |
| RPM-based Ansible Automation Platform | Managed Ansible Automation Platform |
| Container-based Ansible Automation Platform | OpenShift Container Platform |
| Container-based Ansible Automation Platform | Managed Ansible Automation Platform |
Migrations outside of those listed are not supported at this time.
The Ansible Automation Platform migration guide aims to:
- Document all components and configurations that require migration between Ansible Automation Platform deployment types
- Provide step-by-step migration workflows for different deployment scenarios
- Identify potential challenges and unknowns that require further investigation
Chapter 2. Out of scope
Understand which Ansible Automation Platform components and configurations require manual recreation in the target environment and are not covered by the migration process.
The Ansible Automation Platform migration guide focuses on the core components of Ansible Automation Platform. The following items are currently out of scope for this migration process:
- Event-Driven Ansible: Manually recreate configuration and content for Event-Driven Ansible in the target environment.
- Instance groups: Manually recreate instance group configurations after migration.
- Hub content: Manually re-import or reconfigure content hosted in automation hub.
- Custom Certificate Authority (CA) for receptor mesh: Manually reconfigure custom CA configurations for receptor mesh.
- Disconnected environments: The migration process does not cover disconnected environments.
- Execution environments (other than the default one): Manually rebuild or re-import custom execution environments.
Manually re-create, import, or configure these items in the target environment.
Chapter 3. Migration process overview
Understand the complete migration workflow including preparation, export, artifact creation, import, reconciliation, and validation steps for moving between Ansible Automation Platform installation types.
You can only migrate to a different installation type of the same Ansible Automation Platform version. For example, you can migrate from RPM version 2.6 to containerized 2.6, but not from RPM version 2.4 to containerized 2.6.
The migration between Ansible Automation Platform installation types follows this general workflow:
- Prepare and assess the source environment
- Export the source environment
- Create and verify the migration artifact
- Prepare and assess the target environment
- Import the migration content to the target environment
- Reconcile the target environment post-import
- Validate the target environment
Chapter 4. Migration prerequisites
Before migrating your Ansible Automation Platform deployment, ensure that you meet all necessary conditions for your specific migration path.
4.1. RPM to containerized migration prerequisites
Before migrating from an RPM-based deployment to a container-based deployment, ensure you meet the following prerequisites:
- You have a source RPM-based deployment of Ansible Automation Platform.
- The source RPM-based deployment is on the latest async release of the version you are on.
- You have a target environment prepared for a container-based deployment of Ansible Automation Platform.
- You have downloaded the containerized installation program for the latest release of the Ansible Automation Platform version you are on.
- You have enough storage for database dumps and backups.
- There is network connectivity between the source and target environments.
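For example, a minimal connectivity check from the source environment might look like the following. The host name and ports are placeholders; substitute the values that apply to your topology.

$ ping -c 3 target.example.com        # basic reachability
$ nc -zv target.example.com 22        # SSH, used to transfer the migration artifact
$ nc -zv target.example.com 443       # HTTPS, platform UI and API
$ nc -zv target.example.com 5432      # PostgreSQL, only if the database must be reachable across environments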
4.2. RPM to OpenShift Container Platform migration prerequisites
Before migrating from an RPM-based deployment to an OpenShift Container Platform deployment, ensure you meet the following prerequisites:
- You have a source RPM-based deployment of Ansible Automation Platform.
- The source RPM-based deployment is on the latest async release of the version you are on.
- You have a target OpenShift Container Platform environment ready.
- You have Ansible Automation Platform Operator available for the latest release of the Ansible Automation Platform version you are on.
- You have made a decision on internal or external database configuration.
- You have made a decision on internal or external Redis configuration.
- There is network connectivity between the source and target environments.
4.3. RPM to Managed Ansible Automation Platform migration prerequisites
Before migrating from an RPM-based deployment to a Managed Ansible Automation Platform deployment, ensure you meet the following prerequisites:
- You have a source RPM-based deployment of Ansible Automation Platform.
- The source deployment is on the latest release of the Ansible Automation Platform version you are on.
- You have a target Managed Ansible Automation Platform deployment.
- You have enabled local authentication on the source deployment before the migration.
- A local administrator account must be functional on the source deployment before migration. Verify this by performing a successful login to the source deployment.
- You have a plan to retain a backup throughout the migration process and to ensure that your existing Ansible Automation Platform deployment remains active until your migration has completed successfully.
You have a plan for any environment changes based on the migration from a self-hosted Ansible Automation Platform deployment to a Managed Ansible Automation Platform deployment:
- Job log retention changes from a customer-configured option to 30 days.
- Network changes occur when moving the control plane to the managed service.
- Automation mesh requires reconfiguration.
- You must reconfigure or re-create Single Sign-On (SSO) identity providers post-migration to account for URL changes.
4.4. Containerized to OpenShift Container Platform migration prerequisites
Before migrating from a container-based deployment to an OpenShift Container Platform deployment, ensure that you meet the following prerequisites:
- You have a source container-based deployment of Ansible Automation Platform.
- The source deployment is on the latest async release of the version you are on.
- You have a target OpenShift Container Platform environment ready.
- You have an Ansible Automation Platform Operator available for the latest release of the Ansible Automation Platform version you are on.
- You have decided between internal or external database configuration.
- You have decided between internal or external Redis configuration.
- There is network connectivity between the source and target environments.
4.5. Containerized to Managed Ansible Automation Platform migration prerequisites
Before migrating from a container-based deployment to a Managed Ansible Automation Platform deployment, ensure that you meet the following prerequisites:
- You have a source container-based deployment of Ansible Automation Platform.
- The source deployment is on the latest release of the Ansible Automation Platform version you are on.
- You have a target Managed Ansible Automation Platform deployment.
- You have enabled local authentication on the source deployment before the migration.
- A local administrator account must be functional on the source deployment before migration. Verify this by performing a successful login to the source deployment.
- You have a plan to retain a backup throughout the migration process and to ensure that your existing Ansible Automation Platform deployment remains active until your migration has completed successfully.
You have a plan for any environment changes based on the migration from a self-hosted Ansible Automation Platform deployment to a Managed Ansible Automation Platform deployment:
- Job log retention changes from a customer-configured option to 30 days.
- Network changes occur when moving the control plane to the managed service.
- Automation mesh requires reconfiguration.
- You must reconfigure or re-create Single Sign-On (SSO) identity providers post-migration to account for URL changes.
Chapter 5. Migration artifact structure and verification
The migration artifact packages all necessary data and configurations from your source environment. Verify its structure and contents to ensure a successful migration.
5.1. Artifact structure
The migration artifact is a comprehensive package containing all necessary components to transfer your Ansible Automation Platform deployment.
Structure the artifact as follows:
/
  manifest.yml
  secrets.yml
  sha256sum.txt
  controller/
    controller.pgc
    custom_configs/
      foo.py
      bar.py
  gateway/
    gateway.pgc
  hub/
    hub.pgc
5.2. Manifest file
The manifest.yml file serves as the primary metadata document for the migration artifact. It contains critical versioning and component information from your source environment.
Structure the manifest as follows:
---
aap_version: X.Y        # The version being migrated
platform: rpm           # The source platform type
components:
  - name: controller
    version: x.y.z
  - name: hub
    version: x.y.z
  - name: gateway
    version: x.y.z
5.3. Secrets file
The secrets.yml file in the migration artifact includes essential Django SECRET_KEY values required for authentication between services.
Structure the secrets file as follows:
controller_pg_database: <redacted>
controller_secret_key: <redacted>
gateway_pg_database: <redacted>
gateway_secret_key: <redacted>
hub_pg_database: <redacted>
hub_secret_key: <redacted>
hub_db_fields_encryption_key: <redacted>
Ensure the secrets.yml file is encrypted and kept in a secure location.
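One way to protect the file is with Ansible Vault, assuming ansible-core is available on the node where you stage the artifact:

$ ansible-vault encrypt secrets.yml     # prompts for a vault password and encrypts the file in place
$ ansible-vault view secrets.yml        # review the values later without permanently decrypting

If you keep the file encrypted, pass --ask-vault-pass (or --vault-password-file) when you later supply it to the installation program with -e @secrets.yml.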
5.4. Migration artifact creation checklist
Use this checklist to verify the migration artifact.
Database dumps: Include complete database dumps for each component.
- Ensure the automation controller database (controller.pgc) is present in the artifact.
- Ensure the automation hub database (hub.pgc) is present in the artifact.
- Ensure the platform gateway database (gateway.pgc) is present in the artifact.

Secret dumps: Export and include all security-related information.
- Validate that all secret values are present in the secrets.yml file.

Custom configurations: Package all customizations from the source environment.
- Validate that any custom Python scripts or modules (for example, foo.py, bar.py) are present in the artifact.
- Document any non-standard configurations or environment-specific settings.
Database information: Document database details.
- Include the database names for all components.
- Document database users and required permissions.
- Note any database-specific configurations or optimizations.
Verification: Ensure artifact integrity and completeness.
- Verify that all required files are included in the artifact.
- Verify that checksums exist for all included database files.
- Test the artifact’s structure and accessibility.
- Consider encrypting the artifact for secure transfer to the target environment.
- Document any known limitations or special considerations.
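The following is a minimal verification sketch, assuming the artifact layout described in this chapter and that the artifact is staged under /tmp/backups/artifact:

$ cd /tmp/backups/artifact
$ for f in manifest.yml secrets.yml sha256sum.txt controller/controller.pgc gateway/gateway.pgc hub/hub.pgc; do [ -f "$f" ] && echo "present: $f" || echo "MISSING: $f"; done
$ sha256sum --check sha256sum.txt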
Chapter 6. Source environment
Prepare and export data from your existing Ansible Automation Platform deployment. The exported data forms a critical migration artifact, which you use to configure your new environment.
6.1. RPM-based Ansible Automation Platform
Prepare and export data from your RPM-based Ansible Automation Platform deployment.
6.1.1. Preparing and assessing the source environment
Before beginning your migration, document your current RPM deployment to use as a reference throughout the migration process and when configuring your target environment.
Procedure
Document the full topology of your current RPM deployment:
- Map out all servers, nodes, and their roles (for example control nodes, execution nodes, database servers).
- Note the hostname, IP address, and function of each server in your deployment.
- Document the network configuration between components.
Ansible Automation Platform version information:
- Record the exact Ansible Automation Platform version (X.Y) currently deployed.
Document the specific version of each component:
- Automation controller version
- Automation hub version
- Platform gateway version
Database configuration:
- Database names for each component
- Database users and roles
- Connection parameters and authentication methods
- Any custom PostgreSQL configurations or optimizations
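A short sketch of commands that can capture most of these details, assuming you can run psql as the postgres user on the database node:

$ psql -c '\l+'      # database names, owners, and sizes
$ psql -c '\du'      # database users and roles
$ psql -c "SELECT name, setting FROM pg_settings WHERE source NOT IN ('default', 'override');"   # non-default server settings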
6.1.2. Exporting the source environment
From your source environment, export the data and configurations needed for migration.
Procedure
Verify that the PostgreSQL database is version 15.
You can verify your current PostgreSQL version by connecting to your database server and running the following command as the postgres user:

$ psql -c 'SELECT version();'

Important: PostgreSQL version 15 is a strict requirement for the migration process to succeed. If running PostgreSQL 13 or earlier, upgrade to version 15 before proceeding with the migration.
If using an Ansible Automation Platform managed database, re-run the installation program to upgrade the PostgreSQL version. If using a customer provided (external) database, contact your database administrator or service provider to confirm the version and arrange for an upgrade if required.
Create a complete backup of the source environment:
$ ./setup.sh -e 'backup_dest=/path/to/backup_dir/' -b

Get the connection settings from one node in each of the component groups. For each command, access the host and become the root user.

Access the automation controller node and run:

# awx-manage print_settings | grep '^DATABASES'

Access the automation hub node and run:

# grep '^DATABASES' /etc/pulp/settings.py

Access the platform gateway node and run:

# aap-gateway-manage print_settings | grep '^DATABASES'
Stage the manually created artifact on the platform gateway node.
# mkdir -p /tmp/backups/artifact/{controller,gateway,hub}
# mkdir -p /tmp/backups/artifact/controller/custom_configs
# touch /tmp/backups/artifact/secrets.yml
# cd /tmp/backups/artifact/

Validate the database size and make sure you have enough space on the filesystem for the pg_dump. You can verify the database sizes by connecting to your database server and running the following command as the postgres user:

$ psql -c '\l+'

Adjust the filesystem size or mount an external filesystem as needed before performing the next step.

Note: These commands send all target files to the /tmp filesystem. Adjust the commands to match your environment's needs.

Perform database dumps of all components on the platform gateway node within the artifact you created.
# psql -h <pg_hostname> -U <component_pg_user> -d <database_name> -t -c 'SHOW server_version;'  # ensure connectivity to the database
# pg_dump -h <pg_hostname> -U <component_pg_user> -d <component_pg_name> --clean --create -Fc -f <component>/<component>.pgc
# ls -ld <component>/<component>.pgc
# echo "<component>_pg_database: <database_name>" >> secrets.yml  ## Add the database name for the component to the secrets file

Export secrets from the RPM environment from one node of each component group.
For each of the following steps, use the root user to run the commands.

Access the automation controller node, gather the secret key, and add it to the controller_secret_key value in the secrets.yml file:

# cat /etc/tower/SECRET_KEY

Access the automation hub node, gather the secret key, and add it to the hub_secret_key value in the secrets.yml file:

# grep '^SECRET_KEY' /etc/pulp/settings.py | awk -F'=' '{ print $2 }'

Access the automation hub node, gather the database_fields.symmetric.key value, and add it to the hub_db_fields_encryption_key value in the secrets.yml file:

# cat /etc/pulp/certs/database_fields.symmetric.key

Access the platform gateway node, gather the secret key, and add it to the gateway_secret_key value in the secrets.yml file:

# cat /etc/ansible-automation-platform/gateway/SECRET_KEY
Export automation controller custom configurations.
If any custom settings exist in /etc/tower/conf.d, copy them to /tmp/backups/artifact/controller/custom_configs.

Configuration files on automation controller that are managed by the installation program and not considered custom:

- /etc/tower/conf.d/postgres.py
- /etc/tower/conf.d/channels.py
- /etc/tower/conf.d/caching.py
- /etc/tower/conf.d/cluster_host_id.py
Package the artifact.
# cd /tmp/backups/artifact/
# [ -f sha256sum.txt ] && rm -f sha256sum.txt; find . -type f -name "*.pgc" -exec sha256sum {} \; >> sha256sum.txt
# cat sha256sum.txt
# cd ..
# tar cf artifact.tar artifact
# sha256sum artifact.tar > artifact.tar.sha256
# sha256sum --check artifact.tar.sha256
# tar tvf artifact.tar

Example output of tar tvf artifact.tar:

drwxr-xr-x ansible/ansible        0 2025-05-08 16:48 artifact/
drwxr-xr-x ansible/ansible        0 2025-05-08 16:33 artifact/controller/
-rw-r--r-- ansible/ansible   732615 2025-05-08 16:26 artifact/controller/controller.pgc
drwxr-xr-x ansible/ansible        0 2025-05-08 16:33 artifact/controller/custom_configs/
drwxr-xr-x ansible/ansible        0 2025-05-08 16:11 artifact/gateway/
-rw-r--r-- ansible/ansible   231155 2025-05-08 16:28 artifact/gateway/gateway.pgc
drwxr-xr-x ansible/ansible        0 2025-05-08 16:26 artifact/hub/
-rw-r--r-- ansible/ansible 29252002 2025-05-08 16:26 artifact/hub/hub.pgc
-rw-r--r-- ansible/ansible      614 2025-05-08 16:24 artifact/secrets.yml
-rw-r--r-- ansible/ansible      338 2025-05-08 16:48 artifact/sha256sum.txt

Download the artifact.tar and artifact.tar.sha256 files to your local machine or transfer them to the target node with the scp command.
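For example, a minimal scp transfer to the target node might look like this; the user and host names are placeholders:

$ scp artifact.tar artifact.tar.sha256 ansible@target.example.com:~/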
6.2. Container-based Ansible Automation Platform
Prepare and export data from your container-based Ansible Automation Platform deployment.
6.2.1. Preparing and assessing the source environment
Document your current containerized deployment configuration, topology, and components to create a comprehensive reference for migration.
Procedure
Document the full topology of your current containerized deployment:
- Map out all servers, nodes, and their roles (for example control nodes, execution nodes, database servers).
- Note the hostname, IP address, and function of each server in your deployment.
- Document the network configuration between components.
Ansible Automation Platform version information:
- Record the exact Ansible Automation Platform version (X.Y) currently deployed.
Document the specific version of each component:
- Automation controller version
- Automation hub version
- Platform gateway version
Database configuration:
- Database names for each component
- Database users and roles
- Connection parameters and authentication methods
- Any custom PostgreSQL configurations or optimizations
- Identify all custom configurations and settings.
- Document container resource allocations and volumes.
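A minimal sketch of podman commands that can help capture this information, assuming you run them as the user that owns the Ansible Automation Platform containers (the container name in the last command is an example):

$ podman ps --format "{{.Names}}\t{{.Image}}\t{{.Status}}"     # running containers and their images
$ podman volume ls                                             # named volumes used by the deployment
$ podman inspect automation-controller-task --format '{{json .HostConfig}}'   # resource limits and mounts for one container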
6.2.2. Exporting the source environment
Export databases, secrets, and custom configurations from your source containerized Ansible Automation Platform deployment to create the migration artifact.
Procedure
Verify that the PostgreSQL database is version 15.
You can verify your current PostgreSQL version by connecting to your database server and running the following command as the postgres user:

$ podman exec -it postgresql bash -c 'psql -c "SELECT version();"'

Important: PostgreSQL version 15 is a strict requirement for the migration process to succeed. If running PostgreSQL 13 or earlier, upgrade to version 15 before proceeding with the migration.
If using an Ansible Automation Platform managed database, re-run the installation program to upgrade the PostgreSQL version. If using a customer provided (external) database, contact your database administrator or service provider to confirm the version and arrange for an upgrade if required.
Create a complete backup of the source environment:
$ ansible-playbook -i <path_to_inventory> ansible.containerized_installer.backup

Get the connection settings from one node in each of the component groups.

Access the automation controller node and run:

$ podman exec -it automation-controller-task bash -c "awx-manage print_settings | grep '^DATABASES'"

Access the automation hub node and run:

$ podman exec -it automation-hub-api bash -c "pulpcore-manager diffsettings | grep '^DATABASES'"

Access the platform gateway node and run:

$ podman exec -it automation-gateway bash -c "aap-gateway-manage print_settings | grep '^DATABASES'"
Validate the database size and make sure you have enough space on the filesystem for the pg_dump. You can verify the database sizes by connecting to your database server and running the following command as the postgres user:

$ podman exec -it postgresql bash -c 'psql -c "\l+"'

Adjust the filesystem size or mount an external filesystem as needed before performing the next step.

Note: These commands send all target files to the /tmp filesystem. Adjust the commands to match your environment's needs.

Stage the manually created artifact on the platform gateway node.

# mkdir -p /tmp/backups/artifact/{controller,gateway,hub}
# mkdir -p /tmp/backups/artifact/controller/custom_configs
# touch /tmp/backups/artifact/secrets.yml
# cd /tmp/backups/artifact/

Perform database dumps of all components on the platform gateway node within the artifact created previously.
To run the psql and pg_dump commands, you must create a temporary container and run the commands inside of it. Run this command from the database node:

$ podman run -it --rm --name postgresql_restore_temp --network host --volume ~/aap/tls/extracted:/etc/pki/ca-trust/extracted:z --volume ~/aap/postgresql/server.crt:/var/lib/pgsql/server.crt:ro,z --volume ~/aap/postgresql/server.key:/var/lib/pgsql/server.key:ro,z --volume /tmp/backups/artifact:/var/lib/pgsql/backups:z registry.redhat.io/rhel8/postgresql-15:latest bash

Note: This command assumes the image registry.redhat.io/rhel8/postgresql-15:latest is available. If you are missing the image, check the available images for the user with podman images. The command opens a shell inside the container named postgresql_restore_temp with the artifact mounted at /var/lib/pgsql/backups, and it also mounts the PostgreSQL certificates to ensure that you can resolve the correct certificates.

bash-4.4$ cd /var/lib/pgsql/backups
bash-4.4$ psql -h <pg_hostname> -U <component_pg_user> -d <database_name> -t -c 'SHOW server_version;'  # ensure connectivity to the database
bash-4.4$ pg_dump -h <pg_hostname> -U <component_pg_user> -d <component_pg_name> --clean --create -Fc -f <component>/<component>.pgc
bash-4.4$ ls -ld <component>/<component>.pgc
bash-4.4$ echo "<component>_pg_database: <database_name>" >> secrets.yml  ## Add the database name for the component to the secrets file

After collecting this data, exit from this temporary container.
Export the secrets from the containerized environment from one node of each component group.
For each of the following steps, use the root user to run the commands.

Access the automation controller node, gather the secret key, and add it to the controller_secret_key value in the secrets.yml file:

$ podman secret inspect --showsecret --format "{{.SecretData}}" controller_secret_key

Access the automation hub node, gather the secret key, and add it to the hub_secret_key value in the secrets.yml file:

$ podman secret inspect --showsecret --format "{{.SecretData}}" hub_secret_key

Access the automation hub node, gather the database_fields.symmetric.key value, and add it to the hub_db_fields_encryption_key value in the secrets.yml file:

$ podman secret inspect --showsecret --format "{{.SecretData}}" hub_database_fields

Access the platform gateway node, gather the secret key, and add it to the gateway_secret_key value in the secrets.yml file:

$ podman secret inspect --showsecret --format "{{.SecretData}}" gateway_secret_key
Export automation controller custom configurations.
If any extra_settings exist in your containerized installation inventory, copy them into a new file and save it under /tmp/backups/artifact/controller/custom_configs.

Package the artifact.

# cd /tmp/backups/artifact/
# [ -f sha256sum.txt ] && rm -f sha256sum.txt; find . -type f -name "*.pgc" -exec sha256sum {} \; >> sha256sum.txt
# cat sha256sum.txt
# cd ..
# tar cf artifact.tar artifact
# sha256sum artifact.tar > artifact.tar.sha256
# sha256sum --check artifact.tar.sha256
# tar tvf artifact.tar

Example output of tar tvf artifact.tar:

drwxr-xr-x ansible/ansible        0 2025-05-08 16:48 artifact/
drwxr-xr-x ansible/ansible        0 2025-05-08 16:33 artifact/controller/
-rw-r--r-- ansible/ansible   732615 2025-05-08 16:26 artifact/controller/controller.pgc
drwxr-xr-x ansible/ansible        0 2025-05-08 16:33 artifact/controller/custom_configs/
drwxr-xr-x ansible/ansible        0 2025-05-08 16:11 artifact/gateway/
-rw-r--r-- ansible/ansible   231155 2025-05-08 16:28 artifact/gateway/gateway.pgc
drwxr-xr-x ansible/ansible        0 2025-05-08 16:26 artifact/hub/
-rw-r--r-- ansible/ansible 29252002 2025-05-08 16:26 artifact/hub/hub.pgc
-rw-r--r-- ansible/ansible      614 2025-05-08 16:24 artifact/secrets.yml
-rw-r--r-- ansible/ansible      338 2025-05-08 16:48 artifact/sha256sum.txt

Download the artifact.tar and artifact.tar.sha256 files to your local machine or transfer them to the target node with the scp command.
Chapter 7. Target environment
Prepare, configure, and validate your target Ansible Automation Platform environment.
7.1. Container-based Ansible Automation Platform
Prepare and assess your target container-based Ansible Automation Platform environment, and import and reconcile your migrated content.
7.1.1. Preparing and assessing the target environment
Transfer the migration artifact, install containerized Ansible Automation Platform, and configure the inventory file to match your source environment topology and database settings.
Procedure
- Validate the file system home folder size and make sure it has enough space to transfer the artifact.
- Transfer the artifact to the nodes where you will be working by using scp or any preferred file transfer method. It is recommended that you work from the platform gateway node as it has access to most systems. However, if you have access or file system space limitations due to the PostgreSQL dumps, work from the database node instead.
- Validate the artifact checksum.
Extract the artifact in the home folder of the user running the containers.

$ cd ~
$ sha256sum --check artifact.tar.sha256
$ tar xf artifact.tar
$ cd artifact
$ sha256sum --check sha256sum.txt

Generate an inventory file for your containerized deployment. Configure the inventory file to match the same topology as the source environment. Configure the component database names and the secret_key values from the artifact's secrets.yml file.

You can do this in two ways:

- Set the extra variables in the inventory file.
- Use the secrets.yml file as an additional variables file when running the installation program.

Option 1: Extra variables in the inventory file

$ egrep 'pg_database|_key' inventory
controller_pg_database=<redacted>
controller_secret_key=<redacted>
gateway_pg_database=<redacted>
gateway_secret_key=<redacted>
hub_pg_database=<redacted>
hub_secret_key=<redacted>
__hub_database_fields=<redacted>

Note: The __hub_database_fields value comes from the hub_db_fields_encryption_key value in your secret.

Option 2: Additional variables file

$ ansible-playbook -i inventory ansible.containerized_installer.install -e @~/artifact/secrets.yml -e "__hub_database_fields='{{ hub_db_fields_encryption_key }}'"
- Install and configure the containerized target environment.
- Verify that the PostgreSQL database is version 15.

Create a backup of the initial containerized environment:

$ ansible-playbook -i <path_to_inventory> ansible.containerized_installer.backup

- Verify that the fresh installation functions correctly.
7.1.2. Importing the migration content to the target environment
To import your migration content into the target environment, stop the containerized services, import the database dumps, and then restart the services.
Procedure
Stop the containerized services, except the database.
On all nodes, if Performance Co-Pilot is configured, run the following command:

$ systemctl --user stop pcp

Access the automation controller node and run:

$ systemctl --user stop automation-controller-task automation-controller-web automation-controller-rsyslog
$ systemctl --user stop receptor

Access the automation hub node and run:

$ systemctl --user stop automation-hub-api automation-hub-content automation-hub-web automation-hub-worker-1 automation-hub-worker-2

Access the Event-Driven Ansible node and run:

$ systemctl --user stop automation-eda-scheduler automation-eda-daphne automation-eda-web automation-eda-api automation-eda-worker-1 automation-eda-worker-2 automation-eda-activation-worker-1 automation-eda-activation-worker-2

Access the platform gateway node and run:

$ systemctl --user stop automation-gateway automation-gateway-proxy

Access the platform gateway node when using standalone Redis, or all nodes from the Redis group in your inventory file when using clustered Redis, and run:

$ systemctl --user stop redis-unix redis-tcp

Note: In an enterprise deployment, the components run on different nodes. Run the commands on each component node.
Import database dumps to the containerized environment.
If you are using an Ansible Automation Platform managed database, you must create a temporary container to run the psql and pg_restore commands. Run this command from the database node:

$ podman run -it --rm --name postgresql_restore_temp --network host --volume ~/aap/tls/extracted:/etc/pki/ca-trust/extracted:z --volume ~/aap/postgresql/server.crt:/var/lib/pgsql/server.crt:ro,z --volume ~/aap/postgresql/server.key:/var/lib/pgsql/server.key:ro,z --volume ~/artifact:/var/lib/pgsql/backups:ro,z registry.redhat.io/rhel8/postgresql-15:latest bash

Note: The command opens a shell inside the container named postgresql_restore_temp with the artifact mounted at /var/lib/pgsql/backups. Additionally, it mounts the PostgreSQL certificates to ensure that you can resolve the correct certificates. The command assumes the image registry.redhat.io/rhel8/postgresql-15:latest is available. If you are missing the image, check the available images for the user with podman images. It also assumes that the artifact is located in the current user's home folder. If the artifact is located elsewhere, replace ~/artifact with the required path.

If you are using a customer-provided (external) database, you can run the psql and pg_restore commands from any node that has these commands installed and that has access to the database. Reach out to your database administrator if you are unsure.

From inside the container, access the database and ensure the users have the CREATEDB role:

bash-4.4$ psql -h <pg_hostname> -U postgres
postgres=# \l
          Name           |     Owner     | Encoding |   Collate   |    Ctype    | ICU Locale | Locale Provider | Access privileges
-------------------------+---------------+----------+-------------+-------------+------------+-----------------+-------------------
 automationedacontroller | eda           | UTF8     | en_US.UTF-8 | en_US.UTF-8 |            | libc            |
 automationhub           | automationhub | UTF8     | en_US.UTF-8 | en_US.UTF-8 |            | libc            |
 awx                     | awx           | UTF8     | en_US.UTF-8 | en_US.UTF-8 |            | libc            |
 gateway                 | gateway       | UTF8     | en_US.UTF-8 | en_US.UTF-8 |            | libc            |
 ...

For each component database, add the CREATEDB role to the Owner. For example:

postgres=# ALTER ROLE awx WITH CREATEDB;
postgres=# \q

Replace awx with the database owner.

With the CREATEDB role in place, access the path where the artifact is mounted, and run the pg_restore commands:

bash$ cd /var/lib/pgsql/backups
bash$ pg_restore --clean --create --no-owner -h <pg_hostname> -U <component_pg_user> -d template1 <component>/<component>.pgc

After the restore, remove the permissions from the user. For example:

postgres=# ALTER ROLE awx WITH NOCREATEDB;
postgres=# \q

Replace awx with each user containing the role.
Start the containerized services, except the database.
Note: In an enterprise deployment, the components run on different nodes. Run the commands on each component node.

On all nodes, if Performance Co-Pilot is configured, run the following command:

$ systemctl --user start pcp

Access the automation controller node and run:

$ systemctl --user start automation-controller-task automation-controller-web automation-controller-rsyslog
$ systemctl --user start receptor

Access the automation hub node and run:

$ systemctl --user start automation-hub-api automation-hub-content automation-hub-web automation-hub-worker-1 automation-hub-worker-2

Access the Event-Driven Ansible node and run:

$ systemctl --user start automation-eda-scheduler automation-eda-daphne automation-eda-web automation-eda-api automation-eda-worker-1 automation-eda-worker-2 automation-eda-activation-worker-1 automation-eda-activation-worker-2

Access the platform gateway node and run:

$ systemctl --user start automation-gateway automation-gateway-proxy

Access the platform gateway node when using standalone Redis, or all nodes from the Redis group in your inventory when using clustered Redis, and run:

$ systemctl --user start redis-unix redis-tcp
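To confirm that the services came back up, a quick check such as the following can help. This is a sketch; the unit and container names vary with your topology.

$ systemctl --user list-units 'automation-*' --type=service --no-pager
$ podman ps --format "{{.Names}}\t{{.Status}}"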
7.1.3. Reconciling the target environment post-import
Perform the following post-import reconciliation steps to verify your target environment functions correctly.
Procedure
Deprovision the platform gateway configuration.
To deprovision the platform gateway configuration, SSH to the host serving an automation-gateway container as the same rootless user that runs the containers, and run the following commands to remove the platform gateway proxy configuration:

$ podman exec -it automation-gateway bash
$ aap-gateway-manage migrate
$ aap-gateway-manage shell_plus
>>> HTTPPort.objects.all().delete(); ServiceNode.objects.all().delete(); ServiceCluster.objects.all().delete()
Transfer custom configurations and settings.
- Edit the inventory file and apply any relevant extra_settings to each component by using the component_extra_settings variable.

Remove all resource server key secrets so that the installation program can repopulate them:

$ for i in `podman secret ls | egrep 'resource_server' | awk '{print $2}'`; do podman secret rm $i; done

- Re-run the installation program on the target environment by using the same inventory from the installation.
Sync platform gateway resources if Event-Driven Ansible is present:
$ podman exec -it automation-eda-api bash
$ aap-eda-manage resource_sync

Validate instances for automation execution.

SSH to the host serving an automation-controller-task container as the rootless user, and run the following commands to validate and remove instances that are orphaned from the source artifact:

$ podman exec -it automation-controller-task bash
$ awx-manage list_instances

Find nodes that are no longer part of this cluster. A good indicator is nodes with 0 capacity, as they have failed their health checks:

[ungrouped capacity=0]
    [DISABLED] node1.example.org capacity=0 node_type=hybrid version=X.Y.Z heartbeat="..."
    [DISABLED] node2.example.org capacity=0 node_type=execution version=ansible-runner-X.Y.Z heartbeat="..."

Remove those nodes with awx-manage, leaving only the aap-controller-task instance:

awx-manage deprovision_instance --host=node1.example.org
awx-manage deprovision_instance --host=node2.example.org
Run the following command from any host that has direct access to the automation hub address:
$ curl -d '{"verify_checksums": true}' -H "Content-Type: application/json" -X POST -k https://<gateway url>/api/galaxy/pulp/api/v3/repair/ -u <gateway_admin_user>:<gateway_admin_password>
Reconcile instance groups configuration:
- Go to Automation Execution → Infrastructure → Instance Groups.
- Select the Instance Group and then select the Instances tab.
- Associate or disassociate instances as required.
Reconcile decision environments and credentials:
- Go to Automation Decisions → Decision Environments.
- Edit each decision environment which references a registry URL either unrelated or no longer accessible to this new environment. For example, the automation hub decision environment might require modification for the target automation hub environment.
- Select each associated credential to these decision environments and ensure their addresses align with the new environment.
Reconcile execution environments and credentials:
- Go to Automation Execution → Infrastructure → Execution Environments.
- Check each execution environment image and verify their addresses against the new environment.
- Go to Automation Execution → Infrastructure → Credentials.
- Edit each credential and ensure that all environment specific information aligns with the new environment.
- Verify any further customizations or configurations after the migration, such as RBAC rules with instance groups.
7.1.4. Validating the target environment
After completing the migration, validate that all components in your target environment function correctly.
Procedure
Verify all migrated components function correctly.
- Platform gateway: Access the Ansible Automation Platform URL at https://<gateway_hostname>/ and verify that the dashboard loads correctly. Check that the platform gateway service is running and connected to automation controller.
- Automation controller: Under Automation Execution, check that projects, inventories, and job templates are present and configured.
- Automation hub: Under Automation Content, verify that collections, namespaces, and their contents are visible.
- Event-Driven Ansible (if applicable): Under Automation Decisions, verify that rule audits, rulebook activations, and projects are accessible.
For each component, check the logs to ensure there are no startup errors or warnings:
podman logs <container_name>
Test workflows and automation processes.
- Run job templates: Run several key job templates, including those with dependencies on various credential types.
- Test workflow templates: Run workflow templates to ensure that workflow nodes run in the correct order and that the workflow completes successfully.
- Verify execution environments: Ensure that jobs run in the appropriate execution environments and can access required dependencies.
- Check job artifacts: Verify that job artifacts are properly stored and accessible.
- Validate job scheduling: Test scheduled jobs to ensure they run at the expected times.
Validate user access and permissions.
- User authentication: Test login functionality with various user accounts to ensure authentication works correctly.
- Role-based access controls: Verify that users have appropriate permissions for organizations, projects, inventories, and job templates.
- Team memberships: Confirm that team memberships and team-based permissions are intact.
- API access: Test API tokens and ensure that API access is functioning properly.
- SSO integration (if applicable): Verify that Single Sign-On authentication is working correctly.
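For example, the API access check above can be spot-checked with curl. The endpoint paths here are assumptions based on the platform gateway API layout, so adjust them to your environment:

$ curl -k -u <username>:<password> https://<gateway_hostname>/api/gateway/v1/me/
$ curl -k -H "Authorization: Bearer <token>" https://<gateway_hostname>/api/controller/v2/me/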
Confirm content synchronization and availability.
- Collection synchronization: Check that you can synchronize collections from a remote.
- Collection Upload: Check that you can upload collections.
- Collection repositories: Verify that automation hub makes collections available and that execution environments can use them.
- Project synchronization: Check that projects can sync content from source control repositories.
- External content sources: Test synchronization from automation hub and Ansible Galaxy (if configured).
- Execution environment availability: Confirm that all required execution environments exist and that execution nodes can access them.
- Content dependencies: Verify that the system correctly resolves content dependencies when running jobs.
7.2. OpenShift Container Platform
Prepare and assess your target OpenShift Container Platform environment, and import and reconcile your migrated content.
7.2.1. Preparing and assessing the target environment
Transfer the migration artifact, create an OpenShift Container Platform project, and deploy Ansible Automation Platform using the Operator with configurations matching your source environment.
Procedure
- Configure Ansible Automation Platform Operator for an Ansible Automation Platform deployment.
- Set up the database configuration (internal or external).
- Set up the Redis configuration (internal or external).
- Install Ansible Automation Platform using Ansible Automation Platform Operator.
- Create a backup of the initial OpenShift Container Platform deployment.
- Verify the fresh installation functions correctly.
7.2.2. Importing the migration content to the target environment
To import your environment, scale down Ansible Automation Platform components, restore databases, replace encryption secrets, and scale services back up.
The import process assumes an Ansible Automation Platform deployment named aap in the default aap namespace, running the latest version, with all default database names and database users.
Procedure
Scale down Ansible Automation Platform components.
Begin by scaling down the Ansible Automation Platform deployment by using idle_aap:

oc patch ansibleautomationplatform aap --type merge -p '{"spec":{"idle_aap":true}}'

Wait for component pods to stop. Only the 6 Operator pods will remain running.

NAME                                                               READY   STATUS      RESTARTS   AGE
pod/aap-controller-migration-4.6.13-5swc6                          0/1     Completed   0          160m
pod/aap-gateway-operator-controller-manager-6b75c95458-4zrxv       2/2     Running     0          26h
pod/ansible-lightspeed-operator-controller-manager-b674c55b8-qncjp 2/2     Running     0          45h
pod/automation-controller-operator-controller-manager-6b79d48d4cchn 2/2    Running     0          45h
pod/automation-hub-operator-controller-manager-5cd674c984-5njfj    2/2     Running     0          45h
pod/eda-server-operator-controller-manager-645f4db5-d2flt          2/2     Running     0          45h
pod/resource-operator-controller-manager-86b8f7bb54-cvz6d          2/2     Running     0          45h

Scale down the Ansible Automation Platform Gateway Operator and Ansible Automation Platform Controller Operator:

oc scale --replicas=0 deployment aap-gateway-operator-controller-manager automation-controller-operator-controller-manager

Example output:

deployment.apps/aap-gateway-operator-controller-manager scaled
deployment.apps/automation-controller-operator-controller-manager scaled

Scale up the idled PostgreSQL StatefulSet:

oc scale --replicas=1 statefulset.apps/aap-postgres-15

Prepare a temporary environment for the database restore.
Create a temporary Persistent Volume Claim (PVC) with appropriate settings and sizing.
aap-temp-pvc.yaml:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: aap-temp-pvc
  namespace: aap
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi

oc create -f aap-temp-pvc.yaml

Obtain the existing PostgreSQL image to use for the temporary deployment:

echo $(oc get pod/aap-postgres-15-0 -o jsonpath="{.spec.containers[].image}")

Create a temporary PostgreSQL deployment with the mounted temporary PVC:

aap-temp-postgres.yaml:

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: aap-temp-postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aap-temp-postgres
  template:
    metadata:
      labels:
        app: aap-temp-postgres
    spec:
      containers:
        - name: aap-temp-postgres
          image: <postgres image from previous step>
          command:
            - /bin/sh
            - '-c'
            - sleep infinity
          imagePullPolicy: Always
          securityContext:
            runAsNonRoot: true
            allowPrivilegeEscalation: false
          volumeMounts:
            - name: aap-temp-pvc
              mountPath: /tmp/aap-temp-pvc
      volumes:
        - name: aap-temp-pvc
          persistentVolumeClaim:
            claimName: aap-temp-pvc

oc create -f aap-temp-postgres.yaml
Copy the export artifact to the temporary PostgreSQL pod.
First, obtain the pod name and set it as an environment variable:
export AAP_TEMP_POSTGRES=$(oc get pods --no-headers -o custom-columns=":metadata.name" | grep aap-temp-postgres)

Test the environment variable:

echo $AAP_TEMP_POSTGRES

Example output:

aap-temp-postgres-7b6c57f87f-s2ldp

Copy the artifact and checksum to the PVC:

oc cp artifact.tar $AAP_TEMP_POSTGRES:/tmp/aap-temp-pvc/
oc cp artifact.tar.sha256 $AAP_TEMP_POSTGRES:/tmp/aap-temp-pvc/
Restore databases to Ansible Automation Platform PostgreSQL by using the temporary PostgreSQL pod.
First, obtain the PostgreSQL passwords for all three databases and the PostgreSQL admin password:
echo "" && for secret in aap-controller-postgres-configuration aap-hub-postgres-configuration aap-gateway-postgres-configuration do echo $secret echo "PASSWORD: `oc get secrets $secret -o jsonpath="{.data['password']}" | base64 -d`" echo "USER: `oc get secrets $secret -o jsonpath="{.data['username']}" | base64 -d`" echo "DATABASE: `oc get secrets $secret -o jsonpath="{.data['database']}" | base64 -d`" echo "" done && echo "POSTGRES ADMIN PASSWORD: `oc get secrets aap-gateway-postgres-configuration -o jsonpath="{.data['postgres_admin_password']}" | base64 -d`"Enter into the temporary PostgreSQL deployment and change directory to the mounted PVC containing the copied artifact:
oc exec -it deployment.apps/aap-temp-postgres -- /bin/bashInside the pod, change directory to
/tmp/aap-temp-pvcand list its contents:cd /tmp/aap-temp-pvc && ls -lExample output:
total 2240 -rw-r--r--. 1 1000900000 1000900000 2273280 Jun 13 17:41 artifact.tar -rw-r--r--. 1 1000900000 1000900000 79 Jun 13 17:42 artifact.tar.sha256 drwxrws---. 2 root 1000900000 16384 Jun 13 17:40 lost+foundVerify the archive:
sha256sum --check artifact.tar.sha256Example output:
artifact.tar: OKExtract the artifact and verify its contents:
tar xf artifact.tar && cd artifact && sha256sum --check sha256sum.txtExample output:
./controller/controller.pgc: OK ./gateway/gateway.pgc: OK ./hub/hub.pgc: OKDrop the automation controller database:
dropdb -h aap-postgres-15 automationcontrollerAlter the user temporarily with the
CREATEDBrole:postgres=# ALTER USER automationcontroller WITH CREATEDB;Create the database:
createdb -h aap-postgres-15 -U automationcontroller automationcontrollerRevert temporary user permission:
postgres=# ALTER USER automationcontroller NOCREATEDB;Restore the automation controller database:
pg_restore --clean --create --no-owner -h aap-postgres-15 -U automationcontroller -d automationcontroller controller/controller.pgcRestore the automation hub database:
pg_restore --clean --create --no-owner -h aap-postgres-15 -U automationhub -d automationhub hub/hub.pgcRestore the platform gateway database:
pg_restore --clean --create --no-owner -h aap-postgres-15 -U gateway -d gateway gateway/gateway.pgcExit the pod:
exit
Replace database field encryption secrets and clean up temporary resources.
Replace database field encryption secrets:
oc set data secret/aap-controller-secret-key secret_key="<unencoded controller_secret_key value from secrets.yml>"
oc set data secret/aap-db-fields-encryption-secret secret_key="<unencoded gateway_secret_key value from secrets.yml>"
oc set data secret/aap-hub-db-fields-encryption database_fields.symmetric.key="<unencoded hub_db_fields_encryption_key value from secrets.yml>"

Clean up the temporary PostgreSQL deployment and PVC:

oc delete -f aap-temp-postgres.yaml
oc delete -f aap-temp-pvc.yaml
Scale Ansible Automation Platform components back up.
Scale the platform gateway and automation controller Operators back up and wait for the platform gateway Operator reconciliation loop to complete:
The PostgreSQL StatefulSet returns to idle.

oc scale --replicas=1 deployment aap-gateway-operator-controller-manager automation-controller-operator-controller-manager

Example output:

deployment.apps/aap-gateway-operator-controller-manager scaled
deployment.apps/automation-controller-operator-controller-manager scaled

oc logs -f $(oc get pods --no-headers -o custom-columns=":metadata.name" | grep aap-gateway-operator)

Wait for reconciliation to stop.

Example output:

META: ending play
{"level":"info","ts":"2025-06-12T15:41:29Z","logger":"runner","msg":"Ansible-runner exited successfully","job":"5672263053238024330","name":"aap","namespace":"aap"}

----- Ansible Task Status Event StdOut (aap.ansible.com/v1alpha1, Kind=AnsibleAutomationPlatform, aap/aap) -----

PLAY RECAP *********************************************************************
localhost                  : ok=45   changed=0    unreachable=0    failed=0    skipped=63   rescued=0    ignored=0

Scale Ansible Automation Platform back up by using idle_aap:

oc patch ansibleautomationplatform aap --type=merge -p '{"spec":{"idle_aap":false}}'

Example output:
ansibleautomationplatform.aap.ansible.com/aap patched
Wait for the aap-gateway pod to be running and clean up old service endpoints.

Example output:

pod/aap-gateway-6c989b846c-47b91                                   2/2     Running     0          45s

for i in HTTPPort Route ServiceNode; do oc exec -it deployment.apps/aap-gateway -- aap-gateway-manage shell -c 'from aap_gateway_api.models import '$i';print('$i'.objects.all().delete())'; done

Example output:

(23, {'aap_gateway_api.ServiceAPIRoute': 4, 'aap_gateway_api.AdditionalRoute': 7, 'aap_gateway_api.Route': 11, 'aap_gateway_api.HTTPPort': 1})
(0, {})
(4, {'aap_gateway_api.ServiceNode': 4})

Run awx-manage to deprovision instances.

Obtain the automation controller pod:

export AAP_CONTROLLER_POD=$(oc get pods --no-headers -o custom-columns=":metadata.name" | grep aap-controller-task)

Test the environment variable:

echo $AAP_CONTROLLER_POD

Example output:

aap-controller-task-759b6d9759-r59q9

Enter into the automation controller pod and list the instances:

oc exec -it $AAP_CONTROLLER_POD -- /bin/bash
awx-manage list_instances

Example output:

[controlplane capacity=642 policy=100%]
    aap-controller-task-759b6d9759-r59q9 capacity=642 node_type=control version=4.6.15 heartbeat="2025-06-12 21:39:48"
    node1.example.org capacity=0 node_type=hybrid version=4.6.13 heartbeat="2025-05-30 17:22:11"
[default capacity=0 policy=100%]
    node1.example.org capacity=0 node_type=hybrid version=4.6.13 heartbeat="2025-05-30 17:22:11"
    node2.example.org capacity=0 node_type=execution version=ansible-runner-2.4.1 heartbeat="2025-05-30 17:22:08"

Remove old nodes with awx-manage, leaving only aap-controller-task:

awx-manage deprovision_instance --host=node1.example.org
awx-manage deprovision_instance --host=node2.example.org

Remove the aap-resource-server secret and allow the deployments to reconcile. This recreates the resource service keys and secret for the components:

$ oc delete secret/aap-resource-server

Run the curl command to repair automation hub filesystem data:

curl -d '{"verify_checksums": true}' -H "Content-Type: application/json" -X POST -k https://<aap url>/api/galaxy/pulp/api/v3/repair/ -u <admin_user>:<restored_admin_password>
7.2.3. Reconciling the target environment post-import
After importing your migration artifact, perform the following steps to reconcile your target environment.
Procedure
- Modify the Django SECRET_KEY secrets to match the source platform.
- Deprovision and reconfigure platform gateway service nodes.
- Re-run platform gateway nodes and services register logic.
- Convert container-specific settings to OpenShift Container Platform-appropriate formats.
- Reconcile container resource allocations to OpenShift Container Platform resources.
7.2.4. Validating the target environment
Verify that all Ansible Automation Platform services are running, credentials work correctly, and migrated content like projects, inventories, and job templates are accessible on OpenShift Container Platform.
Procedure
- Verify all migrated components are functional.
- Test workflows and automation processes.
- Validate user access and permissions.
- Confirm content synchronization and availability.
- Test integration with OpenShift Container Platform-specific features.
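A few oc commands provide a quick health check, assuming the deployment is named aap in the aap namespace as in the import procedure:

$ oc get pods -n aap                                          # confirm all component pods are running
$ oc get routes -n aap                                        # confirm the platform routes and note the URLs to test
$ oc get events -n aap --sort-by=.lastTimestamp | tail -n 20  # review recent events for errors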
7.3. Managed Ansible Automation Platform
Prepare and migrate your source environment to a Managed Ansible Automation Platform deployment, and reconcile the target environment post-migration.
7.3.1. Migrating to Managed Ansible Automation Platform
Submit a support ticket on the Red Hat Customer Portal to request a migration to Managed Ansible Automation Platform.
Prerequisites
- You have a migration artifact from your source environment.
Procedure
Submit a support ticket on the Red Hat Customer Portal requesting a migration to Managed Ansible Automation Platform.
The support ticket should include:
- Source installation type (RPM, Containerized, OpenShift)
- Managed Ansible Automation Platform URL or deployment name
- Source version (installer or Operator version)
- The Ansible Site Reliability Engineering (SRE) team provides instructions in the support ticket on how to upload the resulting migration artifact to secure storage for processing.
- The Ansible SRE team imports the migration artifact into the identified target instance and notifies the customer through the support ticket.
- The Ansible SRE team notifies customers of successful migration.
7.3.2. Reconciling the target environment post-migration
Update necessary configurations after migrating to Managed Ansible Automation Platform.
Procedure
- Log in to the Managed Ansible Automation Platform instance by using the local administrator account to confirm that data was imported.
Perform the following actions based on the configuration of the source deployment:
- Reconfigure Single Sign-On (SSO) authenticators and mappings to reflect the new URLs.
Update private automation hub content to reflect the new URLs.
Run the following command to update the automation hub repositories:
curl -d '{"verify_checksums": true}' -H "Content-Type: application/json" -X POST -k https://<platform url>/api/galaxy/pulp/api/v3/repair/ -u <admin_user>:<admin_password>
- Push any custom execution environments from the source automation hub to the target automation hub.
- Reconfigure automation mesh.
- After migration, you can request standard Site Reliability Engineering (SRE) tasks through support tickets, such as configuration of custom certificates, a custom domain, or connectivity through private endpoints.