Ansible Automation Platform migration


Red Hat Ansible Automation Platform 2.6

Migrate your deployment of Ansible Automation Platform from one installation type to another

Red Hat Customer Content Services

Abstract

This guide provides instructions for migrating your Red Hat Ansible Automation Platform deployment from one installation type to another

Providing feedback on Red Hat documentation

If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request.

Chapter 1. Introduction and objectives

Learn about supported migration paths between RPM-based, container-based, OpenShift Container Platform, and Managed Ansible Automation Platform deployments, including step-by-step workflows and migration requirements.

Migration between different Ansible Automation Platform deployment types for Ansible Automation Platform 2.6 requires specific steps and considerations.

The supported migration paths include:

Source environment                              Target environment
----------------------------------------------  -------------------------------------------
RPM-based Ansible Automation Platform           Container-based Ansible Automation Platform
RPM-based Ansible Automation Platform           OpenShift Container Platform
RPM-based Ansible Automation Platform           Managed Ansible Automation Platform
Container-based Ansible Automation Platform     OpenShift Container Platform
Container-based Ansible Automation Platform     Managed Ansible Automation Platform

Migrations outside of those listed are not supported at this time.

The Ansible Automation Platform migration guide aims to:

  • Document all components and configurations that require migration between Ansible Automation Platform platforms
  • Provide step-by-step migration workflows for different deployment scenarios
  • Identify potential challenges and unknowns that require further investigation

Chapter 2. Out of scope

Understand which Ansible Automation Platform components and configurations require manual recreation in the target environment and are not covered by the migration process.

The Ansible Automation Platform migration guide focuses on the core components of Ansible Automation Platform. The following items are currently out of scope for this migration process:

  • Event-Driven Ansible: Manually recreate configuration and content for Event-Driven Ansible in the target environment.
  • Instance groups: Manually recreate instance group configurations after migration.
  • Hub content: Manually re-import or reconfigure content hosted in automation hub.
  • Custom Certificate Authority (CA) for receptor mesh: Manually reconfigure custom CA configurations for receptor mesh.
  • Disconnected environments: The migration process does not cover disconnected environments.
  • Execution environments (other than the default one): Manually rebuild or re-import custom execution environments.

Manually re-create, import, or configure these items in the target environment.

Chapter 3. Migration process overview

Understand the complete migration workflow including preparation, export, artifact creation, import, reconciliation, and validation steps for moving between Ansible Automation Platform installation types.

Important

You can only migrate to a different installation type of the same Ansible Automation Platform version. For example, you can migrate from RPM version 2.6 to containerized 2.6, but not from RPM version 2.4 to containerized 2.6.

The migration between Ansible Automation Platform installation types follows this general workflow:

  1. Prepare and assess the source environment
  2. Export the source environment
  3. Create and verify the migration artifact
  4. Prepare and assess the target environment
  5. Import the migration content to the target environment
  6. Reconcile the target environment post-import
  7. Validate the target environment

Chapter 4. Migration prerequisites

Prerequisites for migrating your Ansible Automation Platform deployment. For your specific migration path, ensure that you meet all necessary conditions before proceeding.

4.1. RPM to containerized migration prerequisites

Before migrating from an RPM-based deployment to a container-based deployment, ensure you meet the following prerequisites:

  • You have a source RPM-based deployment of Ansible Automation Platform.
  • The source RPM-based deployment is on the latest async release of the version you are on.
  • You have a target environment prepared for a container-based deployment of Ansible Automation Platform.
  • You have downloaded the containerized installation program for the latest release of the Ansible Automation Platform version you are on.
  • You have enough storage for database dumps and backups.
  • There is network connectivity between the source and target environments.

4.2. RPM to OpenShift Container Platform migration prerequisites

Before migrating from an RPM-based deployment to an OpenShift Container Platform deployment, ensure that you meet the following prerequisites:

  • You have a source RPM-based deployment of Ansible Automation Platform.
  • The source RPM-based deployment is on the latest async release of the version you are on.
  • You have a target OpenShift Container Platform environment ready.
  • You have Ansible Automation Platform Operator available for the latest release of the Ansible Automation Platform version you are on.
  • You have made a decision on internal or external database configuration.
  • You have made a decision on internal or external Redis configuration.
  • There is network connectivity between the source and target environments.

4.3. RPM to Managed Ansible Automation Platform migration prerequisites

Before migrating from an RPM-based deployment to a Managed Ansible Automation Platform deployment, ensure that you meet the following prerequisites:

  • You have a source RPM-based deployment of Ansible Automation Platform.
  • The source deployment is on the latest release of the Ansible Automation Platform version you are on.
  • You have a target Managed Ansible Automation Platform deployment.
  • You have enabled local authentication on the source deployment before the migration.
  • A local administrator account must be functional on the source deployment before migration. Verify this by performing a successful login to the source deployment.
  • You have a plan to retain a backup throughout the migration process and to ensure that your existing Ansible Automation Platform deployment remains active until your migration has completed successfully.
  • You have a plan for any environment changes based on the migration from a self-hosted Ansible Automation Platform deployment to a Managed Ansible Automation Platform deployment:

    • Job log retention changes from a customer-configured option to 30 days.
    • Network changes occur when moving the control plane to the managed service.
    • Automation mesh requires reconfiguration.
  • You must reconfigure or re-create Single Sign-On (SSO) identity providers post-migration to account for URL changes.

4.4. Container-based to OpenShift Container Platform migration prerequisites

Before migrating from a container-based deployment to an OpenShift Container Platform deployment, ensure that you meet the following prerequisites:

  • You have a source container-based deployment of Ansible Automation Platform.
  • The source deployment is on the latest async release of the version you are on.
  • You have a target OpenShift Container Platform environment ready.
  • You have an Ansible Automation Platform Operator available for the latest release of the Ansible Automation Platform version you are on.
  • You have decided between internal or external database configuration.
  • You have decided between internal or external Redis configuration.
  • There is network connectivity between the source and target environments.

4.5. Container-based to Managed Ansible Automation Platform migration prerequisites

Before migrating from a container-based deployment to a Managed Ansible Automation Platform deployment, ensure that you meet the following prerequisites:

  • You have a source container-based deployment of Ansible Automation Platform.
  • The source deployment is on the latest release of the Ansible Automation Platform version you are on.
  • You have a target Managed Ansible Automation Platform deployment.
  • You have enabled local authentication on the source deployment before the migration.
  • A local administrator account must be functional on the source deployment before migration. Verify this by performing a successful login to the source deployment.
  • You have a plan to retain a backup throughout the migration process and to ensure that your existing Ansible Automation Platform deployment remains active until your migration has completed successfully.
  • You have a plan for any environment changes based on the migration from a self-hosted Ansible Automation Platform deployment to a Managed Ansible Automation Platform deployment:

    • Job log retention changes from a customer-configured option to 30 days.
    • Network changes occur when moving the control plane to the managed service.
    • Automation mesh requires reconfiguration.
  • You must reconfigure or re-create Single Sign-On (SSO) identity providers post-migration to account for URL changes.

Chapter 5. Migration artifact

The migration artifact packages all necessary data and configurations from your source environment. Verify its structure and contents to ensure a successful migration.

5.1. Artifact structure

The migration artifact is a comprehensive package containing all necessary components to transfer your Ansible Automation Platform deployment.

Structure the artifact as follows:

/
  manifest.yml
  secrets.yml
  sha256sum.txt

  -> controller:
     controller.pgc
     -> custom_configs:
        foo.py
        bar.py
  -> gateway:
     gateway.pgc
  -> hub:
     hub.pgc

5.2. Manifest file

The manifest.yml file serves as the primary metadata document for the migration artifact. It contains critical versioning and component information from your source environment.

Structure the manifest as follows:

---
aap_version: X.Y # The version being migrated
platform: rpm # The source platform type
components:
  - name: controller
    version: x.y.z
  - name: hub
    version: x.y.z
  - name: gateway
    version: x.y.z

5.3. Secrets file

The secrets.yml file in the migration artifact contains the component database names, the Django SECRET_KEY values, and the automation hub database fields encryption key from the source environment. The target environment requires these values to decrypt and use the migrated data.

Structure the secrets file as follows:

controller_pg_database: <redacted>
controller_secret_key: <redacted>
gateway_pg_database: <redacted>
gateway_secret_key: <redacted>
hub_pg_database: <redacted>
hub_secret_key: <redacted>
hub_db_fields_encryption_key: <redacted>
Note

Ensure the secrets.yml file is encrypted and kept in a secure location.
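
For example, one way to do this is with Ansible Vault, assuming ansible-core is available on the node where you assemble the artifact:

$ ansible-vault encrypt secrets.yml
$ ansible-vault view secrets.yml

Decrypt or view the file only when you need its values during the import.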

5.4. Migration artifact creation checklist

Use this checklist to verify the migration artifact.

  • Database dumps: Include complete database dumps for each component.

    • Ensure the automation controller database (controller.pgc) is present in the artifact.
    • Ensure the automation hub database (hub.pgc) is present in the artifact.
    • Ensure the platform gateway database (gateway.pgc) is present in the artifact.
  • Secret dumps: Export and include all security-related information.

    • Validate that all secret values are present in the secrets.yml file.
  • Custom configurations: Package all customizations from the source environment.

    • Validate that any custom Python scripts or modules (for example, foo.py or bar.py) are present in the artifact.
    • Document any non-standard configurations or environment-specific settings.
  • Database information: Document database details.

    • Include the database names for all components.
    • Document database users and required permissions.
    • Note any database-specific configurations or optimizations.
  • Verification: Ensure artifact integrity and completeness.

    • Verify that all required files are included in the artifact.
    • Verify that checksums exist for all included database files.
    • Test the artifact’s structure and accessibility (a verification sketch follows this checklist).
    • Consider encrypting the artifact for secure transfer to the target environment.
    • Document any known limitations or special considerations.
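
A minimal verification sketch for the checklist above, assuming the artifact has already been packaged as artifact.tar with an accompanying artifact.tar.sha256 checksum file as described in the export procedures:

$ sha256sum --check artifact.tar.sha256          # checksum of the packaged artifact
$ tar tf artifact.tar                            # list the artifact contents without extracting
$ tar xOf artifact.tar artifact/sha256sum.txt    # print the per-file checksums recorded inside the artifact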

Chapter 6. Source environment

Prepare and export data from your existing Ansible Automation Platform deployment. The exported data forms a critical migration artifact, which you use to configure your new environment.

6.1. RPM-based Ansible Automation Platform

Prepare and export data from your RPM-based Ansible Automation Platform deployment.

6.1.1. Documenting the source environment

Before beginning your migration, document your current RPM deployment to use as a reference throughout the migration process and when configuring your target environment.

Procedure

  1. Document the full topology of your current RPM deployment:

    1. Map out all servers, nodes, and their roles (for example control nodes, execution nodes, database servers).
    2. Note the hostname, IP address, and function of each server in your deployment.
    3. Document the network configuration between components.
  2. Document the Ansible Automation Platform version information:

    1. Record the exact Ansible Automation Platform version (X.Y) currently deployed.
  3. Document the specific version of each component:

    1. Automation controller version
    2. Automation hub version
    3. Platform gateway version
  4. Document the database configuration:

    1. Database names for each component
    2. Database users and roles
    3. Connection parameters and authentication methods
    4. Any custom PostgreSQL configurations or optimizations

6.1.2. Exporting the source environment

From your source environment, export the data and configurations needed for migration.

Procedure

  1. Verify the PostgreSQL database version is PostgreSQL version 15.

    You can verify your current PostgreSQL version by connecting to your database server and running the following command as the postgres user:

    $ psql -c 'SELECT version();'
    Important

    PostgreSQL version 15 is a strict requirement for the migration process to succeed. If you are running an earlier version, upgrade to PostgreSQL 15 before proceeding with the migration.

    If using an Ansible Automation Platform managed database, re-run the installation program to upgrade the PostgreSQL version. If using a customer provided (external) database, contact your database administrator or service provider to confirm the version and arrange for an upgrade if required.

  2. Create a complete backup of the source environment:

    $ ./setup.sh -e 'backup_dest=/path/to/backup_dir/' -b
  3. Get the connection settings from one node in each of the component groups.

    For each command, access the host and become the root user.

    • Access the automation controller node and run:

      # awx-manage print_settings | grep '^DATABASES'

    • Access the automation hub node and run:

      # grep '^DATABASES' /etc/pulp/settings.py

    • Access the platform gateway node and run:

      # aap-gateway-manage print_settings | grep '^DATABASES'
  4. Stage the manually created artifact on the platform gateway node.

    # mkdir -p /tmp/backups/artifact/{controller,gateway,hub}
    # mkdir -p /tmp/backups/artifact/controller/custom_configs
    # touch /tmp/backups/artifact/secrets.yml
    # cd /tmp/backups/artifact/
  5. Validate the database size and make sure you have enough space on the filesystem for the pg_dump.

    You can verify the database sizes by connecting to your database server and running the following command as the postgres user:

    $ psql -c '\l+'

    Adjust the filesystem size or mount an external filesystem as needed before performing the next step.

    Note

    These commands send all target files to the /tmp filesystem. Adjust the commands to match your environment’s needs.

  6. Perform database dumps of all components on the platform gateway node within the artifact you created.

    # psql -h <pg_hostname> -U <component_pg_user> -d <database_name> -t -c 'SHOW server_version;' # ensure connectivity to the database
    # pg_dump -h <pg_hostname> -U <component_pg_user> -d <component_pg_name> --clean --create -Fc -f <component>/<component>.pgc
    # ls -ld <component>/<component>.pgc
    # echo "<component>_pg_database: <database_name>" >> secrets.yml ## Add the database name for the component to the secrets file
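
    For example, for the automation controller component, the commands might look like the following. The hostname, database user, and database name are illustrative; substitute the values you recorded from your own environment:

    # psql -h db.example.org -U awx -d awx -t -c 'SHOW server_version;'
    # pg_dump -h db.example.org -U awx -d awx --clean --create -Fc -f controller/controller.pgc
    # ls -ld controller/controller.pgc
    # echo "controller_pg_database: awx" >> secrets.yml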
  7. Export secrets from the RPM environment from one node of each component group.

    For each of the following steps, use the root user to run the commands.

    • Access the automation controller node, gather the secret key, and add it to the controller_secret_key value in the secrets.yml file.

      # cat /etc/tower/SECRET_KEY
    • Access the automation hub node, gather the secret key, and add it to the hub_secret_key value in the secrets.yml file.

      # grep '^SECRET_KEY' /etc/pulp/settings.py | awk -F'=' '{ print $2 }'
    • Access the automation hub node, gather the database_fields.symmetric.key value, and add it to the hub_db_fields_encryption_key value in the secrets.yml file.

      # cat /etc/pulp/certs/database_fields.symmetric.key
    • Access the platform gateway node, gather the secret key, and add it to the gateway_secret_key value in the secrets.yml file.

      # cat /etc/ansible-automation-platform/gateway/SECRET_KEY
  8. Export automation controller custom configurations.

    If any custom settings exist in /etc/tower/conf.d, copy them to /tmp/backups/artifact/controller/custom_configs.

    The following automation controller configuration files are managed by the installation program and are not considered custom:

    • /etc/tower/conf.d/postgres.py
    • /etc/tower/conf.d/channels.py
    • /etc/tower/conf.d/caching.py
    • /etc/tower/conf.d/cluster_host_id.py
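
    A minimal sketch of one way to copy the remaining custom settings while skipping the installer-managed files listed above (adjust the paths if your environment differs):

    # for f in /etc/tower/conf.d/*.py
    do
      case "$(basename "$f")" in
        postgres.py|channels.py|caching.py|cluster_host_id.py) ;; # managed by the installation program, skip
        *) cp "$f" /tmp/backups/artifact/controller/custom_configs/ ;;
      esac
    done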
  9. Package the artifact.

    # cd /tmp/backups/artifact/
    # [ -f sha256sum.txt ] && rm -f sha256sum.txt; find . -type f -name "*.pgc" -exec sha256sum {} \; >> sha256sum.txt
    # cat sha256sum.txt
    # cd ..
    # tar cf artifact.tar artifact
    # sha256sum artifact.tar > artifact.tar.sha256
    # sha256sum --check artifact.tar.sha256
    # tar tvf artifact.tar

    Example output of tar tvf artifact.tar:

    drwxr-xr-x ansible/ansible     0 2025-05-08 16:48 artifact/
    drwxr-xr-x ansible/ansible     0 2025-05-08 16:33 artifact/controller/
    -rw-r--r-- ansible/ansible 732615 2025-05-08 16:26 artifact/controller/controller.pgc
    drwxr-xr-x ansible/ansible      0 2025-05-08 16:33 artifact/controller/custom_configs/
    drwxr-xr-x ansible/ansible      0 2025-05-08 16:11 artifact/gateway/
    -rw-r--r-- ansible/ansible 231155 2025-05-08 16:28 artifact/gateway/gateway.pgc
    drwxr-xr-x ansible/ansible      0 2025-05-08 16:26 artifact/hub/
    -rw-r--r-- ansible/ansible 29252002 2025-05-08 16:26 artifact/hub/hub.pgc
    -rw-r--r-- ansible/ansible      614 2025-05-08 16:24 artifact/secrets.yml
    -rw-r--r-- ansible/ansible      338 2025-05-08 16:48 artifact/sha256sum.txt
  10. Download the artifact.tar and artifact.tar.sha256 files to your local machine, or transfer them to the target node with the scp command.
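
    For example, to transfer the files with scp (the destination host and user are illustrative):

    # scp /tmp/backups/artifact.tar /tmp/backups/artifact.tar.sha256 user@target.example.org:~/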

6.2. Container-based Ansible Automation Platform

Prepare and export data from your container-based Ansible Automation Platform deployment.

6.2.1. Documenting the source environment

Document your current containerized deployment configuration, topology, and components to create a comprehensive reference for migration.

Procedure

  1. Document the full topology of your current containerized deployment:

    1. Map out all servers, nodes, and their roles (for example control nodes, execution nodes, database servers).
    2. Note the hostname, IP address, and function of each server in your deployment.
    3. Document the network configuration between components.
  2. Document the Ansible Automation Platform version information:

    1. Record the exact Ansible Automation Platform version (X.Y) currently deployed.
  3. Document the specific version of each component:

    1. Automation controller version
    2. Automation hub version
    3. Platform gateway version
  4. Document the database configuration:

    1. Database names for each component
    2. Database users and roles
    3. Connection parameters and authentication methods
    4. Any custom PostgreSQL configurations or optimizations
  5. Identify all custom configurations and settings
  6. Document container resource allocations and volumes

6.2.2. Exporting the source environment

Export databases, secrets, and custom configurations from your source containerized Ansible Automation Platform deployment to create the migration artifact.

Procedure

  1. Verify the PostgreSQL database version is PostgreSQL version 15.

    You can verify your current PostgreSQL version by connecting to your database server and running the following command as the postgres user:

    $ podman exec -it postgresql bash -c 'psql -c "SELECT version();"'
    Important

    PostgreSQL version 15 is a strict requirement for the migration process to succeed. If you are running an earlier version, upgrade to PostgreSQL 15 before proceeding with the migration.

    If using an Ansible Automation Platform managed database, re-run the installation program to upgrade the PostgreSQL version. If using a customer provided (external) database, contact your database administrator or service provider to confirm the version and arrange for an upgrade if required.

  2. Create a complete backup of the source environment:

    $ ansible-playbook -i <path_to_inventory> ansible.containerized_installer.backup
  3. Get the connection settings from one node in each of the component groups.

    • Access the automation controller node and run:

      $ podman exec -it automation-controller-task bash -c "awx-manage print_settings | grep '^DATABASES'"

    • Access the automation hub node and run:

      $ podman exec -it automation-hub-api bash -c "pulpcore-manager diffsettings | grep '^DATABASES'"

    • Access the platform gateway node and run:

      $ podman exec -it automation-gateway bash -c "aap-gateway-manage print_settings | grep '^DATABASES'"
  4. Validate the database size and make sure you have enough space on the filesystem for the pg_dump.

    You can verify the database sizes by connecting to your database server and running the following command as the postgres user:

    $ podman exec -it postgresql bash -c 'psql -c "\l+"'

    Adjust the filesystem size or mount an external filesystem as needed before performing the next step.

    Note

    These commands send all target files to the /tmp filesystem. Adjust the commands to match your environment’s needs.

  5. Stage the manually created artifact on the platform gateway node.

    # mkdir -p /tmp/backups/artifact/{controller,gateway,hub}
    # mkdir -p /tmp/backups/artifact/controller/custom_configs
    # touch /tmp/backups/artifact/secrets.yml
    # cd /tmp/backups/artifact/
  6. Perform database dumps of all components on the platform gateway node within the artifact created previously.

    To run the psql and pg_dump commands, you must create a temporary container and run the commands inside it. Run the following command from the database node.

    $ podman run -it --rm --name postgresql_restore_temp --network host --volume ~/aap/tls/extracted:/etc/pki/ca-trust/extracted:z --volume ~/aap/postgresql/server.crt:/var/lib/pgsql/server.crt:ro,z --volume ~/aap/postgresql/server.key:/var/lib/pgsql/server.key:ro,z --volume /tmp/backups/artifact:/var/lib/pgsql/backups:z registry.redhat.io/rhel8/postgresql-15:latest bash
    Note

    This command assumes the image registry.redhat.io/rhel8/postgresql-15:latest. If you are missing the image, check the available images for the user with podman images.

    The command opens a shell inside the container named postgresql_restore_temp with the artifact mounted at /var/lib/pgsql/backups. It also mounts the PostgreSQL certificates so that the correct certificates can be resolved.

    bash-4.4$ cd /var/lib/pgsql/backups
    bash-4.4$ psql -h <pg_hostname> -U <component_pg_user> -d <database_name> -t -c 'SHOW server_version;' # ensure connectivity to db
    bash-4.4$ pg_dump -h <pg_hostname> -U <component_pg_user> -d <component_pg_name> --clean --create -Fc -f <component>/<component>.pgc
    bash-4.4$ ls -ld <component>/<component>.pgc
    bash-4.4$ echo "<component>_pg_database: <database_name>" >> secrets.yml ## Add the DB name for the component to the secrets file

    After collecting this data, exit from this temporary container.

  7. Export the secrets from the containerized environment from one node of each component group.

    For each step below, use the root user to run the commands.

    1. Access the automation controller node, gather the secret key, and add it to the controller_secret_key value in the secrets.yml file.

      $ podman secret inspect --showsecret --format "{{.SecretData}}" controller_secret_key

    2. Access the automation hub node, gather the secret key, and add it to the hub_secret_key value in the secrets.yml file.

      $ podman secret inspect --showsecret --format "{{.SecretData}}" hub_secret_key

    3. Access the automation hub node, gather the database_fields.symmetric.key value, and add it to the hub_db_fields_encryption_key value in the secrets.yml file.

      $ podman secret inspect --showsecret --format "{{.SecretData}}" hub_database_fields

    4. Access the platform gateway node, gather the secret key, and add it to the gateway_secret_key value in the secrets.yml file.

      $ podman secret inspect --showsecret --format "{{.SecretData}}" gateway_secret_key
  8. Export automation controller custom configurations.

    If any extra_settings exist in your containerized installation inventory, copy them into a new file and save it under /tmp/backups/artifact/controller/custom_configs.
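
    For example, if your inventory contains an entry such as the following (the variable contents shown are illustrative), save those settings in a file such as /tmp/backups/artifact/controller/custom_configs/extra_settings.yml so that they can be reapplied on the target environment:

    controller_extra_settings:
      - setting: MAX_PAGE_SIZE
        value: 300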

  9. Package the artifact.

    # cd /tmp/backups/artifact/
    # [ -f sha256sum.txt ] && rm -f sha256sum.txt; find . -type f -name "*.pgc" -exec sha256sum {} \; >> sha256sum.txt
    # cat sha256sum.txt
    # cd ..
    # tar cf artifact.tar artifact
    # sha256sum artifact.tar > artifact.tar.sha256
    # sha256sum --check artifact.tar.sha256
    # tar tvf artifact.tar

    Example output of tar tvf artifact.tar:

    drwxr-xr-x ansible/ansible     0 2025-05-08 16:48 artifact/
    drwxr-xr-x ansible/ansible     0 2025-05-08 16:33 artifact/controller/
    -rw-r--r-- ansible/ansible 732615 2025-05-08 16:26 artifact/controller/controller.pgc
    drwxr-xr-x ansible/ansible      0 2025-05-08 16:33 artifact/controller/custom_configs/
    drwxr-xr-x ansible/ansible      0 2025-05-08 16:11 artifact/gateway/
    -rw-r--r-- ansible/ansible 231155 2025-05-08 16:28 artifact/gateway/gateway.pgc
    drwxr-xr-x ansible/ansible      0 2025-05-08 16:26 artifact/hub/
    -rw-r--r-- ansible/ansible 29252002 2025-05-08 16:26 artifact/hub/hub.pgc
    -rw-r--r-- ansible/ansible      614 2025-05-08 16:24 artifact/secrets.yml
    -rw-r--r-- ansible/ansible      338 2025-05-08 16:48 artifact/sha256sum.txt
  10. Download the artifact.tar and artifact.tar.sha256 files to your local machine, or transfer them to the target node with the scp command.

Chapter 7. Target environment

Prepare, configure, and validate your target Ansible Automation Platform environment.

7.1. Container-based Ansible Automation Platform

Prepare and assess your target container-based Ansible Automation Platform environment, and import and reconcile your migrated content.

7.1.1. Preparing and assessing the target environment

Transfer the migration artifact, install containerized Ansible Automation Platform, and configure the inventory file to match your source environment topology and database settings.

Procedure

  1. Validate the file system home folder size and make sure it has enough space to transfer the artifact.
  2. Transfer the artifact to the nodes where you will be working by using scp or any preferred file transfer method. It is recommended that you work from the platform gateway node as it has access to most systems. However, if you have access or file system space limitations due to the PostgreSQL dumps, work from the database node instead.
  3. Download the latest version of containerized Ansible Automation Platform from the Ansible Automation Platform download page.
  4. Validate the artifact checksum.
  5. Extract the artifact on the home folder for the user running the containers.

    $ cd ~
    $ sha256sum --check artifact.tar.sha256
    $ tar xf artifact.tar
    $ cd artifact
    $ sha256sum --check sha256sum.txt
  6. Generate an inventory file for your containerized deployment.

    Configure the inventory file to match the same topology as the source environment. Configure the component database names and the secret_key values from the artifact’s secrets.yml file.

    You can do this in two ways:

    • Set the extra variables in the inventory file.
    • Use the secrets.yml file as an additional variables file when running the installation program.

      1. Option 1: Extra variables in the inventory file

        $ egrep 'pg_database|_key' inventory
        controller_pg_database=<redacted>
        controller_secret_key=<redacted>
        gateway_pg_database=<redacted>
        gateway_secret_key=<redacted>
        hub_pg_database=<redacted>
        hub_secret_key=<redacted>
        __hub_database_fields=<redacted>
        Note

        The __hub_database_fields value comes from the hub_db_fields_encryption_key value in your secrets.yml file.

      2. Option 2: Additional variables file

        $ ansible-playbook -i inventory ansible.containerized_installer.install -e @~/artifact/secrets.yml -e "__hub_database_fields='{{ hub_db_fields_encryption_key }}'"
  7. Install and configure the containerized target environment.
  8. Verify that the PostgreSQL database is version 15.
  9. Create a backup of the initial containerized environment.

    $ ansible-playbook -i <path_to_inventory> ansible.containerized_installer.backup
  10. Verify the fresh installation functions correctly.

7.1.2. Importing the migration content to the target environment

To import your migration content into the target environment, stop the containerized services, import the database dumps, and then restart the services.

Procedure

  1. Stop the containerized services, except the database.

    1. In all nodes, if Performance Co-Pilot is configured, run the following command:

      $ systemctl --user stop pcp
    2. Access the automation controller node and run:

      $ systemctl --user stop automation-controller-task automation-controller-web automation-controller-rsyslog
      $ systemctl --user stop receptor
    3. Access the automation hub node and run:

      $ systemctl --user stop automation-hub-api automation-hub-content automation-hub-web automation-hub-worker-1 automation-hub-worker-2
    4. Access the Event-Driven Ansible node and run:

      $ systemctl --user stop automation-eda-scheduler automation-eda-daphne automation-eda-web automation-eda-api automation-eda-worker-1 automation-eda-worker-2 automation-eda-activation-worker-1 automation-eda-activation-worker-2
    5. Access the platform gateway node and run:

      $ systemctl --user stop automation-gateway automation-gateway-proxy
    6. Access the platform gateway node when using standalone Redis, or all nodes from the Redis group in your inventory file when using clustered Redis, and run:

      $ systemctl --user stop redis-unix redis-tcp
      Note

      In an enterprise deployment, the components run on different nodes. Run the commands on each component node.
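
    To confirm that the services on a node have stopped, you can list the matching user units. This is a quick check, assuming the default unit names shown above:

      $ systemctl --user --no-pager list-units 'automation-*' 'receptor*' 'redis-*'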

  2. Import database dumps to the containerized environment.

    1. If you are using an Ansible Automation Platform managed database, you must create a temporary container to run the psql and pg_restore commands. Run this command from the database node:

      $ podman run -it --rm --name postgresql_restore_temp --network host --volume ~/aap/tls/extracted:/etc/pki/ca-trust/extracted:z --volume ~/aap/postgresql/server.crt:/var/lib/pgsql/server.crt:ro,z --volume ~/aap/postgresql/server.key:/var/lib/pgsql/server.key:ro,z --volume ~/artifact:/var/lib/pgsql/backups:ro,z registry.redhat.io/rhel8/postgresql-15:latest bash
      Note

      The command above opens a shell inside the container named postgresql_restore_temp with the artifact mounted at /var/lib/pgsql/backups. Additionally, it mounts the PostgreSQL certificates to ensure that you can resolve the correct certificates.

      The command assumes that the image registry.redhat.io/rhel8/postgresql-15:latest is available. If you are missing the image, check the available images for the user with podman images.

      It also assumes that the artifact is located in the current user’s home folder. If the artifact is located elsewhere, replace ~/artifact with the required path.

    2. If you are using a customer-provided (external) database, you can run the psql and pg_restore commands from any node that has these commands installed and that has access to the database. Reach out to your database administrator if you are unsure.
    3. From inside the container, access the database and ensure the users have the CREATEDB role.

      bash-4.4$ psql -h <pg_hostname> -U postgres
      postgres=# \l
                 Name          |     Owner     | Encoding |   Collate   |    Ctype    | ICU Locale | Locale Provider | Access privileges
      -------------------------+---------------+----------+-------------+-------------+------------+-----------------+-------------------
       automationedacontroller | eda           | UTF8     | en_US.UTF-8 | en_US.UTF-8 |            | libc            |
       automationhub           | automationhub | UTF8     | en_US.UTF-8 | en_US.UTF-8 |            | libc            |
       awx                     | awx           | UTF8     | en_US.UTF-8 | en_US.UTF-8 |            | libc            |
       gateway                 | gateway       | UTF8     | en_US.UTF-8 | en_US.UTF-8 |            | libc            |
      ...
    4. For each component database, grant the CREATEDB role to its owner. For example:

      postgres=# ALTER ROLE awx WITH CREATEDB;
      postgres=# \q

      Replace awx with the database owner.

    5. With the CREATEDB role in place, access the path where the artifact is mounted and run the pg_restore commands.

      bash$ cd /var/lib/pgsql/backups
      bash$ pg_restore --clean --create --no-owner -h <pg_hostname> -U <component_pg_user> -d template1 <component>/<component>.pgc
    6. After the restore, remove the permissions from the user. For example:

      postgres=# ALTER ROLE awx WITH NOCREATEDB;
      postgres=# \q

      Replace awx with each user that was granted the role.

  3. Start the containerized services, except the database.

    Note

    In an enterprise deployment, the components run on different nodes. Run the commands on each component node.

    1. In all nodes, if Performance Co-Pilot is configured, run the following command:

      $ systemctl --user start pcp
    2. Access the automation controller node and run:

      $ systemctl --user start automation-controller-task automation-controller-web automation-controller-rsyslog
      $ systemctl --user start receptor
    3. Access the automation hub node and run:

      $ systemctl --user start automation-hub-api automation-hub-content automation-hub-web automation-hub-worker-1 automation-hub-worker-2
    4. Access the Event-Driven Ansible node and run:

      $ systemctl --user start automation-eda-scheduler automation-eda-daphne automation-eda-web automation-eda-api automation-eda-worker-1 automation-eda-worker-2  automation-eda-activation-worker-1 automation-eda-activation-worker-2
    5. Access the platform gateway node and run:

      $ systemctl --user start automation-gateway automation-gateway-proxy
    6. Access the platform gateway node when using standalone Redis, or all nodes from the Redis group in your inventory when using clustered Redis, and run:

      $ systemctl --user start redis-unix redis-tcp

7.1.3. Reconciling the target environment post-import

Perform the following post-import reconciliation steps to verify that your target environment functions correctly.

Procedure

  1. Deprovision the platform gateway configuration.

    • To deprovision the platform gateway configuration, SSH to the host serving an automation-gateway container as the same rootless user that runs the containers, and run the following commands to remove the platform gateway proxy configuration:

      $ podman exec -it automation-gateway bash
      $ aap-gateway-manage migrate
      $ aap-gateway-manage shell_plus
      >>> HTTPPort.objects.all().delete(); ServiceNode.objects.all().delete(); ServiceCluster.objects.all().delete()
  2. Transfer custom configurations and settings.

    • Edit the inventory file and apply any relevant extra_settings to each component by using the component_extra_settings.
  3. Update the Resource Server Secret Key for each component.

    1. Gather the current Resource Secret values for each component:

      $ podman exec -it automation-gateway bash -c 'aap-gateway-manage shell_plus --quiet -c "[print(cl.name, key.secret) for cl in ServiceCluster.objects.all() for key in cl.service_keys.all()]"'
    2. Validate the current secret values:

      $ for secret_name in eda_resource_server hub_resource_server controller_resource_server
      do
      echo $secret_name
      podman secret inspect $secret_name --showsecret | grep SecretData
      done
    3. If the secret value does not match the current values, delete the existing secret and re-create it, updating it with the new value:

      1. Delete the secret:

        $ podman secret rm <SECRET_NAME>
      2. Re-create the secret:

        $ echo "secret_value" | podman secret create <SECRET_NAME> -

        Replace the <SECRET_NAME> placeholder in the commands above with the appropriate secret name for each component: eda_resource_server (Event-Driven Ansible), hub_resource_server (automation hub), and controller_resource_server (automation controller).

  4. Re-run the installation program on the target environment by using the same inventory from the installation.
  5. Validate instances for automation execution.

    1. SSH to the host serving an automation-controller-task container as the rootless user, and run the following commands to validate and remove instances that are orphaned from the source artifact:

      $ podman exec -it automation-controller-task bash
      $ awx-manage list_instances
    2. Find nodes that are no longer part of this cluster. A good indicator is nodes with 0 capacity as they have failed their health checks:

      [ungrouped capacity=0]
      	[DISABLED] node1.example.org capacity=0 node_type=hybrid version=X.Y.Z heartbeat="..."
      	[DISABLED] node2.example.org capacity=0 node_type=execution version=ansible-runner-X.Y.Z heartbeat="..."
    3. Remove those nodes with awx-manage, leaving only the aap-controller-task instance:

      awx-manage deprovision_instance --host=node1.example.org
      awx-manage deprovision_instance --host=node2.example.org
  6. Repair orphaned automation hub content links for Pulp.

    • Run the following command from any host that has direct access to the automation hub address:

      $ curl -d '{"verify_checksums": true}' -X POST -k https://<gateway url>/api/galaxy/pulp/api/v3/repair/ -u <gateway_admin_user>:<gateway_admin_password>
  7. Reconcile instance groups configuration:

    1. Go to Automation Execution → Infrastructure → Instance Groups.
    2. Select the Instance Group and then select the Instances tab.
    3. Associate or disassociate instances as required.
  8. Reconcile decision environments and credentials:

    1. Go to Automation Decisions → Decision Environments.
    2. Edit each decision environment that references a registry URL that is unrelated to, or no longer accessible from, this new environment. For example, the automation hub decision environment might require modification for the target automation hub environment.
    3. Select each associated credential to these decision environments and ensure their addresses align with the new environment.
  9. Reconcile execution environments and credentials:

    1. Go to Automation Execution → Infrastructure → Execution Environments.
    2. Check each execution environment image and verify their addresses against the new environment.
    3. Go to Automation Execution → Infrastructure → Credentials.
    4. Edit each credential and ensure that all environment specific information aligns with the new environment.
  10. Verify any further customizations or configurations after the migration, such as RBAC rules with instance groups.

7.1.4. Validating the target environment

After completing the migration, validate that all components in your target environment function correctly.

Procedure

  1. Verify all migrated components function correctly.

    1. Platform gateway: Access the Ansible Automation Platform URL at https://<gateway_hostname>/ and verify that the dashboard loads correctly. Check that the platform gateway service is running and connected to automation controller.
    2. Automation controller: Under Automation Execution, check that projects, inventories, and job templates are present and configured.
    3. Automation hub: Under Automation Content, verify that collections, namespaces, and their contents are visible.
    4. Event-Driven Ansible (if applicable): Under Automation Decisions, verify that rule audits, rulebook activations, and projects are accessible.
    5. For each component, check the logs to ensure there are no startup errors or warnings:

      podman logs <container_name>
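
      For example, to list the container names on a node and inspect the most recent log entries for one of them (the container name is illustrative):

      $ podman ps --format "{{.Names}}"
      $ podman logs --tail 50 automation-gateway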
  2. Test workflows and automation processes.

    1. Run job templates: Run several key job templates, including those with dependencies on various credential types.
    2. Test workflow templates: Run workflow templates to ensure that workflow nodes run in the correct order and that the workflow completes successfully.
    3. Verify execution environments: Ensure that jobs run in the appropriate execution environments and can access required dependencies.
    4. Check job artifacts: Verify that job artifacts are properly stored and accessible.
    5. Validate job scheduling: Test scheduled jobs to ensure they run at the expected times.
  3. Validate user access and permissions.

    1. User authentication: Test login functionality with various user accounts to ensure authentication works correctly.
    2. Role-based access controls: Verify that users have appropriate permissions for organizations, projects, inventories, and job templates.
    3. Team memberships: Confirm that team memberships and team-based permissions are intact.
    4. API access: Test API tokens and ensure that API access is functioning properly.
    5. SSO integration (if applicable): Verify that Single Sign-On authentication is working correctly.
  4. Confirm content synchronization and availability.

    1. Collection synchronization: Check that you can synchronize collections from a remote.
    2. Collection Upload: Check that you can upload collections.
    3. Collection repositories: Verify that automation hub makes collections available and that execution environments can use them.
    4. Project synchronization: Check that projects can sync content from source control repositories.
    5. External content sources: Test synchronization from automation hub and Ansible Galaxy (if configured).
    6. Execution environment availability: Confirm that all required execution environments exist and that execution nodes can access them.
    7. Content dependencies: Verify that the system correctly resolves content dependencies when running jobs.

7.2. OpenShift Container Platform

Prepare and assess your target OpenShift Container Platform environment, and import and reconcile your migrated content.

7.2.1. Preparing and assessing the target environment

Transfer the migration artifact, create an OpenShift Container Platform project, and deploy Ansible Automation Platform by using the Operator with configurations that match your source environment.

Procedure

  1. Configure Ansible Automation Platform Operator for an Ansible Automation Platform deployment.
  2. Set up the database configuration (internal or external).
  3. Set up the Redis configuration (internal or external).
  4. Install Ansible Automation Platform by using Ansible Automation Platform Operator. A minimal custom resource sketch follows this procedure.
  5. Create a backup of the initial OpenShift Container Platform deployment.
  6. Verify the fresh installation functions correctly.
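
A minimal sketch of the AnsibleAutomationPlatform custom resource that this guide assumes, deployed with the name aap in the aap namespace. The spec shown is abbreviated; database and Redis settings depend on the choices made in the earlier steps:

---
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: aap        # the import procedure assumes this name
  namespace: aap   # and this namespace
spec:
  idle_aap: false  # set to true during the import to scale components down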

7.2.2. Importing the migration content to the target environment

To import your environment, scale down the Ansible Automation Platform components, restore the databases, replace the encryption secrets, and scale the services back up.

Note

The import process requires the latest version of Ansible Automation Platform, deployed with the name aap in the default aap namespace, and all default database names and database users.

Procedure

  1. Scale down Ansible Automation Platform components.

    1. Begin by scaling down the Ansible Automation Platform deployment by using idle_aap:

      oc patch ansibleautomationplatform aap --type merge -p '{"spec":{"idle_aap":true}}'
    2. Wait for component pods to stop. Only the 6 Operator pods will remain running.

      NAME                                                                  READY   STATUS      RESTARTS   AGE
      pod/aap-controller-migration-4.6.13-5swc6                             0/1     Completed   0          160m
      pod/aap-gateway-operator-controller-manager-6b75c95458-4zrxv          2/2     Running     0          26h
      pod/ansible-lightspeed-operator-controller-manager-b674c55b8-qncjp    2/2     Running     0          45h
      pod/automation-controller-operator-controller-manager-6b79d48d4cchn   2/2     Running     0          45h
      pod/automation-hub-operator-controller-manager-5cd674c984-5njfj       2/2     Running     0          45h
      pod/eda-server-operator-controller-manager-645f4db5-d2flt             2/2     Running     0          45h
      pod/resource-operator-controller-manager-86b8f7bb54-cvz6d             2/2     Running     0          45h
    3. Scale down the Ansible Automation Platform Gateway Operator and Ansible Automation Platform Controller Operator:

      oc scale --replicas=0 deployment aap-gateway-operator-controller-manager automation-controller-operator-controller-manager

      Example output:

      deployment.apps/aap-gateway-operator-controller-manager scaled
      deployment.apps/automation-controller-operator-controller-manager scaled
  2. Scale up the idled Postgres StatefulSet.

    oc scale --replicas=1 statefulset.apps/aap-postgres-15
  3. Prepare a temporary environment for the database restore.

    1. Create a temporary Persistent Volume Claim (PVC) with appropriate settings and sizing.

      aap-temp-pvc.yaml

      ---
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: aap-temp-pvc
        namespace: aap
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 200Gi

      oc create -f aap-temp-pvc.yaml
    2. Obtain the existing PostgreSQL image to use for temporary deployment:

      echo $(oc get pod/aap-postgres-15-0 -o jsonpath="{.spec.containers[].image}")
    3. Create a temporary PostgreSQL deployment with the mounted temporary PVC:

      aap-temp-postgres.yaml

      ---
      kind: Deployment
      apiVersion: apps/v1
      metadata:
        name: aap-temp-postgres
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: aap-temp-postgres
        template:
          metadata:
            labels:
              app: aap-temp-postgres
          spec:
            containers:
              - name: aap-temp-postgres
                image: <postgres image from previous step>
                command:
                  - /bin/sh
                  - '-c'
                  - sleep infinity
                imagePullPolicy: Always
                securityContext:
                  runAsNonRoot: true
                  allowPrivilegeEscalation: false
                volumeMounts:
                  - name: aap-temp-pvc
                    mountPath: /tmp/aap-temp-pvc
            volumes:
              - name: aap-temp-pvc
                persistentVolumeClaim:
                  claimName: aap-temp-pvc

      oc create -f aap-temp-postgres.yaml
  4. Copy the export artifact to the temporary PostgreSQL pod.

    1. First, obtain the pod name and set it as an environment variable:

      export AAP_TEMP_POSTGRES=$(oc get pods --no-headers -o custom-columns="metadata.name" | grep aap-temp-postgres)
    2. Test the environment variable:

      echo $AAP_TEMP_POSTGRES

      Example output:

      aap-temp-postgres-7b6c57f87f-s2ldp
    3. Copy the artifact and checksum to the PVC:

      oc cp artifact.tar $AAP_TEMP_POSTGRES:/tmp/aap-temp-pvc/
      oc cp artifact.tar.sha256 $AAP_TEMP_POSTGRES:/tmp/aap-temp-pvc/
  5. Restore databases to Ansible Automation Platform PostgreSQL by using the temporary PostgreSQL pod.

    1. First, obtain the PostgreSQL passwords for all three databases and the PostgreSQL admin password:

      echo "" && for secret in aap-controller-postgres-configuration aap-hub-postgres-configuration aap-gateway-postgres-configuration
      do
      echo $secret
      echo "PASSWORD: `oc get secrets $secret -o jsonpath="{.data['password']}" | base64 -d`"
      echo "USER: `oc get secrets $secret -o jsonpath="{.data['username']}" | base64 -d`"
      echo "DATABASE: `oc get secrets $secret -o jsonpath="{.data['database']}" | base64 -d`"
      echo ""
      done && echo "POSTGRES ADMIN PASSWORD: `oc get secrets aap-gateway-postgres-configuration -o jsonpath="{.data['postgres_admin_password']}" | base64 -d`"
    2. Enter into the temporary PostgreSQL deployment and change directory to the mounted PVC containing the copied artifact:

      oc exec -it deployment.apps/aap-temp-postgres -- /bin/bash
    3. Inside the pod, change directory to /tmp/aap-temp-pvc and list its contents:

      cd /tmp/aap-temp-pvc && ls -l

      Example output:

      total 2240
      -rw-r--r--. 1 1000900000 1000900000 2273280 Jun 13 17:41 artifact.tar
      -rw-r--r--. 1 1000900000 1000900000      79 Jun 13 17:42 artifact.tar.sha256
      drwxrws---. 2 root       1000900000   16384 Jun 13 17:40 lost+found
    4. Verify the archive:

      sha256sum --check artifact.tar.sha256

      Example output:

      artifact.tar: OK
    5. Extract the artifact and verify its contents:

      tar xf artifact.tar && cd artifact && sha256sum --check sha256sum.txt

      Example output:

       ./controller/controller.pgc: OK
       ./gateway/gateway.pgc: OK
       ./hub/hub.pgc: OK
    6. Drop the automation controller database:

      dropdb -h aap-postgres-15 automationcontroller
    7. Alter the user temporarily with the CREATEDB role:

      postgres=# ALTER USER automationcontroller WITH CREATEDB;
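
      The postgres=# prompt above assumes an open psql session to the platform database as the postgres admin user. One way to open that session from inside the temporary pod, using the admin password gathered earlier, is:

      psql -h aap-postgres-15 -U postgres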
    8. Create the database:

      createdb -h aap-postgres-15 -U automationcontroller automationcontroller
    9. Revert temporary user permission:

      postgres=# ALTER USER automationcontroller NOCREATEDB;
    10. Restore the automation controller database:

      pg_restore --clean --create --no-owner -h aap-postgres-15 -U automationcontroller -d automationcontroller controller/controller.pgc
    11. Restore the automation hub database:

      pg_restore --clean --create --no-owner -h aap-postgres-15 -U automationhub -d automationhub hub/hub.pgc
    12. Restore the platform gateway database:

      pg_restore --clean --create --no-owner -h aap-postgres-15 -U gateway -d gateway gateway/gateway.pgc
    13. Exit the pod:

      exit
  6. Replace database field encryption secrets and clean up temporary resources.

    1. Replace database field encryption secrets:

      oc set data secret/aap-controller-secret-key secret_key="<unencoded controller_secret_key value from secrets.yml>"
      oc set data secret/aap-db-fields-encryption-secret secret_key="<unencoded gateway_secret_key value from secrets.yml>"
      oc set data secret/aap-hub-db-fields-encryption database_fields.symmetric.key="<unencoded hub_db_fields_encryption_key value from secrets.yml>"
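
      If the secrets.yml file from the artifact is available on the workstation where you run oc, one way to avoid copy and paste errors is to read the values directly from the file. This sketch assumes an unencrypted copy of the file with the flat key: value layout shown in the secrets file section:

      oc set data secret/aap-controller-secret-key secret_key="$(awk '/^controller_secret_key:/ {print $2}' secrets.yml)"
      oc set data secret/aap-db-fields-encryption-secret secret_key="$(awk '/^gateway_secret_key:/ {print $2}' secrets.yml)"
      oc set data secret/aap-hub-db-fields-encryption database_fields.symmetric.key="$(awk '/^hub_db_fields_encryption_key:/ {print $2}' secrets.yml)"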
    2. Clean up the temporary PostgreSQL and PVC:

      oc delete -f aap-temp-postgres.yaml
      oc delete -f aap-temp-pvc.yaml
  7. Scale Ansible Automation Platform components back up.

    1. Scale the platform gateway and automation controller Operators back up and wait for the platform gateway Operator reconciliation loop to complete:

      When the reconciliation loop completes, the PostgreSQL StatefulSet returns to idle.

      oc scale --replicas=1 deployment aap-gateway-operator-controller-manager automation-controller-operator-controller-manager

      Example output:

      deployment.apps/aap-gateway-operator-controller-manager scaled
      deployment.apps/automation-controller-operator-controller-manager scaled
      oc logs -f $(oc get pods  --no-headers -o custom-columns=":metadata.name" | grep aap-gateway-operator)
    2. Wait for reconciliation to stop.

      Example output:

      META: ending play
      {"level":"info","ts":"2025-06-12T15:41:29Z","logger":"runner","msg":"Ansible-runner exited successfully","job":"5672263053238024330","name":"aap","namespace":"aap"}
      
      ----- Ansible Task Status Event StdOut (aap.ansible.com/v1alpha1, Kind=AnsibleAutomationPlatform, aap/aap) -----
      
      
      PLAY RECAP *********************************************************************
      localhost                  : ok=45   changed=0    unreachable=0    failed=0    skipped=63   rescued=0    ignored=0
    3. Scale Ansible Automation Platform back up by setting idle_aap to false:

      oc patch ansibleautomationplatform aap --type=merge -p '{"spec":{"idle_aap":false}}'

      Example output:

      ansibleautomationplatform.aap.ansible.com/aap patched
  8. Wait for the aap-gateway pod to reach the Running state, and then clean up the old service endpoints (HTTPPort, Route, and ServiceNode objects):

    Example output:

    pod/aap-gateway-6c989b846c-47b91 2/2 Running 0 45s
    for i in HTTPPort Route ServiceNode; do oc exec -it deployment.apps/aap-gateway -- aap-gateway-manage shell -c 'from aap_gateway_api.models import '$i';print('$i'.objects.all().delete())'; done

    Example output:

    (23, {'aap_gateway_api.ServiceAPIRoute': 4, 'aap_gateway_api.AdditionalRoute': 7, 'aap_gateway_api.Route': 11, 'aap_gateway_api.HTTPPort': 1})
    (0, {})
    (4, {'aap_gateway_api.ServiceNode': 4})
  9. Run awx-manage to deprovision instances.

    1. Obtain the automation controller pod:

      export AAP_CONTROLLER_POD=$(oc get pods --no-headers -o custom-columns=":metadata.name" | grep aap-controller-task)
    2. Test the environment variable:

      echo $AAP_CONTROLLER_POD

      Example output:

      aap-controller-task-759b6d9759-r59q9
    3. Open a shell in the automation controller pod and list the registered instances:

      oc exec -it $AAP_CONTROLLER_POD -- /bin/bash
      awx-manage list_instances

      Example output:

      bash-4.4$
      [controlplane capacity=642 policy=100%]
      	aap-controller-task-759b6d9759-r59q9 capacity=642 node_type=control version=4.6.15 heartbeat="2025-06-12 21:39:48"
      	node1.example.org capacity=0 node_type=hybrid version=4.6.13 heartbeat="2025-05-30 17:22:11"
      
      [default capacity=0 policy=100%]
      	node1.example.org capacity=0 node_type=hybrid version=4.6.13 heartbeat="2025-05-30 17:22:11"
      	node2.example.org capacity=0 node_type=execution version=ansible-runner-2.4.1 heartbeat="2025-05-30 17:22:08"
    4. Remove the old nodes with awx-manage so that only the aap-controller-task instance remains:

      awx-manage deprovision_instance --host=node1.example.org
      awx-manage deprovision_instance --host=node2.example.org
  10. Run the following curl command to repair the automation hub filesystem data:

    curl -d '{"verify_checksums": true}' -X POST -k https://<aap url>/api/galaxy/pulp/api/v3/repair/ -u <admin_user>:<restored_admin_password>
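The ALTER USER statements in the restore steps above run at a postgres=# prompt rather than in the pod's bash shell. The following is a minimal sketch of opening that psql session from inside the temporary PostgreSQL pod; it assumes the aap-postgres-15 database service and postgres admin user shown in the commands above, and it prompts for the admin password printed by the earlier oc get secrets command:

  # Open a psql session against the database service as the admin user
  # (assumed service and user names); enter the postgres admin password
  # recovered from the aap-gateway-postgres-configuration secret when prompted.
  psql -h aap-postgres-15 -U postgres
  # At the postgres=# prompt, run the ALTER USER statements from the procedure,
  # then type \q to return to the pod's bash shell.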

After importing your migration artifact, perform the following steps to reconcile your target environment.

Procedure

  1. Modify the Django SECRET_KEY secrets to match the source platform (see the verification sketch after this list).
  2. Deprovision and reconfigure the platform gateway service nodes.
  3. Re-run the platform gateway node and service registration logic.
  4. Convert container-specific settings to their OpenShift Container Platform equivalents.
  5. Reconcile container resource allocations with OpenShift Container Platform resource requests and limits.
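
For the first step, you can confirm that the replaced secrets now hold the source values by decoding them and comparing them against secrets.yml. The following is a minimal sketch that reuses the secret and key names from the import procedure above; adjust the names if your deployment uses a different resource prefix:

  # Decode the current Django SECRET_KEY material and compare it with the
  # corresponding unencoded values in secrets.yml from the source environment.
  oc get secret aap-controller-secret-key -o jsonpath='{.data.secret_key}' | base64 -d; echo
  oc get secret aap-db-fields-encryption-secret -o jsonpath='{.data.secret_key}' | base64 -d; echo
  oc get secret aap-hub-db-fields-encryption -o jsonpath='{.data.database_fields\.symmetric\.key}' | base64 -d; echo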

7.2.4. Validating the target environment

Verify that all Ansible Automation Platform services are running, that credentials work correctly, and that migrated content such as projects, inventories, and job templates is accessible on OpenShift Container Platform.

Procedure

  1. Verify that all migrated components are functional (see the example checks after this list).
  2. Test workflows and automation processes.
  3. Validate user access and permissions.
  4. Confirm content synchronization and availability.
  5. Test integration with OpenShift Container Platform-specific features.
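
Many of these checks can be started from the command line. The following is a minimal sketch, assuming the aap instance name used earlier and a standard platform gateway ping endpoint behind the <aap url> placeholder; adjust resource names, URLs, paths, and credentials for your environment:

  # Confirm that all Ansible Automation Platform pods are running.
  oc get pods

  # Inspect the AnsibleAutomationPlatform custom resource for a healthy status.
  oc get ansibleautomationplatform aap -o yaml

  # Basic API smoke test through the platform gateway
  # (assumed ping endpoint; adjust the path if it differs in your deployment).
  curl -k https://<aap url>/api/gateway/v1/ping/ -u <admin_user>:<admin_password>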

7.3. Managed Ansible Automation Platform

Prepare and migrate your source environment to a Managed Ansible Automation Platform deployment, and reconcile the target environment post-migration.

Submit a support ticket on the Red Hat Customer Portal to request a migration to Managed Ansible Automation Platform.

Prerequisites

  • You have a migration artifact from your source environment.

Procedure

  1. Submit a support ticket on the Red Hat Customer Portal requesting a migration to Managed Ansible Automation Platform.

    Include the following information in the support ticket:

    • Source installation type (RPM-based, container-based, or OpenShift Container Platform)
    • Managed Ansible Automation Platform URL or deployment name
    • Source version (installer or Operator version)
  2. The Ansible Site Reliability Engineering (SRE) team provides instructions in the support ticket on how to upload the resulting migration artifact to secure storage for processing.
  3. The Ansible SRE team imports the migration artifact into the identified target instance.
  4. The Ansible SRE team notifies you through the support ticket when the migration is complete.

After migrating to Managed Ansible Automation Platform, update the necessary configurations.

Procedure

  1. Log in to the Managed Ansible Automation Platform instance by using the local administrator account to confirm that data was imported.
  2. Perform the following actions based on the configuration of the source deployment:

    1. Reconfigure Single Sign-On (SSO) authenticators and mappings to reflect the new URLs.
    2. Update private automation hub content to reflect the new URLs.

      1. Run the following command to update the automation hub repositories:

        curl -d '{"verify_checksums": true}' -X POST -k https://<platform url>/api/galaxy/pulp/api/v3/repair/ -u <admin_user>:<admin_password>
      2. Perform a sync on any repositories configured in automation hub.
      3. Push any custom execution environments from the source automation hub to the target automation hub (see the sketch after this procedure).
    3. Reconfigure automation mesh.
  3. After migration, you can request standard Site Reliability Engineering (SRE) tasks through support tickets, such as configuration of custom certificates, a custom domain, or connectivity through private endpoints.
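
For pushing custom execution environments, the transfer is a standard container image copy between the two automation hub registries. The following is a minimal sketch using podman with placeholder registry URLs and image names; it assumes you can authenticate to both the source and the target automation hub:

  # Log in to both automation hub container registries (placeholder URLs).
  podman login <source hub url>
  podman login <target hub url>

  # Pull the custom execution environment from the source hub, retag it for
  # the target hub, and push it.
  podman pull <source hub url>/<ee image>:<tag>
  podman tag <source hub url>/<ee image>:<tag> <target hub url>/<ee image>:<tag>
  podman push <target hub url>/<ee image>:<tag>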

Legal Notice

Copyright © 2025 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.