
Chapter 7. Target environment


Prepare, configure, and validate your target Ansible Automation Platform environment.

7.1. Container-based Ansible Automation Platform

Prepare and assess your target container-based Ansible Automation Platform environment, and import and reconcile your migrated content.

7.1.1. Preparing and assessing the target environment

To prepare your target environment, perform the following steps.

Procedure

  1. Validate the home folder size on the file system and make sure it has enough free space to hold the transferred artifact.
  2. Transfer the artifact to the node where you will be working by using scp or any preferred file transfer method. Working from the platform gateway node is recommended because it has access to most systems. However, if you have access or file system space limitations due to the PostgreSQL dumps, work from the database node instead.
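    For example, a minimal transfer of the artifact and its checksum file to the platform gateway node might look like the following (the user and hostname are placeholders):

      $ scp artifact.tar artifact.tar.sha256 <user>@<gateway_node>:~/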
  3. Download the latest version of containerized Ansible Automation Platform from the Ansible Automation Platform download page.
  4. Validate the artifact checksum.
  5. Extract the artifact into the home folder of the user running the containers.

    $ cd ~
    $ sha256sum --check artifact.tar.sha256
    $ tar xf artifact.tar
    $ cd artifact
    $ sha256sum --check sha256sum.txt
  6. Generate the inventory file for the containerized deployment.

    Configure the inventory file to match the same topology as the source environment (a sketch of the group layout follows below). Configure the component database names and the secret_key values from the secrets.yml file in the artifact. You can do this either by setting extra variables in the inventory file or by using the secrets.yml file as an additional variables file when running the installation program.
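    For the topology portion, the following is a sketch of the inventory group layout for an enterprise deployment with one node per component. The group names follow the containerized installer inventory; the hostnames are placeholders:

      [automationgateway]
      gateway.example.org

      [automationcontroller]
      controller.example.org

      [automationhub]
      hub.example.org

      [automationeda]
      eda.example.org

      [database]
      db.example.org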

    1. Option 1: Extra variables in the inventory file

      $ egrep 'pg_database|_key' inventory
      controller_pg_database=<redacted>
      controller_secret_key=<redacted>
      gateway_pg_database=<redacted>
      gateway_secret_key=<redacted>
      hub_pg_database=<redacted>
      hub_secret_key=<redacted>
      __hub_database_fields=<redacted>
      Note

      The __hub_database_fields value comes from the hub_db_fields_encryption_key value in the secrets.yml file from the artifact.

    2. Option 2: Additional variables file

      $ ansible-playbook -i inventory ansible.containerized_installer.install -e @~/artifact/secrets.yml -e "__hub_database_fields='{{ hub_db_fields_encryption_key }}'"
  7. Install and configure the containerized target environment.
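    A minimal form of the install command, assuming the inventory file and any secret variables are already in place (this is the same collection playbook shown in Option 2 above):

      $ ansible-playbook -i inventory ansible.containerized_installer.install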
  8. Verify that the PostgreSQL database is running version 15.
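    One way to confirm the version, from any node that can reach the database, is the same psql access pattern used later in this chapter:

      $ psql -h <pg_hostname> -U postgres -c 'SELECT version();'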
  9. Create a backup of the initial containerized environment.

    $ ansible-playbook -i <path_to_inventory> ansible.containerized_installer.backup
  10. Ensure the fresh installation is functional.
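    As a quick spot check on each node, list the user services created by the installer and confirm that they are active (the exact service names vary by component, as shown in the stop and start steps later in this chapter):

      $ systemctl --user list-units 'automation-*' 'receptor' 'redis-*'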

7.1.2. Importing the migration content to the target environment

To import your migration content into the target environment, stop the containerized services, import the database dumps, and then restart the services.

Procedure

  1. Stop the containerized services, except the database.

    1. In all nodes, if Performance Co-Pilot is configured, run the following command:

      $ systemctl --user stop pcp
    2. Access the automation controller node and run:

      $ systemctl --user stop automation-controller-task automation-controller-web automation-controller-rsyslog
      $ systemctl --user stop receptor
    3. Access the automation hub node and run:

      $ systemctl --user stop automation-hub-api automation-hub-content automation-hub-web automation-hub-worker-1 automation-hub-worker-2
    4. Access the Event-Driven Ansible node and run:

      $ systemctl --user stop automation-eda-scheduler automation-eda-daphne automation-eda-web automation-eda-api automation-eda-worker-1 automation-eda-worker-2 automation-eda-activation-worker-1 automation-eda-activation-worker-2
    5. Access the platform gateway node and run:

      $ systemctl --user stop automation-gateway automation-gateway-proxy
    6. Access the platform gateway node when using standalone Redis, or all nodes from the Redis group in your inventory file when using clustered Redis, and run:

      $ systemctl --user stop redis-unix redis-tcp
      Note

      In an enterprise deployment, the components run on different nodes. Run the commands on each component node.

  2. Import database dumps to the containerized environment.

    1. If you are using an Ansible Automation Platform managed database, you must create a temporary container to run the psql and pg_restore commands. Run this command from the database node.

      $ podman run -it --rm --name postgresql_restore_temp --network host --volume ~/aap/tls/extracted:/etc/pki/ca-trust/extracted:z --volume ~/aap/postgresql/server.crt:/var/lib/pgsql/server.crt:ro,z --volume ~/aap/postgresql/server.key:/var/lib/pgsql/server.key:ro,z --volume ~/artifact:/var/lib/pgsql/backups:ro,z registry.redhat.io/rhel8/postgresql-15:latest bash
      Note

      The command above opens a shell inside a container named postgresql_restore_temp with the artifact mounted at /var/lib/pgsql/backups. It also mounts the PostgreSQL TLS certificates so that connections to the database use the correct certificates.

      The command assumes the image registry.redhat.io/rhel8/postgresql-15:latest is available. If the image is missing, check the images available to the user with podman images.

      It also assumes that the artifact is located in the current user's home folder. If the artifact is located elsewhere, replace ~/artifact with the correct path.

    2. If you are using a customer-provided (external) database, you can run the psql and pg_restore commands from any node that has these commands installed and has access to the database. Contact your database administrator if you are unsure.
    3. From inside the container, access the database and ensure the users have the CREATEDB role.

      bash-4.4$ psql -h <pg_hostname> -U postgres
      postgres=# \l
                Name           |     Owner     | Encoding |   Collate   |    Ctype    | ICU Locale | Locale Provider | Access privileges
      -------------------------+---------------+----------+-------------+-------------+------------+-----------------+--------------------
       automationedacontroller | eda           | UTF8     | en_US.UTF-8 | en_US.UTF-8 |            | libc            |
       automationhub           | automationhub | UTF8     | en_US.UTF-8 | en_US.UTF-8 |            | libc            |
       awx                     | awx           | UTF8     | en_US.UTF-8 | en_US.UTF-8 |            | libc            |
       gateway                 | gateway       | UTF8     | en_US.UTF-8 | en_US.UTF-8 |            | libc            |
      ...
    4. For each component database, grant the CREATEDB role to its owner. For example:

      postgres=# ALTER ROLE awx WITH CREATEDB;
      postgres=# \q

      Replace awx with the database owner.

    5. With the CREATEDB role in place, access the path where the artifact is mounted, and run the pg_restore commands.

      bash$ cd /var/lib/pgsql/backups
      bash$ pg_restore --clean --create --no-owner -h <pg_hostname> -U <component_pg_user> -d template1 <component>/<component>.pgc
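      For example, restoring the automation controller database with the directory layout used by the artifact looks like the following. The awx user matches the database owner shown in the \l output above; adjust it for your environment:

      bash$ pg_restore --clean --create --no-owner -h <pg_hostname> -U awx -d template1 controller/controller.pgc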
    6. After the restore, remove the CREATEDB role from the user. For example:

      postgres=# ALTER ROLE awx WITH NOCREATEDB;
      postgres=# \q

      Replace awx with each user that was granted the role.

  3. Start the containerized services, except the database.

    1. In all nodes, if Performance Co-Pilot is configured, run the following command:

      $ systemctl --user start pcp
    2. Access the automation controller node and run:

      $ systemctl --user start automation-controller-task automation-controller-web automation-controller-rsyslog
      $ systemctl --user start receptor
    3. Access the automation hub node and run:

      $ systemctl --user start automation-hub-api automation-hub-content automation-hub-web automation-hub-worker-1 automation-hub-worker-2
    4. Access the Event-Driven Ansible node and run:

      $ systemctl --user start automation-eda-scheduler automation-eda-daphne automation-eda-web automation-eda-api automation-eda-worker-1 automation-eda-worker-2  automation-eda-activation-worker-1 automation-eda-activation-worker-2
    5. Access the platform gateway node and run:

      $ systemctl --user start automation-gateway automation-gateway-proxy
    6. Access the platform gateway node when using standalone Redis, or all nodes from the Redis group in your inventory when using clustered Redis, and run:

      $ systemctl --user start redis-unix redis-tcp
      Note

      In an enterprise deployment, the components run on different nodes. Run the commands on each component node.

7.1.3. Reconciling the target environment post-import

Perform the following post-import reconciliation steps to ensure your target environment is fully functional and correctly configured.

Procedure

  1. Deprovision the platform gateway configuration.

    • SSH to the host serving an automation-gateway container as the same rootless user from section 4.2.6, and run the following commands to remove the platform gateway proxy configuration:

      $ podman exec -it automation-gateway bash
      $ aap-gateway-manage migrate
      $ aap-gateway-manage shell_plus
      >>> HTTPPort.objects.all().delete(); ServiceNode.objects.all().delete(); ServiceCluster.objects.all().delete()
  2. Transfer custom configurations and settings.

    • Edit the inventory file and apply any relevant extra_settings to each component by using the component_extra_settings.
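      A sketch of what such an entry might look like in the inventory file, assuming the list format used by the containerized installer for extra settings (the setting shown is only an illustrative example):

      controller_extra_settings:
        - setting: SESSION_COOKIE_AGE
          value: 1800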
  3. Update the Resource Server Secret Key for each component.

    1. Gather the current Resource Secret values for each component:

      $ podman exec -it automation-gateway bash -c 'aap-gateway-manage shell_plus --quiet -c "[print(cl.name, key.secret) for cl in ServiceCluster.objects.all() for key in cl.service_keys.all()]"'
    2. Validate the current secret values:

      $ for secret_name in eda_resource_server hub_resource_server controller_resource_server
      do
      echo $secret_name
      podman secret inspect $secret_name --showsecret | grep SecretData
      done
    3. If the secret value does not match the current values, delete the existing secret and re-create it, updating it with the new value:

      1. Delete the secret:

        $ podman secret rm <SECRET_NAME>
      2. Re-create the secret:

        $ echo "<secret_value>" | podman secret create <SECRET_NAME> -

        Replace the <SECRET_NAME> placeholder in the commands above with the appropriate secret name for each component: eda_resource_server (Event-Driven Ansible), hub_resource_server (automation hub), and controller_resource_server (automation controller).
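        For example, rotating the automation controller secret with the value gathered from the platform gateway output earlier in this step:

        $ podman secret rm controller_resource_server
        $ echo "<controller_resource_server_value>" | podman secret create controller_resource_server -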

  4. Re-run the installation program on the target environment by using the same inventory from the installation.
  5. Validate instances for automation execution.

    1. SSH to the host serving an automation-controller-task container as the rootless user, and run the following commands to validate and remove instances that are orphaned from the source artifact:

      $ podman exec -it automation-controller-task bash
      $ awx-manage list_instances
    2. Find nodes that are no longer part of this cluster. A good indicator is nodes with 0 capacity as they have failed their health checks:

      [ungrouped capacity=0]
      	[DISABLED] node1.example.org capacity=0 node_type=hybrid version=X.Y.Z heartbeat="..."
      	[DISABLED] node2.example.org capacity=0 node_type=execution version=ansible-runner-X.Y.Z heartbeat="..."
    3. Remove those nodes with awx-manage, leaving only the aap-controller-task instance:

      awx-manage deprovision_instance --host=node1.example.org
      awx-manage deprovision_instance --host=node2.example.org
  6. Repair orphaned automation hub content links for Pulp.

    • Run the following command from any host that has direct access to the automation hub address:

      $ curl -d '{"verify_checksums": true}' -X POST -k https://<gateway url>/api/galaxy/pulp/api/v3/repair/ -u <gateway_admin_user>:<gateway_admin_password>
  7. Reconcile instance groups configuration:

    1. Go to Automation Execution → Infrastructure → Instance Groups.
    2. Select the Instance Group and then select the Instances tab.
    3. Associate or disassociate instances as required.
  8. Reconcile decision environments and credentials:

    1. Go to Automation Decisions → Decision Environments.
    2. Edit each decision environment that references a registry URL that is unrelated to, or no longer accessible from, this new environment. For example, the automation hub decision environment might require modification for the target automation hub environment.
    3. Select each credential associated with these decision environments and ensure that its address aligns with the new environment.
  9. Reconcile execution environments and credentials:

    1. Go to Automation Execution → Infrastructure → Execution Environments.
    2. Check each execution environment image and verify its address against the new environment.
    3. Go to Automation Execution → Infrastructure → Credentials.
    4. Edit each credential and ensure that all environment-specific information aligns with the new environment.
  10. Verify any further customizations or configurations after the migration, such as RBAC rules with instance groups.

7.1.4. Validating the target environment

After completing the migration, validate your target environment to ensure all components are functional and operating as expected.

Procedure

  1. Verify all migrated components are functional.

    To ensure that all components have been successfully migrated, verify that each component is operational and accessible:

    1. Platform gateway: Access the Ansible Automation Platform URL at https://<gateway_hostname>/ and verify that the dashboard loads correctly. Check that the platform gateway service is running and properly connected to automation controller.
    2. Automation controller: Under Automation Execution, check that projects, inventories, and job templates are present and properly configured.
    3. Automation hub: Under Automation Content, verify that collections, namespaces, and their contents are visible.
    4. Event-Driven Ansible (if applicable): Under Automation Decisions, verify that rule audits, rulebook activations, and projects are accessible.

      For each component, check the logs to ensure there are no startup errors or warnings:

      podman logs <container_name>
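      If you are unsure of the container names on a node, list them first:

      podman ps --format "{{.Names}}"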
  2. Test workflows and automation processes.

    After you have confirmed that all components are functional, test critical automation workflows to ensure they operate correctly in the containerized environment:

    1. Run job templates: Run several key job templates, including those with dependencies on various credential types.
    2. Test workflow templates: Run workflow templates to ensure that workflow nodes run in the correct order and that the workflow completes successfully.
    3. Verify execution environments: Ensure that jobs run in the appropriate execution environments and can access required dependencies.
    4. Check job artifacts: Verify that job artifacts are properly stored and accessible.
    5. Validate job scheduling: Test scheduled jobs to ensure they run at the expected times.
  3. Validate user access and permissions.

    Confirm that user accounts, teams, and roles were correctly migrated:

    1. User authentication: Test login functionality with various user accounts to ensure authentication works correctly.
    2. Role-based access controls: Verify that users have appropriate permissions for organizations, projects, inventories, and job templates.
    3. Team memberships: Confirm that team memberships and team-based permissions are intact.
    4. API access: Test API tokens and ensure that API access is functioning properly.
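      A minimal token-based smoke test, assuming a personal access token and the default platform gateway proxy layout (the same /api/<service>/ pattern used by the repair endpoint earlier in this chapter):

      $ curl -k -H "Authorization: Bearer <token>" https://<gateway_hostname>/api/controller/v2/me/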
    5. SSO integration (if applicable): Verify that Single Sign-On authentication is working correctly.
  4. Confirm content synchronization and availability.

    Ensure that all content sources are properly configured and accessible:

    • Collection synchronization: Check that you can synchronize collections from a remote.
    • Collection upload: Check that you can upload collections.
    • Collection repositories: Verify that collections are available in automation hub and can be used in execution environments.
    • Project synchronization: Check that projects can sync content from source control repositories.
    • External content sources: Test synchronization from automation hub and Ansible Galaxy (if configured).
    • Execution environment availability: Confirm that all required execution environments are available and can be accessed by the execution nodes.
    • Content dependencies: Verify that content dependencies are correctly resolved when running jobs.

7.2. OpenShift Container Platform

Prepare and assess your target OpenShift Container Platform environment, and import and reconcile your migrated content.

7.2.1. Preparing and assessing the target environment

To prepare and assess your target environment, perform the following steps.

Procedure

  1. Configure Ansible Automation Platform Operator for an Ansible Automation Platform deployment.
  2. Set up the database configuration (internal or external).
  3. Set up the Redis configuration (internal or external).
  4. Install Ansible Automation Platform by using Ansible Automation Platform Operator (a minimal custom resource sketch follows this procedure).
  5. Create a backup of the initial OpenShift Container Platform deployment.
  6. Verify the fresh installation is functional.
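The import steps in the next section assume that the Ansible Automation Platform custom resource is named 'aap' in the default 'aap' namespace. A minimal sketch of such a resource follows; a real deployment typically also sets spec fields for the database, Redis, and component configuration described in steps 1 through 3:

    aap.yaml

    ---
    apiVersion: aap.ansible.com/v1alpha1
    kind: AnsibleAutomationPlatform
    metadata:
      name: aap
      namespace: aap
    spec: {}

    oc create -f aap.yaml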

7.2.2. Importing the migration content to the target environment

To import your migration content into the target environment, scale down the Ansible Automation Platform components, restore the databases, replace the encryption secrets, and scale the services back up.

Note

The import process requires the latest version of Ansible Automation Platform, deployed with the name 'aap' in the default 'aap' namespace, and all default database names and database users.

Procedure

  1. Begin by scaling down the Ansible Automation Platform deployment by using idle_aap.

    oc patch ansibleautomationplatform aap --type merge -p '{"spec":{"idle_aap":true}}'

    Wait for the component pods to stop. Only the six Operator pods remain running.

    NAME                                                                  READY   STATUS      RESTARTS   AGE
    pod/aap-controller-migration-4.6.13-5swc6                             0/1     Completed   0          160m
    pod/aap-gateway-operator-controller-manager-6b75c95458-4zrxv          2/2     Running     0          26h
    pod/ansible-lightspeed-operator-controller-manager-b674c55b8-qncjp    2/2     Running     0          45h
    pod/automation-controller-operator-controller-manager-6b79d48d4cchn   2/2     Running     0          45h
    pod/automation-hub-operator-controller-manager-5cd674c984-5njfj       2/2     Running     0          45h
    pod/eda-server-operator-controller-manager-645f4db5-d2flt             2/2     Running     0          45h
    pod/resource-operator-controller-manager-86b8f7bb54-cvz6d             2/2     Running     0          45h
  2. Scale down the Ansible Automation Platform Gateway Operator and Ansible Automation Platform Controller Operator.

    oc scale --replicas=0 deployment aap-gateway-operator-controller-manager automation-controller-operator-controller-manager

    Example output:

    deployment.apps/aap-gateway-operator-controller-manager scaled
    deployment.apps/automation-controller-operator-controller-manager scaled
  3. Scale up the idled Postgres StatefulSet.

    oc scale --replicas=1 statefulset.apps/aap-postgres-15
  4. Create a temporary Persistent Volume Claim (PVC) with appropriate settings and sizing.

    aap-temp-pvc.yaml

    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: aap-temp-pvc
      namespace: aap
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 200Gi
    oc create -f aap-temp-pvc.yaml
  5. Obtain the existing PostgreSQL image to use for temporary deployment.

    echo $(oc get pod/aap-postgres-15-0 -o jsonpath="{.spec.containers[].image}")
  6. Create a temporary PostgreSQL deployment with the mounted temporary PVC.

    aap-temp-postgres.yaml

    ---
    kind: Deployment
    apiVersion: apps/v1
    metadata:
      name: aap-temp-postgres
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: aap-temp-postgres
      template:
        metadata:
          labels:
            app: aap-temp-postgres
        spec:
          containers:
            - name: aap-temp-postgres
              image: <postgres image from previous step>
              command:
                - /bin/sh
                - '-c'
                - sleep infinity
              imagePullPolicy: Always
              securityContext:
                runAsNonRoot: true
                allowPrivilegeEscalation: false
              volumeMounts:
                - name: aap-temp-pvc
                  mountPath: /tmp/aap-temp-pvc
          volumes:
            - name: aap-temp-pvc
              persistentVolumeClaim:
                claimName: aap-temp-pvc
    oc create -f aap-temp-postgres.yaml
  7. Copy the export artifact to the temporary PostgreSQL pod.

    First, obtain the pod name and set it as an environment variable:

    export AAP_TEMP_POSTGRES=$(oc get pods --no-headers -o custom-columns="metadata.name" | grep aap-temp-postgres)

    Test the environment variable:

    echo $AAP_TEMP_POSTGRES

    Example output:

    aap-temp-postgres-7b6c57f87f-s2ldp

    Copy the artifact and checksum to the PVC:

    oc cp artifact.tar $AAP_TEMP_POSTGRES:/tmp/aap-temp-pvc/
    oc cp artifact.tar.sha256 $AAP_TEMP_POSTGRES:/tmp/aap-temp-pvc/
  8. Restore databases to Ansible Automation Platform PostgreSQL using the temporary PostgreSQL pod.

    First, obtain PostgreSQL passwords for all three databases and the PostgreSQL admin password:

    echo "" && for secret in aap-controller-postgres-configuration aap-hub-postgres-configuration aap-gateway-postgres-configuration
    do
    echo $secret
    echo "PASSWORD: `oc get secrets $secret -o jsonpath="{.data['password']}" | base64 -d`"
    echo "USER: `oc get secrets $secret -o jsonpath="{.data['username']}" | base64 -d`"
    echo "DATABASE: `oc get secrets $secret -o jsonpath="{.data['database']}" | base64 -d`"
    echo ""
    done && echo "POSTGRES ADMIN PASSWORD: `oc get secrets aap-gateway-postgres-configuration -o jsonpath="{.data['postgres_admin_password']}" | base64 -d`"

    Enter into the temporary PostgreSQL deployment and change directory to the mounted PVC containing the copied artifact:

    oc exec -it deployment.apps/aap-temp-postgres -- /bin/bash

    Inside the pod, change directory to /tmp/aap-temp-pvc and list its contents:

    cd /tmp/aap-temp-pvc && ls -l

    Example output:

    total 2240
    -rw-r--r--. 1 1000900000 1000900000 2273280 Jun 13 17:41 artifact.tar
    -rw-r--r--. 1 1000900000 1000900000      79 Jun 13 17:42 artifact.tar.sha256
    drwxrws---. 2 root       1000900000   16384 Jun 13 17:40 lost+found

    Verify the archive:

    sha256sum --check artifact.tar.sha256

    Example output:

    artifact.tar: OK

    Extract the artifact and verify its contents:

    tar xf artifact.tar && cd artifact && sha256sum --check sha256sum.txt

    Example output:

     ./controller/controller.pgc: OK
     ./gateway/gateway.pgc: OK
     ./hub/hub.pgc: OK

    Drop the automation controller database:

    dropdb -h aap-postgres-15 automationcontroller
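    The ALTER USER statements in the next steps are run from a psql session as the PostgreSQL admin user. A minimal sketch of opening that session, using the admin password gathered earlier:

    psql -h aap-postgres-15 -U postgres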

    Alter the user temporarily with the CREATEDB role:

    postgres=# ALTER USER automationcontroller WITH CREATEDB;

    Create the database:

    createdb -h aap-postgres-15 -U automationcontroller automationcontroller

    Revert temporary user permission:

    postgres=# ALTER USER automationcontroller NOCREATEDB;

    Restore the automation controller database:

    pg_restore --clean --create --no-owner -h aap-postgres-15 -U automationcontroller -d automationcontroller controller/controller.pgc

    Restore the automation hub database:

    pg_restore --clean --create --no-owner -h aap-postgres-15 -U automationhub -d automationhub hub/hub.pgc

    Restore the platform gateway database:

    pg_restore --clean --create --no-owner -h aap-postgres-15 -U gateway -d gateway gateway/gateway.pgc

    Exit the pod:

    exit
  9. Replace database field encryption secrets.

    oc set data secret/aap-controller-secret-key secret_key="<unencoded controller_secret_key value from secrets.yml>"
    oc set data secret/aap-db-fields-encryption-secret secret_key="<unencoded gateway_secret_key value from secrets.yml>"
    oc set data secret/aap-hub-db-fields-encryption database_fields.symmetric.key="<unencoded hub_db_fields_encryption_key value from secrets.yml>"
  10. Clean up the temporary PostgreSQL and PVC.

    oc delete -f aap-temp-postgres.yaml
    oc delete -f aap-temp-pvc.yaml
  11. Scale the platform gateway and automation controller Operators back up and wait for the platform gateway Operator reconciliation loop to complete.

    The PostgreSQL StatefulSet returns to idle.

    oc scale --replicas=1 deployment aap-gateway-operator-controller-manager automation-controller-operator-controller-manager

    Example output:

    deployment.apps/aap-gateway-operator-controller-manager scaled
    deployment.apps/automation-controller-operator-controller-manager scaled
    oc logs -f $(oc get pods --no-headers -o custom-columns=":metadata.name" | grep aap-gateway-operator)

    Wait for reconciliation to stop.

    Example output:

    META: ending play
    {"level":"info","ts":"2025-06-12T15:41:29Z","logger":"runner","msg":"Ansible-runner exited successfully","job":"5672263053238024330","name":"aap","namespace":"aap"}
    
    ----- Ansible Task Status Event StdOut (aap.ansible.com/v1alpha1, Kind=AnsibleAutomationPlatform, aap/aap) -----
    
    
    PLAY RECAP *********************************************************************
    localhost                  : ok=45   changed=0    unreachable=0    failed=0    skipped=63   rescued=0    ignored=0
  12. Scale Ansible Automation Platform back up using idle_aap.

    oc patch ansibleautomationplatform aap --type=merge -p '{"spec":{"idle_aap":false}}'

    Example output:

    ansibleautomationplatform.aap.ansible.com/aap patched
  13. Wait for the aap-gateway pod to be running and clean up old service endpoints.

    Wait for the pod to be running.

    Example output:

    pod/aap-gateway-6c989b846c-47b91 2/2 Running 0 45s
    for i in HTTPPort Route ServiceNode; do oc exec -it deployment.apps/aap-gateway -- aap-gateway-manage shell -c 'from aap_gateway_api.models import '$i';print('$i'.objects.all().delete())'; done

    Example output:

    (23, {'aap_gateway_api.ServiceAPIRoute': 4, 'aap_gateway_api.AdditionalRoute': 7, 'aap_gateway_api.Route': 11, 'aap_gateway_api.HTTPPort': 1})
    (0, {})
    (4, {'aap_gateway_api.ServiceNode': 4})
  14. Run awx-manage to deprovision instances.

    Obtain the automation controller pod:

    export AAP_CONTROLLER_POD=$(oc get pods --no-headers -o custom-columns=":metadata.name" | grep aap-controller-task)

    Test the environment variable:

    echo $AAP_CONTROLLER_POD

    Example output:

    aap-controller-task-759b6d9759-r59q9

    Enter into the automation controller pod:

    oc exec -it $AAP_CONTROLLER_POD -- /bin/bash
    awx-manage list_instances

    Example output:

    bash-4.4$
    [controlplane capacity=642 policy=100%]
    	aap-controller-task-759b6d9759-r59q9 capacity=642 node_type=control version=4.6.15 heartbeat="2025-06-12 21:39:48"
    	node1.example.org capacity=0 node_type=hybrid version=4.6.13 heartbeat="2025-05-30 17:22:11"
    
    [default capacity=0 policy=100%]
    	node1.example.org capacity=0 node_type=hybrid version=4.6.13 heartbeat="2025-05-30 17:22:11"
    	node2.example.org capacity=0 node_type=execution version=ansible-runner-2.4.1 heartbeat="2025-05-30 17:22:08"

    Remove old nodes with awx-manage, leaving only aap-controller-task:

    awx-manage deprovision_instance --host=node1.example.org
    awx-manage deprovision_instance --host=node2.example.org
  15. Run the curl command to repair automation hub filesystem data.

    curl -d '{"verify_checksums": true}' -X POST -k https://<aap url>/api/galaxy/pulp/api/v3/repair/ -u <admin_user>:<restored_admin_password>

7.2.3. Reconciling the target environment post-import

After importing your migration artifact, perform the following steps to reconcile your target environment.

Procedure

  1. Modify the Django SECRET_KEY secrets to match the source platform.
  2. Deprovision and reconfigure platform gateway service nodes.
  3. Re-run the platform gateway node and service registration logic.
  4. Convert container-specific settings to OpenShift Container Platform-appropriate formats.
  5. Reconcile container resource allocations to OpenShift Container Platform resources.

7.2.4. Validating the target environment

To validate your migrated environment, perform the following steps.

Procedure

  1. Verify all migrated components are functional.
  2. Test workflows and automation processes.
  3. Validate user access and permissions.
  4. Confirm content synchronization and availability.
  5. Test integration with OpenShift Container Platform-specific features.

7.3. Managed Ansible Automation Platform

Prepare and migrate your source environment to a Managed Ansible Automation Platform deployment, and reconcile the target environment post-migration.

7.3.1. Migrating to Managed Ansible Automation Platform

Follow this procedure to migrate to Managed Ansible Automation Platform.

Prerequisites

  • You have a migration artifact from your source environment.

Procedure

  1. Submit a support ticket on the Red Hat Customer Portal requesting a migration to Managed Ansible Automation Platform.

    The support ticket should include:

    • Source installation type (RPM, Containerized, OpenShift)
    • Managed Ansible Automation Platform URL or deployment name
    • Source version (installer or Operator version)
  2. The Ansible Site Reliability Engineering (SRE) team provides instructions in the support ticket on how to upload the resulting migration artifact to secure storage for processing.
  3. The Ansible SRE team imports the migration artifact into the identified target instance and notifies the customer through the support ticket.
  4. The Ansible SRE team notifies customers of successful migration.

7.3.2. Reconciling the target environment post-migration

After a successful migration, perform the following tasks:

Procedure

  1. Log in to the Managed Ansible Automation Platform instance by using the local administrator account to confirm that data was properly imported.
  2. You might need to perform the following actions based on the configuration of the source deployment:

    1. Reconfigure SSO authenticators and mappings to reflect the new URLs.
    2. Update private automation hub content to reflect the new URLs.

      1. Run the following command to update the automation hub repositories:

        curl -d '{"verify_checksums": true}' -X POST -k https://<platform url>/api/galaxy/pulp/api/v3/repair/ -u <admin_user>:<admin_password>
      2. Perform a sync on any repositories configured in automation hub.
      3. Push any custom execution environments from the source automation hub to the target automation hub.
    3. Reconfigure automation mesh.
  3. Following migration, you can request standard SRE tasks through support tickets, such as configuring custom certificates, a custom domain, or connectivity through private endpoints.