Chapter 7. Target environment
Prepare, configure, and validate your target Ansible Automation Platform environment.
7.1. Container-based Ansible Automation Platform
Prepare and assess your target container-based Ansible Automation Platform environment, and import and reconcile your migrated content.
7.1.1. Preparing and assessing the target environment
To prepare your target environment, perform the following steps.
Procedure
- Validate the file system home folder size and make sure it has enough space to transfer the artifact.
- Transfer the artifact to the nodes where you will be working by using scp or any preferred file transfer method. It is recommended that you work from the platform gateway node because it has access to most systems. However, if you have access or file system space limitations due to the PostgreSQL dumps, work from the database node.
- Download the latest version of containerized Ansible Automation Platform from the Ansible Automation Platform download page.
- Validate the artifact checksum.
Extract the artifact in the home folder of the user running the containers:

$ cd ~
$ sha256sum --check artifact.tar.sha256
$ tar xf artifact.tar
$ cd artifact
$ sha256sum --check sha256sum.txt

Generate an inventory file for the containerized deployment.
Configure the inventory file to match the same topology as the source environment. Configure the component database names and the secret_key values from the secrets.yml file in the artifact. You can do this either by setting the extra variables in the inventory file or by using the secrets.yml file as an additional variables file when running the installation program.

Option 1: Extra variables in the inventory file

Note: The __hub_database_fields value comes from the hub_db_fields_encryption_key value in your secret.

Option 2: Additional variables file

$ ansible-playbook -i inventory ansible.containerized_installer.install -e @~/artifact/secrets.yml -e "__hub_database_fields='{{ hub_db_fields_encryption_key }}'"
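As a rough sketch of Option 1, assuming installer variable names such as controller_secret_key and the component database name variables (verify these against your installer version and against the keys in secrets.yml), the inventory extra variables could look like this:

[all:vars]
# Values taken from secrets.yml in the artifact (variable names shown are illustrative assumptions)
controller_secret_key=<controller_secret_key value from secrets.yml>
gateway_secret_key=<gateway_secret_key value from secrets.yml>
eda_secret_key=<eda_secret_key value from secrets.yml>
__hub_database_fields=<hub_db_fields_encryption_key value from secrets.yml>
# Component database names, if the source environment did not use the defaults
controller_pg_database=<controller database name from the source>
gateway_pg_database=<gateway database name from the source>
hub_pg_database=<hub database name from the source>
eda_pg_database=<eda database name from the source>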
- Install and configure the containerized target environment.
- Verify that the PostgreSQL database is version 15.

Create a backup of the initial containerized environment:

$ ansible-playbook -i <path_to_inventory> ansible.containerized_installer.backup

- Ensure the fresh installation is functional.
7.1.2. Importing the migration content to the target environment
To import your migration content into the target environment, stop the containerized services, import the database dumps, and then restart the services.
Procedure
Stop the containerized services, except the database.
On all nodes, if Performance Co-Pilot is configured, run the following command:
$ systemctl --user stop pcp

Access the automation controller node and run:
$ systemctl --user stop automation-controller-task automation-controller-web automation-controller-rsyslog
$ systemctl --user stop receptor

Access the automation hub node and run:
$ systemctl --user stop automation-hub-api automation-hub-content automation-hub-web automation-hub-worker-1 automation-hub-worker-2

Access the Event-Driven Ansible node and run:
$ systemctl --user stop automation-eda-scheduler automation-eda-daphne automation-eda-web automation-eda-api automation-eda-worker-1 automation-eda-worker-2 automation-eda-activation-worker-1 automation-eda-activation-worker-2

Access the platform gateway node and run:
$ systemctl --user stop automation-gateway automation-gateway-proxy

Access the platform gateway node when using standalone Redis, or all nodes from the Redis group in your inventory file when using clustered Redis, and run:
$ systemctl --user stop redis-unix redis-tcp

Note: In an enterprise deployment, the components run on different nodes. Run the commands on each component node.
Import database dumps to the containerized environment.
If you are using an Ansible Automation Platform managed database, you must create a temporary container to run the psql and pg_restore commands. Run this command from the database node:

$ podman run -it --rm --name postgresql_restore_temp --network host --volume ~/aap/tls/extracted:/etc/pki/ca-trust/extracted:z --volume ~/aap/postgresql/server.crt:/var/lib/pgsql/server.crt:ro,z --volume ~/aap/postgresql/server.key:/var/lib/pgsql/server.key:ro,z --volume ~/artifact:/var/lib/pgsql/backups:ro,z registry.redhat.io/rhel8/postgresql-15:latest bash

Note: The command above opens a shell inside the container named postgresql_restore_temp with the artifact mounted at /var/lib/pgsql/backups. Additionally, it mounts the PostgreSQL certificates to ensure that you can resolve the correct certificates.

The command assumes the image registry.redhat.io/rhel8/postgresql-15:latest is available. If you are missing the image, check the available images for the user with podman images.

It also assumes that the artifact is located in the current user's home folder. If the artifact is located elsewhere, replace ~/artifact with the required path.
If you are using a customer-provided (external) database, you can run the psql and pg_restore commands from any node that has these commands installed and has access to the database. Reach out to your database administrator if you are unsure.

From inside the container, access the database and ensure the users have the CREATEDB role.
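As a minimal sketch, assuming the PostgreSQL admin user is postgres (adjust for your environment), you can open the session from inside the container and review the existing role attributes:

bash$ psql -h <pg_hostname> -U postgres
postgres=# \du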
For each component name, add the CREATEDB role to the owner. For example:

postgres=# ALTER ROLE awx WITH CREATEDB;
postgres=# \q

Replace awx with the database owner.

With the CREATEDB role in place, access the path where the artifact is mounted, and run the pg_restore commands:

bash$ cd /var/lib/pgsql/backups
bash$ pg_restore --clean --create --no-owner -h <pg_hostname> -U <component_pg_user> -d template1 <component>/<component>.pgc

After the restore, remove the permissions from the user. For example:

postgres=# ALTER ROLE awx WITH NOCREATEDB;
postgres=# \q

Replace awx with each user that has the role.
Start the containerized services, except the database.
On all nodes, if Performance Co-Pilot is configured, run the following command:
$ systemctl --user start pcp

Access the automation controller node and run:
$ systemctl --user start automation-controller-task automation-controller-web automation-controller-rsyslog
$ systemctl --user start receptor

Access the automation hub node and run:
$ systemctl --user start automation-hub-api automation-hub-content automation-hub-web automation-hub-worker-1 automation-hub-worker-2

Access the Event-Driven Ansible node and run:
$ systemctl --user start automation-eda-scheduler automation-eda-daphne automation-eda-web automation-eda-api automation-eda-worker-1 automation-eda-worker-2 automation-eda-activation-worker-1 automation-eda-activation-worker-2

Access the platform gateway node and run:
$ systemctl --user start automation-gateway automation-gateway-proxy

Access the platform gateway node when using standalone Redis, or all nodes from the Redis group in your inventory when using clustered Redis, and run:
$ systemctl --user start redis-unix redis-tcp

Note: In an enterprise deployment, the components run on different nodes. Run the commands on each component node.
7.1.3. Reconciling the target environment post-import
Perform the following post-import reconciliation steps to ensure your target environment is fully functional and correctly configured.
Procedure
Deprovision the platform gateway configuration.
SSH to the host serving an automation-gateway container as the same rootless user from 4.2.6 and run the following commands to remove the platform gateway proxy configuration:

$ podman exec -it automation-gateway bash
$ aap-gateway-manage migrate
$ aap-gateway-manage shell_plus
>>> HTTPPort.objects.all().delete(); ServiceNode.objects.all().delete(); ServiceCluster.objects.all().delete()
Transfer custom configurations and settings.
- Edit the inventory file and apply any relevant extra_settings to each component by using the component_extra_settings variables. An illustrative sketch follows.
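As a rough sketch, assuming the per-component variables follow the controller_extra_settings pattern of setting/value pairs (verify the exact variable names and format for your installer version), a carried-over source setting could be expressed in a YAML variables file like this:

# Hypothetical group_vars/all.yml entry; the setting shown is only an example.
controller_extra_settings:
  - setting: REMOTE_HOST_HEADERS
    value: ['HTTP_X_FORWARDED_FOR', 'REMOTE_ADDR', 'REMOTE_HOST']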
Update the Resource Server Secret Key for each component.
Gather the current Resource Secret values for each component:
$ podman exec -it automation-gateway bash -c 'aap-gateway-manage shell_plus --quiet -c "[print(cl.name, key.secret) for cl in ServiceCluster.objects.all() for key in cl.service_keys.all()]"'

Validate the current secret values:
$ for secret_name in eda_resource_server hub_resource_server controller_resource_server; do
    echo $secret_name
    podman secret inspect $secret_name --showsecret | grep SecretData
  done

If the secret value does not match the current values, delete the existing secret and re-create it, updating it with the new value:
Delete the secret:
$ podman secret rm <SECRET_NAME>

Re-create the secret:
echo "secret_value" | podman secret create <SECRET_NAME> -
$ echo "secret_value" | podman secret create <SECRET_NAME> -Copy to Clipboard Copied! Toggle word wrap Toggle overflow Replace the
<SECRET_NAME>placeholder in the commands above with the appropriate secret name for each component:eda_resource_server(Event-Driven Ansible),hub_resource_server(automation hub), andcontroller_resource_server(automation controller).
- Re-run the installation program on the target environment by using the same inventory from the installation.
Validate instances for automation execution.
SSH to the host serving an automation-controller-task container as the rootless user, and run the following commands to validate and remove instances that are orphaned from the source artifact:

$ podman exec -it automation-controller-task bash
$ awx-manage list_instances

Find nodes that are no longer part of this cluster. A good indicator is nodes with 0 capacity, as they have failed their health checks:
[ungrouped capacity=0]
[DISABLED] node1.example.org capacity=0 node_type=hybrid version=X.Y.Z heartbeat="..."
[DISABLED] node2.example.org capacity=0 node_type=execution version=ansible-runner-X.Y.Z heartbeat="..."

Remove those nodes with awx-manage, leaving only the aap-controller-task instance:

awx-manage deprovision_instance --host=node1.example.org
awx-manage deprovision_instance --host=node2.example.org
Repair orphaned automation hub content links for Pulp.
Run the following command from any host that has direct access to the automation hub address:
curl -d '{\"verify_checksums\": true }' -X POST -k https://<gateway url>/api/galaxy/pulp/api/v3/repair/ -u <gateway_admin_user>:<gateway_admin_password>$ curl -d '{\"verify_checksums\": true }' -X POST -k https://<gateway url>/api/galaxy/pulp/api/v3/repair/ -u <gateway_admin_user>:<gateway_admin_password>Copy to Clipboard Copied! Toggle word wrap Toggle overflow
Reconcile instance groups configuration:
- Go to the Instance Groups page in the platform UI.
- Select the Instance Group and then select the Instances tab.
- Associate or disassociate instances as required.
Reconcile decision environments and credentials:
- Go to the Decision Environments page in the platform UI.
- Edit each decision environment that references a registry URL that is unrelated or no longer accessible to this new environment. For example, the automation hub decision environment might require modification for the target automation hub environment.
- Select each credential associated with these decision environments and ensure their addresses align with the new environment.
Reconcile execution environments and credentials:
- Go to the Execution Environments page in the platform UI.
- Check each execution environment image and verify its address against the new environment.
- Go to the Credentials page in the platform UI.
- Edit each credential and ensure that all environment-specific information aligns with the new environment.
- Verify any further customizations or configurations after the migration, such as RBAC rules with instance groups.
7.1.4. Validating the target environment
After completing the migration, validate your target environment to ensure all components are functional and operating as expected.
Procedure
Verify all migrated components are functional.
To ensure that all components have been successfully migrated, verify that each component is operational and accessible:
- Platform gateway: Access the Ansible Automation Platform URL at https://<gateway_hostname>/ and verify that the dashboard loads correctly. Check that the platform gateway service is running and properly connected to automation controller.
- Automation controller: Under Automation Execution, check that projects, inventories, and job templates are present and properly configured.
- Automation hub: Under Automation Content, verify that collections, namespaces, and their contents are visible.
- Event-Driven Ansible (if applicable): Under Automation Decisions, verify that rule audits, rulebook activations, and projects are accessible.
For each component, check the logs to ensure there are no startup errors or warnings:
$ podman logs <container_name>
Test workflows and automation processes.
After you have confirmed that all components are functional, test critical automation workflows to ensure they operate correctly in the containerized environment:
- Run job templates: Run several key job templates, including those with dependencies on various credential types. An API-based spot check is sketched after this list.
- Test workflow templates: Run workflow templates to ensure that workflow nodes run in the correct order and that the workflow completes successfully.
- Verify execution environments: Ensure that jobs run in the appropriate execution environments and can access required dependencies.
- Check job artifacts: Verify that job artifacts are properly stored and accessible.
- Validate job scheduling: Test scheduled jobs to ensure they run at the expected times.
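As one way to spot-check a job template outside the UI, you can launch it through the automation controller API. This is a minimal sketch, assuming an AAP 2.5-style gateway path (/api/controller/v2/), a job template ID of 7, and basic authentication; adjust the URL, ID, and credentials for your environment:

$ curl -k -X POST -u <admin_user>:<admin_password> https://<gateway_hostname>/api/controller/v2/job_templates/7/launch/
$ curl -k -u <admin_user>:<admin_password> "https://<gateway_hostname>/api/controller/v2/jobs/?order_by=-created&page_size=1"

The second request returns the most recently created job so that you can confirm its status.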
Validate user access and permissions.
Confirm that user accounts, teams, and roles were correctly migrated:
- User authentication: Test login functionality with various user accounts to ensure authentication works correctly.
- Role-based access controls: Verify that users have appropriate permissions for organizations, projects, inventories, and job templates.
- Team memberships: Confirm that team memberships and team-based permissions are intact.
- API access: Test API tokens and ensure that API access is functioning properly.
- SSO integration (if applicable): Verify that Single Sign-On authentication is working correctly.
Confirm content synchronization and availability.
Ensure that all content sources are properly configured and accessible:
- Collection synchronization: Check that you can synchronize collections from a remote.
- Collection Upload: Check that you can upload collections.
- Collection repositories: Verify that collections are available in automation hub and can be used in execution environments.
- Project synchronization: Check that projects can sync content from source control repositories.
- External content sources: Test synchronization from automation hub and Ansible Galaxy (if configured).
- Execution environment availability: Confirm that all required execution environments are available and can be accessed by the execution nodes.
- Content dependencies: Verify that content dependencies are correctly resolved when running jobs.
7.2. OpenShift Container Platform
Prepare and assess your target OpenShift Container Platform environment, and import and reconcile your migrated content.
7.2.1. Preparing and assessing the target environment
To prepare and assess your target environment, perform the following steps.
Procedure
- Configure Ansible Automation Platform Operator for an Ansible Automation Platform deployment.
- Set up the database configuration (internal or external).
- Set up the Redis configuration (internal or external).
- Install Ansible Automation Platform using Ansible Automation Platform Operator. A minimal custom resource sketch follows this list.
- Create a backup of the initial OpenShift Container Platform deployment.
- Verify the fresh installation is functional.
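As a minimal sketch of the AnsibleAutomationPlatform custom resource these steps produce (the apiVersion and spec fields shown are assumptions to illustrate the shape; take the authoritative schema from the Operator documentation):

apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: aap        # the import procedure below assumes this default name
  namespace: aap   # and the default 'aap' namespace
spec:
  idle_aap: false  # toggled later to scale the component pods down and back up
  # Database (internal or external) and Redis settings are configured here before
  # installation; see the Operator documentation for the exact field names.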
7.2.2. Importing the migration content to the target environment
To import your environment, scale down Ansible Automation Platform components, restore databases, replace encryption secrets, and scale services back up.
The import process requires the latest version of Ansible Automation Platform, deployed with the name 'aap' in the default 'aap' namespace, and all default database names and database users.
Procedure
Begin by scaling down the Ansible Automation Platform deployment by using idle_aap.

oc patch ansibleautomationplatform aap --type merge -p '{"spec":{"idle_aap":true}}'

Wait for component pods to stop. Only the 6 Operator pods will remain running.

Scale down the Ansible Automation Platform Gateway Operator and Ansible Automation Platform Controller Operator.
oc scale --replicas=0 deployment aap-gateway-operator-controller-manager automation-controller-operator-controller-manager

Example output:

deployment.apps/aap-gateway-operator-controller-manager scaled
deployment.apps/automation-controller-operator-controller-manager scaled

Scale up the idled Postgres StatefulSet.

oc scale --replicas=1 statefulset.apps/aap-postgres-15

Create a temporary Persistent Volume Claim (PVC) with appropriate settings and sizing.
aap-temp-pvc.yaml
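As a minimal sketch, assuming a 10Gi request on the cluster's default storage class (size it to hold the extracted artifact), aap-temp-pvc.yaml could look like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: aap-temp-pvc
  namespace: aap
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi   # assumption: enough space for the artifact and extracted dumps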
oc create -f aap-temp-pvc.yaml

Obtain the existing PostgreSQL image to use for the temporary deployment.
echo $(oc get pod/aap-postgres-15-0 -o jsonpath="{.spec.containers[].image}")

Create a temporary PostgreSQL deployment with the mounted temporary PVC.
aap-temp-postgres.yaml
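As a rough sketch, assuming the pod only needs the PostgreSQL client tools and an idle shell (the database server itself keeps running in aap-postgres-15), aap-temp-postgres.yaml could look like this; replace the image value with the one returned by the echo command above:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: aap-temp-postgres
  namespace: aap
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aap-temp-postgres
  template:
    metadata:
      labels:
        app: aap-temp-postgres
    spec:
      containers:
        - name: aap-temp-postgres
          image: <postgresql image from the previous step>
          command: ["sleep", "infinity"]   # assumption: keep the pod idle for exec sessions
          volumeMounts:
            - name: aap-temp-pvc
              mountPath: /tmp/aap-temp-pvc
      volumes:
        - name: aap-temp-pvc
          persistentVolumeClaim:
            claimName: aap-temp-pvc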
oc create -f aap-temp-postgres.yaml

Copy the export artifact to the temporary PostgreSQL pod.
First, obtain the pod name and set it as an environment variable:
export AAP_TEMP_POSTGRES=$(oc get pods --no-headers -o custom-columns=":metadata.name" | grep aap-temp-postgres)

Test the environment variable:
echo $AAP_TEMP_POSTGRES

Example output:

aap-temp-postgres-7b6c57f87f-s2ldp

Copy the artifact and checksum to the PVC:
oc cp artifact.tar $AAP_TEMP_POSTGRES:/tmp/aap-temp-pvc/
oc cp artifact.tar.sha256 $AAP_TEMP_POSTGRES:/tmp/aap-temp-pvc/

Restore databases to Ansible Automation Platform PostgreSQL using the temporary PostgreSQL pod.
First, obtain PostgreSQL passwords for all three databases and the PostgreSQL admin password:
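As a minimal sketch, assuming the Operator stores these passwords in secrets whose names end in -postgres-configuration with a password key (confirm the exact secret names in your namespace first), you could read them like this:

oc get secrets | grep postgres-configuration
oc get secret <component-postgres-configuration-secret> -o jsonpath='{.data.password}' | base64 -d; echo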
Enter the temporary PostgreSQL deployment and change directory to the mounted PVC containing the copied artifact:
oc exec -it deployment.apps/aap-temp-postgres -- /bin/bash

Inside the pod, change directory to /tmp/aap-temp-pvc and list its contents:

cd /tmp/aap-temp-pvc && ls -l

Example output:
total 2240
-rw-r--r--. 1 1000900000 1000900000 2273280 Jun 13 17:41 artifact.tar
-rw-r--r--. 1 1000900000 1000900000      79 Jun 13 17:42 artifact.tar.sha256
drwxrws---. 2 root       1000900000   16384 Jun 13 17:40 lost+found

Verify the archive:
sha256sum --check artifact.tar.sha256

Example output:
artifact.tar: OK

Extract the artifact and verify its contents:
tar xf artifact.tar && cd artifact && sha256sum --check sha256sum.txt

Example output:
./controller/controller.pgc: OK
./gateway/gateway.pgc: OK
./hub/hub.pgc: OK

Drop the automation controller database:
dropdb -h aap-postgres-15 automationcontroller

Alter the user temporarily with the CREATEDB role:
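As a minimal sketch, assuming the PostgreSQL admin user is postgres (adjust to match your deployment), open the psql session from inside the temporary pod before running the ALTER USER statements:

psql -h aap-postgres-15 -U postgres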
CREATEDBrole:postgres=# ALTER USER automationcontroller WITH CREATEDB;
postgres=# ALTER USER automationcontroller WITH CREATEDB;Copy to Clipboard Copied! Toggle word wrap Toggle overflow Create the database:
createdb -h aap-postgres-15 -U automationcontroller automationcontroller

Revert temporary user permission:
postgres=# ALTER USER automationcontroller NOCREATEDB;

Restore the automation controller database:
pg_restore --clean --create --no-owner -h aap-postgres-15 -U automationcontroller -d automationcontroller controller/controller.pgc

Restore the automation hub database:
pg_restore --clean --create --no-owner -h aap-postgres-15 -U automationhub -d automationhub hub/hub.pgc

Restore the platform gateway database:
pg_restore --clean --create --no-owner -h aap-postgres-15 -U gateway -d gateway gateway/gateway.pgc

Exit the pod:
exit

Replace database field encryption secrets.
oc set data secret/aap-controller-secret-key secret_key="<unencoded controller_secret_key value from secrets.yml>"

oc set data secret/aap-db-fields-encryption-secret secret_key="<unencoded gateway_secret_key value from secrets.yml>"

oc set data secret/aap-hub-db-fields-encryption database_fields.symmetric.key="<unencoded hub_db_fields_encryption_key value from secrets.yml>"

Clean up the temporary PostgreSQL and PVC.
oc delete -f aap-temp-postgres.yaml

oc delete -f aap-temp-pvc.yaml

Scale the platform gateway and automation controller Operators back up and wait for the platform gateway Operator reconciliation loop to complete.
The PostgreSQL StatefulSet returns to idle.

oc scale --replicas=1 deployment aap-gateway-operator-controller-manager automation-controller-operator-controller-manager

Example output:

deployment.apps/aap-gateway-operator-controller-manager scaled
deployment.apps/automation-controller-operator-controller-manager scaled

oc logs -f $(oc get pods --no-headers -o custom-columns=":metadata.name" | grep aap-gateway-operator)
Wait for reconciliation to stop.

Scale Ansible Automation Platform back up by using idle_aap.

oc patch ansibleautomationplatform aap --type=merge -p '{"spec":{"idle_aap":false}}'

Example output:
ansibleautomationplatform.aap.ansible.com/aap patched

Wait for the aap-gateway pod to be running, and clean up old service endpoints.

Wait for the pod to be running.
Example output:
pod/aap-gateway-6c989b846c-47b91   2/2   Running   0   45s

Then clean up the old service endpoints:

for i in HTTPPort Route ServiceNode; do oc exec -it deployment.apps/aap-gateway -- aap-gateway-manage shell -c 'from aap_gateway_api.models import '$i'; print('$i'.objects.all().delete())'; done

Example output:
(23, {'aap_gateway_api.ServiceAPIRoute': 4, 'aap_gateway_api.AdditionalRoute': 7, 'aap_gateway_api.Route': 11, 'aap_gateway_api.HTTPPort': 1})
(0, {})
(4, {'aap_gateway_api.ServiceNode': 4})

Run awx-manage to deprovision instances.

Obtain the automation controller pod:
export AAP_CONTROLLER_POD=$(oc get pods --no-headers -o custom-columns=":metadata.name" | grep aap-controller-task)

Test the environment variable:
echo $AAP_CONTROLLER_POD

Example output:

aap-controller-task-759b6d9759-r59q9

Enter the automation controller pod:
oc exec -it $AAP_CONTROLLER_POD -- /bin/bash
awx-manage list_instances

Remove old nodes with awx-manage, leaving only aap-controller-task:

awx-manage deprovision_instance --host=node1.example.org
awx-manage deprovision_instance --host=node2.example.org

Run the curl command to repair automation hub filesystem data:

curl -d '{"verify_checksums": true}' -X POST -k https://<aap_url>/api/galaxy/pulp/api/v3/repair/ -u <admin_user>:<restored_admin_password>
7.2.3. Reconciling the target environment post-import
After importing your migration artifact, perform the following steps to reconcile your target environment.
Procedure
- Modify the Django SECRET_KEY secrets to match the source platform.
- Deprovision and reconfigure platform gateway service nodes.
- Re-run platform gateway nodes and services register logic.
- Convert container-specific settings to OpenShift Container Platform-appropriate formats.
- Reconcile container resource allocations to OpenShift Container Platform resources.
7.2.4. Validating the target environment
To validate your migrated environment, perform the following steps.
Procedure
- Verify all migrated components are functional.
- Test workflows and automation processes.
- Validate user access and permissions.
- Confirm content synchronization and availability.
- Test integration with OpenShift Container Platform-specific features.
7.3. Managed Ansible Automation Platform
Prepare and migrate your source environment to a Managed Ansible Automation Platform deployment, and reconcile the target environment post-migration.
7.3.1. Migrating to Managed Ansible Automation Platform
Follow this procedure to migrate to Managed Ansible Automation Platform.
Prerequisites
- You have a migration artifact from your source environment.
Procedure
Submit a support ticket on the Red Hat Customer Portal requesting a migration to Managed Ansible Automation Platform.
The support ticket should include:
- Source installation type (RPM, Containerized, OpenShift)
- Managed Ansible Automation Platform URL or deployment name
- Source version (installer or Operator version)
- The Ansible Site Reliability Engineering (SRE) team provides instructions in the support ticket on how to upload the resulting migration artifact to secure storage for processing.
- The Ansible SRE team imports the migration artifact into the identified target instance and notifies the customer through the support ticket.
- The Ansible SRE team notifies customers of successful migration.
7.3.2. Reconciling the target environment post-migration
After a successful migration, perform the following tasks:
Procedure
- Log in to the Managed Ansible Automation Platform instance by using the local administrator account to confirm that data was properly imported.
You might need to perform the following actions based on the configuration of the source deployment:
- Reconfigure SSO authenticators and mappings to reflect the new URLs.
Update private automation hub content to reflect the new URLs.
Run the following command to update the automation hub repositories:
curl -d '{\"verify_checksums\": true }' -X POST -k https://<platform url>/api/galaxy/pulp/api/v3/repair/ -u <admin_user>:<admin_password>curl -d '{\"verify_checksums\": true }' -X POST -k https://<platform url>/api/galaxy/pulp/api/v3/repair/ -u <admin_user>:<admin_password>Copy to Clipboard Copied! Toggle word wrap Toggle overflow - Perform a sync on any repositories configured in automation hub.
- Push any custom execution environments from the source automation hub to the target automation hub.
- Reconfigure automation mesh.
- Following migration, you can request standard SRE tasks through support tickets, such as configuring custom certificates, a custom domain, or connectivity through private endpoints.