Chapter 8. Migrating Red Hat Ansible Automation Platform to Red Hat Ansible Automation Platform Operator


Migrating your Red Hat Ansible Automation Platform deployment to the Ansible Automation Platform Operator allows you to take advantage of the benefits provided by a Kubernetes native operator, including simplified upgrades and full lifecycle support for your Red Hat Ansible Automation Platform deployments.

Use these procedures to migrate any of the following deployments to the Ansible Automation Platform Operator:

  • A VM-based installation of Ansible Tower 3.8.6, automation controller, or automation hub
  • An OpenShift instance of Ansible Tower 3.8.6 (Ansible Automation Platform 1.2)

8.1. Migration considerations

If you are upgrading from Ansible Automation Platform 1.2 on OpenShift Container Platform 3 to Ansible Automation Platform 2.x on OpenShift Container Platform 4, you must provision a fresh OpenShift Container Platform version 4 cluster and then migrate the Ansible Automation Platform to the new cluster.

8.2. Preparing for migration

Before migrating your current Ansible Automation Platform deployment to Ansible Automation Platform Operator, you must back up your existing data and create Kubernetes secrets for your secret key and PostgreSQL configuration.

Note

If you are migrating both automation controller and automation hub instances, repeat the steps in Creating a secret key secret and Creating a postgresql configuration secret for both and then proceed to Migrating data to the Ansible Automation Platform Operator.

Prerequisites

To migrate your Ansible Automation Platform deployment to the Ansible Automation Platform Operator, you must have the following:

  • Secret key secret
  • PostgreSQL configuration secret
  • Role-based access control for the namespaces on the new OpenShift cluster
  • The new OpenShift cluster must be able to connect to the previous PostgreSQL database
Note

You can store the secret key information in the inventory file before the initial Red Hat Ansible Automation Platform installation. If you are unable to remember your secret key or have trouble locating your inventory file, contact Ansible support through the Red Hat Customer portal.

8.2.1. Backing up your existing deployment

Before migrating your data from Ansible Automation Platform 2.x or earlier, you must back up your data to prevent data loss. To back up your data, do the following:

Procedure

  1. Log in to your current deployment project.
  2. Run setup.sh to create a backup of your current data or deployment:

    For on-prem deployments of version 2.x or earlier:

    $ ./setup.sh -b

    For OpenShift deployments before version 2.0 (non-operator deployments):

    $ ./setup_openshift.sh -b
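The backup archive is written to the installer directory on the installation host. As a quick, optional check before you continue, confirm that the archive exists and copy it to a safe location. The tower-backup-*.tar.gz file name pattern and the destination shown here are assumptions based on default installer behavior; adjust them to match your environment:

    # Confirm the backup archive was created (file name pattern can differ by version)
    $ ls -lh ./tower-backup-*.tar.gz
    # Copy the archive off the installation host for safekeeping (destination is an example)
    $ scp ./tower-backup-latest.tar.gz backup-admin@storage.example.com:/backups/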

8.2.2. Creating a secret key secret

To migrate your data to Ansible Automation Platform Operator on OpenShift Container Platform, you must create a Kubernetes secret that contains your secret key. If you are migrating automation controller, automation hub, and Event-Driven Ansible, you must have a secret key for each component that matches the secret key defined in the inventory file during your initial installation. Otherwise, the migrated data remains encrypted and unusable after migration.

Note

When specifying the symmetric encryption secret key on the custom resources, note that for automation controller the field is called secret_key_name, but for automation hub and Event-Driven Ansible the field is called db_fields_encryption_secret.

Note

In the Kubernetes secrets, automation controller and Event-Driven Ansible use the same stringData key (secret_key), but automation hub uses a different key (database_fields.symmetric.key).

Procedure

  1. Locate the old secret keys in the inventory file you used to deploy Ansible Automation Platform in your previous installation. If you can no longer access the inventory file, see the sketch after this procedure.
  2. Create a YAML file for your secret keys:

    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: <controller-resourcename>-secret-key
      namespace: <target-namespace>
    stringData:
      secret_key: <replaceme-with-controller-secret>
    type: Opaque
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: <eda-resourcename>-secret-key
      namespace: <target-namespace>
    stringData:
      secret_key: <replaceme-with-eda-secret>
    type: Opaque
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: <hub-resourcename>-secret-key
      namespace: <target-namespace>
    stringData:
      database_fields.symmetric.key: <replace-me-withdb-fields-encryption-key>
    type: Opaque
    Note

    If admin_password_secret is not provided, the operator looks for a secret named <resourcename>-admin-password for the admin password. If it is not present, the operator generates a password and creates a secret from it named <resourcename>-admin-password.

  3. Apply the secret key YAML to the cluster:

    oc apply -f <yaml-file>
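If you can no longer access the original inventory file, the keys can typically be recovered from the hosts of a VM-based installation, and you can confirm that the new secrets exist in the target namespace. The file paths below are assumptions based on default installation locations; verify them on your own hosts:

    # Assumed default locations on a VM-based installation; verify on your hosts
    $ cat /etc/tower/SECRET_KEY                            # automation controller secret key
    $ cat /etc/pulp/certs/database_fields.symmetric.key    # automation hub database fields encryption key

    # Confirm the secrets were created in the target namespace
    $ oc get secrets -n <target-namespace> | grep secret-key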

8.2.3. Creating a postgresql configuration secret

For migration to be successful, you must provide access to the database for your existing deployment.

Procedure

  1. Create a YAML file for your PostgreSQL configuration secret:

    apiVersion: v1
    kind: Secret
    metadata:
      name: <resourcename>-old-postgres-configuration
      namespace: <target namespace>
    stringData:
      host: "<external ip or url resolvable by the cluster>"
      port: "<external port, this usually defaults to 5432>"
      database: "<desired database name>"
      username: "<username to connect as>"
      password: "<password to connect with>"
    type: Opaque
  2. Apply the PostgreSQL configuration YAML to the cluster:

    oc apply -f <old-postgres-configuration.yml>
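As an optional check before you continue, confirm that the configuration secret exists and contains the expected keys. These commands are illustrative; substitute your own resource name and namespace:

    $ oc get secret <resourcename>-old-postgres-configuration -n <target namespace>
    # Shows the key names and sizes without revealing the values
    $ oc describe secret <resourcename>-old-postgres-configuration -n <target namespace>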

8.2.4. Verifying network connectivity

To ensure successful migration of your data, verify that you have network connectivity from your new operator deployment to your old deployment database.

Prerequisites

Take note of the host and port information from your existing deployment. This information is in the postgres.py file in the conf.d directory.
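For a VM-based controller installation, this file is typically located at /etc/tower/conf.d/postgres.py; that path is an assumption based on default installations, so adjust it to your environment:

    # View the database host and port used by the existing deployment (path can vary)
    $ cat /etc/tower/conf.d/postgres.py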

Procedure

  1. Create a yaml file to verify the connection between your new deployment and your old deployment database:

    apiVersion: v1
    kind: Pod
    metadata:
      name: dbchecker
    spec:
      containers:
        - name: dbchecker
          image: registry.redhat.io/rhel8/postgresql-13:latest
          command: ["sleep"]
          args: ["600"]
  2. Apply the connection checker yaml file to your new project deployment:

    oc project ansible-automation-platform
    oc apply -f connection_checker.yaml
  3. Verify that the connection checker pod is running:

    oc get pods
  4. Connect to a pod shell:

    oc rsh dbchecker
  5. After the shell session opens in the pod, verify that the new project can connect to your old project cluster:

    pg_isready -h <old-host-address> -p <old-port-number> -U awx

    Example

    <old-host-address>:<old-port-number> - accepting connections
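After the connection check succeeds, the temporary pod is no longer needed. A minimal cleanup sketch, assuming the pod name used above:

    $ oc delete pod dbchecker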

8.3. Migrating data to the Ansible Automation Platform Operator

After you have set your secret key and PostgreSQL credentials, verified network connectivity, and installed the Ansible Automation Platform Operator, you must create the custom resource objects for your components before you can migrate your data.

8.3.1. Creating an AutomationController object

Use the following steps to create an AutomationController custom resource object.

Procedure

  1. Log in to Red Hat OpenShift Container Platform.
  2. Navigate to Operators → Installed Operators.
  3. Select the Ansible Automation Platform Operator installed on your project namespace.
  4. Select the Automation Controller tab.
  5. Click Create AutomationController. You can create the object through the Form view or YAML view. The following inputs are available through the Form view.

    1. Enter a name for the new deployment.
    2. In Advanced configurations:

      1. From the Secret Key list, select your secret key secret.
      2. From the Old Database Configuration Secret list, select the old postgres configuration secret.
    3. Click Create.
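If you prefer the YAML view, the created object looks roughly like the following sketch. It uses the field names noted earlier in this chapter (secret_key_name for the controller key and old_postgres_configuration_secret for the old database secret) and an apiVersion commonly used by current operator releases; confirm the exact spec keys and apiVersion in the YAML view of your installed operator before applying:

    apiVersion: automationcontroller.ansible.com/v1beta1
    kind: AutomationController
    metadata:
      name: <controller-resourcename>
      namespace: <target-namespace>
    spec:
      secret_key_name: <controller-resourcename>-secret-key
      old_postgres_configuration_secret: <resourcename>-old-postgres-configuration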

8.3.2. Creating an AutomationHub object

Use the following steps to create an AutomationHub custom resource object.

Procedure

  1. Log in to Red Hat OpenShift Container Platform.
  2. Navigate to Operators → Installed Operators.
  3. Select the Ansible Automation Platform Operator installed on your project namespace.
  4. Select the Automation Hub tab.
  5. Click Create AutomationHub.
  6. Enter a name for the new deployment.
  7. In Advanced configurations, select your secret key secret and postgres configuration secret.
  8. Click Create.
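After you create the objects, the operator uses the old database configuration secret to migrate your data while it deploys the new instances. A rough way to follow progress from the CLI, assuming the ansible-automation-platform namespace used earlier in this chapter:

    $ oc get automationcontroller,automationhub -n ansible-automation-platform
    # Watch the migration and deployment pods come up
    $ oc get pods -n ansible-automation-platform -w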

8.4. Post migration cleanup

After data migration, delete unnecessary instance groups and unlink the old database configuration secret from the automation controller resource definition.

8.4.1. Deleting Instance Groups post migration

Procedure

  1. Log in to Red Hat Ansible Automation Platform as the administrator with the password you created during migration.

    Note

    If you did not create an administrator password during migration, one was automatically created for you. To locate this password, go to your project, select Workloads → Secrets and open controller-admin-password. From there you can copy the password and paste it into the Red Hat Ansible Automation Platform password field.

  2. Select Administration → Instance Groups.
  3. Select all Instance Groups except controlplane and default.
  4. Click Delete.
8.4.2. Unlinking the old database configuration secret

Procedure

  1. Log in to Red Hat OpenShift Container Platform.
  2. Navigate to Operators → Installed Operators.
  3. Select the Ansible Automation Platform Operator installed on your project namespace.
  4. Select the Automation Controller tab.
  5. Click your AutomationController object. You can view the object through the Form view or YAML view; the following steps use the YAML view.
  6. Locate the old_postgres_configuration_secret item within the spec section of the YAML contents.
  7. Delete the line that contains this item.
  8. Click Save.
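As a CLI alternative to editing the object in the console, you can remove the same field with a JSON patch. This is a sketch, assuming the resource name and namespace used earlier and the old_postgres_configuration_secret spec field named above:

    $ oc -n <target-namespace> patch automationcontroller <controller-resourcename> \
        --type=json -p '[{"op": "remove", "path": "/spec/old_postgres_configuration_secret"}]'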