Deploying the Red Hat Ansible Automation Platform operator on OpenShift Container Platform
Install and configure the Ansible Automation Platform Operator on OpenShift Container Platform
Abstract
This guide provides procedures and requirements for installing, migrating, and upgrading the Ansible Automation Platform Operator on OpenShift Container Platform.
Preface
Thank you for your interest in Red Hat Ansible Automation Platform. Ansible Automation Platform is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments.
This guide helps you understand the installation, migration, and upgrade requirements for deploying the Ansible Automation Platform Operator on OpenShift Container Platform.
Providing feedback on Red Hat documentation
If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request.
Chapter 1. Planning your Red Hat Ansible Automation Platform Operator on Red Hat OpenShift Container Platform
Red Hat Ansible Automation Platform is supported on both Red Hat Enterprise Linux and Red Hat OpenShift.
OpenShift operators help install and automate day-2 operations of complex, distributed software on Red Hat OpenShift Container Platform. The Ansible Automation Platform Operator enables you to deploy and manage Ansible Automation Platform components on Red Hat OpenShift Container Platform.
You can use this section to help plan your Red Hat Ansible Automation Platform installation on your Red Hat OpenShift Container Platform environment. Before installing, review the supported installation scenarios to determine which meets your requirements.
1.1. About Ansible Automation Platform Operator
The Ansible Automation Platform Operator provides cloud-native, push-button deployment of new Ansible Automation Platform instances in your OpenShift environment. The Ansible Automation Platform Operator includes resource types to deploy and manage instances of automation controller and private automation hub. It also includes automation controller job resources for defining and launching jobs inside your automation controller deployments.
Deploying Ansible Automation Platform instances with a Kubernetes native operator offers several advantages over launching instances from a playbook deployed on Red Hat OpenShift Container Platform, including upgrades and full lifecycle support for your Red Hat Ansible Automation Platform deployments.
You can install the Ansible Automation Platform Operator from the Red Hat Operators catalog in OperatorHub.
1.2. OpenShift Container Platform version compatibility
The Ansible Automation Platform Operator that installs Ansible Automation Platform 2.4 is available on OpenShift Container Platform 4.9 and later versions.
Additional resources
- See the Red Hat Ansible Automation Platform Life Cycle for the most current compatibility details.
1.3. Supported installation scenarios for Red Hat OpenShift Container Platform
You can use the OperatorHub on the Red Hat OpenShift Container Platform web console to install Ansible Automation Platform Operator.
Alternatively, you can install the Ansible Automation Platform Operator from the OpenShift Container Platform command-line interface (CLI), oc.
Follow one of the workflows below to install the Ansible Automation Platform Operator and use it to install the components of Ansible Automation Platform that you require.
- Automation controller custom resources first, then automation hub custom resources;
- Automation hub custom resources first, then automation controller custom resources;
- Automation controller custom resources;
- Automation hub custom resources.
1.4. Custom resources
You can define custom resources for each of the primary installation workflows.
1.5. Additional resources
- See Understanding OperatorHub to learn more about OpenShift Container Platform OperatorHub.
Chapter 2. Installing the Red Hat Ansible Automation Platform Operator on Red Hat OpenShift Container Platform
Prerequisites
- You have installed the Red Hat Ansible Automation Platform catalog in OperatorHub.
- You have created a StorageClass object for your platform and a persistent volume claim (PVC) with ReadWriteMany access mode. See Dynamic provisioning for details. To run Red Hat OpenShift Container Platform clusters on Amazon Web Services (AWS) with ReadWriteMany access mode, you must add NFS or other storage.
  - For information about the AWS Elastic Block Store (EBS) or to use the aws-ebs storage class, see Persistent storage using AWS Elastic Block Store.
  - To use multi-attach ReadWriteMany access mode for AWS EBS, see Attaching a volume to multiple instances with Amazon EBS Multi-Attach.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to Operators → OperatorHub.
- Search for the Red Hat Ansible Automation Platform operator and click Install.
Select an Update Channel:
- stable-2.x: installs a namespace-scoped operator, which limits deployments of automation hub and automation controller instances to the namespace the operator is installed in. This is suitable for most cases. The stable-2.x channel does not require administrator privileges and utilizes fewer resources because it only monitors a single namespace.
- stable-2.x-cluster-scoped: deploys automation hub and automation controller across multiple namespaces in the cluster and requires administrator privileges for all namespaces in the cluster.
- Select Installation Mode, Installed Namespace, and Approval Strategy.
- Click Install.
The installation process begins. When installation finishes, a modal appears notifying you that the Ansible Automation Platform Operator is installed in the specified namespace.
- Click View Operator to view your newly installed Ansible Automation Platform Operator.
You can only install a single instance of the Ansible Automation Platform Operator into a single namespace. Installing multiple instances in the same namespace can lead to improper operation for both operator instances.
Chapter 3. Installing and configuring automation controller on Red Hat OpenShift Container Platform web console
You can use these instructions to install the automation controller operator on Red Hat OpenShift Container Platform, specify custom resources, and deploy Ansible Automation Platform with an external database.
Automation controller configuration can be done through the automation controller extra_settings or directly in the user interface after deployment. However, configurations made in extra_settings take precedence over settings made in the user interface.
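For reference, extra_settings entries are added under the AutomationController spec. The following is a minimal sketch; the setting name and value are illustrative only, so substitute the controller settings you actually need:

spec:
  extra_settings:
    - setting: MAX_PAGE_SIZE
      value: "500"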
When an instance of automation controller is removed, the associated PVCs are not automatically deleted. This can cause issues during migration if the new deployment has the same name as the previous one. Therefore, it is recommended that you manually remove old PVCs before deploying a new automation controller instance in the same namespace. See Finding and deleting PVCs for more information.
3.1. Prerequisites
- You have installed the Red Hat Ansible Automation Platform catalog in Operator Hub.
- For automation controller, a default StorageClass must be configured on the cluster so that the operator can dynamically create the needed PVCs. This is not necessary if an external PostgreSQL database is configured.
- For automation hub, a StorageClass that supports ReadWriteMany must be available on the cluster to dynamically create the PVCs needed for the content, redis, and api pods. If it is not the default StorageClass on the cluster, you can specify it when creating your AutomationHub object.
3.2. Installing the automation controller operator
Use this procedure to install the automation controller operator.
Procedure
- Navigate to Operators → Installed Operators, then click on the Ansible Automation Platform operator.
- Locate the Automation controller tab, then click Create instance.
You can proceed with configuring the instance using either the Form View or YAML view.
3.2.1. Creating your automation controller using the form view
Use this procedure to create your automation controller using the form-view.
Procedure
- Ensure Form view is selected. It should be selected by default.
- Enter the name of the new controller.
- Optional: Add any labels necessary.
- Click Advanced configuration.
- Enter the Hostname of the instance. The hostname is optional; the default hostname is generated based on the deployment name you selected.
- Enter the Admin account username.
- Enter the Admin email address.
- Under the Admin password secret drop-down menu, select the secret.
- Under Database configuration secret drop-down menu, select the secret.
- Under Old Database configuration secret drop-down menu, select the secret.
- Under Secret key secret drop-down menu, select the secret.
- Under Broadcast Websocket Secret drop-down menu, select the secret.
- Enter any Service Account Annotations necessary.
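If the secrets offered in these drop-down menus do not exist yet, you can create them in advance with oc. The following is a minimal sketch with placeholder names, assuming the operator reads the admin password from the password key of the secret:

$ oc create secret generic <controller-name>-admin-password \
    --from-literal=password='<secure-password>' \
    -n <target-namespace>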
3.2.2. Configuring your controller image pull policy
Use this procedure to configure the image pull policy on your automation controller.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Go to Operators → Installed Operators.
- Select your Ansible Automation Platform Operator deployment.
- Select the Automation Controller tab.
- For new instances, click Create AutomationController.
- For existing instances, you can edit the YAML view by clicking the ⋮ icon and then Edit AutomationController.
- Under Image Pull Policy, click on the radio button to select one of:
  - Always
  - Never
  - IfNotPresent
- To display the option under Image Pull Secrets, click the arrow.
  - Click + beside Add Image Pull Secret and enter a value.
- To display fields under the Web container resource requirements drop-down list, click the arrow.
  - Under Limits and Requests, enter values for CPU cores, Memory, and Storage.
- To display fields under the Task container resource requirements drop-down list, click the arrow.
  - Under Limits and Requests, enter values for CPU cores, Memory, and Storage.
- To display fields under the EE Control Plane container resource requirements drop-down list, click the arrow.
  - Under Limits and Requests, enter values for CPU cores, Memory, and Storage.
- To display fields under the PostgreSQL init container resource requirements (when using a managed service) drop-down list, click the arrow.
  - Under Limits and Requests, enter values for CPU cores, Memory, and Storage.
- To display fields under the Redis container resource requirements drop-down list, click the arrow.
  - Under Limits and Requests, enter values for CPU cores, Memory, and Storage.
- To display fields under the PostgreSQL container resource requirements (when using a managed instance) drop-down list, click the arrow.
  - Under Limits and Requests, enter values for CPU cores, Memory, and Storage.
- To display fields under the PostgreSQL container storage requirements (when using a managed instance) drop-down list, click the arrow.
  - Under Limits and Requests, enter values for CPU cores, Memory, and Storage.
- Under Replicas, enter the number of instance replicas.
- Under Remove used secrets on instance removal, select true or false. The default is false.
- Under Preload instance with data upon creation, select true or false. The default is true.
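The equivalent fields can also be set directly in the YAML view. A minimal sketch, assuming a pull secret named registry-credentials already exists in the target namespace:

spec:
  image_pull_policy: IfNotPresent
  image_pull_secrets:
    - registry-credentials
  replicas: 1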
3.2.3. Configuring your controller LDAP security
Use this procedure to configure LDAP security for your automation controller.
Procedure
If you do not have an ldap_cacert_secret, you can create one with the following command:

$ oc create secret generic <resourcename>-custom-certs \
    --from-file=ldap-ca.crt=<PATH/TO/YOUR/CA/PEM/FILE>    1

1 - Modify this to point to where your CA cert is stored.

This creates a secret that looks like the following:

$ oc get secret/mycerts -o yaml

apiVersion: v1
data:
  ldap-ca.crt: <mysecret>    1
kind: Secret
metadata:
  name: mycerts
  namespace: awx
type: Opaque

1 - Automation controller looks for the data field ldap-ca.crt in the specified secret when using the ldap_cacert_secret.
- Under LDAP Certificate Authority Trust Bundle, click the drop-down menu and select your ldap_cacert_secret.
- Under LDAP Password Secret, click the drop-down menu and select a secret.
- Under EE Images Pull Credentials Secret, click the drop-down menu and select a secret.
- Under Bundle Cacert Secret, click the drop-down menu and select a secret.
- Under Service Type, click the drop-down menu and select one of:
  - ClusterIP
  - LoadBalancer
  - NodePort
3.2.4. Configuring your automation controller operator route options
The Red Hat Ansible Automation Platform operator installation form allows you to further configure your automation controller operator route options under Advanced configuration.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to Operators → Installed Operators.
- Select your Ansible Automation Platform Operator deployment.
- Select the Automation Controller tab.
- For new instances, click Create AutomationController.
- For existing instances, you can edit the YAML view by clicking the ⋮ icon and then Edit AutomationController.
- Click Advanced configuration.
- Under Ingress type, click the drop-down menu and select Route.
- Under Route DNS host, enter a common host name that the route answers to.
- Under Route TLS termination mechanism, click the drop-down menu and select Edge or Passthrough. For most instances Edge should be selected.
- Under Route TLS credential secret, click the drop-down menu and select a secret from the list.
- Under Enable persistence for /var/lib/projects directory, select either true or false by moving the slider.
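These route options correspond to fields on the AutomationController spec. A minimal sketch, assuming a hypothetical host name and an existing TLS secret named custom-route-tls-secret:

spec:
  ingress_type: Route
  route_host: controller.apps.example.com
  route_tls_termination_mechanism: Edge
  route_tls_secret: custom-route-tls-secret
  projects_persistence: true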
3.2.5. Configuring the Ingress type for your automation controller operator
The Ansible Automation Platform Operator installation form allows you to further configure your automation controller operator ingress under Advanced configuration.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to Operators → Installed Operators.
- Select your Ansible Automation Platform Operator deployment.
- Select the Automation Controller tab.
- For new instances, click Create AutomationController.
- For existing instances, you can edit the YAML view by clicking the ⋮ icon and then Edit AutomationController.
- Click Advanced configuration.
- Under Ingress type, click the drop-down menu and select Ingress.
- Under Ingress annotations, enter any annotations to add to the ingress.
- Under Ingress TLS secret, click the drop-down menu and select a secret from the list.
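In the YAML view, the same Ingress settings map to spec fields similar to the following sketch; the annotation and secret name are illustrative:

spec:
  ingress_type: ingress
  ingress_annotations: |
    kubernetes.io/ingress.class: nginx
  ingress_tls_secret: custom-ingress-tls-secret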
After you have configured your automation controller operator, click Create at the bottom of the form view. Red Hat OpenShift Container Platform then creates the pods. This may take a few minutes.
You can view the progress by navigating to Workloads → Pods and locating the newly created instance.
Verification
Verify that the following operator pods provided by the Ansible Automation Platform Operator installation are running:

Operator manager controllers | automation controller | automation hub
---|---|---
The operator manager controllers for each of the three operators include the following: | After deploying automation controller, you will see the addition of these pods: | After deploying automation hub, you will see the addition of these pods:

A missing pod can indicate the need for a pull secret. Pull secrets are required for protected or private image registries. See Using image pull secrets for more information. You can diagnose this issue further by running oc describe pod <pod-name> to see if there is an ImagePullBackOff error on that pod.
3.3. Configuring an external database for automation controller on Red Hat Ansible Automation Platform Operator
If you prefer to deploy Ansible Automation Platform with an external database, you can do so by configuring a secret with instance credentials and connection information, then applying it to your cluster using the oc create command.
By default, the Ansible Automation Platform Operator automatically creates and configures a managed PostgreSQL pod in the same namespace as your Ansible Automation Platform deployment. You can deploy Ansible Automation Platform with an external database instead of the managed PostgreSQL pod that the Ansible Automation Platform Operator automatically creates.
Using an external database lets you share and reuse resources and manually manage backups, upgrades, and performance optimizations.
The same external database (PostgreSQL instance) can be used for both automation hub and automation controller as long as the database names are different. In other words, you can have multiple databases with different names inside a single PostgreSQL instance.
The following section outlines the steps to configure an external database for your automation controller with the Ansible Automation Platform Operator.
Prerequisite
The external database must be a PostgreSQL database that is the version supported by the current release of Ansible Automation Platform.
Ansible Automation Platform 2.4 supports PostgreSQL 13.
Procedure
The external postgres instance credentials and connection information must be stored in a secret, which is then set on the automation controller spec.

Create an external-postgres-configuration-secret.yml file, following the template below:

apiVersion: v1
kind: Secret
metadata:
  name: external-postgres-configuration
  namespace: <target_namespace>    1
stringData:
  host: "<external_ip_or_url_resolvable_by_the_cluster>"    2
  port: "<external_port>"    3
  database: "<desired_database_name>"
  username: "<username_to_connect_as>"
  password: "<password_to_connect_with>"    4
  sslmode: "prefer"    5
  type: "unmanaged"
type: Opaque

1 - Namespace to create the secret in. This should be the same namespace you want to deploy to.
2 - The resolvable hostname for your database node.
3 - External port defaults to 5432.
4 - The value for password must not contain single or double quotes (', ") or backslashes (\) to avoid any issues during deployment, backup, or restoration.
5 - The variable sslmode is valid for external databases only. The allowed values are: prefer, disable, allow, require, verify-ca, and verify-full.
Apply external-postgres-configuration-secret.yml to your cluster using the oc create command:

$ oc create -f external-postgres-configuration-secret.yml
When creating your AutomationController custom resource object, specify the secret on your spec, following the example below:

apiVersion: automationcontroller.ansible.com/v1beta1
kind: AutomationController
metadata:
  name: controller-dev
spec:
  postgres_configuration_secret: external-postgres-configuration
3.4. Finding and deleting PVCs
A persistent volume claim (PVC) is a storage volume used to store data that the automation hub and automation controller applications use. These PVCs are independent from the applications and remain even when the application is deleted. If you are confident that you no longer need a PVC, or have backed it up elsewhere, you can manually delete it.
Procedure
List the existing PVCs in your deployment namespace:
oc get pvc -n <namespace>
- Identify the PVC associated with your previous deployment by comparing the old deployment name and the PVC name.
Delete the old PVC:
oc delete pvc -n <namespace> <pvc-name>
3.5. Additional resources
- For more information on running operators on OpenShift Container Platform, see the Working with Operators in OpenShift Container Platform guide in the OpenShift Container Platform product documentation.
Chapter 4. Installing and configuring automation hub on Red Hat OpenShift Container Platform web console
You can use these instructions to install the automation hub operator on Red Hat OpenShift Container Platform, specify custom resources, and deploy Ansible Automation Platform with an external database.
Automation hub configuration can be done through the automation hub pulp_settings or directly in the user interface after deployment. However, configurations made in pulp_settings take precedence over settings made in the user interface. Hub settings should always be set as lowercase on the Hub custom resource specification.
When an instance of automation hub is removed, the PVCs are not automatically deleted. This can cause issues during migration if the new deployment has the same name as the previous one. Therefore, it is recommended that you manually remove old PVCs before deploying a new automation hub instance in the same namespace. See Finding and deleting PVCs for more information.
4.1. Prerequisites
- You have installed the Ansible Automation Platform Operator in Operator Hub.
4.2. Installing the automation hub operator
Use this procedure to install the automation hub operator.
Procedure
- Navigate to Operators → Installed Operators, then click on the Ansible Automation Platform operator.
- Locate the Automation hub entry, then click Create instance.
4.2.1. Storage options for Ansible Automation Platform Operator installation on Red Hat OpenShift Container Platform
Automation hub requires ReadWriteMany file-based storage, Azure Blob storage, or Amazon S3-compliant storage for operation so that multiple pods can access shared content, such as collections.

The process for configuring object storage on the AutomationHub CR is similar for Amazon S3 and Azure Blob Storage.

If you are using file-based storage and your installation scenario includes automation hub, ensure that the storage option for Ansible Automation Platform Operator is set to ReadWriteMany. ReadWriteMany is the default storage option.

In addition, OpenShift Data Foundation provides a ReadWriteMany or S3-compliant implementation. You can also set up NFS storage to support ReadWriteMany, although this introduces the NFS server as a potential single point of failure.
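If the ReadWriteMany-capable StorageClass is not the cluster default, you can reference it explicitly on the AutomationHub object. A minimal sketch, assuming an NFS-backed StorageClass named nfs-client:

spec:
  file_storage_access_mode: ReadWriteMany
  file_storage_storage_class: nfs-client
  file_storage_size: 10Gi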
Additional resources
- Persistent storage using NFS in the OpenShift Container Platform Storage guide
- IBM’s How do I create a storage class for NFS dynamic storage provisioning in an OpenShift environment?
4.2.1.1. Provisioning OCP storage with ReadWriteMany access mode
To ensure successful installation of Ansible Automation Platform Operator, you must provision your storage type for automation hub initially to ReadWriteMany access mode.
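For reference, a PVC provisioned with the updated access mode looks like the following sketch; the name, size, and StorageClass are placeholders for your environment. The procedure below shows where to make this change.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: automation-hub-file-storage
  namespace: <target_namespace>
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: <rwx-capable-storage-class>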
Procedure
- Click Provisioning to update the access mode.
- In the first step, update the accessModes from the default ReadWriteOnce to ReadWriteMany.
- Complete the additional steps in this section to create the persistent volume claim (PVC).
4.2.1.2. Configuring object storage on Amazon S3
Red Hat supports Amazon Simple Storage Service (S3) for automation hub. You can configure it when deploying the AutomationHub
custom resource (CR), or you can configure it for an existing instance.
Prerequisites
- Create an Amazon S3 bucket to store the objects.
- Note the name of the S3 bucket.
Procedure
Create a Kubernetes secret containing the AWS credentials and connection details, and the name of your Amazon S3 bucket. The following example creates a secret called test-s3:

$ oc -n $HUB_NAMESPACE apply -f- <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: 'test-s3'
stringData:
  s3-access-key-id: $S3_ACCESS_KEY_ID
  s3-secret-access-key: $S3_SECRET_ACCESS_KEY
  s3-bucket-name: $S3_BUCKET_NAME
  s3-region: $S3_REGION
EOF
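The command above assumes that the referenced environment variables are already set in your shell; for example (all values are placeholders):

$ export HUB_NAMESPACE=ansible-automation-platform
$ export S3_ACCESS_KEY_ID=<access-key-id>
$ export S3_SECRET_ACCESS_KEY=<secret-access-key>
$ export S3_BUCKET_NAME=<bucket-name>
$ export S3_REGION=<region>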
Add the secret to the automation hub custom resource (CR) spec:

spec:
  object_storage_s3_secret: test-s3
If you are applying this secret to an existing instance, restart the API pods for the change to take effect. In the following command, <hub-name> is the name of your hub instance:

$ oc -n $HUB_NAMESPACE delete pod -l app.kubernetes.io/name=<hub-name>-api
4.2.1.3. Configuring object storage on Azure Blob
Red Hat supports Azure Blob Storage for automation hub. You can configure it when deploying the AutomationHub
custom resource (CR), or you can configure it for an existing instance.
Prerequisites
- Create an Azure Storage blob container to store the objects.
- Note the name of the blob container.
Procedure
Create a Kubernetes secret containing the credentials and connection details for your Azure account, and the name of your Azure Storage blob container. The following example creates a secret called test-azure:

$ oc -n $HUB_NAMESPACE apply -f- <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: 'test-azure'
stringData:
  azure-account-name: $AZURE_ACCOUNT_NAME
  azure-account-key: $AZURE_ACCOUNT_KEY
  azure-container: $AZURE_CONTAINER
  azure-container-path: $AZURE_CONTAINER_PATH
  azure-connection-string: $AZURE_CONNECTION_STRING
EOF
Add the secret to the automation hub custom resource (CR) spec:

spec:
  object_storage_azure_secret: test-azure
If you are applying this secret to an existing instance, restart the API pods for the change to take effect. In the following command, <hub-name> is the name of your hub instance:

$ oc -n $HUB_NAMESPACE delete pod -l app.kubernetes.io/name=<hub-name>-api
4.2.2. Configuring your automation hub operator route options
The Red Hat Ansible Automation Platform operator installation form allows you to further configure your automation hub operator route options under Advanced configuration.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to Operators → Installed Operators.
- Select your Ansible Automation Platform Operator deployment.
- Select the Automation Hub tab.
- For new instances, click Create AutomationHub.
- For existing instances, you can edit the YAML view by clicking the ⋮ icon and then Edit AutomationHub.
- Click Advanced configuration.
- Under Ingress type, click the drop-down menu and select Route.
- Under Route DNS host, enter a common host name that the route answers to.
- Under Route TLS termination mechanism, click the drop-down menu and select Edge or Passthrough.
- Under Route TLS credential secret, click the drop-down menu and select a secret from the list.
4.2.3. Configuring the Ingress type for your automation hub operator
The Ansible Automation Platform Operator installation form allows you to further configure your automation hub operator ingress under Advanced configuration.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to Operators → Installed Operators.
- Select your Ansible Automation Platform Operator deployment.
- Select the Automation Hub tab.
- For new instances, click Create AutomationHub.
- For existing instances, you can edit the YAML view by clicking the ⋮ icon and then Edit AutomationHub.
- Click Advanced configuration.
- Under Ingress type, click the drop-down menu and select Ingress.
- Under Ingress annotations, enter any annotations to add to the ingress.
- Under Ingress TLS secret, click the drop-down menu and select a secret from the list.
After you have configured your automation hub operator, click Create at the bottom of the form view. Red Hat OpenShift Container Platform then creates the pods. This may take a few minutes.
You can view the progress by navigating to Workloads → Pods and locating the newly created instance.
Verification
Verify that the following operator pods provided by the Ansible Automation Platform Operator installation are running:

Operator manager controllers | automation controller | automation hub
---|---|---
The operator manager controllers for each of the three operators include the following: | After deploying automation controller, you will see the addition of these pods: | After deploying automation hub, you will see the addition of these pods:

A missing pod can indicate the need for a pull secret. Pull secrets are required for protected or private image registries. See Using image pull secrets for more information. You can diagnose this issue further by running oc describe pod <pod-name> to see if there is an ImagePullBackOff error on that pod.
4.3. Configuring LDAP authentication for Ansible automation hub on OpenShift Container Platform
Configure LDAP authentication settings for Ansible Automation Platform on OpenShift Container Platform in the spec section of your Hub instance configuration file.
Procedure
Use the following example to configure LDAP in your automation hub instance. For any blank fields, enter ``.

spec:
  pulp_settings:
    auth_ldap_user_attr_map:
      email: "mail"
      first_name: "givenName"
      last_name: "sn"
    auth_ldap_group_search_base_dn: 'cn=groups,cn=accounts,dc=example,dc=com'
    auth_ldap_bind_dn: ' '
    auth_ldap_bind_password: 'ldappassword'
    auth_ldap_group_search_filter: (objectClass=posixGroup)
    auth_ldap_user_search_scope: SUBTREE
    auth_ldap_server_uri: 'ldap://ldapserver:389'
    authentication_backend_preset: ldap
    auth_ldap_mirror_groups: 'True'
    auth_ldap_user_search_base_dn: 'cn=users,cn=accounts,dc=example,dc=com'
    auth_ldap_user_search_filter: (uid=%(user)s)
    auth_ldap_group_search_scope: SUBTREE
    auth_ldap_user_flags_by_group: '@json {"is_superuser": "cn=tower-admin,cn=groups,cn=accounts,dc=example,dc=com"}'
Do not leave any fields empty. For fields with no variable, enter `` to indicate a default value.
4.4. Accessing the automation hub user interface
You can access the automation hub interface once all pods have successfully launched.
Procedure
- Navigate to Networking → Routes.
- Under Location, click on the URL for your automation hub instance.
The automation hub user interface launches where you can sign in with the administrator credentials specified during the operator configuration process.
If you did not specify an administrator password during configuration, one was automatically created for you. To locate this password, go to your project, select Workloads → Secrets and open <hub-name>-admin-password. From there you can copy the password and paste it into the Automation hub password field.
4.5. Configuring an external database for automation hub on Red Hat Ansible Automation Platform Operator
If you prefer to deploy Ansible Automation Platform with an external database, you can do so by configuring a secret with instance credentials and connection information, then applying it to your cluster using the oc create command.
By default, the Ansible Automation Platform Operator automatically creates and configures a managed PostgreSQL pod in the same namespace as your Ansible Automation Platform deployment.
You can choose to use an external database instead if you prefer to use a dedicated node to ensure dedicated resources or to manually manage backups, upgrades, or performance tweaks.
The same external database (PostgreSQL instance) can be used for both automation hub and automation controller as long as the database names are different. In other words, you can have multiple databases with different names inside a single PostgreSQL instance.
The following section outlines the steps to configure an external database for your automation hub with the Ansible Automation Platform Operator.
Prerequisite
The external database must be a PostgreSQL database that is the version supported by the current release of Ansible Automation Platform.
Ansible Automation Platform 2.4 supports PostgreSQL 13.
Procedure
The external postgres instance credentials and connection information must be stored in a secret, which is then set on the automation hub spec.

Create an external-postgres-configuration-secret.yml file, following the template below:

apiVersion: v1
kind: Secret
metadata:
  name: external-postgres-configuration
  namespace: <target_namespace>    1
stringData:
  host: "<external_ip_or_url_resolvable_by_the_cluster>"    2
  port: "<external_port>"    3
  database: "<desired_database_name>"
  username: "<username_to_connect_as>"
  password: "<password_to_connect_with>"    4
  sslmode: "prefer"    5
  type: "unmanaged"
type: Opaque

1 - Namespace to create the secret in. This should be the same namespace you want to deploy to.
2 - The resolvable hostname for your database node.
3 - External port defaults to 5432.
4 - The value for password must not contain single or double quotes (', ") or backslashes (\) to avoid any issues during deployment, backup, or restoration.
5 - The variable sslmode is valid for external databases only. The allowed values are: prefer, disable, allow, require, verify-ca, and verify-full.
Apply external-postgres-configuration-secret.yml to your cluster using the oc create command:

$ oc create -f external-postgres-configuration-secret.yml
When creating your AutomationHub custom resource object, specify the secret on your spec, following the example below:

apiVersion: automationhub.ansible.com/v1beta1
kind: AutomationHub
metadata:
  name: hub-dev
spec:
  postgres_configuration_secret: external-postgres-configuration
4.5.1. Enabling the hstore extension for the automation hub PostgreSQL database
From Ansible Automation Platform 2.4, the database migration script uses hstore fields to store information, therefore the hstore extension must be enabled in the automation hub PostgreSQL database.
This process is automatic when using the Ansible Automation Platform installer and a managed PostgreSQL server.
If the PostgreSQL database is external, you must enable the hstore extension in the automation hub PostgreSQL database manually before automation hub installation. If the hstore extension is not enabled before automation hub installation, a failure is raised during database migration.
Procedure
Check if the extension is available on the PostgreSQL server (automation hub database):

$ psql -d <automation hub database> -c "SELECT * FROM pg_available_extensions WHERE name='hstore'"

Where the default value for <automation hub database> is automationhub.

Example output with hstore available:

 name   | default_version | installed_version |                     comment
--------+-----------------+-------------------+---------------------------------------------------
 hstore | 1.7             |                   | data type for storing sets of (key, value) pairs
(1 row)

Example output with hstore not available:

 name | default_version | installed_version | comment
------+-----------------+-------------------+---------
(0 rows)
On a RHEL based server, the hstore extension is included in the postgresql-contrib RPM package, which is not installed automatically when installing the PostgreSQL server RPM package. To install the RPM package, use the following command:

dnf install postgresql-contrib
Create the hstore PostgreSQL extension on the automation hub database with the following command:

$ psql -d <automation hub database> -c "CREATE EXTENSION hstore;"

The output of which is:

CREATE EXTENSION

In the following output, the installed_version field contains the hstore extension used, indicating that hstore is enabled:

 name   | default_version | installed_version |                     comment
--------+-----------------+-------------------+---------------------------------------------------
 hstore | 1.7             | 1.7               | data type for storing sets of (key, value) pairs
(1 row)
4.6. Finding and deleting PVCs
A persistent volume claim (PVC) is a storage volume used to store data that the automation hub and automation controller applications use. These PVCs are independent from the applications and remain even when the application is deleted. If you are confident that you no longer need a PVC, or have backed it up elsewhere, you can manually delete it.
Procedure
List the existing PVCs in your deployment namespace:
oc get pvc -n <namespace>
- Identify the PVC associated with your previous deployment by comparing the old deployment name and the PVC name.
Delete the old PVC:
oc delete pvc -n <namespace> <pvc-name>
4.7. Additional configurations
A collection download count can help you understand collection usage. To add a collection download count to automation hub, set the following configuration:
spec:
  pulp_settings:
    ansible_collect_download_count: true

When ansible_collect_download_count is enabled, automation hub displays a download count beside each collection.
4.8. Additional resources
- For more information on running operators on OpenShift Container Platform, see the Working with Operators in OpenShift Container Platform guide in the OpenShift Container Platform product documentation.
Chapter 5. Installing Red Hat Ansible Automation Platform Operator from the OpenShift Container Platform CLI
Use these instructions to install the Ansible Automation Platform Operator on Red Hat OpenShift Container Platform from the OpenShift Container Platform command-line interface (CLI) using the oc command.
5.1. Prerequisites
- Access to Red Hat OpenShift Container Platform using an account with operator installation permissions.
- The OpenShift Container Platform CLI oc command is installed on your local system. Refer to Installing the OpenShift CLI in the Red Hat OpenShift Container Platform product documentation for further information.
5.2. Subscribing a namespace to an operator using the OpenShift Container Platform CLI
Use this procedure to subscribe a namespace to an operator.
You can only subscribe a single instance of the Ansible Automation Platform Operator into a single namespace. Subscribing multiple instances in the same namespace can lead to improper operation for both operator instances.
Procedure
Create a project for the operator:

oc new-project ansible-automation-platform
- Create a file called sub.yaml.
- Add the following YAML code to the sub.yaml file:

---
apiVersion: v1
kind: Namespace
metadata:
  labels:
    openshift.io/cluster-monitoring: "true"
  name: ansible-automation-platform
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: ansible-automation-platform-operator
  namespace: ansible-automation-platform
spec:
  targetNamespaces:
    - ansible-automation-platform
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ansible-automation-platform
  namespace: ansible-automation-platform
spec:
  channel: 'stable-2.4'
  installPlanApproval: Automatic
  name: ansible-automation-platform-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
---
apiVersion: automationcontroller.ansible.com/v1beta1
kind: AutomationController
metadata:
  name: example
  namespace: ansible-automation-platform
spec:
  replicas: 1
This file creates a Subscription object called ansible-automation-platform that subscribes the ansible-automation-platform namespace to the ansible-automation-platform-operator operator.

It then creates an AutomationController object called example in the ansible-automation-platform namespace.

To change the automation controller name from example, edit the name field in the kind: AutomationController section of sub.yaml and replace <automation_controller_name> with the name you want to use:

apiVersion: automationcontroller.ansible.com/v1beta1
kind: AutomationController
metadata:
  name: <automation_controller_name>
  namespace: ansible-automation-platform
Run the oc apply command to create the objects specified in the sub.yaml file:

oc apply -f sub.yaml
To verify that the namespace has been successfully subscribed to the ansible-automation-platform-operator operator, run the oc get subs command:

$ oc get subs -n ansible-automation-platform
For further information about subscribing namespaces to operators, see Installing from OperatorHub using the CLI in the Red Hat OpenShift Container Platform Operators guide.
You can use the OpenShift Container Platform CLI to fetch the web address and the password of the Automation controller that you created.
5.3. Fetching Automation controller login details from the OpenShift Container Platform CLI
To log in to the Automation controller, you need the web address and the password.
5.3.1. Fetching the automation controller web address
A Red Hat OpenShift Container Platform route exposes a service at a host name, so that external clients can reach it by name. When you created the automation controller instance, a route was created for it. The route inherits the name that you assigned to the automation controller object in the YAML file.
Use the following command to fetch the routes:
oc get routes -n <controller_namespace>
In the following example, the example automation controller is running in the ansible-automation-platform namespace.

$ oc get routes -n ansible-automation-platform

NAME      HOST/PORT                                               PATH   SERVICES          PORT   TERMINATION     WILDCARD
example   example-ansible-automation-platform.apps-crc.testing          example-service   http   edge/Redirect   None

The address for the automation controller instance is example-ansible-automation-platform.apps-crc.testing.
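If you are scripting this step, a JSONPath query returns just the host name. For example, for the example route above:

$ oc get route example -n ansible-automation-platform -o jsonpath='{.spec.host}'
example-ansible-automation-platform.apps-crc.testing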
5.3.2. Fetching the automation controller password
The YAML block for the automation controller instance in sub.yaml assigns values to the name and admin_user keys. Use these values in the following command to fetch the password for the automation controller instance.
oc get secret/<controller_name>-<admin_user>-password -o yaml
The default value for admin_user is admin. Modify the command if you changed the admin username in sub.yaml.
The following example retrieves the password for an automation controller object called example
:
oc get secret/example-admin-password -o yaml
The password for the automation controller instance is listed in the data field in the output:

$ oc get secret/example-admin-password -o yaml

apiVersion: v1
data:
  password: ODzLODzLODzLODzLODzLODzLODzLODzLODzLODzLODzL
kind: Secret
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: '{"apiVersion":"v1","kind":"Secret","metadata":{"labels":{"app.kubernetes.io/component":"automationcontroller","app.kubernetes.io/managed-by":"automationcontroller-operator","app.kubernetes.io/name":"example","app.kubernetes.io/operator-version":"","app.kubernetes.io/part-of":"example"},"name":"example-admin-password","namespace":"ansible-automation-platform"},"stringData":{"password":"88TG88TG88TG88TG88TG88TG88TG88TG"}}'
  creationTimestamp: "2021-11-03T00:02:24Z"
  labels:
    app.kubernetes.io/component: automationcontroller
    app.kubernetes.io/managed-by: automationcontroller-operator
    app.kubernetes.io/name: example
    app.kubernetes.io/operator-version: ""
    app.kubernetes.io/part-of: example
  name: example-admin-password
  namespace: ansible-automation-platform
  resourceVersion: "185185"
  uid: 39393939-5252-4242-b929-665f665f665f
For this example, the password is 88TG88TG88TG88TG88TG88TG88TG88TG.
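Because the value under the data field is base64-encoded, you can also retrieve and decode the password in a single step:

$ oc get secret/example-admin-password -o jsonpath='{.data.password}' | base64 --decode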
5.4. Additional resources
- For more information on running operators on OpenShift Container Platform, see the Working with Operators in OpenShift Container Platform guide in the OpenShift Container Platform product documentation.
Chapter 6. Deploying Event-Driven Ansible controller with Red Hat Ansible Automation Platform Operator on Red Hat OpenShift Container Platform
Event-Driven Ansible controller is the interface for event-driven automation and introduces automated resolution of IT requests. This component helps you connect to sources of events and acts on those events using rulebooks. When you deploy Event-Driven Ansible controller, you can automate decision making, use numerous event sources, implement event-driven automation within and across multiple IT use cases, and achieve more efficient service delivery.
Use the following instructions to install Event-Driven Ansible with your Ansible Automation Platform Operator on OpenShift Container Platform.
Prerequisites
- You have installed Ansible Automation Platform Operator on OpenShift Container Platform.
- You have installed and configured automation controller.
Procedure
- Select Operators → Installed Operators.
- Locate and select your installation of Ansible Automation Platform.
- Under the Details tab, locate the EDA modal and click Create instance.
In the Name field, enter the name you want for your new Event-Driven Ansible controller deployment.
Important: If you have installed other Ansible Automation Platform components in your current OpenShift Container Platform namespace, ensure that you provide a unique name for your Event-Driven Ansible controller when you create your Event-Driven Ansible custom resource. Otherwise, naming conflicts can occur and impact Event-Driven Ansible controller deployment.
Specify your controller URL in the Automation Server URL field.
If you deployed automation controller in OpenShift as well, you can find the URL in the navigation panel under Networking → Routes.
Note: This is the only required customization, but you can customize other options using the UI form or directly in the YAML configuration tab, if desired.
Important: To ensure that you can run concurrent Event-Driven Ansible activations efficiently, you must set your maximum number of activations in proportion to the resources available on your cluster. You can do this by adjusting your Event-Driven Ansible settings in the YAML view.
When you activate an Event-Driven Ansible rulebook under standard conditions, it uses approximately 250 MB of memory. However, the actual memory consumption can vary significantly based on the complexity of your rules and the volume and size of the events processed. In scenarios where a large number of events are anticipated or the rulebook complexity is high, conduct a preliminary assessment of resource usage in a staging environment. This ensures that your maximum number of activations is based on the capacity of your resources.
- Click YAML view to update your YAML key values.
Copy and paste the following string at the end of the spec key value section:

extra_settings:
  - setting: EDA_MAX_RUNNING_ACTIVATIONS
    value: '12'
Click Create. This deploys Event-Driven Ansible controller in the namespace you specified.
After a couple of minutes, when the installation is marked as Successful, you can find the URL for the Event-Driven Ansible UI on the Routes page in the OpenShift UI.
From the navigation panel, select Networking → Routes to find the new Route URL that has been created for you.
Routes are listed according to the name of your custom resource.
- Click the new URL under the Location column to navigate to Event-Driven Ansible in the browser.
From the navigation panel, select Workloads → Secrets and locate the Admin Password k8s secret that was created for you, unless you specified a custom one.
Secrets are listed according to the name of your custom resource and appended with -admin-password.
Note: You can use the password value in the secret to log in to the Event-Driven Ansible controller UI. The default user is admin.
Chapter 7. Using Red Hat Single Sign-On Operator with automation hub
Private automation hub uses Red Hat Single Sign-On for authentication.
The Red Hat Single Sign-On Operator creates and manages resources. Use this Operator to create custom resources to automate Red Hat Single Sign-On administration in OpenShift.
- When installing Ansible Automation Platform on Virtual Machines (VMs) the installer can automatically install and configure Red Hat Single Sign-On for use with private automation hub.
- When installing Ansible Automation Platform on Red Hat OpenShift Container Platform you must install Single Sign-On separately.
This chapter describes the process to configure Red Hat Single Sign-On and integrate it with private automation hub when Ansible Automation Platform is installed on OpenShift Container Platform.
Prerequisites
- You have access to Red Hat OpenShift Container Platform using an account with operator installation permissions.
- You have installed the catalog containing the Red Hat Ansible Automation Platform operators.
- You have installed the Red Hat Single Sign-On Operator. To install the Red Hat Single Sign-On Operator, follow the procedure in Installing Red Hat Single Sign-On using a custom resource in the Red Hat Single Sign-On documentation.
7.1. Creating a Keycloak instance
When the Red Hat Single Sign-On Operator is installed you can create a Keycloak instance for use with Ansible Automation Platform.
You can provide an external PostgreSQL database, or the Operator creates one for you.
Procedure
- Navigate to Operators → Installed Operators.
- Select the rh-sso project.
- Select the Red Hat Single Sign-On Operator.
- On the Red Hat Single Sign-On Operator details page, select Keycloak.
- Click Create instance.
- Click YAML view. The default Keycloak custom resource is as follows:

apiVersion: keycloak.org/v1alpha1
kind: Keycloak
metadata:
  name: example-keycloak
  labels:
    app: sso
  namespace: aap
spec:
  externalAccess:
    enabled: true
  instances: 1
- Click Create.
- When deployment is complete, you can use these credentials to log in to the administrative console.
- You can find the credentials for the administrator in the credential-<custom-resource> (example keycloak) secret in the namespace.
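You can also read these credentials from the command line. A sketch, assuming the Keycloak custom resource is named example-keycloak in the rh-sso namespace and that the Operator stores the values under the ADMIN_USERNAME and ADMIN_PASSWORD keys:

$ oc -n rh-sso get secret credential-example-keycloak \
    -o jsonpath='{.data.ADMIN_PASSWORD}' | base64 --decode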
7.2. Creating a Keycloak realm for Ansible Automation Platform
Create a realm to manage a set of users, credentials, roles, and groups. A user belongs to and logs into a realm. Realms are isolated from one another and can only manage and authenticate the users that they control.
Procedure
- Navigate to Operators → Installed Operators.
- Select the Red Hat Single Sign-On Operator project.
- Select the Keycloak Realm tab and click Create KeycloakRealm.
- On the Keycloak Realm form, select YAML view. Edit the YAML file as follows:

kind: KeycloakRealm
apiVersion: keycloak.org/v1alpha1
metadata:
  name: ansible-automation-platform-keycloakrealm
  namespace: rh-sso
  labels:
    app: sso
    realm: ansible-automation-platform
spec:
  realm:
    id: ansible-automation-platform
    realm: ansible-automation-platform
    enabled: true
    displayName: Ansible Automation Platform
  instanceSelector:
    matchLabels:
      app: sso
Field | Description
---|---
metadata.name | Set a unique value in metadata for the name of the configuration resource (CR).
metadata.namespace | Set a unique value in metadata for the name of the configuration resource (CR).
metadata.labels.app | Set labels to a unique value. This is used when creating the client CR.
metadata.labels.realm | Set labels to a unique value. This is used when creating the client CR.
spec.realm.id | Set the realm name and id. These must be the same.
spec.realm.realm | Set the realm name and id. These must be the same.
spec.realm.displayName | Set the name to display.
- Click Create and wait for the process to complete.
7.3. Creating a Keycloak client
Keycloak clients authenticate hub users with Red Hat Single Sign-On. When a user authenticates, the request goes through the Keycloak client. When Single Sign-On validates or issues the OAuth token, the client provides the response to automation hub and the user can log in.
Procedure
- Navigate to Operators → Installed Operators.
- Select the Red Hat Single Sign-On Operator project.
- Select the Keycloak Client tab and click Create KeycloakClient.
- On the Keycloak Client form, select YAML view.
- Replace the default YAML file with the following:

kind: KeycloakClient
apiVersion: keycloak.org/v1alpha1
metadata:
  name: automation-hub-client-secret
  labels:
    app: sso
    realm: ansible-automation-platform
  namespace: rh-sso
spec:
  realmSelector:
    matchLabels:
      app: sso
      realm: ansible-automation-platform
  client:
    name: Automation Hub
    clientId: automation-hub
    secret: <client-secret>    1
    clientAuthenticatorType: client-secret
    description: Client for automation hub
    attributes:
      user.info.response.signature.alg: RS256
      request.object.signature.alg: RS256
    directAccessGrantsEnabled: true
    publicClient: true
    protocol: openid-connect
    standardFlowEnabled: true
    protocolMappers:
      - config:
          access.token.claim: "true"
          claim.name: "family_name"
          id.token.claim: "true"
          jsonType.label: String
          user.attribute: lastName
          userinfo.token.claim: "true"
        consentRequired: false
        name: family name
        protocol: openid-connect
        protocolMapper: oidc-usermodel-property-mapper
      - config:
          userinfo.token.claim: "true"
          user.attribute: email
          id.token.claim: "true"
          access.token.claim: "true"
          claim.name: email
          jsonType.label: String
        name: email
        protocol: openid-connect
        protocolMapper: oidc-usermodel-property-mapper
        consentRequired: false
      - config:
          multivalued: "true"
          access.token.claim: "true"
          claim.name: "resource_access.${client_id}.roles"
          jsonType.label: String
        name: client roles
        protocol: openid-connect
        protocolMapper: oidc-usermodel-client-role-mapper
        consentRequired: false
      - config:
          userinfo.token.claim: "true"
          user.attribute: firstName
          id.token.claim: "true"
          access.token.claim: "true"
          claim.name: given_name
          jsonType.label: String
        name: given name
        protocol: openid-connect
        protocolMapper: oidc-usermodel-property-mapper
        consentRequired: false
      - config:
          id.token.claim: "true"
          access.token.claim: "true"
          userinfo.token.claim: "true"
        name: full name
        protocol: openid-connect
        protocolMapper: oidc-full-name-mapper
        consentRequired: false
      - config:
          userinfo.token.claim: "true"
          user.attribute: username
          id.token.claim: "true"
          access.token.claim: "true"
          claim.name: preferred_username
          jsonType.label: String
        name: <username>
        protocol: openid-connect
        protocolMapper: oidc-usermodel-property-mapper
        consentRequired: false
      - config:
          access.token.claim: "true"
          claim.name: "group"
          full.path: "true"
          id.token.claim: "true"
          userinfo.token.claim: "true"
        consentRequired: false
        name: group
        protocol: openid-connect
        protocolMapper: oidc-group-membership-mapper
      - config:
          multivalued: 'true'
          id.token.claim: 'true'
          access.token.claim: 'true'
          userinfo.token.claim: 'true'
          usermodel.clientRoleMapping.clientId: 'automation-hub'
          claim.name: client_roles
          jsonType.label: String
        name: client_roles
        protocolMapper: oidc-usermodel-client-role-mapper
        protocol: openid-connect
      - config:
          id.token.claim: "true"
          access.token.claim: "true"
          included.client.audience: 'automation-hub'
        protocol: openid-connect
        name: audience mapper
        protocolMapper: oidc-audience-mapper
  roles:
    - name: "hubadmin"
      description: "An administrator role for automation hub"
1 - Replace this with a unique value.
- Click Create and wait for the process to complete.
When automation hub is deployed, you must update the client with the "Valid Redirect URIs" and "Web Origins" settings, as described in Updating the Red Hat Single Sign-On client. Additionally, the client comes preconfigured with token mappers; however, if your authentication provider does not provide group data to Red Hat SSO, the group mapping must be updated to reflect how that information is passed. This is commonly done by user attribute.
7.4. Creating a Keycloak user
This procedure creates a Keycloak user, with the hubadmin
role, that can log in to automation hub with Super Administration privileges.
Procedure
- Navigate to Operators → Installed Operators.
- Select the Red Hat Single Sign-On Operator project.
- Select the Keycloak User tab and click Create KeycloakUser.
- On the Keycloak User form, select YAML view.
- Replace the default YAML file with the following:

apiVersion: keycloak.org/v1alpha1
kind: KeycloakUser
metadata:
  name: hubadmin-user
  labels:
    app: sso
    realm: ansible-automation-platform
  namespace: rh-sso
spec:
  realmSelector:
    matchLabels:
      app: sso
      realm: ansible-automation-platform
  user:
    username: hub_admin
    firstName: Hub
    lastName: Admin
    email: hub_admin@example.com
    enabled: true
    emailVerified: false
    credentials:
      - type: password
        value: <ch8ngeme>
    clientRoles:
      automation-hub:
        - hubadmin
- Click Create and wait for the process to complete.
When a user is created, the Operator creates a secret containing both the username and password, using the following naming pattern: credential-<realm name>-<username>-<namespace>. In this example the credential is called credential-ansible-automation-platform-hub-admin-rh-sso. When a user is created, the Operator does not update the user's password; password changes are not reflected in the secret.
7.5. Installing the Ansible Automation Platform Operator
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to Operators → OperatorHub.
- Search for the Ansible Automation Platform Operator.
- Select the Ansible Automation Platform Operator project.
- Click on the Operator tile.
- Click Install.
- Select a Project to install the Operator into. Red Hat recommends using the Operator recommended Namespace name.
  - If you want to install the Operator into a project other than the recommended one, select Create Project from the drop-down menu.
  - Enter the Project name.
  - Click Create.
- Click Install.
- When the Operator has been installed, click View Operator.
7.6. Creating a Red Hat Single Sign-On connection secret
Use this procedure to create a connection secret for Red Hat Single Sign-On.
Procedure
- Navigate to https://<sso_host>/auth/realms/ansible-automation-platform.
- Copy the public_key value.
- In the OpenShift Web UI, navigate to Workloads → Secrets.
- Select the ansible-automation-platform project.
- Click Create, and select From YAML.
Edit the following YAML to create the secret:

apiVersion: v1
kind: Secret
metadata:
  name: automation-hub-sso    1
  namespace: ansible-automation-platform
type: Opaque
stringData:
  keycloak_host: "keycloak-rh-sso.apps-crc.testing"
  keycloak_port: "443"
  keycloak_protocol: "https"
  keycloak_realm: "ansible-automation-platform"
  keycloak_admin_role: "hubadmin"
  social_auth_keycloak_key: "automation-hub"
  social_auth_keycloak_secret: "client-secret"    2
  social_auth_keycloak_public_key: >-    3
1 - This name is used in the next step when creating the automation hub instance.
2 - If the secret was changed when creating the Keycloak client for automation hub, be sure to change this value to match.
3 - Enter the value of the public_key that you copied earlier in this procedure.
- Click Create and wait for the process to complete.
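If you prefer to fetch the public_key from the command line rather than the browser, the realm endpoint returns JSON that includes a public_key field. A sketch, assuming python3 is available locally:

$ curl -s https://<sso_host>/auth/realms/ansible-automation-platform \
    | python3 -c 'import json,sys; print(json.load(sys.stdin)["public_key"])'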
7.7. Installing automation hub using the Ansible Automation Platform Operator
Use the following procedure to install automation hub using the Ansible Automation Platform Operator.
Procedure
- Navigate to Operators → Installed Operators.
- Select your Ansible Automation Platform Operator deployment.
- Select the Automation hub tab.
- Click Create instance.
- Select YAML view. The YAML should be similar to:

apiVersion: automationhub.ansible.com/v1beta1
kind: AutomationHub
metadata:
  name: private-ah    1
  namespace: aap
spec:
  sso_secret: automation-hub-sso    2
  pulp_settings:
    verify_ssl: false
  route_tls_termination_mechanism: Edge
  ingress_type: Route
  loadbalancer_port: 80
  file_storage_size: 100Gi
  image_pull_policy: IfNotPresent
  replicas: 1    3
  web_replicas: N
  task_replicas: N
  file_storage_access_mode: ReadWriteMany
  content:
    log_level: INFO
    replicas: 2
  postgres_storage_requirements:
    limits:
      storage: 50Gi
    requests:
      storage: 8Gi
  api:
    log_level: INFO
    replicas: 1
  postgres_resource_requirements:
    limits:
      cpu: 1000m
      memory: 8Gi
    requests:
      cpu: 500m
      memory: 2Gi
  loadbalancer_protocol: http
  resource_manager:
    replicas: 1
  worker:
    replicas: 2
1. Set metadata.name to the name to use for the instance.
2. Set spec.sso_secret to the name of the secret created in Creating a Red Hat Single Sign-On connection secret.
3. Scale replicas up or down for each deployment by using web_replicas or task_replicas respectively, where N represents the number of replicas you want to create. Alternatively, you can scale all pods across both deployments by using replicas. See Scaling the Web and Task Pods independently for details.
Note: This YAML turns off SSL verification (verify_ssl: false). If you are not using self-signed certificates for OpenShift, this setting can be removed.
- Click Create and wait for the process to complete.
7.8. Determining the automation hub Route
Use the following procedure to determine the hub route.
Procedure
- Navigate to Networking → Routes.
- Select the project you used for the install.
- Copy the location of the private-ah-web-svc service. The name of the service is different if you used a different name when creating the automation hub instance. This is used later to update the Red Hat Single Sign-On client.
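The same location is available from the CLI. For example, assuming the aap namespace used in the previous section:

oc get routes -n aap

Copy the value shown in the HOST/PORT column.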
7.9. Updating the Red Hat Single Sign-On client
When automation hub is installed and you know the URL of the instance, you must update the Red Hat Single Sign-On client to set the Valid Redirect URIs and Web Origins settings.
Procedure
- Navigate to Operators → Installed Operators.
- Select the RH-SSO project.
- Click the Red Hat Single Sign-On Operator.
- Select the Keycloak Client tab.
- Click on the automation-hub-client-secret client.
- Select the YAML tab.
Update the Client YAML to add the Valid Redirect URIs and Web Origins settings:

redirectUris:
  - 'https://private-ah-ansible-automation-platform.apps-crc.testing/*'
webOrigins:
  - 'https://private-ah-ansible-automation-platform.apps-crc.testing'
redirectUris: This is the location determined in Determining the automation hub Route. Be sure to add /* to the end of the redirectUris setting.
webOrigins: This is the location determined in Determining the automation hub Route.
Note: Ensure the indentation is correct when entering these settings.
- Click Save.
To verify connectivity:
- Navigate to the automation hub route.
- Enter the hub_admin user credentials and sign in.
- Red Hat Single Sign-On processes the authentication and redirects back to automation hub.
7.10. Additional resources
- For more information on running operators on OpenShift Container Platform, see Working with Operators in OpenShift Container Platform in the OpenShift Container Platform product documentation.
Chapter 8. Migrating Red Hat Ansible Automation Platform to Red Hat Ansible Automation Platform Operator
Migrating your Red Hat Ansible Automation Platform deployment to the Ansible Automation Platform Operator allows you to take advantage of the benefits provided by a Kubernetes native operator, including simplified upgrades and full lifecycle support for your Red Hat Ansible Automation Platform deployments.
Use these procedures to migrate any of the following deployments to the Ansible Automation Platform Operator:
- A VM-based installation of Ansible Tower 3.8.6, automation controller, or automation hub
- An OpenShift instance of Ansible Tower 3.8.6 (Ansible Automation Platform 1.2)
8.1. Migration considerations
If you are upgrading from Ansible Automation Platform 1.2 on OpenShift Container Platform 3 to Ansible Automation Platform 2.x on OpenShift Container Platform 4, you must provision a fresh OpenShift Container Platform version 4 cluster and then migrate your Ansible Automation Platform deployment to the new cluster.
8.2. Preparing for migration
Before migrating your current Ansible Automation Platform deployment to the Ansible Automation Platform Operator, you need to back up your existing data and create k8s secrets for your secret key and PostgreSQL configuration.
If you are migrating both automation controller and automation hub instances, repeat the steps in Creating a secret key secret and Creating a postgresql configuration secret for both and then proceed to Migrating data to the Ansible Automation Platform Operator.
8.2.1. Migrating to Ansible Automation Platform Operator
Prerequisites
To migrate an Ansible Automation Platform deployment to the Ansible Automation Platform Operator, you must have the following:
- A secret key secret
- A PostgreSQL configuration secret
- Role-based Access Control for the namespaces on the new OpenShift cluster
- The new OpenShift cluster must be able to connect to the previous PostgreSQL database
You can store the secret key information in the inventory file before the initial Red Hat Ansible Automation Platform installation. If you are unable to remember your secret key or have trouble locating your inventory file, contact Ansible support through the Red Hat Customer portal.
Before migrating your data from Ansible Automation Platform 2.x or earlier, you must back up your data to prevent loss. To back up your data, do the following:
Procedure
- Log in to your current deployment project.
Run setup.sh to create a backup of your current data or deployment.
For on-prem deployments of version 2.x or earlier:
$ ./setup.sh -b
For OpenShift deployments before version 2.0 (non-operator deployments):
$ ./setup_openshift.sh -b
8.2.2. Creating a secret key secret
To migrate your data to Ansible Automation Platform Operator on OpenShift Container Platform, you must create a secret key that matches the secret key defined in the inventory file during your initial installation. Otherwise, the migrated data will remain encrypted and unusable after migration.
Procedure
- Locate the old secret key in the inventory file you used to deploy Ansible Automation Platform in your previous installation.
Create a yaml file for your secret key:

apiVersion: v1
kind: Secret
metadata:
  name: <resourcename>-secret-key
  namespace: <target-namespace>
stringData:
  secret_key: <old-secret-key>
type: Opaque

Note: If secret_key_secret is not provided, the operator looks for a secret named <resourcename>-secret-key for the secret key. If it is not present, the operator generates a new key and creates a Secret from it named <resourcename>-secret-key. For migration, the secret key must match the key from your old installation, otherwise the migrated data remains encrypted and unusable.
Apply the secret key yaml to the cluster:
oc apply -f <secret-key.yml>
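Before proceeding, you can confirm the secret exists in the target namespace; a quick check using the placeholder names above:

oc get secret <resourcename>-secret-key -n <target-namespace>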
8.2.3. Creating a postgresql configuration secret
For migration to be successful, you must provide access to the database for your existing deployment.
Procedure
Create a yaml file for your postgresql configuration secret:

apiVersion: v1
kind: Secret
metadata:
  name: <resourcename>-old-postgres-configuration
  namespace: <target namespace>
stringData:
  host: "<external ip or url resolvable by the cluster>"
  port: "<external port, this usually defaults to 5432>"
  database: "<desired database name>"
  username: "<username to connect as>"
  password: "<password to connect with>"
type: Opaque
- Apply the postgresql configuration yaml to the cluster:
oc apply -f <old-postgres-configuration.yml>
8.2.4. Verifying network connectivity
To ensure successful migration of your data, verify that you have network connectivity from your new operator deployment to your old deployment database.
Prerequisites
Take note of the host and port information from your existing deployment. This information is in the postgres.py file in the conf.d directory.
Procedure
Create a yaml file, for example connection_checker.yaml, to verify the connection between your new deployment and your old deployment database:

apiVersion: v1
kind: Pod
metadata:
  name: dbchecker
spec:
  containers:
    - name: dbchecker
      image: registry.redhat.io/rhel8/postgresql-13:latest
      command: ["sleep"]
      args: ["600"]
Apply the connection checker yaml file to your new project deployment:

oc project ansible-automation-platform
oc apply -f connection_checker.yaml
Verify that the connection checker pod is running:
oc get pods
Connect to a pod shell:
oc rsh dbchecker
After the shell session opens in the pod, verify that the new project can connect to your old project cluster:
pg_isready -h <old-host-address> -p <old-port-number> -U awx
Example output:
<old-host-address>:<old-port-number> - accepting connections
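When connectivity is confirmed, the checker pod can be removed; it also terminates on its own once the 600 second sleep expires:

oc delete pod dbchecker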
8.3. Migrating data to the Ansible Automation Platform Operator
After you have set your secret key and PostgreSQL credentials, verified network connectivity, and installed the Ansible Automation Platform Operator, you must create the custom resource objects before you can migrate your data.
8.3.1. Creating an AutomationController object
Use the following steps to create an AutomationController custom resource object.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to Operators → Installed Operators.
- Select the Ansible Automation Platform Operator installed on your project namespace.
- Select the Automation Controller tab.
- Click Create instance.
- Enter a name for the new deployment.
In Advanced configurations, do the following:
- From the Admin Password Secret list, select your secret key secret.
- From the Database Configuration Secret list, select the postgres configuration secret.
- Click Create.
8.3.2. Creating an AutomationHub object
Use the following steps to create an AutomationHub custom resource object.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to Operators → Installed Operators.
- Select the Ansible Automation Platform Operator installed on your project namespace.
- Select the Automation Hub tab.
- Click Create instance.
- Enter a name for the new deployment.
- In Advanced configurations, select your secret key secret and postgres configuration secret.
- Click Create.
8.4. Post migration cleanup
After your data migration is complete, you must delete any Instance Groups that are no longer required.
Procedure
Log in to Red Hat Ansible Automation Platform as the administrator with the password you created during migration.
Note: If you did not create an administrator password during migration, one was automatically created for you. To locate this password, go to your project, select Workloads → Secrets and open controller-admin-password. From there you can copy the password and paste it into the Red Hat Ansible Automation Platform password field.
- Select Administration → Instance Groups.
- Select all Instance Groups except controlplane and default.
- Click Delete.
Chapter 9. Upgrading Red Hat Ansible Automation Platform Operator on OpenShift Container Platform
The Ansible Automation Platform Operator simplifies the installation, upgrade and deployment of new Red Hat Ansible Automation Platform instances in your OpenShift Container Platform environment.
9.1. Upgrade considerations
Red Hat Ansible Automation Platform version 2.0 was the first release of the Ansible Automation Platform Operator. If you are upgrading from version 2.0, continue to the Upgrading the Ansible Automation Platform Operator procedure.
If you are using a version of OpenShift Container Platform that is not supported by the version of Red Hat Ansible Automation Platform to which you are upgrading, you must upgrade your OpenShift Container Platform cluster to a supported version before upgrading.
Refer to the Red Hat Ansible Automation Platform Life Cycle to determine the OpenShift Container Platform version needed.
For information about upgrading your cluster, refer to Updating clusters.
9.2. Prerequisites
To upgrade to a newer version of Ansible Automation Platform Operator, it is recommended that you do the following:
- Create AutomationControllerBackup and AutomationHubBackup objects. For help with this, see Creating Red Hat Ansible Automation Platform backup resources.
- Review the release notes for the new Ansible Automation Platform version to which you are upgrading and any intermediate versions.
9.3. Upgrading the Ansible Automation Platform Operator
To upgrade to the latest version of Ansible Automation Platform Operator on OpenShift Container Platform, do the following:
Procedure
- Log in to OpenShift Container Platform.
- Navigate to Operators → Installed Operators.
- Select the Subscriptions tab.
- Under Upgrade status, click Upgrade available.
- Click Preview InstallPlan.
- Click Approve.
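You can confirm the Operator version from the CLI by listing the cluster service versions in the Operator's namespace; the namespace placeholder below is yours to fill in:

oc get csv -n <operator-namespace>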
Chapter 10. Adding execution nodes to Red Hat Ansible Automation Platform Operator
You can add execution nodes to your Ansible Automation Platform Operator deployment by downloading and installing the install bundle.
Prerequisites
- An automation controller instance.
- The receptor collection package is installed.
- The Ansible Automation Platform repository ansible-automation-platform-2.4-for-rhel-{RHEL-RELEASE-NUMBER}-x86_64-rpms is enabled.
Procedure
- Log in to Red Hat Ansible Automation Platform.
- In the navigation panel, select Administration → Instances.
- Click Add.
- Input the Execution Node domain name or IP in the Host Name field.
- Optional: Input the port number in the Listener Port field.
- Click Save.
- Click the download icon next to Install Bundle. This starts a download; take note of where you save the file.
Untar the gz file.
Note: To run the install_receptor.yml playbook, you need to install the receptor collection from Ansible Galaxy:
ansible-galaxy collection install -r requirements.yml
Update the inventory file in the bundle with your user name and SSH private key file. Note that ansible_host is pre-populated with the hostname you input earlier.

all:
  hosts:
    remote-execution:
      ansible_host: example_host_name # Same as configured in the automation controller UI
      ansible_user: <username> # User provided
      ansible_ssh_private_key_file: ~/.ssh/id_example
- Open your terminal, and navigate to the directory where you saved the playbook.
To install the bundle run:
ansible-playbook install_receptor.yml -i inventory.yml
- After installation, you can upgrade your execution node by downloading and re-running the playbook for the instance you created.
Verification
To verify receptor service status, run the following command:
sudo systemctl status receptor.service
Make sure the service is in the active (running) state.
To verify that your playbook runs correctly on your new node, run the following command:
watch podman ps
Additional resources
- For more information about managing instance groups see the Managing Instance Groups section of the Automation Controller User Guide.
Chapter 11. Ansible Automation Platform Resource Operator
11.1. Resource Operator overview
Resource Operator is an Operator that you can deploy after you have created your automation controller deployment. With Resource Operator you can define projects, job templates, and inventories through the use of YAML files. These YAML files are then used by automation controller to create these resources. You can create the YAML through the Form view, which prompts you for keys and values for your YAML code. Alternatively, to work with YAML directly, you can select YAML view.
There are currently two custom resources provided by the Resource Operator:
- AnsibleJob: launches a job in the automation controller instance specified in the Kubernetes secret (automation controller host URL, token).
- JobTemplate: creates a job template in the automation controller instance specified.
11.2. Using Resource Operator
The Resource Operator itself does not do anything until the user creates an object. As soon as the user creates an AnsibleJob or JobTemplate resource, the Resource Operator starts processing that object.
Prerequisites
- Install the Kubernetes-based cluster of your choice.
- Deploy automation controller using the automation-controller-operator.

After installing the automation-controller-resource-operator in your cluster, you must create a Kubernetes (k8s) secret with the connection information for your automation controller instance. Then you can use Resource Operator to create a k8s resource to manage your automation controller instance.
11.3. Connecting Resource Operator to automation controller
To connect Resource Operator with automation controller you need to create a k8s secret with the connection information for your automation controller instance.
Procedure
To create an OAuth2 token for your user in the automation controller UI:
- In the navigation panel, select Access → Users.
- Select the username you want to create a token for.
- Click Tokens, then click Add.
- You can leave Applications empty. Add a description and select Read or Write for the Scope.
Alternatively, you can create an OAuth2 token at the command line by using the create_oauth2_token manage command:

$ controller-manage create_oauth2_token --user example_user
New OAuth2 token for example_user: j89ia8OO79te6IAZ97L7E8bMgXCON2
Make sure you provide a valid user when creating tokens. Otherwise, an error message displays stating that you tried to issue the command without specifying a user, or that you specified a username that does not exist.
11.4. Creating an automation controller connection secret for Resource Operator
To make your connection information available to the Resource Operator, create a k8s secret with the token and host value.
Procedure
The following is an example of the YAML for the connection secret. Save the following example to a file, for example, automation-controller-connection-secret.yml:

apiVersion: v1
kind: Secret
metadata:
  name: controller-access
type: Opaque
stringData:
  token: <generated-token>
  host: https://my-controller-host.example.com/
- Edit the file with your host and token value.
- Apply it to your cluster by running the kubectl create command:

kubectl create -f automation-controller-connection-secret.yml
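To sanity-check the secret after creating it, you can decode one of its fields; the field names are as in the example above:

kubectl get secret controller-access -o jsonpath='{.data.host}' | base64 --decode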
11.5. Creating an AnsibleJob
Launch an automation job on automation controller by creating an AnsibleJob resource.
Procedure
Specify the connection secret and job template you want to launch:

apiVersion: tower.ansible.com/v1alpha1
kind: AnsibleJob
metadata:
  generateName: demo-job-1 # generate a unique suffix per 'kubectl create'
spec:
  connection_secret: controller-access
  job_template_name: Demo Job Template
Configure features such as inventory, extra variables, and time to live for the job:

spec:
  connection_secret: controller-access
  job_template_name: Demo Job Template
  inventory: Demo Inventory # Inventory prompt on launch needs to be enabled
  runner_image: quay.io/ansible/controller-resource-runner
  runner_version: latest
  job_ttl: 100
  extra_vars: # Extra variables prompt on launch needs to be enabled
    test_var: test
  job_tags: "provision,install,configuration" # Specify tags to run
  skip_tags: "configuration,restart" # Skip tasks with a given tag
Note: You must enable prompt on launch for inventories and extra variables if you are configuring those. To enable Prompt on launch in the automation controller UI: from the Resources → Templates page, select your template and select the Prompt on launch checkbox next to the Inventory and Variables sections.
Launch a workflow job template with an AnsibleJob object by specifying the workflow_template_name instead of job_template_name:

apiVersion: tower.ansible.com/v1alpha1
kind: AnsibleJob
metadata:
  generateName: demo-job-1 # generate a unique suffix per 'kubectl create'
spec:
  connection_secret: controller-access
  workflow_template_name: Demo Workflow Template
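To launch the job, create the resource and then watch its status; this assumes the manifest above is saved as ansiblejob.yml:

kubectl create -f ansiblejob.yml
kubectl get ansiblejobs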
11.6. Creating a JobTemplate
Create a job template on automation controller by creating a JobTemplate resource:

apiVersion: tower.ansible.com/v1alpha1
kind: JobTemplate
metadata:
  name: jobtemplate-4
spec:
  connection_secret: controller-access
  job_template_name: ExampleJobTemplate4
  job_template_project: Demo Project
  job_template_playbook: hello_world.yml
  job_template_inventory: Demo Inventory
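As with AnsibleJob, apply the manifest with kubectl and then list the resources to confirm creation; the file name here is an assumption:

kubectl apply -f jobtemplate.yml
kubectl get jobtemplates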
apiVersion: tower.ansible.com/v1alpha1 kind: JobTemplate metadata: name: jobtemplate-4 spec: connection_secret: controller-access job_template_name: ExampleJobTemplate4 job_template_project: Demo Project job_template_playbook: hello_world.yml job_template_inventory: Demo Inventory