Chapter 3. Configuring Red Hat Ansible Automation Platform components on Red Hat Ansible Automation Platform Operator
After you have installed Ansible Automation Platform Operator and set up your Ansible Automation Platform components, you can configure them to suit your environment.
3.1. Configuring platform gateway on Red Hat OpenShift Container Platform web console
You can use these instructions to further configure the platform gateway operator on Red Hat OpenShift Container Platform, specify custom resources, and deploy Ansible Automation Platform with an external database.
3.1.1. Configuring an external database for platform gateway on Red Hat Ansible Automation Platform Operator
There are two scenarios for deploying Ansible Automation Platform with an external database:
Scenario | Action required
---|---
Fresh install | You must specify a single external database instance for the platform to use for platform gateway, automation controller, automation hub, and Event-Driven Ansible. See the aap-configuring-external-db-all-default-components.yml example in the 14.1. Custom resources section for help with this. If using Red Hat Ansible Lightspeed, use the aap-configuring-external-db-with-lightspeed-enabled.yml example.
Existing external database in 2.4 | Your existing external database remains the same after upgrading, but you must specify the external database secret when creating your new Ansible Automation Platform custom resource.
To deploy Ansible Automation Platform with an external database, you must first create a Kubernetes secret with credentials for connecting to the database.
By default, the Ansible Automation Platform Operator automatically creates and configures a managed PostgreSQL pod in the same namespace as your Ansible Automation Platform deployment. You can deploy Ansible Automation Platform with an external database instead of the managed PostgreSQL pod that the Ansible Automation Platform Operator automatically creates.
Using an external database lets you share and reuse resources and manually manage backups, upgrades, and performance optimizations.
The same external database (PostgreSQL instance) can be used for automation hub, automation controller, and platform gateway, as long as the database names are different. In other words, you can have multiple databases with different names inside a single PostgreSQL instance.
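For example, the following is a minimal sketch of creating separate databases on a single external PostgreSQL instance; the user name, password, and database names shown are hypothetical:

-- Run as a superuser on the external PostgreSQL instance.
-- All names and the password below are hypothetical examples.
CREATE USER aap WITH PASSWORD 'changeme';
CREATE DATABASE gateway_db OWNER aap;
CREATE DATABASE controller_db OWNER aap;
CREATE DATABASE hub_db OWNER aap;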
The following section outlines the steps to configure an external database for your platform gateway on the Ansible Automation Platform Operator.
Prerequisite
The external database must be a PostgreSQL database that is the version supported by the current release of Ansible Automation Platform.
Ansible Automation Platform 2.5 supports PostgreSQL 15.
Procedure
The external PostgreSQL instance credentials and connection information must be stored in a secret, which is then set on the platform gateway spec.
Create a postgres_configuration_secret YAML file, following the template below:

apiVersion: v1
kind: Secret
metadata:
  name: external-postgres-configuration
  namespace: <target_namespace> 1
stringData:
  host: "<external_ip_or_url_resolvable_by_the_cluster>" 2
  port: "<external_port>" 3
  database: "<desired_database_name>"
  username: "<username_to_connect_as>"
  password: "<password_to_connect_with>" 4
  type: "unmanaged"
type: Opaque
1 Namespace to create the secret in. This should be the same namespace you want to deploy to.
2 The resolvable hostname for your database node.
3 External port defaults to 5432.
4 The value for the password variable must not contain single or double quotes (', ") or backslashes (\) to avoid issues during deployment, backup, or restoration.
Apply external-postgres-configuration-secret.yml to your cluster using the oc create command:

$ oc create -f external-postgres-configuration-secret.yml
Note: The following example is for a platform gateway deployment. To configure an external database for all components, use the aap-configuring-external-db-all-default-components.yml example in the 14.1. Custom resources section.
When creating your AnsibleAutomationPlatform custom resource object, specify the secret on your spec, following the example below:

apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: example-aap
  namespace: aap
spec:
  database:
    database_secret: external-postgres-configuration
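A minimal sketch of applying and checking the custom resource, assuming the example above is saved in a hypothetical file named aap-cr.yml:

$ oc apply -f aap-cr.yml
$ oc get ansibleautomationplatform example-aap -n aap -o yaml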
3.1.2. Troubleshooting an external database with an unexpected DateStyle set
When upgrading the Ansible Automation Platform Operator you may encounter an error like the following:
NotImplementedError: can't parse timestamptz with DateStyle 'Redwood, SHOW_TIME': '18-MAY-23 20:33:55.765755 +00:00'
Errors like this occur when you have an external database with an unexpected DateStyle set. You can refer to the following steps to resolve this issue.
Procedure
Edit the /var/lib/pgsql/data/postgresql.conf file on the database server:

# vi /var/lib/pgsql/data/postgresql.conf
Find and comment out the line:
#datestyle = 'Redwood, SHOW_TIME'
Add the following setting immediately below the newly-commented line:
datestyle = 'iso, mdy'
Save and close the postgresql.conf file.

Reload the database configuration:
# systemctl reload postgresql
Note: Running this command does not disrupt database operations.
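You can confirm that the new setting took effect with a check along the following lines; the database name is a placeholder, and your output may differ slightly:

$ psql -d <database_name> -c "SHOW datestyle;"
 DateStyle
-----------
 ISO, MDY
(1 row)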
3.1.3. Enabling HTTPS redirect for single sign-on (SSO) for platform gateway on OpenShift Container Platform
HTTPS redirect for SAML allows you to log in once and access all of the platform gateway without needing to reauthenticate.
Prerequisites
- You have successfully configured SAML in the gateway from the Ansible Automation Platform Operator. Refer to Configuring SAML authentication for help with this.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Go to Operators → Installed Operators.
- Select your Ansible Automation Platform Operator deployment.
- Select All Instances and go to your AnsibleAutomationPlatform instance.
- Click the ⋮ icon and then select Edit AnsibleAutomationPlatform.
In the YAML view, paste the following YAML code under the spec: section:

spec:
  extra_settings:
    - setting: REDIRECT_IS_HTTPS
      value: '"True"'
- Click Save.
Verification
After you have added the REDIRECT_IS_HTTPS setting, wait for the pod to redeploy automatically. You can verify that the setting has made it into the pod by running:
oc exec -it <gateway-pod-name> -- grep REDIRECT /etc/ansible-automation-platform/gateway/settings.py
3.1.4. Configuring your CSRF settings for your platform gateway Operator ingress
The Red Hat Ansible Automation Platform Operator creates OpenShift routes and configures your cross-site request forgery (CSRF) settings automatically. When using external ingress, you must configure your CSRF settings on the ingress to allow for cross-site requests. You can configure your platform gateway operator ingress under Advanced configuration.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to Operators → Installed Operators.
- Select your Ansible Automation Platform Operator deployment.
- Select the Ansible Automation Platform tab.
- For new instances, click Create instance.
- For existing instances, you can edit the YAML view by clicking the ⋮ icon and then Edit AnsibleAutomationPlatform.
- Click Advanced configuration.
- Under Ingress annotations, enter any annotations to add to the ingress.
- Under Ingress TLS secret, click the drop-down list and select a secret from the list.
In the YAML view, paste in the following code:

spec:
  extra_settings:
    - setting: CSRF_TRUSTED_ORIGINS
      value:
        - https://my-aap-domain.com
- After you have configured your platform gateway, click Create at the bottom of the form view (or Save if you are editing an existing instance).
Red Hat OpenShift Container Platform creates the pods. This may take a few minutes. You can view the progress by navigating to Workloads → Pods and locating the newly created instance.
Verification
Verify that the following operator pods provided by the Red Hat Ansible Automation Platform Operator installation are running after you deploy platform gateway:
Pod type | Description
---|---
Operator manager controller pods | The operator manager controllers for each of the four operators.
Automation controller pods | After deploying automation controller, you can see the addition of these pods.
Automation hub pods | After deploying automation hub, you can see the addition of these pods.
Event-Driven Ansible (EDA) pods | After deploying EDA, you can see the addition of these pods.
Platform gateway pods | After deploying platform gateway, you can see the addition of these pods.
A missing pod can indicate the need for a pull secret. Pull secrets are required for protected or private image registries. See Using image pull secrets for more information. You can diagnose this issue further by running oc describe pod <pod-name> to see if there is an ImagePullBackOff error on that pod.
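As a further check, you can inspect the cluster events recorded for the failing pod. This is a sketch using placeholder names:

$ oc get events -n <namespace> --field-selector involvedObject.name=<pod-name>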
3.1.5. Frequently asked questions on platform gateway
- If I delete my Ansible Automation Platform deployment will I still have access to automation controller?
- No, automation controller, automation hub, and Event-Driven Ansible are nested within the deployment and are also deleted.
- Something went wrong with my deployment, but I’m not sure what. How can I find out?
- You can follow along in the command line while the operator is reconciling; this can be helpful for debugging. Alternatively, you can click into the deployment instance to see the status conditions being updated as the deployment progresses.
- Is it still possible to view individual component logs?
- When troubleshooting, you should examine the Ansible Automation Platform instance for the main logs, and then each individual component (EDA, AutomationHub, AutomationController) for more specific information.
- Where can I view the condition of an instance?
-
To display status conditions, click into the instance and look under the Details or Events tab. Alternatively, to display the status conditions, you can run the get command:

oc get automationcontroller <instance-name> -o jsonpath="{.status.conditions}" | jq
- Can I track my migration in real time?
-
To help track the status of the migration, or to understand why a migration might have failed, you can look at the migration logs as they are running. Use the logs command:
oc logs fresh-install-controller-migration-4.6.0-jwfm6 -f
- I have configured my SAML, but authentication fails with this error: "Unable to complete social auth login". What can I do?
- You must update your Ansible Automation Platform instance to include the REDIRECT_IS_HTTPS extra setting. See Enabling HTTPS redirect for single sign-on (SSO) for platform gateway on OpenShift Container Platform for help with this.
3.2. Configuring automation controller on Red Hat OpenShift Container Platform web console
You can use these instructions to configure the automation controller operator on Red Hat OpenShift Container Platform, specify custom resources, and deploy Ansible Automation Platform with an external database.
Automation controller configuration can be done through the automation controller extra_settings or directly in the user interface after deployment. However, it is important to note that configurations made in extra_settings take precedence over settings made in the user interface.
When an instance of automation controller is removed, the associated PVCs are not automatically deleted. This can cause issues during migration if the new deployment has the same name as the previous one. Therefore, it is recommended that you manually remove old PVCs before deploying a new automation controller instance in the same namespace. See Finding and deleting PVCs for more information.
3.2.1. Prerequisites
- You have installed the Red Hat Ansible Automation Platform catalog in Operator Hub.
- For automation controller, a default StorageClass must be configured on the cluster for the operator to dynamically create needed PVCs. This is not necessary if an external PostgreSQL database is configured.
- For automation hub, a StorageClass that supports ReadWriteMany must be available on the cluster to dynamically create the PVCs needed for the content, Redis, and API pods. If it is not the default StorageClass on the cluster, you can specify it when creating your AutomationHub object.
3.2.1.1. Configuring your controller image pull policy
Use this procedure to configure the image pull policy on your automation controller.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Go to Operators → Installed Operators.
- Select your Ansible Automation Platform Operator deployment.
- Select the Automation Controller tab.
- For new instances, click Create instance.
- For existing instances, you can edit the YAML view by clicking the ⋮ icon and then Edit AutomationController.
Under Image Pull Policy, click the radio button to select one of the following:
- Always
- Never
- IfNotPresent
To display the option under Image Pull Secrets, click the arrow.
- Click + beside Add Image Pull Secret and enter a value.
To display fields under the Web container resource requirements drop-down list, click the arrow.
- Under Limits and Requests, enter values for CPU cores, Memory, and Storage.
To display fields under the Task container resource requirements drop-down list, click the arrow.
- Under Limits and Requests, enter values for CPU cores, Memory, and Storage.
To display fields under the EE Control Plane container resource requirements drop-down list, click the arrow.
- Under Limits and Requests, enter values for CPU cores, Memory, and Storage.
To display fields under the PostgreSQL init container resource requirements (when using a managed service) drop-down list, click the arrow.
- Under Limits and Requests, enter values for CPU cores, Memory, and Storage.
To display fields under the Redis container resource requirements drop-down list, click the arrow.
- Under Limits and Requests, enter values for CPU cores, Memory, and Storage.
To display fields under the PostgreSQL container resource requirements (when using a managed instance) drop-down list, click the arrow.
- Under Limits and Requests, enter values for CPU cores, Memory, and Storage.
To display the PostgreSQL container storage requirements (when using a managed instance) drop-down list, click the arrow.
- Under Limits and Requests, enter values for CPU cores, Memory, and Storage.
- Under Replicas, enter the number of instance replicas.
- Under Remove used secrets on instance removal, select true or false. The default is false.
- Under Preload instance with data upon creation, select true or false. The default is true.
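The same form settings can also be declared directly on the custom resource. The following is a minimal sketch; the field names are assumptions based on the AutomationController CRD and should be verified against your operator version:

spec:
  image_pull_policy: IfNotPresent
  image_pull_secrets:
    - my-pull-secret  # hypothetical secret name
  replicas: 1
  garbage_collect_secrets: false  # Remove used secrets on instance removal
  web_resource_requirements:
    requests:
      cpu: 200m
      memory: 1Gi
    limits:
      cpu: "1"
      memory: 2Gi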
3.2.1.2. Configuring your controller LDAP security
You can configure your LDAP SSL configuration for automation controller through any of the following options:
- The automation controller user interface.
- The platform gateway user interface. See the Configuring LDAP authentication section of the Access management and authentication guide for additional steps.
- The following procedure steps.
Procedure
Create a secret in your Ansible Automation Platform namespace for the bundle-ca.crt file (the filename must be bundle-ca.crt):

$ oc create secret -n aap-namespace generic bundle-ca-secret --from-file=bundle-ca.crt
Add the bundle_cacert_secret to the Ansible Automation Platform custom resource:

...
spec:
  bundle_cacert_secret: bundle-ca-secret
...
Verification
You can verify the expected certificate by running:
oc exec -it deployment.apps/aap-gateway -- openssl x509 -in /etc/pki/tls/certs/bundle-ca.crt -noout -text
3.2.1.3. Configuring your automation controller operator route options
The Red Hat Ansible Automation Platform operator installation form allows you to further configure your automation controller operator route options under Advanced configuration.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to Operators → Installed Operators.
- Select your Ansible Automation Platform Operator deployment.
- Select the Automation Controller tab.
- For new instances, click Create instance.
- For existing instances, you can edit the YAML view by clicking the ⋮ icon and then Edit AutomationController.
- Click Advanced configuration.
- Under Ingress type, click the drop-down menu and select Route.
- Under Route DNS host, enter a common host name that the route answers to.
- Under Route TLS termination mechanism, click the drop-down menu and select Edge or Passthrough. For most instances Edge should be selected.
- Under Route TLS credential secret, click the drop-down menu and select a secret from the list.
- Under Enable persistence for /var/lib/projects directory, select either true or false by moving the slider.
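For reference, the equivalent settings can be expressed on the AutomationController spec. This is a sketch under the assumption that these CRD field names match your operator version; the host name and secret name are hypothetical:

spec:
  ingress_type: Route
  route_host: controller.example.com
  route_tls_termination_mechanism: Edge
  route_tls_secret: my-route-tls-secret
  projects_persistence: true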
3.2.1.4. Configuring the ingress type for your automation controller operator
The Ansible Automation Platform Operator installation form allows you to further configure your automation controller operator ingress under Advanced configuration.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to Operators → Installed Operators.
- Select your Ansible Automation Platform Operator deployment.
- Select the Automation Controller tab.
- For new instances, click Create instance.
- For existing instances, you can edit the YAML view by clicking the ⋮ icon and then Edit AutomationController.
- Click Advanced configuration.
- Under Ingress type, click the drop-down menu and select Ingress.
- Under Ingress annotations, enter any annotations to add to the ingress.
- Under Ingress TLS secret, click the drop-down menu and select a secret from the list.
After you have configured your automation controller operator, click Create at the bottom of the form view. Red Hat OpenShift Container Platform creates the pods. This may take a few minutes.

You can view the progress by navigating to Workloads → Pods and locating the newly created instance.
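For reference, a minimal sketch of the corresponding AutomationController spec fields, assuming these CRD field names match your operator version; the annotation and secret name are hypothetical:

spec:
  ingress_type: ingress
  ingress_annotations: |
    kubernetes.io/ingress.class: nginx
  ingress_tls_secret: my-ingress-tls-secret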
Verification
Verify that the following operator pods provided by the Ansible Automation Platform Operator installation are running after you deploy automation controller:
Pod type | Description
---|---
Operator manager controllers | The operator manager controllers for each of the three operators.
Automation controller | After deploying automation controller, you can see the addition of these pods.
Automation hub | After deploying automation hub, you can see the addition of these pods.
Event-Driven Ansible (EDA) | After deploying EDA, you can see the addition of these pods.
A missing pod can indicate the need for a pull secret. Pull secrets are required for protected or private image registries. See Using image pull secrets for more information. You can diagnose this issue further by running oc describe pod <pod-name> to see if there is an ImagePullBackOff error on that pod.
3.2.2. Configuring an external database for automation controller on Red Hat Ansible Automation Platform Operator
If you prefer to deploy Ansible Automation Platform with an external database, you can do so by configuring a secret with instance credentials and connection information, and then applying it to your cluster using the oc create command.
By default, the Ansible Automation Platform Operator automatically creates and configures a managed PostgreSQL pod in the same namespace as your Ansible Automation Platform deployment. You can deploy Ansible Automation Platform with an external database instead of the managed PostgreSQL pod that the Ansible Automation Platform Operator automatically creates.
Using an external database lets you share and reuse resources and manually manage backups, upgrades, and performance optimizations.
The same external database (PostgreSQL instance) can be used for automation hub, automation controller, and platform gateway, as long as the database names are different. In other words, you can have multiple databases with different names inside a single PostgreSQL instance.
The following section outlines the steps to configure an external database for your automation controller on the Ansible Automation Platform Operator.
Prerequisite
The external database must be a PostgreSQL database that is the version supported by the current release of Ansible Automation Platform.
Ansible Automation Platform 2.5 supports PostgreSQL 15.
Procedure
The external PostgreSQL instance credentials and connection information must be stored in a secret, which is then set on the automation controller spec.
Create a postgres_configuration_secret YAML file, following the template below:

apiVersion: v1
kind: Secret
metadata:
  name: external-postgres-configuration
  namespace: <target_namespace> 1
stringData:
  host: "<external_ip_or_url_resolvable_by_the_cluster>" 2
  port: "<external_port>" 3
  database: "<desired_database_name>"
  username: "<username_to_connect_as>"
  password: "<password_to_connect_with>" 4
  sslmode: "prefer" 5
  type: "unmanaged"
type: Opaque
1 Namespace to create the secret in. This should be the same namespace you want to deploy to.
2 The resolvable hostname for your database node.
3 External port defaults to 5432.
4 The value for the password variable must not contain single or double quotes (', ") or backslashes (\) to avoid issues during deployment, backup, or restoration.
5 The sslmode variable is valid for external databases only. The allowed values are: prefer, disable, allow, require, verify-ca, and verify-full.
Apply external-postgres-configuration-secret.yml to your cluster using the oc create command:

$ oc create -f external-postgres-configuration-secret.yml
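Optionally, before creating the custom resource, you can test connectivity to the external database from inside the cluster. This is a sketch, assuming a hypothetical temporary pod name and access to a PostgreSQL client image:

$ oc run pg-connect-test --rm -it --restart=Never \
    --image=registry.redhat.io/rhel9/postgresql-15 -- \
    psql "host=<external_host> port=5432 dbname=<database> user=<username>" -c "SELECT version();"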
When creating your AutomationController custom resource object, specify the secret on your spec, following the example below:

apiVersion: automationcontroller.ansible.com/v1beta1
kind: AutomationController
metadata:
  name: controller-dev
spec:
  postgres_configuration_secret: external-postgres-configuration
3.2.3. Finding and deleting PVCs
A persistent volume claim (PVC) is a storage volume used to store data that the automation hub and automation controller applications use. These PVCs are independent from the applications and remain even when the application is deleted. If you are confident that you no longer need a PVC, or if you have backed it up elsewhere, you can manually delete it.
Procedure
List the existing PVCs in your deployment namespace:
oc get pvc -n <namespace>
- Identify the PVC associated with your previous deployment by comparing the old deployment name and the PVC name.
Delete the old PVC:
oc delete pvc -n <namespace> <pvc-name>
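If you have many PVCs, the following is a sketch of narrowing the list to those from an old deployment, assuming the old instance name appears in the PVC names:

$ oc get pvc -n <namespace> -o name | grep <old-deployment-name>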
3.2.4. Additional resources
- For more information on running operators on OpenShift Container Platform, navigate to the OpenShift Container Platform product documentation and click the Operators - Working with Operators in OpenShift Container Platform guide.
3.3. Configuring automation hub on Red Hat OpenShift Container Platform web console
You can use these instructions to configure the automation hub operator on Red Hat OpenShift Container Platform, specify custom resources, and deploy Ansible Automation Platform with an external database.
Automation hub configuration can be done through the automation hub pulp_settings or directly in the user interface after deployment. However, it is important to note that configurations made in pulp_settings take precedence over settings made in the user interface. Hub settings should always be set as lowercase on the Hub custom resource specification.
When an instance of automation hub is removed, the PVCs are not automatically deleted. This can cause issues during migration if the new deployment has the same name as the previous one. Therefore, it is recommended that you manually remove old PVCs before deploying a new automation hub instance in the same namespace. See Finding and deleting PVCs for more information.
3.3.1. Prerequisites
- You have installed the Ansible Automation Platform Operator in Operator Hub.
3.3.1.1. Storage options for Ansible Automation Platform Operator installation on Red Hat OpenShift Container Platform
Automation hub requires ReadWriteMany file-based storage, Azure Blob storage, or Amazon S3-compliant storage for operation so that multiple pods can access shared content, such as collections.
The process for configuring object storage on the AutomationHub CR is similar for Amazon S3 and Azure Blob Storage.
If you are using file-based storage and your installation scenario includes automation hub, ensure that the storage option for Ansible Automation Platform Operator is set to ReadWriteMany. ReadWriteMany is the default storage option.
In addition, OpenShift Data Foundation provides a ReadWriteMany or S3-compliant implementation. Also, you can set up NFS storage configuration to support ReadWriteMany. This, however, introduces the NFS server as a potential single point of failure.
Additional resources
- Persistent storage using NFS in the OpenShift Container Platform Storage guide
- IBM’s How do I create a storage class for NFS dynamic storage provisioning in an OpenShift environment?
3.3.1.1.1. Provisioning OCP storage with ReadWriteMany access mode
To ensure successful installation of Ansible Automation Platform Operator, you must provision your storage type for automation hub initially to ReadWriteMany access mode.
Procedure
- Go to Storage → PersistentVolumeClaims.
- Click Create PersistentVolumeClaim.
- In the first step, update the accessModes from the default ReadWriteOnce to ReadWriteMany.
- See Provisioning for a detailed overview of how to update the access mode.
- Complete the additional steps in this section to create the persistent volume claim (PVC).
3.3.1.1.2. Configuring object storage on Amazon S3
Red Hat supports Amazon Simple Storage Service (S3) for automation hub. You can configure it when deploying the AutomationHub custom resource (CR), or you can configure it for an existing instance.
Prerequisites
- Create an Amazon S3 bucket to store the objects.
- Note the name of the S3 bucket.
Procedure
Create a Kubernetes secret containing the AWS credentials and connection details, and the name of your Amazon S3 bucket. The following example creates a secret called test-s3:

$ oc -n $HUB_NAMESPACE apply -f- <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: 'test-s3'
stringData:
  s3-access-key-id: $S3_ACCESS_KEY_ID
  s3-secret-access-key: $S3_SECRET_ACCESS_KEY
  s3-bucket-name: $S3_BUCKET_NAME
  s3-region: $S3_REGION
EOF
Add the secret to the automation hub custom resource (CR) spec:

spec:
  object_storage_s3_secret: test-s3
If you are applying this secret to an existing instance, restart the API pods for the change to take effect. <hub-name> is the name of your hub instance.
$ oc -n $HUB_NAMESPACE delete pod -l app.kubernetes.io/name=<hub-name>-api
3.3.1.1.3. Configuring object storage on Azure Blob
Red Hat supports Azure Blob Storage for automation hub. You can configure it when deploying the AutomationHub custom resource (CR), or you can configure it for an existing instance.
Prerequisites
- Create an Azure Storage blob container to store the objects.
- Note the name of the blob container.
Procedure
Create a Kubernetes secret containing the credentials and connection details for your Azure account, and the name of your Azure Storage blob container. The following example creates a secret called test-azure:

$ oc -n $HUB_NAMESPACE apply -f- <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: 'test-azure'
stringData:
  azure-account-name: $AZURE_ACCOUNT_NAME
  azure-account-key: $AZURE_ACCOUNT_KEY
  azure-container: $AZURE_CONTAINER
  azure-container-path: $AZURE_CONTAINER_PATH
  azure-connection-string: $AZURE_CONNECTION_STRING
EOF
Add the secret to the automation hub custom resource (CR) spec:

spec:
  object_storage_azure_secret: test-azure
If you are applying this secret to an existing instance, restart the API pods for the change to take effect. <hub-name> is the name of your hub instance.
$ oc -n $HUB_NAMESPACE delete pod -l app.kubernetes.io/name=<hub-name>-api
3.3.1.2. Configuring your automation hub operator route options
The Red Hat Ansible Automation Platform operator installation form allows you to further configure your automation hub operator route options under Advanced configuration.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to Operators → Installed Operators.
- Select your Ansible Automation Platform Operator deployment.
- Select the Automation Hub tab.
- For new instances, click Create instance.
- For existing instances, you can edit the YAML view by clicking the ⋮ icon and then Edit AutomationHub.
- Click Advanced configuration.
- Under Ingress type, click the drop-down menu and select Route.
- Under Route DNS host, enter a common host name that the route answers to.
- Under Route TLS termination mechanism, click the drop-down menu and select Edge or Passthrough.
- Under Route TLS credential secret, click the drop-down menu and select a secret from the list.
3.3.1.3. Configuring the ingress type for your automation hub operator
The Ansible Automation Platform Operator installation form allows you to further configure your automation hub operator ingress under Advanced configuration.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to Operators → Installed Operators.
- Select your Ansible Automation Platform Operator deployment.
- Select the Automation Hub tab.
- For new instances, click Create instance.
- For existing instances, you can edit the YAML view by clicking the ⋮ icon and then Edit AutomationHub.
- Click Advanced configuration.
- Under Ingress type, click the drop-down menu and select Ingress.
- Under Ingress annotations, enter any annotations to add to the ingress.
- Under Ingress TLS secret, click the drop-down menu and select a secret from the list.
After you have configured your automation hub operator, click Create at the bottom of the form view. Red Hat OpenShift Container Platform creates the pods. This may take a few minutes.

You can view the progress by navigating to Workloads → Pods and locating the newly created instance.
Verification
Verify that the following operator pods provided by the Ansible Automation Platform Operator installation are running after you deploy automation hub:
Pod type | Description
---|---
Operator manager controllers | The operator manager controllers for each of the three operators.
Automation controller | After deploying automation controller, you can see the addition of these pods.
Automation hub | After deploying automation hub, you can see the addition of these pods.
A missing pod can indicate the need for a pull secret. Pull secrets are required for protected or private image registries. See Using image pull secrets for more information. You can diagnose this issue further by running oc describe pod <pod-name> to see if there is an ImagePullBackOff error on that pod.
3.3.2. Finding the automation hub route
You can access the automation hub through the platform gateway or through the following procedure.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to Networking → Routes.
- Under Location, click on the URL for your automation hub instance.
The automation hub user interface launches where you can sign in with the administrator credentials specified during the operator configuration process.
If you did not specify an administrator password during configuration, one was automatically created for you. To locate this password, go to your project, select Workloads → Secrets, and open the <hub-name>-admin-password secret.
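You can also read the generated password directly from the command line. This is a sketch, assuming the operator stored the password in a secret named <hub-name>-admin-password under a password key:

$ oc get secret <hub-name>-admin-password -n <namespace> \
    -o jsonpath='{.data.password}' | base64 --decode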
3.3.3. Configuring an external database for automation hub on Red Hat Ansible Automation Platform Operator
If you prefer to deploy Ansible Automation Platform with an external database, you can do so by configuring a secret with instance credentials and connection information, and then applying it to your cluster using the oc create command.
By default, the Ansible Automation Platform Operator automatically creates and configures a managed PostgreSQL pod in the same namespace as your Ansible Automation Platform deployment.
You can choose to use an external database instead if you prefer to use a dedicated node to ensure dedicated resources or to manually manage backups, upgrades, or performance tweaks.
The same external database (PostgreSQL instance) can be used for automation hub, automation controller, and platform gateway, as long as the database names are different. In other words, you can have multiple databases with different names inside a single PostgreSQL instance.
The following section outlines the steps to configure an external database for your automation hub on the Ansible Automation Platform Operator.
Prerequisite
The external database must be a PostgreSQL database that is the version supported by the current release of Ansible Automation Platform.
Ansible Automation Platform 2.5 supports PostgreSQL 15.
Procedure
The external PostgreSQL instance credentials and connection information must be stored in a secret, which is then set on the automation hub spec.
Create a postgres_configuration_secret YAML file, following the template below:

apiVersion: v1
kind: Secret
metadata:
  name: external-postgres-configuration
  namespace: <target_namespace> 1
stringData:
  host: "<external_ip_or_url_resolvable_by_the_cluster>" 2
  port: "<external_port>" 3
  database: "<desired_database_name>"
  username: "<username_to_connect_as>"
  password: "<password_to_connect_with>" 4
  sslmode: "prefer" 5
  type: "unmanaged"
type: Opaque
1 Namespace to create the secret in. This should be the same namespace you want to deploy to.
2 The resolvable hostname for your database node.
3 External port defaults to 5432.
4 The value for the password variable must not contain single or double quotes (', ") or backslashes (\) to avoid issues during deployment, backup, or restoration.
5 The sslmode variable is valid for external databases only. The allowed values are: prefer, disable, allow, require, verify-ca, and verify-full.
Apply external-postgres-configuration-secret.yml to your cluster using the oc create command:

$ oc create -f external-postgres-configuration-secret.yml
When creating your AutomationHub custom resource object, specify the secret on your spec, following the example below:

apiVersion: automationhub.ansible.com/v1beta1
kind: AutomationHub
metadata:
  name: hub-dev
spec:
  postgres_configuration_secret: external-postgres-configuration
3.3.3.1. Enabling the hstore extension for the automation hub PostgreSQL database
Added in Ansible Automation Platform 2.5, the database migration script uses hstore fields to store information; therefore, the hstore extension must be enabled in the automation hub PostgreSQL database.
This process is automatic when using the Ansible Automation Platform installer and a managed PostgreSQL server.
If the PostgreSQL database is external, you must enable the hstore extension in the automation hub PostgreSQL database manually before installation.
If the hstore extension is not enabled before installation, the database migration fails.
Procedure
Check if the extension is available on the PostgreSQL server (automation hub database).
$ psql -d <automation hub database> -c "SELECT * FROM pg_available_extensions WHERE name='hstore'"
Where the default value for <automation hub database> is automationhub.
Example output with hstore available:

 name   | default_version | installed_version |                      comment
--------+-----------------+-------------------+---------------------------------------------------
 hstore | 1.7             |                   | data type for storing sets of (key, value) pairs
(1 row)
Example output with hstore not available:

 name | default_version | installed_version | comment
------+-----------------+-------------------+---------
(0 rows)
On a RHEL-based server, the hstore extension is included in the postgresql-contrib RPM package, which is not installed automatically when installing the PostgreSQL server RPM package. To install the RPM package, use the following command:
dnf install postgresql-contrib
Load the hstore PostgreSQL extension into the automation hub database with the following command:

$ psql -d <automation hub database> -c "CREATE EXTENSION hstore;"
In the following output, the installed_version field lists the hstore extension used, indicating that hstore is enabled.

 name   | default_version | installed_version |                      comment
--------+-----------------+-------------------+---------------------------------------------------
 hstore | 1.7             | 1.7               | data type for storing sets of (key, value) pairs
(1 row)
3.3.4. Finding and deleting PVCs
A persistent volume claim (PVC) is a storage volume used to store data that the automation hub and automation controller applications use. These PVCs are independent from the applications and remain even when the application is deleted. If you are confident that you no longer need a PVC, or if you have backed it up elsewhere, you can manually delete it.
Procedure
List the existing PVCs in your deployment namespace:
oc get pvc -n <namespace>
- Identify the PVC associated with your previous deployment by comparing the old deployment name and the PVC name.
Delete the old PVC:
oc delete pvc -n <namespace> <pvc-name>
3.3.5. Additional configurations
A collection download count can help you understand collection usage. To add a collection download count to automation hub, set the following configuration:
spec:
  pulp_settings:
    ansible_collect_download_count: true
When ansible_collect_download_count is enabled, automation hub displays a download count next to each collection.
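Rather than editing the YAML by hand, you can also patch the setting onto an existing instance. This is a sketch using placeholder names:

$ oc patch automationhub <hub-name> -n <namespace> --type merge \
    -p '{"spec":{"pulp_settings":{"ansible_collect_download_count": true}}}'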
3.3.6. Adding allowed registries to the automation controller image configuration
Before you can deploy a container image in automation hub, you must add the registry to the allowedRegistries list in the automation controller image configuration. To do this, copy and paste the following code into your automation controller image YAML.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to Home → Search.
- Select the Resources drop-down list and type "Image".
- Select Image (config,openshift.io/v1).
- Click cluster under the Name heading.
- Select the YAML tab.
Paste in the following under the spec value:

spec:
  registrySources:
    allowedRegistries:
      - quay.io
      - registry.redhat.io
      - image-registry.openshift-image-registry.svc:5000
      - <OCP route for your automation hub>
- Click Save.
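To confirm the change was applied, you can read back the allowed registries from the cluster image configuration; for example:

$ oc get image.config.openshift.io/cluster -o jsonpath='{.spec.registrySources.allowedRegistries}'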
3.3.7. Additional resources
- For more information on running operators on OpenShift Container Platform, navigate to the OpenShift Container Platform product documentation and click the Operators - Working with Operators in OpenShift Container Platform guide.
3.4. Deploying clustered Redis on Red Hat Ansible Automation Platform Operator
When you create an Ansible Automation Platform instance through the Ansible Automation Platform Operator, standalone Redis is assigned by default. To deploy clustered Redis, use the following procedure.
For more information about Redis, refer to Caching and queueing system in the Planning your installation guide.
Prerequisites
- You have installed an Ansible Automation Platform Operator deployment.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to Operators → Installed Operators.
- Select your Ansible Automation Platform Operator deployment.
- Select the Details tab.
On the Ansible Automation Platform tile, click Create instance.
- For existing instances, you can edit the YAML view by clicking the ⋮ icon and then Edit AnsibleAutomationPlatform.
- Change the redis_mode value to "cluster".
- Click Reload, then Save.
- Click to expand Advanced configuration.
- For the Redis Mode list, select Cluster.
- Configure the rest of your instance as necessary, then click .
Your instance deploys with clustered Redis and six Redis replicas by default.
You can modify the default Redis cache PVC volume size for automation hub. For help with this, see Modifying the default Redis cache PVC volume size for automation hub.
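After the instance finishes deploying, one way to confirm the clustered topology is to list the Redis pods in the deployment namespace; a sketch assuming the pod names contain redis:

$ oc get pods -n <namespace> | grep redis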