Chapter 5. Configuring Red Hat Ansible Automation Platform components on Red Hat Ansible Automation Platform Operator
After you have installed Ansible Automation Platform Operator and set up your Ansible Automation Platform components, you can configure them for your desired output.
5.1. Configuring platform gateway on Red Hat OpenShift Container Platform web console
You can use these instructions to further configure the platform gateway operator on Red Hat OpenShift Container Platform, specify custom resources, and deploy Ansible Automation Platform with an external database.
5.1.1. Configuring an external database for platform gateway on Red Hat Ansible Automation Platform Operator
There are two scenarios for deploying Ansible Automation Platform with an external database:
| Scenario | Action required |
|---|---|
| Fresh install | You must specify a single external database instance for the platform to use for the following: platform gateway, automation controller, automation hub, and Event-Driven Ansible. See the aap-configuring-external-db-all-default-components.yml example in the 14.1. Custom resources section for help with this. If using Red Hat Ansible Lightspeed, use the aap-configuring-external-db-with-lightspeed-enabled.yml example. |
| Existing external database in 2.4 | Your existing external database remains the same after upgrading, but you must specify the external database secret on your new AnsibleAutomationPlatform custom resource. |
To deploy Ansible Automation Platform with an external database, you must first create a Kubernetes secret with credentials for connecting to the database.
By default, the Ansible Automation Platform Operator automatically creates and configures a managed PostgreSQL pod in the same namespace as your Ansible Automation Platform deployment. You can deploy Ansible Automation Platform with an external database instead of the managed PostgreSQL pod that the Ansible Automation Platform Operator automatically creates.
Using an external database lets you share and reuse resources and manually manage backups, upgrades, and performance optimizations.
The same external database (PostgreSQL instance) can be used for automation hub, automation controller, and platform gateway as long as the database names are different. In other words, you can have multiple databases with different names inside a single PostgreSQL instance.
The following section outlines the steps to configure an external database for your platform gateway on the Ansible Automation Platform Operator.
Prerequisite
The external database must be a PostgreSQL database of a version supported by the current release of Ansible Automation Platform. The external PostgreSQL instance credentials and connection information must be stored in a secret, which is then set on the platform gateway spec.
Ansible Automation Platform 2.6 supports PostgreSQL 15 for its managed databases and additionally supports PostgreSQL 15, 16, and 17 for external databases.
If you choose to use an externally managed database with version 16 or 17, you must also rely on external backup and restore processes.
Procedure
Create a postgres_configuration_secret YAML file, following the template shown after these notes:
- Namespace to create the secret in. This should be the same namespace you want to deploy to.
- The resolvable hostname for your database node.
- External port defaults to 5432.
- Value for variable password should not contain single or double quotes (', ") or backslashes (\) to avoid any issues during deployment, backup or restoration.
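A minimal sketch of the secret, reconstructed from the notes above; the secret name, the placeholder values, and the sslmode and type fields are assumptions to adapt to your environment. Save it as external-postgres-configuration-secret.yml:

apiVersion: v1
kind: Secret
metadata:
  name: external-postgres-configuration
  namespace: <target_namespace>    # Same namespace as your Ansible Automation Platform deployment
stringData:
  host: "<external_ip_or_url_resolvable_by_the_cluster>"    # Resolvable hostname for your database node
  port: "5432"    # External port defaults to 5432
  database: "<desired_database_name>"
  username: "<username_to_connect_as>"
  password: "<password_to_connect_with>"    # Avoid quotes (', ") and backslashes (\)
  sslmode: "prefer"    # Assumed; valid for external databases only
  type: "unmanaged"    # Assumed; marks the database as externally managed
type: Opaque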
Apply external-postgres-configuration-secret.yml to your cluster using the oc create command:

$ oc create -f external-postgres-configuration-secret.yml

Note: The following example is for a platform gateway deployment. To configure an external database for all components, use the aap-configuring-external-db-all-default-components.yml example in the 14.1. Custom resources section.
When creating your AnsibleAutomationPlatform custom resource object, specify the secret on your spec, following the example below:
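A sketch of the custom resource, assuming the secret above is named external-postgres-configuration and that the AnsibleAutomationPlatform CR accepts the secret under spec.database.database_secret; verify the field names against your operator version:

apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: example-aap
spec:
  database:
    database_secret: external-postgres-configuration    # Secret created in the previous step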
5.1.2. Troubleshooting an external database with an unexpected DateStyle set
When upgrading the Ansible Automation Platform Operator you may encounter an error like the following:

NotImplementedError: can't parse timestamptz with DateStyle 'Redwood, SHOW_TIME': '18-MAY-23 20:33:55.765755 +00:00'
Errors like this occur when you have an external database with an unexpected DateStyle set. You can refer to the following steps to resolve this issue.
Procedure
Edit the /var/lib/pgsql/data/postgresql.conf file on the database server:

# vi /var/lib/pgsql/data/postgresql.conf

Find and comment out the line:
#datestyle = 'Redwood, SHOW_TIME'

Add the following setting immediately below the newly-commented line:
datestyle = 'iso, mdy'

- Save and close the postgresql.conf file.
- Reload the database configuration:
# systemctl reload postgresql

Note: Running this command does not disrupt database operations.
5.1.3. Enabling HTTPS redirect for single sign-on (SSO) for platform gateway on OpenShift Container Platform Copy linkLink copied to clipboard!
HTTPS redirect for SAML allows you to log in once and access all of the platform gateway without needing to reauthenticate.
Prerequisites
- You have successfully configured SAML in the gateway from the Ansible Automation Platform Operator. Refer to Configuring SAML authentication for help with this.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Go to Operators → Installed Operators.
- Select your Ansible Automation Platform Operator deployment.
- Select All Instances and go to your AnsibleAutomationPlatform instance.
- Click the ⋮ icon and then select Edit Ansible Automation Platform.
In the YAML view, paste the following YAML code under the spec: section:

spec:
  extra_settings:
    - setting: REDIRECT_IS_HTTPS
      value: '"True"'

- Click Save.
Verification
After you have added the REDIRECT_IS_HTTPS setting, wait for the pod to redeploy automatically. You can verify this setting makes it into the pod by running:
oc exec -it <gateway-pod-name> -- grep REDIRECT /etc/ansible-automation-platform/gateway/settings.py
5.1.4. Configuring your CSRF settings for your platform gateway Operator ingress Copy linkLink copied to clipboard!
The Red Hat Ansible Automation Platform Operator creates OpenShift Routes and configures your cross-site request forgery (CSRF) settings automatically. When using external ingress, you must configure CSRF on the ingress to allow cross-site requests. You can configure your platform gateway operator ingress under Advanced configuration.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to Operators → Installed Operators.
- Select your Ansible Automation Platform Operator deployment.
- Select the Ansible Automation Platform tab.
For new instances, click Create AnsibleAutomationPlatform.
- For existing instances, you can edit the YAML view by clicking the ⋮ icon and then Edit Ansible Automation Platform.
- Click Advanced configuration.
- Under Ingress annotations, enter any annotations to add to the ingress.
- Under Ingress TLS secret, click the drop-down list and select a secret from the list.
Under YAML view, paste in the following code:

spec:
  extra_settings:
    - setting: CSRF_TRUSTED_ORIGINS
      value:
        - https://my-aap-domain.com

- After you have configured your platform gateway, click Create at the bottom of the form view (or Save in the case of editing existing instances).
Verification
Red Hat OpenShift Container Platform creates the pods. This may take a few minutes. You can view the progress by navigating to Workloads → Pods.
| Operator manager controller pods | Automation controller pods | Automation hub pods | Event-Driven Ansible (EDA) pods | Platform gateway pods |
|---|---|---|---|---|
| The operator manager controllers for each of the four operators include the following: | After deploying automation controller, you can see the addition of the following pods: | After deploying automation hub, you can see the addition of the following pods: | After deploying EDA, you can see the addition of the following pods: | After deploying platform gateway, you can see the addition of the following pods: |
A missing pod can indicate the need for a pull secret. Pull secrets are required for protected or private image registries. See Using image pull secrets for more information. You can diagnose this issue further by running oc describe pod <pod-name> to see if there is an ImagePullBackOff error on that pod.
5.1.5. Configuring custom PostgreSQL settings for Ansible Automation Platform
The postgres_extra_settings variable allows you to pass a list of custom name: value pairs directly to the PostgreSQL configuration file (/var/lib/pgsql/data/postgresql.conf) within the database pod.
Prerequisites
- You have installed the Ansible Automation Platform Operator.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Go to Operators → Installed Operators.
- Select your Ansible Automation Platform Operator deployment.
- Select All Instances and go to your Ansible Automation Platform instance.
- Click the ⋮ icon and then select Edit Ansible Automation Platform.
- In the YAML view, locate the spec: section.
- Add the database section and the required settings under spec:. The following example sets the maximum number of connections:

spec:
  database:
    postgres_extra_settings:
      - name: max_connections
        value: '1000'

- Click Save.
Verification
Inspect the PostgreSQL pod logs to verify the new settings.
Alternatively, you can run the following command to check the settings. Replace <aap postgres pod> with the name of your PostgreSQL pod.
$ oc exec -it <aap postgres pod> -- psql -d gateway -c "SHOW max_connections;"
5.1.6. Frequently asked questions on platform gateway
Manage your Ansible Automation Platform deployment and troubleshoot common issues with these frequently asked questions. Learn about resource management, logging, and error recovery for your components.
- If I delete my Ansible Automation Platform deployment will I still have access to automation controller?
- No, automation controller, automation hub, and Event-Driven Ansible are nested within the deployment and are also deleted.
- How do I manage parameters when adding or removing them in the Ansible Automation Platform custom resource (CR) hierarchy?
- When adding parameters, you can add them to the Ansible Automation Platform custom resource (CR) only, and those parameters propagate down to the nested CRs.
When removing parameters, you must remove them from both the Ansible Automation Platform CR and the nested CR, for example, the Automation Controller CR.
- Something went wrong with my deployment, but I'm not sure what. How can I find out?
- You can follow along in the command line while the operator is reconciling; this can be helpful for debugging. Alternatively, you can click into the deployment instance to see the status conditions being updated as the deployment proceeds.
- Is it still possible to view individual component logs?
- Yes. When troubleshooting, examine the Ansible Automation Platform instance for the main logs, and then each individual component (EDA, AutomationHub, AutomationController) for more specific information.
- Where can I view the condition of an instance?
- To display status conditions, click into the instance and look under the Details or Events tab. Alternatively, you can run the get command:

oc get automationcontroller <instance-name> -o jsonpath="{.status.conditions}" | jq

- Can I track my migration in real time?
- To help track the status of the migration, or to understand why a migration might have failed, you can look at the migration logs as they are running. Use the logs command:

oc logs fresh-install-controller-migration-4.6.0-jwfm6 -f

- I have configured my SAML but authentication fails with this error: "Unable to complete social auth login". What can I do?
- You must update your Ansible Automation Platform instance to include the REDIRECT_IS_HTTPS extra setting. See Enabling HTTPS redirect for single sign-on (SSO) for platform gateway on OpenShift Container Platform for help with this.
5.2. Configuring automation controller on Red Hat OpenShift Container Platform web console
You can use these instructions to configure the automation controller operator on Red Hat OpenShift Container Platform, specify custom resources, and deploy Ansible Automation Platform with an external database.
Automation controller configuration can be done through the automation controller extra_settings or directly in the user interface after deployment. However, configurations made in extra_settings take precedence over settings made in the user interface.
When an instance of automation controller is removed, the associated PVCs are not automatically deleted. This can cause issues during migration if the new deployment has the same name as the previous one. Therefore, it is recommended that you manually remove old PVCs before deploying a new automation controller instance in the same namespace. See Finding and deleting PVCs for more information.
5.2.1. Prerequisites
- You have installed the Red Hat Ansible Automation Platform catalog in Operator Hub.
- For automation controller, a default StorageClass must be configured on the cluster for the operator to dynamically create needed PVCs. This is not necessary if an external PostgreSQL database is configured.
- For automation hub, a StorageClass that supports ReadWriteMany must be available on the cluster to dynamically create the PVCs needed for the content, redis, and API pods. If it is not the default StorageClass on the cluster, you can specify it when creating your AutomationHub object.
5.2.1.1. Configuring your controller image pull policy
Use this procedure to configure the image pull policy on your automation controller.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Go to Operators → Installed Operators.
- Select your Ansible Automation Platform Operator deployment.
- Select the Automation Controller tab.
For new instances, click Create AutomationController.
- For existing instances, you can edit the YAML view by clicking the ⋮ icon and then Edit AutomationController.
Click Advanced configuration. Under Image Pull Policy, click the radio button to select one of the following:
- Always
- Never
- IfNotPresent
To display the option under Image Pull Secrets, click the arrow.
- Click + beside Add Image Pull Secret and enter a value.
To display fields under the Web container resource requirements drop-down list, click the arrow.
- Under Limits and Requests, enter values for CPU cores, Memory, and Storage.
To display fields under the Task container resource requirements drop-down list, click the arrow.
- Under Limits and Requests, enter values for CPU cores, Memory, and Storage.
To display fields under the EE Control Plane container resource requirements drop-down list, click the arrow.
- Under Limits and Requests, enter values for CPU cores, Memory, and Storage.
To display fields under the PostgreSQL init container resource requirements (when using a managed service) drop-down list, click the arrow.
- Under Limits and Requests, enter values for CPU cores, Memory, and Storage.
To display fields under the Redis container resource requirements drop-down list, click the arrow.
- Under Limits and Requests, enter values for CPU cores, Memory, and Storage.
To display fields under the PostgreSQL container resource requirements (when using a managed instance) drop-down list, click the arrow.
- Under Limits and Requests, enter values for CPU cores, Memory, and Storage.
To display the PostgreSQL container storage requirements (when using a managed instance) drop-down list, click the arrow.
- Under Limits and Requests, enter values for CPU cores, Memory, and Storage.
- Under Replicas, enter the number of instance replicas.
- Under Remove used secrets on instance removal, select true or false. The default is false.
- Under Preload instance with data upon creation, select true or false. The default is true.
5.2.1.2. Configuring your controller LDAP security
You can configure your LDAP SSL configuration for automation controller through any of the following options:
- The automation controller user interface.
- The platform gateway user interface. See the Configuring LDAP authentication section of the Access management and authentication guide for additional steps.
- The following procedure steps.
Procedure
Create a secret in your Ansible Automation Platform namespace for the bundle-ca.crt file (the filename must be bundle-ca.crt):

$ oc create secret -n aap-namespace generic bundle-ca-secret --from-file=bundle-ca.crt

Add the bundle_cacert_secret to the Ansible Automation Platform custom resource:

...
spec:
  bundle_cacert_secret: bundle-ca-secret
...

Verification
You can verify the expected certificate by running:
oc exec -it deployment.apps/aap-gateway -- openssl x509 -in /etc/pki/tls/certs/bundle-ca.crt -noout -text
5.2.1.3. Configuring your automation controller operator route options
The Red Hat Ansible Automation Platform operator installation form allows you to further configure your automation controller operator route options under Advanced configuration.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to Operators → Installed Operators.
- Select your Ansible Automation Platform Operator deployment.
- Select the Automation Controller tab.
For new instances, click Create AutomationController.
- For existing instances, you can edit the YAML view by clicking the ⋮ icon and then Edit AutomationController.
- Click Advanced configuration.
- Under Ingress type, click the drop-down menu and select Route.
- Under Route DNS host, enter a common host name that the route answers to.
- Under Route TLS termination mechanism, click the drop-down menu and select Edge or Passthrough. For most instances Edge should be selected.
- Under Route TLS credential secret, click the drop-down menu and select a secret from the list.
Under Enable persistence for /var/lib/projects directory, select either true or false by moving the slider.

Note: After you have configured your route, you can customize your hostname by adding route_host: to the YAML for that automation controller instance, as shown in the sketch below.
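For example, a hypothetical hostname customization (the domain is an assumption; substitute your own):

spec:
  route_host: controller.example.com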
5.2.1.4. Configuring the ingress type for your automation controller operator
The Ansible Automation Platform Operator installation form allows you to further configure your automation controller operator ingress under Advanced configuration.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to Operators → Installed Operators.
- Select your Ansible Automation Platform Operator deployment.
- Select the Automation Controller tab.
For new instances, click Create AutomationController.
- For existing instances, you can edit the YAML view by clicking the ⋮ icon and then Edit AutomationController.
- Click Advanced configuration.
- Under Ingress type, click the drop-down menu and select Ingress.
- Under Ingress annotations, enter any annotations to add to the ingress.
- Under Ingress TLS secret, click the drop-down menu and select a secret from the list.
Verification
After you have configured your automation controller operator, click Create at the bottom of the form view. Red Hat OpenShift Container Platform creates the pods. This may take a few minutes.
You can view the progress by navigating to Workloads → Pods.
Verify that the following operator pods provided by the Ansible Automation Platform Operator installation are running:
| Operator manager controllers | Automation controller | Automation hub | Event-Driven Ansible (EDA) |
|---|---|---|---|
| The operator manager controllers for each of the three operators include the following:
| After deploying automation controller, you can see the addition of the following pods:
| After deploying automation hub, you can see the addition of the following pods:
| After deploying EDA, you can see the addition of the following pods:
|
A missing pod can indicate the need for a pull secret. Pull secrets are required for protected or private image registries. See Using image pull secrets for more information. You can diagnose this issue further by running oc describe pod <pod-name> to see if there is an ImagePullBackOff error on that pod.
5.2.2. Configuring an external database for automation controller on Red Hat Ansible Automation Platform Operator
Users who prefer to deploy Ansible Automation Platform with an external database can do so by configuring a secret with instance credentials and connection information, and then applying it to their cluster using the oc create command.
By default, the Ansible Automation Platform Operator automatically creates and configures a managed PostgreSQL pod in the same namespace as your Ansible Automation Platform deployment. You can deploy Ansible Automation Platform with an external database instead of the managed PostgreSQL pod that the Ansible Automation Platform Operator automatically creates.
Using an external database lets you share and reuse resources and manually manage backups, upgrades, and performance optimizations.
The same external database (PostgreSQL instance) can be used for both automation hub, automation controller, and platform gateway as long as the database names are different. In other words, you can have multiple databases with different names inside a single PostgreSQL instance.
The following section outlines the steps to configure an external database for your automation controller on the Ansible Automation Platform Operator.
Prerequisite
The external database must be a PostgreSQL database of a version supported by the current release of Ansible Automation Platform. The external PostgreSQL instance credentials and connection information must be stored in a secret, which is then set on the automation controller spec.
Ansible Automation Platform 2.6 supports PostgreSQL 15 for its managed databases and additionally supports PostgreSQL 15, 16, and 17 for external databases.
If you choose to use an externally managed database with version 16 or 17, you must also rely on external backup and restore processes.
Procedure
Create a postgres_configuration_secret YAML file, following the template shown after these notes:
- Namespace to create the secret in. This should be the same namespace you want to deploy to.
- The resolvable hostname for your database node.
- External port defaults to 5432.
- Value for variable password should not contain single or double quotes (', ") or backslashes (\) to avoid any issues during deployment, backup or restoration.
- The variable sslmode is valid for external databases only. The allowed values are: prefer, disable, allow, require, verify-ca, and verify-full.
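A minimal sketch of the secret, matching the notes above; the secret name and placeholder values are assumptions to adapt to your environment. Save it as external-postgres-configuration-secret.yml:

apiVersion: v1
kind: Secret
metadata:
  name: external-postgres-configuration
  namespace: <target_namespace>    # Same namespace you deploy automation controller to
stringData:
  host: "<external_ip_or_url_resolvable_by_the_cluster>"    # Resolvable hostname for your database node
  port: "5432"    # External port defaults to 5432
  database: "<desired_database_name>"
  username: "<username_to_connect_as>"
  password: "<password_to_connect_with>"    # Avoid quotes (', ") and backslashes (\)
  sslmode: "prefer"    # Valid for external databases only
  type: "unmanaged"    # Assumed; marks the database as externally managed
type: Opaque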
Apply external-postgres-configuration-secret.yml to your cluster using the oc create command:

$ oc create -f external-postgres-configuration-secret.yml

When creating your AutomationController custom resource object, specify the secret on your spec, following the example below:
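A sketch of the custom resource, assuming the secret above is named external-postgres-configuration; the AutomationController spec takes it in the postgres_configuration_secret field:

apiVersion: automationcontroller.ansible.com/v1beta1
kind: AutomationController
metadata:
  name: example-controller
spec:
  postgres_configuration_secret: external-postgres-configuration    # Secret created in the previous step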
5.2.3. Finding and deleting PVCs
A persistent volume claim (PVC) is a storage volume used to store data that the automation hub and automation controller applications use. These PVCs are independent from the applications and remain even when the application is deleted. If you are confident that you no longer need a PVC, or if you have backed it up elsewhere, you can manually delete it.
Procedure
List the existing PVCs in your deployment namespace:
oc get pvc -n <namespace>

- Identify the PVC associated with your previous deployment by comparing the old deployment name and the PVC name.
Delete the old PVC:
oc delete pvc -n <namespace> <pvc-name>
5.3. Configuring automation hub on Red Hat OpenShift Container Platform web console
You can use these instructions to configure the automation hub operator on Red Hat OpenShift Container Platform, specify custom resources, and deploy Ansible Automation Platform with an external database.
Automation hub configuration can be done through the automation hub pulp_settings or directly in the user interface after deployment. However, it is important to note that configurations made in pulp_settings take precedence over settings made in the user interface. Hub settings should always be set as lowercase on the Hub custom resource specification.
When an instance of automation hub is removed, the PVCs are not automatically deleted. This can cause issues during migration if the new deployment has the same name as the previous one. Therefore, it is recommended that you manually remove old PVCs before deploying a new automation hub instance in the same namespace. See Finding and deleting PVCs for more information.
5.3.1. Prerequisites
- You have installed the Ansible Automation Platform Operator in Operator Hub.
5.3.1.1. Storage options for Ansible Automation Platform Operator installation on Red Hat OpenShift Container Platform
Automation hub requires ReadWriteMany file-based storage, Azure Blob storage, or Amazon S3 storage for operation so that multiple pods can access shared content, such as collections.
The process for configuring object storage on the AutomationHub CR is similar for Amazon S3 and Azure Blob Storage.
If you are using file-based storage and your installation scenario includes automation hub, ensure that the storage option for Ansible Automation Platform Operator is set to ReadWriteMany. ReadWriteMany is the default storage option.
In addition, OpenShift Data Foundation provides a ReadWriteMany or S3 implementation. You can also set up NFS storage to support ReadWriteMany. This, however, introduces the NFS server as a potential single point of failure.
5.3.1.1.1. Provisioning OCP storage with ReadWriteMany access mode
To ensure successful installation of Ansible Automation Platform Operator, you must provision your storage type for automation hub initially to ReadWriteMany access mode.
Procedure
- Go to Storage → PersistentVolumeClaims.
- Click Create PersistentVolumeClaim.
- In the first step, update the accessModes from the default ReadWriteOnce to ReadWriteMany.
- See Provisioning to update the access mode for a detailed overview.
- Complete the additional steps in this section to create the persistent volume claim (PVC).
5.3.1.1.2. Configuring object storage on Amazon S3
Red Hat supports Amazon Simple Storage Service (S3) for automation hub. You can configure it when deploying the AutomationHub custom resource (CR), or you can configure it for an existing instance.
Prerequisites
- Create an Amazon S3 bucket to store the objects.
- Note the name of the S3 bucket.
Procedure
Create a Kubernetes secret containing the AWS credentials and connection details, and the name of your Amazon S3 bucket. The following example creates a secret called test-s3.
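A sketch of the secret creation, assuming your bucket details are exported as the environment variables shown; the stringData key names follow the hub operator's S3 settings and should be verified against your operator version:

$ oc -n $HUB_NAMESPACE apply -f- <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: test-s3
stringData:
  s3-access-key-id: $S3_ACCESS_KEY_ID
  s3-secret-access-key: $S3_SECRET_ACCESS_KEY
  s3-bucket-name: $S3_BUCKET_NAME
  s3-region: $S3_REGION
EOF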
spec:spec: object_storage_s3_secret: test-s3
spec: object_storage_s3_secret: test-s3Copy to Clipboard Copied! Toggle word wrap Toggle overflow If you are applying this secret to an existing instance, restart the API pods for the change to take effect.
<hub-name>is the name of your hub instance.oc -n $HUB_NAMESPACE delete pod -l app.kubernetes.io/name=<hub-name>-api
$ oc -n $HUB_NAMESPACE delete pod -l app.kubernetes.io/name=<hub-name>-apiCopy to Clipboard Copied! Toggle word wrap Toggle overflow
5.3.1.1.3. Configuring object storage on Azure Blob
Red Hat supports Azure Blob Storage for automation hub. You can configure it when deploying the AutomationHub custom resource (CR), or you can configure it for an existing instance.
Prerequisites
- Create an Azure Storage blob container to store the objects.
- Note the name of the blob container.
Procedure
Create a Kubernetes secret containing the credentials and connection details for your Azure account, and the name of your Azure Storage blob container. The following example creates a secret called test-azure.
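A sketch of the secret creation, assuming your Azure account details are exported as the environment variables shown; the stringData key names follow the hub operator's Azure settings and should be verified against your operator version:

$ oc -n $HUB_NAMESPACE apply -f- <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: test-azure
stringData:
  azure-account-name: $AZURE_ACCOUNT_NAME
  azure-account-key: $AZURE_ACCOUNT_KEY
  azure-container: $AZURE_CONTAINER
  azure-container-path: $AZURE_CONTAINER_PATH
EOF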
spec:spec: object_storage_azure_secret: test-azure
spec: object_storage_azure_secret: test-azureCopy to Clipboard Copied! Toggle word wrap Toggle overflow If you are applying this secret to an existing instance, restart the API pods for the change to take effect.
<hub-name>is the name of your hub instance.oc -n $HUB_NAMESPACE delete pod -l app.kubernetes.io/name=<hub-name>-api
$ oc -n $HUB_NAMESPACE delete pod -l app.kubernetes.io/name=<hub-name>-apiCopy to Clipboard Copied! Toggle word wrap Toggle overflow
5.3.1.2. Configuring your automation hub operator route options
The Red Hat Ansible Automation Platform operator installation form allows you to further configure your automation hub operator route options under Advanced configuration.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to Operators → Installed Operators.
- Select your Ansible Automation Platform Operator deployment.
- Select the Automation Hub tab.
For new instances, click Create AutomationHub.
- For existing instances, you can edit the YAML view by clicking the ⋮ icon and then Edit AutomationHub.
- Click Advanced configuration.
- Under Ingress type, click the drop-down menu and select Route.
- Under Route DNS host, enter a common host name that the route answers to.
- Under Route TLS termination mechanism, click the drop-down menu and select Edge or Passthrough.
Under Route TLS credential secret, click the drop-down menu and select a secret from the list.
Note: After you have configured your route, you can customize your hostname by adding route_host: to the YAML for that automation hub instance.
5.3.1.3. Configuring the ingress type for your automation hub operator
The Ansible Automation Platform Operator installation form allows you to further configure your automation hub operator ingress under Advanced configuration.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to Operators → Installed Operators.
- Select your Ansible Automation Platform Operator deployment.
- Select the Automation Hub tab.
For new instances, click Create AutomationHub.
- For existing instances, you can edit the YAML view by clicking the ⋮ icon and then Edit AutomationHub.
- Click Advanced configuration.
- Under Ingress type, click the drop-down menu and select Ingress.
- Under Ingress annotations, enter any annotations to add to the ingress.
- Under Ingress TLS secret, click the drop-down menu and select a secret from the list.
Verification
After you have configured your automation hub operator, click Create at the bottom of the form view. Red Hat OpenShift Container Platform creates the pods. This may take a few minutes.
You can view the progress by navigating to Workloads → Pods.
Verify that the following operator pods provided by the Ansible Automation Platform Operator installation are running:
| Operator manager controllers | Automation controller | Automation hub |
|---|---|---|
| The operator manager controllers for each of the three operators include the following:
| After deploying automation controller, you will see the addition of these pods:
| After deploying automation hub, you will see the addition of these pods:
|
A missing pod can indicate the need for a pull secret. Pull secrets are required for protected or private image registries. See Using image pull secrets for more information. You can diagnose this issue further by running oc describe pod <pod-name> to see if there is an ImagePullBackOff error on that pod.
5.3.2. Finding the automation hub route
You can access the automation hub through the platform gateway or through the following procedure.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to Networking → Routes.
- Under Location, click the URL for your automation hub instance.
Verification
The automation hub user interface launches where you can sign in with the administrator credentials specified during the operator configuration process.
If you did not specify an administrator password during configuration, one was automatically created for you. To locate this password, go to your project, select Workloads → Secrets, and open <hub-instance-name>-admin-password.
5.3.3. Configuring an external database for automation hub on Red Hat Ansible Automation Platform Operator
Users who prefer to deploy Ansible Automation Platform with an external database can do so by configuring a secret with instance credentials and connection information, and then applying it to their cluster using the oc create command.
By default, the Ansible Automation Platform Operator automatically creates and configures a managed PostgreSQL pod in the same namespace as your Ansible Automation Platform deployment.
You can choose to use an external database instead if you prefer to use a dedicated node to ensure dedicated resources or to manually manage backups, upgrades, or performance tweaks.
The same external database (PostgreSQL instance) can be used for both automation hub, automation controller, and platform gateway as long as the database names are different. In other words, you can have multiple databases with different names inside a single PostgreSQL instance.
The following section outlines the steps to configure an external database for your automation hub on the Ansible Automation Platform Operator.
Prerequisite
The external database must be a PostgreSQL database of a version supported by the current release of Ansible Automation Platform. The external PostgreSQL instance credentials and connection information must be stored in a secret, which is then set on the automation hub spec.
Ansible Automation Platform 2.6 supports PostgreSQL 15 for its managed databases and additionally supports PostgreSQL 15, 16, and 17 for external databases.
If you choose to use an externally managed database with version 16 or 17, you must also rely on external backup and restore processes.
Procedure
Create a postgres_configuration_secret YAML file, following the template shown after these notes:
- Namespace to create the secret in. This should be the same namespace you want to deploy to.
- The resolvable hostname for your database node.
- External port defaults to 5432.
- Value for variable password should not contain single or double quotes (', ") or backslashes (\) to avoid any issues during deployment, backup or restoration.
- The variable sslmode is valid for external databases only. The allowed values are: prefer, disable, allow, require, verify-ca, and verify-full.
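A minimal sketch of the secret, matching the notes above; the secret name and placeholder values are assumptions to adapt to your environment. Save it as external-postgres-configuration-secret.yml:

apiVersion: v1
kind: Secret
metadata:
  name: external-postgres-configuration
  namespace: <target_namespace>    # Same namespace you deploy automation hub to
stringData:
  host: "<external_ip_or_url_resolvable_by_the_cluster>"    # Resolvable hostname for your database node
  port: "5432"    # External port defaults to 5432
  database: "<desired_database_name>"
  username: "<username_to_connect_as>"
  password: "<password_to_connect_with>"    # Avoid quotes (', ") and backslashes (\)
  sslmode: "prefer"    # Valid for external databases only
  type: "unmanaged"    # Assumed; marks the database as externally managed
type: Opaque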
Apply external-postgres-configuration-secret.yml to your cluster using the oc create command:

$ oc create -f external-postgres-configuration-secret.yml

When creating your AutomationHub custom resource object, specify the secret on your spec, following the example below:
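A sketch of the custom resource, assuming the secret above is named external-postgres-configuration; the AutomationHub spec takes it in the postgres_configuration_secret field:

apiVersion: automationhub.ansible.com/v1beta1
kind: AutomationHub
metadata:
  name: example-hub
spec:
  postgres_configuration_secret: external-postgres-configuration    # Secret created in the previous step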
5.3.3.1. Enabling the hstore extension for the automation hub PostgreSQL database
The database migration script uses hstore fields to store information; therefore, the hstore extension must be enabled in the automation hub PostgreSQL database.
This process is automatic when using the Ansible Automation Platform installer and a managed PostgreSQL server.
If the PostgreSQL database is external, you must enable the hstore extension in the automation hub PostgreSQL database manually before installation.
If the hstore extension is not enabled before installation, the database migration fails.
Procedure
Check if the extension is available on the PostgreSQL server (automation hub database).
$ psql -d <automation hub database> -c "SELECT * FROM pg_available_extensions WHERE name='hstore'"

Where the default value for <automation hub database> is automationhub.

Example output with hstore available:

 name   | default_version | installed_version | comment
--------+-----------------+-------------------+---------------------------------------------------
 hstore | 1.7             |                   | data type for storing sets of (key, value) pairs
(1 row)

Example output with hstore not available:

 name | default_version | installed_version | comment
------+-----------------+-------------------+---------
(0 rows)

On a RHEL based server, the hstore extension is included in the postgresql-contrib RPM package, which is not installed automatically when installing the PostgreSQL server RPM package.
To install the RPM package, use the following command:

dnf install postgresql-contrib

Load the hstore PostgreSQL extension into the automation hub database with the following command:

$ psql -d <automation hub database> -c "CREATE EXTENSION hstore;"

In the following output, the installed_version field lists the hstore extension used, indicating that hstore is enabled:

 name   | default_version | installed_version | comment
--------+-----------------+-------------------+---------------------------------------------------
 hstore | 1.7             | 1.7               | data type for storing sets of (key, value) pairs
(1 row)
5.3.4. Finding and deleting PVCs
A persistent volume claim (PVC) is a storage volume used to store data that the automation hub and automation controller applications use. These PVCs are independent from the applications and remain even when the application is deleted. If you are confident that you no longer need a PVC, or if you have backed it up elsewhere, you can manually delete it.
Procedure
List the existing PVCs in your deployment namespace:
oc get pvc -n <namespace>

- Identify the PVC associated with your previous deployment by comparing the old deployment name and the PVC name.
Delete the old PVC:
oc delete pvc -n <namespace> <pvc-name>
5.3.5. Additional configurations
A collection download count can help you understand collection usage. To add a collection download count to automation hub, set the following configuration:
spec:
  pulp_settings:
    ansible_collect_download_count: true
When ansible_collect_download_count is enabled, automation hub displays a download count beside each collection.
5.3.6. Adding allowed registries to the automation controller image configuration
Before you can deploy a container image in automation hub, you must add the registry to the allowedRegistries in the automation controller image configuration. To do this, copy and paste the following code into your automation controller image YAML.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to Home → Search.
- Select the Resources drop-down list and type "Image".
- Select Image (config,openshift.io/v1).
- Click cluster under the Name heading.
- Select the YAML tab.
Paste in the following under spec value:
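A sketch of the allowedRegistries stanza; the registry entries are examples to adapt, and you should keep the registries your cluster already pulls from (such as the internal OpenShift image registry) so existing images are not blocked:

spec:
  registrySources:
    allowedRegistries:
      - quay.io
      - registry.redhat.io
      - registry.access.redhat.com
      - image-registry.openshift-image-registry.svc:5000
      - <your_automation_hub_registry>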
- Click Save.
5.3.7. Configuring content signing for Ansible Automation Platform Hub Operator
As an automation administrator for your organization, you can configure Ansible Automation Platform Hub Operator for signing and publishing Ansible content collections from different groups within your organization.
For additional security, automation creators can configure Ansible-Galaxy CLI to verify these collections to ensure that they have not been changed after they were uploaded to automation hub.
To successfully sign and publish Ansible Certified Content Collections, you must configure private automation hub for signing.
Prerequisites
- A GPG key pair. If you do not have one, you can generate one using the gpg --full-generate-key command.
- Your public-private key pair has proper access for configuring content signing on Ansible Automation Platform Hub Operator.
Procedure
Create a ConfigMap for signing scripts. The ConfigMap you create contains the scripts used by the signing service for collections and container images.
Note: This script is used as part of the signing service and must generate an ascii-armored detached gpg signature for that file using the key specified through the PULP_SIGNING_KEY_FINGERPRINT environment variable. The script prints out a JSON structure with the following format:

{"file": "filename", "signature": "filename.asc"}

All the file names are relative paths inside the current working directory. The file name must remain the same for the detached signature.
Example: The following script produces signatures for content:
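A sketch of such a script, following the signing-service contract described in the note above (sign the file passed as the first argument with the key in PULP_SIGNING_KEY_FINGERPRINT, then print the JSON structure):

#!/usr/bin/env bash

FILE_PATH=$1
SIGNATURE_PATH="$1.asc"

ADMIN_ID="$PULP_SIGNING_KEY_FINGERPRINT"

# Create a detached ascii-armored signature for the file.
gpg --quiet --batch --pinentry-mode loopback --homedir ~/.gnupg \
    --detach-sign --default-key $ADMIN_ID --armor --output $SIGNATURE_PATH $FILE_PATH

# Print the JSON structure the signing service expects.
[ $? -eq 0 ] && echo {\"file\": \"$FILE_PATH\", \"signature\": \"$SIGNATURE_PATH\"} || exit $?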
Create a secret for your GnuPG private key. This secret securely stores the GnuPG private key you use for signing.
$ gpg --export-secret-keys --armor <your-gpg-key-id> > signing_service.gpg
$ oc create secret generic signing-galaxy --from-file=signing_service.gpg

The secret must have a key named signing_service.gpg.
Configure the AnsibleAutomationPlatform CR.
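A sketch of the relevant fields, assuming the signing settings are set on the hub section of the AnsibleAutomationPlatform CR and that your scripts ConfigMap is named signing-scripts; verify the exact field names against your operator version:

spec:
  hub:
    signing_scripts_configmap: signing-scripts    # ConfigMap containing the signing scripts (assumed name)
    signing_secret: signing-galaxy    # Secret created in the previous step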
5.4. Deploying Redis on Red Hat Ansible Automation Platform Operator
When you create an Ansible Automation Platform instance through the Ansible Automation Platform Operator, standalone Redis is assigned by default. If you would prefer to deploy clustered Redis, you can use the following procedure.
For more information about Redis, refer to Caching and queueing system in the Planning your installation guide.
Switching Redis modes on an existing instance is not supported and can lead to unexpected consequences, including data loss. To change the Redis mode, you must deploy a new instance.
Prerequisites
- You have installed an Ansible Automation Platform Operator deployment.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to Operators → Installed Operators.
- Select your Ansible Automation Platform Operator deployment.
- Select the Details tab.
On the Ansible Automation Platform tile, click Create instance.
- For existing instances, you can edit the YAML view by clicking the ⋮ icon and then Edit Ansible Automation Platform.
- Change the redis_mode value to "cluster".
- Click Save, then Reload.
- Click to expand Advanced configuration.
- For the Redis Mode list, select Cluster.
- Configure the rest of your instance as necessary, then click Create.
Verification
Your instance deploys with clustered Redis and six Redis replicas by default.
You can modify your automation hub default Redis cache PVC volume size. For help with this, see Modifying the default redis cache PVC volume size for automation hub.