Installing on OpenShift Container Platform
Install and configure Ansible Automation Platform operator on OpenShift Container Platform
Abstract
Preface
Thank you for your interest in Red Hat Ansible Automation Platform. Ansible Automation Platform is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments.
This guide helps you to understand the installation, migration, and upgrade requirements for deploying the Ansible Automation Platform Operator on OpenShift Container Platform.
Providing feedback on Red Hat documentation
If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request.
Chapter 1. Installing Red Hat Ansible Automation Platform Operator on Red Hat OpenShift Container Platform
As a system administrator, you can use Ansible Automation Platform Operator to deploy new Ansible Automation Platform instances in your OpenShift environment.
1.1. Planning your Red Hat Ansible Automation Platform Operator installation on Red Hat OpenShift Container Platform
Red Hat Ansible Automation Platform is supported on both Red Hat Enterprise Linux and Red Hat OpenShift.
OpenShift operators help install and automate day-2 operations of complex, distributed software on Red Hat OpenShift Container Platform. The Ansible Automation Platform Operator enables you to deploy and manage Ansible Automation Platform components on Red Hat OpenShift Container Platform.
You can use this section to help plan your Red Hat Ansible Automation Platform installation on your Red Hat OpenShift Container Platform environment. Before installing, review the supported installation scenarios to determine which meets your requirements.
1.1.1. About Ansible Automation Platform Operator
The Ansible Automation Platform Operator provides cloud-native, push-button deployment of new Ansible Automation Platform instances in your OpenShift environment. The Ansible Automation Platform Operator includes resource types to deploy and manage instances of automation controller and private automation hub. It also includes automation controller job resources for defining and launching jobs inside your automation controller deployments.
Deploying Ansible Automation Platform instances with a Kubernetes native operator offers several advantages over launching instances from a playbook deployed on Red Hat OpenShift Container Platform, including upgrades and full lifecycle support for your Red Hat Ansible Automation Platform deployments.
You can install the Ansible Automation Platform Operator from the Red Hat Operators catalog in OperatorHub.
For information about the Ansible Automation Platform Operator system requirements and infrastructure topology, see Operator topologies in Tested deployment models.
1.1.2. OpenShift Container Platform version compatibility
The Ansible Automation Platform Operator to install Ansible Automation Platform 2.5 is available on OpenShift Container Platform 4.12 through 4.17 and later versions.
Additional resources
- See the Red Hat Ansible Automation Platform Life Cycle for the most current compatibility details.
1.1.3. Supported installation scenarios for Red Hat OpenShift Container Platform
You can use the OperatorHub on the Red Hat OpenShift Container Platform web console to install Ansible Automation Platform Operator.
Alternatively, you can install Ansible Automation Platform Operator from the OpenShift Container Platform command-line interface (CLI), oc. See Installing Red Hat Ansible Automation Platform Operator from the OpenShift Container Platform CLI for help with this.
After you have installed Ansible Automation Platform Operator, you must create an Ansible Automation Platform custom resource (CR). This enables you to manage Ansible Automation Platform components from a single unified interface known as the platform gateway. As of version 2.5, you must create an Ansible Automation Platform CR even if you have existing automation controller, automation hub, or Event-Driven Ansible components.
If existing components have already been deployed, you must specify these components on the Ansible Automation Platform CR. You must create the custom resource in the same namespace as the existing components.
1.1.4. Custom resources
You can define custom resources for each primary installation workflow.
1.1.4.1. Modifying the number of simultaneous rulebook activations during or after Event-Driven Ansible controller installation
If you plan to install Event-Driven Ansible on OpenShift Container Platform and want to modify the number of simultaneous rulebook activations, add the EDA_MAX_RUNNING_ACTIVATIONS parameter to your custom resources. By default, Event-Driven Ansible controller allows 12 activations per node to run simultaneously. For an example, see the eda-max-running-activations.yml file in the appendix.
EDA_MAX_RUNNING_ACTIVATIONS for OpenShift Container Platform is a global value because there is no concept of worker nodes when installing Event-Driven Ansible on OpenShift Container Platform.
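The following sketch shows one way this parameter can be set, assuming you pass it through the extra_settings list of the Event-Driven Ansible custom resource; the instance name, namespace, and value shown are illustrative, not prescribed:
apiVersion: eda.ansible.com/v1alpha1
kind: EDA
metadata:
  name: my-eda                  # illustrative instance name
  namespace: aap
spec:
  extra_settings:
    - setting: EDA_MAX_RUNNING_ACTIVATIONS
      value: '16'               # raises the default of 12 simultaneous activations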
1.1.5. Ansible Automation Platform Operator CSRF management
In Ansible Automation Platform version 2.5, the Ansible Automation Platform Operator on OpenShift Container Platform creates OpenShift Routes and configures your Cross-site request forgery (CSRF) settings automatically. When using external ingress, you must configure CSRF on the ingress. For help with this, see Configuring your CSRF settings for your platform gateway operator ingress.
In previous versions, CSRF was configurable through the automation controller user interface. In version 2.5, the automation controller settings are still present but have no impact on CSRF settings for the platform gateway.
The following table helps to clarify which settings are applicable for which component.
UI setting | Applicable for
---|---
Subscription | Automation controller
Platform gateway | Platform gateway
User Preferences | User interface
System | Automation controller
Job | Automation controller
Logging | Automation controller
Troubleshooting | Automation controller
1.1.6. Additional resources
- See Understanding OperatorHub to learn more about OpenShift Container Platform OperatorHub.
1.2. Managing Ansible Automation Platform licensing, updates, and support
Ansible is an open source software project and is licensed under the GNU General Public License version 3, as described in the Ansible Source Code.
You must have valid subscriptions attached before installing Ansible Automation Platform.
For more information, see Attaching Subscriptions.
1.2.1. Trial and evaluation
A license is required to run Ansible Automation Platform. You can start by using a free trial license.
- Trial licenses for Ansible Automation Platform are available at the Red Hat product trial center.
- Support is not included in a trial license or during an evaluation of the Ansible Automation Platform.
1.2.2. Component licenses
To view the license information for the components included in Ansible Automation Platform, see /usr/share/doc/automation-controller-<version>/README, where <version> refers to the version of automation controller you have installed.
To view a specific license, see /usr/share/doc/automation-controller-<version>/*.txt, where * is the license file name to which you are referring.
1.2.3. Node counting in licenses
The Ansible Automation Platform license defines the number of Managed Nodes that can be managed as part of your subscription.
A typical license says "License Count: 500", which sets the maximum number of Managed Nodes at 500.
For more information about managed node requirements for licensing, see How are "managed nodes" defined as part of the Red Hat Ansible Automation Platform offering.
Ansible does not recycle node counts or reset automated hosts.
1.2.4. Subscription Types
Red Hat Ansible Automation Platform is provided as an annual subscription at various levels of support and numbers of managed machines.
Standard:
- Manage any size environment
- Enterprise 8x5 support and SLA
- Maintenance and upgrades included
- Review the SLA at Product Support Terms of Service
- Review the Red Hat Support Severity Level Definitions
Premium:
- Manage any size environment, including mission-critical environments
- Premium 24x7 support and SLA
- Maintenance and upgrades included
- Review the SLA at Product Support Terms of Service
- Review the Red Hat Support Severity Level Definitions
All subscription levels include regular updates and releases of automation controller, Ansible, and any other components of the Ansible Automation Platform.
For more information, contact Ansible through the Red Hat Customer Portal or at the Ansible site.
1.2.5. Attaching your Red Hat Ansible Automation Platform subscription
You must have valid subscriptions attached on all nodes before installing Red Hat Ansible Automation Platform. Attaching your Ansible Automation Platform subscription provides access to subscription-only resources necessary to proceed with the installation.
Procedure
Make sure your system is registered:
$ sudo subscription-manager register --username <$INSERT_USERNAME_HERE> --password <$INSERT_PASSWORD_HERE>
Obtain the pool_id for your Red Hat Ansible Automation Platform subscription:
$ sudo subscription-manager list --available --all | grep "Ansible Automation Platform" -B 3 -A 6
Note: Do not use MCT4022 as a pool_id for your subscription because it can cause Ansible Automation Platform subscription attachment to fail.
Example
An example output of the subscription-manager list command. Obtain the pool_id as seen in the Pool ID: section:
Subscription Name:   Red Hat Ansible Automation, Premium (5000 Managed Nodes)
Provides:            Red Hat Ansible Engine
                     Red Hat Ansible Automation Platform
SKU:                 MCT3695
Contract:
Pool ID:             <pool_id>
Provides Management: No
Available:           4999
Suggested:           1
Attach the subscription:
$ sudo subscription-manager attach --pool=<pool_id>
You have now attached your Red Hat Ansible Automation Platform subscriptions to all nodes.
To remove this subscription, enter the following command:
$ sudo subscription-manager remove --pool=<pool_id>
Verification
- Verify the subscription was successfully attached:
$ sudo subscription-manager list --consumed
Troubleshooting
If you are unable to locate certain packages that came bundled with the Ansible Automation Platform installer, or if you are seeing a Repositories disabled by configuration message, try enabling the repository by using the following commands:
Red Hat Ansible Automation Platform 2.5 for RHEL 8
$ sudo subscription-manager repos --enable ansible-automation-platform-2.5-for-rhel-8-x86_64-rpms
Red Hat Ansible Automation Platform 2.5 for RHEL 9
$ sudo subscription-manager repos --enable ansible-automation-platform-2.5-for-rhel-9-x86_64-rpms
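You can then confirm that the repository is active. For example, assuming the RHEL 9 repository was enabled:
$ sudo subscription-manager repos --list-enabled | grep ansible-automation-platform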
1.2.6. Obtaining a manifest file
You can obtain a subscription manifest in the Subscription Allocations section of Red Hat Subscription Management. After you obtain a subscription allocation, you can download its manifest file and upload it to activate Ansible Automation Platform.
To begin, log in to the Red Hat Customer Portal using your administrator user account and follow the procedures in this section.
1.2.6.1. Create a subscription allocation
Creating a new subscription allocation allows you to set aside subscriptions and entitlements for a system that is currently offline or air-gapped. This is necessary before you can download its manifest and upload it to Ansible Automation Platform.
Procedure
- From the Subscription Allocations page, click .
- Enter a name for the allocation so that you can find it later.
- Select Type: Satellite 6.16 as the management application.
- Click .
Next steps
- Add the subscriptions needed for Ansible Automation Platform to run properly.
1.2.6.2. Adding subscriptions to a subscription allocation
Once an allocation is created, you can add the subscriptions you need for Ansible Automation Platform to run properly. This step is necessary before you can download the manifest and add it to Ansible Automation Platform.
Procedure
- From the Subscription Allocations page, click on the name of the Subscription Allocation to which you would like to add a subscription.
- Click the Subscriptions tab.
- Click .
- Enter the number of Ansible Automation Platform Entitlement(s) you plan to add.
- Click .
Next steps
- Download the manifest file from Red Hat Subscription Management.
1.2.6.3. Downloading a manifest file
After an allocation is created and has the appropriate subscriptions on it, you can download the manifest from Red Hat Subscription Management.
Procedure
- From the Subscription Allocations page, click on the name of the Subscription Allocation to which you would like to generate a manifest.
- Click the Subscriptions tab.
- Click to download the manifest file. This downloads a file named manifest_<allocation name>_<date>.zip to your default downloads folder.
Next steps
- Upload the manifest file to activate Red Hat Ansible Automation Platform.
1.2.7. Activating Red Hat Ansible Automation Platform
Ansible subscriptions require a service account from console.redhat.com. You must create a service account and use the client ID and client secret to activate your subscription.
If you enter your client ID and client secret but cannot locate your subscription, you might not have the correct permissions set on your service account. For more information and troubleshooting guidance for service accounts, see Configure Ansible Automation Platform to authenticate through service account credentials.
For Red Hat Satellite, input your Satellite username and Satellite password in the fields below.
Red Hat Ansible Automation Platform uses available subscriptions or a subscription manifest to authorize the use of Ansible Automation Platform. To obtain a subscription, you can do either of the following:
- Use your Red Hat service account or Satellite credentials when you launch Ansible Automation Platform.
- Upload a subscription manifest file either by using the Red Hat Ansible Automation Platform interface or manually in an Ansible playbook.
1.2.7.1. Activate with credentials
When Ansible Automation Platform launches for the first time, the Ansible Automation Platform Subscription screen automatically displays. You can use your Red Hat service account to retrieve and import your subscription directly into Ansible Automation Platform.
You are opted in for Automation Analytics by default when you activate the platform on first login. This helps Red Hat improve the product by delivering you a much better user experience. You can opt out after activating Ansible Automation Platform by doing the following:
- From the navigation panel, select → → .
- Clear the Gather data for Automation Analytics option.
- Click .
Procedure
- Log in to Red Hat Ansible Automation Platform.
- Select Service Account / Red Hat Satellite.
- Enter your Client ID / Satellite username and Client secret / Satellite password.
Select your subscription from the Subscription list.
NoteYou can also use your Satellite username and password if your cluster nodes are registered to Satellite through Subscription Manager.
- Review the End User License Agreement and select I agree to the End User License Agreement.
- Click .
Verification
After your subscription has been accepted, subscription details are displayed. A status of Compliant indicates your subscription is in compliance with the number of hosts you have automated within your subscription count. Otherwise, your status shows as Out of Compliance, indicating you have exceeded the number of hosts in your subscription. Other important information displayed includes the following:
- Hosts automated: Host count automated by the job, which consumes the license count.
- Hosts imported: Host count considering all inventory sources (does not impact hosts remaining).
- Hosts remaining: Total host count minus hosts automated.
1.2.7.2. Activate with a manifest file
If you have a subscription manifest, you can upload the manifest file by using the Red Hat Ansible Automation Platform interface.
You are opted in for Automation Analytics by default when you activate the platform on first login. This helps Red Hat improve the product by delivering you a much better user experience. You can opt out after activating Ansible Automation Platform by doing the following:
- From the navigation panel, select → → .
- Clear the Gather data for Automation Analytics option.
- Click .
Prerequisites
You must have a Red Hat Subscription Manifest file exported from the Red Hat Customer Portal. For more information, see Obtaining a manifest file.
Procedure
- Log in to Red Hat Ansible Automation Platform.
- If you are not immediately prompted for a manifest file, go to → .
- Select Subscription manifest.
- Click and select the manifest file.
- Review the End User License Agreement and select I agree to the End User License Agreement.
- Click .
If the button is disabled on the License page, clear the USERNAME and PASSWORD fields.
Verification
After your subscription has been accepted, subscription details are displayed. A status of Compliant indicates your subscription is in compliance with the number of hosts you have automated within your subscription count. Otherwise, your status shows as Out of Compliance, indicating you have exceeded the number of hosts in your subscription. Other important information displayed includes the following:
- Hosts automated: Host count automated by the job, which consumes the license count.
- Hosts imported: Host count considering all inventory sources (does not impact hosts remaining).
- Hosts remaining: Total host count minus hosts automated.
Next steps
- You can return to the license screen by selecting → from the navigation panel and clicking .
1.3. Installing the Red Hat Ansible Automation Platform Operator on Red Hat OpenShift Container Platform
For information about the Ansible Automation Platform Operator system requirements and infrastructure topology see Operator topologies in Tested deployment models.
When installing your Ansible Automation Platform Operator you have a choice of a namespace-scoped operator or a cluster-scoped operator. This depends on the update channel you choose: stable-2.x or stable-2.x-cluster-scoped.
A namespace-scoped operator is confined to one namespace, offering tighter security. A cluster-scoped operator spans multiple namespaces, which grants broader permissions.
If you are managing multiple Ansible Automation Platform instances with the same Ansible Automation Platform Operator version, use the cluster-scoped operator, which uses a single operator to manage all Ansible Automation Platform custom resources in your cluster.
If you need multiple operator versions in the same cluster, you must use the namespace-scoped operator. The operator and the deployment share the same namespace. This can also be helpful when debugging because the operator logs pertain to custom resources in that namespace only.
For help with installing a namespace or cluster-scoped operator see the following procedure.
You cannot deploy Ansible Automation Platform in the default namespace on your OpenShift cluster. The aap namespace is recommended. You can use a custom namespace, but it should run only Ansible Automation Platform.
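For example, you can create the recommended namespace from the CLI before installing the operator:
$ oc new-project aap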
Prerequisites
- You have installed the Red Hat Ansible Automation Platform catalog in OperatorHub.
- You have created a StorageClass object for your platform and a persistent volume claim (PVC) with ReadWriteMany access mode. See Dynamic provisioning for details. A sketch of such a PVC follows this list.
- To run Red Hat OpenShift Container Platform clusters on Amazon Web Services (AWS) with ReadWriteMany access mode, you must add NFS or other storage.
  - For information about the AWS Elastic Block Store (EBS) or to use the aws-ebs storage class, see Persistent storage using AWS Elastic Block Store.
  - To use multi-attach ReadWriteMany access mode for AWS EBS, see Attaching a volume to multiple instances with Amazon EBS Multi-Attach.
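The PVC sketch referenced above is illustrative only; the claim name, namespace, size, and storage class are assumptions you must replace with your own values:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-rwx-pvc                      # illustrative name
  namespace: aap
spec:
  accessModes:
    - ReadWriteMany                          # required for shared content storage
  storageClassName: <your-read-write-many-storage-class>
  resources:
    requests:
      storage: 10Gi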
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to → .
- Search for Ansible Automation Platform and click .
Select an Update Channel:
- stable-2.x: installs a namespace-scoped operator, which limits deployments of automation hub and automation controller instances to the namespace the operator is installed in. This is suitable for most cases. The stable-2.x channel does not require administrator privileges and uses fewer resources because it monitors only a single namespace.
- stable-2.x-cluster-scoped: installs the Ansible Automation Platform Operator in a single namespace that manages Ansible Automation Platform custom resources and deployments in all namespaces. The Ansible Automation Platform Operator requires administrator privileges for all namespaces in the cluster.
- Select Installation Mode, Installed Namespace, and Approval Strategy.
- Click .
The installation process begins. When installation finishes, a modal appears notifying you that the Ansible Automation Platform Operator is installed in the specified namespace.
Verification
- Click to view your newly installed Ansible Automation Platform Operator and verify that the following operator custom resources are present: Automation controller, Automation hub, Event-Driven Ansible (EDA), and Red Hat Ansible Lightspeed.
1.4. Installing Red Hat Ansible Automation Platform Operator from the Red Hat OpenShift Container Platform CLI
Use these instructions to install the Ansible Automation Platform Operator on Red Hat OpenShift Container Platform from the OpenShift Container Platform command-line interface (CLI) by using the oc command.
1.4.1. Prerequisites
- Access to Red Hat OpenShift Container Platform using an account with operator installation permissions.
- The OpenShift Container Platform CLI oc command is installed on your local system. Refer to Installing the OpenShift CLI in the Red Hat OpenShift Container Platform product documentation for further information.
1.4.2. Installing the Ansible Automation Platform Operator in a namespace
Use this procedure to subscribe a namespace to an operator.
You cannot deploy Ansible Automation Platform in the default namespace on your OpenShift cluster. The aap namespace is recommended. You can use a custom namespace, but it should run only Ansible Automation Platform.
Procedure
Create a project for the operator.
oc new-project ansible-automation-platform
- Create a file called sub.yaml and add the following YAML code to it:
---
apiVersion: v1
kind: Namespace
metadata:
  labels:
    openshift.io/cluster-monitoring: "true"
  name: ansible-automation-platform
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: ansible-automation-platform-operator
  namespace: ansible-automation-platform
spec:
  targetNamespaces:
    - ansible-automation-platform
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ansible-automation-platform
  namespace: ansible-automation-platform
spec:
  channel: 'stable-2.5'
  installPlanApproval: Automatic
  name: ansible-automation-platform-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
---
This file creates a Subscription object called ansible-automation-platform that subscribes the ansible-automation-platform namespace to the ansible-automation-platform-operator operator.
Run the oc apply command to create the objects specified in the sub.yaml file:
$ oc apply -f sub.yaml
Verify that the CSV PHASE reports Succeeded before proceeding, by using the oc get csv -n ansible-automation-platform command:
$ oc get csv -n ansible-automation-platform
NAME                               DISPLAY                       VERSION              REPLACES                           PHASE
aap-operator.v2.5.0-0.1728520175   Ansible Automation Platform   2.5.0+0.1728520175   aap-operator.v2.5.0-0.1727875185   Succeeded
Create an AnsibleAutomationPlatform object called example in the ansible-automation-platform namespace.
To change the name of the Ansible Automation Platform instance and its components from example, edit the name field in the metadata: section and replace example with the name you want to use:
oc apply -f - <<EOF
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: example
  namespace: ansible-automation-platform
spec:
  # Platform
  image_pull_policy: IfNotPresent
  # Components
  controller:
    disabled: false
  eda:
    disabled: false
  hub:
    disabled: false
    ## Modify to contain your RWM storage class name
    storage_type: file
    file_storage_storage_class: <your-read-write-many-storage-class>
    file_storage_size: 10Gi
    ## uncomment if using S3 storage for Content pod
    # storage_type: S3
    # object_storage_s3_secret: example-galaxy-object-storage
    ## uncomment if using Azure storage for Content pod
    # storage_type: azure
    # object_storage_azure_secret: azure-secret-name
  lightspeed:
    disabled: true
EOF
For further information about subscribing namespaces to operators, see Installing from OperatorHub using the CLI in the Red Hat OpenShift Container Platform Operators guide.
You can use the OpenShift Container Platform CLI to fetch the web address and the password of the platform gateway that you created.
1.4.3. Fetching platform gateway login details from the OpenShift Container Platform CLI
To log in to the platform gateway, you need the web address and the password.
1.4.3.1. Fetching the platform gateway web address
A Red Hat OpenShift Container Platform route exposes a service at a host name, so that external clients can reach it by name. When you created the platform gateway instance, a route was created for it. The route inherits the name that you assigned to the platform gateway object in the YAML file.
Use the following command to fetch the routes:
oc get routes -n <platform_namespace>
In the following example, the example platform gateway is running in the ansible-automation-platform namespace.
$ oc get routes -n ansible-automation-platform
NAME      HOST/PORT                                               PATH   SERVICES          PORT   TERMINATION     WILDCARD
example   example-ansible-automation-platform.apps-crc.testing           example-service   http   edge/Redirect   None
The address for the platform gateway instance is example-ansible-automation-platform.apps-crc.testing.
1.4.3.2. Fetching the platform gateway password
The YAML block for the platform gateway instance in the AnsibleAutomationPlatform object assigns values to the name and admin_user keys. Use these values in the following command to fetch the password for the platform gateway instance.
oc get secret/<your instance name>-<admin_user>-password -o yaml
The default value for admin_user is admin. Modify the command if you changed the admin username in the AnsibleAutomationPlatform object.
The following example retrieves the password for a platform gateway object called example:
oc get secret/example-admin-password -o yaml
The base64 encoded password for the platform gateway instance is listed in the data field in the output:
$ oc get secret/example-admin-password -o yaml
apiVersion: v1
data:
  password: ODzLODzLODzLODzLODzLODzLODzLODzLODzLODzLODzL
kind: Secret
metadata:
  labels:
    app.kubernetes.io/component: aap
    app.kubernetes.io/name: example
    app.kubernetes.io/operator-version: ""
    app.kubernetes.io/part-of: example
  name: example-admin-password
  namespace: ansible-automation-platform
1.4.3.3. Decoding the platform gateway password
After you have found your gateway password, you must decode it from base64.
Run the following command to decode your password from base64:
oc get secret/example-admin-password -o jsonpath={.data.password} | base64 --decode
1.4.4. Additional resources
- For more information on running operators on OpenShift Container Platform, navigate to the OpenShift Container Platform product documentation and click the Operators - Working with Operators in OpenShift Container Platform guide.
Chapter 2. Configuring the Red Hat Ansible Automation Platform Operator on Red Hat OpenShift Container Platform
As a namespace administrator, you can use Ansible Automation Platform gateway to manage new Ansible Automation Platform components in your OpenShift environment.
The Ansible Automation Platform gateway uses the Ansible Automation Platform custom resource to manage and integrate the following Ansible Automation Platform components into a unified user interface:
- Automation controller
- Automation hub
- Event-Driven Ansible
- Red Hat Ansible Lightspeed (This feature is disabled by default, you must opt in to use it.)
Before you can deploy the platform gateway you must have Ansible Automation Platform Operator installed in a namespace. If you have not installed Ansible Automation Platform Operator see Installing the Red Hat Ansible Automation Platform Operator on Red Hat OpenShift Container Platform.
Platform gateway is only available under Ansible Automation Platform Operator version 2.5. Every component deployed under Ansible Automation Platform Operator 2.5 defaults to version 2.5.
If you have the Ansible Automation Platform Operator and some or all of the Ansible Automation Platform components installed see Deploying the platform gateway with existing Ansible Automation Platform components for how to proceed.
2.1. Linking your components to the platform gateway
After installing the Ansible Automation Platform Operator in your namespace, you can set up your Ansible Automation Platform instance and link all the platform components to a single user interface.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to → .
- Select your Ansible Automation Platform Operator deployment.
- Select the Details tab.
- On the Ansible Automation Platform tile click .
- From the Create Ansible Automation Platform page enter a name for your instance in the Name field.
- Click and paste the following:
spec:
  database:
    resource_requirements:
      requests:
        cpu: 200m
        memory: 512Mi
    storage_requirements:
      requests:
        storage: 100Gi
  controller:
    disabled: false
  eda:
    disabled: false
  hub:
    disabled: false
    storage_type: file
    file_storage_storage_class: <read-write-many-storage-class>
    file_storage_size: 10Gi
- Click .
Verification
Go to your Ansible Automation Platform Operator deployment and click to verify whether all instances deployed correctly. You should see the Ansible Automation Platform instance and the deployed AutomationController, EDA, and AutomationHub instances there.
Alternatively, you can check from the command line by running: oc get route
2.2. Deploying the platform gateway with existing Ansible Automation Platform components
You can link any components of Ansible Automation Platform that you have already installed to a new Ansible Automation Platform instance.
The following procedure simulates a scenario where you have automation controller as an existing component and want to add automation hub and Event-Driven Ansible.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to → .
- Select your Ansible Automation Platform Operator deployment.
- Click and edit your Update channel to stable-2.5.
- Click and on the Ansible Automation Platform tile click .
- From the Create Ansible Automation Platform page, enter a name for your instance in the Name field.
- When deploying an Ansible Automation Platform instance, ensure that auto_update is set to the default value of false on your existing automation controller instance in order for the integration to work.
- Click and copy in the following:
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: example-aap
  namespace: aap
spec:
  database:
    resource_requirements:
      requests:
        cpu: 200m
        memory: 512Mi
    storage_requirements:
      requests:
        storage: 100Gi
  # Platform
  image_pull_policy: IfNotPresent
  # Components
  controller:
    disabled: false
    name: existing-controller-name
  eda:
    disabled: false
  hub:
    disabled: false
    ## uncomment if using file storage for Content pod
    storage_type: file
    file_storage_storage_class: <your-read-write-many-storage-class>
    file_storage_size: 10Gi
    ## uncomment if using S3 storage for Content pod
    # storage_type: S3
    # object_storage_s3_secret: example-galaxy-object-storage
    ## uncomment if using Azure storage
- For new components, if you do not specify a name, a default name is generated.
- Click .
- To access your new instance, see Accessing the platform gateway.
If you have an existing controller with a managed Postgres pod, your automation controller instance continues to use that original Postgres pod after you create the Ansible Automation Platform resource. If you were to do a fresh install, you would have a single managed Postgres pod for all instances.
2.3. Accessing the platform gateway
You should use the Ansible Automation Platform instance as your default. This instance links the automation controller, automation hub, and Event-Driven Ansible deployments to a single interface.
Procedure
To access your Ansible Automation Platform instance:
- Log in to Red Hat OpenShift Container Platform.
- Navigate to →
- Click the link under Location for Ansible Automation Platform.
- This redirects you to the Ansible Automation Platform login page. Enter "admin" as your username in the Username field.
For the password you need to:
- Go to → .
- Click and copy the password.
- Paste the password into the Password field.
- Click .
Apply your subscription:
- Click or .
- Upload your manifest or enter your username and password.
- Select your subscription from the Subscription list.
- Click . This redirects you to the Analytics page.
- Click .
- Select the I agree to the terms of the license agreement checkbox.
- Click .
You now have access to the platform gateway user interface. If you cannot access Ansible Automation Platform, see Frequently asked questions on platform gateway for help with troubleshooting and debugging.
Chapter 3. Configuring Red Hat Ansible Automation Platform components on Red Hat Ansible Automation Platform Operator
After you have installed Ansible Automation Platform Operator and set up your Ansible Automation Platform components, you can configure them for your desired output.
3.1. Configuring platform gateway on Red Hat OpenShift Container Platform web console
You can use these instructions to further configure the platform gateway operator on Red Hat OpenShift Container Platform, specify custom resources, and deploy Ansible Automation Platform with an external database.
3.1.1. Configuring an external database for platform gateway on Red Hat Ansible Automation Platform Operator
There are two scenarios for deploying Ansible Automation Platform with an external database:
Scenario | Action required
---|---
Fresh install | You must specify a single external database instance for the platform to use. See the aap-configuring-external-db-all-default-components.yml example in the 14.1. Custom resources section for help with this. If using Red Hat Ansible Lightspeed, use the aap-configuring-external-db-with-lightspeed-enabled.yml example.
Existing external database in 2.4 | Your existing external database remains the same after upgrading, but you must specify the database configuration secret on your Ansible Automation Platform custom resource.
To deploy Ansible Automation Platform with an external database, you must first create a Kubernetes secret with credentials for connecting to the database.
By default, the Ansible Automation Platform Operator automatically creates and configures a managed PostgreSQL pod in the same namespace as your Ansible Automation Platform deployment. You can deploy Ansible Automation Platform with an external database instead of the managed PostgreSQL pod that the Ansible Automation Platform Operator automatically creates.
Using an external database lets you share and reuse resources and manually manage backups, upgrades, and performance optimizations.
The same external database (PostgreSQL instance) can be used for automation hub, automation controller, and platform gateway, as long as the database names are different. In other words, you can have multiple databases with different names inside a single PostgreSQL instance.
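For example, you could prepare separately named databases for each component on the external instance; the host, role, and database names here are illustrative:
$ psql -h <external_host> -U postgres -c 'CREATE DATABASE gateway_db OWNER aap_user;'
$ psql -h <external_host> -U postgres -c 'CREATE DATABASE controller_db OWNER aap_user;'
$ psql -h <external_host> -U postgres -c 'CREATE DATABASE hub_db OWNER aap_user;'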
The following section outlines the steps to configure an external database for your platform gateway on the Ansible Automation Platform Operator.
Prerequisite
The external database must be a PostgreSQL database that is the version supported by the current release of Ansible Automation Platform.
Ansible Automation Platform 2.5 supports PostgreSQL 15.
Procedure
The external PostgreSQL instance credentials and connection information must be stored in a secret, which is then set on the platform gateway spec.
Create a postgres_configuration_secret YAML file, following the template below:
apiVersion: v1
kind: Secret
metadata:
  name: external-postgres-configuration
  namespace: <target_namespace>                          1
stringData:
  host: "<external_ip_or_url_resolvable_by_the_cluster>" 2
  port: "<external_port>"                                3
  database: "<desired_database_name>"
  username: "<username_to_connect_as>"
  password: "<password_to_connect_with>"                 4
  type: "unmanaged"
type: Opaque
1. Namespace to create the secret in. This should be the same namespace you want to deploy to.
2. The resolvable hostname for your database node.
3. External port defaults to 5432.
4. The value for the password variable must not contain single or double quotes (', ") or backslashes (\) to avoid any issues during deployment, backup, or restoration.
Apply external-postgres-configuration-secret.yml to your cluster by using the oc create command:
$ oc create -f external-postgres-configuration-secret.yml
Note: The following example is for a platform gateway deployment. To configure an external database for all components, use the aap-configuring-external-db-all-default-components.yml example in the 14.1. Custom resources section.
When creating your AnsibleAutomationPlatform custom resource object, specify the secret on your spec, following the example below:
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: example-aap
  namespace: aap
spec:
  database:
    database_secret: automation-platform-postgres-configuration
3.1.2. Troubleshooting an external database with an unexpected DateStyle set
When upgrading the Ansible Automation Platform Operator, you may encounter an error like the following:
NotImplementedError: can't parse timestamptz with DateStyle 'Redwood, SHOW_TIME': '18-MAY-23 20:33:55.765755 +00:00'
Errors like this occur when you have an external database with an unexpected DateStyle set. You can refer to the following steps to resolve this issue.
Procedure
Edit the /var/lib/pgsql/data/postgresql.conf file on the database server:
# vi /var/lib/pgsql/data/postgresql.conf
Find and comment out the line:
#datestyle = 'Redwood, SHOW_TIME'
Add the following setting immediately below the newly commented line:
datestyle = 'iso, mdy'
- Save and close the postgresql.conf file.
file. Reload the database configuration:
Copy to Clipboard Copied! Toggle word wrap Toggle overflow systemctl reload postgresql
# systemctl reload postgresql
NoteRunning this command does not disrupt database operations.
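You can confirm the active setting afterwards by querying the server, for example:
$ psql -c 'SHOW datestyle;'
 DateStyle
-----------
 ISO, MDY
(1 row)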
3.1.3. Enabling HTTPS redirect for single sign-on (SSO) for platform gateway on OpenShift Container Platform
HTTPS redirect for SAML allows you to log in once and access all of the platform gateway without needing to reauthenticate.
Prerequisites
- You have successfully configured SAML in the gateway from the Ansible Automation Platform Operator. Refer to Configuring SAML authentication for help with this.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Go to → .
- Select your Ansible Automation Platform Operator deployment.
- Select All Instances and go to your AnsibleAutomationPlatform instance.
- Click the ⋮ icon and then select .
In the YAML view, paste the following YAML code under the spec: section:
spec:
  extra_settings:
    - setting: REDIRECT_IS_HTTPS
      value: '"True"'
- Click .
Verification
After you have added the REDIRECT_IS_HTTPS setting, wait for the pod to redeploy automatically. You can verify that this setting makes it into the pod by running:
oc exec -it <gateway-pod-name> -- grep REDIRECT /etc/ansible-automation-platform/gateway/settings.py
3.1.4. Configuring your CSRF settings for your platform gateway Operator ingress
The Red Hat Ansible Automation Platform Operator creates OpenShift Routes and configures your Cross-site request forgery (CSRF) settings automatically. When using external ingress, you must configure your CSRF on the ingress to allow for cross-site requests. You can configure your platform gateway operator ingress under Advanced configuration.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to → .
- Select your Ansible Automation Platform Operator deployment.
- Select the Ansible Automation Platform tab.
- For new instances, click .
- For existing instances, you can edit the YAML view by clicking the ⋮ icon and then .
- Click .
- Under Ingress annotations, enter any annotations to add to the ingress.
- Under Ingress TLS secret, click the drop-down list and select a secret from the list.
Under YAML view, paste in the following code:
spec:
  extra_settings:
    - setting: CSRF_TRUSTED_ORIGINS
      value:
        - https://my-aap-domain.com
- After you have configured your platform gateway, click at the bottom of the form view (or, in the case of editing existing instances).
Red Hat OpenShift Container Platform creates the pods. This may take a few minutes. You can view the progress by navigating to → and locating the newly created instance.
Verification
Verify that the following operator pods provided by the Red Hat Ansible Automation Platform Operator installation from platform gateway are running:
Operator manager controllers pods | Automation controller pods | Automation hub pods | Event-Driven Ansible (EDA) pods | platform gateway pods
---|---|---|---|---
The operator manager controllers for each of the four operators include the following: | After deploying automation controller, you can see the addition of the following pods: | After deploying automation hub, you can see the addition of the following pods: | After deploying EDA, you can see the addition of the following pods: | After deploying platform gateway, you can see the addition of the following pods:
A missing pod can indicate the need for a pull secret. Pull secrets are required for protected or private image registries. See Using image pull secrets for more information. You can diagnose this issue further by running oc describe pod <pod-name> to see if there is an ImagePullBackOff error on that pod.
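If a missing pull secret is the cause, one way to create one is with the oc create secret docker-registry command; the secret name and credentials here are illustrative:
$ oc create secret docker-registry my-pull-secret \
    --docker-server=registry.redhat.io \
    --docker-username=<username> \
    --docker-password=<password> \
    -n ansible-automation-platform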
3.1.5. Frequently asked questions on platform gateway
- If I delete my Ansible Automation Platform deployment will I still have access to automation controller?
- No, automation controller, automation hub, and Event-Driven Ansible are nested within the deployment and are also deleted.
- Something went wrong with my deployment but I’m not sure what, how can I find out?
- You can follow along in the command line while the operator is reconciling; this can be helpful for debugging. Alternatively, you can click into the deployment instance to see the status conditions being updated as the deployment goes on.
- Is it still possible to view individual component logs?
- When troubleshooting you should examine the Ansible Automation Platform instance for the main logs and then each individual component (EDA, AutomationHub, AutomationController) for more specific information.
- Where can I view the condition of an instance?
- To display status conditions, click into the instance and look under the Details or Events tab. Alternatively, you can display the status conditions by running the get command:
oc get automationcontroller <instance-name> -o json | jq
- Can I track my migration in real time?
-
To help track the status of the migration or to understand why migration might have failed you can look at the migration logs as they are running. Use the logs command:
oc logs fresh-install-controller-migration-4.6.0-jwfm6 -f
- I have configured my SAML but authentication fails with this error: "Unable to complete social auth login" What can I do?
- You must update your Ansible Automation Platform instance to include the REDIRECT_IS_HTTPS extra setting. See Enabling HTTPS redirect for single sign-on (SSO) for platform gateway on OpenShift Container Platform for help with this.
3.2. Configuring automation controller on Red Hat OpenShift Container Platform web console
You can use these instructions to configure the automation controller operator on Red Hat OpenShift Container Platform, specify custom resources, and deploy Ansible Automation Platform with an external database.
Automation controller configuration can be done through the automation controller extra_settings or directly in the user interface after deployment. However, it is important to note that configurations made in extra_settings take precedence over settings made in the user interface.
When an instance of automation controller is removed, the associated PVCs are not automatically deleted. This can cause issues during migration if the new deployment has the same name as the previous one. Therefore, it is recommended that you manually remove old PVCs before deploying a new automation controller instance in the same namespace. See Finding and deleting PVCs for more information.
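A minimal sketch of that cleanup from the CLI, assuming your deployment lives in the ansible-automation-platform namespace; the PVC name is a placeholder for whichever claims the old instance left behind:
$ oc get pvc -n ansible-automation-platform
$ oc delete pvc <old-instance-pvc-name> -n ansible-automation-platform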
3.2.1. Prerequisites
- You have installed the Red Hat Ansible Automation Platform catalog in Operator Hub.
- For automation controller, a default StorageClass must be configured on the cluster for the operator to dynamically create needed PVCs. This is not necessary if an external PostgreSQL database is configured.
- For automation hub, a StorageClass that supports ReadWriteMany must be available on the cluster to dynamically create the PVCs needed for the content, redis, and api pods. If it is not the default StorageClass on the cluster, you can specify it when creating your AutomationHub object.
3.2.1.1. Configuring your controller image pull policy
Use this procedure to configure the image pull policy on your automation controller.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Go to → .
- Select your Ansible Automation Platform Operator deployment.
- Select the Automation Controller tab.
- For new instances, click .
- For existing instances, you can edit the YAML view by clicking the ⋮ icon and then .
Under Image Pull Policy, click the radio button to select one of the following:
- Always
- Never
- IfNotPresent
To display the option under Image Pull Secrets, click the arrow.
- Click beside Add Image Pull Secret and enter a value.
To display fields under the Web container resource requirements drop-down list, click the arrow.
- Under Limits, and Requests, enter values for CPU cores, Memory, and Storage.
To display fields under the Task container resource requirements drop-down list, click the arrow.
- Under Limits, and Requests, enter values for CPU cores, Memory, and Storage.
To display fields under the EE Control Plane container resource requirements drop-down list, click the arrow.
- Under Limits, and Requests, enter values for CPU cores, Memory, and Storage.
To display fields under the PostgreSQL init container resource requirements (when using a managed service) drop-down list, click the arrow.
- Under Limits, and Requests, enter values for CPU cores, Memory, and Storage.
To display fields under the Redis container resource requirements drop-down list, click the arrow.
- Under Limits, and Requests, enter values for CPU cores, Memory, and Storage.
To display fields under the PostgreSQL container resource requirements (when using a managed instance) drop-down list, click the arrow.
- Under Limits, and Requests, enter values for CPU cores, Memory, and Storage.
To display the PostgreSQL container storage requirements (when using a managed instance) drop-down list, click the arrow.
- Under Limits, and Requests, enter values for CPU cores, Memory, and Storage.
- Under Replicas, enter the number of instance replicas.
- Under Remove used secrets on instance removal, select true or false. The default is false.
- Under Preload instance with data upon creation, select true or false. The default is true.
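The same settings can also be declared directly on the AutomationController custom resource instead of through the form. The following is a hedged sketch, assuming the operator's usual spec field names (image_pull_policy, image_pull_secrets, replicas, web_resource_requirements); the instance name, namespace, and values are illustrative:
apiVersion: automationcontroller.ansible.com/v1beta1
kind: AutomationController
metadata:
  name: example
  namespace: aap
spec:
  image_pull_policy: IfNotPresent
  image_pull_secrets:
    - my-pull-secret              # illustrative secret name
  replicas: 1
  web_resource_requirements:
    requests:
      cpu: 100m
      memory: 128Mi
    limits:
      cpu: "1"
      memory: 4Gi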
3.2.1.2. Configuring your controller LDAP security
You can configure your LDAP SSL configuration for automation controller through any of the following options:
- The automation controller user interface.
- The platform gateway user interface. See the Configuring LDAP authentication section of the Access management and authentication guide for additional steps.
- The following procedure steps.
Procedure
Create a secret in your Ansible Automation Platform namespace for the bundle-ca.crt file (the filename must be bundle-ca.crt):
$ oc create secret -n aap-namespace generic bundle-ca-secret --from-file=bundle-ca.crt
Add the bundle_cacert_secret to the Ansible Automation Platform custom resource:
...
spec:
  bundle_cacert_secret: bundle-ca-secret
...
Verification
You can verify the expected certificate by running:
oc exec -it deployment.apps/aap-gateway -- openssl x509 -in /etc/pki/tls/certs/bundle-ca.crt -noout -text
3.2.1.3. Configuring your automation controller operator route options
The Red Hat Ansible Automation Platform operator installation form allows you to further configure your automation controller operator route options under Advanced configuration.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to → .
- Select your Ansible Automation Platform Operator deployment.
- Select the Automation Controller tab.
- For new instances, click .
- For existing instances, you can edit the YAML view by clicking the ⋮ icon and then .
- Click .
- Under Ingress type, click the drop-down menu and select Route.
- Under Route DNS host, enter a common host name that the route answers to.
- Under Route TLS termination mechanism, click the drop-down menu and select Edge or Passthrough. For most instances Edge should be selected.
- Under Route TLS credential secret, click the drop-down menu and select a secret from the list.
- Under Enable persistence for /var/lib/projects directory select either true or false by moving the slider.
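These form fields correspond to spec fields on the AutomationController resource. The following is a sketch assuming the commonly used field names (ingress_type, route_host, route_tls_termination_mechanism, route_tls_secret, projects_persistence); the host and secret name are illustrative:
spec:
  ingress_type: Route
  route_host: controller.apps.example.com      # illustrative host
  route_tls_termination_mechanism: Edge
  route_tls_secret: my-route-tls-secret        # illustrative secret name
  projects_persistence: true                   # persists /var/lib/projects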
3.2.1.4. Configuring the ingress type for your automation controller operator
The Ansible Automation Platform Operator installation form allows you to further configure your automation controller operator ingress under Advanced configuration.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to Operators → Installed Operators.
- Select your Ansible Automation Platform Operator deployment.
- Select the Automation Controller tab.
- For new instances, click Create AutomationController.
- For existing instances, you can edit the YAML view by clicking the ⋮ icon and then Edit AutomationController.
- Click Advanced configuration.
- Under Ingress type, click the drop-down menu and select Ingress.
- Under Ingress annotations, enter any annotations to add to the ingress.
- Under Ingress TLS secret, click the drop-down menu and select a secret from the list.
After you have configured your automation controller operator, click Create at the bottom of the form view. Red Hat OpenShift Container Platform creates the pods. This may take a few minutes.

You can view the progress by navigating to Workloads → Pods and locating the newly created instance.

Verification
Verify that the following operator pods provided by the Ansible Automation Platform Operator installation from automation controller are running:
- Operator manager controllers: the operator manager controllers for each of the three operators.
- Automation controller: after deploying automation controller, you can see the addition of automation controller pods.
- Automation hub: after deploying automation hub, you can see the addition of automation hub pods.
- Event-Driven Ansible (EDA): after deploying EDA, you can see the addition of EDA pods.
A missing pod can indicate the need for a pull secret. Pull secrets are required for protected or private image registries. See Using image pull secrets for more information. You can diagnose this issue further by running oc describe pod <pod-name>
to see if there is an ImagePullBackOff error on that pod.
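If a missing pull secret is the cause, a hedged sketch of creating one and linking it for pulls follows; the secret name and credentials are placeholders:

$ oc create secret docker-registry redhat-registry-secret \
    --docker-server=registry.redhat.io \
    --docker-username=<registry_username> \
    --docker-password=<registry_password> \
    -n <namespace>
$ oc secrets link default redhat-registry-secret --for=pull -n <namespace>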
3.2.2. Configuring an external database for automation controller on Red Hat Ansible Automation Platform Operator
If you prefer to deploy Ansible Automation Platform with an external database, you can do so by configuring a secret with instance credentials and connection information, then applying it to your cluster using the oc create command.
By default, the Ansible Automation Platform Operator automatically creates and configures a managed PostgreSQL pod in the same namespace as your Ansible Automation Platform deployment. You can deploy Ansible Automation Platform with an external database instead of the managed PostgreSQL pod that the Ansible Automation Platform Operator automatically creates.
Using an external database lets you share and reuse resources and manually manage backups, upgrades, and performance optimizations.
The same external database (PostgreSQL instance) can be used for automation hub, automation controller, and platform gateway as long as the database names are different. In other words, you can have multiple databases with different names inside a single PostgreSQL instance.
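For example, from a psql session with sufficient privileges on the shared instance, you might create separate databases such as the following; the database names are illustrative only:

$ psql -h <external_host> -U <admin_username> -c "CREATE DATABASE controller_db;"
$ psql -h <external_host> -U <admin_username> -c "CREATE DATABASE hub_db;"
$ psql -h <external_host> -U <admin_username> -c "CREATE DATABASE gateway_db;"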
The following section outlines the steps to configure an external database for your automation controller on the Ansible Automation Platform Operator.
Prerequisite
The external database must be a PostgreSQL database that is the version supported by the current release of Ansible Automation Platform.
Ansible Automation Platform 2.5 supports PostgreSQL 15.
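You can confirm the server version before proceeding. This sketch assumes network access to the database and valid credentials:

$ psql -h <external_host> -p <external_port> -U <username> -d <database> -c "SHOW server_version;"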
Procedure
The external PostgreSQL instance credentials and connection information must be stored in a secret, which is then set on the automation controller spec.
Create a postgres_configuration_secret YAML file, following the template below:

apiVersion: v1
kind: Secret
metadata:
  name: external-postgres-configuration
  namespace: <target_namespace> 1
stringData:
  host: "<external_ip_or_url_resolvable_by_the_cluster>" 2
  port: "<external_port>" 3
  database: "<desired_database_name>"
  username: "<username_to_connect_as>"
  password: "<password_to_connect_with>" 4
  sslmode: "prefer" 5
  type: "unmanaged"
type: Opaque

1 The namespace to create the secret in. This should be the same namespace you want to deploy to.
2 The resolvable hostname for your database node.
3 The external port defaults to 5432.
4 The value for password must not contain single or double quotes (', ") or backslashes (\) to avoid issues during deployment, backup, or restore.
5 The sslmode variable is valid for external databases only. The allowed values are: prefer, disable, allow, require, verify-ca, and verify-full.
Apply external-postgres-configuration-secret.yml to your cluster using the oc create command:

$ oc create -f external-postgres-configuration-secret.yml
When creating your AutomationController custom resource object, specify the secret on your spec, following the example below:

apiVersion: automationcontroller.ansible.com/v1beta1
kind: AutomationController
metadata:
  name: controller-dev
spec:
  postgres_configuration_secret: external-postgres-configuration
3.2.3. Finding and deleting PVCs
A persistent volume claim (PVC) is a storage volume used to store data that automation hub and automation controller applications use. These PVCs are independent from the applications and remain even when the application is deleted. If you are confident that you no longer need a PVC, or have backed it up elsewhere, you can manually delete it.
Procedure
List the existing PVCs in your deployment namespace:

oc get pvc -n <namespace>
- Identify the PVC associated with your previous deployment by comparing the old deployment name and the PVC name.
Delete the old PVC:

oc delete pvc -n <namespace> <pvc-name>
3.2.4. Additional resources
- For more information on running operators on OpenShift Container Platform, navigate to the OpenShift Container Platform product documentation and click the Operators - Working with Operators in OpenShift Container Platform guide.
3.3. Configuring automation hub on Red Hat OpenShift Container Platform web console
You can use these instructions to configure the automation hub operator on Red Hat OpenShift Container Platform, specify custom resources, and deploy Ansible Automation Platform with an external database.
Automation hub configuration can be done through the automation hub pulp_settings or directly in the user interface after deployment. However, it is important to note that configurations made in pulp_settings take precedence over settings made in the user interface. Hub settings should always be set as lowercase on the Hub custom resource specification.
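For example, the following spec fragment sets a lowercase pulp setting on the Hub custom resource, using the ansible_collect_download_count setting described later in this chapter:

spec:
  pulp_settings:
    # keys must be lowercase on the Hub custom resource spec
    ansible_collect_download_count: true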
When an instance of automation hub is removed, the PVCs are not automatically deleted. This can cause issues during migration if the new deployment has the same name as the previous one. Therefore, it is recommended that you manually remove old PVCs before deploying a new automation hub instance in the same namespace. See Finding and deleting PVCs for more information.
3.3.1. Prerequisites
- You have installed the Ansible Automation Platform Operator in Operator Hub.
3.3.1.1. Storage options for Ansible Automation Platform Operator installation on Red Hat OpenShift Container Platform
Automation hub requires ReadWriteMany file-based storage, Azure Blob storage, or Amazon S3-compliant storage for operation so that multiple pods can access shared content, such as collections.

The process for configuring object storage on the AutomationHub CR is similar for Amazon S3 and Azure Blob Storage.

If you are using file-based storage and your installation scenario includes automation hub, ensure that the storage option for Ansible Automation Platform Operator is set to ReadWriteMany. ReadWriteMany is the default storage option.

In addition, OpenShift Data Foundation provides a ReadWriteMany or S3-compliant implementation. Also, you can set up NFS storage configuration to support ReadWriteMany. This, however, introduces the NFS server as a potential single point of failure.
Additional resources
- Persistent storage using NFS in the OpenShift Container Platform Storage guide
- IBM’s How do I create a storage class for NFS dynamic storage provisioning in an OpenShift environment?
3.3.1.1.1. Provisioning OCP storage with ReadWriteMany access mode
To ensure successful installation of Ansible Automation Platform Operator, you must provision your storage type for automation hub initially to ReadWriteMany access mode.
Procedure
- Go to Storage → PersistentVolumeClaims.
- Click Create PersistentVolumeClaim.
- In the first step, update the accessModes from the default ReadWriteOnce to ReadWriteMany.
  - See Provisioning to update the access mode for a detailed overview.
- Complete the additional steps in this section to create the persistent volume claim (PVC), as in the sketch that follows.
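A minimal sketch of a PVC with the ReadWriteMany access mode follows; the claim name, size, and storage class are placeholders for your environment:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: automation-hub-storage
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: <your-rwx-storage-class>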
3.3.1.1.2. Configuring object storage on Amazon S3
Red Hat supports Amazon Simple Storage Service (S3) for automation hub. You can configure it when deploying the AutomationHub custom resource (CR), or you can configure it for an existing instance.
Prerequisites
- Create an Amazon S3 bucket to store the objects.
- Note the name of the S3 bucket.
Procedure
Create a Kubernetes secret containing the AWS credentials and connection details, and the name of your Amazon S3 bucket. The following example creates a secret called test-s3:

$ oc -n $HUB_NAMESPACE apply -f- <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: 'test-s3'
stringData:
  s3-access-key-id: $S3_ACCESS_KEY_ID
  s3-secret-access-key: $S3_SECRET_ACCESS_KEY
  s3-bucket-name: $S3_BUCKET_NAME
  s3-region: $S3_REGION
EOF
Add the secret to the automation hub custom resource (CR) spec:

spec:
  object_storage_s3_secret: test-s3
- If you are applying this secret to an existing instance, restart the API pods for the change to take effect. <hub-name> is the name of your hub instance.

  $ oc -n $HUB_NAMESPACE delete pod -l app.kubernetes.io/name=<hub-name>-api
3.3.1.1.3. Configuring object storage on Azure Blob
Red Hat supports Azure Blob Storage for automation hub. You can configure it when deploying the AutomationHub custom resource (CR), or you can configure it for an existing instance.
Prerequisites
- Create an Azure Storage blob container to store the objects.
- Note the name of the blob container.
Procedure
Create a Kubernetes secret containing the credentials and connection details for your Azure account, and the name of your Azure Storage blob container. The following example creates a secret called test-azure:

$ oc -n $HUB_NAMESPACE apply -f- <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: 'test-azure'
stringData:
  azure-account-name: $AZURE_ACCOUNT_NAME
  azure-account-key: $AZURE_ACCOUNT_KEY
  azure-container: $AZURE_CONTAINER
  azure-container-path: $AZURE_CONTAINER_PATH
  azure-connection-string: $AZURE_CONNECTION_STRING
EOF
Add the secret to the automation hub custom resource (CR) spec:

spec:
  object_storage_azure_secret: test-azure
- If you are applying this secret to an existing instance, restart the API pods for the change to take effect. <hub-name> is the name of your hub instance.

  $ oc -n $HUB_NAMESPACE delete pod -l app.kubernetes.io/name=<hub-name>-api
3.3.1.2. Configuring your automation hub operator route options
The Red Hat Ansible Automation Platform operator installation form allows you to further configure your automation hub operator route options under Advanced configuration.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to Operators → Installed Operators.
- Select your Ansible Automation Platform Operator deployment.
- Select the Automation Hub tab.
- For new instances, click Create AutomationHub.
- For existing instances, you can edit the YAML view by clicking the ⋮ icon and then Edit AutomationHub.
- Click Advanced configuration.
- Under Ingress type, click the drop-down menu and select Route.
- Under Route DNS host, enter a common host name that the route answers to.
- Under Route TLS termination mechanism, click the drop-down menu and select Edge or Passthrough.
- Under Route TLS credential secret, click the drop-down menu and select a secret from the list.
3.3.1.3. Configuring the ingress type for your automation hub operator
The Ansible Automation Platform Operator installation form allows you to further configure your automation hub operator ingress under Advanced configuration.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to Operators → Installed Operators.
- Select your Ansible Automation Platform Operator deployment.
- Select the Automation Hub tab.
- For new instances, click Create AutomationHub.
- For existing instances, you can edit the YAML view by clicking the ⋮ icon and then Edit AutomationHub.
- Click Advanced configuration.
- Under Ingress type, click the drop-down menu and select Ingress.
- Under Ingress annotations, enter any annotations to add to the ingress.
- Under Ingress TLS secret, click the drop-down menu and select a secret from the list.
After you have configured your automation hub operator, click Create at the bottom of the form view. Red Hat OpenShift Container Platform creates the pods. This may take a few minutes.

You can view the progress by navigating to Workloads → Pods and locating the newly created instance.

Verification
Verify that the following operator pods provided by the Ansible Automation Platform Operator installation from automation hub are running:
- Operator manager controllers: the operator manager controllers for each of the three operators.
- Automation controller: after deploying automation controller, you will see the addition of automation controller pods.
- Automation hub: after deploying automation hub, you will see the addition of automation hub pods.
A missing pod can indicate the need for a pull secret. Pull secrets are required for protected or private image registries. See Using image pull secrets for more information. You can diagnose this issue further by running oc describe pod <pod-name>
to see if there is an ImagePullBackOff error on that pod.
3.3.2. Finding the automation hub route
You can access the automation hub through the platform gateway or through the following procedure.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to Networking → Routes.
- Under Location, click on the URL for your automation hub instance.
The automation hub user interface launches where you can sign in with the administrator credentials specified during the operator configuration process.
If you did not specify an administrator password during configuration, one was automatically created for you. To locate this password, go to your project, select Workloads → Secrets, and open controller-admin-password. From there you can copy the password and paste it into the Automation hub password field.

3.3.3. Configuring an external database for automation hub on Red Hat Ansible Automation Platform Operator
If you prefer to deploy Ansible Automation Platform with an external database, you can do so by configuring a secret with instance credentials and connection information, then applying it to your cluster using the oc create command.
By default, the Ansible Automation Platform Operator automatically creates and configures a managed PostgreSQL pod in the same namespace as your Ansible Automation Platform deployment.
You can choose to use an external database instead if you prefer to use a dedicated node to ensure dedicated resources or to manually manage backups, upgrades, or performance tweaks.
The same external database (PostgreSQL instance) can be used for automation hub, automation controller, and platform gateway as long as the database names are different. In other words, you can have multiple databases with different names inside a single PostgreSQL instance.
The following section outlines the steps to configure an external database for your automation hub on the Ansible Automation Platform Operator.
Prerequisite
The external database must be a PostgreSQL database that is the version supported by the current release of Ansible Automation Platform.
Ansible Automation Platform 2.5 supports PostgreSQL 15.
Procedure
The external PostgreSQL instance credentials and connection information must be stored in a secret, which is then set on the automation hub spec.
Create a postgres_configuration_secret YAML file, following the template below:

apiVersion: v1
kind: Secret
metadata:
  name: external-postgres-configuration
  namespace: <target_namespace> 1
stringData:
  host: "<external_ip_or_url_resolvable_by_the_cluster>" 2
  port: "<external_port>" 3
  database: "<desired_database_name>"
  username: "<username_to_connect_as>"
  password: "<password_to_connect_with>" 4
  sslmode: "prefer" 5
  type: "unmanaged"
type: Opaque

1 The namespace to create the secret in. This should be the same namespace you want to deploy to.
2 The resolvable hostname for your database node.
3 The external port defaults to 5432.
4 The value for password must not contain single or double quotes (', ") or backslashes (\) to avoid issues during deployment, backup, or restore.
5 The sslmode variable is valid for external databases only. The allowed values are: prefer, disable, allow, require, verify-ca, and verify-full.
Apply external-postgres-configuration-secret.yml to your cluster using the oc create command:

$ oc create -f external-postgres-configuration-secret.yml
When creating your AutomationHub custom resource object, specify the secret on your spec, following the example below:

apiVersion: automationhub.ansible.com/v1beta1
kind: AutomationHub
metadata:
  name: hub-dev
spec:
  postgres_configuration_secret: external-postgres-configuration
3.3.3.1. Enabling the hstore extension for the automation hub PostgreSQL database
Added in Ansible Automation Platform 2.5, the database migration script uses hstore fields to store information; therefore, the hstore extension must be enabled in the automation hub PostgreSQL database.

This process is automatic when using the Ansible Automation Platform installer and a managed PostgreSQL server.

If the PostgreSQL database is external, you must enable the hstore extension in the automation hub PostgreSQL database manually before installation.

If the hstore extension is not enabled before installation, a failure is raised during database migration.
Procedure
Check if the extension is available on the PostgreSQL server (automation hub database):

$ psql -d <automation hub database> -c "SELECT * FROM pg_available_extensions WHERE name='hstore'"
Where the default value for <automation hub database> is automationhub.

Example output with hstore available:

  name   | default_version | installed_version |                     comment
---------+-----------------+-------------------+---------------------------------------------------
 hstore  | 1.7             |                   | data type for storing sets of (key, value) pairs
(1 row)

Example output with hstore not available:

 name | default_version | installed_version | comment
------+-----------------+-------------------+---------
(0 rows)
On a RHEL-based server, the hstore extension is included in the postgresql-contrib RPM package, which is not installed automatically when installing the PostgreSQL server RPM package.

To install the RPM package, use the following command:

dnf install postgresql-contrib
Load the hstore PostgreSQL extension into the automation hub database with the following command:

$ psql -d <automation hub database> -c "CREATE EXTENSION hstore;"
In the following output, the installed_version field lists the hstore extension used, indicating that hstore is enabled:

  name   | default_version | installed_version |                     comment
---------+-----------------+-------------------+---------------------------------------------------
 hstore  | 1.7             | 1.7               | data type for storing sets of (key, value) pairs
(1 row)
3.3.4. Finding and deleting PVCs
A persistent volume claim (PVC) is a storage volume used to store data that automation hub and automation controller applications use. These PVCs are independent from the applications and remain even when the application is deleted. If you are confident that you no longer need a PVC, or have backed it up elsewhere, you can manually delete it.
Procedure
List the existing PVCs in your deployment namespace:

oc get pvc -n <namespace>
- Identify the PVC associated with your previous deployment by comparing the old deployment name and the PVC name.
Delete the old PVC:

oc delete pvc -n <namespace> <pvc-name>
3.3.5. Additional configurations
A collection download count can help you understand collection usage. To add a collection download count to automation hub, set the following configuration:
spec:
  pulp_settings:
    ansible_collect_download_count: true

When ansible_collect_download_count is enabled, automation hub displays a download count next to each collection.
3.3.6. Adding allowed registries to the automation controller image configuration
Before you can deploy a container image in automation hub, you must add the registry to the allowedRegistries in the automation controller image configuration. To do this, you can copy and paste the following code into your automation controller image YAML.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to Home → Search.
- Select the Resources drop-down list and type "Image".
- Select Image (config.openshift.io/v1).
- Click cluster under the Name heading.
- Select the YAML tab.
Paste in the following under spec value:
spec:
  registrySources:
    allowedRegistries:
      - quay.io
      - registry.redhat.io
      - image-registry.openshift-image-registry.svc:5000
      - <OCP route for your automation hub>
- Click Save.
3.3.7. Additional resources
- For more information on running operators on OpenShift Container Platform, navigate to the OpenShift Container Platform product documentation and click the Operators - Working with Operators in OpenShift Container Platform guide.
3.4. Deploying clustered Redis on Red Hat Ansible Automation Platform Operator
When you create an Ansible Automation Platform instance through the Ansible Automation Platform Operator, standalone Redis is assigned by default. To deploy clustered Redis, use the following procedure.
For more information about Redis, refer to Caching and queueing system in the Planning your installation guide.
Prerequisites
- You have installed an Ansible Automation Platform Operator deployment.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to Operators → Installed Operators.
- Select your Ansible Automation Platform Operator deployment.
- Select the Details tab.
- On the Ansible Automation Platform tile, click Create instance.
- For existing instances, you can edit the YAML view by clicking the ⋮ icon and then Edit AnsibleAutomationPlatform.
  - Change the redis_mode value to "cluster".
  - Click Save, then Reload.
- Click the arrow to expand Advanced configuration.
- For the Redis Mode list, select Cluster.
- Configure the rest of your instance as necessary, then click Create.
Your instance deploys with clustered Redis with six Redis replicas by default.

You can modify your automation hub default Redis cache PVC volume size. For help with this, see Modifying the default redis cache PVC volume size automation hub.
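For reference, the YAML change described above amounts to a single field on the Ansible Automation Platform custom resource; the surrounding spec fields are omitted in this sketch:

spec:
  redis_mode: cluster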
Chapter 4. Deploying the Ansible Lightspeed intelligent assistant on OpenShift Container Platform
As a system administrator, you can deploy Ansible Lightspeed intelligent assistant on Ansible Automation Platform 2.5 on OpenShift Container Platform.
4.1. Overview
The Ansible Lightspeed intelligent assistant is available on Ansible Automation Platform 2.5 on OpenShift Container Platform as a Technology Preview release. It is an intuitive chat interface embedded within the Ansible Automation Platform, utilizing generative artificial intelligence (AI) to answer questions about the Ansible Automation Platform.
The Ansible Lightspeed intelligent assistant interacts with users in their natural language prompts in English, and uses Large Language Models (LLMs) to generate quick, accurate, and personalized responses. These responses empower Ansible users to work more efficiently, thereby improving productivity and the overall quality of their work.
Ansible Lightspeed intelligent assistant requires the following configurations:
- Installation of Ansible Automation Platform 2.5 on Red Hat OpenShift Container Platform
- Deployment of an LLM served by either a Red Hat AI platform or a third-party AI platform. To know the LLM providers that you can use, see LLM providers.

Red Hat does not collect any telemetry data from your interactions with the Ansible Lightspeed intelligent assistant.
Ansible Lightspeed intelligent assistant is available as a Technology Preview feature only.
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
4.2. Prerequisites
4.2.1. Ansible Automation Platform 2.5
- You have installed Ansible Automation Platform 2.5 on your OpenShift Container Platform environment.
- You have administrator privileges for the Ansible Automation Platform.
- You have provisioned an OpenShift cluster with Operator Lifecycle Management installed.
4.2.2. Large Language Model (LLM) provider
You must configure the LLM provider that you intend to use before deploying the Ansible Lightspeed intelligent assistant.
An LLM is a type of machine learning model that can interpret and generate human-like language. When an LLM is used with the Ansible Lightspeed intelligent assistant, the LLM can interpret questions accurately and provide helpful answers in a conversational manner.
As part of the Technology Preview release, Ansible Lightspeed intelligent assistant can rely on the following Software as a Service (SaaS) LLM providers:
Red Hat LLM providers
Red Hat Enterprise Linux AI
Red Hat Enterprise Linux AI is OpenAI API-compatible and is configured in a similar manner to the OpenAI provider. You can configure Red Hat Enterprise Linux AI as the LLM provider. For more information, see the Red Hat Enterprise Linux AI product page.
Red Hat OpenShift AI
Red Hat OpenShift AI is OpenAI API-compatible and is configured in a similar manner to the OpenAI provider. You can configure Red Hat OpenShift AI as the LLM provider. For more information, see the Red Hat OpenShift AI product page.
For configurations with Red Hat Enterprise Linux AI or Red Hat OpenShift AI, you must host your own LLM provider instead of using a SaaS LLM provider.
Third-party LLM providers
IBM watsonx.ai
To use IBM watsonx with the Ansible Lightspeed intelligent assistant, you need an account with IBM watsonx.ai.
OpenAI
To use OpenAI with the Ansible Lightspeed intelligent assistant, you need access to the OpenAI API platform.
Microsoft Azure OpenAI
To use Microsoft Azure with the Ansible Lightspeed intelligent assistant, you need access to Microsoft Azure OpenAI.
4.3. Process
Perform the following tasks to set up and use the Ansible Lightspeed intelligent assistant in your Ansible Automation Platform instance on the OpenShift Container Platform environment:
- Deploy the Ansible Lightspeed intelligent assistant on OpenShift Container Platform: for an Ansible Automation Platform administrator who wants to deploy the Ansible Lightspeed intelligent assistant for all Ansible users in the organization.
- Access and use the Ansible Lightspeed intelligent assistant: for all Ansible users who want to use the intelligent assistant to get answers to their questions about the Ansible Automation Platform.
4.4. Deploying the Ansible Lightspeed intelligent assistant
This section provides information about the procedures involved in deploying the Ansible Lightspeed intelligent assistant on OpenShift Container Platform.
4.4.1. Installing and configuring the Ansible Automation Platform operator
Install and configure the Ansible Automation Platform operator on the OpenShift Container Platform, so that you can deploy and use the Ansible Lightspeed intelligent assistant.
4.4.1.1. Installing the Ansible Automation Platform operator
Install the Ansible Automation Platform operator on OpenShift Container Platform.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to Operators → OperatorHub.
- Search for Ansible Automation Platform and click Install.
Select an Update Channel:
- stable-2.x: installs a namespace-scoped operator, which limits deployments of automation hub and automation controller instances to the namespace the operator is installed in. This is suitable for most cases. The stable-2.x channel does not require administrator privileges and uses fewer resources because it only monitors a single namespace.
- stable-2.x-cluster-scoped: installs the Ansible Automation Platform Operator in a single namespace that manages Ansible Automation Platform custom resources and deployments in all namespaces. The Ansible Automation Platform Operator requires administrator privileges for all namespaces in the cluster.
- Select Installation Mode, Installed Namespace, and Approval Strategy.
- Click Install.
The installation process begins. When installation finishes, a modal appears notifying you that the Ansible Automation Platform Operator is installed in the specified namespace.
Verification
- Click View Operator to view your newly installed Ansible Automation Platform Operator and verify that the following operator custom resources are present: Automation controller, Automation hub, Event-Driven Ansible (EDA), and Red Hat Ansible Lightspeed.
- Verify that the Ansible Automation Platform operator displays a Succeeded status.
4.4.1.2. Configuring the Ansible Automation Platform operator
After installing the Ansible Automation Platform Operator in your namespace, configure the Ansible Automation Platform operator to link your components to the platform gateway.
4.4.1.2.1. Linking your components to the platform gateway
After installing the Ansible Automation Platform Operator in your namespace, you can set up your Ansible Automation Platform instance and then link all the platform components to a single user interface.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to Operators → Installed Operators.
- Select your Ansible Automation Platform Operator deployment.
- Select the Details tab.
- On the Ansible Automation Platform tile, click Create instance.
- From the Create Ansible Automation Platform page enter a name for your instance in the Name field.
Click YAML view and paste in the following:

spec:
  database:
    resource_requirements:
      requests:
        cpu: 200m
        memory: 512Mi
    storage_requirements:
      requests:
        storage: 100Gi
  controller:
    disabled: false
  eda:
    disabled: false
  hub:
    disabled: false
    storage_type: file
    file_storage_storage_class: <read-write-many-storage-class>
    file_storage_size: 10Gi
- Click Create.
Verification
Go to your Ansible Automation Platform Operator deployment and click All instances to verify whether all instances deployed correctly. You should see the Ansible Automation Platform instance and the deployed AutomationController, EDA, and AutomationHub instances there.

Alternatively, you can check from the command line by running: oc get route
You must also verify that all the pods are running successfully. Perform the following steps:
- Navigate to Workloads → Pods.
- Switch to the project named as described in the namespace metadata in the YAML configuration view.
- Verify that all pods display either a Running or Completed status, with no pods displaying an error status.
4.4.2. Creating a chatbot configuration secret
Create a configuration secret for the Ansible Lightspeed intelligent assistant, so that you can connect the intelligent assistant to the Ansible Automation Platform operator.
Procedure
- Log in to Red Hat OpenShift Container Platform as an administrator.
- Navigate to Workloads → Secrets.
- From the Projects list, select the namespace that you created when you installed the Ansible Automation Platform operator.
- Click Create → Key/value secret.
- In the Secret name field, enter a unique name for the secret. For example, chatbot-configuration-secret.
- Add the following keys and their associated values individually:
Settings for all LLM setups:

- chatbot_model: Enter the LLM model name that is configured on your LLM setup.
- chatbot_url: Enter the inference API base URL on your LLM setup. For example, https://your_inference_api/v1.
- chatbot_token: Enter the API token or the API key. This token is sent along with the authorization header when an inference API is called.
- chatbot_llm_provider_type (optional): Enter the provider type of your LLM setup by using one of the following values:
  - Red Hat OpenShift AI: rhoai_vllm (default value)
  - Red Hat Enterprise Linux AI: rhelai_vllm
  - IBM watsonx.ai: watsonx
  - OpenAI: openai
  - Microsoft Azure OpenAI: azure_openai
- chatbot_context_window_size (optional): Enter a value to configure the context window length for your LLM setup. The default is 128000.
- chatbot_temperature_override (optional): A lower temperature generates predictable results, while a higher temperature allows more diverse or creative responses. Enter one of the following values:
  - 0: Least creativity and randomness in the responses.
  - 1: Maximum creativity and randomness in the responses.
  - null: Overrides or disables the default temperature setting.
  Note: A few OpenAI o-series models (o1, o3-mini, and o4-mini) do not support the temperature settings. Therefore, you must set the value to null to use these OpenAI models.

Additional setting for IBM watsonx.ai only:

- chatbot_llm_provider_project_id: Enter the project ID of your IBM watsonx setup.

Additional settings for Microsoft Azure OpenAI only:

- chatbot_azure_deployment_name: Enter the deployment name of your Microsoft Azure OpenAI setup.
- chatbot_azure_api_version (optional): Enter the API version of your Microsoft Azure OpenAI setup.
- Click Create. The chatbot authorization secret is successfully created.
4.4.3. Updating the YAML file of the Ansible Automation Platform operator
After you create the chatbot authorization secret, you must update the YAML file of the Ansible Automation Platform operator to use the secret.
Procedure
- Log in to Red Hat OpenShift Container Platform as an administrator.
- Navigate to Operators → Installed Operators.
- From the list of installed operators, select the Ansible Automation Platform operator.
- Locate and select the Ansible Automation Platform custom resource, and then click the required app.
- Select the YAML tab.
Scroll to find the spec: section, and add the following details under it:

spec:
  lightspeed:
    disabled: false
    chatbot_config_secret_name: <name of your chatbot configuration secret>
- Click Save. The Ansible Lightspeed intelligent assistant service takes a few minutes to set up.
Verification
Verify that the chat interface service is running successfully:
- Navigate to Workloads → Pods.
Filter with the term api and ensure that the following APIs are displayed in Running status:

- myaap-lightspeed-api-<version number>
- myaap-lightspeed-chatbot-api-<version number>

Verify that the chat interface is displayed on the Ansible Automation Platform:
Access the Ansible Automation Platform:
- Navigate to Operators → Installed Operators.
- From the list of installed operators, click Ansible Automation Platform.
- Locate and select the Ansible Automation Platform custom resource, and then click the app that you created.
From the Details tab, record the information available in the following fields:
- URL: This is the URL of your Ansible Automation Platform instance.
- Gateway Admin User: This is the username to log into your Ansible Automation Platform instance.
- Gateway Admin password: This is the password to log into your Ansible Automation Platform instance.
- Log in to the Ansible Automation Platform using the URL, username, and password that you recorded earlier.
Access the Ansible Lightspeed intelligent assistant:

- Click the Ansible Lightspeed intelligent assistant icon that is displayed at the top right corner of the taskbar.
- Verify that the chat interface is displayed.
4.5. Using the Ansible Lightspeed intelligent assistant
After you deploy the Ansible Lightspeed intelligent assistant, all Ansible users within the organization can access and use the chat interface to ask questions and receive information about the Ansible Automation Platform.
Accessing the Ansible Lightspeed intelligent assistant
- Log in to the Ansible Automation Platform.
- Click the Ansible Lightspeed intelligent assistant icon that is displayed at the top right corner of the taskbar.
The Ansible Lightspeed intelligent assistant window opens with a welcome message.
Using the Ansible Lightspeed intelligent assistant
You can perform the following tasks:
- Ask questions in the prompt field and get answers about the Ansible Automation Platform
- View the chat history of all conversations in a chat session
- Search the chat history using a user prompt or answer. The chat history is deleted when you close an existing chat session or log out from the Ansible Automation Platform.
- Restore a previous chat by clicking the relevant entry from the chat history
- Provide feedback on the quality of the chat answers, by clicking the Thumbs up or Thumbs down icon
- Copy and record the answers by clicking the Copy icon
- Change the mode of the virtual assistant to dark or light mode by clicking the Sun icon at the top right corner of the toolbar
- Clear the context of an existing chat by using the New chat button in the chat history
- Close the chat interface while working on the Ansible Automation Platform
Chapter 5. Migrating Red Hat Ansible Automation Platform to Red Hat Ansible Automation Platform Operator
Migrating your Red Hat Ansible Automation Platform deployment to the Ansible Automation Platform Operator allows you to take advantage of the benefits provided by a Kubernetes native operator, including simplified upgrades and full lifecycle support for your Red Hat Ansible Automation Platform deployments.
Upgrades of Event-Driven Ansible version 2.4 to 2.5 are not supported. Database migrations between Event-Driven Ansible 2.4 and Event-Driven Ansible 2.5 are not compatible.
Use these procedures to migrate any of the following deployments to the Ansible Automation Platform Operator:
- OpenShift cluster A to OpenShift cluster B
- OpenShift namespace A to OpenShift namespace B
- Virtual machine (VM) based or containerized Ansible Automation Platform 2.5 → Ansible Automation Platform 2.5
5.1. Migration considerations
If you are upgrading from any version of Ansible Automation Platform older than 2.4, you must upgrade through Ansible Automation Platform 2.4 first. If you are on OpenShift Container Platform 3 and you want to upgrade to OpenShift Container Platform 4, you must provision a fresh OpenShift Container Platform version 4 cluster and then migrate the Ansible Automation Platform to the new cluster.
5.2. Preparing for migration
Before migrating your current Ansible Automation Platform deployment to Ansible Automation Platform Operator, you must back up your existing data, and create Kubernetes secrets for your secret key and postgresql configuration.
If you are migrating both automation controller and automation hub instances, repeat the steps in Creating a secret key secret and Creating a postgresql configuration secret for both and then proceed to Migrating data to the Ansible Automation Platform Operator.
5.2.1. Migrating to Ansible Automation Platform Operator
Prerequisites
To migrate Ansible Automation Platform deployment to Ansible Automation Platform Operator, you must have the following:
- Secret key secret
- Postgresql configuration
- Role-based Access Control for the namespaces on the new OpenShift cluster
- The new OpenShift cluster must be able to connect to the previous PostgreSQL database
You can store the secret key information in the inventory file before the initial Red Hat Ansible Automation Platform installation. If you are unable to remember your secret key or have trouble locating your inventory file, contact Ansible support through the Red Hat Customer portal.
Before migrating your data from Ansible Automation Platform 2.4, you must back up your data for loss prevention.
Procedure
- Log in to your current deployment project.
- Run $ ./setup.sh -b to create a backup of your current data or deployment.
5.2.2. Creating a secret key secret
To migrate your data to Ansible Automation Platform Operator on OpenShift Container Platform, you must create a secret key. If you are migrating automation controller, automation hub, and Event-Driven Ansible you must have a secret key for each that matches the secret key defined in the inventory file during your initial installation. Otherwise, the migrated data remains encrypted and unusable after migration.
When specifying the symmetric encryption secret key on the custom resources, note that for automation controller the field is called secret_key_name, but for automation hub and Event-Driven Ansible the field is called db_fields_encryption_secret.

In the Kubernetes secrets, automation controller and Event-Driven Ansible use the same stringData key (secret_key), but automation hub uses a different key (database_fields.symmetric.key).
Procedure
- Locate the old secret keys in the inventory file you used to deploy Ansible Automation Platform in your previous installation.
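If you cannot locate the keys in the inventory file, you can read them directly from the old deployment nodes; the paths below are the defaults referenced in the template that follows:

$ cat /etc/tower/SECRET_KEY
$ cat /etc/ansible-automation-platform/eda/SECRET_KEY
$ cat /etc/pulp/certs/database_fields.symmetric.key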
Create a YAML file for your secret keys:

---
apiVersion: v1
kind: Secret
metadata:
  name: <controller-resourcename>-secret-key
  namespace: <target-namespace>
stringData:
  secret_key: <content of /etc/tower/SECRET_KEY from old controller>
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  name: <eda-resourcename>-secret-key
  namespace: <target-namespace>
stringData:
  secret_key: </etc/ansible-automation-platform/eda/SECRET_KEY>
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  name: <hub-resourcename>-secret-key
  namespace: <target-namespace>
stringData:
  database_fields.symmetric.key: </etc/pulp/certs/database_fields.symmetric.key>
type: Opaque
Note: If admin_password_secret is not provided, the operator looks for a secret named <resourcename>-admin-password for the admin password. If it is not present, the operator generates a password and creates a secret from it named <resourcename>-admin-password.
Apply the secret key YAML to the cluster:

oc apply -f <yaml-file>
5.2.3. Creating a postgresql configuration secret
For migration to be successful, you must provide access to the database for your existing deployment.
Procedure
Create a YAML file for your postgresql configuration secret:

apiVersion: v1
kind: Secret
metadata:
  name: <resourcename>-old-postgres-configuration
  namespace: <target namespace>
stringData:
  host: "<external ip or url resolvable by the cluster>"
  port: "<external port, this usually defaults to 5432>"
  database: "<desired database name>"
  username: "<username to connect as>"
  password: "<password to connect with>"
type: Opaque
- Apply the postgresql configuration YAML to the cluster:

  oc apply -f <old-postgres-configuration.yml>
5.2.4. Verifying network connectivity
To ensure successful migration of your data, verify that you have network connectivity from your new operator deployment to your old deployment database.
Prerequisites
Take note of the host and port information from your existing deployment. This information is in the postgres.py file in the conf.d directory.
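For example, on an RPM-based automation controller node you can usually read these values with a command like the following; the exact conf.d path is an assumption and may differ for your component:

$ grep -E "'HOST'|'PORT'" /etc/tower/conf.d/postgres.py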
Procedure
Create a YAML file to verify the connection between your new deployment and your old deployment database:

apiVersion: v1
kind: Pod
metadata:
  name: dbchecker
spec:
  containers:
    - name: dbchecker
      image: registry.redhat.io/rhel8/postgresql-13:latest
      command: ["sleep"]
      args: ["600"]
Apply the connection checker YAML file to your new project deployment:

oc project ansible-automation-platform
oc apply -f connection_checker.yaml
Verify that the connection checker pod is running:

oc get pods
Connect to a pod shell:

oc rsh dbchecker
After the shell session opens in the pod, verify that the new project can connect to your old project cluster:

pg_isready -h <old-host-address> -p <old-port-number> -U AutomationController
Example output:

<old-host-address>:<old-port-number> - accepting connections
5.3. Migrating data to the Ansible Automation Platform Operator
When migrating a 2.5 containerized or RPM installed deployment to OpenShift Container Platform you must create a secret with credentials to access the PostgreSQL database from the original deployment, then specify it when creating the Ansible Automation Platform object.
The operator does not support Event-Driven Ansible migration at this time.
Prerequisites

You have completed the following procedures:
- Creating a secret key secret
- Creating a postgresql configuration secret
- Verifying network connectivity
5.3.1. Creating an Ansible Automation Platform object
Use the following steps to create an AnsibleAutomationPlatform custom resource object.
Procedure
- Log in to Red Hat OpenShift Container Platform.
- Navigate to Operators → Installed Operators.
- Select the Ansible Automation Platform Operator installed on your project namespace.
- Select the Ansible Automation Platform tab.
- Click Create AnsibleAutomationPlatform.
Select YAML view and paste in the following, modified accordingly:

---
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: myaap
spec:
  postgres_configuration_secret: external-postgres-configuration
  controller:
    disabled: false
    postgres_configuration_secret: external-controller-postgres-configuration
    secret_key_secret: controller-secret-key
  hub:
    disabled: false
    postgres_configuration_secret: external-hub-postgres-configuration
    db_fields_encryption_secret: hub-db-fields-encryption-secret
- Click Create.
5.4. Post migration cleanup
After data migration, delete unnecessary instance groups and unlink the old database configuration secret from the automation controller resource definition.
5.4.1. Deleting Instance Groups post migration
Procedure
Log in to Red Hat Ansible Automation Platform as the administrator with the password you created during migration.

Note: If you did not create an administrator password during migration, one was automatically created for you. To locate this password, go to your project, select Workloads → Secrets, and open controller-admin-password. From there you can copy the password and paste it into the Red Hat Ansible Automation Platform password field.

- Select Automation Execution → Infrastructure → Instance Groups.
- Select all Instance Groups except controlplane and default.
- Click Delete.
5.4.2. Unlinking the old database configuration secret post migration
- Log in to Red Hat OpenShift Container Platform.
- Navigate to Operators → Installed Operators.
- Select the Ansible Automation Platform Operator installed on your project namespace.
- Select the Automation Controller tab.
- Click your AutomationController object. You can then view the object through the Form view or YAML view. The following inputs are available through the YAML view.
- Locate the old_postgres_configuration_secret item within the spec section of the YAML contents.
- Delete the line that contains this item.
- Click Save.
Chapter 6. Upgrading Red Hat Ansible Automation Platform Operator on Red Hat OpenShift Container Platform
The Ansible Automation Platform Operator simplifies the installation, upgrade, and deployment of new Red Hat Ansible Automation Platform instances in your OpenShift Container Platform environment.
6.1. Overview
You can use this document for help with upgrading Ansible Automation Platform 2.4 to 2.5 on Red Hat OpenShift Container Platform. This document also applies to upgrades of Ansible Automation Platform 2.5 to later versions of 2.5.
The Ansible Automation Platform Operator manages deployments, upgrades, backups, and restores of automation controller and automation hub. It also handles deployments of AnsibleJob and JobTemplate resources from the Ansible Automation Platform Resource Operator.
Each operator version has default automation controller and automation hub versions. When the operator is upgraded, it also upgrades the automation controller and automation hub deployments it manages, unless overridden in the spec.
OpenShift deployments of Ansible Automation Platform use the built-in Operator Lifecycle Management (OLM) functionality. For more information, see Operator Lifecycle Manager concepts and resources. OpenShift does this by using Subscription, CSV, InstallPlan, and OperatorGroup objects. Most users will not have to interact directly with these resources. They are created when the Ansible Automation Platform Operator is installed from OperatorHub and managed through the Subscriptions tab in the OpenShift console UI. For more information, refer to Accessing the web console.
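You can inspect these OLM resources from the command line if needed; for example:

$ oc get subscription,csv,installplan -n <operator-namespace>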
6.2. Upgrade considerations
If you are upgrading from version 2.4, continue to Upgrading the Ansible Automation Platform Operator.
If your OpenShift Container Platform version is not supported by the Red Hat Ansible Automation Platform version you are upgrading to, you must upgrade your OpenShift Container Platform cluster to a supported version first.
Refer to the Red Hat Ansible Automation Platform Life Cycle to determine the OpenShift Container Platform version needed.
For information about upgrading your cluster, refer to Updating clusters.
6.3. Prerequisites
To upgrade to a newer version of Ansible Automation Platform Operator, you must:
- Ensure your system meets the system requirements detailed in the Operator topologies section of the Tested deployment models guide.
- Create AutomationControllerBackup and AutomationHubBackup objects. For help with this, see Backup and recovery for operator environments.
- Review the Release notes for the new Ansible Automation Platform version to which you are upgrading and any intermediate versions.
- Determine the type of upgrade you want to perform. See the Channel Upgrades section for more information.
6.4. Channel upgrades
Upgrading to version 2.5 from Ansible Automation Platform 2.4 involves retrieving updates from a “channel”. A channel refers to a location where you can access your update. It currently resides in the OpenShift console UI.
6.4.1. In-channel upgrades
Most upgrades occur within a channel as follows:
- A new update becomes available in the marketplace, through the redhat-operator CatalogSource. The system automatically creates a new InstallPlan for your Ansible Automation Platform subscription.
- If the subscription is set to Manual, the InstallPlan must be approved manually in the OpenShift UI. If it is set to Automatic, the upgrade proceeds as soon as the new version is available.
Note: Set a manual install strategy on your Ansible Automation Platform Operator subscription during installation or upgrade. You are prompted to approve upgrades when they become available in your chosen update channel. Stable channels, such as stable-2.5, are available for each X.Y release.
- A new Subscription, CSV, and operator containers are created alongside the old ones. The old resources are cleaned up after a successful install.
6.4.2. Cross-channel upgrades
Upgrading between X.Y channels is always manual and intentional. Stable channels for major and minor versions are in the Operator Catalog. Currently, only version 2.x is available, so there are few channels. It is recommended to stay on the latest minor version channel for the latest patches.
If the subscription is set for manual upgrades, you must approve the upgrade in the UI. Then, the system upgrades the Operator to the latest version in that channel.
It is recommended to set a manual install strategy on your Ansible Automation Platform Operator subscription during installation or upgrade. You will be prompted to approve upgrades when they become available in your chosen update channel. Stable channels, such as stable-2.5, are available for each X.Y release.
The containers provided in the latest channel are updated regularly for OS upgrades and critical fixes. This allows customers to receive critical patches and CVE fixes faster. Larger changes and new features are saved for minor and major releases.
For each major or minor version channel, there is a corresponding "cluster-scoped" channel available. Cluster-scoped channels deploy operators that can manage all namespaces, while non-cluster-scoped channels can only manage resources in their own namespace.
Cluster-scoped bundles are not compatible with namespace-scoped bundles. Do not try to switch between normal (stable-2.4 for example) channels and cluster-scoped (stable-2.4-cluster-scoped) channels, as this is not supported.
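As a sketch of performing a cross-channel upgrade from the CLI, you can inspect and patch the Subscription object directly. The subscription name (ansible-automation-platform-operator) and namespace (aap) below are placeholders; look yours up first:
oc get subscription -n aap
oc patch subscription ansible-automation-platform-operator -n aap --type=merge -p '{"spec": {"channel": "stable-2.5"}}'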
6.5. Upgrading the Ansible Automation Platform Operator
To upgrade to the latest version of Ansible Automation Platform Operator on OpenShift Container Platform, complete the following steps:
Prerequisites
- Read the Release notes for 2.5
- For existing deployments only: deploy all of your Red Hat Ansible Automation Platform services (automation controller, automation hub, Event-Driven Ansible) to the same, single namespace before upgrading to 2.5. For more information, see Migrating from one namespace to another.
Review the Backup and recovery for operator environments guide and back up your services:
- AutomationControllerBackup
- AutomationHubBackup
- EDABackup
Upgrading from Event-Driven Ansible 2.4 is not supported. If you are using Event-Driven Ansible 2.4 in production, contact Red Hat before you upgrade.
Procedure
- Log in to OpenShift Container Platform.
- Navigate to Operators → Installed Operators.
- Select the Ansible Automation Platform Operator installed on your project namespace.
- Select the Subscriptions tab.
- Change the channel from stable-2.4 to stable-2.5. An InstallPlan is created for the user.
- Click Preview InstallPlan.
- Click Approve.
- Create a Custom Resource (CR) using the Ansible Automation Platform UI. The automation controller and automation hub UIs remain available until all SSO configuration is supported in the platform gateway UI.
For more information on configuring your updated Ansible Automation Platform Operator, see Configuring the Red Hat Ansible Automation Platform Operator on Red Hat OpenShift Container Platform.
6.6. Creating Ansible Automation Platform custom resources
After upgrading to the latest version of Ansible Automation Platform Operator on OpenShift Container Platform, you can create an Ansible Automation Platform custom resource (CR) that specifies the names of your existing deployments, in the same namespace.
Procedure
This example outlines the steps to deploy a new Event-Driven Ansible setup after upgrading to the latest version, with existing automation controller and automation hub deployments already in place.
The Appendix contains more examples of Ansible Automation Platform CRs for different deployments.
- Log in to Red Hat OpenShift Container Platform.
- Navigate to Operators → Installed Operators.
- Select your Ansible Automation Platform Operator deployment.
- Select the Details tab.
- On the Ansible Automation Platform tile, click Create instance.
- From the Create Ansible Automation Platform page, enter a name for your instance in the Name field.
- Click YAML view and paste the following YAML (aap-existing-controller-and-hub-new-eda.yml):
---
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: myaap
spec:
  # Development purposes only
  no_log: false
  controller:
    name: existing-controller # obtain name from controller CR
    disabled: false
  eda:
    disabled: false
  hub:
    name: existing-hub
    disabled: false
- Click Create.
You can override the operator’s default image for automation controller, automation hub, or platform-resource app images by specifying the preferred image on the YAML spec. This enables upgrading a specific deployment, like a controller, without updating the operator.
The recommended approach, however, is to upgrade the operator and use the default image values.
Verification
Navigate to your Ansible Automation Platform Operator deployment and click All instances to verify whether all instances have deployed correctly. You should see the Ansible Automation Platform instance and the deployed AutomationController, EDA, and AutomationHub instances there.
Alternatively, you can verify whether all instances deployed correctly by running oc get route in the command line.
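For example, assuming your deployment runs in a namespace named aap (a placeholder), the following command lists the routes created for the platform; each deployed component should expose a route:
oc get route -n aap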
6.7. Ansible Automation Platform post-upgrade steps
After a successful upgrade to Ansible Automation Platform 2.5, the next crucial step is migrating your users to the latest version of the platform.
User data and legacy authentication settings from automation controller and private automation hub are carried over during the upgrade process and allow seamless initial access to the platform after upgrade. Customers can log in without additional action.
However, to fully transition authentication to use all of the features and capabilities of the 2.5 platform gateway, a manual process is required post-upgrade to leverage the new authentication framework. In the context of upgrading to Ansible Automation Platform 2.5, this manual process is referred to as migration.
There are important notes and considerations for each type of user migration, including the following:
- Admin users
- Normal users
- SSO users
- LDAP users
Be sure to read through the important notes highlighted for each user type to help make the migration process as smooth as possible.
6.7.1. Migrating admin users
Upgrades from Ansible Automation Platform 2.4 to 2.5 allow for the migration of administrators for each component with their existing component-level admin privileges maintained. However, escalation of privileges to platform gateway administrator is not automatic during the upgrade process. This ensures a secure privilege escalation process that can be customized to meet the organization’s specific needs.
Prerequisites
- Review current admin roles for the individual services in your current deployment.
- Confirm the users who will require platform gateway admin rights post-upgrade.
6.7.1.1. Key considerations
Component-level admin privileges are retained: Administrators for automation controller and automation hub will retain their existing admin privileges for those respective services post-upgrade. For example, an admin of automation controller will continue to have full administration privileges for automation controller resources.
Users previously designated as automation controller or automation hub administrators are labeled as Normal in the User type column of the Users list view. This is a mischaracterization. You can verify that these users have, in fact, retained their service-level administrator privileges by editing the account:
Procedure
- From the navigation panel of the platform gateway, select Access Management → Users.
- Select the check box for the user that you want to modify.
- Click the Pencil icon and select Edit user.
- The Edit user page is displayed where you can see the service level administrator privileges assigned by the User type checkboxes. See Editing a user for more information on these user types.
Only a platform administrator can escalate your privileges.
Escalation to platform gateway admin must be manually configured post-upgrade: During the upgrade process, admin privileges for individual services are not automatically translated to platform administrator privileges. Escalation to platform gateway admin must be granted by the platform administrator after upgrade and migration. Each service admin retains the original scope of their access until the access is changed.
As a platform administrator, you can escalate a user’s privileges by selecting the Ansible Automation Platform Administrator checkbox.
6.7.2. Migrating normal users
When you upgrade from Ansible Automation Platform 2.4 to 2.5, your existing user account is automatically migrated to a single platform account. However, if you have multiple component accounts (such as automation controller, private automation hub, and Event-Driven Ansible), your accounts must be linked to use the centralized features of the platform.
6.7.2.1. Key considerations
Previous service accounts are prefixed: Users with accounts on multiple services in 2.4 are migrated as individual users in 2.5 and prefixed to identify the service from which they were migrated. For example, automation hub accounts are prefixed as hub_<username>. Automation controller user names do not include a prefix.
Automation controller user accounts take precedence: When an individual user had accounts on multiple services in 2.4, priority is given to their automation controller account during migration, so those accounts are not renamed.
Component level roles are retained until user migration is complete: When users log in using an existing service account and do not perform the account linking process, only the roles for that specific service account are available. The migration process is completed once the user performs the account linking process. At that time, all roles for all services are migrated into the new platform gateway user account.
6.7.2.2. Additional resources
- See Creating a user for more information on user types.
6.7.2.3. Linking your account
Ansible Automation Platform 2.5 provides a centralized location for users, teams and organizations to access the platform’s services and features.
The first time you log in to Ansible Automation Platform 2.5, the platform searches through the existing services to locate a user account with the credentials you entered. When there is a match to an existing account, that account is registered and becomes centrally managed by the platform. Any subsequent component accounts in the system are orphaned and cannot be used to log into the platform.
To address this problem, use the account linking procedure to authenticate from any of your existing component accounts and still be recognized by the platform. Linking accounts associates existing component accounts with the same user profile.
Prerequisites
- You have completed the upgrade process and have a legacy Ansible Automation Platform account and credentials.
Procedure
If you have completed the upgrade process and have a legacy Ansible Automation Platform subscription, follow the account linking procedure below to migrate your account to Ansible Automation Platform 2.5.
- Navigate to the login page for Ansible Automation Platform.
- In the login modal, select either I have an automation controller account or I have an automation hub account, based on the credentials you have.
- On the next screen, enter the legacy credentials for the component account you selected and log in.
Note: If you are logging in using OIDC credentials, see How to fix broken OIDC redirect after upgrading to AAP 2.5.
- If you have successfully linked your account, the next screen shows your username with a green checkmark beside it. If you have other legacy accounts that you want to link, enter those account credentials to link them to your centralized platform gateway account.
- Submit the form to complete linking your legacy accounts.
- After your accounts are linked, depending on your authentication method, you might be prompted to create a new username and password. These credentials replace your legacy credentials for each component account.
You can also link your legacy account manually by taking the following steps:
- Select your user icon at the top right of your screen, and select User details.
- Click the ⋮ icon and select Link user accounts.
- Enter the credentials for the account that you want to link.
If you encounter an error message telling you that your account could not be authenticated, contact your platform administrator.
If you log into Ansible Automation Platform for the first time and are prompted to change your username, this is an indication that another user has already logged into Ansible Automation Platform with the same username. To proceed with account migration, follow the prompts to change your username. Ansible Automation Platform uses your password to authenticate which account or accounts belong to you.
A diagram of the account linking flow
After you have migrated your user account, you can manage your account from the Access Management menu. See Managing access with role based access control.
6.7.3. Migrating Single Sign-On (SSO) users
When upgrading from Ansible Automation Platform 2.4 to 2.5, you must migrate your Single Sign-On (SSO) user accounts if you want to continue using SSO capabilities after the upgrade. Follow the steps in this procedure to ensure a smooth SSO user migration.
6.7.3.1. Key considerations
SSO configurations are not migrated automatically during upgrade to 2.5: While the legacy authentication settings are carried over during the upgrade process and allow seamless initial access to the platform after upgrade, SSO configurations must be manually migrated over to a new Ansible Automation Platform 2.5 authentication configuration. The legacy configuration acts as a reference to preserve existing authentication capabilities and facilitate the migration process. The legacy authentication configuration should not be modified directly or used after migration is complete.
SSO migration is supported in the UI: Migration of legacy SSO accounts is supported in the 2.5 UI and is done by selecting your legacy authenticator from the Auto migrate users from list when you configure a new authentication method. This is the legacy authenticator from which users are automatically migrated to the new platform gateway authentication configuration.
Migration of SSO must happen before users log in and start account linking: You must enable the Auto migrate users from setting after configuring SSO in 2.5 and before any users log in.
Ansible Automation Platform 2.4 SSO configurations are renamed during the upgrade process and are displayed in the Authentication Methods list view with a prefix to indicate a legacy configuration, for example, legacy_sso-saml-<entity id>. The Authentication type is also listed as legacy sso. These configurations cannot be modified.
Once you set up the auto migrate functionality, you should be able to log in with SSO in the platform gateway, and it automatically links any matching accounts from the legacy SSO authenticator.
Additional resources
Refer to Ansible Automation Platform 2.4 to 2.5: Linking accounts post upgrade and Setting up SAML authentication for a demonstration of the post-upgrade steps.
6.7.4. Migrating LDAP users
As a platform administrator upgrading from Ansible Automation Platform 2.4 to 2.5, you must migrate your LDAP user accounts if you want to continue using LDAP authentication capabilities after the upgrade. Follow the steps in this procedure to ensure the smoothest possible LDAP user migration.
There are two primary scenarios for migrating users from legacy authentication systems to LDAP-based authentication:
- Legacy user login and account linking
- Migration to LDAP without account linking
6.7.4.1. Key considerations
LDAP configurations are not migrated automatically during upgrade to 2.5: While the legacy LDAP authentication settings are carried over during the upgrade process and allow seamless initial access to the platform after upgrade, LDAP configurations must be manually migrated over to a new Ansible Automation Platform 2.5 LDAP configuration. The legacy configuration acts as a reference to preserve existing authentication capabilities and facilitate the migration process. The legacy authentication configuration should not be modified directly or used after migration is complete.
UID collision risk: LDAP and legacy password authenticators both use usernames as the UID. This can cause UID collisions between accounts with the same username that are owned by different people. Any user accounts that are not safe for auto-migration because of UID conflicts must be migrated manually to ensure proper handling. You can manually migrate these users through the API endpoint /api/gateway/v1/authenticator_users/ before setting up auto-migration.
Do not log in using legacy LDAP authentication if you do not have a user account in the platform prior to the upgrade: Instead, you must auto migrate directly to LDAP without linking accounts.
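As a sketch, you can review the accounts attached to each authenticator through the endpoint named above before enabling auto-migration. The token and host values are placeholders:
export GATEWAY_TOKEN="your-oauth2-token"
curl -s -H "Authorization: Bearer $GATEWAY_TOKEN" "https://<aap-instance>/api/gateway/v1/authenticator_users/"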
6.7.4.2. Legacy user login and account linking
Users can log in using their legacy accounts by selecting “I have a <component> account” and entering their credentials (username and password). If the login is successful, they may be prompted to link their account with another component account, for example, automation hub and automation controller. If the login credentials are the same for both automation hub and automation controller, account linking is done automatically for that user.
After successful account linking, user accounts from both components are merged into a gateway:legacy external password authenticator. If user accounts are not automatically merged into the gateway:legacy external password authenticator, you must auto migrate directly to LDAP without linking accounts.
For more information about account linking, see Linking your account.
6.7.4.3. Migrating LDAP users without account linking
If a user is unable to link their accounts because there is no linking option for their automation hub account, you must immediately configure the auto-migrate feature on all legacy password authenticators to target the new gateway LDAP authenticator.
Then, when a user logs in, the platform gateway will automatically migrate the user to the LDAP authenticator if a matching UID is found.
Prerequisites
- Verify that all legacy accounts are properly linked and merged before proceeding with auto-migration.
- Verify that there are no UID collisions or ensure they are manually migrated before proceeding with auto-migration.
Procedure
- Log in to the Ansible Automation Platform UI.
- Set up a new LDAP authentication method in the platform gateway by following the steps in Configuring LDAP authentication. This is the configuration that you will migrate your previous LDAP users to.
Note: Ansible Automation Platform 2.4 LDAP configurations are renamed during the upgrade process and are displayed in the Authentication Methods list view with a prefix to indicate a legacy configuration, for example, <controller/hub/eda>: legacy_password. The Authentication type is listed as Legacy password. These configurations cannot be modified.
- Select the legacy LDAP authenticator from the Auto migrate users from list. This is the legacy authenticator you want to use for migrating users to your platform gateway LDAP authenticator.
Once you set up the auto migrate functionality, you should be able to log in with LDAP in the platform gateway, and any matching accounts from the legacy 2.4 LDAP authenticator are automatically linked.
Chapter 7. Updating Red Hat Ansible Automation Platform on Red Hat OpenShift Container Platform
You can use an upgrade patch to update your operator-based Ansible Automation Platform.
7.1. Patch updating Ansible Automation Platform on OpenShift Container Platform
When you perform a patch update for an installation of Ansible Automation Platform on OpenShift Container Platform, most updates happen within a channel:
- A new update becomes available in the marketplace (through the redhat-operator CatalogSource).
- A new InstallPlan is automatically created for your Ansible Automation Platform subscription. If the subscription is set to Manual, the InstallPlan must be manually approved in the OpenShift UI, or from the CLI as shown after this list. If the subscription is set to Automatic, it upgrades as soon as the new version is available.
Note: It is recommended that you set a manual install strategy on your Ansible Automation Platform Operator subscription (set when installing or upgrading the Operator). You are then prompted to approve an upgrade when it becomes available in your selected update channel. Stable channels for each X.Y release (for example, stable-2.5) are available.
- A new Subscription, CSV, and Operator containers are created alongside the old ones. The old resources are cleaned up after a successful install.
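The following is a minimal sketch of approving a pending InstallPlan from the CLI instead of the OpenShift UI; the namespace (aap) and InstallPlan name (install-abcde) are placeholders you must look up first:
oc get installplan -n aap
oc patch installplan install-abcde -n aap --type=merge -p '{"spec": {"approved": true}}'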
Chapter 8. Adding execution nodes to Red Hat Ansible Automation Platform Operator
You can enable the Ansible Automation Platform Operator with execution nodes by downloading and installing the install bundle.
Prerequisites
- An automation controller instance.
- The receptor collection package is installed.
- The Ansible Automation Platform repository ansible-automation-platform-2.5-for-rhel-{RHEL-RELEASE-NUMBER}-x86_64-rpms is enabled.
Procedure
- Log in to Red Hat Ansible Automation Platform.
- In the navigation panel, select Automation Execution → Infrastructure → Instances.
- Click Add instance.
- Input the Execution Node domain name or IP in the Host Name field.
- Optional: Input the port number in the Listener Port field.
- Click Save.
- Click the download icon next to Install Bundle. This starts a download; take note of where you save the file.
- Untar the downloaded tar.gz file.
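For example, assuming the bundle downloaded as my-execution-node_install_bundle.tar.gz (a placeholder; the actual file name depends on the host name you entered):
tar -xzvf my-execution-node_install_bundle.tar.gz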
Note: To run the install_receptor.yml playbook, you must install the receptor collection from Ansible Galaxy:
ansible-galaxy collection install -r requirements.yml
- Update the inventory file (inventory.yml) with your user name and SSH private key file. Note that ansible_host is pre-populated with the host name you entered earlier.
all:
  hosts:
    remote-execution:
      ansible_host: example_host_name # Same as configured in the AAP web UI
      ansible_user: <username> # User provided
      ansible_ssh_private_key_file: ~/.ssh/id_example
- Open your terminal, and navigate to the directory where you saved the playbook.
To install the bundle, run:
ansible-playbook install_receptor.yml -i inventory.yml
- After installation, you can upgrade your execution node by downloading and re-running the playbook for the instance you created.
Verification
To verify the receptor service status, run the following command:
sudo systemctl status receptor.service
Make sure the service is in the active (running) state.
To verify that your playbook runs correctly on your new node, run the following command:
watch podman ps
Additional resources
- For more information about managing instance groups see the Managing Instance Groups section of the Automation Controller User Guide.
Chapter 9. Ansible Automation Platform Resource Operator
9.1. Resource Operator overview
Resource Operator is an operator that you can deploy after you have created your platform gateway deployment. With Resource Operator you can define resources such as projects, job templates, and inventories in YAML files, and automation controller then uses those YAML files to create the resources. You can create the YAML through the Form view, which prompts you for keys and values for your YAML code. Alternatively, to work with YAML directly, you can select YAML view.
The Resource Operator provides the following CRs:
- AnsibleJob
- JobTemplate
- Automation controller project
- Automation controller schedule
- Automation controller workflow
- Automation controller workflow template
- Automation controller inventory
- Automation controller credential
For more information on any of the above custom resources, see Using automation execution.
9.2. Using Resource Operator
The Resource Operator itself does not do anything until the user creates an object. As soon as the user creates an AutomationControllerProject or AnsibleJob resource, the Resource Operator starts processing that object.
Prerequisites
- Install the Kubernetes-based cluster of your choice.
- Deploy automation controller using the automation-controller-operator.
After installing the automation-controller-resource-operator in your cluster, you must create a Kubernetes (k8s) secret with the connection information for your automation controller instance. Then you can use Resource Operator to create a k8s resource to manage your automation controller instance.
9.3. Connecting Resource Operator to platform gateway
To connect Resource Operator with platform gateway you must create a Kubernetes secret with the connection information for your automation controller instance.
You can only create OAuth 2 Tokens for your own user through the API or UI, which means you can only configure or view tokens from your own user profile.
Procedure
To create an OAuth2 token for your user in the platform gateway UI:
- Log in to the platform gateway UI.
- In the navigation panel, select Access Management → Users.
- Select the username you want to create a token for.
- Select the Tokens tab, then select Automation Execution.
- Click Create token.
- You can leave Applications empty. Add a description and select Read or Write for the Scope.
Make sure you provide a valid user when creating tokens. Otherwise, you get an error message stating that you tried to issue the command without specifying a user, or that you supplied a username that does not exist.
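As a quick sketch for confirming that a generated token works, you can call the controller API through the platform gateway; the host is a placeholder, and the endpoint returns the authenticated user's details:
export AAP_TOKEN="your-oauth2-token"
curl -s -H "Authorization: Bearer $AAP_TOKEN" "https://<aap-instance>/api/controller/v2/me/"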
9.4. Creating an automation controller connection secret for Resource Operator
To make your connection information available to the Resource Operator, create a k8s secret with the token and host value.
Procedure
The following is an example of the YAML for the connection secret. Save it to a file, for example, automation-controller-connection-secret.yml.
apiVersion: v1
kind: Secret
metadata:
  name: controller-access
type: Opaque
stringData:
  token: <generated-token>
  host: https://my-controller-host.example.com/
- Edit the file with your host and token value.
- Apply it to your cluster by running the kubectl create command:
kubectl create -f automation-controller-connection-secret.yml
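You can confirm that the secret was created in the expected namespace with a quick check (the namespace is a placeholder):
kubectl get secret controller-access -n <namespace> -o yaml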
9.5. Creating custom resources for Resource Operator
9.5.1. Creating an AnsibleJob custom resource
An AnsibleJob custom resource launches a job in the automation controller instance specified in the Kubernetes secret (automation controller host URL, token). You can launch an automation job on automation controller by creating an AnsibleJob resource.
Procedure
Specify the connection secret and job template you want to launch.
apiVersion: tower.ansible.com/v1alpha1
kind: AnsibleJob
metadata:
  generateName: demo-job-1 # generate a unique suffix per 'kubectl create'
spec:
  connection_secret: controller-access
  job_template_name: Demo Job Template
Configure features such as inventory, extra variables, and time to live for the job.
spec:
  connection_secret: controller-access
  job_template_name: Demo Job Template
  inventory: Demo Inventory # Inventory prompt on launch needs to be enabled
  runner_image: quay.io/ansible/controller-resource-runner
  runner_version: latest
  job_ttl: 100
  extra_vars: # Extra variables prompt on launch needs to be enabled
    test_var: test
  job_tags: "provision,install,configuration" # Specify tags to run
  skip_tags: "configuration,restart" # Skip tasks with a given tag
Note: You must enable prompt on launch for inventories and extra variables if you are configuring those. To enable Prompt on launch in the automation controller UI, from the Automation Execution → Templates page, select your template and select the Prompt on launch checkbox next to the Inventory and Variables sections.
Launch a workflow job template with an AnsibleJob object by specifying the workflow_template_name instead of job_template_name:
apiVersion: tower.ansible.com/v1alpha1
kind: AnsibleJob
metadata:
  generateName: demo-job-1 # generate a unique suffix per 'kubectl create'
spec:
  connection_secret: controller-access
  workflow_template_name: Demo Workflow Template
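A minimal sketch of launching and watching the job from the CLI. Because the resource uses generateName, create it with kubectl create rather than kubectl apply; the file name ansiblejob.yml is a placeholder:
kubectl create -f ansiblejob.yml
kubectl get ansiblejobs -w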
9.5.2. Creating a JobTemplate custom resource
A job template is a definition and set of parameters for running an Ansible job. For more information see the Job Templates section of the Using automation execution guide.
Procedure
Create a job template on automation controller by creating a JobTemplate custom resource:
apiVersion: tower.ansible.com/v1alpha1
kind: JobTemplate
metadata:
  name: jobtemplate-4
spec:
  connection_secret: controller-access
  job_template_name: ExampleJobTemplate4
  job_template_project: Demo Project
  job_template_playbook: hello_world.yml
  job_template_inventory: Demo Inventory
9.5.3. Creating an automation controller project custom resource
A Project is a logical collection of Ansible playbooks, represented in automation controller. For more information see the Projects section of the Using automation execution guide.
Procedure
- Create a project on automation controller by creating an automation controller project custom resource:
apiVersion: tower.ansible.com/v1alpha1
kind: AnsibleProject
metadata:
  name: git
spec:
  repo: https://github.com/ansible/ansible-tower-samples
  branch: main
  name: ProjectDemo-git
  scm_type: git
  organization: Default
  description: demoProject
  connection_secret: controller-access
  runner_pull_policy: IfNotPresent
9.5.4. Creating an automation controller schedule custom resource
Procedure
- Create a schedule on automation controller by creating an automation controller schedule custom resource:
apiVersion: tower.ansible.com/v1alpha1
kind: AnsibleSchedule
metadata:
  name: schedule
spec:
  connection_secret: controller-access
  runner_pull_policy: IfNotPresent
  name: "Demo Schedule"
  rrule: "DTSTART:20210101T000000Z RRULE:FREQ=DAILY;INTERVAL=1;COUNT=1"
  unified_job_template: "Demo Job Template"
9.5.5. Creating an automation controller workflow custom resource
Workflows enable you to configure a sequence of disparate job templates (or workflow templates) that may or may not share inventory, playbooks, or permissions. For more information see the Workflows in automation controller section of the Using automation execution guide.
Procedure
- Create a workflow on automation controller by creating a workflow custom resource:
apiVersion: tower.ansible.com/v1alpha1
kind: AnsibleWorkflow
metadata:
  name: workflow
spec:
  inventory: Demo Inventory
  workflow_template_name: Demo Job Template
  connection_secret: controller-access
  runner_pull_policy: IfNotPresent
9.5.6. Creating an automation controller workflow template custom resource
A workflow job template links together a sequence of disparate resources to track the full set of jobs that were part of the release process as a single unit. For more information see the Workflow job templates section of the Using automation execution guide.
Procedure
- Create a workflow template on automation controller by creating a workflow template custom resource:
apiVersion: tower.ansible.com/v1alpha1
kind: WorkflowTemplate
metadata:
  name: workflowtemplate-sample
spec:
  connection_secret: controller-access
  name: ExampleTowerWorkflow
  description: Example Workflow Template
  organization: Default
  inventory: Demo Inventory
  workflow_nodes:
    - identifier: node101
      unified_job_template:
        name: Demo Job Template
        inventory:
          organization:
            name: Default
        type: job_template
    - identifier: node102
      unified_job_template:
        name: Demo Job Template
        inventory:
          organization:
            name: Default
        type: job_template
9.5.7. Creating an automation controller inventory custom resource
By using an inventory file, Ansible Automation Platform can manage a large number of hosts with a single command. Inventories also help you use Ansible Automation Platform more efficiently by reducing the number of command line options you have to specify. For more information see the Inventories section of the Using automation execution guide.
Procedure
- Create an inventory on automation controller by creating an inventory custom resource:
apiVersion: tower.ansible.com/v1alpha1 # apiVersion and kind were missing from the original example; verify the kind against your Resource Operator CRDs
kind: AnsibleInventory
metadata:
  name: inventory-new
spec:
  connection_secret: controller-access
  description: my new inventory
  name: newinventory
  organization: Default
  state: present
  instance_groups:
    - default
  variables:
    string: "string_value"
    bool: true
    number: 1
    list:
      - item1: true
      - item2: "1"
    object:
      string: "string_value"
      number: 2
9.5.8. Creating an automation controller credential custom resource
Credentials authenticate the automation controller user when launching jobs against machines, synchronizing with inventory sources, and importing project content from a version control system.
SSH and AWS are the most commonly used credentials. For a full list of supported credentials see the Credential types section of the Using automation execution guide.
For help with defining values, you can refer to the OpenAPI (Swagger) file for Red Hat Ansible Automation Platform API KCS article.
You can use https://<aap-instance>/api/controller/v2/credential_types/ to view the list of credential types on your instance. To get the full list, use the following curl command:
export AAP_TOKEN="your-oauth2-token"
export AAP_URL="https://your-aap-controller.example.com"
curl -s -H "Authorization: Bearer $AAP_TOKEN" "$AAP_URL/api/controller/v2/credential_types/" | jq -r '.results[].name'
Procedure
- Create a credential on automation controller by creating a credential custom resource:
9.5.8.1. SSH credential
apiVersion: tower.ansible.com/v1alpha1
kind: AnsibleCredential
metadata:
  name: ssh-cred
spec:
  name: ssh-cred
  organization: Default
  connection_secret: controller-access
  description: "SSH credential"
  type: "Machine"
  ssh_username: "cat"
  ssh_secret: my-ssh-secret
  runner_pull_policy: IfNotPresent
9.5.8.2. AWS credential
apiVersion: tower.ansible.com/v1alpha1
kind: AnsibleCredential
metadata:
  name: aws-cred
spec:
  name: aws-access
  organization: Default
  connection_secret: controller-access
  description: "This is a test credential"
  type: "Amazon Web Services"
  username_secret: aws-secret
  password_secret: aws-secret
  runner_pull_policy: IfNotPresent
Chapter 10. Appendix: Red Hat Ansible Automation Platform custom resources
This appendix provides a reference for the Ansible Automation Platform custom resources for various deployment scenarios.
You can link in existing components by specifying the component name under the name variable. You can also use name to create a custom name for a new component.
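Each of the following examples is a complete AnsibleAutomationPlatform CR. As a sketch, you can save an example to a file and apply it to the namespace where the operator is installed (the namespace is a placeholder):
oc apply -f aap-existing-controller-and-hub-new-eda.yml -n <namespace>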
10.1. Custom resources
10.1.1. aap-existing-controller-and-hub-new-eda.yml
---
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: myaap
spec:
  # Development purposes only
  no_log: false
  controller:
    name: existing-controller
    disabled: false
  eda:
    disabled: false
  hub:
    name: existing-hub
    disabled: false
10.1.2. aap-all-defaults.yml
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: myaap
spec:
  # Development purposes only
  no_log: false
  # Platform
  ## uncomment to test bundle certs
  # bundle_cacert_secret: gateway-custom-certs
  # Components
  hub:
    disabled: false
    ## uncomment if using file storage for Content pod
    storage_type: file
    file_storage_storage_class: nfs-local-rwx
    file_storage_size: 10Gi
    ## uncomment if using S3 storage for Content pod
    # storage_type: S3
    # object_storage_s3_secret: example-galaxy-object-storage
    ## uncomment if using Azure storage for Content pod
    # storage_type: azure
    # object_storage_azure_secret: azure-secret-name
  # lightspeed:
  #   disabled: true
# End state:
# * Automation controller deployed and named: myaap-controller
# * Event-Driven Ansible deployed and named: myaap-eda
# * Automation hub deployed and named: myaap-hub
10.1.3. aap-existing-controller-only.yml
---
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: myaap
spec:
  # Development purposes only
  no_log: false
  controller:
    name: existing-controller
  eda:
    disabled: true
  hub:
    disabled: true
    ## uncomment if using file storage for Content pod
    # storage_type: file
    # file_storage_storage_class: nfs-local-rwx
    # file_storage_size: 10Gi
    ## uncomment if using S3 storage for Content pod
    # storage_type: S3
    # object_storage_s3_secret: example-galaxy-object-storage
    ## uncomment if using Azure storage for Content pod
    # storage_type: azure
    # object_storage_azure_secret: azure-secret-name
# End state:
# * Automation controller: existing-controller registered with Ansible Automation Platform UI
# * Event-Driven Ansible deployed and named: myaap-eda
# * Automation hub deployed and named: myaap-hub
10.1.4. aap-existing-hub-and-controller.yml
---
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: myaap
spec:
  # Development purposes only
  no_log: false
  controller:
    name: existing-controller
    disabled: false
  eda:
    disabled: true
  hub:
    name: existing-hub
    disabled: false
# End state:
# * Automation controller: existing-controller registered with Ansible Automation Platform UI
# * Event-Driven Ansible deployed and named: myaap-eda
# * Automation hub: existing-hub registered with Ansible Automation Platform UI
10.1.5. aap-existing-hub-controller-eda.yml
---
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: myaap
spec:
  # Development purposes only
  no_log: false
  controller:
    name: existing-controller # <-- this is the name of the existing AutomationController CR
    disabled: false
  eda:
    name: existing-eda
    disabled: false
  hub:
    name: existing-hub
    disabled: false
# End state:
# * Automation controller: existing-controller registered with Ansible Automation Platform UI
# * Event-Driven Ansible: existing-eda registered with Ansible Automation Platform UI
# * Automation hub: existing-hub registered with Ansible Automation Platform UI
#
# Note: The automation controller, Event-Driven Ansible, and automation hub names must match the names of the
# existing automation controller, Event-Driven Ansible, and automation hub CRs in the same namespace as the
# Ansible Automation Platform CR. If the names do not match, the Ansible Automation Platform CR will not be able
# to register the existing automation controller, Event-Driven Ansible, and automation hub with the Ansible
# Automation Platform UI, and will instead deploy new automation controller, Event-Driven Ansible, and
# automation hub instances.
10.1.6. aap-fresh-controller-eda.yml
---
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: myaap
spec:
  # Development purposes only
  no_log: false
  controller:
    disabled: false
  eda:
    disabled: false
  hub:
    disabled: true
    ## uncomment if using file storage for Content pod
    storage_type: file
    file_storage_storage_class: nfs-local-rwx
    file_storage_size: 10Gi
    ## uncomment if using S3 storage for Content pod
    # storage_type: S3
    # object_storage_s3_secret: example-galaxy-object-storage
    ## uncomment if using Azure storage for Content pod
    # storage_type: azure
    # object_storage_azure_secret: azure-secret-name
# End state:
# * Automation controller deployed and named: myaap-controller
# * Event-Driven Ansible deployed and named: myaap-eda
# * Automation hub disabled
# * Red Hat Ansible Lightspeed disabled
10.1.7. aap-fresh-external-db.yml
---
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: myaap
spec:
  # Development purposes only
  no_log: false
  controller:
    disabled: false
  eda:
    disabled: false
  hub:
    disabled: false
    ## uncomment if using file storage for Content pod
    storage_type: file
    file_storage_storage_class: nfs-local-rwx
    file_storage_size: 10Gi
    ## uncomment if using S3 storage for Content pod
    # storage_type: S3
    # object_storage_s3_secret: example-galaxy-object-storage
    ## uncomment if using Azure storage for Content pod
    # storage_type: azure
    # object_storage_azure_secret: azure-secret-name
# End state:
# * Automation controller deployed and named: myaap-controller
# * Event-Driven Ansible deployed and named: myaap-eda
# * Automation hub deployed and named: myaap-hub
10.1.8. aap-configuring-external-db-all-default-components.yml
---
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: myaap
spec:
  database:
    database_secret: external-postgres-configuration-gateway
  controller:
    postgres_configuration_secret: external-postgres-configuration-controller
  hub:
    postgres_configuration_secret: external-postgres-configuration-hub
  eda:
    database:
      database_secret: external-postgres-configuration-eda
10.1.9. aap-configuring-existing-external-db-all-default-components.yml
---
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: myaap
spec:
  database:
    database_secret: external-postgres-configuration-gateway
The system uses the external database for platform gateway, while automation controller, automation hub, and Event-Driven Ansible continue to use the existing databases that were used in 2.4.
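The external-postgres-configuration-gateway secret referenced above must already exist in the same namespace. The following is a sketch of such a secret; the key names follow the operator's external database convention, and the host and credential values are placeholders to verify against the documentation for your operator version:
apiVersion: v1
kind: Secret
metadata:
  name: external-postgres-configuration-gateway
type: Opaque
stringData:
  host: external-postgres.example.com
  port: "5432"
  database: gateway
  username: gateway
  password: <password>
  sslmode: prefer
  type: unmanaged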
10.1.10. aap-configuring-external-db-with-lightspeed-enabled.yml
---
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: myaap
spec:
  database:
    database_secret: external-postgres-configuration-gateway
  controller:
    postgres_configuration_secret: external-postgres-configuration-controller
  hub:
    postgres_configuration_secret: external-postgres-configuration-hub
  eda:
    database:
      database_secret: external-postgres-configuration-eda
  lightspeed:
    disabled: false
    database:
      database_secret: <secret-name>-postgres-configuration
    auth_config_secret_name: 'auth-configuration-secret'
    model_config_secret_name: 'model-configuration-secret'
You can follow the Red Hat Ansible Lightspeed with IBM watsonx Code Assistant User Guide for help with creating the model and auth secrets.
10.1.11. aap-fresh-install-local-management.yml
--- apiVersion: aap.ansible.com/v1alpha1 kind: AnsibleAutomationPlatform metadata: name: myaap spec: # Development purposes only no_log: false # Platform ## uncomment to test bundle certs # bundle_cacert_secret: gateway-custom-certs # Components controller: disabled: false extra_settings: - setting: ALLOW_LOCAL_RESOURCE_MANAGEMENT value: 'True' eda: disabled: false extra_settings: - setting: EDA_ALLOW_LOCAL_RESOURCE_MANAGEMENT value: '@bool True' hub: disabled: false ## uncomment if using file storage for Content pod storage_type: file file_storage_storage_class: nfs-local-rwx file_storage_size: 10Gi pulp_settings: ALLOW_LOCAL_RESOURCE_MANAGEMENT: True # cache_enabled: false # redirect_to_object_storage: "False" # analytics: false # galaxy_collection_signing_service: "" # galaxy_container_signing_service: "" # token_auth_disabled: 'False' # token_signature_algorithm: 'ES256' ## uncomment if using S3 storage for Content pod # storage_type: S3 # object_storage_s3_secret: example-galaxy-object-storage ## uncomment if using Azure storage for Content pod # storage_type: azure # object_storage_azure_secret: azure-secret-name # Development purposes only no_log: false # lightspeed: # disabled: true # End state: # * Automation controller deployed and named: myaap-controller # * * Event-Driven Ansible deployed and named: myaap-eda # * * Automation hub deployed and named: myaap-hub
---
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
name: myaap
spec:
# Development purposes only
no_log: false
# Platform
## uncomment to test bundle certs
# bundle_cacert_secret: gateway-custom-certs
# Components
controller:
disabled: false
extra_settings:
- setting: ALLOW_LOCAL_RESOURCE_MANAGEMENT
value: 'True'
eda:
disabled: false
extra_settings:
- setting: EDA_ALLOW_LOCAL_RESOURCE_MANAGEMENT
value: '@bool True'
hub:
disabled: false
## uncomment if using file storage for Content pod
storage_type: file
file_storage_storage_class: nfs-local-rwx
file_storage_size: 10Gi
pulp_settings:
ALLOW_LOCAL_RESOURCE_MANAGEMENT: True
# cache_enabled: false
# redirect_to_object_storage: "False"
# analytics: false
# galaxy_collection_signing_service: ""
# galaxy_container_signing_service: ""
# token_auth_disabled: 'False'
# token_signature_algorithm: 'ES256'
## uncomment if using S3 storage for Content pod
# storage_type: S3
# object_storage_s3_secret: example-galaxy-object-storage
## uncomment if using Azure storage for Content pod
# storage_type: azure
# object_storage_azure_secret: azure-secret-name
# Development purposes only
no_log: false
# lightspeed:
# disabled: true
# End state:
# * Automation controller deployed and named: myaap-controller
# * Event-Driven Ansible deployed and named: myaap-eda
# * Automation hub deployed and named: myaap-hub
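The commented S3 storage options in these examples reference a secret named example-galaxy-object-storage, which must exist before deployment. A minimal sketch of that secret, using the key names expected by the automation hub object storage configuration; all values are placeholders for your bucket details:
---
apiVersion: v1
kind: Secret
metadata:
  name: example-galaxy-object-storage
  namespace: <target_namespace>
stringData:
  s3-access-key-id: <s3_access_key_id>
  s3-secret-access-key: <s3_secret_access_key>
  s3-bucket-name: <s3_bucket_name>
  s3-region: <s3_region>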
10.1.13. aap-fresh-install-with-settings.yml
---
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
name: myaap
spec:
# Development purposes only
no_log: false
image_pull_policy: Always
# Platform
## uncomment to test bundle certs
# bundle_cacert_secret: gateway-custom-certs
# Components
controller:
disabled: false
image_pull_policy: Always
extra_settings:
- setting: MAX_PAGE_SIZE
value: '501'
eda:
disabled: false
image_pull_policy: Always
extra_settings:
- setting: EDA_MAX_PAGE_SIZE
value: '501'
hub:
disabled: false
image_pull_policy: Always
## uncomment if using file storage for Content pod
storage_type: file
file_storage_storage_class: rook-cephfs
file_storage_size: 10Gi
## uncomment if using S3 storage for Content pod
# storage_type: S3
# object_storage_s3_secret: example-galaxy-object-storage
## uncomment if using Azure storage for Content pod
# storage_type: azure
# object_storage_azure_secret: azure-secret-name
pulp_settings:
MAX_PAGE_SIZE: 501
cache_enabled: false
# lightspeed:
# disabled: true
# End state:
# * Automation controller deployed and named: myaap-controller
# * Event-Driven Ansible deployed and named: myaap-eda
# * Automation hub deployed and named: myaap-hub
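Likewise, the commented Azure storage options reference a secret (azure-secret-name in these examples) that must exist before deployment. A minimal sketch, using the key names expected by the automation hub Azure Blob Storage configuration; all values are placeholders:
---
apiVersion: v1
kind: Secret
metadata:
  name: azure-secret-name
  namespace: <target_namespace>
stringData:
  azure-account-name: <azure_account_name>
  azure-account-key: <azure_account_key>
  azure-container: <azure_container>
  azure-container-path: <azure_container_path>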
10.1.14. aap-fresh-install.yml
---
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
name: myaap
spec:
# Development purposes only
no_log: false
# Redis Mode
# redis_mode: cluster
# Platform
## uncomment to test bundle certs
# bundle_cacert_secret: gateway-custom-certs
# extra_settings:
# - setting: MAX_PAGE_SIZE
# value: '501'
# Components
controller:
disabled: false
eda:
disabled: false
hub:
disabled: false
## uncomment if using file storage for Content pod
storage_type: file
file_storage_storage_class: nfs-local-rwx
file_storage_size: 10Gi
## uncomment if using S3 storage for Content pod
# storage_type: S3
# object_storage_s3_secret: example-galaxy-object-storage
## uncomment if using Azure storage for Content pod
# storage_type: azure
# object_storage_azure_secret: azure-secret-name
# lightspeed:
# disabled: true
# End state:
# * Automation controller deployed and named: myaap-controller
# * Event-Driven Ansible deployed and named: myaap-eda
# * Automation hub deployed and named: myaap-hub
10.1.15. aap-fresh-only-controller.yml
---
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
name: myaap
spec:
# Development purposes only
no_log: false
controller:
disabled: false
eda:
disabled: true
hub:
disabled: true
## uncomment if using file storage for Content pod
# storage_type: file
# file_storage_storage_class: nfs-local-rwx
# file_storage_size: 10Gi
## uncomment if using S3 storage for Content pod
# storage_type: S3
# object_storage_s3_secret: example-galaxy-object-storage
## uncomment if using Azure storage for Content pod
# storage_type: azure
# object_storage_azure_secret: azure-secret-name
# End state:
# * Automation controller deployed and named: myaap-controller
# * Event-Driven Ansible is not deployed
# * Automation hub is not deployed
10.1.16. aap-fresh-only-hub.yml
---
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
name: myaap
spec:
# Development purposes only
no_log: false
controller:
disabled: true
eda:
disabled: true
hub:
disabled: false
## uncomment if using file storage for Content pod
storage_type: file
file_storage_storage_class: nfs-local-rwx
file_storage_size: 10Gi
# # AaaS Hub Settings
# pulp_settings:
# cache_enabled: false
## uncomment if using S3 storage for Content pod
# storage_type: S3
# object_storage_s3_secret: example-galaxy-object-storage
## uncomment if using Azure storage for Content pod
# storage_type: azure
# object_storage_azure_secret: azure-secret-name
lightspeed:
disabled: true
# End state:
# * Automation controller disabled
# * Event-Driven Ansible disabled
# * Automation hub deployed and named: myaap-hub
# * Red Hat Ansible Lightspeed disabled
10.1.17. aap-lightspeed-enabled.yml
---
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
name: myaap
spec:
# Development purposes only
no_log: false
controller:
disabled: false
eda:
disabled: false
hub:
disabled: false
## uncomment if using file storage for Content pod
storage_type: file
file_storage_storage_class: nfs-local-rwx
file_storage_size: 10Gi
## uncomment if using S3 storage for Content pod
# storage_type: S3
# object_storage_s3_secret: example-galaxy-object-storage
## uncomment if using Azure storage for Content pod
# storage_type: azure
# object_storage_azure_secret: azure-secret-name
lightspeed:
disabled: false
# End state:
# * Automation controller deployed and named: myaap-controller
# * Event-Driven Ansible deployed and named: myaap-eda
# * Automation hub deployed and named: myaap-hub
# * Red Hat Ansible Lightspeed deployed and named: myaap-lightspeed
10.1.18. gateway-only.yml
---
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
name: myaap
spec:
# Development purposes only
no_log: false
controller:
disabled: true
eda:
disabled: true
hub:
disabled: true
lightspeed:
disabled: true
# End state:
# * Platform gateway deployed and named: myaap-gateway
# * UI is reachable at: https://myaap-gateway-gateway.apps.ocp4.example.com
# * Automation controller is not deployed
# * Event-Driven Ansible is not deployed
# * Automation hub is not deployed
# * Red Hat Ansible Lightspeed is not deployed
10.1.19. eda-max-running-activations.yml
---
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
name: myaap
spec:
eda:
extra_settings:
- setting: EDA_MAX_RUNNING_ACTIVATIONS
value: "15" # Setting this value to "-1" means there will be no limit