Installing on OpenShift Container Platform


Red Hat Ansible Automation Platform 2.6

Install and configure Ansible Automation Platform operator on OpenShift Container Platform

Red Hat Customer Content Services

Abstract

This guide provides procedures and reference information for the supported installation scenarios for the Red Hat Ansible Automation Platform operator on OpenShift Container Platform.

Preface

Thank you for your interest in Red Hat Ansible Automation Platform. Ansible Automation Platform is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments.

This guide helps you to understand the installation, migration and upgrade requirements for deploying the Ansible Automation Platform Operator on OpenShift Container Platform.

Providing feedback on Red Hat documentation

If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request.

As a system administrator, you can use Ansible Automation Platform Operator to deploy new Ansible Automation Platform instances in your OpenShift environment.

Red Hat Ansible Automation Platform is supported on both Red Hat Enterprise Linux and Red Hat OpenShift.

OpenShift operators help install and automate day-2 operations of complex, distributed software on Red Hat OpenShift Container Platform. The Ansible Automation Platform Operator enables you to deploy and manage Ansible Automation Platform components on Red Hat OpenShift Container Platform.

You can use this section to help plan your Red Hat Ansible Automation Platform installation on your Red Hat OpenShift Container Platform environment. Before installing, review the supported installation scenarios to determine which meets your requirements.

1.1.1. About Ansible Automation Platform Operator

The Ansible Automation Platform Operator provides cloud-native, push-button deployment of new Ansible Automation Platform instances in your OpenShift environment.

The Ansible Automation Platform Operator includes resource types to deploy and manage instances of automation controller and private automation hub.

It also includes automation controller job resources for defining and launching jobs inside your automation controller deployments.

Deploying Ansible Automation Platform instances with a Kubernetes native operator offers several advantages over launching instances from a playbook deployed on Red Hat OpenShift Container Platform, including upgrades and full lifecycle support for your Red Hat Ansible Automation Platform deployments.

You can install the Ansible Automation Platform Operator from the Red Hat Operators catalog in OperatorHub.

For information about the Ansible Automation Platform Operator system requirements and infrastructure topology, see Operator topologies in Tested deployment models.

The Ansible Automation Platform Operator for installing Ansible Automation Platform 2.6 is available on OpenShift Container Platform versions 4.12 through 4.17 and later.

You can use the OperatorHub on the Red Hat OpenShift Container Platform web console to install Ansible Automation Platform Operator.

Alternatively, you can install Ansible Automation Platform Operator from the OpenShift Container Platform command-line interface (CLI), oc. See Installing Red Hat Ansible Automation Platform Operator from the OpenShift Container Platform CLI for help with this.

After you have installed Ansible Automation Platform Operator you must create an Ansible Automation Platform custom resource (CR). This enables you to manage Ansible Automation Platform components from a single unified interface known as the platform gateway. In version 2.6, you must create an Ansible Automation Platform CR, even if you have existing automation controller, automation hub, or Event-Driven Ansible components.

If existing components have already been deployed, you must specify these components on the Ansible Automation Platform CR. You must create the custom resource in the same namespace as the existing components.

Supported scenarios

  • Ansible Automation Platform CR for a blank slate install with automation controller, automation hub, and Event-Driven Ansible enabled
  • Ansible Automation Platform CR with only automation controller enabled
  • Ansible Automation Platform CR with automation controller and automation hub enabled
  • Ansible Automation Platform CR with automation controller and Event-Driven Ansible enabled

Supported scenarios with existing components

  • Ansible Automation Platform CR created in the same namespace as an existing automation controller CR, with the automation controller name specified in the Ansible Automation Platform CR spec
  • The same scenario with automation controller and automation hub
  • The same scenario with automation controller, automation hub, and Event-Driven Ansible
  • The same scenario with automation controller and Event-Driven Ansible
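The existing-component scenarios above can be sketched as a minimal Ansible Automation Platform CR that names a previously deployed automation controller in its spec. The names used here (example, aap, existing-controller) are placeholders:

```yaml
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: example
  # Must be the same namespace as the existing component CRs
  namespace: aap
spec:
  controller:
    disabled: false
    # Placeholder: the name of your existing automation controller CR
    name: existing-controller
```

The same pattern applies to automation hub and Event-Driven Ansible: specify the existing CR name under the corresponding component section.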

1.1.4. Custom resources

You can define custom resources for each primary installation workflow.

In Ansible Automation Platform version 2.6, the Ansible Automation Platform Operator on OpenShift Container Platform creates OpenShift Routes and configures your cross-site request forgery (CSRF) settings automatically.

When using external ingress, you must configure CSRF on the ingress. For help with this, see Configuring your CSRF settings for your platform gateway operator ingress.
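As an illustrative sketch only, CSRF trusted origins for an external ingress hostname are typically supplied as Django-style settings on the CR. The extra_settings key and the hostname below are assumptions to verify against the linked procedure:

```yaml
spec:
  extra_settings:
    - setting: CSRF_TRUSTED_ORIGINS
      value:
        # Hypothetical external ingress hostname; replace with your own
        - https://aap.example.org
```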

Important

In previous versions, CSRF was configurable through the automation controller user interface. In version 2.6, the automation controller settings are still present but have no impact on CSRF settings for the platform gateway.

The following table helps to clarify which settings are applicable for which component.

UI setting         Applicable for

Subscription       Automation controller
User Preferences   Platform gateway
User interface     Platform gateway
System             Automation controller
Job                Automation controller
Logging            Automation controller
Troubleshooting    Automation controller

1.1.6. Additional resources

To learn more about OpenShift Container Platform OperatorHub, review the OpenShift Container Platform documentation.

Use this procedure to guide you through deploying the Red Hat Ansible Automation Platform Operator through the Operators section on Red Hat OpenShift Container Platform, selecting the appropriate update channel and installation mode, and then verifying the successful deployment.

When installing your Ansible Automation Platform Operator you have a choice of a namespace-scoped operator or a cluster-scoped operator. This depends on the update channel you choose, stable-2.x or stable-2.x-cluster-scoped.

A namespace-scoped operator is confined to one namespace, offering tighter security. A cluster-scoped operator spans multiple namespaces, which grants broader permissions.

If you are managing multiple Ansible Automation Platform instances with the same Ansible Automation Platform Operator version, use the cluster-scoped operator, which uses a single operator to manage all Ansible Automation Platform custom resources in your cluster.

If you need multiple operator versions in the same cluster, you must use the namespace-scoped operator. The operator and the deployment share the same namespace. This can also be helpful when debugging because the operator logs pertain to custom resources in that namespace only.
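Before choosing a channel, you can list the channels that the catalog offers by querying the package manifest. The package name shown here matches the Subscription example later in this guide, but verify it against your catalog:

```shell
oc get packagemanifest ansible-automation-platform-operator \
  -n openshift-marketplace \
  -o jsonpath='{.status.channels[*].name}'
```

This requires access to a cluster with the redhat-operators catalog source available.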

Note

For information about the Ansible Automation Platform Operator system requirements and infrastructure topology, see Operator topologies in Tested deployment models.

For help with installing a namespace or cluster-scoped operator see the following procedure.

Important

You cannot deploy Ansible Automation Platform in the default namespace on your OpenShift cluster. The aap namespace is recommended. You can use a custom namespace, but it should run only Ansible Automation Platform.

Prerequisites

Procedure

  1. Log in to Red Hat OpenShift Container Platform.
  2. Navigate to Operators → OperatorHub.
  3. Search for Ansible Automation Platform and click Install.
  4. Select an Update Channel:

    • stable-2.x: installs a namespace-scoped operator, which limits deployments of automation hub and automation controller instances to the namespace the operator is installed in. This is suitable for most cases. The stable-2.x channel does not require administrator privileges and uses fewer resources because it monitors only a single namespace.
    • stable-2.x-cluster-scoped: installs the Ansible Automation Platform Operator in a single namespace that manages Ansible Automation Platform custom resources and deployments in all namespaces. The Ansible Automation Platform Operator requires administrator privileges for all namespaces in the cluster.
  5. Select Installation Mode, Installed Namespace, and Approval Strategy.
  6. Click Install.

Verification

The installation process begins. When installation finishes, a modal appears notifying you that the Ansible Automation Platform Operator is installed in the specified namespace.

  • Click View Operator to view your newly installed Ansible Automation Platform Operator and verify the following operator custom resources are present:
    Automation controller
      • Automation Controller
      • Automation Controller Backup
      • Automation Controller Restore
      • Automation Controller Mesh Ingress

    Automation hub
      • Automation Hub
      • Automation Hub Backup
      • Automation Hub Restore

    Event-Driven Ansible (EDA)
      • EDA
      • EDA Backup
      • EDA Restore

    Red Hat Ansible Lightspeed
      • Ansible Lightspeed
  • Verify that the Ansible Automation Platform operator displays a Succeeded status.

Use these instructions to install the Ansible Automation Platform Operator on Red Hat OpenShift Container Platform from the OpenShift Container Platform command-line interface (CLI) using the oc command.

Use this procedure to subscribe a namespace to an operator.

Important

You cannot deploy Ansible Automation Platform in the default namespace on your OpenShift cluster. The ansible-automation-platform namespace is recommended. You can use a custom namespace, but it should run only Ansible Automation Platform.

Prerequisites

  • Access to Red Hat OpenShift Container Platform using an account with operator installation permissions.
  • The OpenShift Container Platform CLI oc command is installed on your local system. Refer to Installing the OpenShift CLI in the Red Hat OpenShift Container Platform product documentation for further information.

Procedure

  1. Create a project for the operator.

    oc new-project ansible-automation-platform
  2. Create a file called sub.yaml.
  3. Add the following YAML code to the sub.yaml file.

    ---
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: ansible-automation-platform-operator
      namespace: ansible-automation-platform
    spec:
      targetNamespaces:
        - ansible-automation-platform
    ---
    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: ansible-automation-platform
      namespace: ansible-automation-platform
    spec:
      channel: 'stable-2.6'
      installPlanApproval: Automatic
      name: ansible-automation-platform-operator
      source: redhat-operators
      sourceNamespace: openshift-marketplace
    ---

    This file creates a Subscription object called ansible-automation-platform that subscribes the ansible-automation-platform namespace to the ansible-automation-platform-operator operator.

  4. Run the oc apply command to create the objects specified in the sub.yaml file:

    oc apply -f sub.yaml
  5. Verify that the CSV PHASE reports Succeeded before proceeding by running the oc get csv command:

    oc get csv -n ansible-automation-platform
    
    NAME                               DISPLAY                       VERSION              REPLACES                           PHASE
    aap-operator.v2.6.0-0.1728520175   Ansible Automation Platform   2.6.0+0.1728520175   aap-operator.v2.6.0-0.1727875185   Succeeded
  6. Create an AnsibleAutomationPlatform object called example in the ansible-automation-platform namespace.

    To change the name of the Ansible Automation Platform instance and its components from example, edit the name field in the metadata: section and replace example with the name that you want to use:

    oc apply -f - <<EOF
    apiVersion: aap.ansible.com/v1alpha1
    kind: AnsibleAutomationPlatform
    metadata:
      name: example
      namespace: ansible-automation-platform
    spec:
      # Platform
      image_pull_policy: IfNotPresent
      # Components
      controller:
        disabled: false
      eda:
        disabled: false
      hub:
        disabled: false
        ## Modify to contain your RWM storage class name
        storage_type: file
        file_storage_storage_class: <your-read-write-many-storage-class>
        file_storage_size: 10Gi
    
        ## uncomment if using S3 storage for Content pod
        # storage_type: S3
        # object_storage_s3_secret: example-galaxy-object-storage
    
        ## uncomment if using Azure storage for Content pod
        # storage_type: azure
        # object_storage_azure_secret: azure-secret-name
      lightspeed:
        disabled: true
    EOF
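After applying the CR, you can watch the operator reconcile it. These commands require access to your cluster and use the names from the example above:

```shell
# Check the platform CR status
oc get ansibleautomationplatform example -n ansible-automation-platform

# Watch component pods come up
oc get pods -n ansible-automation-platform -w
```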

As a namespace administrator, you can use Ansible Automation Platform gateway to manage new Ansible Automation Platform components in your OpenShift environment.

The Ansible Automation Platform gateway uses the Ansible Automation Platform custom resource to manage and integrate the following Ansible Automation Platform components into a unified user interface:

  • Automation controller
  • Automation hub
  • Event-Driven Ansible
  • Red Hat Ansible Lightspeed (this feature is disabled by default; you must opt in to use it)

Before you can deploy the platform gateway you must have Ansible Automation Platform Operator installed in a namespace. If you have not installed Ansible Automation Platform Operator see Installing the Red Hat Ansible Automation Platform Operator on Red Hat OpenShift Container Platform.

Note

Platform gateway is only available under Ansible Automation Platform Operator version 2.6. Every component deployed under Ansible Automation Platform Operator 2.6 defaults to version 2.6.

If you have the Ansible Automation Platform Operator and some or all of the Ansible Automation Platform components installed, see Deploying the platform gateway with existing Ansible Automation Platform components for how to proceed.

You can link any components of Ansible Automation Platform that you have already installed to a new Ansible Automation Platform instance.

The following procedure simulates a scenario where you have automation controller as an existing component and want to add automation hub and Event-Driven Ansible.

Procedure

  1. Log in to Red Hat OpenShift Container Platform.
  2. Navigate to Operators → Installed Operators.
  3. Select your Ansible Automation Platform Operator deployment.
  4. Click Subscriptions and edit your Update channel to stable-2.6.
  5. Click Details and on the Ansible Automation Platform tile click Create instance.
  6. From the Create Ansible Automation Platform page enter a name for your instance in the Name field.

    • When deploying an Ansible Automation Platform instance, ensure that auto_update is set to the default value of false on your existing automation controller instance in order for the integration to work.
  7. Click YAML view and copy in the following:

    apiVersion: aap.ansible.com/v1alpha1
    kind: AnsibleAutomationPlatform
    metadata:
      name: example-aap
      namespace: aap
    spec:
      database:
        resource_requirements:
          requests:
            cpu: 200m
            memory: 512Mi
        storage_requirements:
          requests:
            storage: 100Gi
    
      # Platform
      image_pull_policy: IfNotPresent
    
      # Components
      controller:
        disabled: false
        name: existing-controller-name
      eda:
        disabled: false
      hub:
        disabled: false
        ## Modify to contain your RWM storage class name
        storage_type: file
        file_storage_storage_class: <your-read-write-many-storage-class>
        file_storage_size: 10Gi
    
        ## uncomment if using S3 storage for Content pod
        # storage_type: S3
        # object_storage_s3_secret: example-galaxy-object-storage
    
        ## uncomment if using Azure storage for Content pod
        # storage_type: azure
        # object_storage_azure_secret: azure-secret-name
    1. For new components, if you do not specify a name, a default name is generated.
  8. Click Create.
  9. To access your new instance, see Accessing the platform gateway.

    Note

    If you have an existing automation controller with a managed Postgres pod, your automation controller instance continues to use that original Postgres pod after you create the Ansible Automation Platform resource. A fresh install would instead use a single managed Postgres pod for all instances.

Use the Ansible Automation Platform instance as your default. This instance links the automation controller, automation hub, and Event-Driven Ansible deployments to a single interface.

Procedure

  1. Log in to Red Hat OpenShift Container Platform.
  2. Navigate to Networking → Routes.
  3. Click the link under Location for Ansible Automation Platform.
  4. This redirects you to the Ansible Automation Platform login page. Enter "admin" as your username in the Username field.
  5. For the password you must:

    1. Go to Workloads → Secrets.
    2. Click <your instance name>-admin-password and copy the password.
    3. Paste the password into the Password field.
  6. Click Login.
  7. Apply your subscription:

    1. Click Subscription manifest or Username/password.
    2. Upload your manifest or enter your username and password.
    3. Select your subscription from the Subscription list.
    4. Click Next. This redirects you to the Analytics page.
  8. Click Next.
  9. Select the I agree to the terms of the license agreement checkbox.
  10. Click Next.

Verification

You now have access to the platform gateway user interface.

Troubleshooting

If you cannot access Ansible Automation Platform, see Frequently asked questions on platform gateway for help with troubleshooting and debugging.

You can use the OpenShift Container Platform CLI to fetch the web address and the password of the platform gateway instance that you created. You need both to log in to the platform gateway.

2.4.1. Fetching the platform gateway web address

A Red Hat OpenShift Container Platform route exposes a service at a host name, so that external clients can reach it by name. When you created the platform gateway instance, a route was created for it. The route inherits the name that you assigned to the platform gateway object in the YAML file.

Procedure

  • Use the following command to fetch the routes:

    oc get routes -n <platform_namespace>

    Verification

    In the following example, you can see that the example platform gateway is running in the ansible-automation-platform namespace.

$ oc get routes -n ansible-automation-platform

NAME      HOST/PORT                                              PATH   SERVICES          PORT   TERMINATION     WILDCARD
example   example-ansible-automation-platform.apps-crc.testing          example-service   http   edge/Redirect   None

The address for the platform gateway instance is example-ansible-automation-platform.apps-crc.testing.
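If you only want the hostname, for example for scripting, a jsonpath query avoids parsing the table output. The route name example matches the instance name used above:

```shell
oc get route example -n ansible-automation-platform -o jsonpath='{.spec.host}'
```

This requires access to your cluster.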

2.4.2. Fetching the platform gateway password

The YAML block for the platform gateway instance in the AnsibleAutomationPlatform object assigns values to the name and admin_user keys.

Procedure

  1. Use these values in the following command to fetch the password for the platform gateway instance.

    oc get secret/<your instance name>-<admin_user>-password -o yaml
  2. The default value for admin_user is admin. Modify the command if you changed the admin username in the AnsibleAutomationPlatform object.

    The following example retrieves the password for a platform gateway object called example:

    oc get secret/example-admin-password -o yaml

    The base64 encoded password for the platform gateway instance is listed in the data field in the output:

    $ oc get secret/example-admin-password -o yaml
    
    apiVersion: v1
    data:
      password: ODzLODzLODzLODzLODzLODzLODzLODzLODzLODzLODzL
    kind: Secret
    metadata:
      labels:
        app.kubernetes.io/component: aap
        app.kubernetes.io/name: example
        app.kubernetes.io/operator-version: ""
        app.kubernetes.io/part-of: example
      name: example-admin-password
      namespace: ansible-automation-platform

2.4.3. Decoding the platform gateway password

After you have fetched your gateway password, you must decode it from base64.

Procedure

  • Run the following command to decode your password from base64:

    oc get secret/example-admin-password -o jsonpath={.data.password} | base64 --decode
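The base64 step itself can be tried locally without a cluster. This sketch encodes a placeholder password the way Kubernetes stores it in a Secret's data field, then decodes it as the command above does (s3cretPassw0rd is a made-up value, not a real secret):

```shell
# Encode a placeholder password as Kubernetes would store it in a Secret
encoded=$(printf '%s' 's3cretPassw0rd' | base64)

# Decode it back, as in the oc ... | base64 --decode command above
printf '%s' "$encoded" | base64 --decode
```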

Ansible is an open source software project and is licensed under the GNU General Public License version 3, as described in the Ansible Source Code.

You must have valid subscriptions attached before installing Ansible Automation Platform.

3.1. Trial and evaluation

You need a subscription to run Ansible Automation Platform. You can start by signing up for a free trial subscription.

  • Trial subscriptions for Ansible Automation Platform are available at the Red Hat product trial center.
  • Support is not included in a trial subscription or during an evaluation of the Ansible Automation Platform.

3.2. Node counting in subscriptions

The Ansible Automation Platform subscription defines the number of Managed Nodes that can be managed as part of your subscription.

For more information about managed node requirements for subscriptions, see How are "managed nodes" defined as part of the Red Hat Ansible Automation Platform offering.

Note

Ansible does not recycle node counts or reset automated hosts.

3.3. Subscription Types

Red Hat Ansible Automation Platform is provided as an annual subscription at various levels of support and numbers of managed machines.

All subscription levels include regular updates and releases of automation controller, Ansible, and any other components of the Ansible Automation Platform.

For more information, contact Ansible through the Red Hat Customer Portal or at the Ansible site.

3.4. Obtaining a manifest file

You can obtain a subscription manifest in the Subscription Allocations section of Red Hat Subscription Management.

After you obtain a subscription allocation, you can download its manifest file and upload it to activate Ansible Automation Platform.

To begin, log in to the Red Hat Customer Portal by using your administrator user account and follow the procedures listed.

3.4.1. Creating a subscription allocation

With a new subscription allocation you can set aside subscriptions and entitlements for a system that is currently offline or air-gapped. This is necessary before you download its manifest and upload it to Ansible Automation Platform.

Procedure

  1. From the Subscription Allocations page, click New Subscription Allocation.
  2. Enter a name for the allocation so that you can find it later.
  3. Select Type: Satellite 6.16 as the management application.
  4. Click Create.

3.4.2. Adding subscriptions to a subscription allocation

After you create an allocation, you can add the subscriptions you need for Ansible Automation Platform to run properly. This is necessary before you download the manifest and add it to Ansible Automation Platform.

Procedure

  1. From the Subscription Allocations page, click the name of the Subscription Allocation to which you want to add a subscription.
  2. Click the Subscriptions tab.
  3. Click Add Subscriptions.
  4. Enter the number of Ansible Automation Platform Entitlements you plan to add.
  5. Click Submit.

3.4.3. Downloading a manifest file

After you create an allocation with the appropriate subscriptions on it, you can download the manifest file from Red Hat Subscription Management.

Procedure

  1. From the Subscription Allocations page, click the name of the Subscription Allocation to which you want to generate a manifest.
  2. Click the Subscriptions tab.
  3. Click Export Manifest to download the manifest file.

    This downloads a file manifest_<allocation name>_<date>.zip to your default downloads folder.

Red Hat Ansible Automation Platform uses available subscriptions or a subscription manifest to allow the use of Ansible Automation Platform.

To obtain a subscription, you can do either of the following:

  • Use your Red Hat username and password, service account credentials, or Satellite credentials when you launch Ansible Automation Platform.
  • Upload a subscription manifest file either by using the Red Hat Ansible Automation Platform interface or manually in an Ansible Playbook.

3.5.1. Activate with credentials

Activate your Ansible Automation Platform subscription at the first launch by providing either Red Hat service account credentials or your personal Red Hat username and password. This process automatically retrieves and imports the required license, which grants the platform access to Red Hat content and entitlement services.

Note

You are opted in for Automation Analytics by default when you activate the platform on first login. This helps Red Hat improve the product by delivering you a much better user experience. You can opt out after activating Ansible Automation Platform by taking the following steps:

  1. From the navigation panel, select Settings → Automation Execution → System.
  2. Clear the Gather data for Automation Analytics option.
  3. Click Save.

Procedure

  1. Log in to Red Hat Ansible Automation Platform.
  2. Select the Service Account tab in the subscription wizard.
  3. Enter your Client ID and Client secret.
  4. Select your subscription from the Subscription list.

    Note

    You can also enter your Satellite username and password in the Satellite tab if your cluster nodes are registered to Satellite through Subscription Manager.

  5. Review the End User License Agreement and select I agree to the End User License Agreement.
  6. Click Finish.

Verification

After your subscription has been accepted, subscription details are displayed. A status of Compliant indicates that your subscription is in compliance with the number of hosts you have automated within your subscription count. Otherwise, your status shows as Out of Compliance, indicating that you have exceeded the number of hosts in your subscription. Other important information displayed includes the following:

Hosts automated
Host count automated by the job, which consumes the license count
Hosts imported
Host count considering all inventory sources (does not impact hosts remaining)
Hosts remaining
Total host count minus hosts automated
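The relationship between these counts is simple arithmetic, sketched here with hypothetical figures:

```shell
# Hypothetical subscription figures
total_hosts=100       # managed nodes included in the subscription
hosts_automated=37    # hosts automated by jobs, which consume the count

# Hosts remaining = total host count minus hosts automated
hosts_remaining=$((total_hosts - hosts_automated))
echo "$hosts_remaining"   # prints 63
```

Hosts imported does not enter this calculation; it counts hosts across all inventory sources without affecting hosts remaining.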

3.5.2. Activate with a manifest file

If you have a subscription manifest, you can upload the manifest file by using the Red Hat Ansible Automation Platform interface.

Note

You are opted in for Automation Analytics by default when you activate the platform on first login. This helps Red Hat improve the product by delivering you a much better user experience. You can opt out after activating Ansible Automation Platform by taking the following steps:

  1. From the navigation panel, select Settings → Automation Execution → System.
  2. Clear the Gather data for Automation Analytics option.
  3. Click Save.

Prerequisites

You must have a Red Hat subscription manifest file exported from the Red Hat Customer Portal. For more information, see Obtaining a manifest file.

Procedure

  1. Log in to Red Hat Ansible Automation Platform.

    1. If you are not immediately taken to the subscription wizard, go to Settings → Subscription.
  2. Select the Subscription manifest tab.
  3. Click Browse and select your manifest file.
  4. Review the End User License Agreement and select I agree to the End User License Agreement.
  5. Click Finish.

    Note

    If the BROWSE button is disabled on the subscription wizard page, clear the USERNAME and PASSWORD fields.

Verification

After your subscription has been accepted, subscription details are displayed. A status of Compliant indicates that your subscription is in compliance with the number of hosts you have automated within your subscription count. Otherwise, your status shows as Out of Compliance, indicating that you have exceeded the number of hosts in your subscription. Other important information displayed includes the following:

Hosts automated
Host count automated by the job, which uses the subscription count
Hosts imported
Host count considering all inventory sources (does not impact hosts remaining)
Hosts remaining
Total host count minus hosts automated

After installing the Ansible Automation Platform Operator, you can customize your deployment by setting configuration options for its nested components. You must define these parameters on the parent Ansible Automation Platform custom resource (CR). The operator automatically propagates the configuration to each component of the platform.

You can discover the configuration parameters for your Ansible Automation Platform Operator by viewing its Custom Resource (CR). The parameters are listed in the YAML schema.

Procedure

  1. Log in to Red Hat OpenShift Container Platform.
  2. Navigate to Operators → Installed Operators.
  3. Select your Ansible Automation Platform Operator deployment.
  4. Go to the Ansible Automation Platform tab and click the name of your CR.
  5. Switch to the YAML view tab to view and edit the configuration. The available parameters are listed in the YAML schema.

    Note

    If you cannot see the Schema panel, you might have closed or minimized the sidebar. Click View sidebar to reopen it.

The Ansible Automation Platform Operator manages multiple custom resources (CRs), each with its own configuration parameters. Use the oc explain command to discover all available configuration options for the AnsibleAutomationPlatform CR and its nested components.

Procedure

  1. To see all available configuration parameters for a top-level CR, run:

    oc explain ansibleautomationplatform.spec
  2. To view component-specific configuration options nested under the Ansible Automation Platform CR, query them through the component section:

    oc explain ansibleautomationplatform.spec.controller.postgres_configuration_secret
    oc explain ansibleautomationplatform.spec.controller.route_tls_termination_mechanism
    oc explain ansibleautomationplatform.spec.hub.storage_type
    oc explain ansibleautomationplatform.spec.eda.automation_server_url
  3. To explore all nested fields for a specific component, use the --recursive flag:

    oc explain ansibleautomationplatform.spec.controller --recursive
    oc explain ansibleautomationplatform.spec.hub --recursive
    oc explain ansibleautomationplatform.spec.eda --recursive
    Note

    You can also query individual component CRs directly if needed:

    oc explain automationcontroller.spec
    oc explain automationhub.spec
    oc explain eda.spec

    However, when configuring components through the Ansible Automation Platform CR (recommended approach), use the nested paths shown above.

4.3. Defining a parameter on a nested component

To define a parameter, such as resource_requirements, you add the configuration to the parent Ansible Automation Platform CR YAML. This ensures that the Ansible Automation Platform CR is the single source of truth for your deployment.

Note

The image and image_version, as well as the {component}_image and {component}_image_version parameters are intended for development or hotfix purposes only.

Do not use these in production environments. These settings bypass standard version management and can lead to configuration drift, inconsistent deployments, and difficulty troubleshooting issues.

Procedure

  1. Log in to OpenShift Container Platform.
  2. Navigate to OperatorsInstalled Operators.
  3. Select your Ansible Automation Platform Operator deployment.
  4. Go to the Ansible Automation Platform tab and click the name of your CR.
  5. In the YAML view tab, locate the spec section.
  6. Add the component section with the nested resource_requirements parameter and its value. For example, to configure the database component:

    spec:
      database:
        resource_requirements:
          requests:
            cpu: 200m
            memory: 512Mi
        storage_requirements:
          requests:
            storage: 100Gi
  7. Click Save to apply the changes. The operator automatically applies this configuration to the corresponding component.
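The same pattern applies to other components nested under the Ansible Automation Platform CR. For example, the following sketch sets the task container resources for automation controller; the values are illustrative, not recommendations:

```yaml
spec:
  controller:
    task_resource_requirements:
      requests:
        cpu: 500m
        memory: 1Gi
      limits:
        cpu: 1000m
        memory: 2Gi
```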

4.4. Customizing your resource requirements

Customize resource requirements for your Ansible Automation Platform components to optimize performance and resource allocation in your specific environment.

The following section provides a complete code block with the default resource requirements for each component. The main reasons for customizing your resource requirements include:

  • Performance tuning: increase resource limits for components that perform heavy workloads.
  • Quota compliance: stay within a ResourceQuota enforced by the cluster administrator.
  • Resource-constrained environments: decrease resource requests to conserve cluster resources in development or test environments.
  • Environment specifics: align the resource allocation with the capacity of your OpenShift or Kubernetes cluster nodes.

You can use this reference as a starting point. Copy the full code block for your Ansible Automation Platform instance and modify the values for the components you want to change. This method helps ensure all default settings are applied correctly, reducing the risk of deployment errors.

Note

When adding parameters, you can add them to the Ansible Automation Platform custom resource (CR) only; the operator propagates them down to the nested CRs.

When removing parameters, you have to remove them both from the Ansible Automation Platform CR and the nested CR, for example, the Automation Controller CR.

# Example of defining custom resource requirements for all components
# This can be useful for clusters with a ResourceQuota in the AAP namespace
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: aap
spec:

  # Platform
  api:
    replicas: 1
    resource_requirements:
      requests:
        cpu: 100m
        memory: 256Mi
      limits:
        cpu: 500m
        memory: 1000Mi
  redis:
    replicas: 1
    resource_requirements:
      requests:
        cpu: 100m
        memory: 256Mi
      limits:
        cpu: 500m
        memory: 500Mi
  database:
    resource_requirements:
      requests:
        cpu: 100m
        memory: 256Mi
      limits:
        cpu: 500m
        memory: 800Mi

  # Components
  controller:
    disabled: false
    uwsgi_processes: 2
    task_resource_requirements:
      requests:
        cpu: 100m
        memory: 150Mi
      limits:
        cpu: 1000m
        memory: 1200Mi
    web_resource_requirements:
      requests:
        cpu: 100m
        memory: 200Mi
      limits:
        cpu: 200m
        memory: 1600Mi
    ee_resource_requirements:
      requests:
        cpu: 100m
        memory: 64Mi
      limits:
        cpu: 1000m
        memory: 500Mi
    redis_resource_requirements:
      requests:
        cpu: 50m
        memory: 64Mi
      limits:
        cpu: 100m
        memory: 200Mi
    rsyslog_resource_requirements:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 250Mi
    init_container_resource_requirements:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 200Mi
  eda:
    disabled: false
    api:
      replicas: 1
      resource_requirements:
        requests:
          cpu: 50m
          memory: 350Mi
        limits:
          cpu: 500m
          memory: 400Mi
    ui:
      replicas: 1
      resource_requirements:
        requests:
          cpu: 25m
          memory: 64Mi
        limits:
          cpu: 500m
          memory: 150Mi
    scheduler:
      replicas: 1
      resource_requirements:
        requests:
          cpu: 50m
          memory: 200Mi
        limits:
          cpu: 500m
          memory: 250Mi
    worker:
      replicas: 2
      resource_requirements:
        requests:
          cpu: 25m
          memory: 200Mi
        limits:
          cpu: 250m
          memory: 250Mi
    default_worker:
      replicas: 1
      resource_requirements:
        requests:
          cpu: 25m
          memory: 200Mi
        limits:
          cpu: 500m
          memory: 400Mi
    activation_worker:
      replicas: 1
      resource_requirements:
        requests:
          cpu: 25m
          memory: 150Mi
        limits:
          cpu: 500m
          memory: 400Mi
    event_stream:
      replicas: 1
      resource_requirements:
        requests:
          cpu: 25m
          memory: 150Mi
        limits:
          cpu: 100m
          memory: 300Mi
  hub:
    disabled: false
    ## file storage for Content pod (used in this example)
    storage_type: file
    file_storage_storage_class: nfs-local-rwx  # replace with the rwx storage class for your cluster
    file_storage_size: 50Gi

    ## uncomment if using S3 storage for Content pod
    # storage_type: S3
    # object_storage_s3_secret: example-galaxy-object-storage

    ## uncomment if using Azure storage for Content pod
    # storage_type: azure
    # object_storage_azure_secret: azure-secret-name

    api:
      replicas: 1
      resource_requirements:
        requests:
          cpu: 150m
          memory: 256Mi
        limits:
          cpu: 800m
          memory: 500Mi
    content:
      replicas: 1
      resource_requirements:
        requests:
          cpu: 150m
          memory: 256Mi
        limits:
          cpu: 800m
          memory: 1200Mi
    worker:
      replicas: 1
      resource_requirements:
        requests:
          cpu: 150m
          memory: 256Mi
        limits:
          cpu: 800m
          memory: 400Mi
    web:
      replicas: 1
      resource_requirements:
        requests:
          cpu: 100m
          memory: 256Mi
        limits:
          cpu: 500m
          memory: 300Mi
    redis:
      replicas: 1
      resource_requirements:
        requests:
          cpu: 100m
          memory: 250Mi
        limits:
          cpu: 300m
          memory: 400Mi


  # lightspeed:
  #   disabled: true

# End state:
# * Controller deployed and named: aap-controller
# * EDA deployed and named: aap-eda
# * Hub deployed and named: aap-hub

After you have installed the Ansible Automation Platform Operator and set up your Ansible Automation Platform components, you can configure them to suit your environment.

You can use these instructions to further configure the platform gateway operator on Red Hat OpenShift Container Platform, specify custom resources, and deploy Ansible Automation Platform with an external database.

There are two scenarios for deploying Ansible Automation Platform with an external database:

Scenario: Fresh install

You must specify a single external database instance for the platform to use for the following:

  • Platform gateway
  • Automation controller
  • Automation hub
  • Event-Driven Ansible
  • Red Hat Ansible Lightspeed (if enabled)

See the aap-configuring-external-db-all-default-components.yml example in the 14.1. Custom resources section for help with this.

If using Red Hat Ansible Lightspeed, use the aap-configuring-external-db-with-lightspeed-enabled.yml example.

Scenario: Existing external database in 2.4

Your existing external database remains the same after upgrading, but you must specify the external-postgres-configuration-gateway secret (spec.database.database_secret) on the Ansible Automation Platform custom resource. For detailed steps, see Upgrading an external database for platform gateway on Red Hat Ansible Automation Platform Operator.

To deploy Ansible Automation Platform with an external database, you must first create a Kubernetes secret with credentials for connecting to the database.

By default, the Ansible Automation Platform Operator automatically creates and configures a managed PostgreSQL pod in the same namespace as your Ansible Automation Platform deployment. You can deploy Ansible Automation Platform with an external database instead of the managed PostgreSQL pod that the Ansible Automation Platform Operator automatically creates.

Using an external database lets you share and reuse resources and manually manage backups, upgrades, and performance optimizations.

Note

The same external database (PostgreSQL instance) can be used for automation hub, automation controller, and platform gateway, as long as the database names are different. In other words, you can have multiple databases with different names inside a single PostgreSQL instance.

The following section outlines the steps to configure an external database for your platform gateway with the Ansible Automation Platform Operator.

Prerequisite

The external database must be a PostgreSQL database of a version supported by the current release of Ansible Automation Platform. The external PostgreSQL instance credentials and connection information must be stored in a secret, which is then set on the platform gateway spec.

Note

Ansible Automation Platform 2.6 supports PostgreSQL 15 for its managed databases and additionally supports PostgreSQL 15, 16, and 17 for external databases.

If you choose to use an externally managed database with version 16 or 17 you must also rely on external backup and restore processes.

Procedure

  1. Create a postgres_configuration_secret YAML file, following the template below:

    apiVersion: v1
    kind: Secret
    metadata:
      name: external-postgres-configuration
      namespace: <target_namespace> 1
    stringData:
      host: "<external_ip_or_url_resolvable_by_the_cluster>" 2
      port: "<external_port>" 3
      database: "<desired_database_name>"
      username: "<username_to_connect_as>"
      password: "<password_to_connect_with>" 4
      type: "unmanaged"
    type: Opaque
    1. Namespace to create the secret in. This should be the same namespace you want to deploy to.
    2. The resolvable hostname for your database node.
    3. The external port; defaults to 5432.
    4. The password value must not contain single quotes ('), double quotes ("), or backslashes (\) to avoid issues during deployment, backup, or restore.
  2. Apply external-postgres-configuration-secret.yml to your cluster using the oc create command.

    $ oc create -f external-postgres-configuration-secret.yml
    Note

    The following example is for a platform gateway deployment. To configure an external database for all components, use the aap-configuring-external-db-all-default-components.yml example in the 14.1. Custom resources section.

  3. When creating your AnsibleAutomationPlatform custom resource object, specify the secret on your spec, following the example below:

    apiVersion: aap.ansible.com/v1alpha1
    kind: AnsibleAutomationPlatform
    metadata:
      name: example-aap
      namespace: aap
    spec:
      database:
        database_secret: external-postgres-configuration

When upgrading the Ansible Automation Platform Operator you may encounter an error like the following:

NotImplementedError: can't parse timestamptz with DateStyle 'Redwood, SHOW_TIME': '18-MAY-23 20:33:55.765755 +00:00'

Errors like this occur when you have an external database with an unexpected DateStyle set. You can refer to the following steps to resolve this issue.

Procedure

  1. Edit the /var/lib/pgsql/data/postgresql.conf file on the database server:

    # vi /var/lib/pgsql/data/postgresql.conf
  2. Find and comment out the line:

    #datestyle = 'Redwood, SHOW_TIME'
  3. Add the following setting immediately below the newly-commented line:

    datestyle = 'iso, mdy'
  4. Save and close the postgresql.conf file.
  5. Reload the database configuration:

    # systemctl reload postgresql
    Note

    Running this command does not disrupt database operations.
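After completing steps 2 and 3, the datestyle lines in the configuration file should read as follows:

```
#datestyle = 'Redwood, SHOW_TIME'
datestyle = 'iso, mdy'
```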

An HTTPS redirect for SAML allows you to log in once and access all of the platform gateway without needing to reauthenticate.

Prerequisites

  • You have successfully configured SAML in the gateway from the Ansible Automation Platform Operator. Refer to Configuring SAML authentication for help with this.

Procedure

  1. Log in to Red Hat OpenShift Container Platform.
  2. Go to OperatorsInstalled Operators.
  3. Select your Ansible Automation Platform Operator deployment.
  4. Select All Instances and go to your AnsibleAutomationPlatform instance.
  5. Click the ⋮ icon and then select Edit AnsibleAutomationPlatform.
  6. In the YAML view paste the following YAML code under the spec: section:

    spec:
      extra_settings:
        - setting: REDIRECT_IS_HTTPS
          value: '"True"'
  7. Click Save.

Verification

After you have added the REDIRECT_IS_HTTPS setting, wait for the pod to redeploy automatically. You can verify this setting makes it into the pod by running:

oc exec -it <gateway-pod-name> -- grep REDIRECT /etc/ansible-automation-platform/gateway/settings.py

The Red Hat Ansible Automation Platform Operator creates OpenShift Routes and configures your Cross-site request forgery (CSRF) settings automatically. When using external ingress, you must configure CSRF on the ingress to allow for cross-site requests. You can configure your platform gateway operator ingress under Advanced configuration.

Procedure

  1. Log in to Red Hat OpenShift Container Platform.
  2. Navigate to OperatorsInstalled Operators.
  3. Select your Ansible Automation Platform Operator deployment.
  4. Select the Ansible Automation Platform tab.
  5. For new instances, click Create AnsibleAutomationPlatform.

    1. For existing instances, you can edit the YAML view by clicking the ⋮ icon and then Edit AnsibleAutomationPlatform.
  6. Click Advanced Configuration.
  7. Under Ingress annotations, enter any annotations to add to the ingress.
  8. Under Ingress TLS secret, click the drop-down list and select a secret from the list.
  9. Under YAML view paste in the following code:

    spec:
      extra_settings:
        - setting: CSRF_TRUSTED_ORIGINS
          value:
            - https://my-aap-domain.com
  10. After you have configured your platform gateway, click Create at the bottom of the form view (Or Save in the case of editing existing instances).

Verification

Red Hat OpenShift Container Platform creates the pods. This may take a few minutes. You can view the progress by navigating to WorkloadsPods and locating the newly created instance. Verify that the following operator pods provided by the Red Hat Ansible Automation Platform Operator installation from platform gateway are running:


The operator manager controller pods include the following:

  • automation-controller-operator-controller-manager
  • automation-hub-operator-controller-manager
  • resource-operator-controller-manager
  • aap-gateway-operator-controller-manager
  • ansible-lightspeed-operator-controller-manager
  • eda-server-operator-controller-manager

After deploying automation controller, you can see the addition of the following pods:

  • Automation controller web
  • Automation controller task
  • Mesh ingress
  • Automation controller postgres

After deploying automation hub, you can see the addition of the following pods:

  • Automation hub web
  • Automation hub content
  • Automation hub API
  • Automation hub worker

After deploying EDA, you can see the addition of the following pods:

  • EDA API
  • EDA Activation
  • EDA worker
  • EDA stream
  • EDA Scheduler

After deploying platform gateway, you can see the addition of the following pods:

  • platform gateway
  • platform gateway redis
Note

A missing pod can indicate the need for a pull secret. Pull secrets are required for protected or private image registries. See Using image pull secrets for more information. You can diagnose this issue further by running oc describe pod <pod-name> to see if there is an ImagePullBackOff error on that pod.
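If a pull secret is needed, it can be referenced from the CR. The following sketch shows the image_pull_secrets field for the controller component; the secret name is illustrative:

```yaml
spec:
  controller:
    image_pull_secrets:
      - my-pull-secret  # replace with the name of your pull secret
```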

The postgres_extra_settings variable allows you to pass a list of custom name: value pairs directly to the PostgreSQL configuration file (/var/lib/pgsql/data/postgresql.conf) within the database pod.

Prerequisites

  • You have installed the Ansible Automation Platform Operator.

Procedure

  1. Log in to Red Hat OpenShift Container Platform.
  2. Go to Operators → Installed Operators.
  3. Select your Ansible Automation Platform Operator deployment.
  4. Select All Instances and go to your Ansible Automation Platform instance.
  5. Click the ⋮ icon and then select Edit Ansible Automation Platform.
  6. In the YAML view, locate the spec: section.
  7. Add the database section and the required settings under spec:. The following example sets the maximum number of connections:

    spec:
      database:
        postgres_extra_settings:
          - name: max_connections
            value: '1000'
  8. Click Save.

Verification

Inspect the PostgreSQL pod logs to verify the new settings.

Alternatively, you can run the following command to check the settings. Replace <aap postgres pod> with the name of your PostgreSQL pod.


$ oc exec -it <aap postgres pod> -- psql -d gateway -c "SHOW max_connections;"
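Because postgres_extra_settings is a list, you can pass several settings at once. The following sketch adds shared_buffers alongside max_connections; the values are illustrative:

```yaml
spec:
  database:
    postgres_extra_settings:
      - name: max_connections
        value: '1000'
      - name: shared_buffers
        value: '512MB'
```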

You can configure Mutual Transport Layer Security (mTLS) for the Event-Driven Ansible event stream by setting parameters in the AnsibleAutomationPlatform custom resource.

You can configure the following parameters nested under spec.eda.event_stream.

Variable: mtls

Description: Controls whether mTLS is enabled for the event stream endpoint.

Default value: true

Notes: Set the value to false to disable event stream mTLS during installation.

Variable: mtls_prefix

Description: Customizes the mTLS endpoint prefix for the event stream. You must provide a valid URL prefix.

Default value: /mtls/eda-event-streams

Notes: The value you provide is used as a prefix for the full endpoint URL. Customizing the full URL path is out of scope.

Custom resource example

The following example shows how to configure the event stream parameters in the AnsibleAutomationPlatform custom resource:

apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: myaap
  namespace: ansible-automation-platform
spec:
  eda:
    disabled: false
    event_stream:
      mtls: true
      mtls_prefix: /custom/path/mtls

5.1.7. Cascading client timeouts

Cascading timeouts ensure that if an outer layer of the system times out, inner processes also terminate to prevent resource exhaustion from orphaned requests.

Set the primary timeout at the Gateway level to allow Ansible Automation Platform to synchronize timeouts automatically across component applications.

5.1.7.1. Timeout relationships

The client_request_timeout serves as the primary value. Internal layers follow this logic:

  • The sum of the Envoy request_timeout and the gRPC authentication timeout (gateway_grpc_auth_service_timeout) must be less than the client_request_timeout.
  • The Nginx read timeout (nginx_read_timeout) must be less than or equal to the Envoy request_timeout.
  • The Python web server timeout (python_webserver_timeout) must be less than or equal to the nginx_read_timeout.
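As a worked example, the following values satisfy all three relationships; the numbers are illustrative, and where each parameter is set can vary by release:

```yaml
# Illustrative timeout values, in seconds
client_request_timeout: 120             # primary value
request_timeout: 110                    # Envoy: 110 + 5 (gRPC auth) = 115 < 120
gateway_grpc_auth_service_timeout: 5
nginx_read_timeout: 110                 # must be <= Envoy request_timeout (110)
python_webserver_timeout: 100           # must be <= nginx_read_timeout (110)
```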
5.1.7.2. Timeout grace periods

At the uWSGI layer, the uwsgi_timeout_grace_period allows the application to attempt a graceful shutdown. During this period, the application displays a traceback of the current stack position. If the process does not exit within the grace period, Ansible Automation Platform terminates it.

During high-volume API operations, such as Configuration as Code (CasC) restores, the OpenShift Route might time out if the operation exceeds the default 30-second window.

You must increase the client_request_timeout in the AnsibleAutomationPlatform Custom Resource (CR) to resolve HTTP 504 (Gateway Timeout) or HTTP 503 (Service Unavailable) errors.

Prerequisites

  • Access to the OpenShift Container Platform web console with administrator privileges.
  • Update the Ansible Automation Platform 2.6 operator to the latest version.

Procedure

  1. Log in to the OpenShift web console.
  2. Navigate to Installed Operators > Ansible Automation Platform > All Instances.
  3. Select your AnsibleAutomationPlatform instance.
  4. Click the YAML tab.
  5. In the spec: section, add the route_annotations to extend the timeout:

    spec:
      route_annotations: |
        haproxy.router.openshift.io/timeout: 180s
  6. Click Save.

Verification

  1. Navigate to Networking > Routes in the OpenShift console.
  2. Select the route for your Ansible Automation Platform instance.
  3. Verify the Annotations section contains the updated timeout value.

Manage your Ansible Automation Platform deployment and troubleshoot common issues with these frequently asked questions. Learn about resource management, logging, and error recovery for your components.

If I delete my Ansible Automation Platform deployment will I still have access to automation controller?
No, automation controller, automation hub, and Event-Driven Ansible are nested within the deployment and are also deleted.
How must I manage parameters when adding or removing them in the Ansible Automation Platform custom resource (CR) hierarchy?
When adding parameters, you can add it to the Ansible Automation Platform custom resource (CR) only and those parameters will work their way down to the nested CRs.

When removing parameters, you have to remove them both from the Ansible Automation Platform CR and the nested CR, for example, the Automation Controller CR.

Something went wrong with my deployment but I’m not sure what, how can I find out?
You can follow along in the command line while the operator is reconciling; this can be helpful for debugging. Alternatively, you can click into the deployment instance to see the status conditions being updated as the deployment progresses.
Is it still possible to view individual component logs?
When troubleshooting you should examine the Ansible Automation Platform instance for the main logs and then each individual component (EDA, AutomationHub, AutomationController) for more specific information.
Where can I view the condition of an instance?
To display status conditions, click into the instance and look under the Details or Events tab. Alternatively, you can display the status conditions from the command line, for example: oc get automationcontroller <instance-name> -o json | jq '.status.conditions'
Can I track my migration in real time?
To help track the status of the migration or to understand why migration might have failed you can look at the migration logs as they are running. Use the logs command: oc logs fresh-install-controller-migration-4.6.0-jwfm6 -f
I have configured my SAML but authentication fails with this error: "Unable to complete social auth login" What can I do?
You must update your Ansible Automation Platform instance to include the REDIRECT_IS_HTTPS extra setting. See Enabling single sign-on (SSO) for platform gateway on OpenShift Container Platform for help with this.

You can use these instructions to configure the automation controller operator on Red Hat OpenShift Container Platform, specify custom resources, and deploy Ansible Automation Platform with an external database.

Automation controller configuration can be done through the automation controller extra_settings or directly in the user interface after deployment. However, it is important to note that configurations made in extra_settings take precedence over settings made in the user interface.

Note

When an instance of automation controller is removed, the associated PVCs are not automatically deleted. This can cause issues during migration if the new deployment has the same name as the previous one. Therefore, it is recommended that you manually remove old PVCs before deploying a new automation controller instance in the same namespace. See Finding and deleting PVCs for more information.

5.2.1. Prerequisites

  • You have installed the Red Hat Ansible Automation Platform catalog in Operator Hub.
  • For automation controller, a default StorageClass must be configured on the cluster for the operator to dynamically create needed PVCs. This is not necessary if an external PostgreSQL database is configured.
  • For automation hub, a StorageClass that supports ReadWriteMany must be available on the cluster to dynamically create the PVCs needed for the content, redis, and api pods. If it is not the default StorageClass on the cluster, you can specify it when creating your AutomationHub object.

Use this procedure to configure the image pull policy on your automation controller.

Procedure

  1. Log in to Red Hat OpenShift Container Platform.
  2. Go to OperatorsInstalled Operators.
  3. Select your Ansible Automation Platform Operator deployment.
  4. Select the Ansible Automation Platform tab.
  5. Click the ⋮ icon next to your Ansible Automation Platform instance and select Edit AnsibleAutomationPlatform.
  6. Click YAML view and locate the spec.controller: section.
  7. Configure the image pull policy and resource requirements under the controller: section:

    spec:
      controller:
        image_pull_policy: IfNotPresent  # Options: Always, Never, IfNotPresent
        image_pull_secrets:
          - pull-secret-name
        web_resource_requirements:
          limits:
            cpu: 1000m
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 1Gi
        task_resource_requirements:
          limits:
            cpu: 2000m
            memory: 4Gi
          requests:
            cpu: 1000m
            memory: 2Gi
        ee_resource_requirements:
          limits:
            cpu: 500m
            memory: 1Gi
          requests:
            cpu: 250m
            memory: 512Mi
        redis_resource_requirements:
          limits:
            cpu: 500m
            memory: 1Gi
          requests:
            cpu: 250m
            memory: 512Mi
        postgres_resource_requirements:
          limits:
            cpu: 1000m
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 1Gi
        postgres_storage_requirements:
          limits:
            storage: 10Gi
          requests:
            storage: 8Gi
        replicas: 1
        garbage_collect_secrets: false
        create_preload_data: true
  8. Click Save.

    Note

    These settings apply to the automation controller component managed by this Ansible Automation Platform instance. If you specified an existing controller under controller.name, these settings will update that instance.

    For more examples of Ansible Automation Platform custom resources, see Appendix: Red Hat Ansible Automation Platform custom resources.

5.2.1.2. Configuring your controller LDAP security

You can configure your LDAP SSL configuration for automation controller through any of the following options:

  • The automation controller user interface.
  • The platform gateway user interface. See the Configuring LDAP authentication section of the Access management and authentication guide for additional steps.
  • The following procedure steps.

Procedure

  1. Create a secret in your Ansible Automation Platform namespace for the bundle-ca.crt file (the filename must be bundle-ca.crt):

    $ oc create secret -n aap generic bundle-ca-secret --from-file=bundle-ca.crt
    Note

    The target filename for this operation must be bundle-ca.crt and the secret name should be bundle-ca-secret.

  2. Add the bundle_cacert_secret to the Ansible Automation Platform custom resource:

    ...
    spec:
      bundle_cacert_secret: bundle-ca-secret
    ...

    Verification

    You can verify the expected certificate by running:

    oc get deployments -l 'app.kubernetes.io/component=aap-gateway'

    Followed by:

    oc exec -it deployment.apps/<gateway-deployment-name-from-above> -- openssl x509 -in /etc/pki/tls/certs/ca-bundle.crt -noout -text

The Red Hat Ansible Automation Platform Operator installation form provides advanced options to configure your automation controller operator route.

Important

You must assign a unique metadata.name to each custom resource (CR) in your namespace. If you assign an AutomationControllerMeshIngress the same name as your Ansible Automation Platform installation, the operator overrides default routes and services. This conflict causes the platform installation to fail.

Procedure

  1. Log in to Red Hat OpenShift Container Platform.
  2. Navigate to OperatorsInstalled Operators.
  3. Select your Ansible Automation Platform Operator deployment.
  4. Select the Ansible Automation Platform tab.
  5. Click the ⋮ icon next to your Ansible Automation Platform instance and select Edit AnsibleAutomationPlatform.
  6. Click YAML view and locate the spec.controller: section.
  7. Configure the route options under the controller: section:

    spec:
      controller:
        ingress_type: Route
        route_host: controller.example.com  # Custom hostname for the route
        route_tls_termination_mechanism: Edge  # Options: Edge, Passthrough
        route_tls_secret: controller-tls-secret  # Optional: TLS credential secret
        projects_persistence: false  # Enable/disable persistence for /var/lib/projects
  8. Click Save.

    Note

    Edge termination is recommended for most instances. After configuring your route, you can customize additional route settings by adding them to the controller: section in the Ansible Automation Platform custom resource.

    For more examples of Ansible Automation Platform custom resources, see Appendix: Red Hat Ansible Automation Platform custom resources.

The Ansible Automation Platform Operator installation form allows you to further configure your automation controller operator ingress under Advanced configuration.

Procedure

  1. Log in to Red Hat OpenShift Container Platform.
  2. Navigate to OperatorsInstalled Operators.
  3. Select your Ansible Automation Platform Operator deployment.
  4. Select the Ansible Automation Platform tab.
  5. Click the ⋮ icon next to your Ansible Automation Platform instance and select Edit AnsibleAutomationPlatform.
  6. Click YAML view and locate the spec.controller: section.
  7. Configure the ingress options under the controller: section:

    spec:
      controller:
        ingress_type: Ingress
        ingress_annotations: |
          nginx.ingress.kubernetes.io/proxy-body-size: "0"
          nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
        ingress_tls_secret: controller-ingress-tls-secret
  8. Click Save.

    Note

    These ingress settings apply to the automation controller component managed by this Ansible Automation Platform instance. The operator automatically updates the ingress configuration for the controller.

    For more examples of Ansible Automation Platform custom resources, see Appendix: Red Hat Ansible Automation Platform custom resources.

Verification

After you have configured your automation controller ingress settings, Red Hat OpenShift Container Platform updates the pods. This may take a few minutes.

You can view the progress by navigating to WorkloadsPods and locating the newly created instance.

Verify that the following operator pods provided by the Ansible Automation Platform Operator installation from automation controller are running:


The operator manager controller pods include the following:

  • automation-controller-operator-controller-manager
  • automation-hub-operator-controller-manager
  • resource-operator-controller-manager
  • aap-gateway-operator-controller-manager
  • ansible-lightspeed-operator-controller-manager
  • eda-server-operator-controller-manager

After deploying automation controller, you can see the addition of the following pods:

  • controller
  • controller-postgres
  • controller-web
  • controller-task

After deploying automation hub, you can see the addition of the following pods:

  • hub-api
  • hub-content
  • hub-postgres
  • hub-redis
  • hub-worker

After deploying EDA, you can see the addition of the following pods:

  • eda-activation-worker
  • eda-api
  • eda-default-worker
  • eda-event-stream
  • eda-scheduler
Note

A missing pod can indicate the need for a pull secret. Pull secrets are required for protected or private image registries. See Using image pull secrets for more information. You can diagnose this issue further by running oc describe pod <pod-name> to see if there is an ImagePullBackOff error on that pod.
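The pods stuck on a registry pull failure can be filtered out of the pod listing by their status column. The following is a minimal sketch over sample output with made-up pod names; on a live cluster you would pipe `oc get pods -n <namespace>` into the same filter:

```shell
# Sample `oc get pods` output; on a cluster, replace with: oc get pods -n <namespace>
sample='NAME                 READY   STATUS             RESTARTS   AGE
hub-api-abc          0/1     ImagePullBackOff   0          5m
controller-task-xyz  1/1     Running            0          5m'

# Print only the pods stuck pulling their image
printf '%s\n' "$sample" | awk '$3 == "ImagePullBackOff" {print $1}'
```

Each name this prints is a candidate for `oc describe pod <pod-name>` to confirm the missing pull secret.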

If you prefer to deploy Ansible Automation Platform with an external database, you can do so by configuring a secret with instance credentials and connection information, and then applying it to your cluster using the oc create command.

By default, the Ansible Automation Platform Operator automatically creates and configures a managed PostgreSQL pod in the same namespace as your Ansible Automation Platform deployment. You can deploy Ansible Automation Platform with an external database instead of the managed PostgreSQL pod that the Ansible Automation Platform Operator automatically creates.

Using an external database lets you share and reuse resources and manually manage backups, upgrades, and performance optimizations.

Note

The same external database (PostgreSQL instance) can be used for automation hub, automation controller, and platform gateway, as long as the database names are different. In other words, you can have multiple databases with different names inside a single PostgreSQL instance.

The following section outlines the steps to configure an external database for your automation controller on the Ansible Automation Platform Operator.

Prerequisite

The external database must be a PostgreSQL database of a version supported by the current release of Ansible Automation Platform. The external PostgreSQL instance credentials and connection information must be stored in a secret, which is then set on the automation controller spec.

Note

Ansible Automation Platform 2.6 supports PostgreSQL 15 for its managed databases and additionally supports PostgreSQL 15, 16, and 17 for external databases.

If you choose to use an externally managed database with version 16 or 17, you must also rely on external backup and restore processes.

Procedure

  1. Create a postgres_configuration_secret YAML file, following the template below:

    apiVersion: v1
    kind: Secret
    metadata:
      name: external-postgres-configuration
      namespace: <target_namespace>
    stringData:
      host: "<external_ip_or_url_resolvable_by_the_cluster>"
      port: "<external_port>"
      database: "<desired_database_name>"
      username: "<username_to_connect_as>"
      password: "<password_to_connect_with>"
      sslmode: "prefer"
      type: "unmanaged"
    type: Opaque

    When configuring the secret:

    • namespace: Specify the namespace to create the secret in. This should be the same namespace you want to deploy to.
    • host: Specify the resolvable hostname for your database node.
    • port: Specify the external port. The default is 5432.
    • password: Ensure the password does not contain single or double quotes (', ") or backslashes (\) to avoid any issues during deployment, backup or restoration.
    • sslmode: This variable is valid for external databases only. The allowed values are: prefer, disable, allow, require, verify-ca, and verify-full.
  2. Apply external-postgres-configuration-secret.yml to your cluster using the oc create command.

    $ oc create -f external-postgres-configuration-secret.yml
  3. When creating your AnsibleAutomationPlatform custom resource object, specify the secret under the controller section in your spec, following the example below:

    apiVersion: aap.ansible.com/v1alpha1
    kind: AnsibleAutomationPlatform
    metadata:
      name: myaap
    spec:
      controller:
        name: controller-dev  # Optional: specify existing instance or custom name
        postgres_configuration_secret: external-postgres-configuration
    Note

    If you have an existing automation controller instance, specify its name under controller.name to apply these settings to the existing instance. If you omit the name field, the operator will create a new instance with the default name pattern <aap-instance-name>-controller.

    For more examples of Ansible Automation Platform custom resources, see Appendix: Red Hat Ansible Automation Platform custom resources.
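The secret template in step 1 can also be generated from shell variables, which keeps credentials out of your editor history. This is a hedged sketch: the variable names and sample values are placeholders, not product requirements.

```shell
# Placeholder connection details for an assumed external database
DB_HOST="db.example.com"
DB_PORT="5432"
DB_NAME="automationcontroller"
DB_USER="admin"
DB_PASS="changeme"   # must not contain quotes or backslashes

# Render the secret manifest with the values substituted
cat > external-postgres-configuration-secret.yml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: external-postgres-configuration
  namespace: aap
stringData:
  host: "$DB_HOST"
  port: "$DB_PORT"
  database: "$DB_NAME"
  username: "$DB_USER"
  password: "$DB_PASS"
  sslmode: "prefer"
  type: "unmanaged"
type: Opaque
EOF

# Then apply it as in step 2:
# oc create -f external-postgres-configuration-secret.yml
```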

5.2.3. Finding and deleting PVCs

A persistent volume claim (PVC) is a storage volume used to store data that the automation hub and automation controller applications use. This persistence is a key feature of static provisioning. If you redeploy an instance using the same name, the Operator binds to these existing PVCs, allowing for data continuity across deployments. If you are confident that you no longer need a PVC, or have backed up its data elsewhere, you can manually delete it.

Procedure

  1. List the existing PVCs in your deployment namespace:

    oc get pvc -n <namespace>
  2. Identify the PVC associated with your previous deployment by comparing the old deployment name and the PVC name.
  3. Delete the old PVC:

    oc delete pvc -n <namespace> <pvc-name>

You can use these instructions to configure the automation hub operator on Red Hat OpenShift Container Platform, specify custom resources, and deploy Ansible Automation Platform with an external database.

You can configure automation hub through the automation hub pulp_settings or directly in the user interface after deployment. However, note that configurations made in pulp_settings take precedence over settings made in the user interface. Hub settings must always be set as lowercase on the Hub custom resource specification.

Note

When an instance of automation hub is removed, the PVCs are not automatically deleted. This can cause issues during migration if the new deployment has the same name as the previous one. Therefore, it is recommended that you manually remove old PVCs before deploying a new automation hub instance in the same namespace. See Finding and deleting PVCs for more information.

5.3.1. Prerequisites

  • You have installed the Ansible Automation Platform Operator from OperatorHub.

Automation hub requires ReadWriteMany file-based storage, Azure Blob storage, or Amazon S3 storage for operation so that multiple pods can access shared content, such as collections.

The process for configuring object storage on the AutomationHub CR is similar for Amazon S3 and Azure Blob Storage.

If you are using file-based storage and your installation scenario includes automation hub, ensure that the storage option for Ansible Automation Platform Operator is set to ReadWriteMany. ReadWriteMany is the default storage option.

In addition, OpenShift Data Foundation provides a ReadWriteMany or S3 implementation. You can also set up NFS storage configuration to support ReadWriteMany. This, however, introduces the NFS server as a potential single point of failure.

To ensure successful installation of the Ansible Automation Platform Operator, you must initially provision your automation hub storage with the ReadWriteMany access mode.

Procedure

  1. Go to StoragePersistentVolume.
  2. Click Create PersistentVolume.
  3. In the first step, update the accessModes from the default ReadWriteOnce to ReadWriteMany.

    1. See Provisioning for a detailed overview of how to update the access mode.
  4. Complete the additional steps in this section to create the persistent volume claim (PVC).
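For reference, a PersistentVolume with ReadWriteMany access might look like the following sketch. The NFS server, export path, and capacity are placeholder assumptions for illustration:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: hub-storage-pv
spec:
  capacity:
    storage: 10Gi            # placeholder capacity
  accessModes:
    - ReadWriteMany          # required for automation hub file storage
  nfs:
    server: nfs.example.com  # placeholder NFS server
    path: /exports/hub       # placeholder export path
```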
5.3.1.1.2. Configuring object storage on Amazon S3

Red Hat supports Amazon Simple Storage Service (S3) for automation hub. You can configure it when deploying the AnsibleAutomationPlatform custom resource (CR), or you can configure it for an existing instance.

Prerequisites

  • Create an Amazon S3 bucket to store the objects.
  • Note the name of the S3 bucket.

Procedure

  1. Create a Kubernetes secret containing the AWS credentials and connection details, and the name of your Amazon S3 bucket. The following example creates a secret called test-s3:

    $ oc -n $HUB_NAMESPACE apply -f- <<EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: 'test-s3'
    stringData:
      s3-access-key-id: $S3_ACCESS_KEY_ID
      s3-secret-access-key: $S3_SECRET_ACCESS_KEY
      s3-bucket-name: $S3_BUCKET_NAME
      s3-region: $S3_REGION
    EOF
  2. Add the secret to the Ansible Automation Platform custom resource (CR) under the hub section in the spec:

    apiVersion: aap.ansible.com/v1alpha1
    kind: AnsibleAutomationPlatform
    metadata:
      name: myaap
    spec:
      hub:
        storage_type: S3
        object_storage_s3_secret: test-s3
    Note

    If you have an existing automation hub instance, specify its name using hub.name: existing-hub-name to apply these settings to the existing instance.

    For more examples of Ansible Automation Platform custom resources, see Appendix: Red Hat Ansible Automation Platform custom resources.

  3. If you are applying this secret to an existing instance, restart the API pods for the change to take effect. <hub-name> is the name of your hub instance.

    $ oc -n $HUB_NAMESPACE delete pod -l app.kubernetes.io/name=<hub-name>-api

Red Hat supports Azure Blob Storage for automation hub. You can configure it when deploying the AnsibleAutomationPlatform custom resource (CR), or you can configure it for an existing instance.

Prerequisites

  • Create an Azure Storage blob container to store the objects.
  • Note the name of the blob container.

Procedure

  1. Create a Kubernetes secret containing the credentials and connection details for your Azure account, and the name of your Azure Storage blob container. The following example creates a secret called test-azure:

    $ oc -n $HUB_NAMESPACE apply -f- <<EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: 'test-azure'
    stringData:
      azure-account-name: $AZURE_ACCOUNT_NAME
      azure-account-key: $AZURE_ACCOUNT_KEY
      azure-container: $AZURE_CONTAINER
      azure-container-path: $AZURE_CONTAINER_PATH
      azure-connection-string: $AZURE_CONNECTION_STRING
    EOF
  2. Add the secret to the Ansible Automation Platform custom resource (CR) under the hub section in the spec:

    apiVersion: aap.ansible.com/v1alpha1
    kind: AnsibleAutomationPlatform
    metadata:
      name: myaap
    spec:
      hub:
        storage_type: azure
        object_storage_azure_secret: test-azure
    Note

    If you have an existing automation hub instance, specify its name using hub.name: existing-hub-name to apply these settings to the existing instance.

    For more examples of Ansible Automation Platform custom resources, see Appendix: Red Hat Ansible Automation Platform custom resources.

  3. If you are applying this secret to an existing instance, restart the API pods for the change to take effect. <hub-name> is the name of your hub instance.

    $ oc -n $HUB_NAMESPACE delete pod -l app.kubernetes.io/name=<hub-name>-api

The Red Hat Ansible Automation Platform operator installation form allows you to further configure your automation hub operator route options under Advanced configuration.

Procedure

  1. Log in to Red Hat OpenShift Container Platform.
  2. Navigate to OperatorsInstalled Operators.
  3. Select your Ansible Automation Platform Operator deployment.
  4. Select the Ansible Automation Platform tab.
  5. Click the ⋮ icon next to your Ansible Automation Platform instance and select Edit AnsibleAutomationPlatform.
  6. Click YAML view and locate the spec.hub: section.
  7. Configure the route options under the hub: section:

    spec:
      hub:
        ingress_type: Route
        route_host: hub.example.com  # Custom hostname for the route
        route_tls_termination_mechanism: Edge  # Options: Edge, Passthrough
        route_tls_secret: hub-tls-secret  # Optional: TLS credential secret
  8. Click Save.

    Note

    Edge termination is recommended for most instances. After configuring your route, you can customize additional route settings by adding them to the hub: section in the Ansible Automation Platform custom resource.

    For more examples of Ansible Automation Platform custom resources, see Appendix: Red Hat Ansible Automation Platform custom resources.
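If you set route_tls_secret, the referenced secret must exist in the deployment namespace. The following sketch creates one from a throwaway self-signed certificate; in production you would use your own certificate files, and the file and secret names here are assumptions:

```shell
# Generate a throwaway self-signed certificate for illustration only
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=hub.example.com" \
  -keyout tls.key -out tls.crt

# Create the secret that the route_tls_secret field refers to:
# oc create secret tls hub-tls-secret --cert=tls.crt --key=tls.key
```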

The Ansible Automation Platform Operator installation form allows you to further configure your automation hub operator ingress under Advanced configuration.

Procedure

  1. Log in to Red Hat OpenShift Container Platform.
  2. Navigate to OperatorsInstalled Operators.
  3. Select your Ansible Automation Platform Operator deployment.
  4. Select the Ansible Automation Platform tab.
  5. Click the ⋮ icon next to your Ansible Automation Platform instance and select Edit AnsibleAutomationPlatform.
  6. Click YAML view and locate the spec.hub: section.
  7. Configure the ingress options under the hub: section:

    spec:
      hub:
        ingress_type: Ingress
        ingress_annotations: |
          nginx.ingress.kubernetes.io/proxy-body-size: "0"
          nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
        ingress_tls_secret: hub-ingress-tls-secret
  8. Click Save.

    Note

    These ingress settings apply to the automation hub component managed by this Ansible Automation Platform instance. The operator automatically updates the ingress configuration for the hub.

    For more examples of Ansible Automation Platform custom resources, see Appendix: Red Hat Ansible Automation Platform custom resources.

Verification

After you have configured your automation hub ingress settings, Red Hat OpenShift Container Platform updates the pods. This may take a few minutes.

You can view the progress by navigating to WorkloadsPods and locating the newly created instance.

Verify that the operator pods provided by the Ansible Automation Platform Operator installation are running:


The operator manager controllers for each of the three operators include the following:

  • automation-controller-operator-controller-manager
  • automation-hub-operator-controller-manager
  • resource-operator-controller-manager

After deploying automation controller, you will see the addition of these pods:

  • controller
  • controller-postgres

After deploying automation hub, you will see the addition of these pods:

  • hub-api
  • hub-content
  • hub-postgres
  • hub-redis
  • hub-worker
Note

A missing pod can indicate the need for a pull secret. Pull secrets are required for protected or private image registries. See Using image pull secrets for more information. You can diagnose this issue further by running oc describe pod <pod-name> to see if there is an ImagePullBackOff error on that pod.

5.3.2. Finding the automation hub route

You can access the automation hub through the platform gateway or through the following procedure.

Procedure

  1. Log in to Red Hat OpenShift Container Platform.
  2. Navigate to NetworkingRoutes.
  3. Under Location, click on the URL for your automation hub instance.

Verification

The automation hub user interface launches where you can sign in with the administrator credentials specified during the operator configuration process.

Note

If you did not specify an administrator password during configuration, one was automatically created for you. To locate this password, go to your project, select WorkloadsSecrets and open controller-admin-password. From there you can copy the password and paste it into the Automation hub password field.
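Secret values retrieved with oc are base64-encoded, so decode them before use. A minimal sketch follows; the jsonpath expression assumes the secret stores the value under a key named password:

```shell
# On a live cluster (commented out here):
# oc get secret controller-admin-password -o jsonpath='{.data.password}' | base64 -d

# The decode step itself, shown on a sample base64 payload:
printf 'c2VjcmV0' | base64 -d   # prints: secret
```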

If you prefer to deploy Ansible Automation Platform with an external database, you can do so by configuring a secret with instance credentials and connection information, and then applying it to your cluster using the oc create command.

By default, the Ansible Automation Platform Operator automatically creates and configures a managed PostgreSQL pod in the same namespace as your Ansible Automation Platform deployment.

You can choose to use an external database instead if you prefer a dedicated node for dedicated resources, or if you want to manually manage backups, upgrades, and performance tuning.

Note

The same external database (PostgreSQL instance) can be used for automation hub, automation controller, and platform gateway, as long as the database names are different. In other words, you can have multiple databases with different names inside a single PostgreSQL instance.

The following section outlines the steps to configure an external database for your automation hub on the Ansible Automation Platform Operator.

Prerequisite

The external database must be a PostgreSQL database of a version supported by the current release of Ansible Automation Platform. The external PostgreSQL instance credentials and connection information must be stored in a secret, which is then set on the automation hub spec.

Note

Ansible Automation Platform 2.6 supports PostgreSQL 15 for its managed databases and additionally supports PostgreSQL 15, 16, and 17 for external databases.

If you choose to use an externally managed database with version 16 or 17, you must also rely on external backup and restore processes.

Procedure

  1. Create a postgres_configuration_secret YAML file, following the template below:

    apiVersion: v1
    kind: Secret
    metadata:
      name: external-postgres-configuration
      namespace: <target_namespace>
    stringData:
      host: "<external_ip_or_url_resolvable_by_the_cluster>"
      port: "<external_port>"
      database: "<desired_database_name>"
      username: "<username_to_connect_as>"
      password: "<password_to_connect_with>"
      sslmode: "prefer"
      type: "unmanaged"
    type: Opaque

    When configuring the secret:

    • namespace: Specify the namespace to create the secret in. This should be the same namespace you want to deploy to.
    • host: Specify the resolvable hostname for your database node.
    • port: Specify the external port. The default is 5432.
    • password: Ensure the password does not contain single or double quotes (', ") or backslashes (\) to avoid any issues during deployment, backup or restoration.
    • sslmode: This variable is valid for external databases only. The allowed values are: prefer, disable, allow, require, verify-ca, and verify-full.
  2. Apply external-postgres-configuration-secret.yml to your cluster using the oc create command.

    $ oc create -f external-postgres-configuration-secret.yml
  3. When creating your AnsibleAutomationPlatform custom resource object, specify the secret under the hub section in your spec, following the example below:

    apiVersion: aap.ansible.com/v1alpha1
    kind: AnsibleAutomationPlatform
    metadata:
      name: myaap
    spec:
      hub:
        name: hub-dev  # Optional: specify existing instance or custom name
        postgres_configuration_secret: external-postgres-configuration
        storage_type: file
        file_storage_storage_class: <your-read-write-many-storage-class>
        file_storage_size: 10Gi
    Note

    If you have an existing automation hub instance, specify its name under hub.name to apply these settings to the existing instance. If you omit the name field, the operator will create a new instance with the default name pattern <aap-instance-name>-hub.

    For more examples of Ansible Automation Platform custom resources, see Appendix: Red Hat Ansible Automation Platform custom resources.

The database migration script uses hstore fields to store information; therefore, the hstore extension must be enabled in the automation hub PostgreSQL database.

This process is automatic when using the Ansible Automation Platform installer and a managed PostgreSQL server.

If the PostgreSQL database is external, you must enable the hstore extension in the automation hub PostgreSQL database manually before installation.

If the hstore extension is not enabled before installation, a failure occurs during database migration.

Procedure

  1. Check if the extension is available on the PostgreSQL server (automation hub database).

    $ psql -d <automation hub database> -c "SELECT * FROM pg_available_extensions WHERE name='hstore'"
  2. The default value for <automation hub database> is automationhub.

    Example output with hstore available:

    name   | default_version | installed_version | comment
    -------+-----------------+-------------------+---------------------------------------------------
    hstore | 1.7             |                   | data type for storing sets of (key, value) pairs
    (1 row)

    Example output with hstore not available:

     name | default_version | installed_version | comment
    ------+-----------------+-------------------+---------
    (0 rows)
  3. On a RHEL based server, the hstore extension is included in the postgresql-contrib RPM package, which is not installed automatically when installing the PostgreSQL server RPM package.

    To install the RPM package, use the following command:

    dnf install postgresql-contrib
  4. Load the hstore PostgreSQL extension into the automation hub database with the following command:

    $ psql -d <automation hub database> -c "CREATE EXTENSION hstore;"

    In the following output, the installed_version field lists the hstore version in use, indicating that hstore is enabled.

    name   | default_version | installed_version | comment
    -------+-----------------+-------------------+--------------------------------------------------
    hstore | 1.7             | 1.7               | data type for storing sets of (key, value) pairs
    (1 row)

5.3.4. Finding and deleting PVCs

A persistent volume claim (PVC) is a storage volume used to store data that the automation hub and automation controller applications use. This persistence is a key feature of static provisioning. If you redeploy an instance using the same name, the Operator binds to these existing PVCs, allowing for data continuity across deployments. If you are confident that you no longer need a PVC, or have backed up its data elsewhere, you can manually delete it.

Procedure

  1. List the existing PVCs in your deployment namespace:

    oc get pvc -n <namespace>
  2. Identify the PVC associated with your previous deployment by comparing the old deployment name and the PVC name.
  3. Delete the old PVC:

    oc delete pvc -n <namespace> <pvc-name>

5.3.5. Additional configurations

A collection download count can help you understand collection usage. To add a collection download count to automation hub, set the following configuration:

spec:
  pulp_settings:
    ansible_collect_download_count: true

When ansible_collect_download_count is enabled, automation hub displays a download count beside each collection.

Before you can deploy a container image in automation hub, you must add the registry to the allowedRegistries list in the cluster image configuration. To do this, you can copy and paste the following code into the cluster Image YAML.

Procedure

  1. Log in to Red Hat OpenShift Container Platform.
  2. Navigate to HomeSearch.
  3. Select the Resources drop-down list and type "Image".
  4. Select Image (config.openshift.io/v1).
  5. Click Cluster under the Name heading.
  6. Select the YAML tab.
  7. Paste in the following under spec value:

    spec:
      registrySources:
        allowedRegistries:
        - quay.io
        - registry.redhat.io
        - image-registry.openshift-image-registry.svc:5000
        - <OCP route for your automation hub>
  8. Click Save.

As an automation administrator for your organization, you can configure Ansible Automation Platform Hub Operator for signing and publishing Ansible content collections from different groups within your organization.

For additional security, automation creators can configure Ansible-Galaxy CLI to verify these collections to ensure that they have not been changed after they were uploaded to automation hub.

To successfully sign and publish Ansible Certified Content Collections, you must configure private automation hub for signing.

Prerequisites

  • A GPG key pair. If you do not have one, you can generate one using the gpg --full-generate-key command.
  • Your public-private key pair has proper access for configuring content signing on Ansible Automation Platform Hub Operator.

Procedure

  1. Create a ConfigMap for signing scripts. The ConfigMap you create contains the scripts used by the signing service for collections and container images.

    Note

    This script is used as part of the signing service and must generate an ASCII-armored detached GPG signature for that file using the key specified by the PULP_SIGNING_KEY_FINGERPRINT environment variable.

    The script prints out a JSON structure with the following format.

    {"file": "filename", "signature": "filename.asc"}

    All the file names are relative paths inside the current working directory. The file name must remain the same for the detached signature.

    Example: The following script produces signatures for content:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: signing-scripts
    data:
      collection_sign.sh: |-
          #!/usr/bin/env bash
    
          FILE_PATH=$1
          SIGNATURE_PATH=$1.asc
    
          ADMIN_ID="$PULP_SIGNING_KEY_FINGERPRINT"
          PASSWORD="password"
    
          # Create a detached signature
          gpg --quiet --batch --pinentry-mode loopback --yes --passphrase \
            $PASSWORD --homedir /var/lib/pulp/.gnupg --detach-sign --default-key $ADMIN_ID \
            --armor --output $SIGNATURE_PATH $FILE_PATH
    
          # Check the exit status
          STATUS=$?
          if [ $STATUS -eq 0 ]; then
            echo {\"file\": \"$FILE_PATH\", \"signature\": \"$SIGNATURE_PATH\"}
          else
            exit $STATUS
          fi
      container_sign.sh: |-
        #!/usr/bin/env bash
    
        # galaxy_container SigningService will pass the next 4 variables to the script.
        MANIFEST_PATH=$1
        FINGERPRINT="$PULP_SIGNING_KEY_FINGERPRINT"
        IMAGE_REFERENCE="$REFERENCE"
        SIGNATURE_PATH="$SIG_PATH"
    
        # Create container signature using skopeo
        skopeo standalone-sign \
          $MANIFEST_PATH \
          $IMAGE_REFERENCE \
          $FINGERPRINT \
          --output $SIGNATURE_PATH
    
        # Optionally pass the passphrase to the key if password protected.
        # --passphrase-file /path/to/key_password.txt
    
        # Check the exit status
        STATUS=$?
        if [ $STATUS -eq 0 ]; then
          echo {\"signature_path\": \"$SIGNATURE_PATH\"}
        else
          exit $STATUS
        fi
  2. Create a secret for your GnuPG private key. This secret securely stores the GnuPG private key you use for signing.

    gpg --export-secret-keys --armor <your-gpg-key-id> > signing_service.gpg
    
    oc create secret generic signing-galaxy --from-file=signing_service.gpg

    The secret must have a key named signing_service.gpg.

  3. Configure the AnsibleAutomationPlatform CR.

    apiVersion: aap.ansible.com/v1alpha1
    kind: AnsibleAutomationPlatform
    metadata:
      name: aap-hub-signing-sample
    spec:
      hub:
        signing_secret: "signing-galaxy"
        signing_scripts_configmap: "signing-scripts"

Configure static storage when your environment does not support dynamic volume provisioning. This process ensures the Ansible Automation Platform Operator adopts manually created Persistent Volume Claims by using specific naming conventions.

By default, the Ansible Automation Platform Operator uses dynamic provisioning to create the required storage for components such as the database and automation hub.

If your environment does not allow dynamic provisioning, you must use static provisioning.

With static provisioning, you manually create Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) before you deploy the AnsibleAutomationPlatform custom resource. When the Operator starts the deployment, it searches the namespace for PVCs that match its internal naming conventions. If a matching PVC exists, the Operator binds to that claim instead of attempting to provision new storage.

Static provisioning also enables data persistence during redeployments. If you delete an AnsibleAutomationPlatform instance, the Operator does not delete the associated PVCs. You can redeploy the instance using the same name to reconnect to the existing data.

Follow this process to manually prepare storage for an Ansible Automation Platform installation when dynamic provisioning is disabled.

Prerequisites

  • You have an active OpenShift Container Platform CLI (oc) session.
  • You have defined Persistent Volumes (PVs) that meet the minimum size and access mode requirements for your components.

Procedure

  1. Identify the name you intend to use for your AnsibleAutomationPlatform deployment (for example, myaap).
  2. Create a PVC manifest for the PostgreSQL database using the required naming convention: postgres-15-<deployment_name>-postgres-15-0.
  3. Ensure the accessModes and resources.requests.storage match your manually provisioned PV.
  4. Apply the PVC manifest:

    oc apply -f postgres-pvc.yaml
  5. Repeat these steps for other components, such as automation hub, using the correct naming conventions.
  6. Leave the storage_class fields empty or omit them from the specification. This forces the Operator to use the pre-created PVCs.

    Note

    Unlike core components, the AnsibleAutomationPlatformBackup and Restore custom resources provide a backup_pvc parameter. You must use this parameter to specify your custom PVC name instead of relying on naming conventions.
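As an illustration of the database PVC from step 2, the manifest for an instance assumed to be named myaap might look like the following sketch. The namespace, size, and volumeName are placeholders; match the name exactly to the operator's naming convention for your version, and match the capacity and access mode to your pre-created PV:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-15-myaap-postgres-15-0  # assumes an AnsibleAutomationPlatform CR named "myaap"
  namespace: aap                         # placeholder namespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi        # placeholder; must match the pre-created PV
  volumeName: postgres-pv   # optional: bind to a specific pre-created PV
```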

Verification

  • Check the status of the PVCs to ensure they are in a Bound state:

    oc get pvc -n <namespace>

The Operator must find PVCs with exact names to adopt them for static provisioning. Replace <instance_name> with the name of your AnsibleAutomationPlatform custom resource.

Component                            | Required PVC Name                                                            | Default Access Mode
-------------------------------------+------------------------------------------------------------------------------+--------------------
Ansible Automation Platform Database | postgres-15-<aap_cr_name>-postgres-15-0                                      | ReadWriteOnce
Automation Hub Storage               | <instance_name>-hub-file-storage (required when storage_type is set to file) | ReadWriteMany
Automation Hub Redis Persistence     | <instance_name>-hub-redis-data                                               | ReadWriteOnce

When you create an Ansible Automation Platform instance through the Ansible Automation Platform Operator, standalone Redis is assigned by default. If you would prefer to deploy clustered Redis, you can use the following procedure.

For more information about Redis, refer to Caching and queueing system in the Planning your installation guide.

Important

Switching Redis modes on an existing instance is not supported and can lead to unexpected consequences, including data loss. To change the Redis mode, you must deploy a new instance.

Prerequisites

  • You have installed an Ansible Automation Platform Operator deployment.

Procedure

  1. Log in to Red Hat OpenShift Container Platform.
  2. Navigate to OperatorsInstalled Operators.
  3. Select your Ansible Automation Platform Operator deployment.
  4. Select the Details tab.
  5. On the Ansible Automation Platform tile click Create instance.

    1. For existing instances, you can edit the YAML view by clicking the ⋮ icon and then Edit AnsibleAutomationPlatform.
    2. Change the redis_mode value to "cluster".
    3. Click Reload, then Save.
  6. Click to expand Advanced configuration.
  7. From the Redis Mode list, select Cluster.
  8. Configure the rest of your instance as necessary, then click Create.
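In the YAML view, the choice made in the steps above corresponds to a single field on the custom resource. The instance name below is illustrative, and you should verify the apiVersion against your installed operator:

```yaml
apiVersion: aap.ansible.com/v1alpha1  # verify against your installed operator
kind: AnsibleAutomationPlatform
metadata:
  name: myaap  # illustrative instance name
spec:
  redis_mode: cluster
```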

Verification

Your instance deploys with clustered Redis, which has six Redis replicas by default.

Note

You can modify the default Redis cache PVC volume size for automation hub. For more information, see Modifying the default redis cache PVC volume size automation hub.

As a system administrator, you can deploy Ansible Lightspeed intelligent assistant on Ansible Automation Platform 2.6 on OpenShift Container Platform.

6.1. Overview

You can install and use Ansible Lightspeed intelligent assistant on Ansible Automation Platform 2.6 on OpenShift Container Platform. Ansible Lightspeed intelligent assistant is an intuitive chat interface embedded within the Ansible Automation Platform, using generative artificial intelligence (AI) to answer questions about the Ansible Automation Platform.

The Ansible Lightspeed intelligent assistant interacts with users through natural language prompts in English, and uses Large Language Models (LLMs) to generate quick, accurate, and personalized responses. These responses help Ansible users work more efficiently, improving productivity and the overall quality of their work.

Ansible Lightspeed intelligent assistant requires the following configurations:

  • Installation of Ansible Automation Platform 2.6 on Red Hat OpenShift Container Platform
  • Deployment of an LLM provider served by either a Red Hat AI platform or a third-party AI platform. For the LLM providers that you can use, see LLM providers.
Important

Red Hat does not collect any telemetry data from your interactions with the Ansible Lightspeed intelligent assistant.

Upgrading from Ansible Automation Platform 2.5 to 2.6.1 or 2.6 to 2.6.1 enables HTTPS and TLS by default for internal communication between the Ansible Lightspeed API and the Ansible Lightspeed intelligent assistant pod. Following the upgrade to Ansible Automation Platform 2.6.1, the intelligent assistant will be unavailable for approximately 60 seconds while its pod restarts.

6.1.1. Integration with MCP server

Ansible Lightspeed intelligent assistant integration with the Model Context Protocol (MCP) server is available as a Technology Preview release. This integration enhances the user experience by delivering relevant, dynamically sourced data results to your queries.

MCP is an open protocol that standardizes how applications provide context to LLMs. Using the protocol, an MCP server provides a standardized way for an LLM to increase context by requesting and receiving real-time information from external resources. You can configure an MCP server in the chatbot configuration secret. For more information, see Creating a chatbot configuration secret.

Note

Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

  • You have installed Ansible Automation Platform 2.6 on your OpenShift Container Platform environment.
  • You have administrator privileges for the Ansible Automation Platform.
  • You have provisioned an OpenShift cluster with Operator Lifecycle Management installed.

You must configure the LLM provider that you want to use before deploying the Ansible Lightspeed intelligent assistant.

An LLM is a type of machine learning model that can interpret and generate human-like language. When an LLM is used with the Ansible Lightspeed intelligent assistant, the LLM can interpret questions accurately and provide helpful answers in a conversational manner.

Ansible Lightspeed intelligent assistant can rely on the following LLM providers:

  • Red Hat LLM providers:

    • Red Hat Enterprise Linux AI

      You can configure Red Hat Enterprise Linux AI as the LLM provider. If the Red Hat Enterprise Linux AI deployment is in a different environment than the Ansible Lightspeed deployment, the model deployment must allow access using a secure connection. For more information, see Optional: Allowing access to a model from a secure endpoint.

      Ansible Lightspeed intelligent assistant supports vLLM Server. When self-hosting an LLM with Red Hat Enterprise Linux AI, you can use vLLM Server as the inference engine.

    • Red Hat OpenShift AI

      You must deploy an LLM on the Red Hat OpenShift AI single-model serving platform that uses the Virtual Large Language Model (vLLM) runtime. If the model deployment resides in a different OpenShift environment than the Ansible Lightspeed deployment, include a route to expose the model deployment outside the cluster. For more information, see About the single-model serving platform.

      Ansible Lightspeed intelligent assistant supports vLLM Server. When self-hosting an LLM with Red Hat OpenShift AI, you can use vLLM Server as the inference engine.

      Note

      For configurations with Red Hat Enterprise Linux AI or Red Hat OpenShift AI, you must host your own LLM provider instead of using a SaaS LLM provider.

    • Red Hat AI Inference Server

      You can deploy an LLM using Red Hat AI Inference Server as your inference runtime. Red Hat AI Inference Server supports vLLM runtimes for efficient model serving and can be configured to work with Ansible Lightspeed intelligent assistant. For more information, see Red Hat AI Inference Server documentation.

      If the Red Hat AI Inference Server deployment is in a different environment than the Ansible Lightspeed deployment, ensure the model deployment allows access using a secure connection and configure appropriate network routing.

      Ansible Lightspeed intelligent assistant supports vLLM Server when self-hosting an LLM with Red Hat AI Inference Server as the inference engine.

  • Third-party LLM providers:

    • OpenAI

      To use OpenAI with the Ansible Lightspeed intelligent assistant, you need access to the OpenAI API platform.

    • Microsoft Azure OpenAI

      To use Microsoft Azure OpenAI with the Ansible Lightspeed intelligent assistant, you need access to Microsoft Azure OpenAI. For more information, see the Microsoft Azure OpenAI product page.

Perform the following tasks to set up and use the Ansible Lightspeed intelligent assistant in your Ansible Automation Platform instance on the OpenShift Container Platform environment:

  • Deploy the Ansible Lightspeed intelligent assistant on OpenShift Container Platform

    This task is for an Ansible Automation Platform administrator who wants to deploy the Ansible Lightspeed intelligent assistant for all Ansible users in the organization. Perform the following tasks:

    1. Create a chatbot configuration secret.
    2. Update the YAML file of the Ansible Automation Platform to use the chatbot configuration secret.
    3. Optional: Change your LLM model if you want to use a different LLM provider after deploying Red Hat Ansible Lightspeed.

  • Access and use the Ansible Lightspeed intelligent assistant

    This task is for all Ansible users who want to use the intelligent assistant to get answers to their questions about the Ansible Automation Platform. For more details, see Using the Ansible Lightspeed intelligent assistant.

This section provides information about the procedures involved in deploying the Ansible Lightspeed intelligent assistant on OpenShift Container Platform.

6.2.1. Creating a chatbot configuration secret

Create a configuration secret for the Ansible Lightspeed intelligent assistant, so that you can connect the intelligent assistant to the Ansible Automation Platform operator.

Procedure

  1. Log in to Red Hat OpenShift Container Platform as an administrator.
  2. Navigate to Workloads → Secrets.
  3. From the Projects list, select the namespace that you created when you installed the Ansible Automation Platform operator.
  4. Click Create → Key/value secret.
  5. In the Secret name field, enter a unique name for the secret. For example, chatbot-configuration-secret.
  6. Add the following keys and their associated values individually:

    Settings for all LLM setups:

    • chatbot_model

      Enter the LLM model name that is configured on your LLM setup.

    • chatbot_url

      Enter the inference API base URL on your LLM setup. For example, https://your_inference_api/v1.

    • chatbot_token

      Enter the API token or the API key. This token is sent in the authorization header when an inference API is called.

    • chatbot_llm_provider_type (optional)

      Enter the value that matches the provider type of your LLM setup:

      • Red Hat Enterprise Linux AI: rhelai_vllm
      • Red Hat OpenShift AI: rhoai_vllm
      • OpenAI: openai
      • Microsoft Azure OpenAI: azure_openai

    • chatbot_model_config_extras (optional)

      Use this field to pass a JSON dictionary of extra parameters directly to the model provider, for settings not covered by other standard fields. For example, you can specify an api_version parameter for Microsoft Azure OpenAI in the JSON format '{"api_version": "<your API version>"}'.

    Additional settings for MCP server configuration:

    • aap_gateway_url
    • aap_controller_url

      Use these keys to configure a Model Context Protocol (MCP) server that interfaces with the Ansible Lightspeed intelligent assistant. The values of aap_gateway_url and aap_controller_url are internal URLs accessible to the platform gateway and automation controller services on the OpenShift cluster. For example, if the name of your Ansible Automation Platform custom resource is myaap, these URLs are:

      • aap_gateway_url: http://myaap
      • aap_controller_url: http://myaap-controller-service

      The MCP server configuration depends on which of these keys you set:

      • If neither parameter is configured, no MCP server is provisioned or registered with the underlying LLM’s tool at runtime.
      • If you configure only the aap_gateway_url parameter, the Ansible Lightspeed Service MCP server is provisioned. Authentication attempts to use the JSON Web Token (JWT) associated with the user’s authenticated context.
      • If you configure both the aap_gateway_url and aap_controller_url parameters, both the Ansible Lightspeed Service MCP server and the Ansible Automation Platform Controller Service MCP server are configured. Authentication attempts to use the JWT associated with the user’s authenticated context.
  7. Click Create. The chatbot configuration secret is created.
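Equivalently, the same secret can be sketched as a manifest that you apply with oc apply. All values below are placeholders that you must replace with your own, and the optional keys can be omitted:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: chatbot-configuration-secret  # any unique name
  namespace: <your_namespace>
type: Opaque
stringData:
  chatbot_model: <your_model_name>
  chatbot_url: https://your_inference_api/v1
  chatbot_token: <your_api_token>
  chatbot_llm_provider_type: rhoai_vllm  # for Red Hat OpenShift AI
```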

6.2.2. Updating the YAML file of the Ansible Automation Platform operator

After you create the chatbot configuration secret, you must update the YAML file of the Ansible Automation Platform operator to use the secret.

Procedure

  1. Log in to Red Hat OpenShift Container Platform as an administrator.
  2. Navigate to Operators → Installed Operators.
  3. From the list of installed operators, select the Ansible Automation Platform operator.
  4. Locate and select the Ansible Automation Platform custom resource, and then click the required app.
  5. Select the YAML tab.
  6. Scroll to the spec: section, and add the following details under it:

    spec:
      lightspeed:
        disabled: false
        chatbot_config_secret_name: <name of your chatbot configuration secret>
  7. Click Save. The Ansible Lightspeed intelligent assistant service takes a few minutes to set up.

    Note

    Upgrading from Ansible Automation Platform 2.5 to 2.6.1 enables HTTPS and TLS by default for internal communication between the Ansible Lightspeed API and the Ansible Lightspeed intelligent assistant pod. Following the upgrade to Ansible Automation Platform 2.6.1, the intelligent assistant is unavailable for approximately 60 seconds while its pod restarts.

Verification

  1. Verify that the chat interface service is running successfully:

    1. Navigate to Workloads → Pods.
    2. Filter with the term api and ensure that the following APIs are displayed in Running status:

      • myaap-lightspeed-api-<version number>
      • myaap-lightspeed-chatbot-api-<version number>
  2. Verify the MCP server configuration if you specified either the aap_gateway_url or the aap_controller_url parameter:

    • Open the lightspeed-chatbot-api pod and click the Containers section.

      • If the ansible-mcp-lightspeed container is displayed, the Ansible Lightspeed MCP server is running.
      • If the ansible-mcp-controller container is displayed, the Ansible Automation Platform Controller Service MCP server is running.
  3. Verify that the chat interface is displayed on the Ansible Automation Platform:

    1. Access the Ansible Automation Platform:

      1. Navigate to Operators → Installed Operators.
      2. From the list of installed operators, click Ansible Automation Platform.
      3. Locate and select the Ansible Automation Platform custom resource, and then click the app that you created.
      4. From the Details tab, record the information available in the following fields:

        • URL: This is the URL of your Ansible Automation Platform instance.
        • Gateway Admin User: This is the username to log in to your Ansible Automation Platform instance.
        • Gateway Admin password: This is the password to log in to your Ansible Automation Platform instance.
      5. Log in to the Ansible Automation Platform using the URL, username, and password that you recorded earlier.
    2. Access the Ansible Lightspeed intelligent assistant:

      1. Click the Ansible Lightspeed intelligent assistant icon at the top right corner of the taskbar.
      2. Verify that the chat interface is displayed.

6.2.3. Changing your LLM model

If you have already deployed Ansible Lightspeed intelligent assistant but want to change your LLM model, you can create a new chatbot configuration secret for the new LLM model.

Alternatively, if you want to use the same chatbot configuration secret, you must delete and redeploy the Ansible Lightspeed intelligent assistant.

Procedure

  • To create and use a new chatbot configuration secret:

    1. Create a new chatbot configuration secret with a different name for the new LLM model.
    2. Update the YAML file of the Ansible Automation Platform operator with the new chatbot configuration secret name.

      The Ansible Automation Platform operator detects the new configuration and redeploys the Ansible Lightspeed intelligent assistant.

    3. Verify that the chat interface service is running successfully. See the verification steps mentioned in the topic Update the YAML file of the Ansible Automation Platform operator.

      Important

      Do not update the existing chatbot configuration secret with the new LLM model, because the reconciliation logic does not detect updates made to the secret.

  • To use the same chatbot secret by deleting and redeploying the Ansible Lightspeed intelligent assistant:

    1. Disable the Ansible Lightspeed operator instance:

      1. Navigate to Operators → Installed Operators.
      2. From the list of installed operators, select Ansible Automation Platform.
      3. Locate and select the Ansible Automation Platform custom resource.
      4. Select the YAML tab and, under the spec: section, set disabled: true for the lightspeed entry.
      5. Click Save.
    2. Delete the Ansible Lightspeed operator instance:

      1. Navigate to Operators → Installed Operators.
      2. From the list of installed operators, select Ansible Lightspeed and delete the operator.
    3. Re-enable the Ansible Automation Platform instance:

      1. Navigate to Operators → Installed Operators.
      2. From the list of installed operators, select Ansible Automation Platform.
      3. Locate and select the Ansible Automation Platform custom resource.
      4. Select the YAML tab and, under the spec: section, set disabled: false for the lightspeed entry.
      5. Click Save.
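The lightspeed section edited in the steps above looks like the following in the YAML view; the secret name is a placeholder:

```yaml
spec:
  lightspeed:
    disabled: false  # set to true to disable, back to false to re-enable
    chatbot_config_secret_name: <name of your chatbot configuration secret>
```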

After you deploy the Ansible Lightspeed intelligent assistant, all Ansible users within the organization can access and use the chat interface to ask questions and receive information about the Ansible Automation Platform.

  1. Log in to the Ansible Automation Platform.
  2. Click the Ansible Lightspeed intelligent assistant icon at the top right corner of the taskbar.

    The Ansible Lightspeed intelligent assistant window opens with a welcome message.

You can perform the following tasks:

  • Ask questions in the prompt field and get answers about the Ansible Automation Platform

    Note

    If you are using an IBM Granite 3.3 series AI model, you might experience a delay of about one minute when waiting for a chat response. To resolve this issue, restart the chat session.

  • View the chat history of all conversations in a chat session.
  • Search the chat history using a user prompt or answer. The chat history is deleted when you close an existing chat session or log out from the Ansible Automation Platform.
  • Restore an earlier chat by clicking the relevant entry from the chat history.
  • Give feedback on the quality of the chat answers, by clicking the Thumbs up or Thumbs down icon.
  • Copy and record the answers by clicking the Copy icon.
  • Switch the assistant between dark and light mode by clicking the Sun icon at the top right corner of the toolbar.
  • Clear the context of an existing chat by using the New chat button in the chat history.
  • Close the chat interface while working on the Ansible Automation Platform.

As an organization administrator, you can deploy an Ansible Model Context Protocol (MCP) server on an operator-based installation or container-based installation of Ansible Automation Platform 2.6. This functionality is available as a Technology Preview release.

7.1. Overview

Model Context Protocol (MCP) is an open standard that enables AI models to use external AI tools and services via a unified interface. Using the Ansible MCP server, you can connect your Ansible Automation Platform with your preferred external AI tool (such as Claude, Cursor, or ChatGPT). The AI tools can access key information about your Ansible Automation Platform environment and perform tasks. Ansible users can query information, execute workflows, and perform automation tasks using natural language prompts directly within their preferred AI tool.

Note

Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

7.1.1. Benefits

The following are the benefits of the Ansible MCP server:

For external AI tools:

  • Provides a standardized interface for securely querying infrastructure data and executing automation workflows within the Ansible Automation Platform.
  • Enables agentic workflows to interact with the Ansible Automation Platform.

For Ansible users:

  • Provides the ability to use the chatbot interface of their preferred external AI tool to get information about their Ansible Automation Platform environment, and run automation jobs directly through that tool.

For developers:

  • Reduces the time and complexity of developing or integrating the Ansible Automation Platform with AI applications or agents.
  • Simplifies AI integration, enabling existing automation through Ansible Automation Platform to be exposed to AI tools without writing custom API code or middleware.

7.1.2. Workflow

The standalone Ansible MCP server functions as a secure link between your external AI clients and the Ansible Automation Platform. The AI agent accesses underlying infrastructure only when the Ansible MCP server has appropriate permissions.

The following describes the workflow:

  1. AI client (The requester): The user initiates a request through their external AI agent (for example, Cursor or Claude) using natural-language prompts.
  2. The AI model (The translator): The AI agent receives the request, interprets the intent, and maps it to the appropriate exposed Ansible toolset. It then sends a structured toolset call with the necessary parameters.
  3. Ansible MCP server (The gatekeeper): Upon receiving the call, the Ansible MCP server validates the request. It uses the user’s API token to authenticate with the automation controller.
  4. Ansible controller (The executor): The automation controller accepts the validated command from the MCP server and triggers the appropriate automation job.
  5. Response loop: The automation result is returned to the Ansible MCP server, standardized into a format the AI agent can process, and displayed to the user via the AI client.
Important

Both the Ansible MCP server and the Ansible Automation Platform UI access the Ansible Automation Platform API. However, because the AI tool processes the API output before displaying it in its chat interface, you might observe different results when comparing the output from the AI tool with the Ansible Automation Platform UI.
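The structured toolset call in step 2 of the workflow follows the JSON-RPC 2.0 message shape that MCP uses. The following Python sketch shows what such a call could look like on the wire; the tool name and arguments are hypothetical illustrations, not the Ansible MCP server's actual schema:

```python
import json

# Illustrative JSON-RPC 2.0 "tools/call" message, as used by MCP.
# The tool name "list_job_templates" and its arguments are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "list_job_templates",
        "arguments": {"organization": "Default"},
    },
}

wire_message = json.dumps(request)   # what the AI client sends
decoded = json.loads(wire_message)   # what the MCP server parses
print(decoded["params"]["name"])     # the tool the AI model selected
```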

7.1.3. Ansible MCP server toolsets

The Ansible MCP server provides a pre-configured suite of toolsets that effectively act as a bridge between your preferred AI agent and the Ansible Automation Platform. Once configured, these toolsets enable your AI agent to perform specific, authorized actions without requiring you to leave the chat interface.

The Ansible MCP server turns your AI agent from a passive assistant into an active operator that can interact with your Ansible Automation Platform infrastructure and execute workflows or automate tasks based on the permissions you define.

The following toolsets are available in this Technology Preview release:


Job management

Tools to list available job templates, launch automation jobs, and monitor their real-time status.

Operators can:

  • Launch job templates and workflows to execute automation tasks for their projects and services.
  • View job output and logs to troubleshoot failed automation tasks and understand what went wrong.
  • Relaunch failed jobs to recover from temporary failures and complete necessary automation tasks.

Inventory management

Tools to query your inventory for host details, check group membership, and verify system facts.

Operators can:

  • View and browse inventories across environments to understand which systems they are managing with automation.
  • Manage group assignments to target automation to specific sets of systems.
  • View hosts that are configured for automation.

System monitoring

Tools to retrieve job logs, troubleshoot failed tasks, and check the health of your automation environment.

Administrators can:

  • Perform platform status and health checks across all services to identify issues and ensure the automation platform is running correctly.
  • Monitor service health through the platform gateway to ensure all platform components are functioning correctly.
  • Audit user activity and generate reports to ensure compliance and identify potential security issues.

User management

Tools to allow the AI agent to administer access and organizational structure within the Ansible Automation Platform.

Administrators can:

  • Use natural-language prompts to provision users and enforce hierarchy, rather than manually navigating the UI.
  • Create, modify, and delete users and teams to manage access to the Ansible Automation Platform and support organizational changes.
  • Configure role-based access control to ensure users have the appropriate permissions for their responsibilities while maintaining security.
  • View team memberships and structure to see who else in their organization is working on automation.

Security/compliance

Tools that enable the AI agent to act as a security operator, managing sensitive credentials and verifying platform integrity without exposing raw secrets.

Operators can:

  • View available credentials to understand what authentication options are available for their automation jobs.

Administrators can:

  • Manage credentials and security policies to ensure secure access to external systems while maintaining proper governance.
  • Manage custom credential types for seamless integration with third-party applications.

Platform configuration

Tools that enable organization administrators and developers to inspect and tune the Ansible Automation Platform infrastructure itself.

Administrators can:

  • Manage system settings across all components to configure the platform in line with the organizational requirements and policies.
  • Manage and track licenses to ensure compliance with licensing terms and optimize license utilization.

Developers can:

  • Tune execution environments to optimize the runtime performance of their automation content.

7.1.4. Server-level and user-level permissions

The Ansible MCP server employs a dual-layer security model to ensure safe integration between AI tools and your Ansible Automation Platform infrastructure. This approach combines a global administrative safeguard with the granular Role-Based Access Control (RBAC) of the Ansible Automation Platform.

You can grant the following access types to the Ansible MCP server:

  • Server-level permissions: Organization administrators assign a global-level permission while deploying the Ansible MCP server. Administrators can choose one of the following access levels:

    • Read-only access: The default setting that enforces a strict "look but do not touch" policy. The AI agent can retrieve system data, such as logs and inventory, but the agent cannot launch jobs or modify configurations. This global safeguard overrides all individual user permissions to prevent unintended automation.
    • Read-write access: This setting authorizes the AI agent to make changes in your Ansible Automation Platform, such as executing job templates, managing resources, and applying infrastructure changes. However, these actions are subject to the specific RBAC permissions of the user-provided API token.
  • User-level permissions: The AI agent’s specific capabilities are inherited from the user account that generated the authentication API token.

    • Inherited permissions: The AI tool inherits the user’s permissions and performs only the actions the user is authorized to perform. For example, if the user’s token only has permissions to view the "network" inventory, the AI tool cannot access or modify the "database" inventory even if the user requests it.
    • Rejection of unauthorized actions: If the AI tool attempts an action (like launching a job) that the user’s token is not authorized to perform, the Ansible Automation Platform API rejects the request.
Warning

Enabling read-write access for the Ansible MCP server grants the AI agent autonomy to directly make changes in your Ansible Automation Platform environment, for example, executing automation jobs. The AI agent can directly make changes in your Ansible Automation Platform environment only if the user has write permissions. Large Language Models (LLMs) can occasionally misinterpret prompts or hallucinate commands. Therefore, enabling read-write access may introduce a risk of unintended changes to your environment.
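The dual-layer model above can be sketched as two sequential checks: the global server-level setting is evaluated first and, only if it passes, the user's inherited RBAC permissions are consulted. The function and permission names below are illustrative, not the platform's actual API:

```python
# Hypothetical sketch of the dual-layer permission model.
def is_action_allowed(server_read_write, user_permissions, action):
    read_only_actions = {"view_inventory", "read_logs", "check_status"}
    # Layer 1: the server-level safeguard. In read-only mode, write-style
    # actions are rejected regardless of the user's own rights.
    if not server_read_write and action not in read_only_actions:
        return False
    # Layer 2: the user's RBAC permissions, inherited from the API token.
    return action in user_permissions

# A user authorized to launch jobs is still blocked by a read-only server:
print(is_action_allowed(False, {"launch_job", "read_logs"}, "launch_job"))  # False
# With read-write enabled, the same user's RBAC permission applies:
print(is_action_allowed(True, {"launch_job"}, "launch_job"))  # True
# Unauthorized actions are rejected by RBAC even in read-write mode:
print(is_action_allowed(True, {"read_logs"}, "launch_job"))  # False
```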

7.1.5. Telemetry data collection

Red Hat collects anonymized telemetry data from the Ansible MCP server. The telemetry data includes metrics related to MCP server performance, adoption trends, and usage patterns.

Telemetry data will be automatically collected for Ansible MCP server deployments using the Ansible Automation Platform patch release of 21 January 2026 and later versions. Red Hat will use this data to monitor the operational health of your MCP servers and to ensure the long-term scalability of the MCP ecosystem.

Important

Telemetry data collection cannot be disabled, but strict user privacy is maintained. Red Hat does not collect users' personal information, such as usernames or passwords. If any personal information is inadvertently received, the data is deleted. Refer to the Red Hat Privacy Statement for more information about Red Hat’s privacy practices.

7.1.6. Prerequisites

  • Platform version: An instance of Ansible Automation Platform 2.6 or later.
  • Deployment environment:

    • OpenShift: Access to an OpenShift cluster with permissions to install operators.
    • Containerized: A supported container runtime.
  • Access credentials: A valid user or service account within Ansible Automation Platform with permissions to execute the desired automation jobs. You will need to generate an API token for this account.

7.1.7. Process

Perform the following tasks to deploy and configure an Ansible MCP server and integrate it with your preferred AI tool:

  1. Deploy and configure an Ansible MCP server on an operator-based installation.

     An organization administrator deploys and configures the Ansible MCP server on an operator-based installation of Ansible Automation Platform 2.6.

  2. Create an API token for the Ansible MCP server.

     An Ansible user creates an API token for their Ansible Automation Platform instance and uses it to connect their preferred AI tool. The AI tool inherits the user’s permissions for authentication using the API token.

  3. Connect an external AI agent to the Ansible MCP server.

     The Ansible user then configures their external AI tool with the API token, enabling the AI tool to connect to the Ansible MCP server, execute workflows, and automate tasks.

As an organization administrator, you can deploy and configure an Ansible MCP server on an operator-based installation of Ansible Automation Platform 2.6 by using the following procedure.

Prerequisites

  • You have a valid Ansible Automation Platform 2.6 subscription.

Procedure

  1. Log in to Red Hat OpenShift Container Platform as an administrator.
  2. Navigate to the namespace where you want to install the MCP server.
  3. Select Operators → Installed Operators.
  4. From the list of installed operators, select Ansible Automation Platform.
  5. In the Ansible Automation Platform tile, click Create instance.
  6. From the Configure via field, select the Form view, then provide the instance name. For example, aap-mcp.
  7. Select the YAML view, and under the spec: section, add the mcp component:

    spec:
      mcp:
        disabled: false
        allow_write_operations: false
    Important

    Use the allow_write_operations variable to configure the operational access level of the Ansible MCP server:

    • Read-only access: Set the variable to false to restrict the AI agent to viewing data only. In this mode, the AI tool can query job statuses and logs, but cannot trigger new automation in the Ansible Automation Platform. The MCP server is set to read-only mode by default.
    • Read-write access: Set the variable to true to allow the AI agent to make changes in Ansible Automation Platform, such as executing jobs or modifying the system state.
  8. Click Create. The Ansible MCP server is created.

Verification

  1. Navigate to Workloads → Deployments.
  2. Check that the deployment you created is listed there. For example: aap-mcp.
  3. Check one of the pod’s logs and verify there are no errors.
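The same verification can be performed from the command line. The following is a minimal sketch, assuming the namespace aap and the deployment name aap-mcp from the example above; both are placeholders to adjust for your environment, and the commands are composed into variables so you can review them before running them against your cluster.

```shell
# Hypothetical values; adjust NAMESPACE and DEPLOYMENT to your environment.
NAMESPACE=aap
DEPLOYMENT=aap-mcp

# Compose the verification commands: list the deployment, then tail pod logs.
GET_CMD="oc get deployment $DEPLOYMENT -n $NAMESPACE"
LOGS_CMD="oc logs deployment/$DEPLOYMENT -n $NAMESPACE --tail=50"
echo "$GET_CMD"
echo "$LOGS_CMD"
```

Run the echoed commands directly once the values match your cluster; the log output should contain no errors.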

Create an API token for your Ansible Automation Platform instance so that you can use it to connect with your preferred AI agent. The AI tool inherits your user permissions through API token-based authentication.

Prerequisites

  • Your organization administrator has deployed an Ansible MCP server.

Procedure

  1. From the navigation panel, select Access Management → Users.
  2. Select the username for your user profile to configure OAuth 2 tokens.
  3. Select the Tokens tab. When no tokens are present, the Tokens screen prompts you to add them.
  4. Click Create token, and provide the following details:

    • Application: Enter the name of the application with which you want to associate your token. Alternatively, you can search for it by clicking Browse. This opens a separate window that enables you to choose from the available options. Select Name from the filter list to filter by name if the list is extensive.

      Note

      To create a Personal Access Token (PAT) that is not linked to any application, leave the Application field blank.

    • Description: (Optional) Provide a short description for your token.
    • Scope: (Required) Specify the level of access you want this token to have. The scope of an OAuth 2 token can be set as one of the following:

      • Write: Allows requests sent with this token to add, edit, and delete resources in the system.
      • Read: Limits actions to read only. The write scope includes the read scope.
  5. Click Create token. The token information is displayed.
  6. On the token information page that appears, click the Copy icon and save the token for future use.

    Important

    This will be the only time the token is displayed. Therefore, ensure that you save the token for future use.

Verification

You can verify that the application now shows the user with the appropriate token by selecting the Tokens tab on the Application Details page:

  1. From the navigation panel, select Access Management → OAuth Applications.
  2. Select the application you want to verify from the Applications list view.
  3. Select the Tokens tab.

    Your token should be displayed in the list of tokens associated with the application you chose.
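You can also exercise the token directly with an HTTP request. The following sketch assumes a gateway URL of https://aap.example.com and a current-user endpoint at /api/gateway/v1/me/ (both are assumptions; substitute your own URL and the token you saved). The command is composed into a variable for review before running it.

```shell
# Hypothetical URL and token placeholder; substitute your own values.
AAP_URL="https://aap.example.com"
TOKEN="changeme"   # the token you copied when it was created

# A successful response containing your username confirms the token authenticates.
CURL_CMD="curl -sk -H \"Authorization: Bearer $TOKEN\" $AAP_URL/api/gateway/v1/me/"
echo "$CURL_CMD"
```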

Use the API token of the Ansible MCP server to connect it with your preferred AI agent, such as Claude, Cursor, or ChatGPT.

Prerequisites

  • An Ansible MCP server is deployed on your Ansible Automation Platform 2.6 environment.
  • An API token is created for your Ansible MCP server.

Procedure

  1. Go to the AI tool that you want to connect to the Ansible Automation Platform.
  2. Follow your AI client’s instructions to configure the MCP server settings.

    Typically, you must specify the MCP server configurations in the mcp.json file.

  3. When configuring the mcp.json file, add the Ansible MCP server URL in the following format:

    <Ansible MCP server URL>/<toolset>/mcp

    Key:

    • Ansible MCP server URL = The URL of the Ansible MCP server. For example, https://api.example.com/.

      To obtain the Ansible MCP server URL, contact your organization administrator.

    • Toolset = The toolset that you want to connect to. For example, job_management, inventory_management, system_monitoring, user_management, security_compliance, and platform_configuration.
    • Token = The API token of the Ansible MCP server.

      Use the following format to add details about your Ansible MCP server in the mcp.json file:

      "mcpServers": {
              "aap-mcp-job-management": {
                "type": "http",
                "url": "https://api.example.com/job_management/mcp",
                "headers": {
                  "Authorization": "Bearer ${env:MY_SERVICE_TOKEN}"
                }
              },
              "aap-mcp-inventory-management": {
                "type": "http",
                "url": "https://api.example.com/inventory_management/mcp",
                "headers": {
                  "Authorization": "Bearer ${env:MY_SERVICE_TOKEN}"
                }
              },
              "aap-mcp-system-monitoring": {
                "type": "http",
                "url": "https://api.example.com/system_monitoring/mcp",
                "headers": {
                  "Authorization": "Bearer ${env:MY_SERVICE_TOKEN}"
                }
              },
              "aap-mcp-user-management": {
                "type": "http",
                "url": "https://api.example.com/user_management/mcp",
                "headers": {
                  "Authorization": "Bearer ${env:MY_SERVICE_TOKEN}"
                }
              },
              "aap-mcp-security-compliance": {
                "type": "http",
                "url": "https://api.example.com/security_compliance/mcp",
                "headers": {
                  "Authorization": "Bearer ${env:MY_SERVICE_TOKEN}"
                }
              },
              "aap-mcp-platform-configuration": {
                "type": "http",
                "url": "https://api.example.com/platform_configuration/mcp",
                "headers": {
                  "Authorization": "Bearer ${env:MY_SERVICE_TOKEN}"
                }
              }
            },
      Important

      Use a concise MCP server name, ideally limited to 20 characters. This is because AI agents combine the MCP server name with the tool name to create a unique identifier, and most AI agents enforce a 64-character limit on this combined identifier.

Verification

  • Verify that the AI tool successfully connects to the Ansible Automation Platform MCP server using the API token.

    In your AI agent’s chat window, enter a prompt like What MCP tools are available for my Ansible Automation Platform?. The AI agent should return a response with a list of tools that are enabled for the Ansible Automation Platform MCP server.

If you changed the permissions of the Ansible MCP server after it was created and deployed, you must delete the AnsibleMCPServer custom resource and recreate it.

Prerequisites

  • You have deployed an Ansible MCP server on an operator-based installation of Ansible Automation Platform.

Procedure

  1. Log in to Red Hat OpenShift Container Platform.
  2. Go to the Resources tab under Installed Operators → aap-operator.v2.6.0-, and then select AnsibleMCPServer details.
  3. Under Resources, search for the AnsibleMCPServer custom resource.
  4. Select the active AnsibleMCPServer instance. An active AnsibleMCPServer instance is identified by the -mcp suffix appended to the Ansible Automation Platform custom resource name.
  5. Select the Settings menu on the right side of the instance, and then click Delete AnsibleMCPServer.

    After the reconciliation process completes, the existing MCP server instance is deleted, and a new Ansible MCP server instance is created.

Verification

  1. Navigate to Workloads → Deployments.
  2. Check that the deployment you created is listed there. For example: aap-mcp.
  3. Check one of the pod’s logs and verify there are no errors.

7.6. Troubleshooting Ansible MCP server errors

This section contains information to help you diagnose and resolve issues with deploying the Ansible MCP server and connecting it to an external AI agent.

7.6.1. API requests rejected with a 406 status code

Issue: Ansible Automation Platform rejects an API request (for example, retrieving job stdout) with an HTTP 406 status code if the MCP server's requested output is not in JSON format.

Workaround: To obtain the output in a specific format, instruct your AI tool to use JSON format first. You can then transform the JSON output into your desired format.

7.6.2. User requests rejected with 400 status code

Issue: The Ansible MCP server might reject user requests from the external AI tool with a 400 Bad Request status code. This error occurs when Ansible Automation Platform uses a self-signed certificate.

Workaround: Configure the Ansible MCP server to ignore certificate errors using the following steps:

  • For container-based installation: Set the value of the mcp_ignore_certificate_errors variable to true.
  • For operator-based installation:

    Add the IGNORE_CERTIFICATE_ERRORS setting to the mcp: section of the AnsibleAutomationPlatform custom resource in the following format:

      spec:
        mcp:
          extra_settings:
            - setting: IGNORE_CERTIFICATE_ERRORS
              value: true

7.6.3. Permission changes not applied to a deployed MCP server

Issue: Changes to the permissions of the Ansible MCP server after it was created and deployed do not take effect until you delete the AnsibleMCPServer custom resource and recreate it.

Workaround: Perform the following steps:

  1. Log in to Red Hat OpenShift Container Platform.
  2. Under Resources, search for the AnsibleMCPServer custom resource.
  3. Select the active AnsibleMCPServer instance. An active AnsibleMCPServer instance is identified by the -mcp suffix appended to the Ansible Automation Platform custom resource name.
  4. Select the Settings menu (3-dot menu icon) on the right side of the instance, then click Delete AnsibleMCPServer.
  5. After the reconciliation process is completed, the existing Ansible MCP server instance is deleted and a new Ansible MCP server instance is created.

You can scale down all Ansible Automation Platform deployments and StatefulSets by using the idle_aap variable. This is useful for scenarios such as upgrades, migrations, or disaster recovery.

Procedure

  1. Log in to Red Hat OpenShift Container Platform.
  2. Go to Operators → Installed Operators.
  3. Select your Ansible Automation Platform Operator deployment.
  4. Select All Instances and go to your AnsibleAutomationPlatform instance.
  5. Click the ⋮ icon and then select Edit AnsibleAutomationPlatform.
  6. In the YAML view, paste the following YAML code under the spec: section:

    idle_aap: true
  7. Click Save.

Next steps

Setting the idle_aap value to true scales down all active deployments. Setting the value to false scales the deployments back up.
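In context, the edited custom resource looks like the following sketch, where the instance name myaap is hypothetical:

```yaml
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: myaap        # hypothetical instance name
spec:
  idle_aap: true     # true scales all deployments down; false scales them back up
```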

Migrating your Red Hat Ansible Automation Platform deployment to the Ansible Automation Platform Operator allows you to take advantage of the benefits provided by a Kubernetes native operator, including simplified upgrades and full lifecycle support for your Red Hat Ansible Automation Platform deployments.

You can use the Ansible Automation Platform migration guide for help with migrating.

Note

Upgrades of Event-Driven Ansible version 2.4 to 2.6 are not supported. Database migrations between Event-Driven Ansible 2.4 and Event-Driven Ansible 2.6 are not compatible.

The Ansible Automation Platform Operator simplifies the installation, upgrade, and deployment of new Red Hat Ansible Automation Platform instances in your OpenShift Container Platform environment.

10.1. Overview

You can use this document for help with upgrading Ansible Automation Platform versions 2.4 and 2.5 to 2.6 on Red Hat OpenShift Container Platform. This document applies to upgrades of Ansible Automation Platform 2.6 to later versions of 2.6.

The Ansible Automation Platform Operator manages deployments, upgrades, backups, and restores of automation controller and automation hub. It also handles deployments of AnsibleJob and JobTemplate resources from the Ansible Automation Platform Resource Operator.

Each operator version has default automation controller and automation hub versions. When the operator is upgraded, it also upgrades the automation controller and automation hub deployments it manages, unless overridden in the spec.

OpenShift deployments of Ansible Automation Platform use the built-in Operator Lifecycle Management (OLM) functionality. For more information, see Operator Lifecycle Manager concepts and resources. OpenShift does this by using Subscription, CSV, InstallPlan, and OperatorGroup objects. Most users will not have to interact directly with these resources. They are created when the Ansible Automation Platform Operator is installed from OperatorHub and managed through the Subscriptions tab in the OpenShift console UI. For more information, refer to Accessing the web console.

Subscription tab
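If you do need to inspect the OLM objects described above, they are visible with ordinary oc commands. The following sketch assumes the operator is installed in a namespace named aap (a placeholder); the command is composed into a variable for review before running it against your cluster.

```shell
# Hypothetical namespace; adjust to where the operator is installed.
NAMESPACE=aap

# Compose a command that lists the OLM objects backing the installation.
OLM_CMD="oc get subscription,csv,installplan -n $NAMESPACE"
echo "$OLM_CMD"
```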

10.2. Upgrade considerations

If you are upgrading from version 2.4, continue to Upgrading the Ansible Automation Platform Operator.

If your OpenShift Container Platform version is not supported by the Red Hat Ansible Automation Platform version you are upgrading to, you must upgrade your OpenShift Container Platform cluster to a supported version first.

Refer to the Red Hat Ansible Automation Platform Life Cycle to determine the OpenShift Container Platform version needed.

For information about upgrading your cluster, refer to Updating clusters.

10.3. Prerequisites

To upgrade to a newer version of Ansible Automation Platform Operator, you must:

  • Ensure your system meets the system requirements detailed in the Operator topologies section of the Tested deployment models guide.
  • Create AutomationControllerBackup and AutomationHubBackup objects. For help with this, see Backup and recovery for operator environments.
  • Review the Release notes for the new Ansible Automation Platform version to which you are upgrading and any intermediate versions.
  • Determine the type of upgrade you want to perform. See the Channel Upgrades section for more information.

10.4. Channel upgrades

Upgrading to version 2.6 from Ansible Automation Platform 2.4 involves retrieving updates from a “channel”. A channel refers to a location where you can access your update. It currently resides in the OpenShift console UI.

Update channel

10.4.1. In-channel upgrades

Most upgrades occur within a channel as follows:

  1. A new update becomes available in the marketplace, through the redhat-operator CatalogSource.
  2. The system automatically creates a new InstallPlan for your Ansible Automation Platform subscription.

    • If set to Manual, the InstallPlan needs manual approval in the OpenShift UI.
    • If set to Automatic, it upgrades as soon as the new version is available.

      Note

      Set a manual install strategy on your Ansible Automation Platform Operator subscription during installation or upgrade. You will be prompted to approve upgrades when available in your chosen update channel. Stable channels, like stable-2.5, are available for each X.Y release.

  3. A new subscription, CSV, and operator containers are created alongside the old ones. The old resources are cleaned up after a successful install.
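With a Manual approval strategy, you can also approve a pending InstallPlan from the command line instead of the OpenShift UI. The following sketch uses a hypothetical namespace and InstallPlan name; list the real names first with oc get installplan. The command is composed into a variable for review before running it.

```shell
# Hypothetical names; list real InstallPlans with: oc get installplan -n <namespace>
NAMESPACE=aap
PLAN=install-abcde

# Setting spec.approved to true lets the pending upgrade proceed.
APPROVE_CMD="oc patch installplan $PLAN -n $NAMESPACE --type merge -p '{\"spec\":{\"approved\":true}}'"
echo "$APPROVE_CMD"
```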

10.4.2. Cross-channel upgrades

Upgrading between X.Y channels is always manual and intentional. Stable channels for major and minor versions are in the Operator Catalog. Currently, only version 2.x is available, so there are few channels. It is recommended to stay on the latest minor version channel for the latest patches.

If the subscription is set for manual upgrades, you must approve the upgrade in the UI. Then, the system upgrades the Operator to the latest version in that channel.

Note

It is recommended to set a manual install strategy on your Ansible Automation Platform Operator subscription during installation or upgrade. You will be prompted to approve upgrades when they become available in your chosen update channel. Stable channels, such as stable-2.5, are available for each X.Y release.

The containers provided in the latest channel are updated regularly for OS upgrades and critical fixes. This allows customers to receive critical patches and CVE fixes faster. Larger changes and new features are saved for minor and major releases.

For each major or minor version channel, there is a corresponding "cluster-scoped" channel available. Cluster-scoped channels deploy operators that can manage all namespaces, while non-cluster-scoped channels can only manage resources in their own namespace.

Important

Cluster-scoped bundles are not compatible with namespace-scoped bundles. Do not try to switch between normal (stable-2.6 for example) channels and cluster-scoped (stable-2.6-cluster-scoped) channels, as this is not supported.

To upgrade to the latest version of Ansible Automation Platform Operator on OpenShift Container Platform, you can use the following procedure:

Note

If you are on version 2.4, it is recommended to skip 2.5 and upgrade straight to version 2.6.

If you upgraded from 2.4 to 2.5, you must migrate your authentication methods and users before upgrading to 2.6, because the legacy authenticator functionality was removed.

Prerequisites

Important

Upgrading from Event-Driven Ansible 2.4 is not supported. If you are using Event-Driven Ansible 2.4 in production, contact Red Hat before you upgrade.

Procedure

  1. Log in to OpenShift Container Platform.
  2. Navigate to Operators → Installed Operators.
  3. Select the Ansible Automation Platform Operator installed on your project namespace.
  4. Select the Subscriptions tab.
  5. Change the channel to stable-2.6. This applies whether you are upgrading from version 2.4 or from version 2.5.
  6. This creates an InstallPlan for the user. Click Preview InstallPlan.
  7. Click Approve.
  8. Create a Custom Resource (CR) using the Ansible Automation Platform UI. The automation controller and automation hub UIs remain until all SSO configuration is supported in the platform gateway UI.

Verification

You can confirm that you have upgraded successfully by navigating to Operators → Installed Operators. Under Ansible Automation Platform, verify that the version number begins with 2.6.x.

Additionally, go to your Ansible Automation Platform Operator deployment and click All instances to verify that all instances upgraded correctly. All pods should display either a Running or Completed status, with no pods displaying an error status.

After upgrading to the latest version of Ansible Automation Platform Operator on OpenShift Container Platform, you can create an Ansible Automation Platform custom resource (CR) that specifies the names of your existing deployments, in the same namespace.

The following example outlines the steps to deploy a new Event-Driven Ansible setup after upgrading to the latest version, with existing automation controller and automation hub deployments already in place.

The Appendix contains more examples of Ansible Automation Platform CRs for different deployments.

Procedure

  1. Log in to Red Hat OpenShift Container Platform.
  2. Navigate to Operators → Installed Operators.
  3. Select your Ansible Automation Platform Operator deployment.
  4. Select the Details tab.
  5. On the Ansible Automation Platform tile click Create instance.
  6. From the Create Ansible Automation Platform page enter a name for your instance in the Name field.
  7. Click YAML view and paste the following YAML (aap-existing-controller-and-hub-new-eda.yml):

    ---
    apiVersion: aap.ansible.com/v1alpha1
    kind: AnsibleAutomationPlatform
    metadata:
      name: myaap
    spec:
      # Development purposes only
      no_log: false
    
      controller:
        name: existing-controller #obtain name from controller CR
        disabled: false
    
      eda:
        disabled: false
    
      hub:
        name: existing-hub
        disabled: false
  8. Click Create.

    Note

    You can override the operator’s default image for automation controller, automation hub, or platform-resource app images by specifying the preferred image on the YAML spec. This enables upgrading a specific deployment, like a controller, without updating the operator.

    The recommended approach however, is to upgrade the operator and use the default image values.

    Verification

    Navigate to your Ansible Automation Platform Operator deployment and click All instances to verify whether all instances have deployed correctly. You should see the Ansible Automation Platform instance and the deployed AutomationController, EDA, and AutomationHub instances here.

Alternatively, you can verify whether all instances deployed correctly by running oc get route in the command line.

To upgrade from Ansible Automation Platform 2.4 to 2.6 with an external database, you must scale down your Operator deployment, upgrade your PostgreSQL, then scale your deployment back up.

Prerequisites

  • A 2.4 automation controller and automation hub deployment with an external PostgreSQL 13 database
  • A newly provisioned PostgreSQL 15 database for the new platform gateway component

Procedure

  1. Create a secret postgres-config-gateway with PostgreSQL 15 credentials for the platform gateway component. For example:

    apiVersion: v1
    kind: Secret
    metadata:
      name: postgres-config-gateway
      namespace: aap
    stringData:
      host: "<DB_HOST_OR_IP>"
      port: "<DB_PORT>"     # default is 5432
      database: "<DB_NAME>" # for example "gateway"
      username: "<DB_USER>" # for example "gateway"
      password: "<DB_PASSWORD>"
      sslmode: "prefer"
      type: "unmanaged"
    type: Opaque
  2. Add your newly created secret to your Ansible Automation Platform instance:

    spec:
      postgres_configuration_secret: postgres-config-gateway
  3. Scale down your deployments in their respective namespaces using:

    oc scale deployment --replicas=0 -n <component-namespace> <component-deployment>

    1. Automation controller:

      1. automation-controller-operator-controller-manager
      2. <controller-name>-controller-task
      3. <controller-name>-controller-web
    2. Automation hub:

      1. automation-hub-operator-controller-manager
      2. <hub-name>-hub-api
      3. <hub-name>-hub-content
      4. <hub-name>-hub-redis
      5. <hub-name>-hub-worker
    3. The remaining operators:

      1. ansible-lightspeed-operator-controller-manager
      2. eda-server-operator-controller-manager
      3. resource-operator-controller-manager
  4. Upgrade your PostgreSQL 13 to PostgreSQL 15.
  5. Scale your deployments back up using:

    oc scale deployment --replicas=1 -n <component-namespace> <component-deployment>

  6. Log in to Red Hat OpenShift Container Platform.
  7. Navigate to Operators → Installed Operators.
  8. Click the ⋮ icon next to your deployment and then click Edit Subscription.
  9. From the Details tab, select Update Channel.
  10. Select stable-2.6 as the channel and click Save.
  11. Deploy Ansible Automation Platform 2.6 using the following custom resource (CR):

    apiVersion: aap.ansible.com/v1alpha1
    kind: AnsibleAutomationPlatform
    metadata:
      name: aap
    spec:
    
      database:
        database_secret: postgres-config-gateway
    
      controller:
        name: existing-controller
    
      eda:
        disabled: true
    
      hub:
        name: existing-hub

Verification

To verify that your upgrade was successful, check your users, collections, job history, or similar resources, and confirm that they are on the new 2.6 instance and in the new PostgreSQL 15 database.
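The per-component scale-down in step 3 of the procedure above can be scripted as a single loop. The following sketch assumes everything runs in one namespace named aap with instance names mycontroller and myhub; all three values are placeholders. The commands are echoed so you can review them before running them against your cluster.

```shell
# Hypothetical namespace and instance names; adjust all three to your environment.
NAMESPACE=aap
CONTROLLER=mycontroller
HUB=myhub

# The eleven deployments named in steps 3.1-3.3 of the procedure.
DEPLOYMENTS="automation-controller-operator-controller-manager
$CONTROLLER-controller-task
$CONTROLLER-controller-web
automation-hub-operator-controller-manager
$HUB-hub-api
$HUB-hub-content
$HUB-hub-redis
$HUB-hub-worker
ansible-lightspeed-operator-controller-manager
eda-server-operator-controller-manager
resource-operator-controller-manager"

# Compose one scale command per deployment; change --replicas=0 to
# --replicas=1 to scale everything back up after the PostgreSQL upgrade.
for d in $DEPLOYMENTS; do
  echo "oc scale deployment --replicas=0 -n $NAMESPACE $d"
done
```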

You can use an upgrade patch to update your operator-based Ansible Automation Platform.

When you perform a patch update for an installation of Ansible Automation Platform on OpenShift Container Platform, most updates happen within a channel:

  1. A new update becomes available in the marketplace (through the redhat-operator CatalogSource).
  2. A new InstallPlan is automatically created for your Ansible Automation Platform subscription. If the subscription is set to Manual, the InstallPlan must be manually approved in the OpenShift UI. If the subscription is set to Automatic, it upgrades as soon as the new version is available.

    Note

    It is recommended that you set a manual install strategy on your Ansible Automation Platform Operator subscription (set when installing or upgrading the Operator) and you will be prompted to approve an upgrade when it becomes available in your selected update channel. Stable channels for each X.Y release (for example, stable-2.5) are available.

  3. A new Subscription, CSV, and Operator containers will be created alongside the old Subscription, CSV, and containers. Then the old resources will be cleaned up if the new install was successful.

You can enable the Ansible Automation Platform Operator with execution nodes by downloading and installing the install bundle.

Note

When using a custom certificate for Receptor nodes, the certificate requires the otherName field specified in the Subject Alternative Name (SAN) of the certificate with the value 1.3.6.1.4.1.2312.19.1. For more information, see Above the mesh TLS.

Receptor does not support the usage of wildcard certificates. Additionally, each Receptor certificate must have the host FQDN specified in its SAN for TLS hostname validation to be correctly performed.

You can add execution nodes from the Ansible Automation Platform user interface.

Prerequisites

  • An automation controller instance.
  • The receptor collection package is installed.
  • The Ansible Automation Platform repository ansible-automation-platform-2.6-for-rhel-{RHEL-RELEASE-NUMBER}-x86_64-rpms is enabled.

Procedure

  1. Log in to Red Hat Ansible Automation Platform.
  2. In the navigation panel, select Automation Execution → Infrastructure → Instances.
  3. Click Create Instance.
  4. Enter the execution node domain name or IP address in the Host Name field.
  5. Optional: Enter the port number in the Listener Port field.
  6. Click Create Instance.
  7. Click the download icon next to Install Bundle. This starts a download; take note of where you save the file.
  8. Untar the .tar.gz file.

    Note

    To run the install_receptor.yml playbook you must install the receptor collection from Ansible Galaxy: ansible-galaxy collection install -r requirements.yml

  9. Update the playbook with your user name and SSH private key file. Note that ansible_host pre-populates with the host name you entered earlier.

    all:
      hosts:
        remote-execution:
          ansible_host: example_host_name # Must match what is configured in AAP WebUI
          ansible_user: <username> # user provided
          ansible_ssh_private_key_file: ~/.ssh/id_example
  10. Open your terminal, and navigate to the directory where you saved the playbook.
  11. To install the bundle run:

    ansible-playbook install_receptor.yml -i inventory.yml
  12. After the installation completes, you can upgrade your execution node by downloading and re-running the playbook for the instance you created.

Verification

To verify the Receptor service status, run the following command:

sudo systemctl status receptor.service

Make sure the service is in the active (running) state.

To verify that your playbook runs correctly on your new node, run the following command:

watch podman ps

Additional resources

13.1. Resource Operator overview

The Resource Operator provides custom resources (CRs) that you can deploy after you have created your platform gateway deployment.

With Resource Operator you can define resources such as projects, job templates, and inventories in YAML files.

Automation controller then uses the YAML files to create these resources. You can create the YAML through the Form view, which prompts you for keys and values. Alternatively, to work with YAML directly, you can select the YAML view.

The Resource Operator provides the following CRs:

  • AnsibleJob
  • JobTemplate
  • Automation controller project
  • Automation controller schedule
  • Automation controller workflow
  • Automation controller workflow template
  • Automation controller inventory
  • Automation controller credential

For more information on any of the above custom resources, see Using automation execution.

13.2. Using Resource Operator

The Resource Operator itself does not do anything until the user creates an object. As soon as the user creates an AutomationControllerProject or AnsibleJob resource, the Resource Operator starts processing that object.

Prerequisites

  • Install the Kubernetes-based cluster of your choice.
  • Deploy automation controller using the automation-controller-operator.

Procedure

  1. After installing the automation-controller-resource-operator in your cluster, you must create a Kubernetes (k8s) secret with the connection information for your automation controller instance.
  2. Then you can use Resource Operator to create a k8s resource to manage your automation controller instance.

To connect Resource Operator with platform gateway you must create a Kubernetes secret with the connection information for your automation controller instance.

Use the following procedure to create an OAuth2 token for your user in the platform gateway UI.

Note

You can only create OAuth 2 Tokens for your own user through the API or UI, which means you can only configure or view tokens from your own user profile.

Procedure

  1. Log in to your Ansible Automation Platform instance.
  2. In the navigation panel, select Access Management → Users.
  3. Select the username you want to create a token for.
  4. Select Tokens → Automation Execution.
  5. Click Create Token.
  6. You can leave Applications empty. Add a description and select Read or Write for the Scope.

    Note

    Make sure you provide a valid user when creating tokens. Otherwise, you get an error message that you tried to issue the command without either specifying a user, or supplying a username that does not exist.

To make your connection information available to the Resource Operator, create a k8s secret with the token and host value.

Procedure

  1. The following is an example of the YAML for the connection secret. Save the following example to a file, for example, automation-controller-connection-secret.yml.

    apiVersion: v1
    kind: Secret
    metadata:
      name: controller-access
    type: Opaque
    stringData:
      token: <generated-token>
      host: https://my-controller-host.example.com/
  2. Edit the file with your host and token value.
  3. Apply it to your cluster by running the kubectl create command:
kubectl create -f automation-controller-connection-secret.yml

Use the Resource Operator to manage automation controller resources directly from your Kubernetes cluster. This section provides the procedures for creating custom resources like AnsibleJob, JobTemplate, AnsibleProject, and more.

13.5.1. Creating an AnsibleJob custom resource

An AnsibleJob custom resource launches a job in the automation controller instance specified in the Kubernetes secret (automation controller host URL, token). You can launch an automation job on automation controller by creating an AnsibleJob resource.

Procedure

  1. Specify the connection secret and job template you want to launch.

    apiVersion: tower.ansible.com/v1alpha1
    kind: AnsibleJob
    metadata:
      generateName: demo-job-1 # generate a unique suffix per 'kubectl create'
    spec:
      connection_secret: controller-access
      job_template_name: Demo Job Template
  2. Configure features such as inventory, extra variables, and time to live for the job.

    spec:
      connection_secret: controller-access
      job_template_name: Demo Job Template
      inventory: Demo Inventory                    # Inventory prompt on launch needs to be enabled
      runner_image: quay.io/ansible/controller-resource-runner
      runner_version: latest
      job_ttl: 100
      extra_vars:                                  # Extra variables prompt on launch needs to be enabled
         test_var: test
      job_tags: "provision,install,configuration"  # Specify tags to run
      skip_tags: "configuration,restart"           # Skip tasks with a given tag
    Note

    You must enable prompt on launch for inventories and extra variables if you are configuring them. To enable Prompt on launch in the automation controller UI: from the Resources → Templates page, select your template, and then select the Prompt on launch checkbox next to the Inventory and Variables sections.

  3. Launch a workflow job template with an AnsibleJob object by specifying the workflow_template_name instead of job_template_name:

    apiVersion: tower.ansible.com/v1alpha1
    kind: AnsibleJob
    metadata:
      generateName: demo-job-1 # generate a unique suffix per 'kubectl create'
    spec:
      connection_secret: controller-access
      workflow_template_name: Demo Workflow Template
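
Both variants can be expressed with a small helper. The following sketch is illustrative only (the build_ansible_job function is hypothetical, not part of the operator); it builds either manifest as a Python dictionary and enforces that exactly one template field is set:

```python
# Sketch: build an AnsibleJob manifest as a dict (hypothetical helper,
# not part of the Ansible Automation Platform Operator).
def build_ansible_job(connection_secret, job_template=None, workflow_template=None):
    """Return an AnsibleJob manifest suitable for 'oc create' / 'kubectl create'."""
    if bool(job_template) == bool(workflow_template):
        raise ValueError("set exactly one of job_template or workflow_template")
    spec = {"connection_secret": connection_secret}
    if job_template:
        spec["job_template_name"] = job_template
    else:
        spec["workflow_template_name"] = workflow_template
    return {
        "apiVersion": "tower.ansible.com/v1alpha1",
        "kind": "AnsibleJob",
        # generateName lets the API server append a unique suffix per create
        "metadata": {"generateName": "demo-job-"},
        "spec": spec,
    }

job = build_ansible_job("controller-access", job_template="Demo Job Template")
wf = build_ansible_job("controller-access", workflow_template="Demo Workflow Template")
```

Serializing the returned dictionary to YAML, or passing it to a Kubernetes client, yields a manifest equivalent to the examples above.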

13.5.2. Creating a JobTemplate custom resource

A job template is a definition and set of parameters for running an Ansible job. For more information, see the Job Templates section of the Using automation execution guide.

Procedure

  • Create a job template on automation controller by creating a JobTemplate custom resource:

    apiVersion: tower.ansible.com/v1alpha1
    kind: JobTemplate
    metadata:
      name: jobtemplate-4
    spec:
      connection_secret: controller-access
      job_template_name: ExampleJobTemplate4
      job_template_project: Demo Project
      job_template_playbook: hello_world.yml
      job_template_inventory: Demo Inventory

13.5.3. Creating an AnsibleProject custom resource

A Project is a logical collection of Ansible playbooks, represented in automation controller. For more information, see the Projects section of the Using automation execution guide.

Procedure

  • Create a project on automation controller by creating an automation controller project custom resource:

    apiVersion: tower.ansible.com/v1alpha1
    kind: AnsibleProject
    metadata:
      name: git
    spec:
      repo: https://github.com/ansible/ansible-tower-samples
      branch: main
      name: ProjectDemo-git
      scm_type: git
      organization: Default
      description: demoProject
      connection_secret: controller-access
      runner_pull_policy: IfNotPresent

13.5.4. Creating an AnsibleSchedule custom resource

Define an AnsibleSchedule custom resource to create a schedule on automation controller. Specify the required apiVersion, kind, and a unique metadata.name.

Procedure

  • Create a schedule on automation controller by creating an automation controller schedule custom resource:

    apiVersion: tower.ansible.com/v1alpha1
    kind: AnsibleSchedule
    metadata:
      name: schedule
    spec:
      connection_secret: controller-access
      runner_pull_policy: IfNotPresent
      name: "Demo Schedule"
      rrule: "DTSTART:20210101T000000Z RRULE:FREQ=DAILY;INTERVAL=1;COUNT=1"
      unified_job_template: "Demo Job Template"
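
The rrule value combines a DTSTART timestamp with an RRULE recurrence rule in iCalendar (RFC 5545) syntax. A minimal, stdlib-only sanity check (hypothetical; the operator and controller perform the authoritative validation) can catch malformed strings before you create the CR:

```python
import re

# Sketch: pre-validate the AnsibleSchedule 'rrule' string (hypothetical check;
# the controller performs the authoritative RFC 5545 validation).
RRULE_PATTERN = re.compile(
    r"^DTSTART:\d{8}T\d{6}Z\s+"   # UTC start timestamp, e.g. 20210101T000000Z
    r"RRULE:FREQ=(MINUTELY|HOURLY|DAILY|WEEKLY|MONTHLY|YEARLY)"  # frequency
    r"(;[A-Z]+=[^;]+)*$"          # optional parts such as INTERVAL=1;COUNT=1
)

def is_valid_rrule(rrule: str) -> bool:
    """Return True if the string looks like 'DTSTART:... RRULE:FREQ=...'."""
    return RRULE_PATTERN.match(rrule) is not None

ok = is_valid_rrule("DTSTART:20210101T000000Z RRULE:FREQ=DAILY;INTERVAL=1;COUNT=1")
```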

13.5.5. Creating an AnsibleWorkflow custom resource

Workflows enable you to configure a sequence of disparate job templates (or workflow templates) that may or may not share inventory, playbooks, or permissions. For more information, see the Workflows in automation controller section of the Using automation execution guide.

Procedure

  • Create a workflow on automation controller by creating a workflow custom resource:

    apiVersion: tower.ansible.com/v1alpha1
    kind: AnsibleWorkflow
    metadata:
      name: workflow
    spec:
      inventory: Demo Inventory
      workflow_template_name: Demo Job Template
      connection_secret: controller-access
      runner_pull_policy: IfNotPresent

13.5.6. Creating a WorkflowTemplate custom resource

A workflow job template links together a sequence of disparate resources to track the full set of jobs that were part of the release process as a single unit.

For more information, see the Workflow job templates section of the Using automation execution guide.

Procedure

  • Create a workflow template on automation controller by creating a workflow template custom resource:

    apiVersion: tower.ansible.com/v1alpha1
    kind: WorkflowTemplate
    metadata:
      name: workflowtemplate-sample
    spec:
      connection_secret: controller-access
      name: ExampleTowerWorkflow
      description: Example Workflow Template
      organization: Default
      inventory: Demo Inventory
      workflow_nodes:
      - identifier: node101
        unified_job_template:
          name: Demo Job Template
          inventory:
            organization:
              name: Default
          type: job_template
      - identifier: node102
        unified_job_template:
          name: Demo Job Template
          inventory:
            organization:
              name: Default
          type: job_template

13.5.7. Creating an Inventory custom resource

By using an inventory file, Ansible Automation Platform can manage a large number of hosts with a single command.

Inventories also help you use Ansible Automation Platform more efficiently by reducing the number of command-line options you have to specify. For more information, see the Inventories section of the Using automation execution guide.

Procedure

  • Create an inventory on automation controller by creating an inventory custom resource:

    metadata:
      name: inventory-new
    spec:
      connection_secret: controller-access
      description: my new inventory
      name: newinventory
      organization: Default
      state: present
      instance_groups:
        - default
      variables:
        string: "string_value"
        bool: true
        number: 1
        list:
          - item1: true
          - item2: "1"
        object:
          string: "string_value"
          number: 2
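
The variables block accepts arbitrary nested YAML, and automation controller stores it as structured data. The sketch below (illustrative only) shows the same mapping as native Python/JSON types, which is a quick way to confirm how YAML scalars such as true and 1 are typed:

```python
import json

# The 'variables' mapping from the inventory CR above, expressed as Python data.
# YAML scalars map onto JSON types: bool -> true/false, number -> int, etc.
variables = {
    "string": "string_value",
    "bool": True,
    "number": 1,
    "list": [{"item1": True}, {"item2": "1"}],
    "object": {"string": "string_value", "number": 2},
}

# Round-trip through JSON, the structured form in which the controller
# stores inventory variables, to confirm nothing changes type.
serialized = json.dumps(variables, sort_keys=True)
round_tripped = json.loads(serialized)
```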

13.5.8. Creating a Credential custom resource

Credentials authenticate the automation controller user when launching jobs against machines, synchronizing with inventory sources, and importing project content from a version control system.

SSH and AWS are the most commonly used credential types. For a full list of supported credentials, see the Credential types section of the Using automation execution guide.

For help with defining values, see the OpenAPI (Swagger) file for the Red Hat Ansible Automation Platform API KCS article.

Note

You can use https://<aap-instance>/api/controller/v2/credential_types/ to view the list of credential types on your instance. To get the full list, use the following curl command:

export AAP_TOKEN="your-oauth2-token"
export AAP_URL="https://your-aap-controller.example.com"

curl -s -H "Authorization: Bearer $AAP_TOKEN" "$AAP_URL/api/controller/v2/credential_types/" | jq -r '.results[].name'
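
If jq is not available, the same extraction can be done with a few lines of Python. The helper below is hypothetical; the results/name fields match the controller API's paginated response format, and the sample payload is illustrative:

```python
import json

def credential_type_names(payload: str) -> list[str]:
    """Extract credential type names from a /api/controller/v2/credential_types/ response body."""
    return [item["name"] for item in json.loads(payload)["results"]]

# Illustrative response body, trimmed to the fields used here.
sample = json.dumps({
    "count": 2,
    "results": [{"name": "Machine"}, {"name": "Amazon Web Services"}],
})
names = credential_type_names(sample)
```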

Procedure

  • Create an AWS or SSH credential on automation controller by creating a credential custom resource:

    • SSH credential:

      apiVersion: tower.ansible.com/v1alpha1
      kind: AnsibleCredential
      metadata:
        name: ssh-cred
      spec:
        name: ssh-cred
        organization: Default
        connection_secret: controller-access
        description: "SSH credential"
        type: "Machine"
        ssh_username: "cat"
        ssh_secret: my-ssh-secret
        runner_pull_policy: IfNotPresent
    • AWS credential:

      apiVersion: tower.ansible.com/v1alpha1
      kind: AnsibleCredential
      metadata:
        name: aws-cred
      spec:
        name: aws-access
        organization: Default
        connection_secret: controller-access
        description: "This is a test credential"
        type: "Amazon Web Services"
        username_secret: aws-secret
        password_secret: aws-secret
        runner_pull_policy: IfNotPresent

This guide provides a collection of commands and tips to help you diagnose and resolve common issues with your Ansible Automation Platform deployment on OpenShift Container Platform. You will learn how to view logs, inspect resources, and collect diagnostic data for support.

When the operator deploys an Automation Controller instance, it runs an installer role inside the operator container. If the automation controller’s status is Failed, you must check the automation-controller-operator container logs. These logs provide the installer role’s output and are a critical first step in debugging deployment issues.

You can view events in the OpenShift Container Platform web console to monitor for errors and troubleshoot issues. This helps you quickly diagnose problems by examining the status of custom resources and their related events.

You can debug by first reviewing the status conditions of the Ansible Automation Platform custom resource (CR) and then checking any nested CRs for errors.

Procedure

  1. Log in to the OpenShift Container Platform web console.
  2. In the navigation menu, select Home → Events.
  3. Select your project from the project list.
  4. To view events for a specific resource, navigate to that resource’s page. Many resource pages, such as pods and deployments, have their own Events tab.
  5. Select a pod to open the Pod details page.

Verification

Check the Conditions section on the Pod details page to confirm no errors are listed in the Message column.

14.3. Viewing operator logs

The following procedure is an example of how to view the logs for an automation-controller-operator pod.

Procedure

  1. To find the pod name, run:

    oc get pods | grep operator
  2. To view the logs for the pod, run:

    oc logs <operator-pod-name> -f
    1. Alternatively, to view the logs without first getting the pod name, run:

      oc logs deployments/automation-controller-operator-controller-manager -c automation-controller-manager -f

14.4. Configuring log verbosity

You can enable task output for debugging on any custom resource (CR) by setting no_log to false in the component section of the AnsibleAutomationPlatform CR.

The logs then show output for any failed tasks that originally had no_log set to true. All Ansible Automation Platform components (automation controller, automation hub, and Event-Driven Ansible) support the no_log setting.

Procedure

  1. Edit the Ansible Automation Platform CR and set the no_log field to false for the component you want to debug.

    For automation controller:

    apiVersion: aap.ansible.com/v1alpha1
    kind: AnsibleAutomationPlatform
    metadata:
      name: myaap
    spec:
      controller:
        no_log: false

    For automation hub:

    apiVersion: aap.ansible.com/v1alpha1
    kind: AnsibleAutomationPlatform
    metadata:
      name: myaap
    spec:
      hub:
        no_log: false

    For Event-Driven Ansible:

    apiVersion: aap.ansible.com/v1alpha1
    kind: AnsibleAutomationPlatform
    metadata:
      name: myaap
    spec:
      eda:
        no_log: false
    Note

    This might expose sensitive data in the logs. On production clusters, leave no_log set to true unless you are actively debugging an issue.

  2. To increase the Ansible Playbook verbosity from the operator, set the verbosity level using an annotation on the Ansible Automation Platform CR:

    apiVersion: aap.ansible.com/v1alpha1
    kind: AnsibleAutomationPlatform
    metadata:
      name: myaap
      annotations:
        ansible.sdk.operatorframework.io/verbosity: "4"
    spec:
      # ... component configuration ...

14.5. Inspecting OpenShift Container Platform resources

To inspect an OpenShift Container Platform resource, use the oc command to get a summary or the full YAML definition of the resource.

Procedure

  1. To view a human-readable summary of a resource, run:

    oc describe -n <namespace> <resource> <resource-name>
  2. To view the complete YAML definition of a resource, use the -o yaml flag:

    oc get -n <namespace> <resource> <resource-name> -o yaml
    • For example, to get the YAML for the automationcontroller custom resource, run:

      oc get -n aap automationcontroller aap -o yaml

14.6. Core Ansible Automation Platform resources

The following table lists and describes the core custom resources (CRs) that the Ansible Automation Platform Operator manages. Understanding these resources will help you with advanced troubleshooting and configuration.

Resource name                      Description

ansibleautomationplatform          CR for deploying the entire Ansible Automation Platform.
ansibleautomationplatformbackup    CR for creating backups of the entire Ansible Automation Platform instance.
ansibleautomationplatformrestore   CR for restoring the entire Ansible Automation Platform instance from a backup.
automationcontroller               CR defining the desired state of an automation controller instance.
automationcontrollerbackup         CR for creating backups of automation controller data and configuration.
automationcontrollerrestore        CR for restoring the automation controller from a backup.
automationhub                      CR for deploying an automation hub (Galaxy) instance.
automationhubbackup                CR for creating backups of automation hub data and configuration.
automationhubrestore               CR for restoring automation hub from a backup.
eda                                CR for deploying an Event-Driven Ansible (EDA) instance.
edabackup                          CR for creating backups of EDA data and configuration.
edarestore                         CR for restoring EDA from a backup.
ansiblelightspeed                  CR for deploying a Red Hat Ansible Lightspeed instance.

14.7. Standard Kubernetes resources

Standard Kubernetes resources are a core part of the OpenShift Container Platform. The following table describes the standard resources you can inspect to troubleshoot the state and configuration of an application.

Resource name     Description

pod               Smallest deployable unit, containing one or more containers running the application workloads.
deployment        Manages pod configuration and scaling.
pvc               A PersistentVolumeClaim (PVC) is a request for storage resources, used for persistent data storage.
service           Exposes pods as network services with stable IP addresses and DNS names within the cluster.
ingress           Manages external HTTP and HTTPS access to services within the cluster.
route             An OpenShift-specific resource for exposing services externally (similar to an ingress).
secrets           Stores sensitive data such as passwords, tokens, and certificates.
serviceaccount    Provides an identity for processes running in pods, granting them access to other Kubernetes resources.

14.8. Finding configuration options with oc explain

The Ansible Automation Platform Operator manages multiple custom resources (CRs), each with its own configuration parameters. Use the oc explain command to discover the available configuration options for the AnsibleAutomationPlatform CR and its nested components.

Procedure

  1. To see all available configuration parameters for a top-level CR, run:

    oc explain ansibleautomationplatform.spec
  2. To view component-specific configuration options nested under the Ansible Automation Platform CR, query them through the component section:

    oc explain ansibleautomationplatform.spec.controller.postgres_configuration_secret
    oc explain ansibleautomationplatform.spec.controller.route_tls_termination_mechanism
    oc explain ansibleautomationplatform.spec.hub.storage_type
    oc explain ansibleautomationplatform.spec.eda.automation_server_url
  3. To explore all nested fields for a specific component, use the --recursive flag:

    oc explain ansibleautomationplatform.spec.controller --recursive
    oc explain ansibleautomationplatform.spec.hub --recursive
    oc explain ansibleautomationplatform.spec.eda --recursive
    Note

    You can also query individual component CRs directly if needed:

    oc explain automationcontroller.spec
    oc explain automationhub.spec
    oc explain eda.spec

    However, when configuring components through the Ansible Automation Platform CR (recommended approach), use the nested paths shown above.

14.9. Collecting diagnostic data

Use the oc adm must-gather command to collect comprehensive diagnostic data about your cluster and the Ansible Automation Platform components. This data is essential when contacting Red Hat Support.

Procedure

  1. To start the must-gather tool, run:

    oc adm must-gather --image=registry.redhat.io/ansible-automation-platform-26/aap-must-gather-rhel9
    Note

    For version 2.6, the base image name changes to registry.redhat.io/ansible-automation-platform-26/aap-must-gather-rhel9.

  2. To view the collected data, use the omc tool to query the must-gather tarball as if it were a live cluster:

    omc use <path-to-must-gather>
    omc get pods

14.10. Debugging crashing pods

If a pod is failing or crashing, use the oc debug command. This command creates a new pod with the same configuration and mounts as the pod you specified, allowing you to access it for debugging.

Procedure

  • To connect to the pod, run:

    oc debug <pod-name>

14.11. Operator service account error

Manually modifying the aap_operator_service_account user in the Ansible Automation Platform database or UI removes the required is_superuser flag. This action causes a critical failure in the platform gateway operator’s reconciliation loop.

You see the following error:

TASK [ansibleautomationplatform : Create operator service account user] … CommandError: Error: That username is already taken

The Ansible Automation Platform operator automatically recreates the service account when the account is missing. To restore the required superuser privileges, you must remove the existing, incorrectly configured user.

After you delete the user, the platform gateway operator automatically runs its idempotency logic, recreates the account, and ensures it has the necessary is_superuser=True flag, restoring the reconciliation loop’s functionality.

This appendix provides a reference for the Ansible Automation Platform custom resources for various deployment scenarios.

Tip

You can link in existing components by specifying the component name under the name variable. You can also use name to create a custom name for a new component.

15.1. Custom resources

---
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: myaap
spec:
  # Development purposes only
  no_log: false

  controller:
    name: existing-controller
    disabled: false

  eda:
    disabled: false

  hub:
    name: existing-hub
    disabled: false

15.1.2. aap-all-defaults.yml

apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: myaap
spec:
  # Development purposes only
  no_log: false

  # Platform
  ## uncomment to test bundle certs
  # bundle_cacert_secret: gateway-custom-certs

  # Components

  hub:
    disabled: false
    ## uncomment if using file storage for Content pod
    storage_type: file
    file_storage_storage_class: nfs-local-rwx
    file_storage_size: 10Gi

    ## uncomment if using S3 storage for Content pod
    # storage_type: S3
    # object_storage_s3_secret: example-galaxy-object-storage

    ## uncomment if using Azure storage for Content pod
    # storage_type: azure
    # object_storage_azure_secret: azure-secret-name

  # lightspeed:
  #   disabled: true

# End state:
# * Automation controller deployed and named: myaap-controller
# * * Event-Driven Ansible deployed and named: myaap-eda
# * * Automation hub deployed and named: myaap-hub

15.1.3. aap-existing-controller-only.yml

---
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: myaap
spec:
  # Development purposes only
  no_log: false

  controller:
    name: existing-controller

  eda:
    disabled: true

  hub:
    disabled: true
    ## uncomment if using file storage for Content pod
    # storage_type: file
    # file_storage_storage_class: nfs-local-rwx
    # file_storage_size: 10Gi

    ## uncomment if using S3 storage for Content pod
    # storage_type: S3
    # object_storage_s3_secret: example-galaxy-object-storage

    ## uncomment if using Azure storage for Content pod
    # storage_type: azure
    # object_storage_azure_secret: azure-secret-name


# End state:
# * Automation controller: existing-controller registered with Ansible Automation Platform UI
# * * Event-Driven Ansible deployed and named: myaap-eda
# * * Automation hub deployed and named: myaap-hub

15.1.4. aap-existing-hub-and-controller.yml

---
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: myaap
spec:
  # Development purposes only
  no_log: false

  controller:
    name: existing-controller
    disabled: false

  eda:
    disabled: true

  hub:
    name: existing-hub
    disabled: false

# End state:
# * Automation controller: existing-controller registered with Ansible Automation Platform UI
# * * Event-Driven Ansible deployed and named: myaap-eda
# * * Automation hub: existing-hub registered with Ansible Automation Platform UI

15.1.5. aap-existing-hub-controller-eda.yml

---
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: myaap
spec:
  # Development purposes only
  no_log: false

  controller:
    name: existing-controller # <-- this is the name of the existing AutomationController CR
    disabled: false

  eda:
    name: existing-eda
    disabled: false

  hub:
    name: existing-hub
    disabled: false

# End state:
# * Controller: existing-controller registered with Ansible Automation Platform UI
# * * Event-Driven Ansible: existing-eda registered with Ansible Automation Platform UI
# * * Automation hub: existing-hub registered with Ansible Automation Platform UI
#
# Note: The automation controller, Event-Driven Ansible, and automation hub names must match the names of the existing
# automation controller, Event-Driven Ansible, and automation hub CRs in the same namespace as the Ansible Automation Platform CR. If the names do not match, the Ansible Automation Platform CR cannot register the existing automation controller, Event-Driven Ansible, and automation hub with the Ansible Automation Platform UI, and instead deploys new automation controller, Event-Driven Ansible, and automation hub instances.

15.1.7. aap-fresh-controller-eda.yml

---
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: myaap
spec:
  # Development purposes only
  no_log: false

  controller:
    disabled: false

  eda:
    disabled: false

  hub:
    disabled: true
    ## uncomment if using file storage for Content pod
    # storage_type: file
    # file_storage_storage_class: nfs-local-rwx
    # file_storage_size: 10Gi

    ## uncomment if using S3 storage for Content pod
    # storage_type: S3
    # object_storage_s3_secret: example-galaxy-object-storage

    ## uncomment if using Azure storage for Content pod
    # storage_type: azure
    # object_storage_azure_secret: azure-secret-name

# End state:
# * Automation controller deployed and named: myaap-controller
# * * Event-Driven Ansible deployed and named: myaap-eda
# * * Automation hub disabled
# * Red Hat Ansible Lightspeed disabled

15.1.8. aap-fresh-external-db.yml

---
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: myaap
spec:
  # Development purposes only
  no_log: false

  controller:
    disabled: false

  eda:
    disabled: false

  hub:
    disabled: false
    ## uncomment if using file storage for Content pod
    storage_type: file
    file_storage_storage_class: nfs-local-rwx
    file_storage_size: 10Gi

    ## uncomment if using S3 storage for Content pod
    # storage_type: S3
    # object_storage_s3_secret: example-galaxy-object-storage

    ## uncomment if using Azure storage for Content pod
    # storage_type: azure
    # object_storage_azure_secret: azure-secret-name


# End state:
# * Automation controller deployed and named: myaap-controller
# * * Event-Driven Ansible deployed and named: myaap-eda
# * * Automation hub deployed and named: myaap-hub
---
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: myaap
spec:
  database:
     database_secret: external-postgres-configuration-gateway
  controller:
     postgres_configuration_secret: external-postgres-configuration-controller
  hub:
     postgres_configuration_secret: external-postgres-configuration-hub
  eda:
     database:
       database_secret: external-postgres-configuration-eda
---
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: myaap
spec:
  database:
     database_secret: external-postgres-configuration-gateway
Note

The system uses the external database for platform gateway, while automation controller, automation hub, and Event-Driven Ansible continue to use the existing databases that were used in 2.4.
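
Each *-postgres-configuration secret referenced above follows the operator's PostgreSQL configuration secret format. The following sketch shows one such secret with placeholder values (key names follow the operator's external database documentation; verify them against your operator version):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: external-postgres-configuration-controller
  namespace: aap
stringData:
  host: external-postgres.example.com   # placeholder hostname
  port: "5432"
  database: controller
  username: controller_user
  password: <password>
  sslmode: prefer
  type: unmanaged                       # database is not managed by the operator
type: Opaque
```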

---
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: myaap
spec:
  database:
     database_secret: external-postgres-configuration-gateway
  controller:
     postgres_configuration_secret: external-postgres-configuration-controller
  hub:
     postgres_configuration_secret: external-postgres-configuration-hub
  eda:
     database:
       database_secret: external-postgres-configuration-eda
  lightspeed:
    disabled: false
    database:
      database_secret: <secret-name>-postgres-configuration
    auth_config_secret_name: 'auth-configuration-secret'
    model_config_secret_name: 'model-configuration-secret'
Note

You can follow the Red Hat Ansible Lightspeed with IBM watsonx Code Assistant User Guide for help with creating the model and auth secrets.

15.1.12. aap-fresh-install-with-settings.yml

---
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: myaap
spec:
  # Development purposes only
  no_log: false
  image_pull_policy: Always

  # Platform
  ## uncomment to test bundle certs
  # bundle_cacert_secret: gateway-custom-certs

  # Components
  controller:
    disabled: false
    image_pull_policy: Always

    extra_settings:
      - setting: MAX_PAGE_SIZE
        value: '501'

  eda:
    disabled: false
    image_pull_policy: Always

    extra_settings:
      - setting: EDA_MAX_PAGE_SIZE
        value: '501'

  hub:
    disabled: false
    image_pull_policy: Always

    ## uncomment if using file storage for Content pod
    storage_type: file
    file_storage_storage_class: rook-cephfs
    file_storage_size: 10Gi

    ## uncomment if using S3 storage for Content pod
    # storage_type: S3
    # object_storage_s3_secret: example-galaxy-object-storage

    ## uncomment if using Azure storage for Content pod
    # storage_type: azure
    # object_storage_azure_secret: azure-secret-name

    pulp_settings:
      MAX_PAGE_SIZE: 501
      cache_enabled: false

  # lightspeed:
  #   disabled: true

# End state:
# * Automation controller deployed and named: myaap-controller
# * * Event-Driven Ansible deployed and named: myaap-eda
# * * Automation hub deployed and named: myaap-hub

15.1.13. aap-fresh-install.yml

---
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: myaap
spec:
  # Development purposes only
  no_log: false

  # Redis Mode
  # redis_mode: cluster

  # Platform
  ## uncomment to test bundle certs
  # bundle_cacert_secret: gateway-custom-certs
  # extra_settings:
  #   - setting: MAX_PAGE_SIZE
  #     value: '501'

  # Components
  controller:
    disabled: false

  eda:
    disabled: false

  hub:
    disabled: false
    ## uncomment if using file storage for Content pod
    storage_type: file
    file_storage_storage_class: nfs-local-rwx
    file_storage_size: 10Gi

    ## uncomment if using S3 storage for Content pod
    # storage_type: S3
    # object_storage_s3_secret: example-galaxy-object-storage

    ## uncomment if using Azure storage for Content pod
    # storage_type: azure
    # object_storage_azure_secret: azure-secret-name

  # lightspeed:
  #   disabled: true

# End state:
# * Automation controller deployed and named: myaap-controller
# * * Event-Driven Ansible deployed and named: myaap-eda
# * * Automation hub deployed and named: myaap-hub

15.1.14. aap-fresh-only-controller.yml

---
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: myaap
spec:
  # Development purposes only
  no_log: false

  controller:
    disabled: false

  eda:
    disabled: true

  hub:
    disabled: true
    ## uncomment if using file storage for Content pod
    # storage_type: file
    # file_storage_storage_class: nfs-local-rwx
    # file_storage_size: 10Gi

    ## uncomment if using S3 storage for Content pod
    # storage_type: S3
    # object_storage_s3_secret: example-galaxy-object-storage

    ## uncomment if using Azure storage for Content pod
    # storage_type: azure
    # object_storage_azure_secret: azure-secret-name


# End state:
# * Automation controller deployed and named: myaap-controller
# * * Event-Driven Ansible disabled
# * * Automation hub disabled

15.1.15. aap-fresh-only-hub.yml

---
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: myaap
spec:
  # Development purposes only
  no_log: false

  controller:
    disabled: true

  eda:
    disabled: true

  hub:
    disabled: false
    ## uncomment if using file storage for Content pod
    storage_type: file
    file_storage_storage_class: nfs-local-rwx
    file_storage_size: 10Gi

    # # AaaS Hub Settings
    # pulp_settings:
    #   cache_enabled: false

    ## uncomment if using S3 storage for Content pod
    # storage_type: S3
    # object_storage_s3_secret: example-galaxy-object-storage

    ## uncomment if using Azure storage for Content pod
    # storage_type: azure
    # object_storage_azure_secret: azure-secret-name

  lightspeed:
    disabled: true

# End state:
# * Automation controller disabled
# * * Event-Driven Ansible disabled
# * * Automation hub deployed and named: myaap-hub
# * Red Hat Ansible Lightspeed disabled

15.1.16. aap-lightspeed-enabled.yml

---
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: myaap
spec:
  # Development purposes only
  no_log: false

  controller:
    disabled: false

  eda:
    disabled: false

  hub:
    disabled: false
    ## uncomment if using file storage for Content pod
    storage_type: file
    file_storage_storage_class: nfs-local-rwx
    file_storage_size: 10Gi

    ## uncomment if using S3 storage for Content pod
    # storage_type: S3
    # object_storage_s3_secret: example-galaxy-object-storage

    ## uncomment if using Azure storage for Content pod
    # storage_type: azure
    # object_storage_azure_secret: azure-secret-name

  lightspeed:
    disabled: false

# End state:
# * Automation controller deployed and named: myaap-controller
# * * Event-Driven Ansible deployed and named: myaap-eda
# * * Automation hub deployed and named: myaap-hub
# * Red Hat Ansible Lightspeed deployed and named: myaap-lightspeed

15.1.17. gateway-only.yml

---
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: myaap
spec:
  # Development purposes only
  no_log: false

  controller:
    disabled: true

  eda:
    disabled: true

  hub:
    disabled: true

  lightspeed:
    disabled: true

# End state:
# * Platform gateway deployed and named: myaap-gateway
#   * UI is reachable at: https://myaap-gateway-gateway.apps.ocp4.example.com
# * Automation controller is not deployed
# * * Event-Driven Ansible is not deployed
# * * Automation hub is not deployed
# * Red Hat Ansible Lightspeed is not deployed

Legal Notice

Copyright © Red Hat.
Except as otherwise noted below, the text of and illustrations in this documentation are licensed by Red Hat under the Creative Commons Attribution–Share Alike 3.0 Unported license. If you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, the Red Hat logo, JBoss, Hibernate, and RHCE are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS is a trademark or registered trademark of Hewlett Packard Enterprise Development LP or its subsidiaries in the United States and other countries.
The OpenStack® Word Mark and OpenStack logo are trademarks or registered trademarks of the Linux Foundation, used under license.
All other trademarks are the property of their respective owners.