Containerized installation
Install the containerized version of Ansible Automation Platform
Providing feedback on Red Hat documentation
If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request.
Disclaimer: Links contained in this information to external website(s) are provided for convenience only. Red Hat has not reviewed the links and is not responsible for the content or its availability. The inclusion of any link to an external website does not imply endorsement by Red Hat of the website or its entities, products, or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result due to your use of (or reliance on) the external site or content.
Chapter 1. Containerized Ansible Automation Platform installation
Containerized Ansible Automation Platform uses Podman to run the platform in containers on Red Hat Enterprise Linux host machines. With this installation method, you manage both the product and infrastructure lifecycle while taking advantage of containerized architecture.
Containerized Ansible Automation Platform runs as rootless containers for enhanced security by default. You can install and operate Ansible Automation Platform with a non-root user account. All runtime data, configuration files, and container storage are located under the installing user’s home directory.
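After installation, you can confirm the rootless footprint from the installing user's account. A hedged sketch (component container names vary by configuration):
$ whoami
$ podman ps --format '{{.Names}}'
$ ls ~/aap/
The podman ps output lists the Ansible Automation Platform service containers running under your user, and ~/aap/ holds their configuration and data.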
Chapter 2. Choosing an installation type
Containerized Ansible Automation Platform supports two installation types: online and disconnected. Review the requirements for each to decide which is appropriate for your environment.
2.1. Online installation
An online installation pulls container images directly from Red Hat registries during the installation process.
Requirements:
- An active internet connection on all Ansible Automation Platform nodes
- A Red Hat registry service account with credentials (registry_username and registry_password)
- Network access to Red Hat registries (registry.redhat.io)
For online installation instructions, see Preparing the containerized Ansible Automation Platform installation.
2.2. Disconnected (bundled) installation
A disconnected installation uses a pre-packaged bundle that includes all container images and dependencies. This installation type is designed for air-gapped or restricted network environments.
Requirements:
- Local RPM repository configured with required dependencies
- No internet connection required during installation
- Red Hat registry credentials are not required
For disconnected installation instructions, see Disconnected installation.
Chapter 3. Managing Ansible Automation Platform subscriptions, updates, and support
Ansible is an open source software project and is licensed under the GNU General Public License version 3, as described in the Ansible Source Code.
You must have valid subscriptions attached before installing Ansible Automation Platform.
3.1. Trial and evaluation
You need a subscription to run Ansible Automation Platform. You can start by signing up for a free trial subscription.
- Trial subscriptions for Ansible Automation Platform are available at the Red Hat product trial center.
- Support is not included in a trial subscription or during an evaluation of the Ansible Automation Platform.
3.2. Node counting in subscriptions
The Ansible Automation Platform subscription defines the number of Managed Nodes that can be managed as part of your subscription.
For more information about managed node requirements for subscriptions, see How are "managed nodes" defined as part of the Red Hat Ansible Automation Platform offering.
Ansible does not recycle node counts or reset automated hosts.
3.3. Subscription types
Red Hat Ansible Automation Platform is offered as an annual subscription at various levels of support and managed-node counts.
Standard:
- Manage any size environment
- Enterprise 8x5 support and SLA
- Maintenance and upgrades included
- Review the SLA at Product Support Terms of Service
- Review the Red Hat Support Severity Level Definitions
Premium:
- Manage any size environment, including mission-critical environments
- Premium 24x7 support and SLA
- Maintenance and upgrades included
- Review the SLA at Product Support Terms of Service
- Review the Red Hat Support Severity Level Definitions
All subscription levels include regular updates and releases of automation controller, Ansible, and any other components of the Ansible Automation Platform.
For more information, contact Ansible through the Red Hat Customer Portal or at the Ansible site.
3.4. Attaching your Red Hat Ansible Automation Platform subscription
You must have valid subscriptions on all nodes before installing Red Hat Ansible Automation Platform.
Simple Content Access (SCA) is now the default subscription method for all Red Hat accounts. With SCA, you must register your systems to Red Hat Subscription Management (RHSM) or Satellite to access content. Traditional pool-based subscription attachment commands (such as subscription-manager attach --pool or subscription-manager attach --auto) are no longer required. For more information, see Simple Content Access.
Procedure
Register your system with Red Hat Subscription Management:
$ sudo subscription-manager register --username <username> --password <password>
With Simple Content Access (SCA), registration is the only step required to access Ansible Automation Platform content.
Note: For accounts still using legacy subscription pools, you might have to manually attach subscriptions using the commands shown in the Troubleshooting section.
Verification
Refresh the subscription information on your system:
$ sudo subscription-manager refresh
Verify your registration:
$ sudo subscription-manager identity
This command displays your system identity, name, organization name, and organization ID, confirming successful registration.
Troubleshooting
For legacy accounts not using SCA, you might have to manually attach subscriptions:
$ sudo subscription-manager list --available --all | grep -A 30 "Ansible Automation Platform"
This command displays the subscription details, including the Pool ID. Look for the Pool ID: line in the output.
Once you have identified the correct Pool ID, attach the subscription:
$ sudo subscription-manager attach --pool=<pool_id>
Note: Do not use MCT4022 as a pool_id as it can cause subscription attachment to fail.
3.5. Obtaining a manifest file
You can obtain a subscription manifest in the Subscription Allocations section of Red Hat Subscription Management.
After you obtain a subscription allocation, you can download its manifest file and upload it to activate Ansible Automation Platform.
To begin, log in to the Red Hat Customer Portal by using your administrator user account and follow the procedures listed.
3.5.1. Creating a subscription allocation
With a new subscription allocation you can set aside subscriptions and entitlements for a system that is currently offline or air-gapped. This is necessary before you can download its manifest and upload it to Ansible Automation Platform.
Procedure
- From the Subscription Allocations page, click New Subscription Allocation.
- Enter a name for the allocation so that you can find it later.
- Select Type: Satellite 6.16 as the management application.
- Click Create.
3.5.2. Adding subscriptions to a subscription allocation
After you create an allocation, you can add the subscriptions you need for Ansible Automation Platform to run properly. This step is necessary before you can download the manifest and add it to Ansible Automation Platform.
Procedure
- From the Subscription Allocations page, click the name of the Subscription Allocation to which you want to add a subscription.
- Click the Subscriptions tab.
- Click Add Subscriptions.
- Enter the number of Ansible Automation Platform Entitlements you plan to add.
- Click Submit.
3.5.3. Downloading a manifest file
After you create an allocation with the appropriate subscriptions on it, you can download the manifest file from Red Hat Subscription Management.
Procedure
- From the Subscription Allocations page, click the name of the Subscription Allocation for which you want to generate a manifest.
- Click the Subscriptions tab.
- Click Export Manifest to download the manifest file.
This downloads a file named manifest_<allocation name>_<date>.zip to your default downloads folder.
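Optionally, confirm the downloaded archive before uploading it. A quick sketch, assuming the unzip utility is installed:
$ unzip -l manifest_<allocation name>_<date>.zip
The listing shows the files that make up the manifest.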
3.6. Activating Red Hat Ansible Automation Platform
Red Hat Ansible Automation Platform uses available subscriptions or a subscription manifest to allow the use of Ansible Automation Platform.
To obtain a subscription, you can do either of the following:
- Use your Red Hat username and password, service account credentials, or Satellite credentials when you launch Ansible Automation Platform.
- Upload a subscription manifest file either by using the Red Hat Ansible Automation Platform interface or manually in an Ansible Playbook.
3.6.1. Activate with credentials
When Ansible Automation Platform launches for the first time, the Ansible Automation Platform subscription wizard automatically displays. If you are an organization administrator, you can create a Red Hat service account and use the client ID and client secret to retrieve and import your subscription directly into Ansible Automation Platform.
If you do not have administrative access, you can enter your Red Hat username and password in the Username and password tab to locate and add your subscription to your Ansible Automation Platform instance.
You are opted in for Automation Analytics by default when you activate the platform on first login. This helps Red Hat improve the product by delivering you a much better user experience. You can opt out after activating Ansible Automation Platform by taking the following steps:
- From the navigation panel, select Settings → System.
- Clear the Gather data for Automation Analytics option.
- Click Save.
Procedure
- Log in to Red Hat Ansible Automation Platform.
- Select the Service Account tab in the subscription wizard.
- Enter your Client ID and Client secret.
Select your subscription from the Subscription list.
Note: You can also enter your Satellite username and password in the Satellite tab if your cluster nodes are registered to Satellite through Subscription Manager.
- Review the End User License Agreement and select I agree to the End User License Agreement.
- Click Finish.
Verification
After your subscription has been accepted, subscription details are displayed. A status of Compliant indicates your subscription is in compliance with the number of hosts you have automated within your subscription count. Otherwise, your status shows as Out of Compliance, indicating you have exceeded the number of hosts in your subscription. Other important information displayed includes the following:
- Hosts automated: Host count automated by the job, which consumes the license count.
- Hosts imported: Host count considering all inventory sources (does not impact hosts remaining).
- Hosts remaining: Total host count minus hosts automated.
3.6.2. Activate with a manifest file
If you have a subscription manifest, you can upload the manifest file by using the Red Hat Ansible Automation Platform interface.
You are opted in for Automation Analytics by default when you activate the platform on first login. This helps Red Hat improve the product by delivering you a much better user experience. You can opt out after activating Ansible Automation Platform by taking the following steps:
- From the navigation panel, select Settings → System.
- Clear the Gather data for Automation Analytics option.
- Click Save.
Prerequisites
You must have a Red Hat subscription manifest file exported from the Red Hat Customer Portal. For more information, see Obtaining a manifest file.
Procedure
Log in to Red Hat Ansible Automation Platform.
- If you are not immediately taken to the subscription wizard, go to Settings → Subscription.
- Select the Subscription manifest tab.
- Click Browse and select your manifest file.
- Review the End User License Agreement and select I agree to the End User License Agreement.
Click Next.
Note: If the Next button is disabled on the subscription wizard page, clear the USERNAME and PASSWORD fields.
Verification
After your subscription has been accepted, subscription details are displayed. A status of Compliant indicates your subscription is in compliance with the number of hosts you have automated within your subscription count. Otherwise, your status shows as Out of Compliance, indicating you have exceeded the number of hosts in your subscription. Other important information displayed includes the following:
- Hosts automated: Host count automated by the job, which consumes the subscription count.
- Hosts imported: Host count considering all inventory sources (does not impact hosts remaining).
- Hosts remaining: Total host count minus hosts automated.
To activate Ansible Automation Platform using credentials, see Activate with credentials.
To activate Ansible Automation Platform with a manifest file, see Activate with a manifest file.
Chapter 4. Preparing the containerized Ansible Automation Platform installation
Prepare your environment for containerized Ansible Automation Platform by understanding deployment topologies, verifying system requirements, configuring Red Hat Enterprise Linux hosts, and setting up inventory files.
4.1. Tested deployment models
Red Hat tests Ansible Automation Platform 2.5 with a defined set of topologies to give you opinionated deployment options. The supported topologies include infrastructure topology diagrams, tested system configurations, example inventory files, and network ports information.
For containerized Ansible Automation Platform, there are two infrastructure topology shapes:
- Growth - (All-in-one) Intended for organizations that are getting started with Ansible Automation Platform. This topology allows for smaller footprint deployments.
- Enterprise - Intended for organizations that require Ansible Automation Platform deployments to have redundancy or higher compute for large volumes of automation. This is a more future-proofed, scaled-out architecture.
For more information about the tested deployment topologies for containerized Ansible Automation Platform, see Container topologies in Tested deployment models.
4.2. System requirements
Use this information when planning your installation of containerized Ansible Automation Platform.
4.2.1. Prerequisites
- Configure a dedicated non-root user on the Red Hat Enterprise Linux host:
  - This user requires sudo or other Ansible supported privilege escalation (sudo is recommended) to perform administrative tasks during the installation.
  - This user is responsible for the installation of containerized Ansible Automation Platform.
  - This user is also the service account for the containers running Ansible Automation Platform.
- For managed nodes, configure a dedicated user on each node. Ansible Automation Platform connects as this user to run tasks on the node. For more information about configuring a dedicated user on each node, see Preparing the managed nodes for containerized installation.
- For remote host installations, configure SSH public key authentication for the non-root user. For guidelines on setting up SSH public key authentication for the non-root user, see How to configure SSH public key authentication for passwordless login.
- Ensure the Red Hat Enterprise Linux host has internet access if you are using the default online installation method.
- Open the appropriate network ports if you have a firewall in place. For more information about the ports to open, see Container topologies in Tested deployment models.
Containerized Ansible Automation Platform stores all runtime data, configuration files, container images, and Podman volumes under the installing user’s home directory. This includes $HOME/aap/ for component configuration and data, and $HOME/.local/share/containers/ for container images and volumes.
Podman does not support storing container images on an NFS share. To use an NFS share for the user home directory, set up the Podman storage backend path outside of the NFS share. For more information, see Rootless Podman and NFS.
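For example, a minimal sketch of relocating rootless Podman storage off an NFS-backed home directory by creating ~/.config/containers/storage.conf. The local path is an assumption; use any non-NFS location with sufficient space and adjust ownership accordingly:
[storage]
driver = "overlay"
# assumed local, non-NFS path
graphroot = "/var/lib/<username>/containers/storage"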
4.2.2. Ansible Automation Platform system requirements
Your system must meet the following minimum system requirements to install and run Red Hat Ansible Automation Platform.
| Type | Description | Notes |
|---|---|---|
| Subscription | Valid Red Hat Ansible Automation Platform subscription | |
| Operating system | Red Hat Enterprise Linux 9 or Red Hat Enterprise Linux 10 | |
| CPU architecture | x86_64, AArch64, s390x (IBM Z), ppc64le (IBM Power) | |
| Browser | A currently supported version of Mozilla Firefox or Google Chrome. | |
| Database | PostgreSQL 15 | External (customer supported) databases require International Components for Unicode (ICU) support. |
Each virtual machine (VM) has the following system requirements:
| Requirement | Minimum requirement |
|---|---|
| RAM | 16 GB |
| CPUs | 4 |
| Local disk | 60 GB |
| Disk IOPS | 3000 |
4.2.3. Database requirements
Ansible Automation Platform can work with two varieties of database:
- Database installed with Ansible Automation Platform - This database consists of a PostgreSQL installation done as part of an Ansible Automation Platform installation using PostgreSQL packages that Red Hat provides.
- Customer provided or configured database - This is an external database that the customer provides, whether on bare metal, virtual machine, container, or cloud hosted service.
Ansible Automation Platform requires a customer provided (external) database to have International Components for Unicode (ICU) support.
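One hedged way to check an external PostgreSQL 15 server for ICU support is to count ICU-provided collations; a non-zero result indicates the server was built with ICU (connection details are placeholders):
$ psql -h <hostname> -U <username> -d postgres -c "SELECT count(*) FROM pg_collation WHERE collprovider = 'i';"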
4.3. Preparing the Red Hat Enterprise Linux host for containerized installation
Containerized Ansible Automation Platform runs the component services as Podman based containers on top of a Red Hat Enterprise Linux host. Prepare the Red Hat Enterprise Linux host to ensure a successful installation.
Procedure
- Log in to the Red Hat Enterprise Linux host as your non-root user.
Ensure that the hostname of your host uses a fully qualified domain name (FQDN).
To check the hostname of your host, run the following command:
$ hostname -f
Example output:
aap.example.org
If the hostname is not an FQDN, you can set it with the following command:
$ sudo hostnamectl set-hostname <your_hostname>
Register your Red Hat Enterprise Linux host with subscription-manager:
$ sudo subscription-manager register
Verify that only the BaseOS and AppStream repositories are enabled on the host:
$ sudo dnf repolist
Example output for RHEL 9:
Updating Subscription Management repositories.
repo id                            repo name
rhel-9-for-x86_64-appstream-rpms   Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs)
rhel-9-for-x86_64-baseos-rpms      Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs)
Example output for RHEL 10:
Updating Subscription Management repositories.
repo id                             repo name
rhel-10-for-x86_64-appstream-rpms   Red Hat Enterprise Linux 10 for x86_64 - AppStream (RPMs)
rhel-10-for-x86_64-baseos-rpms      Red Hat Enterprise Linux 10 for x86_64 - BaseOS (RPMs)
For disconnected installations, follow the steps in Obtaining and configuring RPM source dependencies to access these repositories.
- Ensure the host can resolve host names and IP addresses using DNS. This is essential to ensure services can talk to one another.
Install ansible-core:
$ sudo dnf install -y ansible-core
Optional: Install additional utilities that are useful for troubleshooting purposes, for example wget, git-core, rsync, and vim:
$ sudo dnf install -y wget git-core rsync vim
Optional: To have the installation program automatically pick up and apply your Ansible Automation Platform subscription manifest license, follow the steps in Obtaining a manifest file.
4.4. Preparing the managed nodes for containerized installation
Managed nodes, also referred to as hosts, are the devices that Ansible Automation Platform manages. To ensure a consistent and secure setup of containerized Ansible Automation Platform, create a dedicated user on each managed node. Ansible Automation Platform connects as this user to run tasks on the node.
Procedure
- Log in to the host as the root user.
- Create a new user. Replace <username> with the username you want, for example aap:
$ sudo adduser <username>
- Set a password for the new user. Replace <username> with the username you created:
$ sudo passwd <username>
- Configure the user to run sudo commands. For a secure and maintainable installation, configure sudo privileges for the installation user in a dedicated file within the /etc/sudoers.d/ directory:
  - Create a dedicated sudoers file for the user:
$ sudo visudo -f /etc/sudoers.d/<username>
  - Add the following line to the file, replacing <username> with the username you created:
<username> ALL=(ALL) NOPASSWD: ALL
  - Save and exit the file.
4.5. Downloading Ansible Automation Platform
Choose the installation program you need based on your Red Hat Enterprise Linux environment's internet connectivity, and download the installation program to your Red Hat Enterprise Linux host.
Prerequisites
- You have logged in to the Red Hat Enterprise Linux host as your non-root user.
Procedure
Download the latest version of containerized Ansible Automation Platform from the Ansible Automation Platform download page.
- For online installations: Ansible Automation Platform 2.5 Containerized Setup
- For offline or bundled installations: Ansible Automation Platform 2.5 Containerized Setup Bundle
Copy the installation program .tar.gz file and the optional manifest .zip file onto your Red Hat Enterprise Linux host.
Use the scp command to securely copy the files. The basic syntax for scp is:
$ scp [options] <path_to_source_file> <path_to_destination>
For example, use the following scp command to copy the installation program .tar.gz file to an AWS EC2 instance with a private key (replace the placeholder <> values with your actual information):
$ scp -i <path_to_private_key> ansible-automation-platform-containerized-setup-<version_number>.tar.gz ec2-user@<remote_host_ip_or_hostname>:<path_to_destination>
Decide where you want the installation program to reside on the file system. This is your installation directory. The installation creates installation-related files under this location and requires at least 15 GB for the initial installation.
Unpack the installation program .tar.gz file into your installation directory, and go to the unpacked directory.
To unpack the online installer:
$ tar xfvz ansible-automation-platform-containerized-setup-<version_number>.tar.gz
To unpack the offline or bundled installer:
$ tar xfvz ansible-automation-platform-containerized-setup-bundle-<version_number>-<arch_name>.tar.gz
4.6. Configuring the inventory file
You can control the installation of Ansible Automation Platform with inventory files. Inventory files define the host details, certificate details, and component-specific settings needed to customize the installation.
Example inventory files are available in this document that you can copy and change to get started.
The inventory file requirements differ based on your installation type:
- Online installation: Requires the registry_username and registry_password variables to authenticate and pull container images from Red Hat registries during installation.
- Disconnected (bundled) installation: Does not require registry_username or registry_password because all container images are pre-packaged in the bundle. Instead, requires the bundle_install=true and bundle_dir variables (see the example that follows).
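For example, a disconnected installation inventory might carry the following under the [all:vars] group. This is a sketch; set bundle_dir to wherever you unpacked the setup bundle:
bundle_install=true
bundle_dir=<full_path_to_the_bundle_directory>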
The following inventory file examples are for online installations. For disconnected installation inventory requirements, see Performing a disconnected installation.
Additionally, growth topology and enterprise topology inventory files are available in the following locations:
- In the downloaded installation program package:
  - The default inventory file, named inventory, is for the enterprise topology pattern.
  - To deploy the growth topology (all-in-one) pattern, use the inventory-growth file instead.
- In Container topologies in Tested deployment models.
To use the example inventory files, replace the < > placeholders with your specific variables, and update the host names.
Refer to the README.md file in the installation directory or Inventory file variables for more information about optional and required variables.
4.6.1. Inventory file for online installation for containerized growth topology (all-in-one)
Use the example inventory file to perform an online installation for the containerized growth topology (all-in-one):
# This is the Ansible Automation Platform installer inventory file intended for the container growth deployment topology.
# This inventory file expects to be run from the host where Ansible Automation Platform will be installed.
# Consult the Ansible Automation Platform product documentation about this topology's tested hardware configuration.
# https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/tested_deployment_models/container-topologies
#
# Consult the docs if you are unsure what to add
# For all optional variables consult the included README.md
# or the Ansible Automation Platform documentation:
# https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation
# This section is for your platform gateway hosts
# -----------------------------------------------------
[automationgateway]
aap.example.org
# This section is for your automation controller hosts
# -----------------------------------------------------
[automationcontroller]
aap.example.org
# This section is for your automation hub hosts
# -----------------------------------------------------
[automationhub]
aap.example.org
# This section is for your Event-Driven Ansible controller hosts
# -----------------------------------------------------
[automationeda]
aap.example.org
# This section is for the Ansible Automation Platform database
# -----------------------------------------------------
[database]
aap.example.org
[all:vars]
# Ansible
ansible_connection=local
# Common variables
# https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#general-variables
# -----------------------------------------------------
postgresql_admin_username=postgres
postgresql_admin_password=<set your own>
registry_username=<your RHN username>
registry_password=<your RHN password>
redis_mode=standalone
# Platform gateway
# https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#platform-gateway-variables
# -----------------------------------------------------
gateway_admin_password=<set your own>
gateway_pg_host=aap.example.org
gateway_pg_password=<set your own>
# Automation controller
# https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#controller-variables
# -----------------------------------------------------
controller_admin_password=<set your own>
controller_pg_host=aap.example.org
controller_pg_password=<set your own>
controller_percent_memory_capacity=0.5
# Automation hub
# https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#hub-variables
# -----------------------------------------------------
hub_admin_password=<set your own>
hub_pg_host=aap.example.org
hub_pg_password=<set your own>
hub_seed_collections=false
# Event-Driven Ansible controller
# https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#event-driven-ansible-variables
# -----------------------------------------------------
eda_admin_password=<set your own>
eda_pg_host=aap.example.org
eda_pg_password=<set your own>
- ansible_connection=local: Used for all-in-one installations where the installation program is run on the same node that hosts Ansible Automation Platform. If the installation program is run from a separate node, do not include ansible_connection=local. In this case, use an SSH connection instead.
- [database]: This group in the inventory file defines the Ansible Automation Platform managed database.
4.6.2. Inventory file for online installation for containerized enterprise topology
Use the example inventory file to perform an online installation for the containerized enterprise topology:
# This is the Ansible Automation Platform enterprise installer inventory file
# Consult the docs if you are unsure what to add
# For all optional variables consult the included README.md
# or the Red Hat documentation:
# https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation
# This section is for your platform gateway hosts
# -----------------------------------------------------
[automationgateway]
gateway1.example.org
gateway2.example.org
# This section is for your automation controller hosts
# -----------------------------------------------------
[automationcontroller]
controller1.example.org
controller2.example.org
# This section is for your Ansible Automation Platform execution hosts
# -----------------------------------------------------
[execution_nodes]
hop1.example.org receptor_type='hop'
exec1.example.org
exec2.example.org
# This section is for your automation hub hosts
# -----------------------------------------------------
[automationhub]
hub1.example.org
hub2.example.org
# This section is for your Event-Driven Ansible controller hosts
# -----------------------------------------------------
[automationeda]
eda1.example.org
eda2.example.org
[redis]
gateway1.example.org
gateway2.example.org
hub1.example.org
hub2.example.org
eda1.example.org
eda2.example.org
[all:vars]
# Common variables
# https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#general-variables
# -----------------------------------------------------
postgresql_admin_username=<set your own>
postgresql_admin_password=<set your own>
registry_username=<your RHN username>
registry_password=<your RHN password>
# Platform gateway
# https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#platform-gateway-variables
# -----------------------------------------------------
gateway_admin_password=<set your own>
gateway_pg_host=externaldb.example.org
gateway_pg_database=<set your own>
gateway_pg_username=<set your own>
gateway_pg_password=<set your own>
# Automation controller
# https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#controller-variables
# -----------------------------------------------------
controller_admin_password=<set your own>
controller_pg_host=externaldb.example.org
controller_pg_database=<set your own>
controller_pg_username=<set your own>
controller_pg_password=<set your own>
# Automation hub
# https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#hub-variables
# -----------------------------------------------------
hub_admin_password=<set your own>
hub_pg_host=externaldb.example.org
hub_pg_database=<set your own>
hub_pg_username=<set your own>
hub_pg_password=<set your own>
# Event-Driven Ansible controller
# https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#event-driven-ansible-variables
# -----------------------------------------------------
eda_admin_password=<set your own>
eda_pg_host=externaldb.example.org
eda_pg_database=<set your own>
eda_pg_username=<set your own>
eda_pg_password=<set your own>
4.7. Setting registry_username and registry_password
When using the registry_username and registry_password variables for an online non-bundled installation, you need to create a new registry service account.
Registry service accounts are named tokens that you can use in environments where you share credentials, such as deployment systems.
Procedure
- Go to https://access.redhat.com/terms-based-registry/accounts.
- On the Registry Service Accounts page, click New Service Account.
- Enter a name for the account using only the allowed characters.
- Optionally enter a description for the account.
- Click Create.
- Find the created account in the list by searching for your name in the search field.
- Click the name of the account that you created.
Alternatively, if you know the name of your token, you can go directly to the page by entering the URL:
https://access.redhat.com/terms-based-registry/token/<name-of-your-token>
A token page opens, displaying a generated username (different from the account name) and a token.
- If no token is displayed, click Regenerate Token. You can also click this to generate a new username and token.
- Copy the username (for example "1234567|testuser") and use it to set the variable registry_username.
- Copy the token and use it to set the variable registry_password.
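Optionally, verify the service account from your Red Hat Enterprise Linux host before running the installation program. A sketch, assuming Podman is already installed:
$ podman login registry.redhat.io --username '1234567|testuser'
Enter the token when prompted for a password. A Login Succeeded! message confirms that the credentials are valid.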
Chapter 5. Advanced containerized deployment
Configure external databases, custom TLS certificates, execution nodes, HAProxy load balancers, and hub storage for complex containerized Ansible Automation Platform deployments.
If you are not using these advanced configuration options, go to Installing containerized Ansible Automation Platform to continue with your installation.
5.1. Adding a safe plugin variable to Event-Driven Ansible controller
When using redhat.insights_eda or similar plugins to run rulebook activations in Event-Driven Ansible controller, you must add a safe plugin variable to a directory in Ansible Automation Platform. This ensures connection between Event-Driven Ansible controller and the source plugin, and displays port mappings correctly.
Procedure
Create a directory for the safe plugin variable:
$ mkdir -p ./group_vars/automationeda
Create a file within that directory for your new setting (for example, touch ./group_vars/automationeda/custom.yml).
Add the variable eda_safe_plugins with a list of plugins to enable. For example:
eda_safe_plugins: ['ansible.eda.webhook', 'ansible.eda.alertmanager']
5.2. Adding execution nodes
Containerized Ansible Automation Platform can deploy remote execution nodes.
You can define remote execution nodes in the [execution_nodes] group of your inventory file:
[execution_nodes]
<fqdn_of_your_execution_host>
By default, an execution node uses the following settings that you can update as needed:
receptor_port=27199
receptor_protocol=tcp
receptor_type=execution
- receptor_port: The port number that receptor listens on for incoming connections from other receptor nodes.
- receptor_type: The role of the node. Valid options include execution or hop.
- receptor_protocol: The protocol used for communication. Valid options include tcp or udp.
By default, execution nodes automatically peer with all automation controller nodes. To configure an execution node to peer with specific automation controller nodes instead, use the receptor_peers variable.
The value of receptor_peers must be a comma-separated list of host names. Do not use inventory group names.
Example:
[execution_nodes]
# Uses default peering (peers with all controller nodes)
exec1.example.com
# Only peers with specific controller nodes
exec2.example.com receptor_peers='["controller1.example.com","controller2.example.com"]'
# Hop node that peers with specific execution nodes
hop1.example.com receptor_type=hop receptor_peers='["exec1.example.com","exec2.example.com"]'
5.3. Configuring storage for automation hub
Configure storage backends for automation hub to store automation content by using Amazon S3, Azure Blob Storage, or Network File System (NFS).
5.3.1. Configuring Amazon S3 storage for automation hub
Amazon S3 storage is a type of object storage that is supported in containerized installations. When using an AWS S3 storage backend, set hub_storage_backend to s3. The AWS S3 bucket needs to exist before running the installation program.
Procedure
- Ensure your AWS S3 bucket exists before proceeding with the installation.
Add the following variables to your inventory file under the [all:vars] group to configure S3 storage:
[all:vars]
hub_storage_backend=s3
hub_s3_access_key=<access_key>
hub_s3_secret_key=<secret_key>
hub_s3_bucket_name=<bucket_name>
Optional: You can pass extra parameters to the AWS S3 storage backend by using the hub_s3_extra_settings variable. For example:
hub_s3_extra_settings={'AWS_S3_REGION_NAME': 'eu-south-1', 'AWS_S3_ENDPOINT_URL': 'https://endpoint'}
5.3.2. Configuring Azure Blob Storage for automation hub
Azure Blob storage is a type of object storage that is supported in containerized installations. When using an Azure blob storage backend, set hub_storage_backend to azure. The Azure container needs to exist before running the installation program.
Procedure
- Ensure your Azure container exists before proceeding with the installation.
Add the following variables to your inventory file under the [all:vars] group to configure Azure Blob storage:
[all:vars]
hub_storage_backend=azure
hub_azure_account_key=<account_key>
hub_azure_account_name=<account_name>
hub_azure_container=<container_name>
Optional: You can pass extra parameters to the Azure Blob storage backend by using the hub_azure_extra_settings variable. For example:
hub_azure_extra_settings={'AZURE_LOCATION': 'foo', 'AZURE_SSL': True, 'AZURE_URL_EXPIRATION_SECS': 60}
5.3.3. Configuring Network File System (NFS) storage for automation hub
NFS is a type of shared storage that is supported in containerized installations. Shared storage is required when installing more than one instance of automation hub with a file storage backend. When installing a single instance of the automation hub, shared storage is optional.
Procedure
To configure shared storage for automation hub, set the hub_shared_data_path variable in your inventory file:
hub_shared_data_path=<path_to_nfs_share>
The value must match the format host:dir, for example nfs-server.example.com:/exports/hub.
Optional: To change the mount options for your NFS share, use the hub_shared_data_mount_opts variable. The default value is rw,sync,hard.
5.4. Configuring a HAProxy load balancer
To configure a HAProxy load balancer in front of platform gateway with a custom CA cert, set the following inventory file variables under the [all:vars] group:
custom_ca_cert=<path_to_cert_crt>
gateway_main_url=<https://load_balancer_url>
- Ensure your load balancer is configured to use HTTP/1.1 when communicating with platform gateway. HTTP/2 is not supported.
- HAProxy SSL passthrough mode is not supported with platform gateway.
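The following is a minimal sketch of an HAProxy configuration that respects both constraints; all host names, ports, and certificate paths are assumptions for illustration. The alpn http/1.1 setting forces HTTP/1.1, and the backend terminates and re-encrypts TLS rather than using SSL passthrough:
frontend aap_gateway_frontend
    # alpn http/1.1 forces HTTP/1.1 (HTTP/2 is not supported by platform gateway)
    bind *:443 ssl crt /etc/haproxy/certs/lb.example.com.pem alpn http/1.1
    mode http
    default_backend aap_gateway_nodes

backend aap_gateway_nodes
    mode http
    balance roundrobin
    # Terminate and re-encrypt TLS here; SSL passthrough is not supported
    server gateway1 gateway1.example.org:443 ssl verify required ca-file /etc/haproxy/certs/custom-ca.crt check
    server gateway2 gateway2.example.org:443 ssl verify required ca-file /etc/haproxy/certs/custom-ca.crt check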
5.5. Enabling automation content collection and container signing
Automation content signing is disabled by default. To enable it, the following installation variables are required in the inventory file:
# Collection signing
hub_collection_signing=true
hub_collection_signing_key=<full_path_to_collection_gpg_key>
# Container signing
hub_container_signing=true
hub_container_signing_key=<full_path_to_container_gpg_key>
The following variables are required if the keys are protected by a passphrase:
# Collection signing
hub_collection_signing_pass=<gpg_key_passphrase>
# Container signing
hub_container_signing_pass=<gpg_key_passphrase>
The hub_collection_signing_key and hub_container_signing_key variables require the keys to be set up before running an installation.
Automation content signing currently only supports GnuPG (GPG) based signature keys. For more information about GPG, see the GnuPG man page.
The algorithm and cipher used are the responsibility of the customer.
Procedure
On a RHEL 9 server, run the following command to create a new key pair for collection signing:
$ gpg --gen-key
Enter your information for "Real name" and "Email address".
Example output:
gpg --gen-key
gpg (GnuPG) 2.3.3; Copyright (C) 2021 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Note: Use "gpg --full-generate-key" for a full featured key generation dialog.

GnuPG needs to construct a user ID to identify your key.

Real name: Joe Bloggs
Email address: jbloggs@example.com
You selected this USER-ID:
    "Joe Bloggs <jbloggs@example.com>"

Change (N)ame, (E)mail, or (O)kay/(Q)uit? O
- If this fails, your environment does not have the necessary prerequisite packages installed for GPG. Install the necessary packages to proceed.
- A dialog box will appear and ask you for a passphrase. This is optional but recommended.
The keys are then generated, and produce output similar to the following:
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
gpg: key 022E4FBFB650F1C4 marked as ultimately trusted
gpg: revocation certificate stored as '/home/aapuser/.gnupg/openpgp-revocs.d/F001B037976969DD3E17A829022E4FBFB650F1C4.rev'
public and secret key created and signed.

pub   rsa3072 2024-10-25 [SC] [expires: 2026-10-25]
      F001B037976969DD3E17A829022E4FBFB650F1C4
uid                      Joe Bloggs <jbloggs@example.com>
sub   rsa3072 2024-10-25 [E] [expires: 2026-10-25]
- Note the expiry date that you can set based on company standards and needs.
You can view all of your GPG keys by running the following command:
$ gpg --list-secret-keys --keyid-format=long
To export the public key, run the following command:
$ gpg --export -a --output collection-signing-key.pub <email_address_used_to_generate_key>
To export the private key, run the following command:
$ gpg -a --export-secret-keys <email_address_used_to_generate_key> > collection-signing-key.priv
Enter the passphrase if prompted.
To view the private key file contents, run the following command:
$ cat collection-signing-key.priv
Example output:
-----BEGIN PGP PRIVATE KEY BLOCK-----

lQWFBGcbN14BDADTg5BsZGbSGMHypUJMuzmIffzzz4LULrZA8L/I616lzpBHJvEs
sSN6KuKY1TcIwIDCCa/U5Obm46kurpP2Y+vNA1YSEtMJoSeHeamWMDd99f49ItBp
<snippet>
j920hRy/3wJGRDBMFa4mlQg=
=uYEF
-----END PGP PRIVATE KEY BLOCK-----
Repeat steps 1 to 7 to create a key pair for container signing.
Add the following variables to the inventory file and run the installation to create the signing services:
# Collection signing
hub_collection_signing=true
hub_collection_signing_key=/home/aapuser/aap/ansible-automation-platform-containerized-setup-<version_number>/collection-signing-key.priv
# This variable is required if the key is protected by a passphrase
hub_collection_signing_pass=<password>

# Container signing
hub_container_signing=true
hub_container_signing_key=/home/aapuser/aap/ansible-automation-platform-containerized-setup-<version_number>/container-signing-key.priv
# This variable is required if the key is protected by a passphrase
hub_container_signing_pass=<password>
5.6. Configuring an external (customer provided) PostgreSQL database
Set up an external (customer provided) PostgreSQL database for containerized Ansible Automation Platform to use your own database infrastructure.
There are two possible scenarios for setting up an external database:
- An external database with PostgreSQL admin credentials
- An external database without PostgreSQL admin credentials
- When using an external database with Ansible Automation Platform, you must create and support that database. Ensure that you clear your external database when uninstalling Ansible Automation Platform.
- Red Hat Ansible Automation Platform requires the customer provided (external) database to have International Components for Unicode (ICU) support.
- During configuration of an external database, you must check the external database coverage. For more information, see Red Hat Ansible Automation Platform Database Scope of Coverage.
- The [database] group in your inventory file defines the Ansible Automation Platform managed database. When using an externally managed database, do not include the [database] group in your inventory file.
5.6.1. Setting up an external database with PostgreSQL admin credentials
If you have PostgreSQL admin credentials, you can supply them in the inventory file and the installation program creates the PostgreSQL users and databases for each component for you. The PostgreSQL admin account must have SUPERUSER privileges.
Procedure
To configure the PostgreSQL admin credentials, add the following variables to the inventory file under the [all:vars] group:
postgresql_admin_username=<set your own>
postgresql_admin_password=<set your own>
5.6.2. Setting up an external database without PostgreSQL admin credentials
If you do not have PostgreSQL admin credentials, then PostgreSQL users and databases need to be created for each component (platform gateway, automation controller, automation hub, and Event-Driven Ansible) before running the installation program.
Procedure
Connect to a PostgreSQL compliant database server with a user that has SUPERUSER privileges:
# psql -h <hostname> -U <username> -p <port_number>
For example:
# psql -h db.example.com -U superuser -p 5432
Create the user with a password and ensure the CREATEDB role is assigned to the user. For more information, see Database Roles.
CREATE USER <username> WITH PASSWORD '<password>' CREATEDB;
Create the database and add the user you created as the owner.
CREATE DATABASE <database_name> OWNER <username>;
When you have created the PostgreSQL users and databases for each component, you can supply them in the inventory file under the [all:vars] group:
# Platform gateway
gateway_pg_host=aap.example.org
gateway_pg_database=<set your own>
gateway_pg_username=<set your own>
gateway_pg_password=<set your own>

# Automation controller
controller_pg_host=aap.example.org
controller_pg_database=<set your own>
controller_pg_username=<set your own>
controller_pg_password=<set your own>

# Automation hub
hub_pg_host=aap.example.org
hub_pg_database=<set your own>
hub_pg_username=<set your own>
hub_pg_password=<set your own>

# Event-Driven Ansible
eda_pg_host=aap.example.org
eda_pg_database=<set your own>
eda_pg_username=<set your own>
eda_pg_password=<set your own>
5.6.3. Enabling the hstore extension for the automation hub PostgreSQL database
The database migration script uses hstore fields to store information, therefore the hstore extension must be enabled in the automation hub PostgreSQL database.
This process is automatic when using the Ansible Automation Platform installer and a managed PostgreSQL server.
If the PostgreSQL database is external, you must enable the hstore extension in the automation hub PostgreSQL database manually before installation.
If the hstore extension is not enabled before installation, a failure occurs during database migration.
Procedure
Check if the extension is available on the PostgreSQL server (automation hub database):
$ psql -d <automation hub database> -c "SELECT * FROM pg_available_extensions WHERE name='hstore'"
Where the default value for <automation hub database> is automationhub.
Example output with hstore available:
 name   | default_version | installed_version | comment
--------+-----------------+-------------------+---------------------------------------------------
 hstore | 1.7             |                   | data type for storing sets of (key, value) pairs
(1 row)
Example output with hstore not available:
 name | default_version | installed_version | comment
------+-----------------+-------------------+---------
(0 rows)
On a RHEL based server, the hstore extension is included in the postgresql-contrib RPM package, which is not installed automatically when installing the PostgreSQL server RPM package.
To install the RPM package, use the following command:
$ dnf install postgresql-contrib
Load the hstore PostgreSQL extension into the automation hub database with the following command:
$ psql -d <automation hub database> -c "CREATE EXTENSION hstore;"
In the following output, the installed_version field lists the hstore extension used, indicating that hstore is enabled:
 name   | default_version | installed_version | comment
--------+-----------------+-------------------+------------------------------------------------------
 hstore | 1.7             | 1.7               | data type for storing sets of (key, value) pairs
(1 row)
5.6.4. Optional: configuring mutual TLS (mTLS) authentication for an external database
mTLS authentication is disabled by default. To configure each component’s database with mTLS authentication, add the variables shown in the following procedure to your inventory file and ensure each component has a different TLS certificate and key.
Procedure
Add the following variables to your inventory file under the [all:vars] group:
# Platform gateway
gateway_pg_cert_auth=true
gateway_pg_tls_cert=/path/to/gateway.cert
gateway_pg_tls_key=/path/to/gateway.key
gateway_pg_sslmode=verify-full

# Automation controller
controller_pg_cert_auth=true
controller_pg_tls_cert=/path/to/awx.cert
controller_pg_tls_key=/path/to/awx.key
controller_pg_sslmode=verify-full

# Automation hub
hub_pg_cert_auth=true
hub_pg_tls_cert=/path/to/pulp.cert
hub_pg_tls_key=/path/to/pulp.key
hub_pg_sslmode=verify-full

# Event-Driven Ansible
eda_pg_cert_auth=true
eda_pg_tls_cert=/path/to/eda.cert
eda_pg_tls_key=/path/to/eda.key
eda_pg_sslmode=verify-full
5.7. Configuring custom TLS certificates
Red Hat Ansible Automation Platform uses X.509 certificate and key pairs to secure traffic. These certificates secure internal traffic between Ansible Automation Platform components and external traffic for public UI and API connections.
There are two primary ways to manage TLS certificates for your Ansible Automation Platform deployment:
- Ansible Automation Platform generated certificates (this is the default)
- User-provided certificates
5.7.1. Ansible Automation Platform generated certificates
By default, the installation program creates a self-signed Certificate Authority (CA) and uses it to generate self-signed TLS certificates for all Ansible Automation Platform services. The self-signed CA certificate and key are generated on one node under the ~/aap/tls/ directory and copied to the same location on all other nodes. This CA is valid for 10 years after the initial creation date.
Self-signed certificates are not part of any public chain of trust. The installation program creates a certificate truststore that includes the self-signed CA certificate under ~/aap/tls/extracted/ and bind-mounts that directory to each Ansible Automation Platform service container under /etc/pki/ca-trust/extracted/. This allows each Ansible Automation Platform component to validate the self-signed certificates of the other Ansible Automation Platform services. The CA certificate can also be added to the truststore of other systems or browsers as needed.
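To inspect the generated CA certificate's subject and expiry date, a hedged example; the exact file name under ~/aap/tls/ depends on your installation, so substitute the actual certificate file:
$ openssl x509 -in ~/aap/tls/<ca_certificate_file> -noout -subject -enddate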
5.7.2. User-provided certificates
To use your own TLS certificates and keys to replace some or all of the self-signed certificates generated during installation, you can set specific variables in your inventory file. A public or organizational CA must generate these certificates and keys in advance so that they are available during the installation process.
5.7.2.1. Using a custom CA to generate all TLS certificates
Use this method when you want Ansible Automation Platform to generate all of the certificates, but you want them signed by a custom CA rather than the default self-signed certificates.
When you use ca_tls_cert and ca_tls_key, the installation program automatically creates TLS certificates for each Ansible Automation Platform service using your provided CA certificate. You do not need to define individual service certificate variables (such as gateway_tls_cert, controller_tls_cert, or hub_tls_cert) because the installation program generates these certificates for you.
Procedure
To use a custom Certificate Authority (CA) to generate TLS certificates for all Ansible Automation Platform services, set the following variables in your inventory file:
ca_tls_cert=<path_to_ca_tls_certificate>
ca_tls_key=<path_to_ca_tls_key>
Where:
- ca_tls_cert is the path to your custom CA certificate file.
- ca_tls_key is the path to the key file for your custom CA certificate.
5.7.2.2. Providing custom TLS certificates for each service
Use this method if your organization manages TLS certificates outside of Ansible Automation Platform and requires manual provisioning.
Procedure
To manually provide TLS certificates for each individual service (for example, automation controller, automation hub, and Event-Driven Ansible), set the following variables in your inventory file:
# Platform gateway
gateway_tls_cert=<path_to_tls_certificate>
gateway_tls_key=<path_to_tls_key>
gateway_pg_tls_cert=<path_to_tls_certificate>
gateway_pg_tls_key=<path_to_tls_key>
gateway_redis_tls_cert=<path_to_tls_certificate>
gateway_redis_tls_key=<path_to_tls_key>

# Automation controller
controller_tls_cert=<path_to_tls_certificate>
controller_tls_key=<path_to_tls_key>
controller_pg_tls_cert=<path_to_tls_certificate>
controller_pg_tls_key=<path_to_tls_key>

# Automation hub
hub_tls_cert=<path_to_tls_certificate>
hub_tls_key=<path_to_tls_key>
hub_pg_tls_cert=<path_to_tls_certificate>
hub_pg_tls_key=<path_to_tls_key>

# Event-Driven Ansible
eda_tls_cert=<path_to_tls_certificate>
eda_tls_key=<path_to_tls_key>
eda_pg_tls_cert=<path_to_tls_certificate>
eda_pg_tls_key=<path_to_tls_key>
eda_redis_tls_cert=<path_to_tls_certificate>
eda_redis_tls_key=<path_to_tls_key>

# PostgreSQL
postgresql_tls_cert=<path_to_tls_certificate>
postgresql_tls_key=<path_to_tls_key>

# Receptor
receptor_tls_cert=<path_to_tls_certificate>
receptor_tls_key=<path_to_tls_key>

# Redis
redis_tls_cert=<path_to_tls_certificate>
redis_tls_key=<path_to_tls_key>
If all components share the same fully qualified domain name (FQDN), use the same certificate and key for each service:
gateway_tls_cert=/home/user/certs/myhost.example.com.crt
gateway_tls_key=/home/user/certs/myhost.example.com.key
controller_tls_cert=/home/user/certs/myhost.example.com.crt
controller_tls_key=/home/user/certs/myhost.example.com.key
hub_tls_cert=/home/user/certs/myhost.example.com.crt
hub_tls_key=/home/user/certs/myhost.example.com.key
eda_tls_cert=/home/user/certs/myhost.example.com.crt
eda_tls_key=/home/user/certs/myhost.example.com.key
postgresql_tls_cert=/home/user/certs/myhost.example.com.crt
postgresql_tls_key=/home/user/certs/myhost.example.com.key
If components are deployed on separate hosts with different FQDNs, provide a unique certificate for each service:
gateway_tls_cert=/home/user/certs/gateway.example.com.crt
gateway_tls_key=/home/user/certs/gateway.example.com.key
controller_tls_cert=/home/user/certs/controller.example.com.crt
controller_tls_key=/home/user/certs/controller.example.com.key
hub_tls_cert=/home/user/certs/hub.example.com.crt
hub_tls_key=/home/user/certs/hub.example.com.key
eda_tls_cert=/home/user/certs/eda.example.com.crt
eda_tls_key=/home/user/certs/eda.example.com.key
postgresql_tls_cert=/home/user/certs/postgresql.example.com.crt
postgresql_tls_key=/home/user/certs/postgresql.example.com.key
5.7.2.3. Considerations for certificates provided per service
When providing custom TLS certificates for each individual service, consider the following:
- Each service has its own _tls_cert and _tls_key variables. You can provide unique certificates for each service, or use the same certificate across multiple services if they share a fully qualified domain name (FQDN). If you do not define a certificate for a service, the installation program generates a self-signed certificate for that service.
- For services deployed across many nodes (for example, when following the enterprise topology), the provided certificate for that service must include the FQDN of all associated nodes in its Subject Alternative Name (SAN) field.
- If an external-facing service (such as automation controller or platform gateway) is deployed behind a load balancer that performs SSL/TLS offloading, the service’s certificate must include the load balancer’s FQDN in its SAN field, in addition to the FQDNs of the individual service nodes.
5.7.2.4. Providing a custom CA certificate
When you manually provide TLS certificates for Ansible Automation Platform services (such as gateway_tls_cert, controller_tls_cert, or hub_tls_cert), those certificates might be signed by a custom CA.
Use the custom_ca_cert variable to add your CA certificate to the environment for proper authentication and trust of the manually provided certificates.
Procedure
If any of the TLS certificates you manually provided are signed by a custom CA, specify the CA certificate by using the following variable in your inventory file:
custom_ca_cert=<path_to_custom_ca_certificate>

If you have more than one CA certificate, combine them into a single file and reference the combined certificate with the custom_ca_cert variable.
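For example, a minimal sketch that combines an intermediate and a root CA certificate into a single file; the file names are illustrative:

$ cat intermediate-ca.crt root-ca.crt > combined-ca.crt

custom_ca_cert=/home/user/certs/combined-ca.crt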
5.7.3. Receptor certificate considerations
When you use a custom certificate for Receptor nodes, the certificate must include an otherName field in its Subject Alternative Name (SAN) with the value 1.3.6.1.4.1.2312.19.1. For more information, see Above the mesh TLS.
Receptor does not support the usage of wildcard certificates. Additionally, each Receptor certificate must have the host FQDN specified in its SAN for TLS hostname validation to be correctly performed.
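For reference, the following is a minimal OpenSSL sketch of a SAN line that satisfies these requirements; the host name is illustrative, not taken from the installation program:

subjectAltName = DNS:receptor-node1.example.com, otherName:1.3.6.1.4.1.2312.19.1;UTF8:receptor-node1.example.com

You can place this line in an extension configuration file and pass it to openssl x509 with the -extfile and -extensions options when signing the certificate request.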
5.7.4. Redis certificate considerations
When using custom TLS certificates for Redis-related services, consider the following for mutual TLS (mTLS) communication if specifying Extended Key Usage (EKU):
- The Redis server certificate (redis_tls_cert) should include the serverAuth (web server authentication) and clientAuth (client authentication) EKUs.
- The Redis client certificates (gateway_redis_tls_cert, eda_redis_tls_cert) should include the clientAuth (client authentication) EKU.
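To verify which EKUs a certificate carries before you use it, you can inspect it with OpenSSL; the certificate path is illustrative:

$ openssl x509 -in redis_tls_cert.pem -noout -ext extendedKeyUsage

On OpenSSL versions without the -ext option, use openssl x509 -in redis_tls_cert.pem -noout -text and look for the X509v3 Extended Key Usage section.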
5.7.5. Using custom Receptor signing keys
Receptor signing is enabled by default unless receptor_disable_signing=true is set, and an RSA key pair (public and private) is generated by the installation program. However, you can set custom RSA public and private keys by using the following variables:
receptor_signing_private_key=<full_path_to_private_key>
receptor_signing_public_key=<full_path_to_public_key>
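If you need to generate such an RSA key pair yourself, a minimal sketch using OpenSSL (the file paths are illustrative):

$ openssl genrsa -out /home/user/keys/receptor-signing.key 4096
$ openssl rsa -in /home/user/keys/receptor-signing.key -pubout -out /home/user/keys/receptor-signing.pub

You can then reference these paths in the receptor_signing_private_key and receptor_signing_public_key variables.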
Chapter 6. Installing containerized Ansible Automation Platform
Run the install playbook to install containerized Ansible Automation Platform after preparing the Red Hat Enterprise Linux host, downloading the installation program, and configuring the inventory file.
Prerequisites
- You have prepared the Red Hat Enterprise Linux host
- You have prepared the managed nodes
- You have downloaded Ansible Automation Platform
- You have configured the inventory file
- You are logged in to the Red Hat Enterprise Linux host as your non-root user
Procedure
- Go to the installation directory on your Red Hat Enterprise Linux host.
- Run the install playbook:

ansible-playbook -i <inventory_file_name> ansible.containerized_installer.install

For example:

ansible-playbook -i inventory ansible.containerized_installer.install

You can add additional parameters to the installation command as needed:

ansible-playbook -i <inventory_file_name> -e @<vault_file_name> --ask-vault-pass -K -v ansible.containerized_installer.install

For example:

ansible-playbook -i inventory -e @vault.yml --ask-vault-pass -K -v ansible.containerized_installer.install

- -i <inventory_file_name> - The inventory file to use for the installation.
- -e @<vault_file_name> --ask-vault-pass - (Optional) If you are using a vault to store sensitive variables, add this to the installation command.
- -K - (Optional) If your privilege escalation (becoming root) requires you to enter a password, add this to the installation command. You are then prompted for the BECOME password.
- -v - (Optional) You can use increasing verbosity, up to 4 (-vvvv), to see installation process details. This can significantly increase installation time, so use it only as needed or when requested by Red Hat support.
Verification

After the installation completes, verify that you can access Ansible Automation Platform, which is available by default at the following URL:

https://<gateway_node>:443

- Log in as the admin user with the credentials you created for gateway_admin_username and gateway_admin_password.

The default ports and protocols used for Ansible Automation Platform are 80 (HTTP) and 443 (HTTPS). You can customize the ports with the following variables:

envoy_http_port=80
envoy_https_port=443

If you want to disable HTTPS, set envoy_disable_https to true:

envoy_disable_https: true
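As a quick command-line check, you can confirm that the gateway responds over HTTPS. The host name is illustrative, and -k skips certificate verification for self-signed certificates:

$ curl -k -o /dev/null -w '%{http_code}\n' https://gateway.example.com:443/

A 200 or a 3xx status code indicates that the gateway is serving requests.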
Chapter 7. Maintaining containerized Ansible Automation Platform
Update, backup, restore, uninstall, or reinstall containerized Ansible Automation Platform deployments to support your automation infrastructure.
7.1. Updating containerized Ansible Automation Platform
Perform a patch update for a container-based installation of Ansible Automation Platform from 2.5 to 2.5.x.
Upgrades from 2.4 Containerized Ansible Automation Platform Tech Preview to 2.5 Containerized Ansible Automation Platform are not supported.
Prerequisites
- You have reviewed the release notes for the associated patch release. For more information, see Ansible Automation Platform Release notes.
- You have a backup of your Ansible Automation Platform deployment. For more information, see Backing up container-based Ansible Automation Platform.
Procedure
- Log in to the Red Hat Enterprise Linux host as your dedicated non-root user.
- Follow the steps in Downloading Ansible Automation Platform to download the latest version of containerized Ansible Automation Platform.
- Copy the downloaded installation program to your Red Hat Enterprise Linux Host.
- Edit the inventory file to match your required configuration. You can keep the same parameters from your existing Ansible Automation Platform deployment or you can change the parameters to match any modifications to your environment.
- Run the install playbook:

$ ansible-playbook -i inventory ansible.containerized_installer.install

- If your privilege escalation requires a password to be entered, append -K to the command. You are then prompted for the BECOME password.
- You can use increasing verbosity, up to 4 (-vvvv), to see the details of the installation process. This can significantly increase installation time, so use it only as needed or when requested by Red Hat support.
- The update begins.
7.2. Backing up containerized Ansible Automation Platform
Perform a backup of your container-based installation of Ansible Automation Platform.
- When backing up Ansible Automation Platform, use the installation program that matches your currently installed version of Ansible Automation Platform.
- Backup functionality only works with the PostgreSQL versions supported by your current Ansible Automation Platform version. For more information, see System requirements.
- Backup and restore for content stored in Azure Blob Storage or Amazon S3 must be handled through the vendor portals, because each vendor provides its own backup solution.
Prerequisites
- You have logged in to the Red Hat Enterprise Linux host as your dedicated non-root user.
Procedure
- Go to the Red Hat Ansible Automation Platform installation directory on your Red Hat Enterprise Linux host.
To control compression of the backup artifacts before they are sent to the host running the backup operation, you can use the following variables in your inventory file:
For control of compression for filesystem-related backup files:

# Global control of compression for filesystem backup files
use_archive_compression=true

# Component-level control of compression for filesystem backup files
#controller_use_archive_compression=true
#eda_use_archive_compression=true
#gateway_use_archive_compression=true
#hub_use_archive_compression=true
#pcp_use_archive_compression=true
#postgresql_use_archive_compression=true
#receptor_use_archive_compression=true
#redis_use_archive_compression=true

For control of compression for database-related backup files:

# Global control of compression for database backup files
use_db_compression=true

# Component-level control of compression for database backup files
#controller_use_db_compression=true
#eda_use_db_compression=true
#hub_use_db_compression=true
#gateway_use_db_compression=true
Run the backup playbook:

$ ansible-playbook -i <path_to_inventory> ansible.containerized_installer.backup

The backup process creates archives of the following data:
- PostgreSQL databases
- Configuration files
- Data files
Next steps
To customize the backup process, you can use the following variables in your inventory file:
- Change the backup destination directory from the default ./backups by using the backup_dir variable.
- Exclude paths that contain duplicated data, such as snapshot subdirectories, by using the hub_data_path_exclude variable.

For example, to exclude a .snapshots subdirectory from the backup, add the following to your inventory file:

hub_data_path_exclude=["*/.snapshots", "*/.snapshots/*"]

Alternatively, you can pass this variable at runtime by using the -e flag:

$ ansible-playbook -i inventory ansible.containerized_installer.backup -e hub_data_path_exclude="['*/.snapshots', '*/.snapshots/*']"

You can also define the exclusion patterns in a YAML extra variables file and pass it at runtime:

exclude_vars.yml

hub_data_path_exclude:
  - "*/.snapshots/*"
  - "*/.snapshots"

$ ansible-playbook -i inventory ansible.containerized_installer.backup -e @exclude_vars.yml
7.3. Restoring containerized Ansible Automation Platform
Restore your container-based installation of Ansible Automation Platform from a backup, or to a different environment.
When restoring Ansible Automation Platform, use the latest installation program available at the time of the restore. For example, if you are restoring a backup taken from version 2.5-1, use the latest 2.5-x installation program available at the time of the restore.
Restore functionality only works with the PostgreSQL versions supported by your current Ansible Automation Platform version. For more information, see System requirements.
Prerequisites
- You have logged in to the Red Hat Enterprise Linux host as your dedicated non-root user.
- You have a backup of your Ansible Automation Platform deployment. For more information, see Backing up container-based Ansible Automation Platform.
- If restoring to a different environment with the same hostnames, you have performed a fresh installation on the target environment with the same topology as the original (source) environment.
- You have ensured that the administrator credentials on the target environment match the administrator credentials from the source environment.
Procedure
- Go to the installation directory on your Red Hat Enterprise Linux host.
Perform the relevant restoration steps.

If you are restoring to the same environment with the same hostnames, run the restore playbook:

$ ansible-playbook -i <path_to_inventory> ansible.containerized_installer.restore

This restores the important data deployed by the containerized installer, such as:

- PostgreSQL databases
- Configuration files
- Data files

By default, the backup directory is set to ./backups. You can change this by using the backup_dir variable in your inventory file.
If you are restoring to a different environment with different hostnames, perform the following additional steps before running the restore playbook:

Important: Restoring to a different environment with different hostnames is not recommended and is intended only as a workaround.

- For each component, identify the backup file from the source environment that contains the PostgreSQL dump file. For example:

$ cd ansible-automation-platform-containerized-setup-<version_number>/backups
$ tar tvf gateway_env1-gateway-node1.tar.gz | grep db
-rw-r--r-- ansible/ansible 4850774 2025-06-30 11:05 aap/backups/awx.db

- Copy the backup files from the source environment to the target environment.
- Rename the backup files on the target environment to reflect the new node names. For example:

$ cd ansible-automation-platform-containerized-setup-<version_number>/backups
$ mv gateway_env1-gateway-node1.tar.gz gateway_env2-gateway-node1.tar.gz

- For enterprise topologies, ensure that the component backup file containing the component.db file is listed first in its group within the inventory file. For example:

$ cd ansible-automation-platform-containerized-setup-<version_number>
$ ls backups/gateway*
gateway_env2-gateway-node1.tar.gz gateway_env2-gateway-node2.tar.gz
$ tar tvf backups/gateway_env2-gateway-node1.tar.gz | grep db
-rw-r--r-- ansible/ansible 416687 2025-06-30 11:05 aap/backups/gateway.db
$ tar tvf backups/gateway_env2-gateway-node2.tar.gz | grep db
$ vi inventory
[automationgateway]
env2-gateway-node1
env2-gateway-node2
7.4. Uninstalling containerized Ansible Automation Platform
Uninstall your container-based installation of Ansible Automation Platform.
Prerequisites
- You have logged in to the Red Hat Enterprise Linux host as your dedicated non-root user.
Procedure
If you intend to reinstall Ansible Automation Platform and want to use the preserved databases, you must collect the existing secret keys:

First, list the available secrets:

$ podman secret list

Next, collect the secret keys by running the following command:

$ podman secret inspect --showsecret <secret_key_variable> | jq -r .[].SecretData

For example:

$ podman secret inspect --showsecret controller_secret_key | jq -r .[].SecretData
Run the uninstall playbook:

$ ansible-playbook -i inventory ansible.containerized_installer.uninstall

This stops all systemd units and containers and then deletes all resources used by the containerized installer, such as:
- configuration and data directories and files
- systemd unit files
- Podman containers and images
- RPM packages
To keep container images, set the container_keep_images parameter to true:

$ ansible-playbook -i inventory ansible.containerized_installer.uninstall -e container_keep_images=true

To keep PostgreSQL databases, set the postgresql_keep_databases parameter to true:

$ ansible-playbook -i inventory ansible.containerized_installer.uninstall -e postgresql_keep_databases=true
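You can also combine both parameters in a single run:

$ ansible-playbook -i inventory ansible.containerized_installer.uninstall -e container_keep_images=true -e postgresql_keep_databases=true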
7.5. Reinstalling containerized Ansible Automation Platform
To reinstall a containerized deployment after uninstalling and preserving the database, follow the steps in Installing containerized Ansible Automation Platform and include the existing secret key value in the playbook command:
$ ansible-playbook -i inventory ansible.containerized_installer.install -e controller_secret_key=<secret_key_value>
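For example, a sketch that captures the preserved key with the podman secret command shown in the uninstall procedure and passes it to the installer; run the first command before uninstalling, while the secret still exists:

$ SECRET=$(podman secret inspect --showsecret controller_secret_key | jq -r .[].SecretData)
$ ansible-playbook -i inventory ansible.containerized_installer.install -e controller_secret_key=$SECRET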
Chapter 8. Disconnected installation
You can install containerized Ansible Automation Platform in an environment that does not have an active internet connection. To do this, you must obtain and configure the RPM source dependencies before performing the disconnected installation.
8.1. Obtaining and configuring RPM source dependencies
The Ansible Automation Platform containerized setup bundle installation program does not include RPM source dependencies from the BaseOS and AppStream repositories. It relies on the host system’s package manager to resolve these dependencies.
To access these dependencies in a disconnected environment, you can use one of the following methods:
- Use Red Hat Satellite to synchronize repositories in your disconnected environment.
- Use a local repository that you create with the reposync command on a Red Hat Enterprise Linux host that has an active internet connection.
- Use a local repository that you create from a mounted Red Hat Enterprise Linux Binary DVD ISO image.
8.1.1. Configuring a local repository using reposync
With the reposync command, you can synchronize the BaseOS and AppStream repositories to a local directory on a Red Hat Enterprise Linux host that has an active internet connection. You can then transfer the repositories to your disconnected environment.
Prerequisites
- A Red Hat Enterprise Linux host with an active internet connection.
Procedure
- Enable the BaseOS and AppStream repositories by using subscription-manager, replacing <RHEL_VERSION> with your RHEL version number:

$ sudo subscription-manager repos \
  --enable rhel-<RHEL_VERSION>-baseos-rhui-rpms \
  --enable rhel-<RHEL_VERSION>-appstream-rhui-rpms

- Install the yum-utils package:

$ sudo dnf install yum-utils

- Synchronize the repositories with the reposync command, replacing <path_to_download> with a suitable value:

$ sudo reposync -m --download-metadata --gpgcheck \
  -p <path_to_download>

For example:

$ sudo reposync -m --download-metadata --gpgcheck \
  -p rhel-repos

Use reposync with the --download-metadata option and without the --newest-only option for optimal download time.

- After the reposync operation is complete, compress the directory:

$ tar czvf rhel-repos.tar.gz rhel-repos

- Move the compressed archive to your disconnected environment.
- On the disconnected environment, create a directory to store the repository files:

$ sudo mkdir /opt/rhel-repos

- Extract the archive into the /opt/rhel-repos directory. The following command assumes the archive file is in your home directory:

$ sudo tar xzvf ~/rhel-repos.tar.gz -C /opt

- Create a Yum repository file at /etc/yum.repos.d/rhel.repo with the following content, replacing <RHEL_VERSION> with your RHEL version number:

[RHEL-BaseOS]
name=Red Hat Enterprise Linux BaseOS
baseurl=file:///opt/rhel-repos/rhel-<RHEL_VERSION>-baseos-rhui-rpms
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[RHEL-AppStream]
name=Red Hat Enterprise Linux AppStream
baseurl=file:///opt/rhel-repos/rhel-<RHEL_VERSION>-appstream-rhui-rpms
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

- Import the GPG key to allow the system to verify the packages, replacing <RHEL_VERSION> with your RHEL version number:

$ sudo rpm --import /opt/rhel-repos/rhel-<RHEL_VERSION>-baseos-rhui-rpms/RPM-GPG-KEY-redhat-release

- Verify the repository configuration:

$ sudo yum repolist
8.1.2. Configuring a local repository from a mounted ISO
You can use a Red Hat Enterprise Linux Binary DVD image to access the necessary RPM source dependencies in a disconnected environment.
Prerequisites
- You have downloaded the Red Hat Enterprise Linux Binary DVD image from the Red Hat Enterprise Linux downloads page and moved it to your disconnected environment.
Procedure
- In your disconnected environment, create a mount point directory to serve as the location for the ISO file:

$ sudo mkdir /media/rhel

- Mount the ISO image to the mount point, replacing <version_number> and <arch_name> with suitable values:

$ sudo mount -o loop rhel-<version_number>-<arch_name>-dvd.iso /media/rhel

Note: The ISO is mounted in a read-only state.

- Create a Yum repository file at /etc/yum.repos.d/rhel.repo with the following content:

[RHEL-BaseOS]
name=Red Hat Enterprise Linux BaseOS
baseurl=file:///media/rhel/BaseOS
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[RHEL-AppStream]
name=Red Hat Enterprise Linux AppStream
baseurl=file:///media/rhel/AppStream
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

- Import the GPG key to allow the system to verify the packages:

$ sudo rpm --import /media/rhel/RPM-GPG-KEY-redhat-release

- Verify the repository configuration:

$ sudo yum repolist
8.2. Performing a disconnected installation
A disconnected installation installs containerized Ansible Automation Platform without requiring network access to external registries.
Prerequisites
- You have prepared the Red Hat Enterprise Linux host
- You have obtained and configured the RPM source dependencies. The installation program uses your host system's dnf package manager to resolve these dependencies.
- You have prepared the managed nodes
- You have downloaded the containerized Ansible Automation Platform setup bundle from the Ansible Automation Platform download page.
Procedure
- Log in to the Red Hat Enterprise Linux host as your non-root user.
- Update the inventory file by following the steps in Configuring the inventory file.

Note: Do not include registry_username or registry_password in your inventory file for disconnected installations. These variables are only required for online installations. All container images are pre-packaged in the setup bundle.

- Ensure you include the following variables in your inventory file under the [all:vars] group, as shown in the example after this procedure:

bundle_install=true
# The bundle directory must include /bundle in the path
bundle_dir='{{ lookup("ansible.builtin.env", "PWD") }}/bundle'

- Follow the steps in Installing containerized Ansible Automation Platform to install containerized Ansible Automation Platform and verify your installation.
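For example, if you extracted the setup bundle under your home directory, the variables might look like the following; the exact directory name depends on the bundle version and architecture you downloaded:

bundle_install=true
bundle_dir=/home/user/ansible-automation-platform-containerized-setup-bundle-<version>-<arch>/bundle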
Chapter 9. Horizontal scaling in Red Hat Ansible Automation Platform
You can set up multi-node deployments for components across Ansible Automation Platform. Whether you require horizontal scaling for Automation Execution, Automation Decisions, or automation mesh, you can scale your deployments based on your organization’s needs.
9.1. Horizontal scaling in Event-Driven Ansible controller
With Event-Driven Ansible controller, you can set up horizontal scaling for your events automation. This multi-node deployment enables you to define as many nodes as you prefer during the installation process. You can also increase or decrease the number of nodes at any time according to your organizational needs.
The following node types are used in this deployment:
- API node type
- Responds to the HTTP REST API of Event-Driven Ansible controller.
- Worker node type
- Runs an Event-Driven Ansible worker, which is the component of Event-Driven Ansible that not only manages projects and activations, but also executes the activations themselves.
- Hybrid node type
- Is a combination of the API node and the worker node.
The following example shows how you can set up an inventory file for horizontal scaling of Event-Driven Ansible controller on Red Hat Enterprise Linux VMs using the host group name [automationeda] and the node type variable eda_type:
[automationeda]
3.88.116.111 routable_hostname=automationeda-api.example.com eda_type=api
# worker node
3.88.116.112 routable_hostname=automationeda-api.example.com eda_type=worker
9.1.1. Sizing and scaling guidelines
API nodes process user requests (interactions with the UI or API) while worker nodes process the activations and other background tasks required for Event-Driven Ansible to function properly. The number of API nodes you require correlates to the required number of users of the application and the number of worker nodes correlates to the required number of activations you want to run.
Because activation load varies and is handled by worker nodes, the supported approach for scaling is to use separate API and worker nodes instead of hybrid nodes, which allows worker nodes to allocate hardware resources efficiently. By separating the node types, you can scale each type independently based on specific needs, leading to better resource utilization and cost efficiency.
For example, you might scale up your node deployment when you want to deploy Event-Driven Ansible for a small group of users who run a large number of activations. In this case, one API node is adequate, but if you require more capacity, you can scale up with three additional worker nodes.
9.1.2. Setting up horizontal scaling for Event-Driven Ansible controller
To scale up (add more nodes) or scale down (remove nodes), you must update the content of the inventory file to add or remove nodes and rerun the installation program.
Procedure
- Update the inventory to add two more worker nodes:

[automationeda]
3.88.116.111 routable_hostname=automationeda-api.example.com eda_type=api
3.88.116.112 routable_hostname=automationeda-api.example.com eda_type=worker

# two more worker nodes
3.88.116.113 routable_hostname=automationeda-api.example.com eda_type=worker
3.88.116.114 routable_hostname=automationeda-api.example.com eda_type=worker

- Re-run the installer.
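For example, re-run the same install command that you used for the initial deployment:

$ ansible-playbook -i inventory ansible.containerized_installer.install

The installation program then configures the newly added worker nodes.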
Appendix A. Troubleshooting containerized Ansible Automation Platform
Use this information to troubleshoot your containerized Ansible Automation Platform installation.
A.1. Gathering Ansible Automation Platform logs
With the sos utility, you can collect configuration, diagnostic, and troubleshooting data, and give those files to Red Hat Technical Support. An sos report is a common starting point for Red Hat technical support engineers when performing analysis of a service request for Ansible Automation Platform.
You can collect an sos report for each host in your containerized Ansible Automation Platform deployment by running the log_gathering playbook with the appropriate parameters.
Procedure
- Go to the Ansible Automation Platform installation directory.
- Run the log_gathering playbook. This playbook connects to each host in the inventory file, installs the sos tool, and generates the sos report:

$ ansible-playbook -i <path_to_inventory_file> ansible.containerized_installer.log_gathering

- Optional: To define additional parameters, specify them with the -e option. For example:

$ ansible-playbook -i <path_to_inventory_file> ansible.containerized_installer.log_gathering -e 'target_sos_directory=<path_to_files>' -e 'case_number=0000000' -e 'clean=true' -e 'upload=true' --step

- You can use the --step option to confirm each task in the playbook before it runs. This is optional but can be helpful for debugging.
log_gatheringplaybook:Expand Table A.1. Parameter reference Parameter name Description Default target_sos_directoryUsed to change the default location for the
sosreport files./tmpdirectory of the current server.case_numberSpecifies the support case number if relevant to the log gathering.
cleanObfuscates sensitive data that might be present on the
sosreport.falseuploadAutomatically uploads the
sosreport data to Red Hat.false
- Gather the sos report files described in the playbook output and share them with the support engineer, or directly upload the sos report to Red Hat by using the upload=true additional parameter.
A.2. Diagnosing the problem
For general container-based troubleshooting, you can inspect the container logs for any running service to help troubleshoot underlying issues.
Identifying the running containers
To get a list of the running container names run the following command:
$ podman ps --all --format "{{.Names}}"
| Component group | Container name | Purpose |
|---|---|---|
| Automation controller | automation-controller-rsyslog | Handles centralized logging for automation controller. |
| Automation controller | automation-controller-task | Manages and runs tasks related to automation controller, such as running playbooks and interacting with inventories. |
| Automation controller | automation-controller-web | A web server that provides a REST API for automation controller. This is accessed and routed through platform gateway for user interaction. |
| Event-Driven Ansible | automation-eda-api | Exposes the API for Event-Driven Ansible, allowing external systems to trigger and manage event-driven automations. |
| Event-Driven Ansible | automation-eda-daphne | A web server for Event-Driven Ansible, handling WebSocket connections and serving static files. |
| Event-Driven Ansible | automation-eda-web | A web server that provides a REST API for Event-Driven Ansible. This is accessed and routed through platform gateway for user interaction. |
| Event-Driven Ansible | automation-eda-worker-<number> | These containers run the automation rules and playbooks based on incoming events. |
| Event-Driven Ansible | automation-eda-activation-worker-<number> | These containers manage the activation of automation rules, ensuring they run when specific conditions are met. |
| Event-Driven Ansible | automation-eda-scheduler | Responsible for scheduling and managing recurring tasks and rule activations. |
| Platform gateway | automation-gateway-proxy | Acts as a reverse proxy, routing incoming requests to the appropriate Ansible Automation Platform services. |
| Platform gateway | automation-gateway | Responsible for authentication, authorization, and overall request handling for the platform, all of which is exposed through a REST API and served by a web server. |
| Automation hub | automation-hub-api | Provides the API for automation hub, enabling interaction with collection content, user management, and other automation hub functionality. |
| Automation hub | automation-hub-content | Manages and serves Ansible Content Collections, roles, and modules stored in automation hub. |
| Automation hub | automation-hub-web | A web server that provides a REST API for automation hub. This is accessed and routed through platform gateway for user interaction. |
| Automation hub | automation-hub-worker-<number> | These containers handle background tasks for automation hub, such as content synchronization, indexing, and validation. |
| Performance Co-Pilot | pcp | If Performance Co-Pilot Monitoring is enabled, this container is used for system performance monitoring and data collection. |
| PostgreSQL | postgresql | Hosts the PostgreSQL database for Ansible Automation Platform. |
| Receptor | receptor | Facilitates secure and reliable communication within Ansible Automation Platform. |
| Redis | redis | Responsible for caching, real-time analytics, and fast data retrieval. |
Inspecting the logs
Containerized Ansible Automation Platform uses journald for Podman logging. To inspect any running container logs, run the journalctl command:
$ journalctl CONTAINER_NAME=<container_name>
Example command with output:
$ journalctl CONTAINER_NAME=automation-gateway-proxy
Oct 08 01:40:12 aap.example.org automation-gateway-proxy[1919]: [2024-10-08 00:40:12.885][2][info][upstream] [external/envoy/source/common/upstream/cds_ap>
Oct 08 01:40:12 aap.example.org automation-gateway-proxy[1919]: [2024-10-08 00:40:12.885][2][info][upstream] [external/envoy/source/common/upstream/cds_ap>
Oct 08 01:40:19 aap.example.org automation-gateway-proxy[1919]: [2024-10-08T00:40:16.753Z] "GET /up HTTP/1.1" 200 - 0 1138 10 0 "192.0.2.1" "python->
To view the logs of a running container in real-time, run the podman logs -f command:
$ podman logs -f <container_name>
Controlling container operations
You can control operations for a container by running the systemctl command:
$ systemctl --user status <container_name>
Example command with output:
$ systemctl --user status automation-gateway-proxy
● automation-gateway-proxy.service - Podman automation-gateway-proxy.service
Loaded: loaded (/home/user/.config/systemd/user/automation-gateway-proxy.service; enabled; preset: disabled)
Active: active (running) since Mon 2024-10-07 12:39:23 BST; 23h ago
Docs: man:podman-generate-systemd(1)
Process: 780 ExecStart=/usr/bin/podman start automation-gateway-proxy (code=exited, status=0/SUCCESS)
Main PID: 1919 (conmon)
Tasks: 1 (limit: 48430)
Memory: 852.0K
CPU: 2.996s
CGroup: /user.slice/user-1000.slice/user@1000.service/app.slice/automation-gateway-proxy.service
└─1919 /usr/bin/conmon --api-version 1 -c 2dc3c7b2cecd73010bad1e0aaa806015065f92556ed3591c9d2084d7ee209c7a -u 2dc3c7b2cecd73010bad1e0aaa80>
Oct 08 11:44:10 aap.example.org automation-gateway-proxy[1919]: [2024-10-08T10:44:02.926Z] "GET /api/galaxy/_ui/v1/settings/ HTTP/1.1" 200 - 0 654 58 47 ">
Oct 08 11:44:10 aap.example.org automation-gateway-proxy[1919]: [2024-10-08T10:44:03.387Z] "GET /api/controller/v2/config/ HTTP/1.1" 200 - 0 4018 58 44 "1>
Oct 08 11:44:10 aap.example.org automation-gateway-proxy[1919]: [2024-10-08T10:44:03.370Z] "GET /api/galaxy/v3/plugin/ansible/search/collection-versions/?>
Oct 08 11:44:10 aap.example.org automation-gateway-proxy[1919]: [2024-10-08T10:44:03.405Z] "GET /api/controller/v2/organizations/?role_level=notification_>
Oct 08 11:44:10 aap.example.org automation-gateway-proxy[1919]: [2024-10-08T10:44:04.366Z] "GET /api/galaxy/_ui/v1/me/ HTTP/1.1" 200 - 0 1368 79 40 "192.1>
Oct 08 11:44:10 aap.example.org automation-gateway-proxy[1919]: [2024-10-08T10:44:04.360Z] "GET /api/controller/v2/workflow_approvals/?page_size=200&statu>
Oct 08 11:44:10 aap.example.org automation-gateway-proxy[1919]: [2024-10-08T10:44:04.379Z] "GET /api/controller/v2/job_templates/7/ HTTP/1.1" 200 - 0 1356>
Oct 08 11:44:10 aap.example.org automation-gateway-proxy[1919]: [2024-10-08T10:44:04.378Z] "GET /api/galaxy/_ui/v1/feature-flags/ HTTP/1.1" 200 - 0 207 81>
Oct 08 11:44:13 aap.example.org automation-gateway-proxy[1919]: [2024-10-08 10:44:13.856][2][info][upstream] [external/envoy/source/common/upstream/cds_ap>
Oct 08 11:44:13 aap.example.org automation-gateway-proxy[1919]: [2024-10-08 10:44:13.856][2][info][upstream] [external/envoy/source/common/upstream/cds_ap
Getting container information about the execution plane
To get container information about automation controller, Event-Driven Ansible, and execution_nodes nodes, prefix any Podman commands with either:
CONTAINER_HOST=unix://run/user/<user_id>/podman/podman.sock
or
CONTAINERS_STORAGE_CONF=<user_home_directory>/aap/containers/storage.conf
Example with output:
$ CONTAINER_HOST=unix://run/user/1000/podman/podman.sock podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.redhat.io/ansible-automation-platform-25/ee-supported-rhel8 latest 59d1bc680a7c 6 days ago 2.24 GB
registry.redhat.io/ansible-automation-platform-25/ee-minimal-rhel8 latest a64b9fc48094 6 days ago 338 MB
A.3. Troubleshooting containerized Ansible Automation Platform installation
Use this information to troubleshoot your containerized installation of Ansible Automation Platform.
The installation takes a long time or has errors. What should I check?
- Ensure your system meets the minimum requirements as outlined in System requirements. Factors such as improper storage choices and high latency when distributing across many hosts will all have an impact on installation time.
- Review the installation log file, which is located by default at ./aap_install.log. You can change the log file location within the ansible.cfg file in the installation directory.
- Enable task profiling callbacks on an ad hoc basis to give an overview of where the installation program spends the most time. To do this, use the local ansible.cfg file. Add a callback line under the [defaults] section, for example:
$ cat ansible.cfg
[defaults]
callbacks_enabled = ansible.posix.profile_tasks
Automation controller returns an error of 413
This error occurs when a manifest.zip license file is larger than the controller_nginx_client_max_body_size setting. If this error occurs, update the inventory file to include the following variable:
controller_nginx_client_max_body_size=5m
The default setting of 5m should prevent this issue, but you can increase the value as needed.
When attempting to install containerized Ansible Automation Platform in Amazon Web Services, you receive output that there is no space left on device
TASK [ansible.containerized_installer.automationcontroller : Create the receptor container] ***************************************************
fatal: [ec2-13-48-25-168.eu-north-1.compute.amazonaws.com]: FAILED! => {"changed": false, "msg": "Can't create container receptor", "stderr": "Error: creating container storage: creating an ID-mapped copy of layer \"98955f43cc908bd50ff43585fec2c7dd9445eaf05eecd1e3144f93ffc00ed4ba\": error during chown: storage-chown-by-maps: lchown usr/local/lib/python3.9/site-packages/azure/mgmt/network/v2019_11_01/operations/__pycache__/_available_service_aliases_operations.cpython-39.pyc: no space left on device: exit status 1\n", "stderr_lines": ["Error: creating container storage: creating an ID-mapped copy of layer \"98955f43cc908bd50ff43585fec2c7dd9445eaf05eecd1e3144f93ffc00ed4ba\": error during chown: storage-chown-by-maps: lchown usr/local/lib/python3.9/site-packages/azure/mgmt/network/v2019_11_01/operations/__pycache__/_available_service_aliases_operations.cpython-39.pyc: no space left on device: exit status 1"], "stdout": "", "stdout_lines": []}
If you are installing into a default Amazon Web Services marketplace RHEL instance, the /home filesystem might be too small because /home is part of the root (/) filesystem. To resolve this issue, you must make more space available. For more information about the system requirements, see System requirements.
"Install container tools" task fails due to unavailable packages
This error can be seen in the installation process output as the following:
TASK [ansible.containerized_installer.common : Install container tools] **********************************************************************************************************
fatal: [192.0.2.1]: FAILED! => {"changed": false, "failures": ["No package crun available.", "No package podman available.", "No package slirp4netns available.", "No package fuse-overlayfs available."], "msg": "Failed to install some of the specified packages", "rc": 1, "results": []}
fatal: [192.0.2.2]: FAILED! => {"changed": false, "failures": ["No package crun available.", "No package podman available.", "No package slirp4netns available.", "No package fuse-overlayfs available."], "msg": "Failed to install some of the specified packages", "rc": 1, "results": []}
fatal: [192.0.2.3]: FAILED! => {"changed": false, "failures": ["No package crun available.", "No package podman available.", "No package slirp4netns available.", "No package fuse-overlayfs available."], "msg": "Failed to install some of the specified packages", "rc": 1, "results": []}
fatal: [192.0.2.4]: FAILED! => {"changed": false, "failures": ["No package crun available.", "No package podman available.", "No package slirp4netns available.", "No package fuse-overlayfs available."], "msg": "Failed to install some of the specified packages", "rc": 1, "results": []}
fatal: [192.0.2.5]: FAILED! => {"changed": false, "failures": ["No package crun available.", "No package podman available.", "No package slirp4netns available.", "No package fuse-overlayfs available."], "msg": "Failed to install some of the specified packages", "rc": 1, "results": []}
To fix this error, run the following command on the target hosts:
sudo subscription-manager register
A.4. Troubleshooting containerized Ansible Automation Platform configuration
Use this information to troubleshoot your containerized Ansible Automation Platform configuration.
Sometimes the post-install process for seeding my Ansible Automation Platform content errors out
This could manifest itself as output similar to this:
TASK [infra.controller_configuration.projects : Configure Controller Projects | Wait for finish the projects creation] ***************************************
Friday 29 September 2023 11:02:32 +0100 (0:00:00.443) 0:00:53.521 ******
FAILED - RETRYING: [daap1.lan]: Configure Controller Projects | Wait for finish the projects creation (1 retries left).
failed: [daap1.lan] (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': '536962174348.33944', 'results_file': '/home/aap/.ansible_async/536962174348.33944', 'changed': False, '__controller_project_item': {'name': 'AAP Config-As-Code Examples', 'organization': 'Default', 'scm_branch': 'main', 'scm_clean': 'no', 'scm_delete_on_update': 'no', 'scm_type': 'git', 'scm_update_on_launch': 'no', 'scm_url': 'https://github.com/user/repo.git'}, 'ansible_loop_var': '__controller_project_item'}) => {"__projects_job_async_results_item": {"__controller_project_item": {"name": "AAP Config-As-Code Examples", "organization": "Default", "scm_branch": "main", "scm_clean": "no", "scm_delete_on_update": "no", "scm_type": "git", "scm_update_on_launch": "no", "scm_url": "https://github.com/user/repo.git"}, "ansible_job_id": "536962174348.33944", "ansible_loop_var": "__controller_project_item", "changed": false, "failed": 0, "finished": 0, "results_file": "/home/aap/.ansible_async/536962174348.33944", "started": 1}, "ansible_job_id": "536962174348.33944", "ansible_loop_var": "__projects_job_async_results_item", "attempts": 30, "changed": false, "finished": 0, "results_file": "/home/aap/.ansible_async/536962174348.33944", "started": 1, "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
The infra.controller_configuration.dispatch role uses an asynchronous loop with 30 retries to apply each configuration type, and the default delay between retries is 1 second. If the configuration is large, this might not be enough time to apply everything before the last retry occurs.
Increase the retry delay by setting the controller_configuration_async_delay variable, for example, to 2 seconds. You can set this variable in the [all:vars] section of the installation program inventory file.
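For example, a minimal inventory snippet; the delay value is illustrative:

[all:vars]
controller_configuration_async_delay=2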
Re-run the installation program to ensure everything works as expected.
A.5. Containerized Ansible Automation Platform reference
Use this information to understand the architecture for your containerized Ansible Automation Platform deployment.
Can you give details of the architecture for the Ansible Automation Platform containerized design?
We use as much of the underlying Red Hat Enterprise Linux technology as possible. Podman is used for the container runtime and management of services.
Use podman ps to list the running containers on the system:
$ podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
88ed40495117 registry.redhat.io/rhel8/postgresql-13:latest run-postgresql 48 minutes ago Up 47 minutes postgresql
8f55ba612f04 registry.redhat.io/rhel8/redis-6:latest run-redis 47 minutes ago Up 47 minutes redis
56c40445c590 registry.redhat.io/ansible-automation-platform-24/ee-supported-rhel8:latest /usr/bin/receptor... 47 minutes ago Up 47 minutes receptor
f346f05d56ee registry.redhat.io/ansible-automation-platform-24/controller-rhel8:latest /usr/bin/launch_a... 47 minutes ago Up 45 minutes automation-controller-rsyslog
26e3221963e3 registry.redhat.io/ansible-automation-platform-24/controller-rhel8:latest /usr/bin/launch_a... 46 minutes ago Up 45 minutes automation-controller-task
c7ac92a1e8a1 registry.redhat.io/ansible-automation-platform-24/controller-rhel8:latest /usr/bin/launch_a... 46 minutes ago Up 28 minutes automation-controller-web
Use podman images to display information about locally stored images:
$ podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.redhat.io/ansible-automation-platform-24/ee-supported-rhel8 latest b497bdbee59e 10 days ago 3.16 GB
registry.redhat.io/ansible-automation-platform-24/controller-rhel8 latest ed8ebb1c1baa 10 days ago 1.48 GB
registry.redhat.io/rhel8/redis-6 latest 78905519bb05 2 weeks ago 357 MB
registry.redhat.io/rhel8/postgresql-13 latest 9b65bc3d0413 2 weeks ago 765 MB
Containerized Ansible Automation Platform runs as rootless containers for enhanced security by default. This means you can install containerized Ansible Automation Platform by using any local unprivileged user account. Privilege escalation is needed only for certain root-level tasks, and by default you do not need to use the root user directly.
The installation program adds the following files to the filesystem where you run the installation program on the underlying Red Hat Enterprise Linux host:
$ tree -L 1
.
├── aap_install.log
├── ansible.cfg
├── collections
├── galaxy.yml
├── inventory
├── LICENSE
├── meta
├── playbooks
├── plugins
├── README.md
├── requirements.yml
├── roles
The installation root directory includes other containerized services that make use of Podman volumes.
Here are some examples for further reference:
The containers directory includes some of the Podman specifics used and installed for the execution plane:
containers/
├── podman
├── storage
│ ├── defaultNetworkBackend
│ ├── libpod
│ ├── networks
│ ├── overlay
│ ├── overlay-containers
│ ├── overlay-images
│ ├── overlay-layers
│ ├── storage.lock
│ └── userns.lock
└── storage.conf
The controller directory has some of the installed configuration and runtime data points:
controller/
├── data
│ ├── job_execution
│ ├── projects
│ └── rsyslog
├── etc
│ ├── conf.d
│ ├── launch_awx_task.sh
│ ├── settings.py
│ ├── tower.cert
│ └── tower.key
├── nginx
│ └── etc
├── rsyslog
│ └── run
└── supervisor
└── run
The receptor directory has the automation mesh configuration:
receptor/
├── etc
│ └── receptor.conf
└── run
├── receptor.sock
└── receptor.sock.lock
After installation, you will also find other files in the local user’s /home directory such as the .cache directory:
.cache/
├── containers
│ └── short-name-aliases.conf.lock
└── rhsm
└── rhsm.log
Because services run under rootless Podman by default, supporting services, such as systemd, also run as the non-privileged user. Under systemd you can see some of the component service controls available:
The .config directory:
.config/
├── cni
│ └── net.d
│ └── cni.lock
├── containers
│ ├── auth.json
│ └── containers.conf
└── systemd
└── user
├── automation-controller-rsyslog.service
├── automation-controller-task.service
├── automation-controller-web.service
├── default.target.wants
├── podman.service.d
├── postgresql.service
├── receptor.service
├── redis.service
└── sockets.target.wants
This is specific to Podman and conforms to the Open Container Initiative (OCI) specifications. When you run Podman as the root user, /var/lib/containers is used by default. For standard users, the hierarchy under $HOME/.local is used.
The .local directory:
.local/
└── share
└── containers
├── cache
├── podman
└── storage
As an example, .local/share/containers/storage/volumes contains what the output from podman volume ls provides:
$ podman volume ls
DRIVER VOLUME NAME
local d73d3fe63a957bee04b4853fd38c39bf37c321d14fdab9ee3c9df03645135788
local postgresql
local redis_data
local redis_etc
local redis_run
The execution plane is isolated from the main control plane services to ensure that it does not affect them.
Control plane services run with the standard Podman configuration and can be found in: ~/.local/share/containers/storage.
Execution plane services (automation controller, Event-Driven Ansible and execution nodes) use a dedicated configuration found in ~/aap/containers/storage.conf. This separation prevents execution plane containers from affecting the control plane services.
You can view the execution plane configuration with one of the following commands:
CONTAINERS_STORAGE_CONF=~/aap/containers/storage.conf podman <subcommand>
CONTAINER_HOST=unix://run/user/<user uid>/podman/podman.sock podman <subcommand>
How can I see host resource utilization statistics?
Run the following command to display host resource utilization statistics:
$ podman container stats -a
Example output based on a Dell-sold and offered containerized Ansible Automation Platform solution (DAAP) installation that uses approximately 1.8 GB of RAM:
ID NAME CPU % MEM USAGE / LIMIT MEM % NET IO BLOCK IO PIDS CPU TIME AVG CPU %
0d5d8eb93c18 automation-controller-web 0.23% 959.1MB / 3.761GB 25.50% 0B / 0B 0B / 0B 16 20.885142s 1.19%
3429d559836d automation-controller-rsyslog 0.07% 144.5MB / 3.761GB 3.84% 0B / 0B 0B / 0B 6 4.099565s 0.23%
448d0bae0942 automation-controller-task 1.51% 633.1MB / 3.761GB 16.83% 0B / 0B 0B / 0B 33 34.285272s 1.93%
7f140e65b57e receptor 0.01% 5.923MB / 3.761GB 0.16% 0B / 0B 0B / 0B 7 1.010613s 0.06%
c1458367ca9c redis 0.48% 10.52MB / 3.761GB 0.28% 0B / 0B 0B / 0B 5 9.074042s 0.47%
ef712cc2dc89 postgresql 0.09% 21.88MB / 3.761GB 0.58% 0B / 0B 0B / 0B 21 15.571059s 0.80%
How much storage is used and where?
The container volume storage is under the local user at $HOME/.local/share/containers/storage/volumes.
To view the details of each volume, run the following command:
$ podman volume ls

Run the following command to display detailed information about a specific volume:
$ podman volume inspect <volume_name>
For example:
$ podman volume inspect postgresql
Example output:
[
{
"Name": "postgresql",
"Driver": "local",
"Mountpoint": "/home/aap/.local/share/containers/storage/volumes/postgresql/_data",
"CreatedAt": "2024-01-08T23:39:24.983964686Z",
"Labels": {},
"Scope": "local",
"Options": {},
"MountCount": 0,
"NeedsCopyUp": true
}
]
Several files created by the installation program are located in $HOME/aap/ and bind-mounted into various running containers.
To view the mounts associated with a container run the following command:
$ podman ps --format "{{.ID}}\t{{.Command}}\t{{.Names}}"

Example output:

89e779b81b83  run-postgresql        postgresql
4c33cc77ef7d  run-redis             redis
3d8a028d892d  /usr/bin/receptor...  receptor
09821701645c  /usr/bin/launch_a...  automation-controller-rsyslog
a2ddb5cac71b  /usr/bin/launch_a...  automation-controller-task
fa0029a3b003  /usr/bin/launch_a...  automation-controller-web
20f192534691  gunicorn --bind 1...  automation-eda-api
f49804c7e6cb  daphne -b 127.0.0...  automation-eda-daphne
d340b9c1cb74  /bin/sh -c nginx ...  automation-eda-web
111f47de5205  aap-eda-manage rq...  automation-eda-worker-1
171fcb1785af  aap-eda-manage rq...  automation-eda-worker-2
049d10555b51  aap-eda-manage rq...  automation-eda-activation-worker-1
7a78a41a8425  aap-eda-manage rq...  automation-eda-activation-worker-2
da9afa8ef5e2  aap-eda-manage sc...  automation-eda-scheduler
8a2958be9baf  gunicorn --name p...  automation-hub-api
0a8b57581749  gunicorn --name p...  automation-hub-content
68005b987498  nginx -g daemon o...  automation-hub-web
cb07af77f89f  pulpcore-worker       automation-hub-worker-1
a3ba05136446  pulpcore-worker       automation-hub-worker-2

Run the following command to view the mount sources for a container:

$ podman inspect <container_name> | jq -r .[].Mounts[].Source

Example output:

/home/aap/.local/share/containers/storage/volumes/receptor_run/_data
/home/aap/.local/share/containers/storage/volumes/redis_run/_data
/home/aap/aap/controller/data/rsyslog
/home/aap/aap/controller/etc/tower.key
/home/aap/aap/controller/etc/conf.d/callback_receiver_workers.py
/home/aap/aap/controller/data/job_execution
/home/aap/aap/controller/nginx/etc/controller.conf
/home/aap/aap/controller/etc/conf.d/subscription_usage_model.py
/home/aap/aap/controller/etc/conf.d/cluster_host_id.py
/home/aap/aap/controller/etc/conf.d/insights.py
/home/aap/aap/controller/rsyslog/run
/home/aap/aap/controller/data/projects
/home/aap/aap/controller/etc/settings.py
/home/aap/aap/receptor/etc/receptor.conf
/home/aap/aap/controller/etc/conf.d/execution_environments.py
/home/aap/aap/tls/extracted
/home/aap/aap/controller/supervisor/run
/home/aap/aap/controller/etc/uwsgi.ini
/home/aap/aap/controller/etc/conf.d/container_groups.py
/home/aap/aap/controller/etc/launch_awx_task.sh
/home/aap/aap/controller/etc/tower.cert

If the jq RPM is not installed, install it by running the following command:

$ sudo dnf -y install jq
Appendix B. Inventory file variables
The following tables contain information about the variables used in Ansible Automation Platform’s installation inventory files. The tables include the variables that you can use for RPM-based installation and container-based installation.
B.1. Ansible variables
The following variables control how Ansible Automation Platform interacts with remote hosts.
| Variable | Description |
|---|---|
| ansible_connection | The connection plugin used for the task on the target host. This can be the name of any Ansible connection plugin. SSH protocol types are smart, ssh, or paramiko. Default = smart. |
| ansible_host | The IP address or name of the target host to use instead of inventory_hostname. |
| ansible_password | The password to authenticate to the host. Do not store this variable in plain text. Always use a vault. For more information, see Keep vaulted variables safely visible. |
| ansible_port | The connection port number. The default for SSH is 22. |
| ansible_scp_extra_args | This setting is always appended to the default scp command line. |
| ansible_sftp_extra_args | This setting is always appended to the default sftp command line. |
| ansible_shell_executable | This sets the shell that the Ansible controller uses on the target machine and overrides the executable in ansible.cfg. |
| ansible_shell_type | The shell type of the target system. Do not use this setting unless you have set the ansible_shell_executable to a non-Bourne (sh) compatible shell. |
| ansible_ssh_common_args | This setting is always appended to the default command line for sftp, scp, and ssh. |
| ansible_ssh_executable | This setting overrides the default behavior to use the system ssh. |
| ansible_ssh_extra_args | This setting is always appended to the default ssh command line. |
| ansible_ssh_pipelining | Determines if SSH pipelining is used. This can override the pipelining setting in ansible.cfg. |
| ansible_ssh_private_key_file | Private key file used by SSH. Useful if using multiple keys and you do not want to use an SSH agent. |
| ansible_user | The user name to use when connecting to the host. Do not change this variable unless … |
| inventory_hostname | This variable takes the hostname of the machine from the inventory script or the Ansible configuration file. You cannot set the value of this variable. Because the value is taken from the configuration file, the actual runtime hostname value can vary from what is returned by this variable. |
B.2. Automation hub variables
Inventory file variables for automation hub.
| RPM variable name | Container variable name | Description | Required or optional | Default |
|---|---|---|---|---|
|
|
|
Automation hub administrator password. Use of special characters for this variable is limited. The password can include any printable ASCII character except | Required | |
|
| Set the existing token for the installation program. For example, a regenerated token in the automation hub UI will invalidate an existing token. Use this variable to set that token in the installation program the next time you run the installation program. | Optional | ||
|
|
|
If a collection signing service is enabled, collections are not signed automatically by default. Set this variable to | Optional |
|
|
|
Ansible automation hub provides artifacts in | Optional |
| |
|
|
| Maximum allowed size for data sent to automation hub through NGINX. | Optional |
|
|
| Denote whether or not the collection download count should be displayed in the UI. | Optional |
| |
|
|
Controls the type of content to upload when | Optional | Both certified and validated are enabled by default. | |
|
|
| Path to the collection signing key file. | Required if a collection signing service is enabled. | |
|
|
Denote whether or not to run the command | Optional |
| |
|
|
| Path to the container signing key file. | Required if a container signing service is enabled. | |
|
|
|
Set this variable to | Optional |
|
|
|
|
Set this variable to | Optional |
|
|
| automation hub backup path to exclude. | Optional |
| |
|
|
|
Controls whether HTTP Strict Transport Security (HSTS) is enabled or disabled for automation hub. Set this variable to | Optional |
|
|
|
|
Controls whether HTTPS is enabled or disabled for automation hub. Set this variable to | Optional |
|
|
|
Controls whether logging is enabled or disabled at | Optional |
| |
|
|
Controls whether read-only access is enabled or disabled for unauthorized users viewing collections or namespaces for automation hub. Set this variable to | Optional |
| |
|
|
Controls whether or not unauthorized users can download read-only collections from automation hub. Set this variable to | Optional |
| |
|
|
| The firewall zone where automation hub related firewall rules are applied. This controls which networks can access automation hub based on the zone’s trust level. | Optional |
RPM = no default set. Container = |
|
|
Denote whether or not to require the change of the default administrator password for automation hub during installation. Set to | Optional |
| |
|
|
|
Dictionary of settings to pass to the | Optional | |
|
|
Denote whether the web certificate sources are local to the installation program ( | Optional |
The value defined in | |
|
|
|
Controls whether client certificate authentication is enabled or disabled on the automation hub PostgreSQL database. Set this variable to | Optional |
|
|
|
| Name of the PostgreSQL database used by automation hub. | Optional |
RPM = |
|
|
| Hostname of the PostgreSQL database used by automation hub. | Required |
RPM = |
|
|
|
Password for the automation hub PostgreSQL database user. Use of special characters for this variable is limited. The | Optional | |
|
|
| Port number for the PostgreSQL database used by automation hub. | Optional |
|
|
|
|
Controls the SSL/TLS mode to use when automation hub connects to the PostgreSQL database. Valid options include | Optional |
|
|
|
| Username for the automation hub PostgreSQL database user. | Optional |
RPM = |
|
|
| Path to the PostgreSQL SSL/TLS certificate file for automation hub. | Required if using client certificate authentication. | |
|
|
| Path to the PostgreSQL SSL/TLS key file for automation hub. | Required if using client certificate authentication. | |
|
|
Denote whether the PostgreSQL client certificate sources are local to the installation program ( | Optional |
The value defined in | |
|
|
Controls whether content signing is enabled or disabled for automation hub. By default, when you upload collections to automation hub, an administrator must approve them before they are made available to users. To disable the content approval flow, set the variable to | Optional |
| |
|
|
Controls whether or not existing signing keys should be restored from a backup. Set to | Optional |
| |
|
|
|
Controls whether or not pre-loading of collections is enabled. When you run the bundle installer, validated content is uploaded to the | Optional |
|
|
|
| Path to the SSL/TLS certificate file for automation hub. | Optional | |
|
|
| Path to the SSL/TLS key file for automation hub. | Optional | |
|
|
|
Denote whether the automation hub provided certificate files are local to the installation program ( | Optional |
|
|
|
|
Controls whether archive compression is enabled or disabled for automation hub. You can control this functionality globally by using | Optional |
|
|
|
|
Controls whether database compression is enabled or disabled for automation hub. You can control this functionality globally by using | Optional |
|
|
|
| List of additional NGINX headers to add to automation hub’s NGINX configuration. | Optional |
|
|
|
Controls whether automation hub is the only registry for execution environment images. If set to | Optional |
| |
|
|
Controls whether or not a token is generated for automation hub during installation. By default, a token is automatically generated during a fresh installation. If set to | Optional |
| |
|
| Defines additional settings for use by automation hub during installation. See the example inventory snippet after this table.
| Optional |
| |
|
|
| Maximum duration (in seconds) that HTTP Strict Transport Security (HSTS) is enforced for automation hub. | Optional |
|
|
|
| Secret key value used by automation hub to sign and encrypt data. | Optional | |
|
| Azure blob storage account key. | Required if using an Azure blob storage backend. | ||
|
| Account name associated with the Azure blob storage. | Required if using an Azure blob storage backend. | |
|
| Name of the Azure blob storage container. | Optional |
| |
|
| Defines extra parameters for the Azure blob storage backend. For more information about the list of parameters, see django-storages documentation - Azure Storage. | Optional |
| |
|
| Password for the automation content collection signing service. | Required if the collection signing service is protected by a passphrase. | ||
|
| Service for signing collections. | Optional |
| |
|
| Password for the automation content container signing service. | Required if the container signing service is protected by a passphrase. | ||
|
| Service for signing containers. | Optional |
| |
|
| Port number that automation hub listens on for HTTP requests. | Optional |
| |
|
| Port number that automation hub listens on for HTTPS requests. | Optional |
| |
|
|
| Protocols that automation hub supports when handling HTTPS traffic. | Optional |
|
|
| UNIX socket used by automation hub to connect to the PostgreSQL database. | Optional | ||
|
| AWS S3 access key. | Required if using an AWS S3 storage backend. | ||
|
| Name of the AWS S3 storage bucket. | Optional |
| |
|
| Defines extra parameters for the AWS S3 storage backend. For more information about the list of parameters, see django-storages documentation - Amazon S3. | Optional |
| |
|
| AWS S3 secret key. | Required if using an AWS S3 storage backend. | ||
|
| Mount options for the Network File System (NFS) share. | Optional |
| |
|
|
Path to the Network File System (NFS) share with read, write, and execute (RWX) access. The value must match the format |
Required if installing more than one instance of automation hub with a | ||
|
|
Automation hub storage backend type. Possible values include: | Optional |
| |
|
| Number of automation hub workers. | Optional |
|
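The following is an illustrative sketch of common automation hub settings in an INI inventory, assuming the containerized variable names hub_admin_password, hub_pg_host, hub_pg_password, and hub_extra_settings. All values are placeholders, and EXAMPLE_SETTING is a hypothetical name shown only to illustrate the list-of-pairs format:

```
[all:vars]
hub_admin_password=<set-a-password>
hub_pg_host=db.example.org
hub_pg_password=<set-a-password>
# EXAMPLE_SETTING is hypothetical; replace it with a real automation hub setting
hub_extra_settings=[{'setting': 'EXAMPLE_SETTING', 'value': 'example'}]
```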
B.3. Automation controller variables Copy linkLink copied to clipboard!
Inventory file variables for automation controller.
| RPM variable name | Container variable name | Description | Required or optional | Default |
|---|---|---|---|---|
|
|
| Email address used by Django for the admin user for automation controller. | Optional |
|
|
|
|
Automation controller administrator password. Use of special characters for this variable is limited. The password can include any printable ASCII character except | Required | |
|
|
| Username used to identify and create the administrator user in automation controller. | Optional |
|
|
|
| Maximum allowed size for data sent to automation controller through NGINX. | Optional |
|
|
|
|
Controls whether archive compression is enabled or disabled for automation controller. You can control this functionality globally by using | Optional |
|
|
|
|
Controls whether database compression is enabled or disabled for automation controller. You can control this functionality globally by using | Optional |
|
|
|
|
Controls whether client certificate authentication is enabled or disabled on the automation controller PostgreSQL database. Set this variable to | Optional |
|
|
|
| The firewall zone where automation controller related firewall rules are applied. This controls which networks can access automation controller based on the zone’s trust level. | Optional |
|
|
|
Denote whether the web certificate sources are local to the installation program ( | Optional |
The value defined in | |
|
|
Denote whether the PostgreSQL client certificate sources are local to the installation program ( | Optional |
The value defined in | |
|
|
|
Denote whether the automation controller provided certificate files are local to the installation program ( | Optional |
|
|
|
|
Controls whether HTTP Strict Transport Security (HSTS) is enabled or disabled for automation controller. Set this variable to | Optional |
|
|
|
|
Controls whether HTTPS is enabled or disabled for automation controller. Set this variable to | Optional |
|
|
|
| Maximum duration (in seconds) that HTTP Strict Transport Security (HSTS) is enforced for automation controller. | Optional |
|
|
|
| Port number that automation controller listens on for HTTP requests. | Optional |
RPM = |
|
|
| Port number that automation controller listens on for HTTPS requests. | Optional |
RPM = |
|
|
| Protocols that automation controller supports when handling HTTPS traffic. | Optional |
|
|
|
| List of additional NGINX headers to add to automation controller’s NGINX configuration. | Optional |
|
|
| Controls whether or not to create preloaded content during installation. | Optional |
| |
|
|
The status of a node or group of nodes. Valid options include | Optional |
| |
|
|
See |
For the
For the
| Optional |
For |
|
|
See |
Used to indicate which nodes a specific host or group connects to. Wherever this variable is defined, an outbound connection to the specific host or group is established. This variable can be a comma-separated list of hosts and groups from the inventory. This is resolved into a set of hosts that is used to construct the | Optional | |
|
|
| Name of the PostgreSQL database used by automation controller. | Optional |
|
|
|
| Hostname of the PostgreSQL database used by automation controller. | Required | |
|
|
|
Password for the automation controller PostgreSQL database user. Use of special characters for this variable is limited. The | Required if not using client certificate authentication. | |
|
|
| Port number for the PostgreSQL database used by automation controller. | Optional |
|
|
|
|
Controls the SSL/TLS mode to use when automation controller connects to the PostgreSQL database. Valid options include | Optional |
|
|
|
| Username for the automation controller PostgreSQL database user. | Optional |
|
|
|
| Path to the PostgreSQL SSL/TLS certificate file for automation controller. | Required if using client certificate authentication. | |
|
|
| Path to the PostgreSQL SSL/TLS key file for automation controller. | Required if using client certificate authentication. | |
|
|
Number of hours' worth of events table partitions to pre-create before starting a backup to avoid | Optional | 3
|
|
|
Number of requests | Optional |
|
|
|
| Path to the SSL/TLS certificate file for automation controller. | Optional | |
|
|
| Path to the SSL/TLS key file for automation controller. | Optional | |
|
| Number of event workers that handle job-related events inside automation controller. | Optional |
| |
|
| Defines additional settings for use by automation controller during installation. See the example inventory snippet after this table.
| Optional |
| |
|
| Path to the automation controller license file. | |||
|
| Memory allocation for automation controller. | Optional |
| |
|
| UNIX socket used by automation controller to connect to the PostgreSQL database. | Optional | ||
|
| Secret key value used by automation controller to sign and encrypt data. | Optional |
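As a sketch, the corresponding automation controller settings in an INI inventory, assuming the containerized variable names controller_admin_password, controller_pg_host, controller_pg_password, and controller_extra_settings; all values are placeholders:

```
[all:vars]
controller_admin_password=<set-a-password>
controller_pg_host=db.example.org
controller_pg_password=<set-a-password>
# EXAMPLE_SETTING is hypothetical; replace it with a real automation controller setting
controller_extra_settings=[{'setting': 'EXAMPLE_SETTING', 'value': 'example'}]
```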
B.4. Database variables Copy linkLink copied to clipboard!
Inventory file variables for the database used with Ansible Automation Platform.
| RPM variable name | Container variable name | Description | Required or optional | Default |
|---|---|---|---|---|
|
|
| Port number for the PostgreSQL database. | Optional |
|
|
|
| Defines additional settings for use by PostgreSQL. See the example inventory snippet after this table.
| Optional | |
|
|
| The firewall zone where PostgreSQL related firewall rules are applied. This controls which networks can access PostgreSQL based on the zone’s trust level. | Optional |
RPM = no default set. Container = |
|
|
| Maximum number of concurrent connections to the database if you are using an installer-managed database. For more information see PostgreSQL database configuration and maintenance for automation controller. | Optional |
|
|
|
| Path to the PostgreSQL SSL/TLS certificate file. | Optional | |
|
|
| Path to the PostgreSQL SSL/TLS key file. | Optional | |
|
|
| Controls whether SSL/TLS is enabled or disabled for the PostgreSQL database. | Optional |
|
|
| Database name used for connections to the PostgreSQL database server. | Optional |
| |
|
| Password for the PostgreSQL admin user. When used, the installation program creates each component’s database and credentials. |
Required if using | ||
|
| Username for the PostgreSQL admin user. When used, the installation program creates each component’s database and credentials. | Optional |
| |
|
| Memory allocation available (in MB) for caching data. | Optional | ||
|
|
Controls whether or not to keep databases during uninstall. This variable applies only to databases managed by the installation program, not to external (customer-managed) databases. Set to | Optional |
| |
|
| Destination for server log output. | Optional |
| |
|
| The algorithm for encrypting passwords. | Optional |
| |
|
| Memory allocation (in MB) for shared memory buffers. | Optional | ||
|
|
Denote whether the PostgreSQL provided certificate files are local to the installation program ( | Optional |
| |
|
|
Controls whether archive compression is enabled or disabled for PostgreSQL. You can control this functionality globally by using | Optional |
|
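A minimal sketch of installer-managed database settings in an INI inventory, assuming the containerized variable names postgresql_admin_username, postgresql_admin_password, and postgresql_extra_settings. The max_connections pair is shown only to illustrate the extra-settings format:

```
[all:vars]
postgresql_admin_username=postgres
postgresql_admin_password=<set-a-password>
# Illustrative only; confirm the setting and value against your PostgreSQL version
postgresql_extra_settings=[{'setting': 'max_connections', 'value': '1024'}]
```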
B.5. Event-Driven Ansible controller variables Copy linkLink copied to clipboard!
Inventory file variables for Event-Driven Ansible controller.
| RPM variable name | Container variable name | Description | Required or optional | Default |
|---|---|---|---|---|
|
|
| Number of workers used for ansible-rulebook activation pods in Event-Driven Ansible. | Optional |
RPM = (number of cores or threads) * 2 + 1. Container = |
|
|
| Email address used by Django for the admin user for Event-Driven Ansible. | Optional |
|
|
|
|
Event-Driven Ansible administrator password. Use of special characters for this variable is limited. The password can include any printable ASCII character except | Required | |
|
|
| Username used to identify and create the administrator user in Event-Driven Ansible. | Optional |
|
|
| Number of workers for handling the API served through Gunicorn on worker nodes. | Optional |
| |
|
|
Denote whether the cache cert sources are local to the installation program ( | Optional |
| |
|
|
Controls whether or not to regenerate Event-Driven Ansible client certificates for the platform cache. Set to | Optional |
| |
|
|
| Number of workers used in Event-Driven Ansible for application work. | Optional | Number of cores or threads |
|
|
|
Controls whether HTTP Strict Transport Security (HSTS) is enabled or disabled for Event-Driven Ansible. Set this variable to | Optional |
|
|
|
|
Controls whether HTTPS is enabled or disabled for Event-Driven Ansible. Set this variable to | Optional |
|
|
|
| API prefix path used for Event-Driven Ansible event-stream through platform gateway. | Optional |
|
|
|
| The firewall zone where Event-Driven Ansible related firewall rules are applied. This controls which networks can access Event-Driven Ansible based on the zone’s trust level. | Optional |
RPM = no default set. Container = |
|
| Number of workers for handling event streaming for Event-Driven Ansible. | Optional |
| |
|
|
| Number of workers for handling the API served through Gunicorn. | Optional | (Number of cores or threads) * 2 + 1 |
|
|
| Port number that Event-Driven Ansible listens on for HTTP requests. | Optional |
RPM = |
|
|
| Port number that Event-Driven Ansible listens on for HTTPS requests. | Optional |
RPM = |
|
|
| Maximum number of activations running concurrently per node. This must be an integer greater than 0. | Optional |
|
|
|
Denote whether the web cert sources are local to the installation program ( | Optional |
| |
|
|
|
Controls whether client certificate authentication is enabled or disabled on the Event-Driven Ansible PostgreSQL database. Set this variable to | Optional |
|
|
|
| Name of the PostgreSQL database used by Event-Driven Ansible. | Optional |
RPM = |
|
|
| Hostname of the PostgreSQL database used by Event-Driven Ansible. | Required | |
|
|
|
Password for the Event-Driven Ansible PostgreSQL database user. Use of special characters for this variable is limited. The | Required if not using client certificate authentication. | |
|
|
| Port number for the PostgreSQL database used by Event-Driven Ansible. | Optional |
|
|
|
|
Determines the level of encryption and authentication for client server connections. Valid options include | Optional |
|
|
|
| Username for the Event-Driven Ansible PostgreSQL database user. | Optional |
RPM = |
|
|
| Path to the PostgreSQL SSL/TLS certificate file for Event-Driven Ansible. | Required if using client certificate authentication. | |
|
|
| Path to the PostgreSQL SSL/TLS key file for Event-Driven Ansible. | Required if using client certificate authentication. | |
|
|
Denote whether the PostgreSQL client cert sources are local to the installation program ( | Optional |
| |
|
|
|
URL for connecting to the event stream. The URL must start with the | Optional | |
|
|
| Hostname of the Redis host used by Event-Driven Ansible. | Optional |
First node in the |
|
|
| Password for Event-Driven Ansible Redis. | Optional | Randomly generated string |
|
|
| Port number for the Redis host for Event-Driven Ansible. | Optional |
RPM = The value defined in platform gateway’s implementation ( |
|
|
| Username for Event-Driven Ansible Redis. | Optional |
|
|
|
| Secret key value used by Event-Driven Ansible to sign and encrypt data. | Optional | |
|
|
| Path to the SSL/TLS certificate file for Event-Driven Ansible. | Optional | |
|
|
| Path to the SSL/TLS key file for Event-Driven Ansible. | Optional | |
|
|
|
Denote whether the Event-Driven Ansible provided certificate files are local to the installation program ( | Optional |
|
|
|
List of host addresses in the form: | Optional |
| |
|
|
|
Controls whether archive compression is enabled or disabled for Event-Driven Ansible. You can control this functionality globally by using | Optional |
|
|
|
|
Controls whether database compression is enabled or disabled for Event-Driven Ansible. You can control this functionality globally by using | Optional |
|
|
|
| List of additional NGINX headers to add to Event-Driven Ansible’s NGINX configuration. | Optional |
|
|
|
Controls whether or not to perform SSL verification for the Daphne WebSocket used by Podman to communicate from the pod to the host. Set to | Optional |
| |
|
|
|
Event-Driven Ansible node type. Valid options include | Optional |
|
|
|
Controls whether debug mode is enabled or disabled for Event-Driven Ansible. Set to | Optional |
| |
|
| Defines additional settings for use by Event-Driven Ansible during installation. See the example inventory snippet after this table.
| Optional |
| |
|
| Maximum allowed size for data sent to Event-Driven Ansible through NGINX. | Optional |
| |
|
| Maximum duration (in seconds) that HTTP Strict Transport Security (HSTS) is enforced for Event-Driven Ansible. | Optional |
| |
|
|
| Protocols that Event-Driven Ansible supports when handling HTTPS traffic. | Optional |
|
|
| UNIX socket used by Event-Driven Ansible to connect to the PostgreSQL database. | Optional | ||
|
|
| Controls whether TLS is enabled or disabled for Event-Driven Ansible Redis. Set this variable to true to disable TLS. | Optional |
|
|
| Path to the Event-Driven Ansible Redis certificate file. | Optional | ||
|
| Path to the Event-Driven Ansible Redis key file. | Optional | ||
|
| List of plugins that are allowed to run within Event-Driven Ansible. For more information, see Adding a safe plugin variable to Event-Driven Ansible controller. | Optional |
|
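A sketch of the equivalent Event-Driven Ansible settings in an INI inventory, assuming the containerized variable names eda_admin_password, eda_pg_host, eda_pg_password, and eda_extra_settings; all values are placeholders:

```
[all:vars]
eda_admin_password=<set-a-password>
eda_pg_host=db.example.org
eda_pg_password=<set-a-password>
# EXAMPLE_SETTING is hypothetical; replace it with a real Event-Driven Ansible setting
eda_extra_settings=[{'setting': 'EXAMPLE_SETTING', 'value': 'example'}]
```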
B.6. General variables Copy linkLink copied to clipboard!
General inventory file variables for Ansible Automation Platform.
| RPM variable name | Container variable name | Description | Required or optional | Default |
|---|---|---|---|---|
|
|
|
Path to the user-provided CA certificate file. When you specify this variable, the installation program automatically generates TLS certificates for each Ansible Automation Platform service signed by this CA. You do not need to define individual service certificate variables (such as | Optional | |
|
|
|
Denote whether the CA certificate files are local to the installation program ( | Optional |
|
|
| Bit size of the internally managed CA certificate private key. | Optional |
| |
|
|
|
Path to the key file for the CA certificate provided in | Optional | |
|
| Cipher used for signing the internally managed CA certificate private key. | Optional |
| |
|
| Denotes whether or not to regenerate the internally managed CA certificate key pair. | Optional |
| |
|
| Bit size of the component key pair managed by the internal CA. | Optional |
| |
|
| Denotes whether or not to regenerate the component key pair managed by the internal CA. | Optional |
| |
|
|
A list of additional SAN records for signing a service. Assign these to components in the inventory file as host variables rather than group or all variables. All strings must also contain their corresponding SAN option prefix such as | Optional |
| |
|
|
Directory local to | Optional |
The value defined in | |
|
|
| Directory used to store backup files. | Optional |
RPM = |
|
| Prefix used for the name of the final backup file. | Optional |
| |
|
|
|
Controls whether or not to perform an offline or bundled installation. Set this variable to | Optional |
|
|
|
| Path to the bundle directory used when performing a bundle install. |
Required if |
RPM = |
|
|
|
Path to the custom CA certificate file. Use this variable when you have manually provided TLS certificates for Ansible Automation Platform services (such as
This variable adds the CA certificate to the environment to ensure proper authentication and trust of the manually provided certificates. This variable is not needed when using | Optional | |
|
|
| By default, the installation registers the node with Red Hat Insights for Red Hat Ansible Automation Platform if the node is registered with Subscription Manager. Set to | Optional |
| |
|
|
|
Password credential for access to the registry source defined in
Not required for disconnected (bundled) installations where |
RPM = Required if you need a password to access | |
|
|
| URL of the registry source from which to pull execution environment images. | Optional |
|
|
|
|
Username credential for access to the registry source defined in
Not required for disconnected (bundled) installations where |
RPM = Required if you need a username to access |
|
|
| Controls whether SSL/TLS certificate verification is enabled or disabled when making HTTPS requests. | Optional |
|
|
| Path to the tar file used for the platform restore. | Optional |
| |
|
| Path prefix for the staged restore components. | Optional |
| |
|
|
|
Used if the machine running the installation program can only route to the target host through a specific URL. For example, if you use short names in your inventory, but the node running the installation program can only resolve that host by using an FQDN. If | Optional |
|
|
|
Controls at a global level whether the filesystem-related backup files are compressed before being sent to the host to run the backup operation. If set to
You can control this functionality at a component level by using the | Optional |
|
|
|
| Controls at a global level whether the database-related backup files are compressed before being sent to the host to run the backup operation.
You can control this functionality at a component level by using the | Optional |
|
|
|
Passphrase used to decrypt the key provided in | Optional | ||
|
|
Sets the HTTP timeout for end-user requests. The minimum value is | Optional |
| |
|
| Compression software to use for compressing container images. | Optional |
| |
|
|
Controls whether or not to keep container images when uninstalling Ansible Automation Platform. Set to | Optional |
| |
|
|
Controls whether or not to pull newer container images during installation. Set to | Optional |
| |
|
| The directory where the installation program temporarily stores container images during installation. | Optional | The system’s temporary directory. | |
|
| The firewall zone where Performance Co-Pilot related firewall rules are applied. This controls which networks can access Performance Co-Pilot based on the zone’s trust level. | Optional | public | |
|
|
Controls whether archive compression is enabled or disabled for Performance Co-Pilot. You can control this functionality globally by using | Optional |
| |
|
|
Controls whether to use registry authentication. When set to | Optional |
| |
|
| Ansible Automation Platform registry namespace. | Optional |
| |
|
| RHEL registry namespace. | Optional |
| |
|
|
Set to | Optional |
|
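As a sketch of how the registry and bundle variables fit together in an INI inventory; the credentials are placeholders, and the bundle path depends on where you extracted the setup bundle:

```
[all:vars]
# Online installation: pull container images from registry.redhat.io
registry_username=<registry-service-account>
registry_password=<registry-token-or-password>

# Disconnected installation: install from a local bundle instead
#bundle_install=true
#bundle_dir=<path-to-extracted-bundle>/bundle
```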
B.7. Image variables Copy linkLink copied to clipboard!
Inventory file variables for images.
| RPM variable name | Container variable name | Description | Required or optional | Default |
|---|---|---|---|---|
|
| Additional container images to pull from the configured container registry during deployment. | Optional |
| |
|
| Container image for automation controller. | Optional |
| |
|
| Additional decision environment container images to pull from the configured container registry during deployment. | Optional |
| |
|
| Supported decision environment container image. | Optional |
| |
|
| Backend container image for Event-Driven Ansible. | Optional |
| |
|
| Front-end container image for Event-Driven Ansible. | Optional |
| |
|
| Additional execution environment container images to pull from the configured container registry during deployment. | Optional |
| |
|
| Minimal execution environment container image. | Optional |
| |
|
| Supported execution environment container image. | Optional |
| |
|
| Container image for platform gateway. | Optional |
| |
|
| Container image for platform gateway proxy. | Optional |
| |
|
| Backend container image for automation hub. | Optional |
| |
|
| Front-end container image for automation hub. | Optional |
| |
|
| Container image for Performance Co-Pilot. | Optional |
| |
|
| Container image for PostgreSQL. | Optional |
| |
|
| Container image for receptor. | Optional |
| |
|
| Container image for Redis. | Optional |
|
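For example, a hedged sketch of overriding one of these image variables in an INI inventory; the mirror registry path is a placeholder and assumes the image has already been mirrored there:

```
[all:vars]
# Placeholder mirror path shown only to illustrate overriding a default image
controller_image=registry.example.org/ansible-automation-platform/controller-rhel8:latest
```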
B.8. Platform gateway variables Copy linkLink copied to clipboard!
Inventory file variables for platform gateway.
| RPM variable name | Container variable name | Description | Required or optional | Default |
|---|---|---|---|---|
|
|
| Email address used by Django for the admin user for platform gateway. | Optional |
|
|
|
|
Platform gateway administrator password. Use of special characters for this variable is limited. The password can include any printable ASCII character except | Required | |
|
|
|
Username used to identify and create the administrator user in platform gateway. The installation program uses this account to register services with platform gateway. If you have deleted the default | Optional |
|
|
|
| Path to the platform gateway Redis certificate file. | Optional | |
|
|
| Path to the platform gateway Redis key file. | Optional | |
|
|
Denote whether the cache client certificate files are local to the installation program ( | Optional |
The value defined in | |
|
|
Controls whether or not to regenerate platform gateway client certificates for the platform cache. Set to | Optional |
| |
|
|
| Port number for the platform gateway control plane. | Optional |
|
|
|
|
Controls whether HTTP Strict Transport Security (HSTS) is enabled or disabled for platform gateway. Set this variable to | Optional |
|
|
|
|
Controls whether HTTPS is enabled or disabled for platform gateway. Set this variable to | Optional |
RPM = The value defined in |
|
|
| The firewall zone where platform gateway related firewall rules are applied. This controls which networks can access platform gateway based on the zone’s trust level. | Optional | RPM = no default set. Container = 'public'. |
|
|
| Timeout duration (in seconds) for requests made to the gRPC service on platform gateway. | Optional |
|
|
|
| Maximum number of threads that each gRPC server process can create to handle requests on platform gateway. | Optional |
|
|
|
| Number of processes for handling gRPC requests on platform gateway. | Optional |
|
|
|
| Port number that platform gateway listens on for HTTP requests. | Optional |
RPM = |
|
|
| Port number that platform gateway listens on for HTTPS requests. | Optional |
RPM = |
|
|
|
URL of the main instance of platform gateway that clients connect to. Use if you are performing a clustered deployment and you need to use the URL of the load balancer instead of the component’s server. The URL must start with | Optional | |
|
|
Denote whether the web cert sources are local to the installation program ( | Optional |
The value defined in | |
|
|
|
Controls whether client certificate authentication is enabled or disabled on the platform gateway PostgreSQL database. Set this variable to | Optional |
|
|
|
| Name of the PostgreSQL database used by platform gateway. | Optional |
RPM = |
|
|
| Hostname of the PostgreSQL database used by platform gateway. | Required | |
|
|
|
Password for the platform gateway PostgreSQL database user. Use of special characters for this variable is limited. The | Optional | |
|
|
| Port number for the PostgreSQL database used by platform gateway. | Optional |
|
|
|
|
Controls the SSL/TLS mode to use when platform gateway connects to the PostgreSQL database. Valid options include | Optional |
|
|
|
| Username for the platform gateway PostgreSQL database user. | Optional |
RPM = |
|
|
| Path to the PostgreSQL SSL/TLS certificate file for platform gateway. | Required if using client certificate authentication. | |
|
|
| Path to the PostgreSQL SSL/TLS key file for platform gateway. | Required if using client certificate authentication. | |
|
|
Denote whether the PostgreSQL client cert sources are local to the installation program ( | Optional |
The value defined in | |
|
|
| Hostname of the Redis host used by platform gateway. | Optional |
First node in the |
|
|
| Password for platform gateway Redis. | Optional | Randomly generated string. |
|
|
| Username for platform gateway Redis. | Optional |
|
|
|
| Secret key value used by platform gateway to sign and encrypt data. | Optional | |
|
|
| Path to the SSL/TLS certificate file for platform gateway. | Optional | |
|
|
| Path to the SSL/TLS key file for platform gateway. | Optional | |
|
|
|
Denote whether the platform gateway provided certificate files are local to the installation program ( | Optional |
|
|
|
|
The number of | Optional | The number of vCPUs multiplied by two, plus one. |
|
|
|
Controls whether archive compression is enabled or disabled for platform gateway. You can control this functionality globally by using | Optional |
|
|
|
|
Controls whether database compression is enabled or disabled for platform gateway. You can control this functionality globally by using | Optional |
|
|
|
| List of additional NGINX headers to add to platform gateway’s NGINX configuration. | Optional |
|
|
|
Denotes whether or not to verify platform gateway’s web certificates when making calls from platform gateway to itself during installation. Set to | Optional |
| |
|
|
|
Controls whether or not HTTPS is disabled when accessing the platform UI. Set to | Optional |
RPM = The value defined in |
|
|
| Port number on which the Envoy proxy listens for incoming HTTP connections. | Optional |
|
|
|
| Port number on which the Envoy proxy listens for incoming HTTPS connections. | Optional |
|
|
|
| Protocols that platform gateway supports when handling HTTPS traffic. | Optional |
|
|
|
|
Controls whether TLS is enabled or disabled for platform gateway Redis. Set this variable to | Optional |
|
|
|
| Port number for the Redis host for platform gateway. | Optional |
|
|
| Defines additional settings for use by platform gateway during installation. See the example inventory snippet after this table.
| Optional |
| |
|
| Maximum allowed size for data sent to platform gateway through NGINX. | Optional |
| |
|
| Maximum duration (in seconds) that HTTP Strict Transport Security (HSTS) is enforced for platform gateway. | Optional |
| |
|
|
Number of requests | Optional |
|
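A sketch of common platform gateway settings in an INI inventory, assuming the containerized variable names gateway_admin_password, gateway_pg_host, gateway_pg_password, gateway_main_url, and gateway_extra_settings; all values are placeholders:

```
[all:vars]
gateway_admin_password=<set-a-password>
gateway_pg_host=db.example.org
gateway_pg_password=<set-a-password>
# Clustered deployments behind a load balancer can point clients at the balancer URL
#gateway_main_url=https://aap.example.org
# EXAMPLE_SETTING is hypothetical; replace it with a real platform gateway setting
gateway_extra_settings=[{'setting': 'EXAMPLE_SETTING', 'value': 'example'}]
```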
B.9. Receptor variables Copy linkLink copied to clipboard!
Inventory file variables for Receptor.
| RPM variable name | Container variable name | Description | Required or optional | Default |
|---|---|---|---|---|
|
| The directory where receptor stores its runtime data and local artifacts. The target directory must be accessible to the awx user. If the target directory is a temporary file system (tmpfs), ensure it is remounted correctly after a reboot; otherwise, receptor no longer has a working directory. | Optional |
| |
|
|
| Port number that receptor listens on for incoming connections from other receptor nodes. | Optional |
|
|
|
| Protocol that receptor supports when handling traffic. | Optional |
|
|
|
|
Controls the verbosity of logging for receptor. Valid options include: | Optional |
|
|
|
Controls whether TLS is enabled or disabled for receptor. Set this variable to | Optional |
| |
|
See |
|
For the
For the
| Optional |
For the |
|
See |
| Used to indicate which nodes a specific host connects to. Wherever this variable is defined, an outbound connection to the specific host is established. The value must be a comma-separated list of hostnames. Do not use inventory group names.
This is resolved into a set of hosts that is used to construct the receptor configuration file. For more information, see Adding execution nodes. | Optional |
|
|
|
Controls whether signing of communications between receptor nodes is enabled or disabled. Set this variable to | Optional |
| |
|
|
Controls whether TLS is enabled or disabled for receptor. Set this variable to | Optional |
| |
|
| The firewall zone where receptor related firewall rules are applied. This controls which networks can access receptor based on the zone’s trust level. | Optional |
| |
|
|
Controls whether or not receptor only accepts connections that use TLS 1.3 or higher. Set to | Optional |
| |
|
| Path to the private key used by receptor to sign communications with other receptor nodes in the network. | Optional | ||
|
| Path to the public key used by receptor to sign communications with other receptor nodes in the network. | Optional | ||
|
|
Denote whether the receptor signing files are local to the installation program ( | Optional |
| |
|
| Path to the TLS certificate file for receptor. | Optional | ||
|
| Path to the TLS key file for receptor. | Optional | ||
|
|
Denote whether the receptor provided certificate files are local to the installation program ( | Optional |
| |
|
|
Controls whether archive compression is enabled or disabled for receptor. You can control this functionality globally by using | Optional |
|
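To illustrate how receptor variables apply per host, a sketch of an execution-nodes group in an INI inventory; the hostnames are placeholders and the peers value follows the comma-separated hostname format described above:

```
[execution_nodes]
exec1.example.org receptor_port=27199 receptor_protocol=tcp
exec2.example.org receptor_peers=exec1.example.org
```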
B.10. Redis variables Copy linkLink copied to clipboard!
Inventory file variables for Redis.
| RPM variable name | Container variable name | Description | Required or optional | Default |
|---|---|---|---|---|
|
|
|
The IPv4 address used by the Redis cluster to identify each host in the cluster. When defining hosts in the | Optional | RPM = Discovered IPv4 address from Ansible facts. If an IPv4 address is not available, the IPv6 address is used. Container = Discovered IPv4 address from Ansible facts.
|
|
Controls whether mTLS is enabled or disabled for Redis. Set this variable to | Optional |
| |
|
|
| The firewall zone where Redis related firewall rules are applied. This controls which networks can access Redis based on the zone’s trust level. | Optional |
RPM = no default set. Container = |
|
|
Hostname used by the Redis cluster when identifying and routing the host. By default | Optional |
The value defined in | |
|
|
|
The Redis mode to use for your Ansible Automation Platform installation. Valid options include: | Optional |
|
|
| Denotes whether or not to regenerate the Ansible Automation Platform managed TLS key pair for Redis. | Optional |
| |
|
|
| Path to the Redis server TLS certificate. | Optional | |
|
|
|
Denote whether the Redis provided certificate files are local to the installation program ( | Optional |
|
|
|
| Path to the Redis server TLS certificate key. | Optional | |
|
|
Controls whether archive compression is enabled or disabled for Redis. You can control this functionality globally by using | Optional |
|
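As a final sketch, selecting the Redis topology in an INI inventory; standalone mode is shown here, while clustered deployments additionally rely on the address, hostname, and firewall zone variables described above:

```
[all:vars]
redis_mode=standalone
```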