
RPM installation


Red Hat Ansible Automation Platform 2.5

Install the RPM version of Ansible Automation Platform

Red Hat Customer Content Services

Abstract

This guide shows you how to install Red Hat Ansible Automation Platform based on supported installation scenarios.

Preface

Thank you for your interest in Red Hat Ansible Automation Platform. Ansible Automation Platform is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments.

This guide helps you to understand the installation requirements and processes behind installing Ansible Automation Platform. This document has been updated to include information for the latest release of Ansible Automation Platform.

Providing feedback on Red Hat documentation

If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request.

Chapter 1. Red Hat Ansible Automation Platform installation overview

The Red Hat Ansible Automation Platform installation program offers you flexibility, allowing you to install Ansible Automation Platform by using several supported installation scenarios.

Regardless of the installation scenario you choose, installing Ansible Automation Platform involves the following steps:

Editing the Red Hat Ansible Automation Platform installer inventory file
The Ansible Automation Platform installer inventory file allows you to specify your installation scenario and describe host deployments to Ansible. The examples provided in this document show the parameter specifications needed to install that scenario for your deployment.
Running the Red Hat Ansible Automation Platform installer setup script
The setup script installs Ansible Automation Platform by using the required parameters defined in the inventory file.
Verifying your Ansible Automation Platform installation
After installing Ansible Automation Platform, you can verify that the installation has been successful by logging in to the platform UI and confirming that the expected functionality is available.

Additional resources

  1. For more information about the supported installation scenarios, see Planning your installation.
  2. For more information on available topologies, see Tested deployment models.

1.1. Prerequisites

Warning

To prevent errors, upgrade your RHEL nodes fully before installing Ansible Automation Platform.

Additional resources

For more information about obtaining a platform installer or system requirements, see System requirements in Planning your installation.

Chapter 2. System requirements

Use this information when planning your Red Hat Ansible Automation Platform installations and designing automation mesh topologies that fit your use case.

Prerequisites

  • You can obtain root access either through the sudo command, or through privilege escalation. For more on privilege escalation, see Understanding privilege escalation.
  • You can de-escalate privileges from root to users such as AWX, PostgreSQL, Event-Driven Ansible, or Pulp.
  • You have configured an NTP client on all nodes.
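On Red Hat Enterprise Linux, the NTP client is typically chrony. As a sketch, a minimal /etc/chrony.conf might look like the following; the pool host name is a placeholder, so substitute your organization's time servers:

```
# Use a public NTP pool (replace with your organization's time servers)
pool 2.rhel.pool.ntp.org iburst
# Record the rate at which the system clock gains or loses time
driftfile /var/lib/chrony/drift
# Step the clock if the offset is larger than 1 second, for the first 3 updates
makestep 1.0 3
```

You can verify synchronization afterwards with the chronyc tracking command.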

2.1. Red Hat Ansible Automation Platform system requirements

Your system must meet the following minimum system requirements to install and run Red Hat Ansible Automation Platform. A resilient deployment requires 10 virtual machines with a minimum of 16 gigabytes (GB) of RAM and 4 virtual CPUs (vCPU). See Tested deployment models for more information on topology options.

Table 2.1. Base system

Subscription
    Valid Red Hat Ansible Automation Platform subscription.

Operating system
  • Red Hat Enterprise Linux 8.8 or later minor versions of Red Hat Enterprise Linux 8
  • Red Hat Enterprise Linux 9.2 or later minor versions of Red Hat Enterprise Linux 9
    Red Hat Ansible Automation Platform is also supported on OpenShift; see Installing on OpenShift Container Platform for more information.

CPU architecture
    x86_64, AArch64, s390x (IBM Z), ppc64le (IBM Power)

Ansible-core
    Ansible-core version 2.16 or later. Ansible Automation Platform uses the system-wide ansible-core package to install the platform, but uses ansible-core 2.16 for both its control plane and built-in execution environments.

Browser
    A currently supported version of Mozilla Firefox or Google Chrome.

Database
    PostgreSQL 15. Red Hat Ansible Automation Platform 2.5 requires external (customer supported) databases to have ICU support.

Table 2.2. Virtual machine requirements

Component                       | RAM  | vCPUs | Disk IOPS | Storage
Platform gateway                | 16GB | 4     | 3000      | 60GB minimum
Control nodes                   | 16GB | 4     | 3000      | 80GB minimum with at least 20GB available under /var/lib/awx
Execution nodes                 | 16GB | 4     | 3000      | 60GB minimum
Hop nodes                       | 16GB | 4     | 3000      | 60GB minimum
Automation hub                  | 16GB | 4     | 3000      | 60GB minimum with at least 40GB allocated to /var/lib/pulp
Database                        | 16GB | 4     | 3000      | 100GB minimum allocated to /var/lib/pgsql
Event-Driven Ansible controller | 16GB | 4     | 3000      | 60GB minimum

Note

These are minimum requirements and can be increased for larger workloads in increments of 2x (for example, 16GB becomes 32GB and 4 vCPUs become 8 vCPUs). See the horizontal scaling guide for more information.

Repository requirements

Enable only the following repositories when installing Red Hat Ansible Automation Platform:

  • RHEL BaseOS
  • RHEL AppStream
Note

If you enable repositories besides those mentioned above, the Red Hat Ansible Automation Platform installation could fail unexpectedly.

The following are necessary for you to work with project updates and collections:

  • Ensure that the Network ports and protocols listed in Table 6.3. Automation Hub are available for successful connection and download of collections from automation hub or Ansible Galaxy server.

Additional notes for Red Hat Ansible Automation Platform requirements

  • If performing a bundled Ansible Automation Platform installation, the installation setup.sh script attempts to install ansible-core (and its dependencies) from the bundle for you.
  • If you have installed Ansible-core manually, the Ansible Automation Platform installation setup.sh script detects that Ansible has been installed and does not attempt to reinstall it.
Note

You must use ansible-core installed through dnf. Ansible-core version 2.16 is required for Ansible Automation Platform 2.5 and later.

2.2. Platform gateway system requirements

The platform gateway is the service that handles authentication and authorization for Ansible Automation Platform. It provides a single entry into the platform and serves the platform’s user interface.

You are required to set umask=0022.
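As a quick check on the gateway host, you can display the current mask and set the required value for the active session (a sketch; persist the setting through your shell profile or configuration management):

```shell
# Show the current file-mode creation mask
umask
# Set the mask required by the installer for the current session
umask 0022
```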

2.3. Automation controller system requirements

Automation controller is a distributed system, where different software components can be co-located or deployed across multiple compute nodes. In the installer, four node types are provided as abstractions to help you design the topology appropriate for your use case: control, hybrid, execution, and hop nodes.

Use the following recommendations for node sizing:

Execution nodes

Execution nodes run automation. Increase memory and CPU to increase capacity for running more forks.

Note
  • The RAM and CPU resources stated are minimum recommendations to handle the job load for a node to run an average number of jobs simultaneously.
  • Recommended RAM and CPU node sizes are not supplied. The required RAM or CPU depends directly on the number of jobs you are running in that environment.
  • For capacity based on forks in your configuration, see Automation controller capacity determination and job impact.

For further information about required RAM and CPU levels, see Performance tuning for automation controller.

Control nodes

Control nodes process events and run cluster jobs including project updates and cleanup jobs. Increasing CPU and memory can help with job event processing.

  • 40GB minimum with at least 20GB available under /var/lib/awx
  • Storage volume must be rated for a minimum baseline of 3000 IOPS
  • Projects are stored on control and hybrid nodes, and for the duration of jobs, are also stored on execution nodes. If the cluster has many large projects, consider doubling the GB in /var/lib/awx/projects, to avoid disk space errors.

Hop nodes

Hop nodes serve to route traffic from one part of the automation mesh to another (for example, a hop node could be a bastion host into another network). RAM can affect throughput; CPU activity is low. Network bandwidth and latency are generally more important factors than either RAM or CPU.

  • Actual RAM requirements vary based on how many hosts automation controller manages simultaneously (which is controlled by the forks parameter in the job template or the system ansible.cfg file). To avoid possible resource conflicts, Ansible recommends 1 GB of memory per 10 forks, plus a 2 GB reservation for automation controller. See Automation controller capacity determination and job impact. If forks is set to 400, 42 GB of memory is recommended.
  • Automation controller hosts check if umask is set to 0022. If not, the setup fails. Set umask=0022 to avoid this error.
  • A larger number of hosts can be addressed, but if the fork number is less than the total host count, more passes across the hosts are required. You can avoid these RAM limitations by using any of the following approaches:

    • Use rolling updates.
    • Use the provisioning callback system built into automation controller, where each system requesting configuration enters a queue and is processed as quickly as possible.
    • Pre-configure images, such as AMIs, in cases where automation controller is producing or deploying them.
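The memory sizing rule above (1 GB of memory per 10 forks, plus a 2 GB reservation for automation controller) can be sketched as a quick calculation; the fork count is an example value:

```python
def recommended_memory_gb(forks: int) -> int:
    """Estimate node memory: 1 GB per 10 forks plus a 2 GB reservation."""
    return forks // 10 + 2

# With forks set to 400, the guidance above recommends 42 GB of memory.
print(recommended_memory_gb(400))  # 42
```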


2.4. Automation hub system requirements

Automation hub allows you to discover and use new certified automation content from Red Hat Ansible and Certified Partners. On Ansible automation hub, you can discover and manage Ansible Collections, which are supported automation content developed by Red Hat and its partners for use cases such as cloud automation, network automation, and security automation.

Note

Private automation hub

If you install private automation hub from an internal address, and have a certificate which only encompasses the external address, this can result in an installation that cannot be used as a container registry without certificate issues.

To avoid this, use the automationhub_main_url inventory variable with a value such as https://pah.example.com linking to the private automation hub node in the installation inventory file.

This adds the external address to /etc/pulp/settings.py. This implies that you only want to use the external address.
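For example, in the installation inventory file (the host name is a placeholder):

```
[all:vars]
automationhub_main_url = https://pah.example.com
```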

For information about inventory file variables, see Inventory file variables.

2.4.1. High availability automation hub requirements

Before deploying a high availability (HA) automation hub, ensure that you have a shared storage file system installed in your environment and that you have configured your network storage system, if applicable.

2.4.1.1. Required shared storage

Shared storage is required when installing more than one Automation hub with a file storage backend. The supported shared storage type for RPM-based installations is Network File System (NFS).

Before you run the Red Hat Ansible Automation Platform installer, verify that the /var/lib/pulp directory is present across your cluster as part of the shared storage file system installation. The Red Hat Ansible Automation Platform installer returns an error if /var/lib/pulp is not detected in one of your nodes, causing your high availability automation hub setup to fail.

If you receive an error stating /var/lib/pulp is not detected in one of your nodes, ensure /var/lib/pulp is properly mounted in all servers and re-run the installer.
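As a quick local check on each node, you can verify that /var/lib/pulp is actually a mount point rather than a plain directory; a sketch using only the Python standard library:

```python
import os

def is_shared_mount(path: str = "/var/lib/pulp") -> bool:
    """Return True if the path exists and is a mount point (for example, an NFS mount)."""
    return os.path.exists(path) and os.path.ismount(path)

print(is_shared_mount())
```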

2.4.1.2. Installing firewalld for HA hub deployment

If you intend to install a HA automation hub using network storage on the automation hub nodes themselves, you must first install and use firewalld to open the ports required by your shared storage system before running the Ansible Automation Platform installer.

Install and configure firewalld by executing the following commands:

  1. Install the firewalld daemon:

    $ dnf install firewalld
  2. Add your network storage under <service> using the following command:

    $ firewall-cmd --permanent --add-service=<service>
    Note

    For a list of supported services, use the firewall-cmd --get-services command.

  3. Reload to apply the configuration:

    $ firewall-cmd --reload

2.5. Event-Driven Ansible controller system requirements

The Event-Driven Ansible controller is a single-node system capable of handling a variable number of long-running processes (such as rulebook activations) on-demand, depending on the number of CPU cores.

Note

If you want to use Event-Driven Ansible 2.5 with a 2.4 automation controller version, see Using Event-Driven Ansible 2.5 with Ansible Automation Platform 2.4.

Use the following minimum requirements to run, by default, a maximum of 12 simultaneous activations:

Requirement | Required
RAM         | 16 GB
CPUs        | 4

Local disk

  • Hard drive must be 40 GB minimum with at least 20 GB available under /var.
  • Storage volume must be rated for a minimum baseline of 3000 IOPS.
  • If the cluster has many large projects or decision environment images, consider doubling the GB in /var to avoid disk space errors.
Important
  • If you are running Red Hat Enterprise Linux 8 and want to set your memory limits, you must have cgroup v2 enabled before you install Event-Driven Ansible. For specific instructions, see the Knowledge-Centered Support (KCS) article, Ansible Automation Platform Event-Driven Ansible controller for Red Hat Enterprise Linux 8 requires cgroupv2.
  • When you activate an Event-Driven Ansible rulebook under standard conditions, it uses about 250 MB of memory. However, the actual memory consumption can vary significantly based on the complexity of your rules and the volume and size of the events processed. In scenarios where a large number of events are anticipated or the rulebook complexity is high, conduct a preliminary assessment of resource usage in a staging environment. This ensures that your maximum number of activations is based on the capacity of your resources.
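As a rough planning aid, the figures above (about 250 MB per activation under standard conditions, and a default ceiling of 12 simultaneous activations) imply the following baseline memory budget. Treat this strictly as an estimate, since real usage depends on rule complexity and event volume:

```python
MB_PER_ACTIVATION = 250       # approximate usage under standard conditions (see above)
DEFAULT_MAX_ACTIVATIONS = 12  # default maximum simultaneous activations

# Baseline memory the default activation ceiling could consume, in MB
baseline_mb = MB_PER_ACTIVATION * DEFAULT_MAX_ACTIVATIONS
print(baseline_mb)  # 3000
```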

For an example of setting Event-Driven Ansible controller maximum running activations, see Single automation controller, single automation hub, and single Event-Driven Ansible controller node with external (installer managed) database.

2.6. PostgreSQL requirements

Red Hat Ansible Automation Platform 2.5 uses PostgreSQL 15 and requires external (customer supported) databases to have ICU support. PostgreSQL user passwords are hashed with the SCRAM-SHA-256 secure hashing algorithm before being stored in the database.

To determine if your automation controller instance has access to the database, use the awx-manage check_db command.

Note
  • Automation controller data is stored in the database. Database storage increases with the number of hosts managed, number of jobs run, number of facts stored in the fact cache, and number of tasks in any individual job. For example, a playbook that runs every hour (24 times a day) across 250 hosts, with 20 tasks per run, stores over 800,000 events in the database every week.
  • If not enough space is reserved in the database, old job runs and facts must be cleaned up on a regular basis. For more information, see Management Jobs in the Configuring automation execution guide.
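The storage example above works out as follows; a quick sketch, assuming roughly one job event per task per host per run:

```python
runs_per_day = 24    # playbook runs every hour
hosts = 250
tasks_per_run = 20

events_per_week = runs_per_day * hosts * tasks_per_run * 7
print(events_per_week)  # 840000, i.e. over 800,000 events per week
```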

PostgreSQL Configurations

Optionally, you can configure the PostgreSQL database as separate nodes that are not managed by the Red Hat Ansible Automation Platform installer. When the Ansible Automation Platform installer manages the database server, it configures the server with defaults that are generally recommended for most workloads. For more information about the settings you can use to improve database performance, see PostgreSQL database configuration and maintenance for automation controller in the Configuring automation execution guide.

Additional resources

For more information about tuning your PostgreSQL server, see the PostgreSQL documentation.

2.6.1. Setting up an external (customer supported) database

Important
  • When using an external database with Ansible Automation Platform, you must create and maintain that database. Ensure that you clear your external database when uninstalling Ansible Automation Platform.
  • Red Hat Ansible Automation Platform 2.5 uses PostgreSQL 15 and requires the external (customer supported) databases to have ICU support.
  • During configuration of an external database, you must check the external database coverage. For more information, see Red Hat Ansible Automation Platform Database Scope of Coverage.

Red Hat Ansible Automation Platform 2.5 uses PostgreSQL 15 and requires the external (customer supported) databases to have ICU support. Use the following procedure to configure an external PostgreSQL compliant database for use with an Ansible Automation Platform component, for example automation controller, Event-Driven Ansible, automation hub, and platform gateway.

Procedure

  1. Connect to a PostgreSQL compliant database server with superuser privileges:

    # psql -h <db.example.com> -U superuser -p 5432 -d postgres
    Password for user superuser:

    The psql options used here are:

    -h hostname
    --host=hostname
    Specifies the host name of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the UNIX-domain socket.

    -d dbname
    --dbname=dbname
    Specifies the name of the database to connect to. This is equal to specifying dbname as the first non-option argument on the command line. The dbname can be a connection string. If so, connection string parameters override any conflicting command line options.

    -U username
    --username=username
    Connects to the database as the user username instead of the default (you must have permission to do so).

  2. Create the user, database, and password, with the createDB or administrator role assigned to the user. For further information, see Database Roles.
  3. Run the installation program. If you are using a PostgreSQL database, the database is owned by the connecting user and must have a createDB or administrator role assigned to it.
  4. Check that you can connect to the created database with the credentials provided in the inventory file.
  5. Check the permissions of the user. The user should have the createDB or administrator role.
  6. After you create the PostgreSQL users and databases for each component, add the database credentials and host details in the inventory file under the [all:vars] group.

    # Automation controller
    pg_host=data.example.com
    pg_database=<database name>
    pg_port=<port_number>
    pg_username=<set your own>
    pg_password=<set your own>
    
    # Platform gateway
    automationgateway_pg_host=aap.example.org
    automationgateway_pg_database=<set your own>
    automationgateway_pg_port=<port_number>
    automationgateway_pg_username=<set your own>
    automationgateway_pg_password=<set your own>
    
    # Automation hub
    automationhub_pg_host=data.example.com
    automationhub_pg_database=<database_name>
    automationhub_pg_port=<port_number>
    automationhub_pg_username=<username>
    automationhub_pg_password=<password>
    
    # Event-Driven Ansible
    automationedacontroller_pg_host=data.example.com
    automationedacontroller_pg_database=<database_name>
    automationedacontroller_pg_port=<port_number>
    automationedacontroller_pg_username=<username>
    automationedacontroller_pg_password=<password>
2.6.1.1. Optional: Enabling mutual TLS (mTLS) authentication

mTLS authentication is disabled by default. To configure each component’s database with mTLS authentication, add the following variables to your inventory file under the [all:vars] group and ensure each component has a different TLS certificate and key:

# Automation controller
pgclient_sslcert=/path/to/awx.cert
pgclient_sslkey=/path/to/awx.key
pg_sslmode=verify-full or verify-ca

# Platform gateway
automationgateway_pgclient_sslcert=/path/to/gateway.cert
automationgateway_pgclient_sslkey=/path/to/gateway.key
automationgateway_pg_sslmode=verify-full or verify-ca

# Automation hub
automationhub_pgclient_sslcert=/path/to/pulp.cert
automationhub_pgclient_sslkey=/path/to/pulp.key
automationhub_pg_sslmode=verify-full or verify-ca

# Event-Driven Ansible
automationedacontroller_pgclient_sslcert=/path/to/eda.cert
automationedacontroller_pgclient_sslkey=/path/to/eda.key
automationedacontroller_pg_sslmode=verify-full or verify-ca
2.6.1.2. Optional: Using custom TLS certificates

By default, the installation program generates self-signed TLS certificates and keys for all Ansible Automation Platform services.

If you want to replace these with your own custom certificate and key, then set the following inventory file variables:

aap_ca_cert_file=<path_to_ca_tls_certificate>
aap_ca_key_file=<path_to_ca_tls_key>

If any of your certificates are signed by a custom Certificate Authority (CA), then you must specify the Certificate Authority’s certificate by using the custom_ca_cert inventory file variable:

custom_ca_cert=<path_to_custom_ca_certificate>
Note

If you have more than one custom CA certificate, combine them into a single file, then reference the combined certificate with the custom_ca_cert inventory file variable.
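The combination step can be as simple as concatenating the PEM files. In this sketch the file names are placeholders, and the printf lines only create stand-in files so the example is self-contained; use your real CA certificates:

```shell
# Stand-in certificate files for illustration only; substitute your real CA certificates.
printf '%s\n' '-----BEGIN CERTIFICATE-----' '...' '-----END CERTIFICATE-----' > intermediate-ca.pem
printf '%s\n' '-----BEGIN CERTIFICATE-----' '...' '-----END CERTIFICATE-----' > root-ca.pem

# Combine the CA certificates into one bundle, then reference it with custom_ca_cert:
cat intermediate-ca.pem root-ca.pem > combined-ca.pem
```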

2.6.2. Enabling the hstore extension for the automation hub PostgreSQL database

Added in Ansible Automation Platform 2.5, the database migration script uses hstore fields to store information; therefore, the hstore extension must be enabled in the automation hub PostgreSQL database.

This process is automatic when using the Ansible Automation Platform installer and a managed PostgreSQL server.

If the PostgreSQL database is external, you must enable the hstore extension in the automation hub PostgreSQL database manually before installation.

If the hstore extension is not enabled before installation, a failure occurs during database migration.

Procedure

  1. Check if the extension is available on the PostgreSQL server (automation hub database).

    $ psql -d <automation hub database> -c "SELECT * FROM pg_available_extensions WHERE name='hstore'"
  2. The default value for <automation hub database> is automationhub.

    Example output with hstore available:

    name   | default_version | installed_version | comment
    -------+-----------------+-------------------+--------------------------------------------------
    hstore | 1.7             |                   | data type for storing sets of (key, value) pairs
    (1 row)

    Example output with hstore not available:

     name | default_version | installed_version | comment
    ------+-----------------+-------------------+---------
    (0 rows)
  3. On a RHEL based server, the hstore extension is included in the postgresql-contrib RPM package, which is not installed automatically when installing the PostgreSQL server RPM package.

    To install the RPM package, use the following command:

    dnf install postgresql-contrib
  4. Load the hstore PostgreSQL extension into the automation hub database with the following command:

    $ psql -d <automation hub database> -c "CREATE EXTENSION hstore;"

    In the following output, the installed_version field lists the hstore extension used, indicating that hstore is enabled.

    name   | default_version | installed_version | comment
    -------+-----------------+-------------------+--------------------------------------------------
    hstore | 1.7             | 1.7               | data type for storing sets of (key, value) pairs
    (1 row)

2.6.3. Benchmarking storage performance for the Ansible Automation Platform PostgreSQL database

Check whether the minimum Ansible Automation Platform PostgreSQL database requirements are met by using the Flexible I/O Tester (FIO) tool. FIO is a tool used to benchmark read and write IOPS performance of the storage system.

Prerequisites

  • You have installed the Flexible I/O Tester (fio) storage performance benchmarking tool.

    To install fio, run the following command as the root user:

    # yum -y install fio
  • You have adequate disk space to store the fio test data log files.

    The examples shown in the procedure require at least 60GB disk space in the /tmp directory:

    • numjobs sets the number of jobs run by the command.
    • size=10G sets the file size generated by each job.
  • Optional: To reduce the amount of test data generated, decrease the value of the size parameter.

Procedure

  1. Run a random write test:

    $ fio --name=write_iops --directory=/tmp --numjobs=3 --size=10G \
    --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 \
    --verify=0 --bs=4K --iodepth=64 --rw=randwrite \
    --group_reporting=1 > /tmp/fio_benchmark_write_iops.log \
    2>> /tmp/fio_write_iops_error.log
  2. Run a random read test:

    $ fio --name=read_iops --directory=/tmp \
    --numjobs=3 --size=10G --time_based --runtime=60s --ramp_time=2s \
    --ioengine=libaio --direct=1 --verify=0 --bs=4K --iodepth=64 --rw=randread \
    --group_reporting=1 > /tmp/fio_benchmark_read_iops.log \
    2>> /tmp/fio_read_iops_error.log
  3. Review the results:

    In the log files written by the benchmark commands, search for the line beginning with iops. This line shows the minimum, maximum, and average values for the test.

    The following example shows the line in the log file for the random read test:

    $ cat /tmp/fio_benchmark_read_iops.log
    read_iops: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
    […]
       iops        : min=50879, max=61603, avg=56221.33, stdev=679.97, samples=360
    […]
    Note

    The above is a baseline to help evaluate best-case performance on your systems. Systems change, and performance can vary depending on what else is happening on your systems, storage, or network at the time of testing. Review, monitor, and revisit the log files according to your own business requirements, application workloads, and new demands.
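To compare results against the 3000 IOPS baseline, you can pull the iops line out of the log programmatically; a sketch using the sample line shown above:

```python
import re

# Sample "iops" line from an fio log, as in the example output above
line = "iops        : min=50879, max=61603, avg=56221.33, stdev=679.97, samples=360"

match = re.search(r"min=(\d+), max=(\d+), avg=([\d.]+)", line)
if match:
    iops_min, iops_max = int(match.group(1)), int(match.group(2))
    iops_avg = float(match.group(3))
    print(iops_min >= 3000)  # True when the storage meets the minimum baseline
```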

Chapter 3. Installing Red Hat Ansible Automation Platform

Ansible Automation Platform is a modular platform. The platform gateway deploys automation platform components, such as automation controller, automation hub, and Event-Driven Ansible controller.

For more information about the components provided with Ansible Automation Platform, see Red Hat Ansible Automation Platform components in Planning your installation.

There are several supported installation scenarios for Red Hat Ansible Automation Platform. To install Red Hat Ansible Automation Platform, you must edit the inventory file parameters to specify your installation scenario. You can use the enterprise installer as a basis for your own inventory file.

Additional resources

For a comprehensive list of pre-defined variables used in Ansible installation inventory files, see Ansible variables.

3.1. Editing the Red Hat Ansible Automation Platform installer inventory file

You can use the Red Hat Ansible Automation Platform installer inventory file to specify your installation scenario.

Procedure

  1. Navigate to the installer:

    1. [RPM installed package]

      $ cd /opt/ansible-automation-platform/installer/
    2. [bundled installer]

      $ cd ansible-automation-platform-setup-bundle-<latest-version>
    3. [online installer]

      $ cd ansible-automation-platform-setup-<latest-version>
  2. Open the inventory file with a text editor.
  3. Edit inventory file parameters to specify your installation scenario. You can use one of the supported Installation scenario examples as the basis for your inventory file.

Additional resources

  • For a comprehensive list of pre-defined variables used in Ansible installation inventory files, see Inventory file variables.

3.2. Inventory file examples based on installation scenarios

Red Hat supports several installation scenarios for Ansible Automation Platform. You can develop your own inventory files using the example files as a basis, or you can use the example closest to your preferred installation scenario.

3.2.1. Inventory file recommendations based on installation scenarios

Before selecting your installation method for Ansible Automation Platform, review the following recommendations. Familiarity with these recommendations will streamline the installation process.

  • Provide a reachable IP address or fully qualified domain name (FQDN) for hosts to ensure users can sync and install content from automation hub from a different node.

    The FQDN must not contain the - or _ symbols, as they are not processed correctly.

    Do not use localhost.

  • admin is the default user ID for the initial log in to Ansible Automation Platform and cannot be changed in the inventory file.
  • Use of special characters for pg_password is limited. The !, #, 0 and @ characters are supported. Use of other special characters can cause the setup to fail.
  • Enter your Red Hat Registry Service Account credentials in registry_username and registry_password to link to the Red Hat container registry.
  • The inventory file variables registry_username and registry_password are only required if a non-bundle installer is used.
3.2.1.1. Single platform gateway and automation controller with an external (installer managed) database

Use this example to see what is minimally needed within the inventory file to deploy single instances of platform gateway and automation controller with an external (installer managed) database.

[automationcontroller]
controller.example.com

[automationgateway]
gateway.example.com

[database]
data.example.com

[all:vars]
admin_password='<password>'
redis_mode=standalone
pg_host='data.example.com'
pg_port=5432
pg_database='awx'
pg_username='awx'
pg_password='<password>'
pg_sslmode='prefer' # set to 'verify-full' for client-side enforced SSL

registry_url='registry.redhat.io'
registry_username='<registry username>'
registry_password='<registry password>'

# Automation Gateway configuration
automationgateway_admin_password=''

automationgateway_pg_host='data.example.com'
automationgateway_pg_port=5432

automationgateway_pg_database='automationgateway'
automationgateway_pg_username='automationgateway'
automationgateway_pg_password=''
automationgateway_pg_sslmode='prefer'

# The main automation gateway URL that clients will connect to (e.g. https://<load balancer host>).
# If not specified, the first node in the [automationgateway] group will be used when needed.
# automationgateway_main_url = ''

# Certificate and key to install in Automation Gateway
# automationgateway_ssl_cert=/path/to/automationgateway.cert
# automationgateway_ssl_key=/path/to/automationgateway.key

# SSL-related variables
# If set, this will install a custom CA certificate to the system trust store.
# custom_ca_cert=/path/to/ca.crt
# Certificate and key to install in nginx for the web UI and API
# web_server_ssl_cert=/path/to/tower.cert
# web_server_ssl_key=/path/to/tower.key
# Server-side SSL settings for PostgreSQL (when we are installing it).
# postgres_use_ssl=False
# postgres_ssl_cert=/path/to/pgsql.crt
# postgres_ssl_key=/path/to/pgsql.key
3.2.1.2. Single platform gateway, automation controller, and automation hub with an external (installer managed) database

Use this example to populate the inventory file to deploy single instances of platform gateway, automation controller, and automation hub with an external (installer managed) database.

[automationcontroller]
controller.example.com

[automationhub]
automationhub.example.com

[automationgateway]
gateway.example.com

[database]
data.example.com

[all:vars]
admin_password='<password>'
redis_mode=standalone
pg_host='data.example.com'
pg_port='5432'
pg_database='awx'
pg_username='awx'
pg_password='<password>'
pg_sslmode='prefer'  # set to 'verify-full' for client-side enforced SSL

registry_url='registry.redhat.io'
registry_username='<registry username>'
registry_password='<registry password>'

automationhub_admin_password='<password>'

automationhub_pg_host='data.example.com'
automationhub_pg_port=5432

automationhub_pg_database='automationhub'
automationhub_pg_username='automationhub'
automationhub_pg_password='<password>'
automationhub_pg_sslmode='prefer'

# The default install will deploy a TLS enabled Automation Hub.
# If for some reason this is not the behavior wanted one can
# disable TLS enabled deployment.
#
# automationhub_disable_https = False
# The default install will generate self-signed certificates for the Automation
# Hub service. If you are providing valid certificate via automationhub_ssl_cert
# and automationhub_ssl_key, one should toggle that value to True.
#
# automationhub_ssl_validate_certs = False
# SSL-related variables
# If set, this will install a custom CA certificate to the system trust store.
# custom_ca_cert=/path/to/ca.crt
# Certificate and key to install in Automation Hub node
# automationhub_ssl_cert=/path/to/automationhub.cert
# automationhub_ssl_key=/path/to/automationhub.key

# Automation Gateway configuration
automationgateway_admin_password=''

automationgateway_pg_host='data.example.com'
automationgateway_pg_port=5432

automationgateway_pg_database='automationgateway'
automationgateway_pg_username='automationgateway'
automationgateway_pg_password=''
automationgateway_pg_sslmode='prefer'

# The main automation gateway URL that clients will connect to (e.g. https://<load balancer host>).
# If not specified, the first node in the [automationgateway] group will be used when needed.
# automationgateway_main_url = ''

# Certificate and key to install in Automation Gateway
# automationgateway_ssl_cert=/path/to/automationgateway.cert
# automationgateway_ssl_key=/path/to/automationgateway.key

# Certificate and key to install in nginx for the web UI and API
# web_server_ssl_cert=/path/to/tower.cert
# web_server_ssl_key=/path/to/tower.key
# Server-side SSL settings for PostgreSQL (when we are installing it).
# postgres_use_ssl=False
# postgres_ssl_cert=/path/to/pgsql.crt
# postgres_ssl_key=/path/to/pgsql.key
3.2.1.3. Single platform gateway, automation controller, automation hub, and Event-Driven Ansible controller with an external (installer managed) database

Use this example to populate the inventory file to deploy single instances of platform gateway, automation controller, automation hub, and Event-Driven Ansible controller with an external (installer managed) database.

Important
  • This scenario requires a minimum of automation controller 2.4 for successful deployment of Event-Driven Ansible controller.
  • Event-Driven Ansible controller must be installed on a separate server and cannot be installed on the same host as automation hub and automation controller.
  • When an Event-Driven Ansible rulebook is activated under standard conditions, it uses approximately 250 MB of memory. However, the actual memory consumption can vary significantly based on the complexity of the rules and the volume and size of the events processed. In scenarios where a large number of events are anticipated or the rulebook complexity is high, conduct a preliminary assessment of resource usage in a staging environment. This ensures that the maximum number of activations is based on the resource capacity. In the following example, the default automationedacontroller_max_running_activations setting is 12, but can be adjusted to fit capacity.
[automationcontroller]
controller.example.com

[automationhub]
automationhub.example.com

[automationedacontroller]
automationedacontroller.example.com

[automationgateway]
gateway.example.com

[database]
data.example.com

[all:vars]
admin_password='<password>'
redis_mode=standalone
pg_host='data.example.com'
pg_port='5432'
pg_database='awx'
pg_username='awx'
pg_password='<password>'
pg_sslmode='prefer'  # set to 'verify-full' for client-side enforced SSL

registry_url='registry.redhat.io'
registry_username='<registry username>'
registry_password='<registry password>'

# Automation hub configuration

automationhub_admin_password='<password>'

automationhub_pg_host='data.example.com'
automationhub_pg_port=5432

automationhub_pg_database='automationhub'
automationhub_pg_username='automationhub'
automationhub_pg_password='<password>'
automationhub_pg_sslmode='prefer'

# Automation Event-Driven Ansible controller configuration

automationedacontroller_admin_password='<eda-password>'

automationedacontroller_pg_host='data.example.com'
automationedacontroller_pg_port=5432

automationedacontroller_pg_database='automationedacontroller'
automationedacontroller_pg_username='automationedacontroller'
automationedacontroller_pg_password='<password>'

# Keystore file to install in SSO node
# sso_custom_keystore_file='/path/to/sso.jks'

# This install will deploy SSO with sso_use_https=True
# Keystore password is required for https enabled SSO
sso_keystore_password=''

# This install will deploy a TLS enabled Automation Hub.
# If for some reason this is not the behavior wanted one can
# disable TLS enabled deployment.
#
# automationhub_disable_https = False
# The default install will generate self-signed certificates for the Automation
# Hub service. If you are providing valid certificate via automationhub_ssl_cert
# and automationhub_ssl_key, one should toggle that value to True.
#
# automationhub_ssl_validate_certs = False
# SSL-related variables
# If set, this will install a custom CA certificate to the system trust store.
# custom_ca_cert=/path/to/ca.crt
# Certificate and key to install in Automation Hub node
# automationhub_ssl_cert=/path/to/automationhub.cert
# automationhub_ssl_key=/path/to/automationhub.key

# Automation Gateway configuration
automationgateway_admin_password=''

automationgateway_pg_host='data.example.com'
automationgateway_pg_port=5432

automationgateway_pg_database='automationgateway'
automationgateway_pg_username='automationgateway'
automationgateway_pg_password=''
automationgateway_pg_sslmode='prefer'

# The main automation gateway URL that clients will connect to (e.g. https://<load balancer host>).
# If not specified, the first node in the [automationgateway] group will be used when needed.
# automationgateway_main_url = ''

# Certificate and key to install in Automation Gateway
# automationgateway_ssl_cert=/path/to/automationgateway.cert
# automationgateway_ssl_key=/path/to/automationgateway.key

# Certificate and key to install in nginx for the web UI and API
# web_server_ssl_cert=/path/to/tower.cert
# web_server_ssl_key=/path/to/tower.key
# Server-side SSL settings for PostgreSQL (when we are installing it).
# postgres_use_ssl=False
# postgres_ssl_cert=/path/to/pgsql.crt
# postgres_ssl_key=/path/to/pgsql.key

# Boolean flag used to verify Automation Controller's
# web certificates when making calls from Automation Event-Driven Ansible controller.
# automationedacontroller_controller_verify_ssl = true
#
# Certificate and key to install in Automation Event-Driven Ansible controller node
# automationedacontroller_ssl_cert=/path/to/automationeda.crt
# automationedacontroller_ssl_key=/path/to/automationeda.key

Additional resources

For more information about these inventory variables, refer to the Ansible automation hub variables.

3.2.1.4. High availability automation hub

Use the following examples to populate the inventory file to install a highly available automation hub. This inventory file includes a highly available automation hub with a clustered setup.

You can configure your HA deployment further to enable a high availability deployment of automation hub on SELinux.

Specify database host IP

  • Specify the IP address for your database host, using the automationhub_pg_host and automationhub_pg_port inventory variables. For example:
automationhub_pg_host='192.0.2.10'
automationhub_pg_port=5432
  • Also specify the IP address for your database host in the [database] section, using the value in the automationhub_pg_host inventory variable:
[database]
192.0.2.10

List all instances in a clustered setup

  • If installing a clustered setup, replace localhost ansible_connection=local in the [automationhub] section with the hostname or IP of all instances. For example:
[automationhub]
automationhub1.testing.ansible.com ansible_user=cloud-user
automationhub2.testing.ansible.com ansible_user=cloud-user
automationhub3.testing.ansible.com ansible_user=cloud-user

Next steps

Check that the following directives are present in /etc/pulp/settings.py in each of the private automation hub servers:

USE_X_FORWARDED_PORT = True
USE_X_FORWARDED_HOST = True
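A quick way to confirm both directives is with grep. The sketch below runs against a scratch copy of the file for illustration; on an actual hub node, point grep at /etc/pulp/settings.py instead:

```shell
# Scratch copy for illustration only; on a hub node, skip this step
# and grep /etc/pulp/settings.py directly.
cat > settings.py <<'EOF'
USE_X_FORWARDED_PORT = True
USE_X_FORWARDED_HOST = True
EOF

# Both directives should appear in the output.
grep -E 'USE_X_FORWARDED_(PORT|HOST)' settings.py
```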
Note

If automationhub_main_url is not specified, the first node in the [automationhub] group will be used as default.

3.2.1.5. Enabling a high availability (HA) deployment of automation hub on SELinux

You can configure the inventory file to enable high availability deployment of automation hub on SELinux. You must create two mount points for /var/lib/pulp and /var/lib/pulp/pulpcore_static, and then assign the appropriate SELinux contexts to each.

Note

You must add the context for /var/lib/pulp/pulpcore_static and run the Ansible Automation Platform installer before adding the context for /var/lib/pulp.

Prerequisites

  • You have already configured an NFS export on your server.

    Note

    The NFS share is hosted on an external server and is not a part of high availability automation hub deployment.

Procedure

  1. Create a mount point at /var/lib/pulp:

    $ mkdir /var/lib/pulp/
  2. Open /etc/fstab using a text editor, then add the following values:

    srv_rhel8:/data /var/lib/pulp nfs defaults,_netdev,nosharecache,context="system_u:object_r:var_lib_t:s0" 0 0
    srv_rhel8:/data/pulpcore_static /var/lib/pulp/pulpcore_static nfs defaults,_netdev,nosharecache,context="system_u:object_r:httpd_sys_content_rw_t:s0" 0 0
  3. Run the reload systemd manager configuration command:

    $ systemctl daemon-reload
  4. Run the mount command for /var/lib/pulp:

    $ mount /var/lib/pulp
  5. Create a mount point at /var/lib/pulp/pulpcore_static:

    $ mkdir /var/lib/pulp/pulpcore_static
  6. Run the mount command:

    $ mount -a
  7. With the mount points set up, run the Ansible Automation Platform installer:

    $ setup.sh -- -b --become-user root
  8. After the installation is complete, unmount the /var/lib/pulp/ mount point.


3.2.1.5.1. Configuring pulpcore.service

After you have configured the inventory file and applied the SELinux context, you must configure the Pulp service.

Procedure

  1. With the two mount points set up, shut down the Pulp service to configure pulpcore.service:

    $ systemctl stop pulpcore.service
  2. Edit pulpcore.service using systemctl:

    $ systemctl edit pulpcore.service
  3. Add the following entry to pulpcore.service to ensure that automation hub services start only after starting the network and mounting the remote mount points:

    [Unit]
    After=network.target var-lib-pulp.mount
  4. Enable remote-fs.target:

    $ systemctl enable remote-fs.target
  5. Reboot the system:

    $ systemctl reboot

Troubleshooting

A bug in the pulpcore SELinux policies can cause the token authentication public/private keys in /etc/pulp/certs/ to not have the proper SELinux labels, causing the pulp process to fail. When this occurs, run the following command to temporarily attach the proper labels:

$ chcon system_u:object_r:pulpcore_etc_t:s0 /etc/pulp/certs/token_{private,public}_key.pem

Repeat this command to reattach the proper SELinux labels whenever you relabel your system.

3.2.1.5.2. Applying the SELinux context

After you have configured the inventory file, you must now apply the context to enable the high availability (HA) deployment of automation hub on SELinux.

Procedure

  1. Shut down the Pulp service:

    $ systemctl stop pulpcore.service
  2. Unmount /var/lib/pulp/pulpcore_static:

    $ umount /var/lib/pulp/pulpcore_static
  3. Unmount /var/lib/pulp/:

    $ umount /var/lib/pulp/
  4. Open /etc/fstab using a text editor, then replace the existing value for /var/lib/pulp with the following:

    srv_rhel8:/data /var/lib/pulp nfs defaults,_netdev,nosharecache,context="system_u:object_r:pulpcore_var_lib_t:s0" 0 0
  5. Run the mount command:

    $ mount -a
3.2.1.6. Configuring content signing on private automation hub

To successfully sign and publish Ansible Certified Content Collections, you must configure private automation hub for signing.

Prerequisites

  • Your GnuPG key pairs have been securely set up and managed by your organization.
  • Your public-private key pair has proper access for configuring content signing on private automation hub.

Procedure

  1. Create a signing script that accepts only a filename.

    Note

    This script acts as the signing service and must generate an ASCII-armored detached GPG signature for that file using the key specified through the PULP_SIGNING_KEY_FINGERPRINT environment variable.

    The script prints out a JSON structure with the following format.

    {"file": "filename", "signature": "filename.asc"}

    All the file names are relative paths inside the current working directory. The file name must remain the same for the detached signature.

    Example:

    The following script produces signatures for content:

    #!/usr/bin/env bash
    
    FILE_PATH=$1
    SIGNATURE_PATH="$1.asc"
    
    ADMIN_ID="$PULP_SIGNING_KEY_FINGERPRINT"
    PASSWORD="password"
    
    # Create a detached signature
    gpg --quiet --batch --pinentry-mode loopback --yes --passphrase \
       $PASSWORD --homedir ~/.gnupg/ --detach-sign --default-key $ADMIN_ID \
       --armor --output $SIGNATURE_PATH $FILE_PATH
    
    # Check the exit status
    STATUS=$?
    if [ $STATUS -eq 0 ]; then
       echo {\"file\": \"$FILE_PATH\", \"signature\": \"$SIGNATURE_PATH\"}
    else
       exit $STATUS
    fi

    After you deploy a private automation hub with signing enabled to your Ansible Automation Platform cluster, new UI additions are displayed in collections.

  2. Review the Ansible Automation Platform installer inventory file for options that begin with automationhub_*.

    [all:vars]
    .
    .
    .
    automationhub_create_default_collection_signing_service = True
    automationhub_auto_sign_collections = True
    automationhub_require_content_approval = True
    automationhub_collection_signing_service_key = /abs/path/to/galaxy_signing_service.gpg
    automationhub_collection_signing_service_script = /abs/path/to/collection_signing.sh

    The two new keys (automationhub_auto_sign_collections and automationhub_require_content_approval) indicate that the collections must be signed and approved after they are uploaded to private automation hub.
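Before wiring the signing script into the installer, you can exercise the same detached-signing flow with a throwaway GPG key. This is an illustrative sketch only: in production, PULP_SIGNING_KEY_FINGERPRINT comes from your organization's managed key, and the file names below are placeholders.

```shell
# Create a throwaway key in a temporary keyring (illustrative only).
export GNUPGHOME="$(mktemp -d)"
gpg --batch --quiet --passphrase '' --quick-generate-key 'AAP Signing Test' default default never

# Extract the key fingerprint, as the signing service expects it.
PULP_SIGNING_KEY_FINGERPRINT="$(gpg --list-keys --with-colons | awk -F: '/^fpr/ {print $10; exit}')"
export PULP_SIGNING_KEY_FINGERPRINT

# Sign a placeholder file the same way the example script does.
echo 'demo content' > my_collection.tar.gz
gpg --quiet --batch --yes --pinentry-mode loopback --passphrase '' \
   --detach-sign --default-key "$PULP_SIGNING_KEY_FINGERPRINT" \
   --armor --output my_collection.tar.gz.asc my_collection.tar.gz

# The signing-service contract: report the file and its detached signature.
echo "{\"file\": \"my_collection.tar.gz\", \"signature\": \"my_collection.tar.gz.asc\"}"
```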

3.2.1.7. Adding a safe plugin variable to Event-Driven Ansible controller

When using redhat.insights_eda or similar plugins to run rulebook activations in Event-Driven Ansible controller, you must add a safe plugin variable to a directory in Ansible Automation Platform. This ensures connection between Event-Driven Ansible controller and the source plugin, and displays port mappings correctly.

Procedure

  1. Create a directory for the safe plugin variable: mkdir -p ./group_vars/automationedacontroller
  2. Create a file within that directory for your new setting (for example, touch ./group_vars/automationedacontroller/custom.yml).
  3. Add the variable automationedacontroller_additional_settings to extend the default settings.yaml template for Event-Driven Ansible controller and add the SAFE_PLUGINS field with a list of plugins to enable. For example:

    automationedacontroller_additional_settings:
       SAFE_PLUGINS:
         - ansible.eda.webhook
         - ansible.eda.alertmanager
    Note

    You can also extend the automationedacontroller_additional_settings variable beyond SAFE_PLUGINS in the Django configuration file /etc/ansible-automation-platform/eda/settings.yaml.
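The procedure above can be scripted from the installer directory; the directory path and plugin names are taken from the example:

```shell
# Create the group_vars directory and the custom settings file
# from the procedure above (run from the installer directory).
mkdir -p ./group_vars/automationedacontroller
cat > ./group_vars/automationedacontroller/custom.yml <<'EOF'
automationedacontroller_additional_settings:
  SAFE_PLUGINS:
    - ansible.eda.webhook
    - ansible.eda.alertmanager
EOF
```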

3.2.2. Setting registry_username and registry_password

When using the registry_username and registry_password variables for an online non-bundled installation, you need to create a new registry service account.

Registry service accounts are named tokens that can be used in environments where credentials will be shared, such as deployment systems.

Procedure

  1. Go to https://access.redhat.com/terms-based-registry/accounts.
  2. On the Registry Service Accounts page click New Service Account.
  3. Enter a name for the account using only the allowed characters.
  4. Optionally enter a description for the account.
  5. Click Create.
  6. Find the created account in the list by searching for your name in the search field.
  7. Click the name of the account that you created.
  8. Alternatively, if you know the name of your token, you can go directly to the page by entering the URL:

    https://access.redhat.com/terms-based-registry/token/<name-of-your-token>
  9. A token page opens, displaying a generated username (different from the account name) and a token.

    If no token is displayed, click Regenerate Token. You can also click this to generate a new username and token.

  10. Copy the username (for example "1234567|testuser") and use it to set the variable registry_username.
  11. Copy the token and use it to set the variable registry_password.
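For example, the copied values end up in the inventory file as follows. This is a sketch with placeholder values; substitute the generated username and token, and note that the file name inventory matches the installer default:

```shell
# Append the service-account credentials to the installer inventory file
# (placeholder values shown; use your generated username and token).
cat >> inventory <<'EOF'
registry_url='registry.redhat.io'
registry_username='1234567|testuser'
registry_password='<token>'
EOF
```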
3.2.2.1. Configuring Redis

Ansible Automation Platform offers a centralized Redis instance in both standalone and clustered topologies.

In RPM deployments, the Redis mode is set to cluster by default. You can change this setting in the inventory file [all:vars] section as in the following example:

[all:vars]
admin_password='<password>'
pg_host='data.example.com'
pg_port='5432'
pg_database='awx'
pg_username='awx'
pg_password='<password>'
pg_sslmode='prefer'  # set to 'verify-full' for client-side enforced SSL

registry_url='registry.redhat.io'
registry_username='<registry username>'
registry_password='<registry password>'

redis_mode=cluster

For more information about Redis, see Caching and queueing system in Planning your installation.

3.3. Running the Red Hat Ansible Automation Platform installer setup script

After you update the inventory file with required parameters, run the installer setup script.

Procedure

  • Run the setup.sh script

    $ sudo ./setup.sh
Note

If you are running the setup as a non-root user with sudo privileges, you can use the following command:

$ ANSIBLE_BECOME_METHOD='sudo' ANSIBLE_BECOME=True ./setup.sh

Installation of Red Hat Ansible Automation Platform will begin.

Additional resources

See Understanding privilege escalation for additional setup.sh script examples.

3.4. Verifying installation of Ansible Automation Platform

Verify your installation by logging in to the platform UI with the admin credentials that you set in the inventory file. Upon a successful login, your installation of Red Hat Ansible Automation Platform is complete.

Important

If the installation fails and you are a customer who has purchased a valid license for Red Hat Ansible Automation Platform, contact Ansible through the Red Hat Customer portal.

Additional resources

See Getting started with Ansible Automation Platform for post installation instructions.

3.5. Backing up your Ansible Automation Platform instance

Back up an existing Ansible Automation Platform instance by running the setup.sh script with the backup_dir flag, which saves the content and configuration of your current environment. Use the compression flags use_archive_compression and use_db_compression to compress the backup artifacts before they are sent to the host running the backup operation.

Procedure

  1. Navigate to your Ansible Automation Platform installation directory.
  2. Run the ./setup.sh script following the example below:

    $ ./setup.sh -e 'backup_dir=/ansible/mybackup' \
    -e 'use_archive_compression=true' -e 'use_db_compression=true' \
    -e @credentials.yml -b

    Where:

    • backup_dir: Specifies a directory to save your backup to.
    • use_archive_compression=true and use_db_compression=true: Compresses the backup artifacts before they are sent to the host running the backup operation.

      You can use the following variables to customize the compression:

      • For global control of compression for filesystem related backup files: use_archive_compression=true
      • For component-level control of compression for filesystem related backup files: <componentName>_use_archive_compression

        For example:

        • automationgateway_use_archive_compression=true
        • automationcontroller_use_archive_compression=true
        • automationhub_use_archive_compression=true
        • automationedacontroller_use_archive_compression=true
      • For global control of compression for database related backup files: use_db_compression=true
      • For component-level control of compression for database related backup files: <componentName>_use_db_compression=true

        For example:

        • automationgateway_use_db_compression=true
        • automationcontroller_use_db_compression=true
        • automationhub_use_db_compression=true
        • automationedacontroller_use_db_compression=true

After a successful backup, a backup file is created at /ansible/mybackup/automation-platform-backup-<date/time>.tar.gz.
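The same options can be collected in an extra-vars file and passed with -e @<file>, the mechanism the example already uses for @credentials.yml. This is a sketch; the file name backup_vars.yml is arbitrary:

```shell
# Collect the backup options in one extra-vars file (name is arbitrary).
cat > backup_vars.yml <<'EOF'
backup_dir: /ansible/mybackup
use_archive_compression: true
use_db_compression: true
# Optional component-level overrides:
automationhub_use_archive_compression: true
automationedacontroller_use_db_compression: true
EOF

# Then run the installer in backup mode with:
#   ./setup.sh -e @backup_vars.yml -b
```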

3.6. Adding a subscription manifest to Ansible Automation Platform

Before you first log in, you must add your subscription information to the platform. To add a subscription to Ansible Automation Platform, see Obtaining a manifest file in the Access management and authentication guide.

Chapter 4. Horizontal Scaling in Red Hat Ansible Automation Platform

You can set up multi-node deployments for components across Ansible Automation Platform. Whether you require horizontal scaling for Automation Execution, Automation Decisions, or automation mesh, you can scale your deployments based on your organization’s needs.

4.1. Horizontal scaling in Event-Driven Ansible controller

With Event-Driven Ansible controller, you can set up horizontal scaling for your events automation. This multi-node deployment enables you to define as many nodes as you prefer during the installation process. You can also increase or decrease the number of nodes at any time according to your organizational needs.

The following node types are used in this deployment:

API node type
Responds to the HTTP REST API of Event-Driven Ansible controller.
Worker node type
Runs an Event-Driven Ansible worker, which is the component of Event-Driven Ansible that not only manages projects and activations, but also executes the activations themselves.
Hybrid node type
Is a combination of the API node and the worker node.

The following example shows how you can set up an inventory file for horizontal scaling of Event-Driven Ansible controller on Red Hat Enterprise Linux VMs using the host group name [automationedacontroller] and the node type variable eda_node_type:

[automationedacontroller]

3.88.116.111 routable_hostname=automationedacontroller-api.example.com eda_node_type=api

# worker node
3.88.116.112 routable_hostname=automationedacontroller-api.example.com eda_node_type=worker

4.1.1. Sizing and scaling guidelines

API nodes process user requests (interactions with the UI or API) while worker nodes process the activations and other background tasks required for Event-Driven Ansible to function properly. The number of API nodes you require correlates to the desired number of users of the application and the number of worker nodes correlates to the desired number of activations you want to run.

Since activations are variable and controlled by worker nodes, the supported approach for scaling is to use separate API and worker nodes instead of hybrid nodes due to the efficient allocation of hardware resources by worker nodes. By separating the nodes, you can scale each type independently based on specific needs, leading to better resource utilization and cost efficiency.

An example of an instance in which you might consider scaling up your node deployment is when you want to deploy Event-Driven Ansible for a small group of users who will run a large number of activations. In this case, one API node is adequate, but if you require more capacity for activations, you can scale up to three additional worker nodes.

To set up a multi-node deployment, follow the procedure in Setting up horizontal scaling for Event-Driven Ansible controller.

4.1.2. Setting up horizontal scaling for Event-Driven Ansible controller

To scale up (add more nodes) or scale down (remove nodes), you must update the content of the inventory file to add or remove nodes and rerun the installation program.

Procedure

  1. Update the inventory to add two more worker nodes:

    [automationedacontroller]
    
    3.88.116.111 routable_hostname=automationedacontroller-api.example.com eda_node_type=api
    
    3.88.116.112 routable_hostname=automationedacontroller-api.example.com eda_node_type=worker
    
    # two more worker nodes
    3.88.116.113 routable_hostname=automationedacontroller-api.example.com eda_node_type=worker
    
    3.88.116.114 routable_hostname=automationedacontroller-api.example.com eda_node_type=worker
  2. Re-run the installer.
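As a quick sanity check before rerunning the installer, you can count the node types the updated inventory defines. This sketch writes the example inventory to a local file first; against your real deployment, run the grep commands on your actual inventory file:

```shell
# Write the scaled-out example inventory to a file, then count node types.
cat > inventory <<'EOF'
[automationedacontroller]
3.88.116.111 routable_hostname=automationedacontroller-api.example.com eda_node_type=api
3.88.116.112 routable_hostname=automationedacontroller-api.example.com eda_node_type=worker
3.88.116.113 routable_hostname=automationedacontroller-api.example.com eda_node_type=worker
3.88.116.114 routable_hostname=automationedacontroller-api.example.com eda_node_type=worker
EOF

grep -c 'eda_node_type=worker' inventory   # prints 3
grep -c 'eda_node_type=api' inventory      # prints 1
```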

Chapter 5. Disconnected installation

If you are not connected to the internet or do not have access to online repositories, you can install Red Hat Ansible Automation Platform without an active internet connection.

5.1. Prerequisites

Before installing Ansible Automation Platform on a disconnected network, you must meet the following prerequisites:

  • A subscription manifest that you can upload to the platform.

    For more information, see Obtaining a manifest file.

  • The Ansible Automation Platform setup bundle is downloaded from the Customer Portal.
  • The DNS records for the automation controller and private automation hub servers are created.

5.2. Ansible Automation Platform installation on disconnected RHEL

You can install Ansible Automation Platform without an internet connection by using the installer-managed database located on the automation controller. The setup bundle is recommended for disconnected installation because it includes additional components that make installing Ansible Automation Platform easier in a disconnected environment. These include the Ansible Automation Platform RPM packages and the default execution environment (EE) images.

Additional Resources

For a comprehensive list of pre-defined variables used in Ansible installation inventory files, see Ansible variables.

5.2.1. System requirements for disconnected installation

Ensure that your system has all the hardware requirements before performing a disconnected installation of Ansible Automation Platform. You can find these in system requirements.

5.2.2. RPM Source

RPM dependencies for Ansible Automation Platform that come from the BaseOS and AppStream repositories are not included in the setup bundle. To add these dependencies, you must first obtain access to BaseOS and AppStream repositories. Use Satellite to sync repositories and add dependencies. If you prefer an alternative tool, you can choose between the following options:

  • Reposync
  • The RHEL Binary DVD
Note

The RHEL Binary DVD method requires the DVD for supported versions of RHEL. See Red Hat Enterprise Linux Life Cycle for information on which versions of RHEL are currently supported.

5.3. Synchronizing RPM repositories using reposync

To perform a reposync you need a RHEL host that has access to the internet. After the repositories are synced, you can move the repositories to the disconnected network hosted from a web server.

When downloading RPMs, ensure that you use the repositories for the applicable RHEL distribution.

Procedure

  1. Attach the BaseOS and AppStream required repositories:

    # subscription-manager repos \
        --enable rhel-9-for-x86_64-baseos-rpms \
        --enable rhel-9-for-x86_64-appstream-rpms
  2. Perform the reposync:

    # dnf install yum-utils
    # reposync -m --download-metadata --gpgcheck \
        -p /path/to/download
    1. Use reposync with --download-metadata and without --newest-only. See RHEL 8 Reposync.

      • If you do not use --newest-only, every available version of each package is downloaded, so the sync may take an extended amount of time due to the large amount of data.
      • If you use --newest-only, only the latest version of each package is downloaded, resulting in a significantly smaller and faster sync.

    After the reposync is completed, your repositories are ready to use with a web server.

  3. Move the repositories to your disconnected network.

5.4. Creating a new web server to host repositories

If you do not have an existing web server to host your repositories, you can create one with your synced repositories.

Procedure

  1. Install prerequisites:

    $ sudo dnf install httpd
  2. Configure httpd to serve the repo directory:

    /etc/httpd/conf.d/repository.conf
    
    DocumentRoot '/path/to/repos'
    
    <LocationMatch "^/+$">
        Options -Indexes
        ErrorDocument 403 /.noindex.html
    </LocationMatch>
    
    <Directory '/path/to/repos'>
        Options All Indexes FollowSymLinks
        AllowOverride None
        Require all granted
    </Directory>
  3. Ensure that the directory is readable by the apache user:

    $ sudo chown -R apache /path/to/repos
  4. Configure SELinux:

    $ sudo semanage fcontext -a -t httpd_sys_content_t "/path/to/repos(/.*)?"
    $ sudo restorecon -ir /path/to/repos
  5. Enable httpd:

    $ sudo systemctl enable --now httpd.service
  6. Open firewall:

    $ sudo firewall-cmd --zone=public --add-service=http --add-service=https --permanent
    $ sudo firewall-cmd --reload
  7. On automation services, add a repo file at /etc/yum.repos.d/local.repo, and add the optional repos if needed:

    [Local-BaseOS]
    name=Local BaseOS
    baseurl=http://<webserver_fqdn>/rhel-8-for-x86_64-baseos-rpms
    enabled=1
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
    
    [Local-AppStream]
    name=Local AppStream
    baseurl=http://<webserver_fqdn>/rhel-8-for-x86_64-appstream-rpms
    enabled=1
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

5.5. Accessing RPM repositories from a locally mounted DVD

If you plan to access the repositories from the RHEL binary DVD, you must first set up a local repository.

Procedure

  1. Mount DVD or ISO:

    1. DVD

      # mkdir /media/rheldvd && mount /dev/sr0 /media/rheldvd
    2. ISO

      # mkdir /media/rheldvd && mount -o loop rhel-8.6-x86_64-dvd.iso /media/rheldvd
  2. Create a yum repo file at /etc/yum.repos.d/dvd.repo:

    [dvd-BaseOS]
    name=DVD for RHEL - BaseOS
    baseurl=file:///media/rheldvd/BaseOS
    enabled=1
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
    
    [dvd-AppStream]
    name=DVD for RHEL - AppStream
    baseurl=file:///media/rheldvd/AppStream
    enabled=1
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
  3. Import the gpg key:

    # rpm --import /media/rheldvd/RPM-GPG-KEY-redhat-release
Note

If the key is not imported, you will see an error similar to the following:

# Curl error (6): Couldn't resolve host name for
https://www.redhat.com/security/data/fd431d51.txt [Could not resolve host:
www.redhat.com]

Additional Resources

For further detail on setting up a repository see Need to set up yum repository for locally-mounted DVD on Red Hat Enterprise Linux 8.

5.6. Downloading and installing the Ansible Automation Platform setup bundle

Choose the setup bundle to download Ansible Automation Platform for disconnected installations. This bundle includes the RPM content for Ansible Automation Platform and the default execution environment images that will be uploaded to your private automation hub during the installation process.

Procedure

  1. Download the Ansible Automation Platform setup bundle package by navigating to the Red Hat Ansible Automation Platform download page and clicking Download Now for the Ansible Automation Platform 2.5 Setup Bundle.
  2. On the control node, untar the bundle:

    $ tar xvf \
       ansible-automation-platform-setup-bundle-2.5-1.tar.gz
    $ cd ansible-automation-platform-setup-bundle-2.5-1
  3. Edit the inventory file to include variables based on your host names and desired password values.
Note

See section 3.2, Inventory file examples based on installation scenarios, for the example that best fits your scenario.
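For orientation only, a minimal inventory for a disconnected installation might define host groups and the required passwords as in the following sketch. All host names and password values are placeholders, and the full set of required groups and variables depends on your installation scenario:

```ini
[automationcontroller]
controller.example.com

[automationhub]
hub.example.com

[database]
db.example.com

[all:vars]
admin_password='<controller_password>'
pg_host='db.example.com'
pg_password='<database_password>'
automationhub_admin_password='<hub_password>'
automationhub_pg_host='db.example.com'
automationhub_pg_password='<hub_database_password>'
```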

5.7. Completing post-installation tasks

After you have completed the installation of Ansible Automation Platform, ensure that automation hub and automation controller deploy properly.

Before your first login, you must add your subscription information to the platform. To obtain your subscription information in uploadable form, see Obtaining a manifest file in Access management and authentication.

Once you have obtained your subscription manifest, see Getting started with Ansible Automation Platform for instructions on how to upload your subscription information.

Now that you have successfully installed Ansible Automation Platform, to begin using its features, see the following guides for your next steps:

Getting started with Ansible Automation Platform.

Managing automation content.

Creating and using execution environments.

Chapter 6. Troubleshooting RPM installation of Ansible Automation Platform

Use this information to troubleshoot your RPM installation of Ansible Automation Platform.

6.1. Gathering Ansible Automation Platform logs

With the sos utility, you can collect configuration, diagnostic, and troubleshooting data, and provide those files to Red Hat Technical Support. An sos report is a common starting point for Red Hat technical support engineers when performing analysis of a service request for the Ansible Automation Platform.

As part of the troubleshooting with Red Hat Support, you can collect the sos report for each node in your RPM installation of Ansible Automation Platform using the installation inventory and the installer.

Procedure

  1. Access the installer folder containing the inventory file and run the installer setup script with the following command:

    $ ./setup.sh -s

    This command connects to each node in the inventory, installs the sos utility, and generates new logs.

    Note

    If you are running the setup as a non-root user with sudo privileges, you can use the following command:

    $ ANSIBLE_BECOME_METHOD='sudo' \
      ANSIBLE_BECOME=True ./setup.sh -s
  2. Optional: If required, change the location of the sos report files.

    The sos report files are copied to the /tmp folder for the current server. To change the location, specify the new location by using the following command:

    $ ./setup.sh -e 'target_sos_directory=/path/to/files' -s

    Where target_sos_directory=/path/to/files specifies the destination directory for the sos report files. In this example, the reports are stored in /path/to/files.

  3. Gather the files listed in the playbook output and share them with the support engineer, or upload the sos report directly to Red Hat.

    To create an sos report with additional information or directly upload the data to Red Hat, use the following command:

    $ ./setup.sh -e 'case_number=0000000' -e 'clean=true' -e 'upload=true' -s
    Table 6.1. Parameter reference table

    case_number: Specifies the support case number to associate with the sos report. No default.

    clean: Obfuscates sensitive data that might be present in the sos report. Default: false.

    upload: Automatically uploads the sos report data to Red Hat. Default: false.

To learn more about the sos report tool, see the KCS article: What is an sos report and how to create one in Red Hat Enterprise Linux?

Appendix A. Inventory file variables

The following tables contain information about the variables used in Ansible Automation Platform’s installation inventory files. The tables include the variables that you can use for RPM-based installation and container-based installation.

A.1. Ansible variables

The following variables control how Ansible Automation Platform interacts with remote hosts.

For more information about variables specific to certain plugins, see the documentation for Ansible.Builtin.

For a list of global configuration options, see Ansible Configuration Settings.

Variable | Description

ansible_connection

The connection plugin used for the task on the target host.

This can be the name of any Ansible connection plugin. SSH protocol types are smart, ssh, or paramiko.

Default = smart

ansible_host

The IP address or name of the target host to use instead of inventory_hostname.

ansible_password

The password to authenticate to the host.

Do not store this variable in plain text. Always use a vault. For more information, see Keep vaulted variables safely visible.

ansible_port

The connection port number.

The default for SSH is 22.

ansible_scp_extra_args

This setting is always appended to the default scp command line.

ansible_sftp_extra_args

This setting is always appended to the default sftp command line.

ansible_shell_executable

This sets the shell that the Ansible controller uses on the target machine and overrides the executable in ansible.cfg, which defaults to /bin/sh. Do not change this variable unless /bin/sh is not installed on the target machine or cannot be run from sudo.

ansible_shell_type

The shell type of the target system.

Do not use this setting unless you have set the ansible_shell_executable to a non-Bourne (sh) compatible shell. By default commands are formatted using sh-style syntax. Setting this to csh or fish causes commands executed on target systems to follow the syntax of those shells instead.

ansible_ssh_common_args

This setting is always appended to the default command line for sftp, scp, and ssh. Useful to configure a ProxyCommand for a certain host or group.

ansible_ssh_executable

This setting overrides the default behavior to use the system ssh. This can override the ssh_executable setting in ansible.cfg.

ansible_ssh_extra_args

This setting is always appended to the default ssh command line.

ansible_ssh_pipelining

Determines if SSH pipelining is used.

This can override the pipelining setting in ansible.cfg. If using SSH key-based authentication, the key must be managed by an SSH agent.

ansible_ssh_private_key_file

Private key file used by SSH.

Useful if using multiple keys and you do not want to use an SSH agent.

ansible_user

The user name to use when connecting to the host.

inventory_hostname

This variable takes the hostname of the machine from the inventory script or the Ansible configuration file. You cannot set the value of this variable. Because the value is taken from the configuration file, the actual runtime hostname value can vary from what is returned by this variable.
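Connection variables like these are commonly set per host in the inventory. The following line is a hypothetical example; the host name, address, user, port, and key path are all placeholders:

```ini
[automationcontroller]
controller.example.com ansible_host=192.0.2.10 ansible_user=ansible ansible_port=2222 ansible_ssh_private_key_file=/home/ansible/.ssh/id_ed25519
```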

A.2. Automation hub variables

RPM variable name | Container variable name | Description | Required or optional | Default

automationhub_admin_password

hub_admin_password

Automation hub administrator password.
Use of special characters for this variable is limited. The password can include any printable ASCII character except /, , or @.

Required

 

automationhub_api_token

 

Sets an existing token for the installation program.
For example, regenerating a token in the automation hub UI invalidates the existing token. Use this variable to pass the new token to the installation program the next time you run it.

Optional

 

automationhub_auto_sign_collections

hub_collection_auto_sign

If a collection signing service is enabled, collections are not signed automatically by default.
Set this variable to true to sign collections by default.

Optional

false

automationhub_backup_collections

 

Ansible automation hub stores artifacts in /var/lib/pulp. These artifacts are backed up automatically by default.
Set this variable to false to prevent backup or restore of /var/lib/pulp.

Optional

true

automationhub_client_max_body_size

hub_nginx_client_max_body_size

Maximum allowed size for data sent to automation hub through NGINX.

Optional

20m

automationhub_collection_download_count

 

Denote whether or not the collection download count should be displayed in the UI.

Optional

false

automationhub_collection_seed_repository

 

Controls the type of content to upload when hub_seed_collections is set to true.
Valid options include: certified, validated

Optional

Both certified and validated are enabled by default.

automationhub_collection_signing_service_key

hub_collection_signing_key

Path to the collection signing key file.

Required if a collection signing service is enabled.

 

automationhub_container_repair_media_type

 

Denote whether or not to run the command pulpcore-manager container-repair-media-type.
Valid options include: true, false, auto

Optional

auto

automationhub_container_signing_service_key

hub_container_signing_key

Path to the container signing key file.

Required if a container signing service is enabled.

 

automationhub_create_default_collection_signing_service

hub_collection_signing

Set this variable to true to enable a collection signing service.

Optional

false

automationhub_create_default_container_signing_service

hub_container_signing

Set this variable to true to enable a container signing service.

Optional

false

automationhub_disable_hsts

hub_nginx_disable_hsts

Controls whether HTTP Strict Transport Security (HSTS) is enabled or disabled for automation hub.
Set this variable to true to disable HSTS.

Optional

false

automationhub_disable_https

hub_nginx_disable_https

Controls whether HTTPS is enabled or disabled for automation hub.
Set this variable to true to disable HTTPS.

Optional

false

automationhub_enable_api_access_log

 

Controls whether logging is enabled or disabled at /var/log/galaxy_api_access.log.
The file logs all user actions made to the platform, including username and IP address.
Set this variable to true to enable this logging.

Optional

false

automationhub_enable_unauthenticated_collection_access

 

Controls whether read-only access is enabled or disabled for unauthorized users viewing collections or namespaces for automation hub.
Set this variable to true to enable read-only access.

Optional

false

automationhub_enable_unauthenticated_collection_download

 

Controls whether or not unauthorized users can download read-only collections from automation hub.
Set this variable to true to enable download of read-only collections.

Optional

false

automationhub_firewalld_zone

hub_firewall_zone

The firewall zone where automation hub related firewall rules are applied. This controls which networks can access automation hub based on the zone’s trust level.

Optional

RPM = no default set. Container = public.

automationhub_force_change_admin_password

 

Denote whether or not to require the change of the default administrator password for automation hub during installation.
Set to true to require the user to change the default administrator password during installation.

Optional

false

automationhub_importer_settings

hub_galaxy_importer

Dictionary of settings to pass to the galaxy-importer.cfg configuration file. These settings control how the galaxy-importer service processes and validates Ansible content.
Example values include: ansible-doc, ansible-lint, and flake8.

Optional

 

automationhub_nginx_tls_files_remote

 

Denote whether the web certificate sources are local to the installation program (false) or on the remote component server (true).

Optional

The value defined in automationhub_tls_files_remote.

automationhub_pg_cert_auth

hub_pg_cert_auth

Controls whether client certificate authentication is enabled or disabled on the automation hub PostgreSQL database.
Set this variable to true to enable client certificate authentication.

Optional

false

automationhub_pg_database

hub_pg_database

Name of the PostgreSQL database used by automation hub.

Optional

RPM = automationhub
Container = pulp

automationhub_pg_host

hub_pg_host

Hostname of the PostgreSQL database used by automation hub.

Required

RPM = 127.0.0.1
Container =

automationhub_pg_password

hub_pg_password

Password for the automation hub PostgreSQL database user.
Use of special characters for this variable is limited. The !, #, 0 and @ characters are supported. Use of other special characters can cause the setup to fail.

Optional

 

automationhub_pg_port

hub_pg_port

Port number for the PostgreSQL database used by automation hub.

Optional

5432

automationhub_pg_sslmode

hub_pg_sslmode

Controls the SSL/TLS mode to use when automation hub connects to the PostgreSQL database.
Valid options include verify-full, verify-ca, require, prefer, allow, disable.

Optional

prefer

automationhub_pg_username

hub_pg_username

Username for the automation hub PostgreSQL database user.

Optional

RPM = automationhub
Container = pulp

automationhub_pgclient_sslcert

hub_pg_tls_cert

Path to the PostgreSQL SSL/TLS certificate file for automation hub.

Required if using client certificate authentication.

 

automationhub_pgclient_sslkey

hub_pg_tls_key

Path to the PostgreSQL SSL/TLS key file for automation hub.

Required if using client certificate authentication.

 

automationhub_pgclient_tls_files_remote

 

Denote whether the PostgreSQL client certificate sources are local to the installation program (false) or on the remote component server (true).

Optional

The value defined in automationhub_tls_files_remote.

automationhub_require_content_approval

 

Controls whether content approval is enabled or disabled for automation hub.
By default, when you upload collections to automation hub, an administrator must approve them before they are made available to users.
To disable the content approval flow, set this variable to false.

Optional

true

automationhub_restore_signing_keys

 

Controls whether or not existing signing keys should be restored from a backup.
Set to false to disable restoration of existing signing keys.

Optional

true

automationhub_seed_collections

hub_seed_collections

Controls whether or not pre-loading of collections is enabled.
When you run the bundle installer, validated content is uploaded to the validated repository, and certified content is uploaded to the rh-certified repository. By default, certified content and validated content are both uploaded.
If you do not want to pre-load content, set this variable to false.
For the RPM-based installer, if you only want one type of content, set this variable to true and set the automationhub_collection_seed_repository variable to the type of content you want to include.

Optional

true

automationhub_ssl_cert

hub_tls_cert

Path to the SSL/TLS certificate file for automation hub.

Optional

 

automationhub_ssl_key

hub_tls_key

Path to the SSL/TLS key file for automation hub.

Optional

 

automationhub_tls_files_remote

hub_tls_remote

Denote whether the automation hub provided certificate files are local to the installation program (false) or on the remote component server (true).

Optional

false

automationhub_use_archive_compression

hub_use_archive_compression

Controls whether archive compression is enabled or disabled for automation hub. You can control this functionality globally by using use_archive_compression.

Optional

true

automationhub_use_db_compression

hub_use_db_compression

Controls whether database compression is enabled or disabled for automation hub. You can control this functionality globally by using use_db_compression.

Optional

true

automationhub_user_headers

hub_nginx_user_headers

List of additional NGINX headers to add to automation hub’s NGINX configuration.

Optional

[]

generate_automationhub_token

 

Controls whether or not a token is generated for automation hub during installation. By default, a token is automatically generated during a fresh installation.
If set to true, a token is regenerated during installation.

Optional

false

 

hub_extra_settings

Defines additional settings for use by automation hub during installation.

For example:

hub_extra_settings:
  - setting: REDIRECT_IS_HTTPS
    value: True

Optional

[]

nginx_hsts_max_age

hub_nginx_hsts_max_age

Maximum duration (in seconds) that HTTP Strict Transport Security (HSTS) is enforced for automation hub.

Optional

63072000

pulp_secret

hub_secret_key

Secret key value used by automation hub to sign and encrypt data.

Optional

 
 

hub_azure_account_key

Azure blob storage account key.

Required if using an Azure blob storage backend.

 
 

hub_azure_account_name

Account name associated with the Azure blob storage.

Required when using an Azure blob storage backend.

 
 

hub_azure_container

Name of the Azure blob storage container.

Optional

pulp

 

hub_azure_extra_settings

Defines extra parameters for the Azure blob storage backend.
For more information about the list of parameters, see django-storages documentation - Azure Storage.

Optional

{}

 

hub_collection_signing_pass

Password for the automation content collection signing service.

Required if the collection signing service is protected by a passphrase.

 
 

hub_collection_signing_service

Service for signing collections.

Optional

ansible-default

 

hub_container_signing_pass

Password for the automation content container signing service.

Required if the container signing service is protected by a passphrase.

 
 

hub_container_signing_service

Service for signing containers.

Optional

container-default

 

hub_nginx_http_port

Port number that automation hub listens on for HTTP requests.

Optional

8081

 

hub_nginx_https_port

Port number that automation hub listens on for HTTPS requests.

Optional

8444

nginx_tls_protocols

hub_nginx_https_protocols

Protocols that automation hub will support when handling HTTPS traffic.

Optional

RPM = [TLSv1.2]. Container = [TLSv1.2, TLSv1.3].

 

hub_pg_socket

UNIX socket used by automation hub to connect to the PostgreSQL database.

Optional

 
 

hub_s3_access_key

AWS S3 access key.

Required if using an AWS S3 storage backend.

 
 

hub_s3_bucket_name

Name of the AWS S3 storage bucket.

Optional

pulp

 

hub_s3_extra_settings

Used to define extra parameters for the AWS S3 storage backend.
For more information about the list of parameters, see django-storages documentation - Amazon S3.

Optional

{}

 

hub_s3_secret_key

AWS S3 secret key.

Required if using an AWS S3 storage backend.

 
 

hub_shared_data_mount_opts

Mount options for the Network File System (NFS) share.

Optional

rw,sync,hard

 

hub_shared_data_path

Path to the Network File System (NFS) share with read, write, and execute (RWX) access.

Required if installing more than one instance of automation hub with a file storage backend. When installing a single instance of automation hub, it is optional.

 
 

hub_storage_backend

Automation hub storage backend type.
Possible values include: azure, file, s3.

Optional

file

 

hub_workers

Number of automation hub workers.

Optional

2
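The collection signing variables described above are typically used together. The following sketch uses the RPM variable names; the key path is a placeholder:

```ini
[all:vars]
automationhub_create_default_collection_signing_service=true
automationhub_collection_signing_service_key=/path/to/collection_signing.key
automationhub_auto_sign_collections=true
```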

A.3. Automation controller variables

RPM variable name | Container variable name | Description | Required or optional | Default

admin_email

controller_admin_email

Email address used by Django for the admin user for automation controller.

Optional

admin@example.com

admin_password

controller_admin_password

Automation controller administrator password.
Use of special characters for this variable is limited. The password can include any printable ASCII character except /, , or @.

Required

 

admin_username

controller_admin_user

Username used to identify and create the administrator user in automation controller.

Optional

admin

automationcontroller_client_max_body_size

controller_nginx_client_max_body_size

Maximum allowed size for data sent to automation controller through NGINX.

Optional

5m

automationcontroller_use_archive_compression

controller_use_archive_compression

Controls whether archive compression is enabled or disabled for automation controller. You can control this functionality globally by using use_archive_compression.

Optional

true

automationcontroller_use_db_compression

controller_use_db_compression

Controls whether database compression is enabled or disabled for automation controller. You can control this functionality globally by using use_db_compression.

Optional

true

awx_pg_cert_auth

controller_pg_cert_auth

Controls whether client certificate authentication is enabled or disabled on the automation controller PostgreSQL database.
Set this variable to true to enable client certificate authentication.

Optional

false

controller_firewalld_zone

controller_firewall_zone

The firewall zone where automation controller related firewall rules are applied. This controls which networks can access automation controller based on the zone’s trust level.

Optional

public

controller_nginx_tls_files_remote

 

Denote whether the web certificate sources are local to the installation program (false) or on the remote component server (true).

Optional

The value defined in controller_tls_files_remote.

controller_pgclient_tls_files_remote

 

Denote whether the PostgreSQL client certificate sources are local to the installation program (false) or on the remote component server (true).

Optional

The value defined in controller_tls_files_remote.

controller_tls_files_remote

controller_tls_remote

Denote whether the automation controller provided certificate files are local to the installation program (false) or on the remote component server (true).

Optional

false

nginx_disable_hsts

controller_nginx_disable_hsts

Controls whether HTTP Strict Transport Security (HSTS) is enabled or disabled for automation controller.
Set this variable to true to disable HSTS.

Optional

false

nginx_disable_https

controller_nginx_disable_https

Controls whether HTTPS is enabled or disabled for automation controller.
Set this variable to true to disable HTTPS.

Optional

false

nginx_hsts_max_age

controller_nginx_hsts_max_age

Maximum duration (in seconds) that HTTP Strict Transport Security (HSTS) is enforced for automation controller.

Optional

63072000

nginx_http_port

controller_nginx_http_port

Port number that automation controller listens on for HTTP requests.

Optional

RPM = 80
Container = 8080

nginx_https_port

controller_nginx_https_port

Port number that automation controller listens on for HTTPS requests.

Optional

RPM = 443
Container = 8443

nginx_tls_protocols

controller_nginx_https_protocols

Protocols that automation controller supports when handling HTTPS traffic.

Optional

RPM = [TLSv1.2]
Container = [TLSv1.2, TLSv1.3]

nginx_user_headers

controller_nginx_user_headers

List of additional NGINX headers to add to automation controller’s NGINX configuration.

Optional

[]

node_state

 

The status of a node or group of nodes.
Valid options include active, deprovision to remove a node from a cluster, or iso_migrate to migrate a legacy isolated node to an execution node.

Optional

active

node_type

See receptor_type for the container equivalent variable.

For the [automationcontroller] group the two options are:

  • node_type=control - The node only runs project and inventory updates, but not regular jobs.
  • node_type=hybrid - The node runs everything.

For the [execution_nodes] group the two options are:

  • node_type=hop - The node forwards jobs to an execution node.
  • node_type=execution - The node can run jobs.

Optional

For [automationcontroller] = hybrid
For [execution_nodes] = execution

peers

See receptor_peers for the container equivalent variable.

Used to indicate which nodes a specific host or group connects to. Wherever this variable is defined, an outbound connection to the specific host or group is established.
This variable can be a comma-separated list of hosts and groups from the inventory. This is resolved into a set of hosts that is used to construct the receptor.conf file.

Optional

 

pg_database

controller_pg_database

Name of the PostgreSQL database used by automation controller.

Optional

awx

pg_host

controller_pg_host

Hostname of the PostgreSQL database used by automation controller.

Required

 

pg_password

controller_pg_password

Password for the automation controller PostgreSQL database user.
Use of special characters for this variable is limited. The !, #, 0 and @ characters are supported. Use of other special characters can cause the setup to fail.

Required if not using client certificate authentication.

 

pg_port

controller_pg_port

Port number for the PostgreSQL database used by automation controller.

Optional

5432

pg_sslmode

controller_pg_sslmode

Controls the SSL/TLS mode to use when automation controller connects to the PostgreSQL database.
Valid options include verify-full, verify-ca, require, prefer, allow, disable.

Optional

prefer

pg_username

controller_pg_username

Username for the automation controller PostgreSQL database user.

Optional

awx

pgclient_sslcert

controller_pg_tls_cert

Path to the PostgreSQL SSL/TLS certificate file for automation controller.

Required if using client certificate authentication.

 

pgclient_sslkey

controller_pg_tls_key

Path to the PostgreSQL SSL/TLS key file for automation controller.

Required if using client certificate authentication.

 

precreate_partition_hours

 

Number of hours' worth of events table partitions to pre-create before starting a backup, to avoid pg_dump locks.

Optional

3

uwsgi_listen_queue_size

controller_uwsgi_listen_queue_size

Number of requests uwsgi allows in the queue on automation controller until uwsgi_processes can serve them.

Optional

2048

web_server_ssl_cert

controller_tls_cert

Path to the SSL/TLS certificate file for automation controller.

Optional

 

web_server_ssl_key

controller_tls_key

Path to the SSL/TLS key file for automation controller.

Optional

 
 

controller_event_workers

Number of event workers that handle job-related events inside automation controller.

Optional

4

 

controller_extra_settings

Defines additional settings for use by automation controller during installation.

For example:

controller_extra_settings:
  - setting: USE_X_FORWARDED_HOST
    value: true

Optional

[]

 

controller_license_file

Path to the automation controller license file.

  
 

controller_percent_memory_capacity

Memory allocation for automation controller.

Optional

1.0 (allocates 100% of the total system memory to automation controller)

 

controller_pg_socket

UNIX socket used by automation controller to connect to the PostgreSQL database.

Optional

 
 

controller_secret_key

Secret key value used by automation controller to sign and encrypt data.

Optional

 

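As an illustration of the client certificate authentication variables above (RPM names; the host name and certificate paths are placeholders), note that pg_password is required only when certificate authentication is not used:

```ini
[all:vars]
pg_host='db.example.com'
awx_pg_cert_auth=true
pgclient_sslcert=/path/to/awx_client.crt
pgclient_sslkey=/path/to/awx_client.key
pg_sslmode=verify-full
```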
A.4. Database variables

RPM variable name | Container variable name | Description | Required or optional | Default

install_pg_port

postgresql_port

Port number for the PostgreSQL database.

Optional

5432

postgres_firewalld_zone

postgresql_firewall_zone

The firewall zone where PostgreSQL related firewall rules are applied. This controls which networks can access PostgreSQL based on the zone’s trust level.

Optional

RPM = no default set. Container = public.

postgres_max_connections

postgresql_max_connections

Maximum number of concurrent connections to the database if you are using an installer-managed database.
See PostgreSQL database configuration and maintenance for automation controller for help selecting a value.

Optional

1024

postgres_ssl_cert

postgresql_tls_cert

Path to the PostgreSQL SSL/TLS certificate file.

Optional

 

postgres_ssl_key

postgresql_tls_key

Path to the PostgreSQL SSL/TLS key file.

Optional

 

postgres_use_ssl

postgresql_disable_tls

Controls whether SSL/TLS is enabled or disabled for the PostgreSQL database.

Optional

false

 

postgresql_admin_database

Database name used for connections to the PostgreSQL database server.

Optional

postgres

 

postgresql_admin_password

Password for the PostgreSQL admin user.
When used, the installation program creates each component’s database and credentials.

Required if using postgresql_admin_username.

 
 

postgresql_admin_username

Username for the PostgreSQL admin user.
When used, the installation program creates each component’s database and credentials.

Optional

postgres

 

postgresql_effective_cache_size

Memory allocation available (in MB) for caching data.

Optional

 
 

postgresql_keep_databases

Controls whether or not to keep databases during uninstall.
This variable applies to databases managed by the installation program only, and not external (customer-managed) databases.
Set to true to keep databases during uninstall.

Optional

false

 

postgresql_log_destination

Destination for server log output.

Optional

/dev/stderr

 

postgresql_password_encryption

The algorithm for encrypting passwords.

Optional

scram-sha-256

 

postgresql_shared_buffers

Memory allocation (in MB) for shared memory buffers.

Optional

 
 

postgresql_tls_remote

Denote whether the PostgreSQL provided certificate files are local to the installation program (false) or on the remote component server (true).

Optional

false

 

postgresql_use_archive_compression

Controls whether archive compression is enabled or disabled for PostgreSQL. You can control this functionality globally by using use_archive_compression.

Optional

true
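For an installer-managed database, the admin variables above combine as in this sketch. The password value is a placeholder; when postgresql_admin_password is set, the installation program creates each component's database and credentials:

```ini
[all:vars]
postgresql_admin_username=postgres
postgresql_admin_password='<password>'
postgres_max_connections=1024
```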

A.5. Event-Driven Ansible controller variables

RPM variable name | Container variable name | Description | Required or optional | Default

automationedacontroller_activation_workers

eda_activation_workers

Number of workers used for ansible-rulebook activation pods in Event-Driven Ansible.

Optional

RPM = (# of cores or threads) * 2 + 1. Container = 2

automationedacontroller_admin_email

eda_admin_email

Email address used by Django for the admin user for Event-Driven Ansible.

Optional

admin@example.com

automationedacontroller_admin_password

eda_admin_password

Event-Driven Ansible administrator password. Use of special characters for this variable is limited. The password can include any printable ASCII character except /, , or @.

Required

 

automationedacontroller_admin_username

eda_admin_user

Username used to identify and create the administrator user in Event-Driven Ansible.

Optional

admin

automationedacontroller_backend_gunicorn_workers

 

Number of workers for handling the API served through Gunicorn on worker nodes.

Optional

2

automationedacontroller_cache_tls_files_remote

 

Denote whether the cache cert sources are local to the installation program (false) or on the remote component server (true).

Optional

false

automationedacontroller_client_regen_cert

 

Controls whether or not to regenerate Event-Driven Ansible client certificates for the platform cache. Set to true to regenerate Event-Driven Ansible client certificates.

Optional

false

automationedacontroller_default_workers

eda_workers

Number of workers used in Event-Driven Ansible for application work.

Optional

Number of cores or threads

automationedacontroller_disable_hsts

eda_nginx_disable_hsts

Controls whether HTTP Strict Transport Security (HSTS) is enabled or disabled for Event-Driven Ansible. Set this variable to true to disable HSTS.

Optional

false

automationedacontroller_disable_https

eda_nginx_disable_https

Controls whether HTTPS is enabled or disabled for Event-Driven Ansible. Set this variable to true to disable HTTPS.

Optional

false

automationedacontroller_event_stream_path

eda_event_stream_prefix_path

API prefix path used for Event-Driven Ansible event streams through the platform gateway.

Optional

/eda-event-streams

automationedacontroller_firewalld_zone

eda_firewall_zone

The firewall zone where Event-Driven Ansible related firewall rules are applied. This controls which networks can access Event-Driven Ansible based on the zone’s trust level.

Optional

RPM = no default set. Container = public.

automationedacontroller_gunicorn_event_stream_workers

 

Number of workers for handling event streaming for Event-Driven Ansible.

Optional

2

automationedacontroller_gunicorn_workers

eda_gunicorn_workers

Number of workers for handling the API served through Gunicorn.

Optional

(Number of cores or threads) * 2 + 1

automationedacontroller_http_port

eda_nginx_http_port

Port number that Event-Driven Ansible listens on for HTTP requests.

Optional

RPM = 80. Container = 8082.

automationedacontroller_https_port

eda_nginx_https_port

Port number that Event-Driven Ansible listens on for HTTPS requests.

Optional

RPM = 443. Container = 8445.

automationedacontroller_max_running_activations

eda_max_running_activations

Maximum number of activations running concurrently per node. This must be an integer greater than 0.

Optional

12

automationedacontroller_nginx_tls_files_remote

 

Denote whether the web cert sources are local to the installation program (false) or on the remote component server (true).

Optional

false

automationedacontroller_pg_cert_auth

eda_pg_cert_auth

Controls whether client certificate authentication is enabled or disabled on the Event-Driven Ansible PostgreSQL database. Set this variable to true to enable client certificate authentication.

Optional

false

automationedacontroller_pg_database

eda_pg_database

Name of the PostgreSQL database used by Event-Driven Ansible.

Optional

RPM = automationedacontroller. Container = eda.

automationedacontroller_pg_host

eda_pg_host

Hostname of the PostgreSQL database used by Event-Driven Ansible.

Required

 

automationedacontroller_pg_password

eda_pg_password

Password for the Event-Driven Ansible PostgreSQL database user. Use of special characters for this variable is limited. The !, #, 0 and @ characters are supported. Use of other special characters can cause the setup to fail.

Required if not using client certificate authentication.

 

automationedacontroller_pg_port

eda_pg_port

Port number for the PostgreSQL database used by Event-Driven Ansible.

Optional

5432

automationedacontroller_pg_sslmode

eda_pg_sslmode

Determines the level of encryption and authentication for client-server connections. Valid options include verify-full, verify-ca, require, prefer, allow, disable.

Optional

prefer

automationedacontroller_pg_username

eda_pg_username

Username for the Event-Driven Ansible PostgreSQL database user.

Optional

RPM = automationedacontroller. Container = eda.

automationedacontroller_pgclient_sslcert

eda_pg_tls_cert

Path to the PostgreSQL SSL/TLS certificate file for Event-Driven Ansible.

Required if using client certificate authentication.

 

automationedacontroller_pgclient_sslkey

eda_pg_tls_key

Path to the PostgreSQL SSL/TLS key file for Event-Driven Ansible.

Required if using client certificate authentication.

 

automationedacontroller_pgclient_tls_files_remote

 

Denote whether the PostgreSQL client cert sources are local to the installation program (false) or on the remote component server (true).

Optional

false

automationedacontroller_public_event_stream_url

eda_event_stream_url

URL for connecting to the event stream. The URL must start with the http:// or https:// prefix.

Optional

 

automationedacontroller_redis_host

eda_redis_host

Hostname of the Redis host used by Event-Driven Ansible.

Optional

First node in the [automationgateway] inventory group

automationedacontroller_redis_password

eda_redis_password

Password for Event-Driven Ansible Redis.

Optional

Randomly generated string

automationedacontroller_redis_port

eda_redis_port

Port number for the Redis host for Event-Driven Ansible.

Optional

RPM = The value defined in platform gateway’s implementation (automationgateway_redis_port). Container = 6379

automationedacontroller_redis_username

eda_redis_username

Username for Event-Driven Ansible Redis.

Optional

eda

automationedacontroller_secret_key

eda_secret_key

Secret key value used by Event-Driven Ansible to sign and encrypt data.

Optional

 

automationedacontroller_ssl_cert

eda_tls_cert

Path to the SSL/TLS certificate file for Event-Driven Ansible.

Optional

 

automationedacontroller_ssl_key

eda_tls_key

Path to the SSL/TLS key file for Event-Driven Ansible.

Optional

 

automationedacontroller_tls_files_remote

eda_tls_remote

Denote whether the Event-Driven Ansible provided certificate files are local to the installation program (false) or on the remote component server (true).

Optional

false

automationedacontroller_trusted_origins

 

List of host addresses in the form <scheme>://<address>:<port> for trusted Cross-Site Request Forgery (CSRF) origins.

Optional

[]
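For example, to add a reverse proxy as a trusted CSRF origin, you could set the variable in the inventory file (the hostname and port are placeholders, not values from this guide):

```ini
[all:vars]
# Hypothetical origin; substitute your own scheme, address, and port
automationedacontroller_trusted_origins=['https://proxy.example.com:443']
```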

automationedacontroller_use_archive_compression

eda_use_archive_compression

Controls whether archive compression is enabled or disabled for Event-Driven Ansible. You can control this functionality globally by using use_archive_compression.

Optional

true

automationedacontroller_use_db_compression

eda_use_db_compression

Controls whether database compression is enabled or disabled for Event-Driven Ansible. You can control this functionality globally by using use_db_compression.

Optional

true

automationedacontroller_user_headers

eda_nginx_user_headers

List of additional NGINX headers to add to Event-Driven Ansible’s NGINX configuration.

Optional

[]

automationedacontroller_websocket_ssl_verify

 

Controls whether or not to perform SSL verification for the Daphne WebSocket used by Podman to communicate from the pod to the host. Set to false to disable SSL verification.

Optional

true

eda_node_type

eda_type

Event-Driven Ansible node type. Valid options include api, event-stream, hybrid, worker.

Optional

hybrid

 

eda_debug

Controls whether debug mode is enabled or disabled for Event-Driven Ansible. Set to true to enable debug mode for Event-Driven Ansible.

Optional

false

 

eda_extra_settings

Defines additional settings for use by Event-Driven Ansible during installation.

For example:

eda_extra_settings:
  - setting: RULEBOOK_READINESS_TIMEOUT_SECONDS
    value: 120

Optional

[]

 

eda_nginx_client_max_body_size

Maximum allowed size for data sent to Event-Driven Ansible through NGINX.

Optional

1m

 

eda_nginx_hsts_max_age

Maximum duration (in seconds) that HTTP Strict Transport Security (HSTS) is enforced for Event-Driven Ansible.

Optional

63072000

nginx_tls_protocols

eda_nginx_https_protocols

Protocols that Event-Driven Ansible supports when handling HTTPS traffic.

Optional

RPM = [TLSv1.2]. Container = [TLSv1.2, TLSv1.3].

 

eda_pg_socket

UNIX socket used by Event-Driven Ansible to connect to the PostgreSQL database.

Optional

 

redis_disable_tls

eda_redis_disable_tls

Controls whether TLS is enabled or disabled for Event-Driven Ansible Redis. Set this variable to true to disable TLS.

Optional

false

 

eda_redis_tls_cert

Path to the Event-Driven Ansible Redis certificate file.

Optional

 
 

eda_redis_tls_key

Path to the Event-Driven Ansible Redis key file.

Optional

 
 

eda_safe_plugins

List of plugins that are allowed to run within Event-Driven Ansible. For more information about the usage of this variable, see Adding a safe plugin variable to Event-Driven Ansible controller.

Optional

[]

A.6. General variables

RPM variable name | Container variable name | Description | Required or optional | Default

aap_ca_cert_file

ca_tls_cert

Path to the user provided CA certificate file used to generate SSL/TLS certificates for all Ansible Automation Platform services. For more information, see Optional: Using custom TLS certificates.

Optional

 

aap_ca_cert_files_remote

ca_tls_remote

Denote whether the CA certificate files are local to the installation program (false) or on the remote component server (true).

Optional

false

aap_ca_cert_size

 

Bit size of the internally managed CA certificate private key.

Optional

4096

aap_ca_key_file

ca_tls_key

Path to the key file for the CA certificate provided in aap_ca_cert_file (RPM) and ca_tls_cert (Container). For more information, see Optional: Using custom TLS certificates.

Optional

 
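As a sketch of providing a custom CA pair for service certificate generation (the file paths are illustrative):

```ini
[all:vars]
# Illustrative paths; point these at your own CA certificate and matching key
aap_ca_cert_file=/opt/aap/certs/ca.crt
aap_ca_key_file=/opt/aap/certs/ca.key
```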

aap_ca_passphrase_cipher

 

Cipher used for signing the internally managed CA certificate private key.

Optional

aes256

aap_ca_regenerate

 

Denotes whether or not to re-initiate the internally managed CA certificate key pair.

Optional

false

aap_service_cert_size

 

Bit size of the component key pair managed by the internal CA.

Optional

4096

aap_service_regen_cert

 

Denotes whether or not to re-initiate the component key pair managed by the internal CA.

Optional

false

aap_service_san_records

 

A list of additional SAN records for signing a service. Assign these to components in the inventory file as host variables rather than group or all variables. All strings must also contain their corresponding SAN option prefix such as DNS: or IP:.

Optional

[]
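Because SAN records are assigned per host, a minimal sketch places them on the host entry itself (the hostnames and IP address are illustrative):

```ini
[automationgateway]
# Host variable with example SAN entries; note the required DNS: and IP: prefixes
gateway1.example.com aap_service_san_records='["DNS:alias.example.com", "IP:203.0.113.10"]'
```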

backup_dest

 

Directory local to setup.sh for the final backup file.

Optional

The value defined in setup_dir.

backup_dir

backup_dir

Directory used to store backup files.

Optional

RPM = /var/backups/automation-platform/. Container = ~/backups

backup_file_prefix

 

Prefix used for the file backup name for the final backup file.

Optional

automation-platform-backup

bundle_install

bundle_install

Controls whether or not to perform an offline or bundled installation. Set this variable to true to enable an offline or bundled installation.

Optional

false if using the setup installation program. true if using the setup bundle installation program.

bundle_install_folder

bundle_dir

Path to the bundle directory used when performing a bundle install.

Required if bundle_install=true

RPM = /var/lib/ansible-automation-platform-bundle. Container = <current_dir>/bundle.
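For an offline installation, the two bundle variables are typically set together; a minimal sketch using the RPM default path:

```ini
[all:vars]
bundle_install=true
# Adjust if you extracted the setup bundle to a different directory
bundle_install_folder=/var/lib/ansible-automation-platform-bundle
```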

custom_ca_cert

custom_ca_cert

Path to the custom CA certificate file. This is required if any of the TLS certificates you manually provided are signed by a custom CA. For more information, see Optional: Using custom TLS certificates.

Optional

 

enable_insights_collection

 

The default install registers the node with Red Hat Insights for Red Hat Ansible Automation Platform if the node is registered with Subscription Manager. Set to false to disable this functionality.

Optional

true

registry_password

registry_password

Password credential for access to the registry source defined in registry_url. For more information, see Setting registry_username and registry_password.

RPM = Required if you need a password to access registry_url. Container = Required if registry_auth=true.

 

registry_url

registry_url

URL of the registry source from which to pull execution environment images.

Optional

registry.redhat.io

registry_username

registry_username

Username credential for access to the registry source defined in registry_url. For more information, see Setting registry_username and registry_password.

RPM = Required if you need a username to access registry_url. Container = Required if registry_auth=true.

 
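A minimal sketch of the registry settings, with placeholder credentials:

```ini
[all:vars]
registry_url=registry.redhat.io
# Replace the placeholders with your registry service account credentials
registry_username=<registry_username>
registry_password=<registry_password>
```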

registry_verify_ssl

registry_tls_verify

Controls whether SSL/TLS certificate verification is enabled or disabled when making HTTPS requests.

Optional

true

restore_backup_file

 

Path to the tar file used for the platform restore.

Optional

{{ setup_dir }}/automation-platform-backup-latest.tar.gz

restore_file_prefix

 

Path prefix for the staged restore components.

Optional

automation-platform-restore

routable_hostname

routable_hostname

Used if the machine running the installation program can only route to the target host through a specific URL. For example, if you use short names in your inventory, but the node running the installation program can only resolve that host by using a FQDN. If routable_hostname is not set, it defaults to ansible_host. If you do not set ansible_host, inventory_hostname is used as a last resort. This variable is used as a host variable for particular hosts and not under the [all:vars] section. For further information, see Assigning a variable to one machine: host variables.
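Because routable_hostname is a host variable, it sits on the host line itself rather than under [all:vars]; a sketch with illustrative names:

```ini
[automationcontroller]
# Short inventory name, plus the FQDN that the installer node can actually resolve
controller1 ansible_host=controller1 routable_hostname=controller1.example.com
```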

Optional

 

use_archive_compression

use_archive_compression

Controls at a global level whether the filesystem-related backup files are compressed before being sent to the host to run the backup operation. If set to true, a tar.gz file is generated on each Ansible Automation Platform host and then gzip compression is used. If set to false, a simple tar file is generated.

You can control this functionality at a component level by using the <component_name>_use_archive_compression variables.

Optional

true

use_db_compression

use_db_compression

Controls at a global level whether the database-related backup files are compressed before being sent to the host to run the backup operation.

You can control this functionality at a component level by using the <component_name>_use_db_compression variables.

Optional

true

 

ca_tls_key_passphrase

Passphrase used to decrypt the key provided in ca_tls_key.

Optional

 
 

container_compress

Compression software to use for compressing container images.

Optional

gzip

 

container_keep_images

Controls whether or not to keep container images when uninstalling Ansible Automation Platform. Set to true to keep container images when uninstalling Ansible Automation Platform.

Optional

false

 

container_pull_images

Controls whether or not to pull newer container images during installation. Set to false to prevent pulling newer container images during installation.

Optional

true

 

pcp_firewall_zone

The firewall zone where Performance Co-Pilot related firewall rules are applied. This controls which networks can access Performance Co-Pilot based on the zone’s trust level.

Optional

public

 

pcp_use_archive_compression

Controls whether archive compression is enabled or disabled for Performance Co-Pilot. You can control this functionality globally by using use_archive_compression.

Optional

true

 

registry_auth

Set whether or not to use registry authentication. When this variable is set to true, registry_username and registry_password are required.

Optional

true

 

registry_ns_aap

Ansible Automation Platform registry namespace.

Optional

ansible-automation-platform-26

 

registry_ns_rhel

RHEL registry namespace.

Optional

rhel8

A.7. Image variables

RPM variable name | Container variable name | Description | Required or optional | Default

extra_images

 

Additional container images to pull from the configured container registry during deployment.

Optional

ansible-builder-rhel8

 

controller_image

Container image for automation controller.

Optional

controller-rhel8:latest

 

de_extra_images

Additional decision environment container images to pull from the configured container registry during deployment.

Optional

[]

 

de_supported_image

Supported decision environment container image.

Optional

de-supported-rhel8:latest

 

eda_image

Backend container image for Event-Driven Ansible.

Optional

eda-controller-rhel8:latest

 

eda_web_image

Front-end container image for Event-Driven Ansible.

Optional

eda-controller-ui-rhel8:latest

 

ee_extra_images

Additional execution environment container images to pull from the configured container registry during deployment.

Optional

[]

 

ee_minimal_image

Minimal execution environment container image.

Optional

ee-minimal-rhel8:latest

 

ee_supported_image

Supported execution environment container image.

Optional

ee-supported-rhel8:latest

 

gateway_image

Container image for platform gateway.

Optional

gateway-rhel8:latest

 

gateway_proxy_image

Container image for platform gateway proxy.

Optional

gateway-proxy-rhel8:latest

 

hub_image

Backend container image for automation hub.

Optional

hub-rhel8:latest

 

hub_web_image

Front-end container image for automation hub.

Optional

hub-web-rhel8:latest

 

pcp_image

Container image for Performance Co-Pilot.

Optional

pcp:latest

 

postgresql_image

Container image for PostgreSQL.

Optional

postgresql-15:latest

 

receptor_image

Container image for receptor.

Optional

receptor-rhel8:latest

 

redis_image

Container image for Redis.

Optional

redis-6:latest

A.8. Platform gateway variables

RPM variable name | Container variable name | Description | Required or optional | Default

automationgateway_admin_email

gateway_admin_email

Email address used by Django for the admin user for platform gateway.

Optional

admin@example.com

automationgateway_admin_password

gateway_admin_password

Platform gateway administrator password. Use of special characters for this variable is limited. The password can include any printable ASCII character except /, \, or @.

Required

 

automationgateway_admin_username

gateway_admin_user

Username used to identify and create the administrator user in platform gateway.

Optional

admin

automationgateway_cache_cert

gateway_redis_tls_cert

Path to the platform gateway Redis certificate file.

Optional

 

automationgateway_cache_key

gateway_redis_tls_key

Path to the platform gateway Redis key file.

Optional

 

automationgateway_cache_tls_files_remote

 

Denote whether the cache client certificate files are local to the installation program (false) or on the remote component server (true).

Optional

The value defined in automationgateway_tls_files_remote which defaults to false.

automationgateway_client_regen_cert

 

Controls whether or not to regenerate platform gateway client certificates for the platform cache. Set to true to regenerate platform gateway client certificates.

Optional

false

automationgateway_control_plane_port

gateway_control_plane_port

Port number for the platform gateway control plane.

Optional

50051

automationgateway_disable_hsts

gateway_nginx_disable_hsts

Controls whether HTTP Strict Transport Security (HSTS) is enabled or disabled for platform gateway. Set this variable to true to disable HSTS.

Optional

false

automationgateway_disable_https

gateway_nginx_disable_https

Controls whether HTTPS is enabled or disabled for platform gateway. Set this variable to true to disable HTTPS.

Optional

RPM = The value defined in disable_https which defaults to false. Container = false.

automationgateway_firewalld_zone

gateway_proxy_firewall_zone

The firewall zone where platform gateway related firewall rules are applied. This controls which networks can access platform gateway based on the zone’s trust level.

Optional

RPM = no default set. Container = public.

automationgateway_grpc_auth_service_timeout

gateway_grpc_auth_service_timeout

Timeout duration (in seconds) for requests made to the gRPC service on platform gateway.

Optional

30s

automationgateway_grpc_server_max_threads_per_process

gateway_grpc_server_max_threads_per_process

Maximum number of threads that each gRPC server process can create to handle requests on platform gateway.

Optional

10

automationgateway_grpc_server_processes

gateway_grpc_server_processes

Number of processes for handling gRPC requests on platform gateway.

Optional

5

automationgateway_http_port

gateway_nginx_http_port

Port number that platform gateway listens on for HTTP requests.

Optional

RPM = 8080. Container = 8083.

automationgateway_https_port

gateway_nginx_https_port

Port number that platform gateway listens on for HTTPS requests.

Optional

RPM = 8443. Container = 8446.

automationgateway_main_url

gateway_main_url

URL of the main instance of platform gateway that clients connect to. Use this variable if you are performing a clustered deployment and need to use the URL of the load balancer instead of the component’s server. The URL must start with the http:// or https:// prefix.

Optional

 
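For a clustered deployment behind a load balancer, a sketch might look like this (the URL is illustrative):

```ini
[all:vars]
# Clients connect through the load balancer rather than an individual gateway node
automationgateway_main_url=https://aap.example.com
```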

automationgateway_nginx_tls_files_remote

 

Denote whether the web cert sources are local to the installation program (false) or on the remote component server (true).

Optional

The value defined in automationgateway_tls_files_remote which defaults to false.

automationgateway_pg_cert_auth

gateway_pg_cert_auth

Controls whether client certificate authentication is enabled or disabled on the platform gateway PostgreSQL database. Set this variable to true to enable client certificate authentication.

Optional

false

automationgateway_pg_database

gateway_pg_database

Name of the PostgreSQL database used by platform gateway.

Optional

RPM = automationgateway. Container = gateway.

automationgateway_pg_host

gateway_pg_host

Hostname of the PostgreSQL database used by platform gateway.

Required

 

automationgateway_pg_password

gateway_pg_password

Password for the platform gateway PostgreSQL database user. Use of special characters for this variable is limited. The !, #, 0 and @ characters are supported. Use of other special characters can cause the setup to fail.

Optional

 

automationgateway_pg_port

gateway_pg_port

Port number for the PostgreSQL database used by platform gateway.

Optional

5432

automationgateway_pg_sslmode

gateway_pg_sslmode

Controls the SSL mode to use when platform gateway connects to the PostgreSQL database. Valid options include verify-full, verify-ca, require, prefer, allow, disable.

Optional

prefer

automationgateway_pg_username

gateway_pg_username

Username for the platform gateway PostgreSQL database user.

Optional

RPM = automationgateway. Container = gateway

automationgateway_pgclient_sslcert

gateway_pg_tls_cert

Path to the PostgreSQL SSL/TLS certificate file for platform gateway.

Required if using client certificate authentication.

 

automationgateway_pgclient_sslkey

gateway_pg_tls_key

Path to the PostgreSQL SSL/TLS key file for platform gateway.

Required if using client certificate authentication.

 

automationgateway_pgclient_tls_files_remote

 

Denote whether the PostgreSQL client cert sources are local to the installation program (false) or on the remote component server (true).

Optional

The value defined in automationgateway_tls_files_remote which defaults to false.

automationgateway_redis_host

gateway_redis_host

Hostname of the Redis host used by platform gateway.

Optional

First node in the [automationgateway] inventory group.

automationgateway_redis_password

gateway_redis_password

Password for platform gateway Redis.

Optional

Randomly generated string.

automationgateway_redis_username

gateway_redis_username

Username for platform gateway Redis.

Optional

gateway

automationgateway_secret_key

gateway_secret_key

Secret key value used by platform gateway to sign and encrypt data.

Optional

 

automationgateway_ssl_cert

gateway_tls_cert

Path to the SSL/TLS certificate file for platform gateway.

Optional

 

automationgateway_ssl_key

gateway_tls_key

Path to the SSL/TLS key file for platform gateway.

Optional

 

automationgateway_tls_files_remote

gateway_tls_remote

Denote whether the platform gateway provided certificate files are local to the installation program (false) or on the remote component server (true).

Optional

false

automationgateway_use_archive_compression

gateway_use_archive_compression

Controls whether archive compression is enabled or disabled for platform gateway. You can control this functionality globally by using use_archive_compression.

Optional

true

automationgateway_use_db_compression

gateway_use_db_compression

Controls whether database compression is enabled or disabled for platform gateway. You can control this functionality globally by using use_db_compression.

Optional

true

automationgateway_user_headers

gateway_nginx_user_headers

List of additional NGINX headers to add to platform gateway’s NGINX configuration.

Optional

[]

automationgateway_verify_ssl

 

Denotes whether or not to verify platform gateway’s web certificates when making calls from platform gateway to itself during installation. Set to false to disable web certificate verification.

Optional

true

automationgatewayproxy_disable_https

envoy_disable_https

Controls whether or not HTTPS is disabled when accessing the platform UI. Set to true to disable HTTPS (HTTP is used instead).

Optional

RPM = The value defined in disable_https which defaults to false. Container = false.

automationgatewayproxy_http_port

envoy_http_port

Port number on which the Envoy proxy listens for incoming HTTP connections.

Optional

80

automationgatewayproxy_https_port

envoy_https_port

Port number on which the Envoy proxy listens for incoming HTTPS connections.

Optional

443

nginx_tls_protocols

gateway_nginx_https_protocols

Protocols that platform gateway supports when handling HTTPS traffic.

Optional

RPM = [TLSv1.2]. Container = [TLSv1.2, TLSv1.3].

redis_disable_tls

gateway_redis_disable_tls

Controls whether TLS is enabled or disabled for platform gateway Redis. Set this variable to true to disable TLS.

Optional

false

redis_port

gateway_redis_port

Port number for the Redis host for platform gateway.

Optional

6379

 

gateway_extra_settings

Defines additional settings for use by platform gateway during installation.

For example:

gateway_extra_settings:
  - setting: OAUTH2_PROVIDER['ACCESS_TOKEN_EXPIRE_SECONDS']
    value: 600

Optional

[]

 

gateway_nginx_client_max_body_size

Maximum allowed size for data sent to platform gateway through NGINX.

Optional

5m

 

gateway_nginx_hsts_max_age

Maximum duration (in seconds) that HTTP Strict Transport Security (HSTS) is enforced for platform gateway.

Optional

63072000

 

gateway_uwsgi_listen_queue_size

Number of requests that uWSGI allows in the queue on platform gateway until uwsgi_processes can serve them.

Optional

4096

A.9. Receptor variables

RPM variable name | Container variable name | Description | Required or optional | Default

receptor_datadir

 

The directory where receptor stores its runtime data and local artifacts.
The target directory must be accessible to the awx user.
If the target directory is a temporary file system (tmpfs), ensure it is remounted correctly after a reboot. Failure to do so results in receptor no longer having a working directory.

Optional

/tmp/receptor

receptor_listener_port

receptor_port

Port number that receptor listens on for incoming connections from other receptor nodes.

Optional

27199

receptor_listener_protocol

receptor_protocol

Protocol that receptor supports when handling traffic.

Optional

tcp

receptor_log_level

receptor_log_level

Controls the verbosity of logging for receptor.
Valid options include: error, warning, info, or debug.

Optional

info

receptor_tls

 

Controls whether TLS is enabled or disabled for receptor. Set this variable to false to disable TLS.

Optional

true

See node_type for the RPM equivalent variable.

receptor_type

For the [automationcontroller] group the two options are:

  • receptor_type=control - The node only runs project and inventory updates, but not regular jobs.
  • receptor_type=hybrid - The node runs everything.

For the [execution_nodes] group the two options are:

  • receptor_type=hop - The node forwards jobs to an execution node.
  • receptor_type=execution - The node can run jobs.

Optional

For the [automationcontroller] group: hybrid.
For the [execution_nodes] group: execution.

See peers for the RPM equivalent variable.

receptor_peers

Used to indicate which nodes a specific host connects to. Wherever this variable is defined, an outbound connection to the specific host is established. The value must be a comma-separated list of hostnames. Do not use inventory group names.

This is resolved into a set of hosts that is used to construct the receptor.conf file.

For example usage, see Adding execution nodes.

Optional

[]
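A hypothetical topology sketch, in which an execution node dials out to a hop node (the hostnames are placeholders; see Adding execution nodes for the exact peer syntax):

```ini
[execution_nodes]
hop1.example.com receptor_type=hop
# The outbound connection is established from the execution node to the hop node
exec1.example.com receptor_type=execution receptor_peers='["hop1.example.com"]'
```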

 

receptor_disable_signing

Controls whether signing of communications between receptor nodes is enabled or disabled.
Set this variable to true to disable communication signing.

Optional

false

 

receptor_disable_tls

Controls whether TLS is enabled or disabled for receptor.
Set this variable to true to disable TLS.

Optional

false

 

receptor_firewall_zone

The firewall zone where receptor related firewall rules are applied. This controls which networks can access receptor based on the zone’s trust level.

Optional

public

 

receptor_mintls13

Controls whether or not receptor only accepts connections that use TLS 1.3 or higher.
Set to true to only accept connections that use TLS 1.3 or higher.

Optional

false

 

receptor_signing_private_key

Path to the private key used by receptor to sign communications with other receptor nodes in the network.

Optional

 
 

receptor_signing_public_key

Path to the public key used by receptor to sign communications with other receptor nodes in the network.

Optional

 
 

receptor_signing_remote

Denote whether the receptor signing files are local to the installation program (false) or on the remote component server (true).

Optional

false

 

receptor_tls_cert

Path to the TLS certificate file for receptor.

Optional

 
 

receptor_tls_key

Path to the TLS key file for receptor.

Optional

 
 

receptor_tls_remote

Denote whether the receptor provided certificate files are local to the installation program (false) or on the remote component server (true).

Optional

false

 

receptor_use_archive_compression

Controls whether archive compression is enabled or disabled for receptor. You can control this functionality globally by using use_archive_compression.

Optional

true

A.10. Redis variables

RPM variable name | Container variable name | Description | Required or optional | Default

redis_cluster_ip

redis_cluster_ip

The IPv4 address used by the Redis cluster to identify each host in the cluster. When defining hosts in the [redis] group, use this variable to specify the IPv4 address if the default is not what you want. Container-specific: Redis clusters cannot use hostnames or IPv6 addresses.

Optional

RPM = Discovered IPv4 address from Ansible facts. If IPv4 address is not available, IPv6 address is used. Container = Discovered IPv4 address from Ansible facts.

redis_disable_mtls

 

Controls whether mTLS is enabled or disabled for Redis. Set this variable to true to disable mTLS.

Optional

false

redis_firewalld_zone

redis_firewall_zone

The firewall zone where Redis related firewall rules are applied. This controls which networks can access Redis based on the zone’s trust level.

Optional

RPM = no default set. Container = public.

redis_hostname

 

Hostname used by the Redis cluster when identifying and routing the host. By default routable_hostname is used.

Optional

The value defined in routable_hostname

redis_mode

redis_mode

The Redis mode to use for your Ansible Automation Platform installation. Valid options include: standalone and cluster. For more information about Redis, see Caching and queueing system in Planning your installation.

Optional

cluster

redis_server_regen_cert

 

Denotes whether or not to regenerate the Ansible Automation Platform managed TLS key pair for Redis.

Optional

false

redis_tls_cert

redis_tls_cert

Path to the Redis server TLS certificate.

Optional

 

redis_tls_files_remote

 

Denote whether the Redis provided certificate files are local to the installation program (false) or on the remote component server (true).

Optional

false

redis_tls_key

redis_tls_key

Path to the Redis server TLS certificate key.

Optional

 
 

redis_use_archive_compression

Controls whether archive compression is enabled or disabled for Redis. You can control this functionality globally by using use_archive_compression.

Optional

true

Legal Notice

Copyright © 2025 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.