RPM installation
Install the RPM version of Ansible Automation Platform
Abstract
Preface
Thank you for your interest in Red Hat Ansible Automation Platform. Ansible Automation Platform is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments.
This guide helps you understand the installation requirements and processes behind installing Ansible Automation Platform. This document has been updated to include information for the latest release of Ansible Automation Platform.
Providing feedback on Red Hat documentation
If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request.
Chapter 1. Red Hat Ansible Automation Platform installation overview
The Red Hat Ansible Automation Platform installation program offers you flexibility, allowing you to install Ansible Automation Platform by using several supported installation scenarios.
Regardless of the installation scenario you choose, installing Ansible Automation Platform involves the following steps:
- Editing the Red Hat Ansible Automation Platform installer inventory file
- The Ansible Automation Platform installer inventory file allows you to specify your installation scenario and describe host deployments to Ansible. The examples provided in this document show the parameter specifications needed to install that scenario for your deployment.
- Running the Red Hat Ansible Automation Platform installer setup script
- The setup script installs Ansible Automation Platform by using the required parameters defined in the inventory file.
- Verifying your Ansible Automation Platform installation
- After installing Ansible Automation Platform, you can verify that the installation has been successful by logging in to the platform UI and confirming that the relevant functionality is available.
Additional resources
- For more information about the supported installation scenarios, see Planning your installation.
- For more information on available topologies, see Tested deployment models.
1.1. Prerequisites
- You chose and obtained a platform installer from the Red Hat Ansible Automation Platform Product Software download page.
- You are installing on a machine that meets base system requirements.
- You have updated all of the packages on your RHEL nodes to the latest version.
To prevent errors, fully upgrade your RHEL nodes before installing Ansible Automation Platform.
- You have created a Red Hat Registry Service Account, by using the instructions in Creating Registry Service Accounts.
Additional resources
For more information about obtaining a platform installer or system requirements, see System requirements in Planning your installation.
Chapter 2. System requirements
Use this information when planning your Red Hat Ansible Automation Platform installations and designing automation mesh topologies that fit your use case.
Prerequisites
- You can obtain root access either through the sudo command or through privilege escalation. For more on privilege escalation, see Understanding privilege escalation.
- You can de-escalate privileges from root to users such as AWX, PostgreSQL, Event-Driven Ansible, or Pulp.
- You have configured an NTP client on all nodes.
2.1. Red Hat Ansible Automation Platform system requirements
Your system must meet the following minimum system requirements to install and run Red Hat Ansible Automation Platform. A resilient deployment requires 10 virtual machines with a minimum of 16 gigabytes (GB) of RAM and 4 virtual CPUs (vCPU). See Tested deployment models for more information on topology options.
| Type | Description | Notes |
|---|---|---|
| Subscription | Valid Red Hat Ansible Automation Platform subscription | |
| Operating system | Red Hat Enterprise Linux (RHEL) | Red Hat Ansible Automation Platform is also supported on OpenShift; see Installing on OpenShift Container Platform for more information. |
| CPU architecture | x86_64, AArch64, s390x (IBM Z), ppc64le (IBM Power) | |
| Ansible-core | Ansible-core version 2.16 or later | Ansible Automation Platform uses the system-wide ansible-core package to install the platform, but uses ansible-core 2.16 for both its control plane and built-in execution environments. |
| Browser | A currently supported version of Mozilla Firefox or Google Chrome | |
| Database | PostgreSQL 15 | Red Hat Ansible Automation Platform 2.5 requires external (customer supported) databases to have ICU support. |
| Component | RAM | VCPU | Disk IOPS | Storage |
|---|---|---|---|---|
| Platform gateway | 16GB | 4 | 3000 | 60GB minimum |
| Control nodes | 16GB | 4 | 3000 | 80GB minimum with at least 20GB available under /var/lib/awx |
| Execution nodes | 16GB | 4 | 3000 | 60GB minimum |
| Hop nodes | 16GB | 4 | 3000 | 60GB minimum |
| Automation hub | 16GB | 4 | 3000 | 60GB minimum with at least 40GB allocated to /var/lib/pulp |
| Database | 16GB | 4 | 3000 | 100GB minimum allocated to /var/lib/pgsql |
| Event-Driven Ansible controller | 16GB | 4 | 3000 | 60GB minimum |
These are minimum requirements and can be increased for larger workloads in increments of 2x (for example, 16GB becomes 32GB and 4 vCPU becomes 8 vCPU). See the horizontal scaling guide for more information.
Repository requirements
Enable only the following repositories when installing Red Hat Ansible Automation Platform:
- RHEL BaseOS
- RHEL AppStream
If you enable repositories besides those mentioned above, the Red Hat Ansible Automation Platform installation could fail unexpectedly.
The following are necessary for you to work with project updates and collections:
- Ensure that the Network ports and protocols listed in Table 6.3. Automation Hub are available for successful connection and download of collections from automation hub or Ansible Galaxy server.
Additional notes for Red Hat Ansible Automation Platform requirements
- If performing a bundled Ansible Automation Platform installation, the installation setup.sh script attempts to install ansible-core (and its dependencies) from the bundle for you.
- If you have installed ansible-core manually, the Ansible Automation Platform installation setup.sh script detects that Ansible has been installed and does not attempt to reinstall it.
You must use the ansible-core package that is installed through dnf. ansible-core version 2.16 is required for Ansible Automation Platform 2.5 and later.
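For example, a minimal check-and-install on a RHEL node might look like the following (a sketch; the repository that provides ansible-core depends on your subscription and RHEL version):

$ sudo dnf install ansible-core
$ ansible --version   # confirm that the reported core version is 2.16 or later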
2.2. Platform gateway system requirements
The platform gateway is the service that handles authentication and authorization for Ansible Automation Platform. It provides a single entry into the platform and serves the platform’s user interface.
You are required to set umask=0022.
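As a quick sanity check before running the installer, you can verify and set the mask in the shell of the installing user (a minimal sketch; where you persist the value depends on your shell profile and environment):

$ umask        # should print 0022
$ umask 0022   # sets the mask for the current shell session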
2.3. Automation controller system requirements
Automation controller is a distributed system, where different software components can be co-located or deployed across multiple compute nodes. In the installer, four node types are provided as abstractions to help you design the topology appropriate for your use case: control, hybrid, execution, and hop nodes.
Use the following recommendations for node sizing:
Execution nodes
Execution nodes run automation. Increase memory and CPU to increase capacity for running more forks.
- The RAM and CPU resources stated are minimum recommendations to handle the job load for a node to run an average number of jobs simultaneously.
- Recommended RAM and CPU node sizes are not supplied. The required RAM or CPU depends directly on the number of jobs you are running in that environment.
- For capacity based on forks in your configuration, see Automation controller capacity determination and job impact.
For further information about required RAM and CPU levels, see Performance tuning for automation controller.
Control nodes
Control nodes process events and run cluster jobs including project updates and cleanup jobs. Increasing CPU and memory can help with job event processing.
- 40GB minimum with at least 20GB available under /var/lib/awx
- Storage volume must be rated for a minimum baseline of 3000 IOPS
- Projects are stored on control and hybrid nodes, and for the duration of jobs, are also stored on execution nodes. If the cluster has many large projects, consider doubling the GB in /var/lib/awx/projects to avoid disk space errors.
Hop nodes
Hop nodes serve to route traffic from one part of the automation mesh to another (for example, a hop node could be a bastion host into another network). RAM can affect throughput, while CPU activity is low. Network bandwidth and latency are generally more important factors than either RAM or CPU.
- Actual RAM requirements vary based on how many hosts automation controller manages simultaneously (which is controlled by the forks parameter in the job template or the system ansible.cfg file). To avoid possible resource conflicts, Ansible recommends 1 GB of memory per 10 forks plus 2 GB reservation for automation controller. See Automation controller capacity determination and job impact. If forks is set to 400, 42 GB of memory is recommended.
- Automation controller hosts check if umask is set to 0022. If not, the setup fails. Set umask=0022 to avoid this error.
- A larger number of hosts can be addressed, but if the fork number is less than the total host count, more passes across the hosts are required. You can avoid these RAM limitations by using any of the following approaches:
  - Use rolling updates.
  - Use the provisioning callback system built into automation controller, where each system requesting configuration enters a queue and is processed as quickly as possible.
  - In cases where automation controller is producing or deploying images such as AMIs.
Additional resources
- For more information about obtaining an automation controller subscription, see Attaching your Red Hat Ansible Automation Platform subscription.
- For questions, contact Ansible support through the Red Hat Customer Portal.
2.4. Automation hub system requirements
Automation hub allows you to discover and use new certified automation content from Red Hat Ansible and Certified Partners. On Ansible automation hub, you can discover and manage Ansible Collections, which are supported automation content developed by Red Hat and its partners for use cases such as cloud automation, network automation, and security automation.
Private automation hub
If you install private automation hub from an internal address, and have a certificate which only encompasses the external address, this can result in an installation that cannot be used as a container registry without certificate issues.
To avoid this, use the automationhub_main_url inventory variable with a value such as https://pah.example.com linking to the private automation hub node in the installation inventory file.
This adds the external address to /etc/pulp/settings.py, which implies that you only want to use the external address.
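For example, a minimal sketch of the relevant inventory entry, reusing the external address from the example above:

[all:vars]
automationhub_main_url = 'https://pah.example.com'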
For information about inventory file variables, see Inventory file variables.
2.4.1. High availability automation hub requirements
Before deploying a high availability (HA) automation hub, ensure that you have a shared storage file system installed in your environment and that you have configured your network storage system, if applicable.
2.4.1.2. Installing firewalld for HA hub deployment
If you intend to install an HA automation hub using network storage on the automation hub nodes themselves, you must first install and use firewalld to open the necessary ports required by your shared storage system before running the Ansible Automation Platform installer.
Install and configure firewalld by executing the following commands:
Install the firewalld daemon:

$ dnf install firewalld

Add your network storage under <service> by using the following command (an NFS example follows this procedure):

$ firewall-cmd --permanent --add-service=<service>

Note: For a list of supported services, use the $ firewall-cmd --get-services command.

Reload to apply the configuration:

$ firewall-cmd --reload
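For example, if your shared storage system is NFS (an illustrative assumption; substitute the service your storage system actually uses):

$ firewall-cmd --permanent --add-service=nfs
$ firewall-cmd --reload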
2.5. Event-Driven Ansible controller system requirements
The Event-Driven Ansible controller is a single-node system capable of handling a variable number of long-running processes (such as rulebook activations) on-demand, depending on the number of CPU cores.
If you want to use Event-Driven Ansible 2.5 with a 2.4 automation controller version, see Using Event-Driven Ansible 2.5 with Ansible Automation Platform 2.4.
Use the following minimum requirements to run, by default, a maximum of 12 simultaneous activations:
| Requirement | Required |
|---|---|
| RAM | 16 GB |
| CPUs | 4 |
| Local disk | 60 GB minimum |
- If you are running Red Hat Enterprise Linux 8 and want to set your memory limits, you must have cgroup v2 enabled before you install Event-Driven Ansible. For specific instructions, see the Knowledge-Centered Support (KCS) article, Ansible Automation Platform Event-Driven Ansible controller for Red Hat Enterprise Linux 8 requires cgroupv2.
- When you activate an Event-Driven Ansible rulebook under standard conditions, it uses about 250 MB of memory. However, the actual memory consumption can vary significantly based on the complexity of your rules and the volume and size of the events processed. In scenarios where a large number of events are anticipated or the rulebook complexity is high, conduct a preliminary assessment of resource usage in a staging environment. This ensures that your maximum number of activations is based on the capacity of your resources.
For an example of setting the Event-Driven Ansible controller maximum running activations, see Single automation controller, single automation hub, and single Event-Driven Ansible controller node with external (installer managed) database.
2.6. PostgreSQL requirements
Red Hat Ansible Automation Platform 2.5 uses PostgreSQL 15 and requires external (customer supported) databases to have ICU support. PostgreSQL user passwords are hashed with the SCRAM-SHA-256 secure hashing algorithm before being stored in the database.
To determine if your automation controller instance has access to the database, use the awx-manage check_db command.
- Automation controller data is stored in the database. Database storage increases with the number of hosts managed, number of jobs run, number of facts stored in the fact cache, and number of tasks in any individual job. For example, a playbook that runs every hour (24 times a day) across 250 hosts with 20 tasks stores 24 × 250 × 20 = 120,000 events a day, which is over 800,000 events in the database every week.
- If not enough space is reserved in the database, old job runs and facts must be cleaned up on a regular basis. For more information, see Management Jobs in the Configuring automation execution guide.
PostgreSQL Configurations
Optionally, you can configure the PostgreSQL database as separate nodes that are not managed by the Red Hat Ansible Automation Platform installer. When the Ansible Automation Platform installer manages the database server, it configures the server with defaults that are generally recommended for most workloads. For more information about the settings you can use to improve database performance, see PostgreSQL database configuration and maintenance for automation controller in the Configuring automation execution guide.
Additional resources
For more information about tuning your PostgreSQL server, see the PostgreSQL documentation.
2.6.1. Setting up an external (customer supported) database
- When using an external database with Ansible Automation Platform, you must create and maintain that database. Ensure that you clear your external database when uninstalling Ansible Automation Platform.
- Red Hat Ansible Automation Platform 2.5 uses PostgreSQL 15 and requires the external (customer supported) databases to have ICU support.
- During configuration of an external database, you must check the external database coverage. For more information, see Red Hat Ansible Automation Platform Database Scope of Coverage.
Red Hat Ansible Automation Platform 2.5 uses PostgreSQL 15 and requires the external (customer supported) databases to have ICU support. Use the following procedure to configure an external PostgreSQL compliant database for use with an Ansible Automation Platform component, for example automation controller, Event-Driven Ansible, automation hub, and platform gateway.
Procedure
Connect to a PostgreSQL compliant database server with superuser privileges:

# psql -h <db.example.com> -U superuser -p 5432 -d postgres
Password for user superuser:

Where:

-h hostname, --host=hostname
Specifies the host name of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the UNIX-domain socket.

-d dbname, --dbname=dbname
Specifies the name of the database to connect to. This is equivalent to specifying dbname as the first non-option argument on the command line. The dbname can be a connection string. If so, connection string parameters override any conflicting command line options.

-U username, --username=username
Connects to the database as the user username instead of the default (you must have permission to do so).

Create the user, database, and password with the createDB or administrator role assigned to the user (a minimal SQL sketch follows the inventory example below). For further information, see Database Roles.

Run the installation program. If you are using a PostgreSQL database, the database is owned by the connecting user and must have a createDB or administrator role assigned to it.

Check that you can connect to the created database with the credentials provided in the inventory file.

Check the permission of the user. The user should have the createDB or administrator role.

After you create the PostgreSQL users and databases for each component, add the database credentials and host details in the inventory file under the [all:vars] group:
# Automation controller
pg_host=data.example.com
pg_database=<database name>
pg_port=<port_number>
pg_username=<set your own>
pg_password=<set your own>

# Platform gateway
automationgateway_pg_host=aap.example.org
automationgateway_pg_database=<set your own>
automationgateway_pg_port=<port_number>
automationgateway_pg_username=<set your own>
automationgateway_pg_password=<set your own>

# Automation hub
automationhub_pg_host=data.example.com
automationhub_pg_database=<database_name>
automationhub_pg_port=<port_number>
automationhub_pg_username=<username>
automationhub_pg_password=<password>

# Event-Driven Ansible
automationedacontroller_pg_host=data.example.com
automationedacontroller_pg_database=<database_name>
automationedacontroller_pg_port=<port_number>
automationedacontroller_pg_username=<username>
automationedacontroller_pg_password=<password>
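The following is a minimal SQL sketch of the user and database creation step for a single component, run from the superuser psql session; the names and the password are placeholders, and you repeat the pattern for each component:

CREATE USER awx WITH PASSWORD '<password>' CREATEDB;
CREATE DATABASE awx OWNER awx;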
2.6.1.1. Optional: Enabling mutual TLS (mTLS) authentication
mTLS authentication is disabled by default. To configure each component’s database with mTLS authentication, add the following variables to your inventory file under the [all:vars] group and ensure each component has a different TLS certificate and key:
# Automation controller
pgclient_sslcert=/path/to/awx.cert
pgclient_sslkey=/path/to/awx.key
pg_sslmode=verify-full or verify-ca

# Platform gateway
automationgateway_pgclient_sslcert=/path/to/gateway.cert
automationgateway_pgclient_sslkey=/path/to/gateway.key
automationgateway_pg_sslmode=verify-full or verify-ca

# Automation hub
automationhub_pgclient_sslcert=/path/to/pulp.cert
automationhub_pgclient_sslkey=/path/to/pulp.key
automationhub_pg_sslmode=verify-full or verify-ca

# Event-Driven Ansible
automationedacontroller_pgclient_sslcert=/path/to/eda.cert
automationedacontroller_pgclient_sslkey=/path/to/eda.key
automationedacontroller_pg_sslmode=verify-full or verify-ca
2.6.1.2. Optional: Using custom TLS certificates
By default, the installation program generates self-signed TLS certificates and keys for all Ansible Automation Platform services.
If you want to replace these with your own custom certificate and key, then set the following inventory file variables:
aap_ca_cert_file=<path_to_ca_tls_certificate>
aap_ca_key_file=<path_to_ca_tls_key>
If any of your certificates are signed by a custom Certificate Authority (CA), then you must specify the Certificate Authority’s certificate by using the custom_ca_cert inventory file variable:
custom_ca_cert=<path_to_custom_ca_certificate>
If you have more than one custom CA certificate, combine them into a single file, then reference the combined certificate with the custom_ca_cert inventory file variable.
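One way to produce the combined file is to concatenate the individual PEM certificates (the file names here are illustrative):

$ cat intermediate-ca.crt root-ca.crt > combined-ca.crt

Then set custom_ca_cert=/path/to/combined-ca.crt in the inventory file.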
2.6.2. Enabling the hstore extension for the automation hub PostgreSQL database
Added in Ansible Automation Platform 2.5, the database migration script uses hstore fields to store information, therefore the hstore extension must be enabled in the automation hub PostgreSQL database.
This process is automatic when using the Ansible Automation Platform installer and a managed PostgreSQL server.
If the PostgreSQL database is external, you must enable the hstore extension in the automation hub PostgreSQL database manually before installation.
If the hstore extension is not enabled before installation, a failure is raised during database migration.
Procedure
Check if the extension is available on the PostgreSQL server (automation hub database):

$ psql -d <automation hub database> -c "SELECT * FROM pg_available_extensions WHERE name='hstore'"

Where the default value for <automation hub database> is automationhub.

Example output with hstore available:

  name  | default_version | installed_version |                      comment
--------+-----------------+-------------------+---------------------------------------------------
 hstore | 1.7             |                   | data type for storing sets of (key, value) pairs
(1 row)
Example output with hstore not available:

 name | default_version | installed_version | comment
------+-----------------+-------------------+---------
(0 rows)
On a RHEL based server, the hstore extension is included in the postgresql-contrib RPM package, which is not installed automatically when installing the PostgreSQL server RPM package.

To install the RPM package, use the following command:

dnf install postgresql-contrib
Load the hstore PostgreSQL extension into the automation hub database with the following command:

$ psql -d <automation hub database> -c "CREATE EXTENSION hstore;"
In the following output, the installed_version field lists the hstore extension used, indicating that hstore is enabled:

  name  | default_version | installed_version |                      comment
--------+-----------------+-------------------+---------------------------------------------------
 hstore | 1.7             | 1.7               | data type for storing sets of (key, value) pairs
(1 row)
2.6.3. Benchmarking storage performance for the Ansible Automation Platform PostgreSQL database
Check whether the minimum Ansible Automation Platform PostgreSQL database requirements are met by using the Flexible I/O Tester (FIO) tool. FIO is a tool used to benchmark read and write IOPS performance of the storage system.
Prerequisites
You have installed the Flexible I/O Tester (fio) storage performance benchmarking tool.

To install fio, run the following command as the root user:

# yum -y install fio

You have adequate disk space to store the fio test data log files.

The examples shown in the procedure require at least 60GB disk space in the /tmp directory:
- numjobs sets the number of jobs run by the command.
- size=10G sets the file size generated by each job.

To reduce the amount of test data, adjust the value of the size parameter.
Procedure
Run a random write test:

$ fio --name=write_iops --directory=/tmp --numjobs=3 --size=10G \
  --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 \
  --verify=0 --bs=4K --iodepth=64 --rw=randwrite \
  --group_reporting=1 > /tmp/fio_benchmark_write_iops.log \
  2>> /tmp/fio_write_iops_error.log
Run a random read test:

$ fio --name=read_iops --directory=/tmp \
  --numjobs=3 --size=10G --time_based --runtime=60s --ramp_time=2s \
  --ioengine=libaio --direct=1 --verify=0 --bs=4K --iodepth=64 --rw=randread \
  --group_reporting=1 > /tmp/fio_benchmark_read_iops.log \
  2>> /tmp/fio_read_iops_error.log
Review the results:

In the log files written by the benchmark commands, search for the line beginning with iops. This line shows the minimum, maximum, and average values for the test.

The following example shows the line in the log file for the random read test:

$ cat /tmp/fio_benchmark_read_iops.log
read_iops: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
[…]
iops        : min=50879, max=61603, avg=56221.33, stdev=679.97, samples=360
[…]
Note: The above is a baseline to help evaluate the best-case performance of your systems. Performance can and will vary depending on what else is happening on your systems, storage, or network at the time of testing. Review, monitor, and revisit the log files according to your own business requirements, application workloads, and new demands.
Chapter 3. Installing Red Hat Ansible Automation Platform
Ansible Automation Platform is a modular platform. The platform gateway deploys automation platform components, such as automation controller, automation hub, and Event-Driven Ansible controller.
For more information about the components provided with Ansible Automation Platform, see Red Hat Ansible Automation Platform components in Planning your installation.
There are several supported installation scenarios for Red Hat Ansible Automation Platform. To install Red Hat Ansible Automation Platform, you must edit the inventory file parameters to specify your installation scenario. You can use the enterprise installer as a basis for your own inventory file.
Additional resources
For a comprehensive list of pre-defined variables used in Ansible installation inventory files, see Ansible variables.
3.1. Editing the Red Hat Ansible Automation Platform installer inventory file
You can use the Red Hat Ansible Automation Platform installer inventory file to specify your installation scenario.
Procedure
Navigate to the installer:
[RPM installed package]
$ cd /opt/ansible-automation-platform/installer/
[bundled installer]
$ cd ansible-automation-platform-setup-bundle-<latest-version>
[online installer]
$ cd ansible-automation-platform-setup-<latest-version>
Open the inventory file with a text editor.

Edit the inventory file parameters to specify your installation scenario. You can use one of the supported Installation scenario examples as the basis for your inventory file.
Additional resources
- For a comprehensive list of pre-defined variables used in Ansible installation inventory files, see Inventory file variables.
3.2. Inventory file examples based on installation scenarios
Red Hat supports several installation scenarios for Ansible Automation Platform. You can develop your own inventory files using the example files as a basis, or you can use the example closest to your preferred installation scenario.
3.2.1. Inventory file recommendations based on installation scenarios
Before selecting your installation method for Ansible Automation Platform, review the following recommendations. Familiarity with these recommendations will streamline the installation process.
- Provide a reachable IP address or fully qualified domain name (FQDN) for hosts to ensure that users can sync and install content from automation hub from a different node. The FQDN must not contain either the - or the _ symbols, as it will not be processed correctly. Do not use localhost.
- admin is the default user ID for the initial log in to Ansible Automation Platform and cannot be changed in the inventory file.
- Use of special characters for pg_password is limited. The !, #, 0 and @ characters are supported. Use of other special characters can cause the setup to fail.
- Enter your Red Hat Registry Service Account credentials in registry_username and registry_password to link to the Red Hat container registry.
- The inventory file variables registry_username and registry_password are only required if a non-bundle installer is used.
3.2.1.1. Single platform gateway and automation controller with an external (installer managed) database
Use this example to see what is minimally needed within the inventory file to deploy single instances of platform gateway and automation controller with an external (installer managed) database.
[automationcontroller]
controller.example.com

[automationgateway]
gateway.example.com

[database]
data.example.com

[all:vars]
admin_password='<password>'

redis_mode=standalone

pg_host='data.example.com'
pg_port=5432
pg_database='awx'
pg_username='awx'
pg_password='<password>'
pg_sslmode='prefer'  # set to 'verify-full' for client-side enforced SSL

registry_url='registry.redhat.io'
registry_username='<registry username>'
registry_password='<registry password>'

# Automation Gateway configuration
automationgateway_admin_password=''
automationgateway_pg_host='data.example.com'
automationgateway_pg_port=5432
automationgateway_pg_database='automationgateway'
automationgateway_pg_username='automationgateway'
automationgateway_pg_password=''
automationgateway_pg_sslmode='prefer'

# The main automation gateway URL that clients will connect to (e.g. https://<load balancer host>).
# If not specified, the first node in the [automationgateway] group will be used when needed.
# automationgateway_main_url = ''

# Certificate and key to install in Automation Gateway
# automationgateway_ssl_cert=/path/to/automationgateway.cert
# automationgateway_ssl_key=/path/to/automationgateway.key

# SSL-related variables
# If set, this will install a custom CA certificate to the system trust store.
# custom_ca_cert=/path/to/ca.crt

# Certificate and key to install in nginx for the web UI and API
# web_server_ssl_cert=/path/to/tower.cert
# web_server_ssl_key=/path/to/tower.key

# Server-side SSL settings for PostgreSQL (when we are installing it).
# postgres_use_ssl=False
# postgres_ssl_cert=/path/to/pgsql.crt
# postgres_ssl_key=/path/to/pgsql.key
3.2.1.2. Single platform gateway, automation controller, and automation hub with an external (installer managed) database
Use this example to populate the inventory file to deploy single instances of platform gateway, automation controller, and automation hub with an external (installer managed) database.
[automationcontroller]
controller.example.com

[automationhub]
automationhub.example.com

[automationgateway]
gateway.example.com

[database]
data.example.com

[all:vars]
admin_password='<password>'

redis_mode=standalone

pg_host='data.example.com'
pg_port='5432'
pg_database='awx'
pg_username='awx'
pg_password='<password>'
pg_sslmode='prefer'  # set to 'verify-full' for client-side enforced SSL

registry_url='registry.redhat.io'
registry_username='<registry username>'
registry_password='<registry password>'

automationhub_admin_password= <PASSWORD>

automationhub_pg_host='data.example.com'
automationhub_pg_port=5432
automationhub_pg_database='automationhub'
automationhub_pg_username='automationhub'
automationhub_pg_password=<PASSWORD>
automationhub_pg_sslmode='prefer'

# The default install will deploy a TLS enabled Automation Hub.
# If for some reason this is not the behavior wanted one can
# disable TLS enabled deployment.
#
# automationhub_disable_https = False

# The default install will generate self-signed certificates for the Automation
# Hub service. If you are providing valid certificate via automationhub_ssl_cert
# and automationhub_ssl_key, one should toggle that value to True.
#
# automationhub_ssl_validate_certs = False

# SSL-related variables
# If set, this will install a custom CA certificate to the system trust store.
# custom_ca_cert=/path/to/ca.crt

# Certificate and key to install in Automation Hub node
# automationhub_ssl_cert=/path/to/automationhub.cert
# automationhub_ssl_key=/path/to/automationhub.key

# Automation Gateway configuration
automationgateway_admin_password=''
automationgateway_pg_host=''
automationgateway_pg_port=5432
automationgateway_pg_database='automationgateway'
automationgateway_pg_username='automationgateway'
automationgateway_pg_password=''
automationgateway_pg_sslmode='prefer'

# The main automation gateway URL that clients will connect to (e.g. https://<load balancer host>).
# If not specified, the first node in the [automationgateway] group will be used when needed.
# automationgateway_main_url = ''

# Certificate and key to install in Automation Gateway
# automationgateway_ssl_cert=/path/to/automationgateway.cert
# automationgateway_ssl_key=/path/to/automationgateway.key

# Certificate and key to install in nginx for the web UI and API
# web_server_ssl_cert=/path/to/tower.cert
# web_server_ssl_key=/path/to/tower.key

# Server-side SSL settings for PostgreSQL (when we are installing it).
# postgres_use_ssl=False
# postgres_ssl_cert=/path/to/pgsql.crt
# postgres_ssl_key=/path/to/pgsql.key
3.2.1.3. Single platform gateway, automation controller, automation hub, and Event-Driven Ansible controller with an external (installer managed) database
Use this example to populate the inventory file to deploy single instances of platform gateway, automation controller, automation hub, and Event-Driven Ansible controller with an external (installer managed) database.
- This scenario requires a minimum of automation controller 2.4 for successful deployment of Event-Driven Ansible controller.
- Event-Driven Ansible controller must be installed on a separate server and cannot be installed on the same host as automation hub and automation controller.
- When an Event-Driven Ansible rulebook is activated under standard conditions, it uses approximately 250 MB of memory. However, the actual memory consumption can vary significantly based on the complexity of the rules and the volume and size of the events processed. In scenarios where a large number of events are anticipated or the rulebook complexity is high, conduct a preliminary assessment of resource usage in a staging environment. This ensures that the maximum number of activations is based on the resource capacity. In the following example, the default automationedacontroller_max_running_activations setting is 12, but it can be adjusted to fit your capacity.
[automationcontroller]
controller.example.com

[automationhub]
automationhub.example.com

[automationedacontroller]
automationedacontroller.example.com

[automationgateway]
gateway.example.com

[database]
data.example.com

[all:vars]
admin_password='<password>'

redis_mode=standalone

pg_host='data.example.com'
pg_port='5432'
pg_database='awx'
pg_username='awx'
pg_password='<password>'
pg_sslmode='prefer'  # set to 'verify-full' for client-side enforced SSL

registry_url='registry.redhat.io'
registry_username='<registry username>'
registry_password='<registry password>'

# Automation hub configuration
automationhub_admin_password= <PASSWORD>

automationhub_pg_host='data.example.com'
automationhub_pg_port=5432
automationhub_pg_database='automationhub'
automationhub_pg_username='automationhub'
automationhub_pg_password=<PASSWORD>
automationhub_pg_sslmode='prefer'

# Automation Event-Driven Ansible controller configuration
automationedacontroller_admin_password='<eda-password>'

automationedacontroller_pg_host='data.example.com'
automationedacontroller_pg_port=5432
automationedacontroller_pg_database='automationedacontroller'
automationedacontroller_pg_username='automationedacontroller'
automationedacontroller_pg_password='<password>'

# Keystore file to install in SSO node
# sso_custom_keystore_file='/path/to/sso.jks'

# This install will deploy SSO with sso_use_https=True
# Keystore password is required for https enabled SSO
sso_keystore_password=''

# This install will deploy a TLS enabled Automation Hub.
# If for some reason this is not the behavior wanted one can
# disable TLS enabled deployment.
#
# automationhub_disable_https = False

# The default install will generate self-signed certificates for the Automation
# Hub service. If you are providing valid certificate via automationhub_ssl_cert
# and automationhub_ssl_key, one should toggle that value to True.
#
# automationhub_ssl_validate_certs = False

# SSL-related variables
# If set, this will install a custom CA certificate to the system trust store.
# custom_ca_cert=/path/to/ca.crt

# Certificate and key to install in Automation Hub node
# automationhub_ssl_cert=/path/to/automationhub.cert
# automationhub_ssl_key=/path/to/automationhub.key

# Automation Gateway configuration
automationgateway_admin_password=''
automationgateway_pg_host=''
automationgateway_pg_port=5432
automationgateway_pg_database='automationgateway'
automationgateway_pg_username='automationgateway'
automationgateway_pg_password=''
automationgateway_pg_sslmode='prefer'

# The main automation gateway URL that clients will connect to (e.g. https://<load balancer host>).
# If not specified, the first node in the [automationgateway] group will be used when needed.
# automationgateway_main_url = ''

# Certificate and key to install in Automation Gateway
# automationgateway_ssl_cert=/path/to/automationgateway.cert
# automationgateway_ssl_key=/path/to/automationgateway.key

# Certificate and key to install in nginx for the web UI and API
# web_server_ssl_cert=/path/to/tower.cert
# web_server_ssl_key=/path/to/tower.key

# Server-side SSL settings for PostgreSQL (when we are installing it).
# postgres_use_ssl=False
# postgres_ssl_cert=/path/to/pgsql.crt
# postgres_ssl_key=/path/to/pgsql.key

# Boolean flag used to verify Automation Controller's
# web certificates when making calls from Automation Event-Driven Ansible controller.
# automationedacontroller_controller_verify_ssl = true
#
# Certificate and key to install in Automation Event-Driven Ansible controller node
# automationedacontroller_ssl_cert=/path/to/automationeda.crt
# automationedacontroller_ssl_key=/path/to/automationeda.key
Additional resources
For more information about these inventory variables, refer to the Ansible automation hub variables.
3.2.1.4. High availability automation hub
Use the following examples to populate the inventory file to install a highly available automation hub. This inventory file includes a highly available automation hub with a clustered setup.
You can configure your HA deployment further to enable a high availability deployment of automation hub on SELinux.
Specify database host IP
- Specify the IP address for your database host by using the automationhub_pg_host and automationhub_pg_port inventory variables. For example:

automationhub_pg_host='192.0.2.10'
automationhub_pg_port=5432
- Also specify the IP address for your database host in the [database] section, using the value in the automationhub_pg_host inventory variable:

[database]
192.0.2.10
List all instances in a clustered setup
- If installing a clustered setup, replace localhost ansible_connection=local in the [automationhub] section with the hostname or IP of all instances. For example:

[automationhub]
automationhub1.testing.ansible.com ansible_user=cloud-user
automationhub2.testing.ansible.com ansible_user=cloud-user
automationhub3.testing.ansible.com ansible_user=cloud-user
Next steps
Check that the following directives are present in /etc/pulp/settings.py in each of the private automation hub servers:

USE_X_FORWARDED_PORT = True
USE_X_FORWARDED_HOST = True
If automationhub_main_url is not specified, the first node in the [automationhub] group is used as the default.
3.2.1.5. Enabling a high availability (HA) deployment of automation hub on SELinux
You can configure the inventory file to enable a high availability deployment of automation hub on SELinux. You must create two mount points for /var/lib/pulp and /var/lib/pulp/pulpcore_static, and then assign the appropriate SELinux contexts to each.

You must add the context for /var/lib/pulp/pulpcore_static and run the Ansible Automation Platform installer before adding the context for /var/lib/pulp.
Prerequisites
You have already configured an NFS export on your server.

Note: The NFS share is hosted on an external server and is not a part of the high availability automation hub deployment.
Procedure
Create a mount point at /var/lib/pulp:

$ mkdir /var/lib/pulp/
Open /etc/fstab using a text editor, then add the following values:

srv_rhel8:/data /var/lib/pulp nfs defaults,_netdev,nosharecache,context="system_u:object_r:var_lib_t:s0" 0 0
srv_rhel8:/data/pulpcore_static /var/lib/pulp/pulpcore_static nfs defaults,_netdev,nosharecache,context="system_u:object_r:httpd_sys_content_rw_t:s0" 0 0
Run the reload systemd manager configuration command:
$ systemctl daemon-reload
Run the mount command for /var/lib/pulp:

$ mount /var/lib/pulp
Create a mount point at /var/lib/pulp/pulpcore_static:

$ mkdir /var/lib/pulp/pulpcore_static
Run the mount command:
$ mount -a
With the mount points set up, run the Ansible Automation Platform installer:
$ setup.sh -- -b --become-user root
- After the installation is complete, unmount the /var/lib/pulp/ mount point.
Additional Resources
- See the SELinux Requirements on the Pulp Project documentation for a list of SELinux contexts.
- See the Filesystem Layout for a full description of Pulp folders.
3.2.1.5.1. Configuring pulpcore.service
After you have configured the inventory file and applied the SELinux context, you must configure the Pulp service.
Procedure
With the two mount points set up, shut down the Pulp service to configure pulpcore.service:

$ systemctl stop pulpcore.service
Edit pulpcore.service using systemctl:

$ systemctl edit pulpcore.service
Add the following entry to pulpcore.service to ensure that automation hub services start only after the network has started and the remote mount points have been mounted:

[Unit]
After=network.target var-lib-pulp.mount
Enable remote-fs.target:

$ systemctl enable remote-fs.target
Reboot the system:
$ systemctl reboot
Troubleshooting
A bug in the pulpcore SELinux policies can cause the token authentication public/private keys in /etc/pulp/certs/ to not have the proper SELinux labels, causing the Pulp process to fail. When this occurs, run the following command to temporarily attach the proper labels:

$ chcon system_u:object_r:pulpcore_etc_t:s0 /etc/pulp/certs/token_{private,public}_key.pem
Repeat this command to reattach the proper SELinux labels whenever you relabel your system.
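If you prefer the labels to survive a full relabel, a possible alternative (an assumption based on the file names and SELinux type shown above, not part of the documented fix) is to record a file context rule and reapply it:

$ semanage fcontext -a -t pulpcore_etc_t "/etc/pulp/certs/token_(private|public)_key\.pem"
$ restorecon -v /etc/pulp/certs/token_{private,public}_key.pem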
3.2.1.5.2. Applying the SELinux context
After you have configured the inventory file, you must now apply the context to enable the high availability (HA) deployment of automation hub on SELinux.
Procedure
Shut down the Pulp service:
$ systemctl stop pulpcore.service
Unmount
/var/lib/pulp/pulpcore_static
:$ umount /var/lib/pulp/pulpcore_static
Unmount
/var/lib/pulp/
:$ umount /var/lib/pulp/
Open
/etc/fstab
using a text editor, then replace the existing value for/var/lib/pulp
with the following:srv_rhel8:/data /var/lib/pulp nfs defaults,_netdev,nosharecache,context="system_u:object_r:pulpcore_var_lib_t:s0" 0 0
Run the mount command:
$ mount -a
3.2.1.6. Configuring content signing on private automation hub
To successfully sign and publish Ansible Certified Content Collections, you must configure private automation hub for signing.
Prerequisites
- Your GnuPG key pairs have been securely set up and managed by your organization.
- Your public-private key pair has proper access for configuring content signing on private automation hub.
Procedure
Create a signing script that accepts only a filename.
Note: This script acts as the signing service and must generate an ascii-armored detached gpg signature for that file by using the key specified through the PULP_SIGNING_KEY_FINGERPRINT environment variable.

The script prints out a JSON structure with the following format:
{"file": "filename", "signature": "filename.asc"}
All the file names are relative paths inside the current working directory. The file name must remain the same for the detached signature.
Example:
The following script produces signatures for content:
#!/usr/bin/env bash

FILE_PATH=$1
SIGNATURE_PATH="$1.asc"

ADMIN_ID="$PULP_SIGNING_KEY_FINGERPRINT"
PASSWORD="password"

# Create a detached signature
gpg --quiet --batch --pinentry-mode loopback --yes --passphrase \
   $PASSWORD --homedir ~/.gnupg/ --detach-sign --default-key $ADMIN_ID \
   --armor --output $SIGNATURE_PATH $FILE_PATH

# Check the exit status
STATUS=$?
if [ $STATUS -eq 0 ]; then
   echo {\"file\": \"$FILE_PATH\", \"signature\": \"$SIGNATURE_PATH\"}
else
   exit $STATUS
fi
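You can exercise the script manually before wiring it into the installer. This sketch assumes the script is saved as collection_signing.sh (the name used in the inventory example below); the artifact file name is a placeholder, and the output follows the JSON format described above:

$ export PULP_SIGNING_KEY_FINGERPRINT=<key_fingerprint>
$ ./collection_signing.sh my_namespace-my_collection-1.0.0.tar.gz
{"file": "my_namespace-my_collection-1.0.0.tar.gz", "signature": "my_namespace-my_collection-1.0.0.tar.gz.asc"}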
After you deploy a private automation hub with signing enabled to your Ansible Automation Platform cluster, new UI additions are displayed in collections.
Review the Ansible Automation Platform installer inventory file for options that begin with automationhub_*:

[all:vars]
.
.
.
automationhub_create_default_collection_signing_service = True
automationhub_auto_sign_collections = True
automationhub_require_content_approval = True
automationhub_collection_signing_service_key = /abs/path/to/galaxy_signing_service.gpg
automationhub_collection_signing_service_script = /abs/path/to/collection_signing.sh
The two new keys (automationhub_auto_sign_collections and automationhub_require_content_approval) indicate that the collections must be signed and approved after they are uploaded to private automation hub.
3.2.1.7. Adding a safe plugin variable to Event-Driven Ansible controller
When using redhat.insights_eda or similar plugins to run rulebook activations in Event-Driven Ansible controller, you must add a safe plugin variable to a directory in Ansible Automation Platform. This ensures connection between Event-Driven Ansible controller and the source plugin, and displays port mappings correctly.
Procedure
Create a directory for the safe plugin variable:

$ mkdir -p ./group_vars/automationedacontroller

Create a file within that directory for your new setting (for example, touch ./group_vars/automationedacontroller/custom.yml).

Add the variable automationedacontroller_additional_settings to extend the default settings.yaml template for Event-Driven Ansible controller, and add the SAFE_PLUGINS field with a list of plugins to enable. For example:

automationedacontroller_additional_settings:
  SAFE_PLUGINS:
    - ansible.eda.webhook
    - ansible.eda.alertmanager
Note: You can also extend the automationedacontroller_additional_settings variable beyond SAFE_PLUGINS in the Django configuration file /etc/ansible-automation-platform/eda/settings.yaml.
3.2.2. Setting registry_username and registry_password
When using the registry_username and registry_password variables for an online non-bundled installation, you need to create a new registry service account.
Registry service accounts are named tokens that can be used in environments where credentials will be shared, such as deployment systems.
Procedure
- Go to https://access.redhat.com/terms-based-registry/accounts.
- On the Registry Service Accounts page, click the button to create a new service account.
- Enter a name for the account using only the allowed characters.
- Optionally enter a description for the account.
- Click the create button to submit the form.
- Find the created account in the list by searching for your name in the search field.
- Click the name of the account that you created.
Alternatively, if you know the name of your token, you can go directly to the page by entering the URL:
https://access.redhat.com/terms-based-registry/token/<name-of-your-token>
A token page opens, displaying a generated username (different from the account name) and a token.
If no token is displayed, click the regenerate button. You can also click this button to generate a new username and token.

Copy the username (for example "1234567|testuser") and use it to set the variable registry_username.

Copy the token and use it to set the variable registry_password.
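To confirm the credentials work before you run the installer, you can log in to the registry directly (a quick check, assuming podman is available on the host):

$ podman login registry.redhat.io -u '1234567|testuser' -p '<token>'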
3.2.2.1. Configuring Redis
Ansible Automation Platform offers a centralized Redis instance in both standalone and clustered topologies.

In RPM deployments, the Redis mode is set to cluster by default. You can change this setting in the inventory file [all:vars] section, as in the following example:
[all:vars]
admin_password='<password>'

pg_host='data.example.com'
pg_port='5432'
pg_database='awx'
pg_username='awx'
pg_password='<password>'
pg_sslmode='prefer'  # set to 'verify-full' for client-side enforced SSL

registry_url='registry.redhat.io'
registry_username='<registry username>'
registry_password='<registry password>'

redis_mode=cluster
For more information about Redis, see Caching and queueing system in Planning your installation.
3.3. Running the Red Hat Ansible Automation Platform installer setup script
After you update the inventory file with required parameters, run the installer setup script.
Procedure
Run the setup.sh script:

$ sudo ./setup.sh

If you are running the setup as a non-root user with sudo privileges, you can use the following command instead:
$ ANSIBLE_BECOME_METHOD='sudo' ANSIBLE_BECOME=True ./setup.sh
Installation of Red Hat Ansible Automation Platform will begin.
Additional resources
See Understanding privilege escalation for additional setup.sh script examples.
3.4. Verifying installation of Ansible Automation Platform
Once you can successfully log in to the platform UI, your installation of Red Hat Ansible Automation Platform is complete.
If the installation fails and you are a customer who has purchased a valid license for Red Hat Ansible Automation Platform, contact Ansible through the Red Hat Customer portal.
Additional resources
See Getting started with Ansible Automation Platform for post installation instructions.
3.5. Backing up your Ansible Automation Platform instance
Back up an existing Ansible Automation Platform instance by running the setup.sh script with the backup_dir flag, which saves the content and configuration of your current environment. Use the compression flags use_archive_compression and use_db_compression to compress the backup artifacts before they are sent to the host running the backup operation.
Procedure
- Navigate to your Ansible Automation Platform installation directory.
Run the ./setup.sh script as in the following example:

$ ./setup.sh -e 'backup_dir=/ansible/mybackup' -e 'use_archive_compression=true' -e 'use_db_compression=true' @credentials.yml -b
Where:
- backup_dir: Specifies a directory to save your backup to.
- use_archive_compression=true and use_db_compression=true: Compress the backup artifacts before they are sent to the host running the backup operation.

You can use the following variables to customize the compression:
- For global control of compression for filesystem related backup files, set use_archive_compression=true. For component-level control, set <componentName>_use_archive_compression=true, for example:
  - automationgateway_use_archive_compression=true
  - automationcontroller_use_archive_compression=true
  - automationhub_use_archive_compression=true
  - automationedacontroller_use_archive_compression=true
- For global control of compression for database related backup files, set use_db_compression=true. For component-level control, set <componentName>_use_db_compression=true, for example:
  - automationgateway_use_db_compression=true
  - automationcontroller_use_db_compression=true
  - automationhub_use_db_compression=true
  - automationedacontroller_use_db_compression=true

After a successful backup, a backup file is created at /ansible/mybackup/automation-platform-backup-<date/time>.tar.gz.
3.6. Adding a subscription manifest to Ansible Automation Platform
Before you first log in, you must add your subscription information to the platform. To add a subscription to Ansible Automation Platform, see Obtaining a manifest file in the Access management and authentication guide.
Chapter 4. Horizontal Scaling in Red Hat Ansible Automation Platform
You can set up multi-node deployments for components across Ansible Automation Platform. Whether you require horizontal scaling for Automation Execution, Automation Decisions, or automation mesh, you can scale your deployments based on your organization’s needs.
4.1. Horizontal scaling in Event-Driven Ansible controller
With Event-Driven Ansible controller, you can set up horizontal scaling for your events automation. This multi-node deployment enables you to define as many nodes as you prefer during the installation process. You can also increase or decrease the number of nodes at any time according to your organizational needs.
The following node types are used in this deployment:
- API node type
- Responds to the HTTP REST API of Event-Driven Ansible controller.
- Worker node type
- Runs an Event-Driven Ansible worker, which is the component of Event-Driven Ansible that not only manages projects and activations, but also executes the activations themselves.
- Hybrid node type
- Is a combination of the API node and the worker node.
The following example shows how you can set up an inventory file for horizontal scaling of Event-Driven Ansible controller on Red Hat Enterprise Linux VMs by using the host group name [automationedacontroller] and the node type variable eda_node_type:

[automationedacontroller]
3.88.116.111 routable_hostname=automationedacontroller-api.example.com eda_node_type=api

# worker node
3.88.116.112 routable_hostname=automationedacontroller-api.example.com eda_node_type=worker
4.1.1. Sizing and scaling guidelines
API nodes process user requests (interactions with the UI or API) while worker nodes process the activations and other background tasks required for Event-Driven Ansible to function properly. The number of API nodes you require correlates to the desired number of users of the application and the number of worker nodes correlates to the desired number of activations you want to run.
Since activations are variable and controlled by worker nodes, the supported approach for scaling is to use separate API and worker nodes instead of hybrid nodes due to the efficient allocation of hardware resources by worker nodes. By separating the nodes, you can scale each type independently based on specific needs, leading to better resource utilization and cost efficiency.
An example of an instance in which you might consider scaling up your node deployment is when you want to deploy Event-Driven Ansible for a small group of users who will run a large number of activations. In this case, one API node is adequate, but if you require more, you can scale up to three additional worker nodes.
To set up a multi-node deployment, follow the procedure in Setting up horizontal scaling for Event-Driven Ansible controller.
4.1.2. Setting up horizontal scaling for Event-Driven Ansible controller
To scale up (add more nodes) or scale down (remove nodes), you must update the content of the inventory file to add or remove nodes and rerun the installation program.
Procedure
Update the inventory to add two more worker nodes:

[automationedacontroller]
3.88.116.111 routable_hostname=automationedacontroller-api.example.com eda_node_type=api
3.88.116.112 routable_hostname=automationedacontroller-api.example.com eda_node_type=worker

# two more worker nodes
3.88.116.113 routable_hostname=automationedacontroller-api.example.com eda_node_type=worker
3.88.116.114 routable_hostname=automationedacontroller-api.example.com eda_node_type=worker
- Re-run the installer.
Chapter 5. Disconnected installation
If you are not connected to the internet or do not have access to online repositories, you can install Red Hat Ansible Automation Platform without an active internet connection.
5.1. Prerequisites
Before installing Ansible Automation Platform on a disconnected network, you must meet the following prerequisites:
- A subscription manifest that you can upload to the platform. For more information, see Obtaining a manifest file.
- The Ansible Automation Platform setup bundle is downloaded from the Customer Portal.
- The DNS records for the automation controller and private automation hub servers are created.
5.2. Ansible Automation Platform installation on disconnected RHEL
You can install Ansible Automation Platform without an internet connection by using the installer-managed database located on the automation controller. The setup bundle is recommended for disconnected installation because it includes additional components that make installing Ansible Automation Platform easier in a disconnected environment, such as the Ansible Automation Platform RPM packages and the default execution environment (EE) images.
Additional Resources
For a comprehensive list of pre-defined variables used in Ansible installation inventory files, see Ansible variables.
5.2.1. System requirements for disconnected installation
Ensure that your system has all the hardware requirements before performing a disconnected installation of Ansible Automation Platform. You can find these in system requirements.
5.2.2. RPM Source
RPM dependencies for Ansible Automation Platform that come from the BaseOS and AppStream repositories are not included in the setup bundle. To add these dependencies, you must first obtain access to BaseOS and AppStream repositories. Use Satellite to sync repositories and add dependencies. If you prefer an alternative tool, you can choose between the following options:
- Reposync
- The RHEL Binary DVD
The RHEL Binary DVD method requires the DVD for supported versions of RHEL. See Red Hat Enterprise Linux Life Cycle for information on which versions of RHEL are currently supported.
5.3. Synchronizing RPM repositories using reposync
To perform a reposync, you need a RHEL host that has access to the internet. After the repositories are synced, you can move them to the disconnected network and host them from a web server.
When downloading RPM packages, ensure that you use the repositories that match the RHEL version running on your target hosts.
Procedure
Attach the BaseOS and AppStream required repositories:
# subscription-manager repos \
    --enable rhel-9-for-x86_64-baseos-rpms \
    --enable rhel-9-for-x86_64-appstream-rpms
Perform the reposync:
# dnf install yum-utils
# reposync -m --download-metadata --gpgcheck \
    -p /path/to/download
Use reposync with --download-metadata and without --newest-only. See RHEL 8 Reposync.
- If you are not using --newest-only, the repositories that you download can be very large and may take an extended amount of time to sync.
- If you are using --newest-only, only the newest version of each package is downloaded, so the download is smaller but may still take time to sync.
After the reposync is completed, your repositories are ready to use with a web server.
- Move the repositories to your disconnected network.
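For example, one way to transfer the synced repositories to a host on the disconnected network, assuming SSH access to that host (the user name and paths are placeholders):

$ rsync -av /path/to/download/ user@<webserver_fqdn>:/path/to/repos/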
5.4. Creating a new web server to host repositories
If you do not have an existing web server to host your repositories, you can create one with your synced repositories.
Procedure
Install prerequisites:
$ sudo dnf install httpd
Configure httpd to serve the repo directory by creating /etc/httpd/conf.d/repository.conf:

DocumentRoot '/path/to/repos'

<LocationMatch "^/+$">
    Options -Indexes
    ErrorDocument 403 /.noindex.html
</LocationMatch>

<Directory '/path/to/repos'>
    Options All Indexes FollowSymLinks
    AllowOverride None
    Require all granted
</Directory>
Ensure that the directory is readable by the apache user:
$ sudo chown -R apache /path/to/repos
Configure SELinux:
$ sudo semanage fcontext -a -t httpd_sys_content_t "/path/to/repos(/.*)?"
$ sudo restorecon -ir /path/to/repos
Enable httpd:
$ sudo systemctl enable --now httpd.service
Open the firewall:
$ sudo firewall-cmd --zone=public --add-service=http --add-service=https --permanent
$ sudo firewall-cmd --reload
On each Ansible Automation Platform host, add a repo file at /etc/yum.repos.d/local.repo, and add the optional repos if needed:
[Local-BaseOS]
name=Local BaseOS
baseurl=http://<webserver_fqdn>/rhel-9-for-x86_64-baseos-rpms
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[Local-AppStream]
name=Local AppStream
baseurl=http://<webserver_fqdn>/rhel-9-for-x86_64-appstream-rpms
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
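As a quick sanity check, assuming the local.repo file above is in place, you can confirm that the host resolves the new repositories:

$ sudo dnf repolist
$ sudo dnf makecache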
5.5. Accessing RPM repositories from a locally mounted DVD
If you plan to access the repositories from the RHEL binary DVD, you must first set up a local repository.
Procedure
Mount the DVD or ISO:
DVD
# mkdir /media/rheldvd && mount /dev/sr0 /media/rheldvd
ISO
# mkdir /media/rheldvd && mount -o loop rhel-8.6-x86_64-dvd.iso /media/rheldvd
Create a yum repo file at /etc/yum.repos.d/dvd.repo:

[dvd-BaseOS]
name=DVD for RHEL - BaseOS
baseurl=file:///media/rheldvd/BaseOS
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[dvd-AppStream]
name=DVD for RHEL - AppStream
baseurl=file:///media/rheldvd/AppStream
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
Import the gpg key:
# rpm --import /media/rheldvd/RPM-GPG-KEY-redhat-release
If the key is not imported, you see an error similar to the following:
# Curl error (6): Couldn't resolve host name for https://www.redhat.com/security/data/fd431d51.txt [Could not resolve host: www.redhat.com]
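As a quick check that the DVD repositories are usable, assuming the dvd.repo file above, list only those repositories:

# dnf --disablerepo="*" --enablerepo="dvd-BaseOS,dvd-AppStream" repolist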
Additional resources
For further detail on setting up a repository, see Need to set up yum repository for locally-mounted DVD on Red Hat Enterprise Linux 8.
5.6. Downloading and installing the Ansible Automation Platform setup bundle
Use the setup bundle to download Ansible Automation Platform for disconnected installations. This bundle includes the RPM content for Ansible Automation Platform and the default execution environment images that are uploaded to your private automation hub during the installation process.
Procedure
- Download the Ansible Automation Platform setup bundle package by navigating to the Red Hat Ansible Automation Platform download page and clicking the download link for the Ansible Automation Platform 2.5 Setup Bundle.
On the control node, untar the bundle:
$ tar xvf ansible-automation-platform-setup-bundle-2.5-1.tar.gz
$ cd ansible-automation-platform-setup-bundle-2.5-1
- Edit the inventory file to include variables based on your host names and desired password values.
See section 3.2 Inventory file examples based on installation scenarios for a list of examples, and choose the one that best fits your scenario.
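As an illustration only, a minimal sketch of the inventory settings that point the installer at the extracted bundle. The variable names bundle_install and bundle_dir are described in Appendix A.6, the path is a placeholder, and depending on your installer version the bundle's setup script may set these for you:

[all:vars]
bundle_install=true
bundle_dir=/path/to/ansible-automation-platform-setup-bundle-2.5-1/bundle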
5.7. Completing post installation tasks
After you have completed the installation of Ansible Automation Platform, ensure that automation hub and automation controller are deployed properly.
Before your first login, you must add your subscription information to the platform. To obtain your subscription information in uploadable form, see Obtaining a manifest file in Access management and authentication.
After you have obtained your subscription manifest, see Getting started with Ansible Automation Platform for instructions on how to upload your subscription information.
Now that you have successfully installed Ansible Automation Platform, see the Ansible Automation Platform guides for your next steps in beginning to use its features.
Chapter 6. Troubleshooting RPM installation of Ansible Automation Platform
Use this information to troubleshoot your RPM installation of Ansible Automation Platform.
6.1. Gathering Ansible Automation Platform logs
With the sos
utility, you can collect configuration, diagnostic, and troubleshooting data, and provide those files to Red Hat Technical Support. An sos
report is a common starting point for Red Hat technical support engineers when analyzing a service request for Ansible Automation Platform.
As part of the troubleshooting with Red Hat Support, you can collect the sos
report for each node in your RPM installation of Ansible Automation Platform using the installation inventory and the installer.
Procedure
Access the installer folder that contains the inventory file, and run the installer setup script with the following command:
$ ./setup.sh -s
This command connects to each node present in the inventory, installs the sos tool, and generates new logs.

Note: If you are running the setup as a non-root user with sudo privileges, you can use the following command:
$ ANSIBLE_BECOME_METHOD='sudo' ANSIBLE_BECOME=True ./setup.sh -s
Optional: If required, change the location of the sos report files.

The sos report files are copied to the /tmp folder for the current server. To change the location, specify the new location by using the following command:

$ ./setup.sh -e 'target_sos_directory=/path/to/files' -s
Where target_sos_directory=/path/to/files specifies the destination directory where the sos report is saved. In this case, the sos report is stored in the directory /path/to/files.

Gather the files described in the playbook output and share them with the support engineer, or directly upload the sos report to Red Hat.

To create an sos report with additional information or directly upload the data to Red Hat, use the following command:

$ ./setup.sh -e 'case_number=0000000' -e 'clean=true' -e 'upload=true' -s
Table 6.1. Parameter reference

Parameter | Description | Default value |
---|---|---|
`case_number` | Specifies the support case number that you want to associate with the sos report. | - |
`clean` | Obfuscates sensitive data that might be present in the sos report. | `false` |
`upload` | Automatically uploads the sos report data to Red Hat. | `false` |
To learn more about the sos report tool, see the KCS article What is an sos report and how to create one in Red Hat Enterprise Linux?
Appendix A. Inventory file variables
The following tables contain information about the variables used in Ansible Automation Platform’s installation inventory
files. The tables include the variables that you can use for RPM-based installation and container-based installation.
A.1. Ansible variables
The following variables control how Ansible Automation Platform interacts with remote hosts.
For more information about variables specific to certain plugins, see the documentation for Ansible.Builtin.
For a list of global configuration options, see Ansible Configuration Settings.
Variable | Description |
---|---|
`ansible_connection` | The connection plugin used for the task on the target host. This can be the name of any of Ansible's connection plugins. SSH protocol types are `smart`, `ssh`, or `paramiko`. Default = `smart` |
`ansible_host` | The IP address or name of the target host to use instead of `inventory_hostname`. |
`ansible_password` | The password to authenticate to the host. Do not store this variable in plain text. Always use a vault. For more information, see Keep vaulted variables safely visible. |
`ansible_port` | The connection port number. The default for SSH is `22`. |
`ansible_scp_extra_args` | This setting is always appended to the default `scp` command line. |
`ansible_sftp_extra_args` | This setting is always appended to the default `sftp` command line. |
`ansible_shell_executable` | This sets the shell that the Ansible controller uses on the target machine and overrides the executable in `ansible.cfg`. |
`ansible_shell_type` | The shell type of the target system. Do not use this setting unless you have set the `ansible_shell_executable` to a non-Bourne (sh) compatible shell. |
`ansible_ssh_common_args` | This setting is always appended to the default command line for `sftp`, `scp`, and `ssh`. |
`ansible_ssh_executable` | This setting overrides the default behavior to use the system `ssh`. |
`ansible_ssh_extra_args` | This setting is always appended to the default `ssh` command line. |
`ansible_ssh_pipelining` | Determines if SSH pipelining is used. This can override the `pipelining` setting in `ansible.cfg`. |
`ansible_ssh_private_key_file` | Private key file used by SSH. Useful if using multiple keys and you do not want to use an SSH agent. |
`ansible_user` | The user name to use when connecting to the host. Do not change this variable unless you must connect as a different user. |
`inventory_hostname` | This variable takes the hostname of the machine from the inventory script or the Ansible configuration file. You cannot set the value of this variable. Because the value is taken from the configuration file, the actual runtime hostname value can vary from what is returned by this variable. |
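For example, a single inventory host entry that combines several of these variables (the host name, user, and key path are placeholders):

aap.example.com ansible_user=ansible ansible_port=22 ansible_ssh_private_key_file=/home/ansible/.ssh/id_rsa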
A.2. Automation hub variables
RPM variable name | Container variable name | Description | Required or optional | Default |
---|---|---|---|---|
|
|
Automation hub administrator password. | Required | |
|
Set the existing token for the installation program. | Optional | ||
|
|
If a collection signing service is enabled, collections are not signed automatically by default. | Optional |
|
|
Ansible automation hub provides artifacts in | Optional |
| |
|
| Maximum allowed size for data sent to automation hub through NGINX. | Optional |
|
| Denote whether or not the collection download count should be displayed in the UI. | Optional |
| |
|
Controls the type of content to upload when | Optional | Both certified and validated are enabled by default. | |
|
| Path to the collection signing key file. | Required if a collection signing service is enabled. | |
|
Denote whether or not to run the command | Optional |
| |
|
| Path to the container signing key file. | Required if a container signing service is enabled. | |
|
|
Set this variable to | Optional |
|
|
|
Set this variable to | Optional |
|
|
|
Controls whether HTTP Strict Transport Security (HSTS) is enabled or disabled for automation hub. | Optional |
|
|
|
Controls whether HTTPS is enabled or disabled for automation hub. | Optional |
|
|
Controls whether logging is enabled or disabled at | Optional |
| |
|
Controls whether read-only access is enabled or disabled for unauthorized users viewing collections or namespaces for automation hub. | Optional |
| |
|
Controls whether or not unauthorized users can download read-only collections from automation hub. | Optional |
| |
|
| The firewall zone where automation hub related firewall rules are applied. This controls which networks can access automation hub based on the zone’s trust level. | Optional |
RPM = no default set. Container = |
|
Denote whether or not to require the change of the default administrator password for automation hub during installation. | Optional |
| |
|
|
Dictionary of settings to pass to the | Optional | |
|
Denote whether the web certificate sources are local to the installation program ( | Optional |
The value defined in | |
|
|
Controls whether client certificate authentication is enabled or disabled on the automation hub PostgreSQL database. | Optional |
|
|
| Name of the PostgreSQL database used by automation hub. | Optional |
RPM = |
|
| Hostname of the PostgreSQL database used by automation hub. | Required |
RPM = |
|
|
Password for the automation hub PostgreSQL database user. | Optional | |
|
| Port number for the PostgreSQL database used by automation hub. | Optional |
|
|
|
Controls the SSL/TLS mode to use when automation hub connects to the PostgreSQL database. | Optional |
|
|
| Username for the automation hub PostgreSQL database user. | Optional |
RPM = |
|
| Path to the PostgreSQL SSL/TLS certificate file for automation hub. | Required if using client certificate authentication. | |
|
| Path to the PostgreSQL SSL/TLS key file for automation hub. | Required if using client certificate authentication. | |
|
Denote whether the PostgreSQL client certificate sources are local to the installation program ( | Optional |
The value defined in | |
|
Controls whether content signing is enabled or disabled for automation hub. | Optional |
| |
|
Controls whether or not existing signing keys should be restored from a backup. | Optional |
| |
|
|
Controls whether or not pre-loading of collections is enabled. | Optional |
|
|
| Path to the SSL/TLS certificate file for automation hub. | Optional | |
|
| Path to the SSL/TLS key file for automation hub. | Optional | |
|
|
Denote whether the automation hub provided certificate files are local to the installation program ( | Optional |
|
|
|
Controls whether archive compression is enabled or disabled for automation hub. You can control this functionality globally by using | Optional |
|
|
|
Controls whether database compression is enabled or disabled for automation hub. You can control this functionality globally by using | Optional |
|
|
| List of additional NGINX headers to add to automation hub’s NGINX configuration. | Optional |
|
|
Controls whether or not a token is generated for automation hub during installation. By default, a token is automatically generated during a fresh installation. | Optional |
| |
| Defines additional settings for use by automation hub during installation. For example: hub_extra_settings: - setting: REDIRECT_IS_HTTPS value: True | Optional |
| |
|
| Maximum duration (in seconds) that HTTP Strict Transport Security (HSTS) is enforced for automation hub. | Optional |
|
|
| Secret key value used by automation hub to sign and encrypt data. | Optional | |
| Azure blob storage account key. | Required if using an Azure blob storage backend. | ||
| Account name associated with the Azure blob storage. | Required when using an Azure blob storage backend. | ||
| Name of the Azure blob storage container. | Optional |
| |
|
Defines extra parameters for the Azure blob storage backend. | Optional |
| |
| Password for the automation content collection signing service. | Required if the collection signing service is protected by a passphrase. | ||
| Service for signing collections. | Optional |
| |
| Password for the automation content container signing service. | Required if the container signing service is protected by a passphrase. | ||
| Service for signing containers. | Optional |
| |
| Port number that automation hub listens on for HTTP requests. | Optional |
| |
| Port number that automation hub listens on for HTTPS requests. | Optional |
| |
|
| Protocols that automation hub will support when handling HTTPS traffic. | Optional |
RPM = |
| UNIX socket used by automation hub to connect to the PostgreSQL database. | Optional | ||
| AWS S3 access key. | Required if using an AWS S3 storage backend. | ||
| Name of the AWS S3 storage bucket. | Optional |
| |
|
Used to define extra parameters for the AWS S3 storage backend. | Optional |
| |
| AWS S3 secret key. | Required if using an AWS S3 storage backend. | ||
| Mount options for the Network File System (NFS) share. | Optional |
| |
| Path to the Network File System (NFS) share with read, write, and execute (RWX) access. |
Required if installing more than one instance of automation hub with a | ||
|
Automation hub storage backend type. | Optional |
| |
| Number of automation hub workers. | Optional |
|
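As an illustration only, a minimal inventory sketch for automation hub in an RPM-based installation. The variable names automationhub_admin_password, automationhub_pg_host, and automationhub_pg_password are assumptions based on common RPM installer naming; confirm them against your installer's inventory file before use:

[automationhub]
hub.example.com

[all:vars]
automationhub_admin_password=<password>
automationhub_pg_host=db.example.com
automationhub_pg_password=<password>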
A.3. Automation controller variables
RPM variable name | Container variable name | Description | Required or optional | Default |
---|---|---|---|---|
|
| Email address used by Django for the admin user for automation controller. | Optional |
|
|
|
Automation controller administrator password. | Required | |
|
| Username used to identify and create the administrator user in automation controller. | Optional |
|
|
| Maximum allowed size for data sent to automation controller through NGINX. | Optional |
|
|
|
Controls whether archive compression is enabled or disabled for automation controller. You can control this functionality globally by using | Optional |
|
|
|
Controls whether database compression is enabled or disabled for automation controller. You can control this functionality globally by using | Optional |
|
|
|
Controls whether client certificate authentication is enabled or disabled on the automation controller PostgreSQL database. | Optional |
|
|
| The firewall zone where automation controller related firewall rules are applied. This controls which networks can access automation controller based on the zone’s trust level. | Optional |
|
|
Denote whether the web certificate sources are local to the installation program ( | Optional |
The value defined in | |
|
Denote whether the PostgreSQL client certificate sources are local to the installation program ( | Optional |
The value defined in | |
|
|
Denote whether the automation controller provided certificate files are local to the installation program ( | Optional |
|
|
|
Controls whether HTTP Strict Transport Security (HSTS) is enabled or disabled for automation controller. | Optional |
|
|
|
Controls whether HTTPS is enabled or disabled for automation controller. | Optional |
|
|
| Maximum duration (in seconds) that HTTP Strict Transport Security (HSTS) is enforced for automation controller. | Optional |
|
|
| Port number that automation controller listens on for HTTP requests. | Optional |
RPM = |
|
| Port number that automation controller listens on for HTTPS requests. | Optional |
RPM = |
|
| Protocols that automation controller supports when handling HTTPS traffic. | Optional |
RPM = |
|
| List of additional NGINX headers to add to automation controller’s NGINX configuration. | Optional |
|
|
The status of a node or group of nodes. | Optional |
| |
|
See |
For the
For the
| Optional |
For |
|
See |
Used to indicate which nodes a specific host or group connects to. Wherever this variable is defined, an outbound connection to the specific host or group is established. | Optional | |
|
| Name of the PostgreSQL database used by automation controller. | Optional |
|
|
| Hostname of the PostgreSQL database used by automation controller. | Required | |
|
|
Password for the automation controller PostgreSQL database user. | Required if not using client certificate authentication. | |
|
| Port number for the PostgreSQL database used by automation controller. | Optional |
|
|
|
Controls the SSL/TLS mode to use when automation controller connects to the PostgreSQL database. | Optional |
|
|
| Username for the automation controller PostgreSQL database user. | Optional |
|
|
| Path to the PostgreSQL SSL/TLS certificate file for automation controller. | Required if using client certificate authentication. | |
|
| Path to the PostgreSQL SSL/TLS key file for automation controller. | Required if using client certificate authentication. | |
|
Number of hours worth of events table partitions to pre-create before starting a backup to avoid | Optional | 3 | |
|
|
Number of requests | Optional |
|
|
| Path to the SSL/TLS certificate file for automation controller. | Optional | |
|
| Path to the SSL/TLS key file for automation controller. | Optional | |
| Number of event workers that handle job-related events inside automation controller. | Optional |
| |
| Defines additional settings for use by automation controller during installation. For example: controller_extra_settings: - setting: USE_X_FORWARDED_HOST value: true | Optional |
| |
|
Path to the automation controller license file. | |||
| Memory allocation for automation controller. | Optional |
| |
| UNIX socket used by automation controller to connect to the PostgreSQL database. | Optional | ||
| Secret key value used by automation controller to sign and encrypt data. | Optional |
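Similarly, a minimal sketch for automation controller in an RPM-based installation, assuming the RPM variable names admin_password, pg_host, and pg_password; confirm them against your installer's inventory file before use:

[automationcontroller]
controller.example.com

[all:vars]
admin_password=<password>
pg_host=db.example.com
pg_password=<password>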
A.4. Database variables
RPM variable name | Container variable name | Description | Required or optional | Default |
---|---|---|---|---|
|
| Port number for the PostgreSQL database. | Optional |
|
|
| The firewall zone where PostgreSQL related firewall rules are applied. This controls which networks can access PostgreSQL based on the zone’s trust level. | Optional |
RPM = no default set. Container = |
|
|
Maximum number of concurrent connections to the database if you are using an installer-managed database. | Optional |
|
|
| Path to the PostgreSQL SSL/TLS certificate file. | Optional | |
|
| Path to the PostgreSQL SSL/TLS key file. | Optional | |
|
| Controls whether SSL/TLS is enabled or disabled for the PostgreSQL database. | Optional |
|
| Database name used for connections to the PostgreSQL database server. | Optional |
| |
|
Password for the PostgreSQL admin user. |
Required if using | ||
|
Username for the PostgreSQL admin user. | Optional |
| |
| Memory allocation available (in MB) for caching data. | Optional | ||
|
Controls whether or not to keep databases during uninstall. | Optional |
| |
| Destination for server log output. | Optional |
| |
| The algorithm for encrypting passwords. | Optional |
| |
| Memory allocation (in MB) for shared memory buffers. | Optional | ||
|
Denote whether the PostgreSQL provided certificate files are local to the installation program ( | Optional |
| |
|
Controls whether archive compression is enabled or disabled for PostgreSQL. You can control this functionality globally by using | Optional |
|
A.5. Event-Driven Ansible controller variables
RPM variable name | Container variable name | Description | Required or optional | Default |
---|---|---|---|---|
|
| Number of workers used for ansible-rulebook activation pods in Event-Driven Ansible. | Optional |
RPM = (# of cores or threads) * 2 + 1. Container = |
|
| Email address used by Django for the admin user for Event-Driven Ansible. | Optional |
|
|
|
Event-Driven Ansible administrator password. Use of special characters for this variable is limited. The password can include any printable ASCII character except | Required | |
|
| Username used to identify and create the administrator user in Event-Driven Ansible. | Optional |
|
| Number of workers for handling the API served through Gunicorn on worker nodes. | Optional |
| |
|
Denote whether the cache cert sources are local to the installation program ( | Optional |
| |
|
Controls whether or not to regenerate Event-Driven Ansible client certificates for the platform cache. Set to | Optional |
| |
|
| Number of workers used in Event-Driven Ansible for application work. | Optional | Number of cores or threads |
|
|
Controls whether HTTP Strict Transport Security (HSTS) is enabled or disabled for Event-Driven Ansible. Set this variable to | Optional |
|
|
|
Controls whether HTTPS is enabled or disabled for Event-Driven Ansible. Set this variable to | Optional |
|
|
| API prefix path used for Event-Driven Ansible event-stream through platform gateway. | Optional |
|
|
| The firewall zone where Event-Driven Ansible related firewall rules are applied. This controls which networks can access Event-Driven Ansible based on the zone’s trust level. | Optional |
RPM = no default set. Container = |
| Number of workers for handling event streaming for Event-Driven Ansible. | Optional |
| |
|
| Number of workers for handling the API served through Gunicorn. | Optional | (Number of cores or threads) * 2 + 1 |
|
| Port number that Event-Driven Ansible listens on for HTTP requests. | Optional |
RPM = |
|
| Port number that Event-Driven Ansible listens on for HTTPS requests. | Optional |
RPM = |
|
| Number of maximum activations running concurrently per node. This is an integer that must be greater than 0. | Optional |
|
|
Denote whether the web cert sources are local to the installation program ( | Optional |
| |
|
|
Controls whether client certificate authentication is enabled or disabled on the Event-Driven Ansible PostgreSQL database. Set this variable to | Optional |
|
|
| Name of the PostgreSQL database used by Event-Driven Ansible. | Optional |
RPM = |
|
| Hostname of the PostgreSQL database used by Event-Driven Ansible. | Required | |
|
|
Password for the Event-Driven Ansible PostgreSQL database user. Use of special characters for this variable is limited. The | Required if not using client certificate authentication. | |
|
| Port number for the PostgreSQL database used by Event-Driven Ansible. | Optional |
|
|
|
Determines the level of encryption and authentication for client server connections. Valid options include | Optional |
|
|
| Username for the Event-Driven Ansible PostgreSQL database user. | Optional |
RPM = |
|
| Path to the PostgreSQL SSL/TLS certificate file for Event-Driven Ansible. | Required if using client certificate authentication. | |
|
| Path to the PostgreSQL SSL/TLS key file for Event-Driven Ansible. | Required if using client certificate authentication. | |
|
Denote whether the PostgreSQL client cert sources are local to the installation program ( | Optional |
| |
|
|
URL for connecting to the event stream. The URL must start with the | Optional | |
|
| Hostname of the Redis host used by Event-Driven Ansible. | Optional |
First node in the |
|
| Password for Event-Driven Ansible Redis. | Optional | Randomly generated string |
|
| Port number for the Redis host for Event-Driven Ansible. | Optional |
RPM = The value defined in platform gateway’s implementation ( |
|
| Username for Event-Driven Ansible Redis. | Optional |
|
|
| Secret key value used by Event-Driven Ansible to sign and encrypt data. | Optional | |
|
| Path to the SSL/TLS certificate file for Event-Driven Ansible. | Optional | |
|
| Path to the SSL/TLS key file for Event-Driven Ansible. | Optional | |
|
|
Denote whether the Event-Driven Ansible provided certificate files are local to the installation program ( | Optional |
|
|
List of host addresses in the form: | Optional |
| |
|
|
Controls whether archive compression is enabled or disabled for Event-Driven Ansible. You can control this functionality globally by using | Optional |
|
|
|
Controls whether database compression is enabled or disabled for Event-Driven Ansible. You can control this functionality globally by using | Optional |
|
|
| List of additional NGINX headers to add to Event-Driven Ansible’s NGINX configuration. | Optional |
|
|
Controls whether or not to perform SSL verification for the Daphne WebSocket used by Podman to communicate from the pod to the host. Set to | Optional |
| |
|
|
Event-Driven Ansible node type. Valid options include | Optional |
|
|
Controls whether debug mode is enabled or disabled for Event-Driven Ansible. Set to | Optional |
| |
| Defines additional settings for use by Event-Driven Ansible during installation. For example: eda_extra_settings: - setting: RULEBOOK_READINESS_TIMEOUT_SECONDS value: 120 | Optional |
| |
| Maximum allowed size for data sent to Event-Driven Ansible through NGINX. | Optional |
| |
| Maximum duration (in seconds) that HTTP Strict Transport Security (HSTS) is enforced for Event-Driven Ansible. | Optional |
| |
|
| Protocols that Event-Driven Ansible supports when handling HTTPS traffic. | Optional |
RPM = |
| UNIX socket used by Event-Driven Ansible to connect to the PostgreSQL database. | Optional | ||
|
| Controls whether TLS is enabled or disabled for Event-Driven Ansible Redis. Set this variable to true to disable TLS. | Optional |
|
| Path to the Event-Driven Ansible Redis certificate file. | Optional | ||
| Path to the Event-Driven Ansible Redis key file. | Optional | ||
| List of plugins that are allowed to run within Event-Driven Ansible. For more information about the usage of this variable, see Adding a safe plugin variable to Event-Driven Ansible controller. | Optional |
|
A.6. General variables
RPM variable name | Container variable name | Description | Required or optional | Default |
---|---|---|---|---|
|
| Path to the user provided CA certificate file used to generate SSL/TLS certificates for all Ansible Automation Platform services. For more information, see Optional: Using custom TLS certificates. | Optional | |
|
|
Denote whether the CA certificate files are local to the installation program ( | Optional |
|
| Bit size of the internally managed CA certificate private key. | Optional |
| |
|
|
Path to the key file for the CA certificate provided in | Optional | |
| Cipher used for signing the internally managed CA certificate private key. | Optional |
| |
| Denotes whether or not to re-initiate the internally managed CA certificate key pair. | Optional |
| |
| Bit size of the component key pair managed by the internal CA. | Optional |
| |
| Denotes whether or not to re-initiate the component key pair managed by the internal CA. | Optional |
| |
|
A list of additional SAN records for signing a service. Assign these to components in the inventory file as host variables rather than group or all variables. All strings must also contain their corresponding SAN option prefix such as | Optional |
| |
|
Directory local to | Optional |
The value defined in | |
|
| Directory used to store backup files. | Optional |
RPM = |
| Prefix used for the file backup name for the final backup file. | Optional |
| |
|
|
Controls whether or not to perform an offline or bundled installation. Set this variable to | Optional |
|
|
| Path to the bundle directory used when performing a bundle install. |
Required if |
RPM = |
|
| Path to the custom CA certificate file. This is required if any of the TLS certificates you manually provided are signed by a custom CA. For more information, see Optional: Using custom TLS certificates. | Optional | |
|
The default install registers the node to the Red Hat Insights for Red Hat Ansible Automation Platform for the Red Hat Ansible Automation Platform Service if the node is registered with Subscription Manager. Set to | Optional |
| |
|
|
Password credential for access to the registry source defined in |
RPM = Required if you need a password to access | |
|
| URL of the registry source from which to pull execution environment images. | Optional |
|
|
|
Username credential for access to the registry source defined in |
RPM = Required if you need a password to access | |
|
| Controls whether SSL/TLS certificate verification is enabled or disabled when making HTTPS requests. | Optional |
|
| Path to the tar file used for the platform restore. | Optional |
| |
| Path prefix for the staged restore components. | Optional |
| |
|
|
Used if the machine running the installation program can only route to the target host through a specific URL. For example, if you use short names in your inventory, but the node running the installation program can only resolve that host by using a FQDN. If | Optional | |
|
|
Controls at a global level whether the filesystem-related backup files are compressed before being sent to the host to run the backup operation. If set to
You can control this functionality at a component level by using the | Optional |
|
|
| Controls at a global level whether the database-related backup files are compressed before being sent to the host to run the backup operation.
You can control this functionality at a component level by using the | Optional |
|
|
Passphrase used to decrypt the key provided in | Optional | ||
| Compression software to use for compressing container images. | Optional |
| |
|
Controls whether or not to keep container images when uninstalling Ansible Automation Platform. Set to | Optional |
| |
|
Controls whether or not to pull newer container images during installation. Set to | Optional |
| |
| The firewall zone where Performance Co-Pilot related firewall rules are applied. This controls which networks can access Performance Co-Pilot based on the zone’s trust level. | Optional | public | |
|
Controls whether archive compression is enabled or disabled for Performance Co-Pilot. You can control this functionality globally by using | Optional |
| |
|
Set whether or not to use registry authentication. When this variable is set to true, | Optional |
| |
| Ansible Automation Platform registry namespace. | Optional |
| |
| RHEL registry namespace. | Optional |
|
A.7. Image variables
RPM variable name | Container variable name | Description | Required or optional | Default |
---|---|---|---|---|
| Additional container images to pull from the configured container registry during deployment. | Optional |
| |
| Container image for automation controller. | Optional |
| |
| Additional decision environment container images to pull from the configured container registry during deployment. | Optional |
| |
| Supported decision environment container image. | Optional |
| |
| Backend container image for Event-Driven Ansible. | Optional |
| |
| Front-end container image for Event-Driven Ansible. | Optional |
| |
| Additional execution environment container images to pull from the configured container registry during deployment. | Optional |
| |
| Minimal execution environment container image. | Optional |
| |
| Supported execution environment container image. | Optional |
| |
| Container image for platform gateway. | Optional |
| |
| Container image for platform gateway proxy. | Optional |
| |
| Backend container image for automation hub. | Optional |
| |
| Front-end container image for automation hub. | Optional |
| |
| Container image for Performance Co-Pilot. | Optional |
| |
| Container image for PostgreSQL. | Optional |
| |
| Container image for receptor. | Optional |
| |
| Container image for Redis. | Optional |
|
A.8. Platform gateway variables
RPM variable name | Container variable name | Description | Required or optional | Default |
---|---|---|---|---|
|
| Email address used by Django for the admin user for platform gateway. | Optional |
|
|
|
Platform gateway administrator password. Use of special characters for this variable is limited. The password can include any printable ASCII character except | Required | |
|
| Username used to identify and create the administrator user in platform gateway. | Optional |
|
|
| Path to the platform gateway Redis certificate file. | Optional | |
|
| Path to the platform gateway Redis key file. | Optional | |
|
Denote whether the cache client certificate files are local to the installation program ( | Optional |
The value defined in | |
|
Controls whether or not to regenerate platform gateway client certificates for the platform cache. Set to | Optional |
| |
|
| Port number for the platform gateway control plane. | Optional |
|
|
|
Controls whether HTTP Strict Transport Security (HSTS) is enabled or disabled for platform gateway. Set this variable to | Optional |
|
|
|
Controls whether HTTPS is enabled or disabled for platform gateway. Set this variable to | Optional |
RPM = The value defined in |
|
| The firewall zone where platform gateway related firewall rules are applied. This controls which networks can access platform gateway based on the zone’s trust level. | Optional | RPM = no default set. Container = 'public'. |
|
| Timeout duration (in seconds) for requests made to the gRPC service on platform gateway. | Optional |
|
|
| Maximum number of threads that each gRPC server process can create to handle requests on platform gateway. | Optional |
|
|
| Number of processes for handling gRPC requests on platform gateway. | Optional |
|
|
| Port number that platform gateway listens on for HTTP requests. | Optional |
RPM = |
|
| Port number that platform gateway listens on for HTTPS requests. | Optional |
RPM = |
|
|
URL of the main instance of platform gateway that clients connect to. Use if you are performing a clustered deployment and you need to use the URL of the load balancer instead of the component’s server. The URL must start with | Optional | |
|
Denote whether the web cert sources are local to the installation program ( | Optional |
The value defined in | |
|
|
Controls whether client certificate authentication is enabled or disabled on the platform gateway PostgreSQL database. Set this variable to | Optional |
|
|
| Name of the PostgreSQL database used by platform gateway. | Optional |
RPM = |
|
| Hostname of the PostgreSQL database used by platform gateway. | Required | |
|
|
Password for the platform gateway PostgreSQL database user. Use of special characters for this variable is limited. The | Optional | |
|
| Port number for the PostgreSQL database used by platform gateway. | Optional |
|
|
|
Controls the SSL mode to use when platform gateway connects to the PostgreSQL database. Valid options include | Optional |
|
|
| Username for the platform gateway PostgreSQL database user. | Optional |
RPM = |
|
| Path to the PostgreSQL SSL/TLS certificate file for platform gateway. | Required if using client certificate authentication. | |
|
| Path to the PostgreSQL SSL/TLS key file for platform gateway. | Required if using client certificate authentication. | |
|
Denote whether the PostgreSQL client cert sources are local to the installation program ( | Optional |
The value defined in | |
|
| Hostname of the Redis host used by platform gateway. | Optional |
First node in the |
|
| Password for platform gateway Redis. | Optional | Randomly generated string. |
|
| Username for platform gateway Redis. | Optional |
|
|
| Secret key value used by platform gateway to sign and encrypt data. | Optional | |
|
| Path to the SSL/TLS certificate file for platform gateway. | Optional | |
|
| Path to the SSL/TLS key file for platform gateway. | Optional | |
|
|
Denote whether the platform gateway provided certificate files are local to the installation program ( | Optional |
|
|
|
Controls whether archive compression is enabled or disabled for platform gateway. You can control this functionality globally by using | Optional |
|
|
|
Controls whether database compression is enabled or disabled for platform gateway. You can control this functionality globally by using | Optional |
|
|
| List of additional NGINX headers to add to platform gateway’s NGINX configuration. | Optional |
|
|
Denotes whether or not to verify platform gateway’s web certificates when making calls from platform gateway to itself during installation. Set to | Optional |
| |
|
|
Controls whether or not HTTPS is disabled when accessing the platform UI. Set to | Optional |
RPM = The value defined in |
|
| Port number on which the Envoy proxy listens for incoming HTTP connections. | Optional |
|
|
| Port number on which the Envoy proxy listens for incoming HTTPS connections. | Optional |
|
|
| Protocols that platform gateway will support when handling HTTPS traffic. | Optional |
RPM = |
|
|
Controls whether TLS is enabled or disabled for platform gateway Redis. Set this variable to | Optional |
|
|
| Port number for the Redis host for platform gateway. | Optional |
|
| Defines additional settings for use by platform gateway during installation. For example: gateway_extra_settings: - setting: OAUTH2_PROVIDER['ACCESS_TOKEN_EXPIRE_SECONDS'] value: 600 | Optional |
| |
| Maximum allowed size for data sent to platform gateway through NGINX. | Optional |
| |
| Maximum duration (in seconds) that HTTP Strict Transport Security (HSTS) is enforced for platform gateway. | Optional |
| |
|
Number of requests | Optional |
|
A.9. Receptor variables
RPM variable name | Container variable name | Description | Required or optional | Default |
---|---|---|---|---|
|
The directory where receptor stores its runtime data and local artifacts. | Optional |
| |
|
| Port number that receptor listens on for incoming connections from other receptor nodes. | Optional |
|
|
| Protocol that receptor will support when handling traffic. | Optional |
|
|
|
Controls the verbosity of logging for receptor. | Optional |
|
|
Controls whether TLS is enabled or disabled for receptor. Set this variable to | Optional |
| |
See |
|
For the
For the
| Optional |
For the |
See |
| Used to indicate which nodes a specific host connects to. Wherever this variable is defined, an outbound connection to the specific host is established. The value must be a comma-separated list of hostnames. Do not use inventory group names.
This is resolved into a set of hosts that is used to construct the For example usage, see Adding execution nodes. | Optional |
|
|
Controls whether signing of communications between receptor nodes is enabled or disabled. | Optional |
| |
|
Controls whether TLS is enabled or disabled for receptor. | Optional |
| |
| The firewall zone where receptor related firewall rules are applied. This controls which networks can access receptor based on the zone’s trust level. | Optional |
| |
|
Controls whether or not receptor only accepts connections that use TLS 1.3 or higher. | Optional |
| |
| Path to the private key used by receptor to sign communications with other receptor nodes in the network. | Optional | ||
| Path to the public key used by receptor to sign communications with other receptor nodes in the network. | Optional | ||
|
Denote whether the receptor signing files are local to the installation program ( | Optional |
| |
| Path to the TLS certificate file for receptor. | Optional | ||
| Path to the TLS key file for receptor. | Optional | ||
|
Denote whether the receptor provided certificate files are local to the installation program ( | Optional |
| |
|
Controls whether archive compression is enabled or disabled for receptor. You can control this functionality globally by using | Optional |
|
A.10. Redis variables
RPM variable name | Container variable name | Description | Required or optional | Default |
---|---|---|---|---|
|
|
The IPv4 address used by the Redis cluster to identify each host in the cluster. When defining hosts in the | Optional | RPM = Discovered IPv4 address from Ansible facts. If IPv4 address is not available, IPv6 address is used. Container = Discovered IPv4 address from Ansible facts. |
|
Controls whether mTLS is enabled or disabled for Redis. Set this variable to | Optional |
| |
|
| The firewall zone where Redis related firewall rules are applied. This controls which networks can access Redis based on the zone’s trust level. | Optional |
RPM = no default set. Container = |
|
Hostname used by the Redis cluster when identifying and routing the host. By default | Optional |
The value defined in | |
|
|
The Redis mode to use for your Ansible Automation Platform installation. Valid options include: | Optional |
|
| Denotes whether or not to regenerate the Ansible Automation Platform managed TLS key pair for Redis. | Optional |
| |
|
| Path to the Redis server TLS certificate. | Optional | |
|
Denote whether the Redis provided certificate files are local to the installation program ( | Optional |
| |
|
| Path to the Redis server TLS certificate key. | Optional | |
|
Controls whether archive compression is enabled or disabled for Redis. You can control this functionality globally by using | Optional |
|