Red Hat Ansible Automation Platform installation guide
Install Ansible Automation Platform
Abstract
Preface
Thank you for your interest in Red Hat Ansible Automation Platform. Ansible Automation Platform is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments.
This guide helps you to understand the installation requirements and processes behind installing Ansible Automation Platform. This document has been updated to include information for the latest release of Ansible Automation Platform.
Providing feedback on Red Hat documentation
If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request.
Chapter 1. Red Hat Ansible Automation Platform installation overview
The Red Hat Ansible Automation Platform installation program offers you flexibility, allowing you to install Ansible Automation Platform by using a number of supported installation scenarios. Starting with Ansible Automation Platform 2.4, the installation scenarios include the optional deployment of Event-Driven Ansible controller, which introduces the automated resolution of IT requests.
Regardless of the installation scenario you choose, installing Ansible Automation Platform involves the following steps:
- Editing the Red Hat Ansible Automation Platform installer inventory file
- The Ansible Automation Platform installer inventory file allows you to specify your installation scenario and describe host deployments to Ansible. The examples provided in this document show the parameter specifications needed to install that scenario for your deployment.
- Running the Red Hat Ansible Automation Platform installer setup script
- The setup script installs your private automation hub by using the required parameters defined in the inventory file.
- Verifying automation controller installation
- After installing Ansible Automation Platform, you can verify that the installation has been successful by logging in to the automation controller.
- Verifying automation hub installation
- After installing Ansible Automation Platform, you can verify that the installation has been successful by logging in to the automation hub.
- Verifying Event-Driven Ansible controller installation
- After installing Ansible Automation Platform, you can verify that the installation has been successful by logging in to the Event-Driven Ansible controller.
Additional resources
For more information about the supported installation scenarios, see the Red Hat Ansible Automation Platform Planning Guide.
1.1. Prerequisites
- You chose and obtained a platform installer from the Red Hat Ansible Automation Platform Product Software.
- You are installing on a machine that meets base system requirements.
- You have updated all of the packages on your RHEL nodes to the latest version.
To prevent errors, fully upgrade your RHEL nodes before installing Ansible Automation Platform.
- You have created a Red Hat Registry Service Account, by using the instructions in Creating Registry Service Accounts.
Additional resources
For more information about obtaining a platform installer or system requirements, see the Red Hat Ansible Automation Platform system requirements in the Red Hat Ansible Automation Platform Planning Guide.
Chapter 2. System requirements
Use this information when planning your Red Hat Ansible Automation Platform installations and designing automation mesh topologies that fit your use case.
Prerequisites
- You can obtain root access either through the sudo command, or through privilege escalation. For more on privilege escalation, see Understanding privilege escalation.
- You can de-escalate privileges from root to users such as: AWX, PostgreSQL, Event-Driven Ansible, or Pulp.
- You have configured an NTP client on all nodes. For more information, see Configuring NTP server using Chrony.
2.1. Red Hat Ansible Automation Platform system requirements
Your system must meet the following minimum system requirements to install and run Red Hat Ansible Automation Platform.
Requirement | Required | Notes
---|---|---
Subscription | Valid Red Hat Ansible Automation Platform |
OS | Red Hat Enterprise Linux 8.8 or later 64-bit (x86, ppc64le, s390x, aarch64), or Red Hat Enterprise Linux 9.0 or later 64-bit (x86, ppc64le, s390x, aarch64) | Red Hat Ansible Automation Platform is also supported on OpenShift. See Deploying the Red Hat Ansible Automation Platform operator on OpenShift Container Platform for more information.
Ansible-core | Ansible-core version 2.14 or later | Ansible Automation Platform includes execution environments that contain ansible-core 2.15.
Python | 3.9 or later |
Browser | A currently supported version of Mozilla Firefox or Google Chrome |
Database | PostgreSQL version 13 |
The following are necessary for you to work with project updates and collections:
- Ensure that the network ports and protocols listed in Table 5.3. Automation Hub are available for successful connection and download of collections from automation hub or Ansible Galaxy server.
- Disable SSL inspection either when using self-signed certificates or for the Red Hat domains.
The requirements for systems managed by Ansible Automation Platform are the same as for Ansible. See Installing Ansible in the Ansible Community Documentation.
Additional notes for Red Hat Ansible Automation Platform requirements
- Red Hat Ansible Automation Platform depends on Ansible Playbooks and requires the installation of the latest stable version of ansible-core. You can download ansible-core manually or download it automatically as part of your installation of Red Hat Ansible Automation Platform.
- For new installations, automation controller installs the latest release package of ansible-core.
- If performing a bundled Ansible Automation Platform installation, the installation setup.sh script attempts to install ansible-core (and its dependencies) from the bundle for you.
- If you have installed Ansible manually, the Ansible Automation Platform installation setup.sh script detects that Ansible has been installed and does not attempt to reinstall it.
You must install Ansible using a package manager such as dnf, and the latest stable version of ansible-core must be installed for Red Hat Ansible Automation Platform to work properly. Ansible-core version 2.14 or later is required for Ansible Automation Platform 2.4 and later.
2.2. Automation controller system requirements
Automation controller is a distributed system, where different software components can be co-located or deployed across multiple compute nodes. In the installer, four node types are provided as abstractions to help you design the topology appropriate for your use case: control, hybrid, execution, and hop nodes.
Use the following recommendations for node sizing:
On control and hybrid nodes, allocate a minimum of 20 GB to /var/lib/awx for execution environment storage.
Execution nodes
Execution nodes run automation. Increase memory and CPU to increase capacity for running more forks.
- The RAM and CPU resources stated might not be required for packages installed on an execution node but are the minimum recommended to handle the job load for a node to run an average number of jobs simultaneously.
- Recommended RAM and CPU node sizes are not supplied. The required RAM or CPU depends directly on the number of jobs you are running in that environment.
For further information about required RAM and CPU levels, see Performance tuning for automation controller.
Requirement | Minimum required
---|---
RAM | 16 GB
CPUs | 4
Local disk | 40 GB minimum
Control nodes
Control nodes process events and run cluster jobs including project updates and cleanup jobs. Increasing CPU and memory can help with job event processing.
Requirement | Minimum required
---|---
RAM | 16 GB
CPUs | 4
Local disk | Allocate a minimum of 20 GB to /var/lib/awx for execution environment storage (see the node sizing recommendation above)
Hop nodes
Hop nodes serve to route traffic from one part of the automation mesh to another (for example, a hop node could be a bastion host into another network). RAM can affect throughput; CPU activity is low. Network bandwidth and latency are generally more important factors than either RAM or CPU.
Requirement | Minimum required
---|---
RAM | 16 GB
CPUs | 4
Local disk | 40 GB
- Actual RAM requirements vary based on how many hosts automation controller will manage simultaneously (which is controlled by the forks parameter in the job template or the system ansible.cfg file). To avoid possible resource conflicts, Ansible recommends 1 GB of memory per 10 forks and 2 GB reservation for automation controller. For more information, see Automation controller capacity determination and job impact. If forks is set to 400, 42 GB of memory is recommended; a worked sizing sketch follows this list.
- Automation controller hosts check if umask is set to 0022. If not, the setup fails. Set umask=0022 to avoid this error.
- A larger number of hosts can be addressed, but if the fork number is less than the total host count, more passes across the hosts are required. You can avoid these RAM limitations by using any of the following approaches:
- Use rolling updates.
- Use the provisioning callback system built into automation controller, where each system requesting configuration enters a queue and is processed as quickly as possible.
- In cases where automation controller is producing or deploying images such as AMIs.
Additional resources
- For more information about obtaining an automation controller subscription, see Importing a subscription.
- For questions, contact Ansible support through the Red Hat Customer Portal.
2.3. Automation hub system requirements
Automation hub enables you to discover and use new certified automation content from Red Hat Ansible and Certified Partners. On Ansible automation hub, you can discover and manage Ansible Collections, which are supported automation content developed by Red Hat and its partners for use cases such as cloud automation, network automation, and security automation.
Automation hub has the following system requirements:
Requirement | Required | Notes
---|---|---
RAM | 8 GB minimum |
CPUs | 2 minimum | For capacity based on forks in your configuration, see Automation controller capacity determination and job impact.
Local disk | 60 GB disk | Dedicate a minimum of 40 GB to
Ansible automation execution nodes and automation hub system requirements are different and might not meet your network’s needs. The general formula for determining how much memory you need is: Total control capacity = Total Memory in MB / Fork size in MB.
Private automation hub
If you install private automation hub from an internal address, and have a certificate which only encompasses the external address, this can result in an installation that cannot be used as a container registry without certificate issues.
To avoid this, use the automationhub_main_url inventory variable with a value such as https://pah.example.com linking to the private automation hub node in the installation inventory file, as shown in the snippet below.
This adds the external address to /etc/pulp/settings.py. This implies that you only want to use the external address.
For information about inventory file variables, see Inventory file variables in the Red Hat Ansible Automation Platform Installation Guide.
2.3.1. High availability automation hub requirements
Before deploying a high availability (HA) automation hub, ensure that you have a shared filesystem installed in your environment and that you have configured your network storage system, if applicable.
2.3.1.2. Installing firewalld for network storage
If you intend to install a HA automation hub using network storage on the automation hub nodes themselves, you must first install and use firewalld to open the necessary ports as required by your shared storage system before running the Ansible Automation Platform installer.
Install and configure firewalld by executing the following commands:
Install the firewalld daemon:
$ dnf install firewalld
Add your network storage under <service> using the following command:
$ firewall-cmd --permanent --add-service=<service>
Note: For a list of supported services, use the $ firewall-cmd --get-services command.
Reload to apply the configuration:
$ firewall-cmd --reload
2.4. Event-Driven Ansible controller system requirements
The Event-Driven Ansible controller is a single-node system capable of handling a variable number of long-running processes (such as rulebook activations) on-demand, depending on the number of CPU cores. Use the following minimum requirements to run, by default, a maximum of 12 simultaneous activations:
Requirement | Required
---|---
RAM | 16 GB
CPUs | 4
Local disk | 40 GB minimum
- If you are running Red Hat Enterprise Linux 8 and want to set your memory limits, you must have cgroup v2 enabled before you install Event-Driven Ansible. For specific instructions, see the Knowledge-Centered Support (KCS) article, Ansible Automation Platform Event-Driven Ansible controller for Red Hat Enterprise Linux 8 requires cgroupv2.
- When you activate an Event-Driven Ansible rulebook under standard conditions, it uses about 250 MB of memory. However, the actual memory consumption can vary significantly based on the complexity of your rules and the volume and size of the events processed. In scenarios where a large number of events are anticipated or the rulebook complexity is high, conduct a preliminary assessment of resource usage in a staging environment. This ensures that your maximum number of activations is based on the capacity of your resources. See Single automation controller, single automation hub, and single Event-Driven Ansible controller node with external (installer managed) database for an example on setting Event-Driven Ansible controller maximum running activations.
2.5. PostgreSQL requirements
Red Hat Ansible Automation Platform uses PostgreSQL 13. PostgreSQL user passwords are hashed with the SCRAM-SHA-256 secure hashing algorithm before being stored in the database.
To determine whether your automation controller instance has access to the database, run the awx-manage check_db command.
Service | Required | Notes
---|---|---
Database | PostgreSQL version 13 |
PostgreSQL Configurations
Optionally, you can configure the PostgreSQL database as separate nodes that are not managed by the Red Hat Ansible Automation Platform installer. When the Ansible Automation Platform installer manages the database server, it configures the server with defaults that are generally recommended for most workloads. For more information about the settings you can use to improve database performance, see Database Settings.
Additional resources
For more information about tuning your PostgreSQL server, see the PostgreSQL documentation.
2.5.1. Setting up an external (customer supported) database
Red Hat does not support the use of external (customer supported) databases; however, they are used by customers. The following guidance on initial configuration, from a product installation perspective only, is provided to avoid related support requests.
To create a database, user and password on an external PostgreSQL compliant database for use with automation controller, use the following procedure.
Procedure
Install and then connect to a PostgreSQL compliant database server with superuser privileges.
# psql -h <db.example.com> -U superuser -p 5432 -d postgres
Password for user superuser:
Where:
-h hostname, --host=hostname
Specifies the host name of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the Unix-domain socket.
-d dbname, --dbname=dbname
Specifies the name of the database to connect to. This is equivalent to specifying dbname as the first non-option argument on the command line. The dbname can be a connection string. If so, connection string parameters override any conflicting command line options.
-U username, --username=username
Connect to the database as the user username instead of the default. (You must have permission to do so.)
instead of the default. (You must have permission to do so.)-
Create the user, database, and password with the
createDB
or administrator role assigned to the user. For further information, see Database Roles. Add the database credentials and host details to the automation controller inventory file as an external database.
The default values are used in the following example.
[database]
pg_host='db.example.com'
pg_port=5432
pg_database='awx'
pg_username='awx'
pg_password='redhat'
Run the installer.
If you are using a PostgreSQL database with automation controller, the database is owned by the connecting user and must have a createDB or administrator role assigned to it.
- Check that you are able to connect to the created database with the user, password, and database name.
- Check the permission of the user; the user should have the createDB or administrator role.
During this procedure, you must check the External Database coverage. For further information, see https://access.redhat.com/articles/4010491
2.5.2. Enabling the hstore extension for the automation hub PostgreSQL database
From Ansible Automation Platform 2.4, the database migration script uses hstore fields to store information, therefore the hstore extension must be enabled on the automation hub PostgreSQL database.
This process is automatic when using the Ansible Automation Platform installer and a managed PostgreSQL server.
If the PostgreSQL database is external, you must enable the hstore extension on the automation hub PostgreSQL database manually before automation hub installation.
If the hstore extension is not enabled before automation hub installation, a failure is raised during database migration.
Procedure
Check if the extension is available on the PostgreSQL server (automation hub database):
$ psql -d <automation hub database> -c "SELECT * FROM pg_available_extensions WHERE name='hstore'"
Where the default value for <automation hub database> is automationhub.
Example output with hstore available:
 name   | default_version | installed_version | comment
--------+-----------------+-------------------+---------------------------------------------------
 hstore | 1.7             |                   | data type for storing sets of (key, value) pairs
(1 row)
Example output with hstore not available:
 name | default_version | installed_version | comment
------+-----------------+-------------------+---------
(0 rows)
On a RHEL based server, the hstore extension is included in the postgresql-contrib RPM package, which is not installed automatically when installing the PostgreSQL server RPM package.
To install the RPM package, use the following command:
dnf install postgresql-contrib
Create the hstore PostgreSQL extension on the automation hub database with the following command:
$ psql -d <automation hub database> -c "CREATE EXTENSION hstore;"
The output of which is:
CREATE EXTENSION
In the following output, the installed_version field contains the hstore extension used, indicating that hstore is enabled.
 name   | default_version | installed_version | comment
--------+-----------------+-------------------+------------------------------------------------------
 hstore | 1.7             | 1.7               | data type for storing sets of (key, value) pairs
(1 row)
2.5.3. Benchmarking storage performance for the Ansible Automation Platform PostgreSQL database
Check whether the minimum Ansible Automation Platform PostgreSQL database requirements are met by using the Flexible I/O Tester (FIO) tool. FIO is a tool used to benchmark read and write IOPS performance of the storage system.
Prerequisites
You have installed the Flexible I/O Tester (fio) storage performance benchmarking tool.
To install fio, run the following command as the root user:
# yum -y install fio
You have adequate disk space to store the fio test data log files.
The examples shown in the procedure require at least 60GB disk space in the /tmp directory:
- numjobs sets the number of jobs run by the command.
- size=10G sets the file size generated by each job.
You have adjusted the value of the size parameter. Adjusting this value reduces the amount of test data.
Procedure
Run a random write test:
$ fio --name=write_iops --directory=/tmp --numjobs=3 --size=10G \
  --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 \
  --verify=0 --bs=4K --iodepth=64 --rw=randwrite \
  --group_reporting=1 > /tmp/fio_benchmark_write_iops.log \
  2>> /tmp/fio_write_iops_error.log
Run a random read test:
$ fio --name=read_iops --directory=/tmp \
  --numjobs=3 --size=10G --time_based --runtime=60s --ramp_time=2s \
  --ioengine=libaio --direct=1 --verify=0 --bs=4K --iodepth=64 --rw=randread \
  --group_reporting=1 > /tmp/fio_benchmark_read_iops.log \
  2>> /tmp/fio_read_iops_error.log
Review the results:
In the log files written by the benchmark commands, search for the line beginning with iops. This line shows the minimum, maximum, and average values for the test.
The following example shows the line in the log file for the random read test:
$ cat /tmp/fio_benchmark_read_iops.log
read_iops: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
[…]
   iops        : min=50879, max=61603, avg=56221.33, stdev=679.97, samples=360
[…]
You must review, monitor, and revisit the log files according to your own business requirements, application workloads, and new demands.
Chapter 3. Installing Red Hat Ansible Automation Platform
Ansible Automation Platform is a modular platform. You can deploy automation controller with other automation platform components, such as automation hub and Event-Driven Ansible controller. For more information about the components provided with Ansible Automation Platform, see Red Hat Ansible Automation Platform components in the Red Hat Ansible Automation Platform Planning Guide.
There are several supported installation scenarios for Red Hat Ansible Automation Platform. To install Red Hat Ansible Automation Platform, you must edit the inventory file parameters to specify your installation scenario. You can use one of the following as a basis for your own inventory file:
- Single automation controller with external (installer managed) database
- Single automation controller and single automation hub with external (installer managed) database
- Single automation controller, single automation hub, and single Event-Driven Ansible controller node with external (installer managed) database
3.1. Editing the Red Hat Ansible Automation Platform installer inventory file
You can use the Red Hat Ansible Automation Platform installer inventory file to specify your installation scenario.
Procedure
Navigate to the installer:
[RPM installed package]
$ cd /opt/ansible-automation-platform/installer/
[bundled installer]
$ cd ansible-automation-platform-setup-bundle-<latest-version>
[online installer]
$ cd ansible-automation-platform-setup-<latest-version>
- Open the inventory file with a text editor.
- Edit inventory file parameters to specify your installation scenario. You can use one of the supported Installation scenario examples as the basis for your inventory file.
Additional resources
- For a comprehensive list of pre-defined variables used in Ansible installation inventory files, see Inventory file variables.
3.2. Inventory file examples based on installation scenarios
Red Hat supports several installation scenarios for Ansible Automation Platform. You can develop your own inventory files using the example files as a basis, or you can use the example closest to your preferred installation scenario.
3.2.1. Inventory file recommendations based on installation scenarios
Before selecting your installation method for Ansible Automation Platform, review the following recommendations. Familiarity with these recommendations will streamline the installation process.
- For Red Hat Ansible Automation Platform or automation hub: Add an automation hub host in the [automationhub] group.
- Do not install automation controller and automation hub on the same node for versions of Ansible Automation Platform in a production or customer environment. This can cause contention issues and heavy resource use.
- Provide a reachable IP address or fully qualified domain name (FQDN) for the [automationhub] and [automationcontroller] hosts to ensure users can sync and install content from automation hub from a different node. The FQDN must not contain either the - or the _ symbols, as it will not be processed correctly. Do not use localhost.
- admin is the default user ID for the initial log in to Ansible Automation Platform and cannot be changed in the inventory file.
- Use of special characters for pg_password is limited. The !, #, 0 and @ characters are supported. Use of other special characters can cause the setup to fail.
- Enter your Red Hat Registry Service Account credentials in registry_username and registry_password to link to the Red Hat container registry.
- The inventory file variables registry_username and registry_password are only required if a non-bundle installer is used.
3.2.1.1. Single automation controller with external (installer managed) database
Use this example to populate the inventory file to install Red Hat Ansible Automation Platform. This installation inventory file includes a single automation controller node with an external database on a separate node.
[automationcontroller]
controller.example.com

[database]
data.example.com

[all:vars]
admin_password='<password>'

pg_host='data.example.com'
pg_port=5432
pg_database='awx'
pg_username='awx'
pg_password='<password>'
pg_sslmode='prefer'  # set to 'verify-full' for client-side enforced SSL

registry_url='registry.redhat.io'
registry_username='<registry username>'
registry_password='<registry password>'

# SSL-related variables
# If set, this will install a custom CA certificate to the system trust store.
# custom_ca_cert=/path/to/ca.crt

# Certificate and key to install in nginx for the web UI and API
# web_server_ssl_cert=/path/to/tower.cert
# web_server_ssl_key=/path/to/tower.key

# Server-side SSL settings for PostgreSQL (when we are installing it).
# postgres_use_ssl=False
# postgres_ssl_cert=/path/to/pgsql.crt
# postgres_ssl_key=/path/to/pgsql.key
3.2.1.2. Single automation controller and single automation hub with external (installer managed) database
Use this example to populate the inventory file to deploy single instances of automation controller and automation hub with an external (installer managed) database.
[automationcontroller]
controller.example.com

[automationhub]
automationhub.example.com

[database]
data.example.com

[all:vars]
admin_password='<password>'

pg_host='data.example.com'
pg_port='5432'
pg_database='awx'
pg_username='awx'
pg_password='<password>'
pg_sslmode='prefer'  # set to 'verify-full' for client-side enforced SSL

registry_url='registry.redhat.io'
registry_username='<registry username>'
registry_password='<registry password>'

automationhub_admin_password= <PASSWORD>

automationhub_pg_host='data.example.com'
automationhub_pg_port=5432
automationhub_pg_database='automationhub'
automationhub_pg_username='automationhub'
automationhub_pg_password=<PASSWORD>
automationhub_pg_sslmode='prefer'

# The default install will deploy a TLS enabled Automation Hub.
# If for some reason this is not the behavior wanted one can
# disable TLS enabled deployment.
#
# automationhub_disable_https = False
# The default install will generate self-signed certificates for the Automation
# Hub service. If you are providing valid certificate via automationhub_ssl_cert
# and automationhub_ssl_key, one should toggle that value to True.
#
# automationhub_ssl_validate_certs = False

# SSL-related variables
# If set, this will install a custom CA certificate to the system trust store.
# custom_ca_cert=/path/to/ca.crt

# Certificate and key to install in Automation Hub node
# automationhub_ssl_cert=/path/to/automationhub.cert
# automationhub_ssl_key=/path/to/automationhub.key

# Certificate and key to install in nginx for the web UI and API
# web_server_ssl_cert=/path/to/tower.cert
# web_server_ssl_key=/path/to/tower.key

# Server-side SSL settings for PostgreSQL (when we are installing it).
# postgres_use_ssl=False
# postgres_ssl_cert=/path/to/pgsql.crt
# postgres_ssl_key=/path/to/pgsql.key
3.2.1.2.1. Connecting automation hub to a Red Hat Single Sign-On environment
You can configure the inventory file further to connect automation hub to a Red Hat Single Sign-On installation.
You must configure a different set of variables when connecting to a Red Hat Single Sign-On installation managed by Ansible Automation Platform than when connecting to an external Red Hat Single Sign-On installation.
For more information about these inventory variables, refer to the Installing and configuring central authentication for the Ansible Automation Platform.
3.2.1.3. High availability automation hub
Use the following examples to populate the inventory file to install a highly available automation hub. This inventory file includes a highly available automation hub with a clustered setup.
You can configure your HA deployment further to implement Red Hat Single Sign-On and enable a high availability deployment of automation hub on SELinux.
Specify database host IP
- Specify the IP address for your database host, using the automationhub_pg_host and automationhub_pg_port inventory variables. For example:
automationhub_pg_host='192.0.2.10'
automationhub_pg_port=5432
- Also specify the IP address for your database host in the [database] section, using the value in the automationhub_pg_host inventory variable:
[database]
192.0.2.10
List all instances in a clustered setup
- If installing a clustered setup, replace localhost ansible_connection=local in the [automationhub] section with the hostname or IP of all instances. For example:
[automationhub]
automationhub1.testing.ansible.com ansible_user=cloud-user ansible_host=192.0.2.18
automationhub2.testing.ansible.com ansible_user=cloud-user ansible_host=192.0.2.20
automationhub3.testing.ansible.com ansible_user=cloud-user ansible_host=192.0.2.22
Next steps
Check that the following directives are present in /etc/pulp/settings.py in each of the private automation hub servers:
USE_X_FORWARDED_PORT = True
USE_X_FORWARDED_HOST = True
If automationhub_main_url is not specified, the first node in the [automationhub] group will be used as the default.
3.2.1.4. Enabling a high availability (HA) deployment of automation hub on SELinux
You can configure the inventory file to enable high availability deployment of automation hub on SELinux. You must create two mount points for /var/lib/pulp and /var/lib/pulp/pulpcore_static, and then assign the appropriate SELinux contexts to each.
You must add the context for /var/lib/pulp/pulpcore_static and run the Ansible Automation Platform installer before adding the context for /var/lib/pulp.
Prerequisites
- You have already configured a NFS export on your server.
Procedure
Create a mount point at /var/lib/pulp:
$ mkdir /var/lib/pulp/
Open /etc/fstab using a text editor, then add the following values:
srv_rhel8:/data /var/lib/pulp nfs defaults,_netdev,nosharecache,context="system_u:object_r:var_lib_t:s0" 0 0
srv_rhel8:/data/pulpcore_static /var/lib/pulp/pulpcore_static nfs defaults,_netdev,nosharecache,context="system_u:object_r:httpd_sys_content_rw_t:s0" 0 0
Run the reload systemd manager configuration command:
$ systemctl daemon-reload
Run the mount command for /var/lib/pulp:
$ mount /var/lib/pulp
Create a mount point at /var/lib/pulp/pulpcore_static:
$ mkdir /var/lib/pulp/pulpcore_static
Run the mount command:
$ mount -a
With the mount points set up, run the Ansible Automation Platform installer:
$ setup.sh -- -b --become-user root
After the installation is complete, unmount the /var/lib/pulp/ mount point.
Additional Resources
- See the SELinux Requirements on the Pulp Project documentation for a list of SELinux contexts.
- See the Filesystem Layout for a full description of Pulp folders.
3.2.1.4.1. Configuring pulpcore.service
After you have configured the inventory file, and applied the SELinux context, you now need to configure the pulp service.
Procedure
With the two mount points set up, shut down the Pulp service to configure pulpcore.service:
$ systemctl stop pulpcore.service
Edit pulpcore.service using systemctl:
$ systemctl edit pulpcore.service
Add the following entry to pulpcore.service to ensure that automation hub services start only after starting the network and mounting the remote mount points:
[Unit]
After=network.target var-lib-pulp.mount
Enable remote-fs.target:
$ systemctl enable remote-fs.target
Reboot the system:
$ systemctl reboot
Troubleshooting
A bug in the pulpcore SELinux policies can cause the token authentication public/private keys in /etc/pulp/certs/ to not have the proper SELinux labels, causing the pulp process to fail. When this occurs, run the following command to temporarily attach the proper labels:
$ chcon system_u:object_r:pulpcore_etc_t:s0 /etc/pulp/certs/token_{private,public}_key.pem
Repeat this command to reattach the proper SELinux labels whenever you relabel your system.
3.2.1.4.2. Applying the SELinux context
After you have configured the inventory file, you must now apply the context to enable the high availability (HA) deployment of automation hub on SELinux.
Procedure
Shut down the Pulp service:
$ systemctl stop pulpcore.service
Unmount /var/lib/pulp/pulpcore_static:
$ umount /var/lib/pulp/pulpcore_static
Unmount /var/lib/pulp/:
$ umount /var/lib/pulp/
Open /etc/fstab using a text editor, then replace the existing value for /var/lib/pulp with the following:
srv_rhel8:/data /var/lib/pulp nfs defaults,_netdev,nosharecache,context="system_u:object_r:pulpcore_var_lib_t:s0" 0 0
Run the mount command:
$ mount -a
3.2.1.5. Configuring content signing on private automation hub
To successfully sign and publish Ansible Certified Content Collections, you must configure private automation hub for signing.
Prerequisites
- Your GnuPG key pairs have been securely set up and managed by your organization.
- Your public-private key pair has proper access for configuring content signing on private automation hub.
Procedure
Create a signing script that accepts only a filename.
Note: This script acts as the signing service and must generate an ascii-armored detached gpg signature for that file using the key specified through the PULP_SIGNING_KEY_FINGERPRINT environment variable.
The script prints out a JSON structure with the following format.
{"file": "filename", "signature": "filename.asc"}
All the file names are relative paths inside the current working directory. The file name must remain the same for the detached signature.
Example:
The following script produces signatures for content:
#!/usr/bin/env bash

FILE_PATH=$1
SIGNATURE_PATH="$1.asc"

ADMIN_ID="$PULP_SIGNING_KEY_FINGERPRINT"
PASSWORD="password"

# Create a detached signature
gpg --quiet --batch --pinentry-mode loopback --yes --passphrase \
   $PASSWORD --homedir ~/.gnupg/ --detach-sign --default-key $ADMIN_ID \
   --armor --output $SIGNATURE_PATH $FILE_PATH

# Check the exit status
STATUS=$?
if [ $STATUS -eq 0 ]; then
   echo {\"file\": \"$FILE_PATH\", \"signature\": \"$SIGNATURE_PATH\"}
else
   exit $STATUS
fi
After you deploy a private automation hub with signing enabled to your Ansible Automation Platform cluster, new UI additions are displayed in collections.
Review the Ansible Automation Platform installer inventory file for options that begin with automationhub_*.
[all:vars]
.
.
.
automationhub_create_default_collection_signing_service = True
automationhub_auto_sign_collections = True
automationhub_require_content_approval = True
automationhub_collection_signing_service_key = /abs/path/to/galaxy_signing_service.gpg
automationhub_collection_signing_service_script = /abs/path/to/collection_signing.sh
The two new keys (automationhub_auto_sign_collections and automationhub_require_content_approval) indicate that the collections must be signed and approved after they are uploaded to private automation hub.
3.2.1.6. LDAP configuration on private automation hub
You must set the following six variables in your Red Hat Ansible Automation Platform installer inventory file to configure your private automation hub for LDAP authentication:
- automationhub_authentication_backend
- automationhub_ldap_server_uri
- automationhub_ldap_bind_dn
- automationhub_ldap_bind_password
- automationhub_ldap_user_search_base_dn
- automationhub_ldap_group_search_base_dn
If any of these variables are missing, the Ansible Automation installer cannot complete the installation.
3.2.1.6.1. Setting up your inventory file variables
When you configure your private automation hub with LDAP authentication, you must set the proper variables in your inventory files during the installation process.
Procedure
- Access your inventory file according to the procedure in Editing the Red Hat Ansible Automation Platform installer inventory file.
Use the following example as a guide to set up your Ansible Automation Platform inventory file:
automationhub_authentication_backend = "ldap"
automationhub_ldap_server_uri = "ldap://ldap:389"  (for LDAPS use automationhub_ldap_server_uri = "ldaps://ldap-server-fqdn")
automationhub_ldap_bind_dn = "cn=admin,dc=ansible,dc=com"
automationhub_ldap_bind_password = "GoodNewsEveryone"
automationhub_ldap_user_search_base_dn = "ou=people,dc=ansible,dc=com"
automationhub_ldap_group_search_base_dn = "ou=people,dc=ansible,dc=com"
Note: The following variables will be set with default values, unless you set them with other options.
auth_ldap_user_search_scope= 'SUBTREE'
auth_ldap_user_search_filter= '(uid=%(user)s)'
auth_ldap_group_search_scope= 'SUBTREE'
auth_ldap_group_search_filter= '(objectClass=Group)'
auth_ldap_group_type_class= 'django_auth_ldap.config:GroupOfNamesType'
- Optional: Set up extra parameters in your private automation hub such as user groups, superuser access, or mirroring. Go to Configuring extra LDAP parameters to complete this optional step.
3.2.1.6.2. Configuring extra LDAP parameters
If you plan to set up superuser access, user groups, mirroring, or other extra parameters, you can create a YAML file that includes them in your ldap_extra_settings dictionary.
Procedure
Create a YAML file that contains ldap_extra_settings.
Example:
#ldapextras.yml
---
ldap_extra_settings:
  <LDAP_parameter>: <Values>
...
Add any parameters that you require for your setup. The following examples describe the LDAP parameters that you can set in ldap_extra_settings:
Use this example to set up a superuser flag based on membership in an LDAP group.
#ldapextras.yml
---
ldap_extra_settings:
  AUTH_LDAP_USER_FLAGS_BY_GROUP: {"is_superuser": "cn=pah-admins,ou=groups,dc=example,dc=com",}
...
Use this example to mirror all LDAP groups you belong to.
#ldapextras.yml
---
ldap_extra_settings:
  AUTH_LDAP_MIRROR_GROUPS: True
...
Use this example to map LDAP user attributes (such as first name, last name, and email address of the user).
#ldapextras.yml
---
ldap_extra_settings:
  AUTH_LDAP_USER_ATTR_MAP: {"first_name": "givenName", "last_name": "sn", "email": "mail",}
...
Use the following examples to grant or deny access based on LDAP group membership:
To grant private automation hub access (for example, members of the cn=pah-nosoupforyou,ou=groups,dc=example,dc=com group):
#ldapextras.yml
---
ldap_extra_settings:
  AUTH_LDAP_REQUIRE_GROUP: 'cn=pah-nosoupforyou,ou=groups,dc=example,dc=com'
...
To deny private automation hub access (for example, members of the cn=pah-nosoupforyou,ou=groups,dc=example,dc=com group):
#ldapextras.yml
---
ldap_extra_settings:
  AUTH_LDAP_DENY_GROUP: 'cn=pah-nosoupforyou,ou=groups,dc=example,dc=com'
...
Use this example to enable LDAP debug logging.
#ldapextras.yml
---
ldap_extra_settings:
  GALAXY_LDAP_LOGGING: True
...
Note: If it is not practical to re-run setup.sh or if debug logging is enabled for a short time, you can add a line containing GALAXY_LDAP_LOGGING: True manually to the /etc/pulp/settings.py file on private automation hub. Restart both pulpcore-api.service and nginx.service for the changes to take effect. To avoid failures due to human error, use this method only when necessary.
Use this example to configure LDAP caching by setting the variable AUTH_LDAP_CACHE_TIMEOUT.
#ldapextras.yml
---
ldap_extra_settings:
  AUTH_LDAP_CACHE_TIMEOUT: 3600
...
- Run setup.sh -e @ldapextras.yml during private automation hub installation.
Verification
To verify you have set up correctly, confirm that you can view all of your settings in the /etc/pulp/settings.py file on your private automation hub.
3.2.1.6.3. LDAP referrals
If your LDAP servers return referrals, you might have to disable referrals to successfully authenticate using LDAP on private automation hub.
If not, the following message is returned:
Operation unavailable without authentication
To disable the LDAP REFERRALS lookup, set:
GALAXY_LDAP_DISABLE_REFERRALS = true
This sets AUTH_LDAP_CONNECTION_OPTIONS to the correct option.
3.2.1.7. Single automation controller, single automation hub, and single Event-Driven Ansible controller node with external (installer managed) database
Use this example to populate the inventory file to deploy single instances of automation controller, automation hub, and Event-Driven Ansible controller with an external (installer managed) database.
- This scenario requires a minimum of automation controller 2.4 for successful deployment of Event-Driven Ansible controller.
- Event-Driven Ansible controller must be installed on a separate server and cannot be installed on the same host as automation hub and automation controller.
- When you activate an Event-Driven Ansible rulebook under standard conditions, it uses approximately 250 MB of memory. However, the actual memory consumption can vary significantly based on the complexity of your rules and the volume and size of the events processed. In scenarios where a large number of events are anticipated or the rulebook complexity is high, conduct a preliminary assessment of resource usage in a staging environment. This ensures that your maximum number of activations is based on the capacity of your resources. In the following example, the default automationedacontroller_max_running_activations setting is 12, but you can adjust it according to your capacity.
[automationcontroller]
controller.example.com

[automationhub]
automationhub.example.com

[automationedacontroller]
automationedacontroller.example.com

[database]
data.example.com

[all:vars]
admin_password='<password>'

pg_host='data.example.com'
pg_port='5432'
pg_database='awx'
pg_username='awx'
pg_password='<password>'
pg_sslmode='prefer'  # set to 'verify-full' for client-side enforced SSL

registry_url='registry.redhat.io'
registry_username='<registry username>'
registry_password='<registry password>'

# Automation hub configuration
automationhub_admin_password= <PASSWORD>

automationhub_pg_host='data.example.com'
automationhub_pg_port=5432
automationhub_pg_database='automationhub'
automationhub_pg_username='automationhub'
automationhub_pg_password=<PASSWORD>
automationhub_pg_sslmode='prefer'

# Automation Event-Driven Ansible controller configuration
automationedacontroller_admin_password='<eda-password>'

automationedacontroller_pg_host='data.example.com'
automationedacontroller_pg_port=5432
automationedacontroller_pg_database='automationedacontroller'
automationedacontroller_pg_username='automationedacontroller'
automationedacontroller_pg_password='<password>'

# Keystore file to install in SSO node
# sso_custom_keystore_file='/path/to/sso.jks'

# This install will deploy SSO with sso_use_https=True
# Keystore password is required for https enabled SSO
sso_keystore_password=''

# This install will deploy a TLS enabled Automation Hub.
# If for some reason this is not the behavior wanted one can
# disable TLS enabled deployment.
#
# automationhub_disable_https = False
# The default install will generate self-signed certificates for the Automation
# Hub service. If you are providing valid certificate via automationhub_ssl_cert
# and automationhub_ssl_key, one should toggle that value to True.
#
# automationhub_ssl_validate_certs = False

# SSL-related variables
# If set, this will install a custom CA certificate to the system trust store.
# custom_ca_cert=/path/to/ca.crt

# Certificate and key to install in Automation Hub node
# automationhub_ssl_cert=/path/to/automationhub.cert
# automationhub_ssl_key=/path/to/automationhub.key

# Certificate and key to install in nginx for the web UI and API
# web_server_ssl_cert=/path/to/tower.cert
# web_server_ssl_key=/path/to/tower.key

# Server-side SSL settings for PostgreSQL (when we are installing it).
# postgres_use_ssl=False
# postgres_ssl_cert=/path/to/pgsql.crt
# postgres_ssl_key=/path/to/pgsql.key

# Boolean flag used to verify Automation Controller's
# web certificates when making calls from Automation Event-Driven Ansible controller.
# automationedacontroller_controller_verify_ssl = true
#
# Certificate and key to install in Automation Event-Driven Ansible controller node
# automationedacontroller_ssl_cert=/path/to/automationeda.crt
# automationedacontroller_ssl_key=/path/to/automationeda.key
3.2.1.8. Adding a safe plugin variable to Event-Driven Ansible controller
When using redhat.insights_eda or similar plugins to run rulebook activations in Event-Driven Ansible controller, you must add a safe plugin variable to a directory in Ansible Automation Platform. This ensures a connection between Event-Driven Ansible controller and the source plugin, and displays port mappings correctly.
Procedure
- Create a directory for the safe plugin variable:
mkdir -p ./group_vars/automationedacontroller
- Create a file within that directory for your new setting (for example, touch ./group_vars/automationedacontroller/custom.yml).
- Add the variable automationedacontroller_safe_plugins to the file with a comma-separated list of plugins to enable for Event-Driven Ansible controller. For example:
automationedacontroller_safe_plugins: "ansible.eda.webhook, ansible.eda.alertmanager"
3.3. Running the Red Hat Ansible Automation Platform installer setup script
After you update the inventory file with required parameters for installing your private automation hub, run the installer setup script.
Procedure
Run the setup.sh script:
$ sudo ./setup.sh
Installation of Red Hat Ansible Automation Platform will begin.
3.4. Verifying installation of automation controller
Verify that you installed automation controller successfully by logging in with the admin credentials you inserted in the inventory file.
Prerequisite
- Port 443 is available
Procedure
- Go to the IP address specified for the automation controller node in the inventory file.
- Enter your Red Hat Satellite credentials. If this is your first time logging in after installation, upload your manifest file.
- Log in with the user ID admin and the password credentials you set in the inventory file.
The automation controller server is accessible from port 80 (http://<CONTROLLER_SERVER_NAME>/) but redirects to port 443 (https://<CONTROLLER_SERVER_NAME>/).
If the installation fails and you are a customer who has purchased a valid license for Red Hat Ansible Automation Platform, contact Ansible through the Red Hat Customer portal.
Upon a successful log in to automation controller, your installation of Red Hat Ansible Automation Platform 2.4 is complete.
3.4.1. Additional automation controller configuration and resources
See the following resources to explore additional automation controller configurations.
Resource link | Description
---|---
 | Set up automation controller and run your first playbook.
 | Configure automation controller administration through customer scripts, management jobs, etc.
Configuring proxy support for Red Hat Ansible Automation Platform | Set up automation controller with a proxy server.
Managing usability analytics and data collection from automation controller | Manage what automation controller information you share with Red Hat.
 | Review automation controller functionality in more detail.
3.5. Verifying installation of automation hub
Verify that you installed your automation hub successfully by logging in with the admin credentials you inserted into the inventory file.
Procedure
- Navigate to the IP address specified for the automation hub node in the inventory file.
- Enter your Red Hat Satellite credentials. If this is your first time logging in after installation, upload your manifest file.
- Log in with the user ID admin and the password credentials you set in the inventory file.
If the installation fails and you are a customer who has purchased a valid license for Red Hat Ansible Automation Platform, contact Ansible through the Red Hat Customer portal.
Upon a successful login to automation hub, your installation of Red Hat Ansible Automation Platform 2.4 is complete.
3.5.1. Additional automation hub configuration and resources
See the following resources to explore additional automation hub configurations.
Resource link | Description
---|---
 | Configure user access for automation hub.
Managing Red Hat Certified, validated, and Ansible Galaxy content in automation hub | Add content to your automation hub.
Publishing proprietary content collections in automation hub | Publish internally developed collections on your automation hub.
3.6. Verifying Event-Driven Ansible controller installation
Verify that you installed Event-Driven Ansible controller successfully by logging in with the admin credentials you inserted in the inventory file.
Procedure
- Navigate to the IP address specified for the Event-Driven Ansible controller node in the inventory file.
- Enter your Red Hat Satellite credentials. If this is your first time logging in after installation, upload your manifest file.
- Log in with the user ID admin and the password credentials you set in the inventory file.
If the installation fails and you are a customer who has purchased a valid license for Red Hat Ansible Automation Platform, contact Ansible through the Red Hat Customer portal.
Upon a successful login to Event-Driven Ansible controller, your installation of Red Hat Ansible Automation Platform 2.4 is complete.
Chapter 4. Disconnected installation
If you are not connected to the internet or do not have access to online repositories, you can install Red Hat Ansible Automation Platform without an active internet connection.
4.1. Prerequisites
Before installing Ansible Automation Platform on a disconnected network, you must meet the following prerequisites:
- A created subscription manifest. See Obtaining a manifest file for more information.
- The Ansible Automation Platform setup bundle at Customer Portal is downloaded.
- The DNS records for the automation controller and private automation hub servers are created.
4.2. Ansible Automation Platform installation on disconnected RHEL
You can install Ansible Automation Platform automation controller and private automation hub without an internet connection by using the installer-managed database located on the automation controller. Use the setup bundle for a disconnected installation as it includes additional components that make installing Ansible Automation Platform easier in a disconnected environment. These include the Ansible Automation Platform Red Hat package managers (RPMs) and the default execution environment (EE) images.
4.2.1. System requirements for disconnected installation
Ensure that your system has all the hardware requirements before performing a disconnected installation of Ansible Automation Platform. For more information about hardware requirements, see Chapter 2. System requirements.
4.2.2. RPM Source
RPM dependencies for Ansible Automation Platform that come from the BaseOS and AppStream repositories are not included in the setup bundle. To add these dependencies, you must first obtain access to BaseOS and AppStream repositories. Use Satellite to sync repositories and add dependencies. If you prefer an alternative tool, you can choose between the following options:
- Reposync
- The RHEL Binary DVD
The RHEL Binary DVD method requires the DVD for supported versions of RHEL, including version 8.6 or higher. See Red Hat Enterprise Linux Life Cycle for information about which versions of RHEL are currently supported.
4.3. Synchronizing RPM repositories using reposync
To perform a reposync you need a RHEL host that has access to the internet. After the repositories are synced, you can move the repositories to the disconnected network hosted from a web server.
Procedure
Attach the BaseOS and AppStream required repositories:
# subscription-manager repos \
    --enable rhel-8-for-x86_64-baseos-rpms \
    --enable rhel-8-for-x86_64-appstream-rpms
Perform the reposync:
# dnf install yum-utils
# reposync -m --download-metadata --gpgcheck \
    -p /path/to/download
Use reposync with --download-metadata and without --newest-only. See RHEL 8 Reposync.
- If you are not using --newest-only, the repos downloaded will be ~90GB.
- If you are using --newest-only, the repos downloaded will be ~14GB.
If you plan to use Red Hat Single Sign-On, sync these repositories:
- jb-eap-7.3-for-rhel-8-x86_64-rpms
- rh-sso-7.4-for-rhel-8-x86_64-rpms
After the reposync is completed, your repositories are ready to use with a web server.
- Move the repositories to your disconnected network.
4.4. Creating a new web server to host repositories
If you do not have an existing web server to host your repositories, you can create one with your synced repositories.
Procedure
Install prerequisites:
$ sudo dnf install httpd
Configure httpd to serve the repo directory:
/etc/httpd/conf.d/repository.conf

DocumentRoot '/path/to/repos'

<LocationMatch "^/+$">
    Options -Indexes
    ErrorDocument 403 /.noindex.html
</LocationMatch>

<Directory '/path/to/repos'>
    Options All Indexes FollowSymLinks
    AllowOverride None
    Require all granted
</Directory>
Ensure that the directory is readable by an apache user:
$ sudo chown -R apache /path/to/repos
Configure SELinux:
$ sudo semanage fcontext -a -t httpd_sys_content_t "/path/to/repos(/.*)?"
$ sudo restorecon -ir /path/to/repos
Enable httpd:
$ sudo systemctl enable --now httpd.service
Open firewall:
$ sudo firewall-cmd --zone=public --add-service=http --add-service=https --permanent
$ sudo firewall-cmd --reload
On automation controller and automation hub, add a repo file at /etc/yum.repos.d/local.repo, and add the optional repos if needed:
[Local-BaseOS]
name=Local BaseOS
baseurl=http://<webserver_fqdn>/rhel-8-for-x86_64-baseos-rpms
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[Local-AppStream]
name=Local AppStream
baseurl=http://<webserver_fqdn>/rhel-8-for-x86_64-appstream-rpms
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
4.5. Accessing RPM repositories from a locally mounted DVD
If you plan to access the repositories from the RHEL binary DVD, you must first set up a local repository.
Procedure
Mount DVD or ISO:
DVD
# mkdir /media/rheldvd && mount /dev/sr0 /media/rheldvd
ISO
# mkdir /media/rheldvd && mount -o loop rhel-8.6-x86_64-dvd.iso /media/rheldvd
Create a yum repo file at /etc/yum.repos.d/dvd.repo:
[dvd-BaseOS]
name=DVD for RHEL - BaseOS
baseurl=file:///media/rheldvd/BaseOS
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[dvd-AppStream]
name=DVD for RHEL - AppStream
baseurl=file:///media/rheldvd/AppStream
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
Import the gpg key:
# rpm --import /media/rheldvd/RPM-GPG-KEY-redhat-release
If the key is not imported you will see an error similar to
# Curl error (6): Couldn't resolve host name for https://www.redhat.com/security/data/fd431d51.txt [Could not resolve host: www.redhat.com]
Additional Resources
For further detail on setting up a repository see Need to set up yum repository for locally-mounted DVD on Red Hat Enterprise Linux 8.
4.6. Adding a subscription manifest to Ansible Automation Platform without an internet connection
To add a subscription to Ansible Automation Platform without an internet connection, create and import a subscription manifest.
Procedure
- Log in to the Red Hat Customer Portal.
- From the menu bar, select Subscriptions and select the Subscription Allocations tab.
- Click .
- Name the new subscription allocation.
- Select Satellite 6.8 from the Type list.
- Click . The Details tab opens for your subscription allocation.
- Select the Subscriptions tab.
- Click .
- Find your Ansible Automation Platform subscription, and in the Entitlements box, add the number of entitlements you want to assign to your environment. A single entitlement is needed for each node that will be managed by Ansible Automation Platform: server, network device, etc.
- Click .
- Click .
This downloads a file manifest_<allocation name>_<date>.zip that can be imported into automation controller after installation.
4.7. Downloading and installing the Ansible Automation Platform setup bundle
Choose the setup bundle to download Ansible Automation Platform for disconnected installations. This bundle includes the RPM content for Ansible Automation Platform and the default execution environment images that will be uploaded to your private automation hub during the installation process.
Procedure
- Download the Ansible Automation Platform setup bundle package by navigating to the Red Hat Ansible Automation Platform download page and clicking the download link for the Ansible Automation Platform 2.4 Setup Bundle.
From automation controller, untar the bundle:
$ tar xvf ansible-automation-platform-setup-bundle-2.4-1.tar.gz
$ cd ansible-automation-platform-setup-bundle-2.4-1
Edit the inventory file to include the required options:
- automationcontroller group
- automationhub group
- admin_password
- pg_password
- automationhub_admin_password
- automationhub_pg_host, automationhub_pg_port
- automationhub_pg_password
Example Inventory file
[automationcontroller]
automationcontroller.example.org ansible_connection=local

[automationcontroller:vars]
peers=execution_nodes

[automationhub]
automationhub.example.org

[all:vars]
admin_password='password123'
pg_database='awx'
pg_username='awx'
pg_password='dbpassword123'
receptor_listener_port=27199
automationhub_admin_password='hubpassword123'
automationhub_pg_host='automationcontroller.example.org'
automationhub_pg_port=5432
automationhub_pg_database='automationhub'
automationhub_pg_username='automationhub'
automationhub_pg_password='dbpassword123'
automationhub_pg_sslmode='prefer'
Run the Ansible Automation Platform setup bundle executable as the root user:
$ sudo -i
# cd /path/to/ansible-automation-platform-setup-bundle-2.4-1
# ./setup.sh
- When installation is complete, navigate to the Fully Qualified Domain Name (FQDN) for the automation controller node that was specified in the installation inventory file.
- Log in using the administrator credentials specified in the installation inventory file.
The inventory file must be kept intact after installation because it is used for backup, restore, and upgrade functions. Keep a backup copy in a secure location, given that the inventory file contains passwords.
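For example, one simple way to keep a protected copy is to store it in a root-only directory (the path below is only an illustration):

$ sudo mkdir -p /root/aap-backups
$ sudo cp inventory /root/aap-backups/inventory-2.4-1
$ sudo chmod 600 /root/aap-backups/inventory-2.4-1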
4.8. Completing post installation tasks
After you have completed the installation of Ansible Automation Platform, ensure that automation hub and automation controller deploy properly.
4.8.1. Adding a controller subscription
Procedure
-
Navigate to the FQDN of the automation controller. Log in with the username admin and the password you specified as admin_password in your inventory file.
- Click the browse button and select the manifest.zip file you created earlier.
- Click .
- Uncheck the analytics options; these rely on an internet connection and must be turned off.
- Click .
- Read the End User License Agreement and click if you agree.
4.8.2. Updating the CA trust store
As part of your post-installation tasks, you must update the software’s certificates. By default, Ansible Automation Platform automation hub and automation controller are installed using self-signed certificates. Because of this, the controller does not trust the hub’s certificate and will not download the execution environment from the hub.
To ensure that automation controller downloads the execution environment from automation hub, you must import the hub’s Certificate Authority (CA) certificate as a trusted certificate on the controller. You can do this in one of two ways, depending on whether SSH is available as root user between automation controller and private automation hub.
4.8.2.1. Using secure copy (SCP) as a root user
If SSH is available as the root user between the controller and private automation hub, use SCP to copy the root certificate on the private automation hub to the controller.
Procedure
- Run update-ca-trust on the controller to update the CA trust store:

$ sudo -i
# scp <hub_fqdn>:/etc/pulp/certs/root.crt /etc/pki/ca-trust/source/anchors/automationhub-root.crt
# update-ca-trust
4.8.2.2. Copying and pasting as a non-root user
If SSH is unavailable as root between the private automation hub and the controller, copy the contents of the file /etc/pulp/certs/root.crt on the private automation hub and paste it into a new file on the controller called /etc/pki/ca-trust/source/anchors/automationhub-root.crt.
Procedure
- Run update-ca-trust to update the CA trust store with the new certificate. On the private automation hub, run:

$ sudo -i
# cat /etc/pulp/certs/root.crt
(copy the contents of the file, including the lines with 'BEGIN CERTIFICATE' and 'END CERTIFICATE')
- On the automation controller:
$ sudo -i
# vi /etc/pki/ca-trust/source/anchors/automationhub-root.crt
(paste the contents of the root.crt file from the private automation hub into the new file and write it to disk)
# update-ca-trust
Additional Resources
- For further information on unknown certificate authority, see Project sync fails with unknown certificate authority error in Ansible Automation Platform 2.1.
4.9. Importing collections into private automation hub
You can download a collection as a tarball file from Ansible automation hub for use in your private automation hub. Certified collections are available on the automation hub Hybrid Cloud Console, and community collections are on Ansible Galaxy. You must also download and install any dependencies needed for the collection.
Procedure
- Navigate to console.redhat.com and log in with your Red Hat credentials.
- Click on the collection you want to download.
- Click
- To verify if a collection has dependencies, click the Dependencies tab.
- Download any dependencies needed for this collection.
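If you prefer the command line, an alternative sketch is to run ansible-galaxy collection download on an internet-facing machine; it downloads the collection tarball together with its dependencies and a requirements.yml that you can carry across the boundary. The collection name below is only an example:

$ ansible-galaxy collection download ansible.netcommon -p ./collections-download/
$ ls ./collections-download/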
4.10. Creating a collection namespace
Before importing a collection, you must first create a namespace for the collection in your private automation hub. You can find the namespace name by looking at the first part of the collection tarball filename. For example, the namespace of the collection ansible-netcommon-3.0.0.tar.gz is ansible.
Procedure
- Log in to the automation hub Hybrid Cloud Console.
- From the navigation panel, select → .
- Click .
- Provide the namespace name.
- Click .
4.10.1. Importing the collection tarball by using the web console
Once the namespace has been created, you can import the collection by using the web console.
Procedure
- Log in to automation hub Hybrid Cloud Console.
- From the navigation panel, select → .
- Click next to the namespace you will be importing the collection into.
- Click .
- Click the folder icon and select the tarball of the collection.
- Click .
This opens the 'My Imports' page. You can see the status of the import and various details of the files and modules that have been imported.
4.10.2. Importing the collection tarball by using the CLI
You can import collections into your private automation hub by using the command-line interface rather than the GUI.
Procedure
- Copy the collection tarballs to the private automation hub.
- Log in to the private automation hub server via SSH.
Add the self-signed root CA cert to the trust store on automation hub.
# cp /etc/pulp/certs/root.crt /etc/pki/ca-trust/source/anchors/automationhub-root.crt
# update-ca-trust
Update the /etc/ansible/ansible.cfg file with your automation hub configuration. Use either a token or a username and password for authentication.

[galaxy]
server_list = private_hub

[galaxy_server.private_hub]
url=https://<hub_fqdn>/api/galaxy/
token=<token_from_private_hub>
- Import the collection using the ansible-galaxy command.
$ ansible-galaxy collection publish <collection_tarball>
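If you use a username and password instead of a token for the galaxy server configuration above, the server section might look like the following sketch (the credentials are placeholders):

[galaxy_server.private_hub]
url=https://<hub_fqdn>/api/galaxy/
username=<hub_username>
password=<hub_password>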
4.11. Approving the imported collections
After you have imported collections by using either the GUI or the CLI method, you must approve them by using the GUI. After they are approved, they are available for use.
Procedure
- Log in to automation hub Hybrid Cloud Console.
- From the navigation panel, select → .
- Click for the collection you want to approve.
- The collection is now available for use in your private automation hub.
- Import any dependency for the collection by repeating steps 2 and 3.
The collection is added to the "Published" repository regardless of its source.
Recommended collections depend on your use case. Ansible and Red Hat provide these collections.
4.11.1. Custom automation execution environments
Use the ansible-builder program to create custom execution environment images. For disconnected environments, custom execution environment images can be built in the following ways:
- Build an execution environment image on an internet-facing system and import it to the disconnected environment.
- Build an execution environment image entirely on the disconnected environment with some modifications to the normal process of using ansible-builder.
- Create a minimal base container image that includes all of the necessary modifications for a disconnected environment, then build custom execution environment images from the base container image.
4.11.1.1. Transferring custom execution environment images across a disconnected boundary
You can build a custom execution environment image on an internet-facing machine. After you create an execution environment, it is available in the local podman image cache. You can then transfer the custom execution environment image across a disconnected boundary.
Procedure
Save the image:
$ podman image save localhost/custom-ee:latest | gzip -c > custom-ee-latest.tar.gz
Transfer the file across the disconnected boundary by using an existing mechanism such as sneakernet or one-way diode.
- After the image is available on the disconnected side, import it into the local podman cache, tag it, and push it to the disconnected hub:
$ podman image load -i custom-ee-latest.tar.gz
$ podman image tag localhost/custom-ee <hub_fqdn>/custom-ee:latest
$ podman login <hub_fqdn> --tls-verify=false
$ podman push <hub_fqdn>/custom-ee:latest
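Optionally, to confirm that the archive survives the transfer intact, you can record a checksum before the copy and verify it on the disconnected side. This is a simple sketch using standard tools:

$ sha256sum custom-ee-latest.tar.gz > custom-ee-latest.tar.gz.sha256
# after the transfer, on the disconnected side:
$ sha256sum -c custom-ee-latest.tar.gz.sha256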
4.12. Building an execution environment in a disconnected environment
Creating execution environments for Ansible Automation Platform is a common task which works differently in disconnected environments. When building a custom execution environment, the ansible-builder tool defaults to downloading content from the following locations on the internet:
- Red Hat Automation hub (console.redhat.com) or Ansible Galaxy (galaxy.ansible.com) for any Ansible content collections added to the execution environment image.
- PyPI (pypi.org) for any python packages required as collection dependencies.
- RPM repositories such as the RHEL or UBI repositories (cdn.redhat.com) for adding or updating RPMs to the execution environment image, if needed.
- registry.redhat.io for access to the base container images.
Building an execution environment image in a disconnected environment requires mirroring content from these locations. See Importing Collections into private automation hub for information about importing collections from Ansible Galaxy or automation hub into a private automation hub.
Mirrored PyPI content, once transferred into the disconnected network, can be made available by using a web server or an artifact repository such as Nexus. The RHEL and UBI repository content can be exported from an internet-facing Red Hat Satellite Server, copied into the disconnected environment, then imported into a disconnected Satellite so it is available for building custom execution environments. See ISS Export Sync in an Air-Gapped Scenario for details.
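As one possible approach (a sketch, not the only supported method), the Python dependencies can be collected on an internet-facing host with pip download and then published on the disconnected side. Note that the pip.conf index-url shown later expects a PEP 503 "simple" index layout, which a Nexus pypi (hosted) repository or a tool such as dir2pi can provide:

$ pip download -r requirements.txt -d ./pypi-mirror/
# transfer ./pypi-mirror/ across the disconnected boundary, then publish it
# through Nexus or convert it to a simple index before pointing pip at it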
The default base container image, ee-minimal-rhel8, is used to create custom execution environment images and is included with the bundled installer. This image is added to the private automation hub at install time. If a different base container image such as ee-minimal-rhel9 is required, it must be imported to the disconnected network and added to the private automation hub container registry.
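For example, importing a different base image might look like the following sketch. The image path and tag are illustrative; confirm the exact repository in the Red Hat Ecosystem Catalog, and note that registry.redhat.io requires authentication (omitted here):

# on an internet-facing machine
$ podman pull registry.redhat.io/ansible-automation-platform-24/ee-minimal-rhel9:latest
$ podman image save registry.redhat.io/ansible-automation-platform-24/ee-minimal-rhel9:latest | gzip -c > ee-minimal-rhel9.tar.gz
# transfer the archive, then on the disconnected side
$ podman image load -i ee-minimal-rhel9.tar.gz
$ podman image tag registry.redhat.io/ansible-automation-platform-24/ee-minimal-rhel9:latest <hub_fqdn>/ee-minimal-rhel9:latest
$ podman push <hub_fqdn>/ee-minimal-rhel9:latest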
Once all of the prerequisites are available on the disconnected network, the ansible-builder command can be used to create custom execution environment images.
4.12.1. Installing the Ansible Builder RPM
On the RHEL system where custom execution environments will be built, you will install the Ansible Builder RPM by using a Satellite Server that already exists in the environment. This method is preferred because the execution environment images can use any RHEL content from the pre-existing Satellite if required.
Procedure
Install the Ansible Builder RPM from the Ansible Automation Platform repository.
- Subscribe the RHEL system to a Satellite on the disconnected network.
- Attach the Ansible Automation Platform subscription and enable the Ansible Automation Platform repository. The repository name is either ansible-automation-platform-2.4-for-rhel-8-x86_64-rpms or ansible-automation-platform-2.4-for-rhel-9-x86_64-rpms, depending on the version of RHEL used on the underlying system.
- Install the Ansible Builder RPM. The version of the Ansible Builder RPM must be 3.0.0 or later for the examples below to work properly. A sketch of these commands follows this list.
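A hedged sketch of the Satellite-based installation, assuming an activation key that attaches the subscription and enables the appropriate repository (organization name, key name, and any Satellite CA consumer setup are placeholders or omitted):

$ sudo subscription-manager register --org=<satellite_org> --activationkey=<aap_activation_key>
$ sudo subscription-manager repos --enable ansible-automation-platform-2.4-for-rhel-8-x86_64-rpms
$ sudo dnf install ansible-builder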
Install the Ansible Builder RPM from the Ansible Automation Platform setup bundle. Use this method if a Satellite Server is not available on your disconnected network.
- Unarchive the Ansible Automation Platform setup bundle.
- Install the Ansible Builder RPM and its dependencies from the included content.
$ tar -xzvf ansible-automation-platform-setup-bundle-2.4-3-x86_64.tar.gz
$ cd ansible-automation-platform-setup-bundle-2.4-3-x86_64/bundle/packages/el8/repos/
$ sudo dnf install ansible-builder-3.0.0-2.el8ap.noarch.rpm \
    python39-requirements-parser-0.2.0-4.el8ap.noarch.rpm \
    python39-bindep-2.10.2-3.el8ap.noarch.rpm \
    python39-jsonschema-4.16.0-1.el8ap.noarch.rpm \
    python39-pbr-5.8.1-2.el8ap.noarch.rpm \
    python39-distro-1.6.0-3.el8pc.noarch.rpm \
    python39-packaging-21.3-2.el8ap.noarch.rpm \
    python39-parsley-1.3-2.el8pc.noarch.rpm \
    python39-attrs-21.4.0-2.el8pc.noarch.rpm \
    python39-pyrsistent-0.18.1-2.el8ap.x86_64.rpm \
    python39-pyparsing-3.0.9-1.el8ap.noarch.rpm
The specific versions may be slightly different depending on the version of the setup bundle being used.
Additional resources
- For details on creating a Satellite environment on a disconnected network see Installing Satellite Server in a Disconnected Network Environment.
4.12.2. Creating the custom execution environment definition
Once the Ansible Builder RPM is installed, use the following steps to create your custom execution environment.
Create a directory for the build artifacts used when creating your custom execution environment. Any new files created with the steps below will be created under this directory.
$ mkdir $HOME/custom-ee $HOME/custom-ee/files
$ cd $HOME/custom-ee/
Create an execution-environment.yml file that defines the requirements for your custom execution environment.
Note: Version 3 of the execution environment definition format is required, so ensure the execution-environment.yml file contains version: 3 explicitly before continuing.
- Override the base image to point to the minimal execution environment available in your private automation hub.
- Define the additional build files needed to point to any disconnected content sources that will be used in the build process. Your custom execution-environment.yml file should look similar to the following example:
$ cat execution-environment.yml
---
version: 3

images:
  base_image:
    name: private-hub.example.com/ee-minimal-rhel8:latest

dependencies:
  python: requirements.txt
  galaxy: requirements.yml

additional_build_files:
  - src: files/ansible.cfg
    dest: configs
  - src: files/pip.conf
    dest: configs
  - src: files/hub-ca.crt
    dest: configs
  # uncomment if custom RPM repositories are required
  #- src: files/custom.repo
  #  dest: configs

additional_build_steps:
  prepend_base:
    # copy a custom pip.conf to override the location of the PyPI content
    - ADD _build/configs/pip.conf /etc/pip.conf
    # remove the default UBI repository definition
    - RUN rm -f /etc/yum.repos.d/ubi.repo
    # copy the hub CA certificate and update the trust store
    - ADD _build/configs/hub-ca.crt /etc/pki/ca-trust/source/anchors
    - RUN update-ca-trust
    # if needed, uncomment to add a custom RPM repository configuration
    #- ADD _build/configs/custom.repo /etc/yum.repos.d/custom.repo
  prepend_galaxy:
    - ADD _build/configs/ansible.cfg ~/.ansible.cfg
...
Create an ansible.cfg file under the files/ subdirectory that points to your private automation hub.

$ cat files/ansible.cfg
[galaxy]
server_list = private_hub

[galaxy_server.private_hub]
url = https://private-hub.example.com/api/galaxy/
Create a pip.conf file under the files/ subdirectory that points to the internal PyPI mirror (a web server or an artifact repository such as Nexus):

$ cat files/pip.conf
[global]
index-url = https://<pypi_mirror_fqdn>/
trusted-host = <pypi_mirror_fqdn>
Optional: If you use a bindep.txt file to add RPMs to the custom execution environment, create a custom.repo file under the files/ subdirectory that points to your disconnected Satellite or another location hosting the RPM repositories. If this step is necessary, uncomment the corresponding steps in the example execution-environment.yml file.
The following example is for the UBI repositories. Other local repositories can be added to this file as well. The URL path may need to change depending on where the mirror content is located on the web server.
$ cat files/custom.repo
[ubi-8-baseos]
name = Red Hat Universal Base Image 8 (RPMs) - BaseOS
baseurl = http://<ubi_mirror_fqdn>/repos/ubi-8-baseos
enabled = 1
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
gpgcheck = 1

[ubi-8-appstream]
name = Red Hat Universal Base Image 8 (RPMs) - AppStream
baseurl = http://<ubi_mirror_fqdn>/repos/ubi-8-appstream
enabled = 1
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
gpgcheck = 1
Add the CA certificate used to sign the private automation hub web server certificate.
If the private automation hub uses self-signed certificates provided by the installer:
- Copy the file /etc/pulp/certs/pulp_webserver.crt from your private automation hub and name it hub-ca.crt.
- Add the hub-ca.crt file to the files/ subdirectory.
If the private automation hub uses user-provided certificates signed by a certificate authority:
- Make a copy of that CA certificate and name it hub-ca.crt.
- Add the hub-ca.crt file to the files/ subdirectory.
Once the preceding steps have been completed, create your Python requirements.txt and Ansible collection requirements.yml files with the content needed for your custom execution environment image.
Note: Any required collections must already be uploaded into your private automation hub.
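For illustration only, minimal requirements files might look like the following; the package and collection names are placeholders for whatever your execution environment actually needs:

$ cat requirements.txt
netaddr
jmespath

$ cat requirements.yml
---
collections:
  - name: ansible.utils
  - name: community.general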
The following files should exist under the custom-ee/ directory, with bindep.txt and files/custom.repo being optional:
$ cd $HOME/custom-ee
$ tree .
.
├── bindep.txt
├── execution-environment.yml
├── files
│   ├── ansible.cfg
│   ├── custom.repo
│   ├── hub-ca.crt
│   └── pip.conf
├── requirements.txt
└── requirements.yml

1 directory, 8 files
Additional resources
For more information on the Version 3 format and requirements, see Execution Environment Definition: Version 3 Format.
4.12.3. Building the custom execution environment
Before creating the new custom execution environment, you need an API token from the private automation hub in order to download content.
Generate a token by taking the following steps:
- Log in to your private hub.
- Choose "Collections" from the left-hand menu.
- Choose the"API token" under the "Collections" section of the menu.
Once you have the token, set the following environment variable so that Ansible Builder can access the token:
$ export ANSIBLE_GALAXY_SERVER_PRIVATE_HUB_TOKEN=<your_token>
Create the custom execution environment by using the command:
$ cd $HOME/custom-ee
$ ansible-builder build -f execution-environment.yml -t private-hub.example.com/custom-ee:latest -v 3
Note: If the build fails with an error that the private hub certificate is signed by an unknown authority, you can pull the required image into the local image cache by running the command:
$ podman pull private-hub.example.com/ee-minimal-rhel8:latest --tls-verify=false
Alternately, you can add the private hub CA certificate to the podman certificate store:
$ sudo mkdir /etc/containers/certs.d/private-hub.example.com
$ sudo cp $HOME/custom-ee/files/hub-ca.crt /etc/containers/certs.d/private-hub.example.com
4.12.4. Uploading the custom execution environment to the private automation hub
Before the new execution environment image can be used for automation jobs, it must be uploaded to the private automation hub.
First, verify that the execution environment image can be seen in the local podman cache:
$ podman images --format "table {{.ID}} {{.Repository}} {{.Tag}}"
IMAGE ID      REPOSITORY                                 TAG
b38e3299a65e  private-hub.example.com/custom-ee          latest
8e38be53b486  private-hub.example.com/ee-minimal-rhel8   latest
Then log in to the private automation hub’s container registry and push the image to make it available for use with job templates and workflows:
$ podman login private-hub.example.com -u admin
Password:
Login Succeeded!

$ podman push private-hub.example.com/custom-ee:latest
4.13. Upgrading between minor Ansible Automation Platform releases
To upgrade between minor releases of Ansible Automation Platform 2, use this general workflow.
Procedure
- Download and unarchive the latest Ansible Automation Platform 2 setup bundle.
- Create a backup of the existing installation.
- Copy the existing installation inventory file into the new setup bundle directory.
- Run ./setup.sh to upgrade the installation.
For example, to upgrade from version 2.2.0-7 to 2.3-1.2, make sure that both setup bundles are on the initial controller node where the installation occurred:
$ ls -1F
ansible-automation-platform-setup-bundle-2.2.0-7/
ansible-automation-platform-setup-bundle-2.2.0-7.tar.gz
ansible-automation-platform-setup-bundle-2.3-1.2/
ansible-automation-platform-setup-bundle-2.3-1.2.tar.gz
Back up the 2.2.0-7 installation:
$ cd ansible-automation-platform-setup-bundle-2.2.0-7
$ sudo ./setup.sh -b
$ cd ..
Copy the 2.2.0-7 inventory file into the 2.3-1.2 bundle directory:
$ cd ansible-automation-platform-setup-bundle-2.2.0-7
$ cp inventory ../ansible-automation-platform-setup-bundle-2.3-1.2/
$ cd ..
Upgrade from 2.2.0-7 to 2.3-1.2 with the setup.sh script:
$ cd ansible-automation-platform-setup-bundle-2.3-1.2
$ sudo ./setup.sh
Appendix A. Inventory file variables
The following tables contain information about the pre-defined variables used in Ansible installation inventory files. Not all of these variables are required.
A.1. General variables
Variable | Description |
---|---|
|
The default install registers the node to the Red Hat Insights for Red Hat Ansible Automation Platform Service if the node is registered with Subscription Manager. Set to
Default = |
|
List of nginx configurations for
Each element in the list is provided into Default = empty list |
|
Password credential for access to
Used for both
Enter your Red Hat Registry Service Account credentials in
When |
|
Used for both
Default = |
|
User credential for access to
Used for both
Enter your Red Hat Registry Service Account credentials in |
|
If
This variable is used as a host variable for particular hosts and not under the |
A.2. Ansible automation hub variables
Variable | Description |
---|---|
| Required Passwords must be enclosed in quotes when they are provided in plain text in the inventory file. |
| If upgrading from Ansible Automation Platform 2.0 or earlier, you must either:
Generating a new token invalidates the existing token. |
|
This variable is not set by default. Set it to
When this is set to
If any of these are absent, the installation will be halted. |
| If a collection signing service is enabled, collections are not signed automatically by default.
Setting this parameter to
Default = |
| Optional
Ansible automation hub provides artifacts in
You can also set
Default = |
| Optional Determines whether download count is displayed on the UI.
Default = |
|
When you run the bundle installer, validated content is uploaded to the By default, both certified and validated content are uploaded. Possible values of this variable are 'certified' or 'validated'.
If you do not want to install content, set
If you only want one type of content, set |
| If a collection signing service is enabled, you must provide this variable to ensure that collections can be properly signed.
|
| If a collection signing service is enabled, you must provide this variable to ensure that collections can be properly signed.
|
| Set this variable to true to create a collection signing service.
Default = |
| If a container signing service is enabled, you must provide this variable to ensure that containers can be properly signed.
|
| If a container signing service is enabled, you must provide this variable to ensure that containers can be properly signed.
|
| Set this variable to true to create a container signing service.
Default = |
| The default installation deploys a TLS enabled Ansible automation hub. Use this variable if you deploy automation hub with HTTP Strict Transport Security (HSTS) web-security policy enabled. This variable disables the HSTS web-security policy mechanism.
Default = |
| Optional If Ansible automation hub is deployed with HTTPS enabled.
Default = |
|
When set to
Default = |
| A Boolean indicating whether to enable pulp analytics for the version of pulpcore used in automation hub in Ansible Automation Platform 2.4.
To enable pulp analytics, set
Default = |
| Set this variable to true to enable unauthorized users to view collections.
Default = |
| Set this variable to true to enable unauthorized users to download collections.
Default = |
| Optional Dictionary of settings to pass to galaxy-importer. At import time, collections can go through a series of checks.
Behavior is driven by
Examples are This parameter enables you to drive this configuration. |
| The main automation hub URL that clients connect to. For example, https://<load balancer host>.
Use
If not specified, the first node in the |
| Required The database name.
Default = |
| Required if not using an internal database. The hostname of the remote PostgreSQL database used by automation hub.
Default = |
| The password for the automation hub PostgreSQL database.
Use of special characters for |
| Required if not using an internal database. Default = 5432. |
| Required.
Default = |
| Required
Default = |
| Optional
Value is
By default, when you upload collections to automation hub, an administrator must approve them before they are made available to users.
If you want to disable the content approval flow, set the variable to
Default = |
| A Boolean that defines whether or not preloading is enabled.
When you run the bundle installer, validated content is uploaded to the By default, both certified and validated content are uploaded.
If you do not want to install content, set
If you only want one type of content, set
Default = |
| Optional
|
| Optional
Same as |
| For Red Hat Ansible Automation Platform 2.2 and later, this value is no longer used.
Set value to
Default = |
| Deprecated
For Ansible Automation Platform 2.2.1 and later, the value of this has been fixed at Automation hub always updates with the latest packages. |
| List of nginx headers for Ansible automation hub’s web server. Each element in the list is provided to the web server’s nginx configuration as a separate line. Default = empty list |
| When deployed with automation hub the installer pushes execution environment images to automation hub and configures automation controller to pull images from the automation hub registry.
To make automation hub the only registry to pull execution environment images from, set this variable to
If set to
Default = |
| If upgrading from Red Hat Ansible Automation Platform 2.0 or earlier, choose one of the following options:
|
| This variable specifies how long, in seconds, the system should be considered as a HTTP Strict Transport Security (HSTS) host. That is, how long HTTPS is used exclusively for communication. Default = 63072000 seconds, or two years. |
|
Defines support for
Values available The TLSv1.1 and TLSv1.2 parameters only work when OpenSSL 1.0.1 or higher is used. The TLSv1.3 parameter only works when OpenSSL 1.1.1 or higher is used.
If
Default = |
| Relative or absolute path to the Fernet symmetric encryption key that you want to import. The path is on the Ansible management node. It is used to encrypt certain fields in the database, such as credentials. If not specified, a new key will be generated. |
| Optional Used for Ansible Automation Platform managed and externally managed Red Hat Single Sign-On. Path to the directory where theme files are located. If changing this variable, you must provide your own theme files.
Default = |
| Optional Used for Ansible Automation Platform managed and externally managed Red Hat Single Sign-On. The name of the realm in SSO.
Default = |
| Optional Used for Ansible Automation Platform managed and externally managed Red Hat Single Sign-On. Display name for the realm.
Default = |
| Optional Used for Ansible Automation Platform managed and externally managed Red Hat Single Sign-On. SSO administration username.
Default = |
| Required Used for Ansible Automation Platform managed and externally managed Red Hat Single Sign-On. SSO administration password. |
| Optional Used for Ansible Automation Platform managed Red Hat Single Sign-On only. Customer-provided keystore for SSO. |
| Required Used for Ansible Automation Platform externally managed Red Hat Single Sign-On only. Automation hub requires SSO and SSO administration credentials for authentication. If SSO is not provided in the inventory for configuration, then you must use this variable to define the SSO host. |
| Optional Used for Ansible Automation Platform managed Red Hat Single Sign-On only.
Set to
Default = |
| Optional Used for Ansible Automation Platform managed Red Hat Single Sign-On only. Name of keystore for SSO.
Default = |
| Password for keystore for HTTPS enabled SSO.
Required when using Ansible Automation Platform managed SSO and when HTTPS is enabled. The default install deploys SSO with |
| Optional Used for Ansible Automation Platform managed and externally managed Red Hat Single Sign-On.
If This must be reachable from client machines. |
| Optional Used for Ansible Automation Platform managed and externally managed Red Hat Single Sign-On.
Set to
Default = |
| Optional Used for Ansible Automation Platform managed and externally managed Red Hat Single Sign-On if Single Sign On uses HTTPS.
Default = |
For Ansible automation hub to connect to LDAP directly, you must configure the following variables. For a list of additional LDAP-related variables that can be passed by using the ldap_extra_settings
variable, see the Django reference documentation.
Variable | Description |
---|---|
|
The name to use when binding to the LDAP server with Must be set when integrating private automation hub with LDAP, or the installation will fail. |
| Required
The password to use with Must be set when integrating private automation hub LDAP, or the installation will fail. |
| An LDAP Search object that finds all LDAP groups that users might belong to.
If your configuration makes any references to LDAP groups, you must set this variable and Must be set when integrating private automation hub with LDAP, or the installation will fail.
Default = |
| Optional Search filter for finding group membership. Variable identifies what objectClass type to use for mapping groups with automation hub and LDAP. Used for installing automation hub with LDAP.
Default = |
| Optional Scope to search for groups in an LDAP tree using the django framework for LDAP authentication. Used for installing automation hub with LDAP.
Default = |
| Describes the type of group returned by automationhub_ldap_group_search. This is set dynamically based on the values of automationhub_ldap_group_type_params and automationhub_ldap_group_type_class; otherwise, it is the default value coming from django-ldap, which is 'None'.
Default = |
| Optional The importable path for the django-ldap group type class. Variable identifies the group type used during group searches within the django framework for LDAP authentication. Used for installing automation hub with LDAP.
Default = |
| The URI of the LDAP server. Use any URI that is supported by your underlying LDAP libraries. Must be set when integrating private automation hub with LDAP, or the installation will fail.
| An LDAP Search object that locates a user in the directory. The filter parameter must contain the placeholder %(user)s for the username. It must return exactly one result for authentication to succeed. Must be set when integrating private automation hub with LDAP, or the installation will fail. |
| Optional
Default = |
| Optional Scope to search for users in an LDAP tree by using the django framework for LDAP authentication. Used for installing automation hub with LDAP.
Default = |
A.3. Automation controller variables
Variable | Description |
---|---|
| The admin password used to connect to the automation controller instance. Passwords must be enclosed in quotes when they are provided in plain text in the inventory file. |
| The full URL used by Event-Driven Ansible to connect to a controller host. This URL is required if there is no automation controller configured in the inventory file.
Format example: |
| The username used to identify and create the admin superuser in automation controller. |
| The email address used for the admin user for automation controller. |
| The nginx HTTP server listens for inbound connections. Default = 80 |
| The nginx HTTPS server listens for secure connections. Default = 443 |
| This variable specifies how long, in seconds, the system must be considered as a HTTP Strict Transport Security (HSTS) host. That is, how long HTTPS is used exclusively for communication. Default = 63072000 seconds, or two years. |
|
Defines support for
Values available The TLSv1.1 and TLSv1.2 parameters only work when OpenSSL 1.0.1 or higher is used. The TLSv1.3 parameter only works when OpenSSL 1.1.1 or higher is used.
If
Default = |
| List of nginx headers for the automation controller web server. Each element in the list is provided to the web server’s nginx configuration as a separate line. Default = empty list |
| Optional
The status of a node or group of nodes. Valid options are
Default = |
|
For
Two valid
A
A
Default for this group =
For
Two valid
A
A
Default for this group = |
| Optional
The
This variable is used to add
The peers variable can be a comma-separated list of hosts and groups from the inventory. This is resolved into a set of hosts that is used to construct the |
| The name of the postgreSQL database.
Default = |
| The postgreSQL host, which can be an externally managed database. |
| The password for the postgreSQL database.
Use of special characters for NOTE
You no longer have to provide a
When you supply |
| The postgreSQL port to use. Default = 5432 |
|
Choose one of the two available modes:
Set to
Default = |
| Your postgreSQL database username.
Default = |
| Location of the postgreSQL SSL certificate.
|
| Location of the postgreSQL SSL key.
|
| Location of the postgreSQL user certificate.
|
| Location of the postgreSQL user key.
|
| Use this variable if postgreSQL uses SSL. |
| Maximum database connections setting to apply if you are using installer-managed postgreSQL. See PostgreSQL database configuration in the automation controller administration guide for help selecting a value. Default for VM-based installations = 200 for a single node and 1024 for a cluster. |
| Port to use for receptor connection. Default = 27199 |
|
When specified, it adds
See program:x Section Values for more information about No default value exists. |
| Optional
Same as |
| Optional
Same as |
A.4. Ansible variables
The following variables control how Ansible Automation Platform interacts with remote hosts.
For more information about variables specific to certain plugins, see the documentation for Ansible.Builtin.
For a list of global configuration options, see Ansible Configuration Settings.
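For example, these behavioral variables are typically set per host in the installer inventory. The host names, user, address, and key path below are placeholders for illustration only:

[automationcontroller]
controller.example.org ansible_user=ansible ansible_ssh_private_key_file=~/.ssh/aap_ed25519

[automationhub]
hub.example.org ansible_host=192.0.2.10 ansible_port=22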
Variable | Description |
---|---|
| The connection plugin used for the task on the target host.
This can be the name of any Ansible connection plugin. SSH protocol types are
Default = |
|
The IP address or name of the target host to use instead of
| The connection port number. Default: 22 for ssh |
| The user name to use when connecting to the host. |
| The password to authenticate to the host. Never store this variable in plain text. Always use a vault. |
| Private key file used by SSH. Useful if using multiple keys and you do not want to use an SSH agent. |
|
This setting is always appended to the default command line for |
|
This setting is always appended to the default |
|
This setting is always appended to the default |
|
This setting is always appended to the default |
|
Determines if SSH pipelining is used. This can override the pipelining setting in |
| Added in version 2.2.
This setting overrides the default behavior to use the system SSH. This can override the ssh_executable setting in |
|
The shell type of the target system. Do not use this setting unless you have set the |
|
This sets the shell that the Ansible controller uses on the target machine, and overrides the executable in
Do not change this variable unless |
| This variable takes the hostname of the machine from the inventory script or the Ansible configuration file. You cannot set the value of this variable. Because the value is taken from the configuration file, the actual runtime hostname value can vary from what is returned by this variable. |
A.5. Event-Driven Ansible controller variables
Variable | Description |
---|---|
| The admin password used by the Event-Driven Ansible controller instance. Passwords must be enclosed in quotes when they are provided in plain text in the inventory file. |
| Username used by django to identify and create the admin superuser in Event-Driven Ansible controller.
Default = |
| Email address used by django for the admin user for Event-Driven Ansible controller.
Default = |
| List of additional addresses to enable for user access to Event-Driven Ansible controller. Default = empty list |
|
Boolean flag used to verify automation controller’s web certificates when making calls from Event-Driven Ansible controller. Verified is
Default = |
| Boolean flag to disable HTTPS for Event-Driven Ansible controller.
Default = |
| Boolean flag to disable HSTS for Event-Driven Ansible controller.
Default = |
| Number of workers for the API served through gunicorn. Default = (# of cores or threads) * 2 + 1 |
| The number of maximum activations running concurrently per node. This is an integer that must be greater than 0. Default = 12 |
| Boolean flag to specify whether cert sources are on the remote host (true) or local (false).
Default = |
| The Postgres database used by Event-Driven Ansible controller.
Default = |
| The hostname of the Postgres database used by Event-Driven Ansible controller, which can be an externally managed database. |
| The password for the Postgres database used by Event-Driven Ansible controller.
Use of special characters for |
| The port number of the Postgres database used by Event-Driven Ansible controller.
Default = |
| The username for your Event-Driven Ansible controller Postgres database.
Default = |
| Number of Redis Queue (RQ) workers used by Event-Driven Ansible controller. RQ workers are Python processes that run in the background. Default = (# of cores or threads) * 2 + 1 |
| Optional
Same as |
| Optional
Same as |
| List of additional nginx headers to add to Event-Driven Ansible controller’s nginx configuration. Default = empty list |