Chapter 1. Ansible Automation Platform containerized installation
Ansible Automation Platform is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments.
This guide helps you to understand the installation requirements and processes behind the containerized version of Ansible Automation Platform.
Upgrades from 2.4 Containerized Ansible Automation Platform Tech Preview to 2.5 Containerized Ansible Automation Platform are not supported at this time.
1.1. Tested deployment topologies
Red Hat tests Ansible Automation Platform 2.5 with a defined set of topologies to give you opinionated deployment options. The supported topologies include infrastructure topology diagrams, tested system configurations, example inventory files, and network ports information.
For containerized Ansible Automation Platform, there are two infrastructure topology shapes:
- Growth - (All-in-one) Intended for organizations that are getting started with Ansible Automation Platform. This topology allows for smaller footprint deployments.
- Enterprise - Intended for organizations that require Ansible Automation Platform deployments to have redundancy or higher compute for large volumes of automation. This is a more future-proofed, scaled-out architecture.
For more information about the tested deployment topologies for containerized Ansible Automation Platform, see Container topologies in Tested deployment models.
1.2. System requirements
Use this information when planning your installation of containerized Ansible Automation Platform.
Prerequisites
- A non-root user for the Red Hat Enterprise Linux host, with sudo or other Ansible supported privilege escalation (sudo is recommended). This user is responsible for the installation of containerized Ansible Automation Platform.
- SSH public key authentication for the non-root user (if installing on remote hosts). For guidelines on setting up SSH public key authentication for the non-root user, see How to configure SSH public key authentication for passwordless login. If doing a self-contained local VM based installation, you can use ansible_connection=local.
- Internet access from the Red Hat Enterprise Linux host if you are using the default online installation method.
- The appropriate network ports are open if a firewall is in place. For more information about the ports to open, see Container topologies in Tested deployment models.
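For the SSH public key authentication prerequisite, a minimal setup from the installation host might look like the following sketch. The key path, comment, and target host are illustrative, not values required by the installer.

```shell
# Create the key material for the non-root installation user
# (add a passphrase if your security policy requires one)
mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"
ssh-keygen -t ed25519 -N '' -C "aap-install" -f "$HOME/.ssh/aap_install_key"

# Copy the public key to each remote host in the inventory
# (the user and host below are illustrative):
# ssh-copy-id -i "$HOME/.ssh/aap_install_key.pub" ansible@aap.example.org
```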
1.2.1. Ansible Automation Platform system requirements
Your system must meet the following minimum system requirements to install and run Red Hat Ansible Automation Platform.
Type | Description
---|---
Subscription | Valid Red Hat Ansible Automation Platform subscription
Operating system | Red Hat Enterprise Linux 9.2 or later minor versions of Red Hat Enterprise Linux 9
CPU architecture | x86_64, AArch64, s390x (IBM Z), ppc64le (IBM Power)
Ansible-core | Ansible-core version 2.16 or later
Browser | A currently supported version of Mozilla Firefox or Google Chrome
Database | PostgreSQL 15
Each virtual machine (VM) has the following system requirements:
Requirement | Minimum requirement |
---|---|
RAM | 16 GB |
CPUs | 4 |
Local disk | 60 GB |
Disk IOPS | 3000 |
If performing a bundled installation of the growth topology with hub_seed_collections=true, 32 GB of RAM is recommended. Note that with this configuration, install time increases, and seeding the collections alone can take 45 minutes or more to complete.
1.2.2. Database requirements
Ansible Automation Platform 2.5 can work with two varieties of database:
- Database installed with Ansible Automation Platform - This database consists of a PostgreSQL installation done as part of an Ansible Automation Platform installation using PostgreSQL packages provided by Red Hat.
- Customer provided or configured database - This is an external database that is provided by the customer, whether on bare metal, virtual machine, container, or cloud hosted service.
Ansible Automation Platform 2.5 uses PostgreSQL 15 and requires the customer provided (external) database to have ICU support.
Additional resources
- For more information about the scope of coverage for each variety of database, see Red Hat Ansible Automation Platform Database Scope of Coverage.
- For more information about setting up an external database, see Setting up a customer provided (external) database.
1.3. Preparing the Red Hat Enterprise Linux host for containerized installation
Containerized Ansible Automation Platform runs the component services as Podman based containers on top of a Red Hat Enterprise Linux host. Prepare the Red Hat Enterprise Linux host to ensure a successful installation.
Procedure
- Log in to the Red Hat Enterprise Linux host as your non-root user.
- Set a hostname that is a fully qualified domain name (FQDN):
sudo hostnamectl set-hostname <your_hostname>
- Register your Red Hat Enterprise Linux host with subscription-manager:
sudo subscription-manager register
- Run sudo dnf repolist to validate that only the BaseOS and AppStream repositories are set up and enabled on the host:
$ sudo dnf repolist
Updating Subscription Management repositories.
repo id                              repo name
rhel-9-for-x86_64-appstream-rpms     Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs)
rhel-9-for-x86_64-baseos-rpms        Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs)
- Ensure that only these repositories are available to the Red Hat Enterprise Linux host. For more information about managing custom repositories, see Managing custom software repositories.
- Ensure that the host has DNS configured and can resolve host names and IP addresses by using a fully qualified domain name (FQDN). This is essential to ensure services can talk to one another.
- Install ansible-core:
sudo dnf install -y ansible-core
- Optional: Install additional utilities that can be useful for troubleshooting purposes, for example wget, git-core, rsync, and vim:
sudo dnf install -y wget git-core rsync vim
- Optional: To have the installation program automatically pick up and apply your Ansible Automation Platform subscription manifest license, follow the steps in Obtaining a manifest file.
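As a quick sanity check of the hostname and DNS preparation steps above, you can confirm that the host reports an FQDN and that the name resolves. This is a sketch; the messages are illustrative.

```shell
# Print the fully qualified hostname and check that it resolves
fqdn="$(hostname -f)"
echo "FQDN: ${fqdn}"
if getent hosts "${fqdn}" > /dev/null; then
    echo "${fqdn} resolves"
else
    echo "WARNING: ${fqdn} does not resolve; check DNS or /etc/hosts"
fi
```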
Additional resources
- For more information about registering your RHEL system, see Getting Started with RHEL System Registration.
- For information about configuring unbound DNS, see Setting up an unbound DNS server.
- For information about configuring DNS using BIND, see Setting up and configuring a BIND DNS server.
- For more information about ansible-core, see Ansible Core Documentation.
1.4. Downloading Ansible Automation Platform
Choose the installation program you need based on the internet connectivity of your Red Hat Enterprise Linux environment, and download the installation program to your Red Hat Enterprise Linux host.
Prerequisites
- You are logged in to the Red Hat Enterprise Linux host as your non-root user.
Procedure
- Download the latest installer .tar file from the Ansible Automation Platform download page.
- For online installations: Ansible Automation Platform 2.5 Containerized Setup
- For offline or bundled installations: Ansible Automation Platform 2.5 Containerized Setup Bundle
- Copy the installation program .tar file and the optional manifest.zip file onto your Red Hat Enterprise Linux host.
- Decide where you want the installation program to reside on the file system. Installation related files are created under this location and require at least 10 GB for the initial installation.
- Unpack the installation program .tar file into your installation directory, and go to the unpacked directory.
To unpack the online installer:
$ tar xfvz ansible-automation-platform-containerized-setup-<version>.tar.gz
To unpack the offline or bundled installer:
$ tar xfvz ansible-automation-platform-containerized-setup-bundle-<version>-<arch_name>.tar.gz
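Before unpacking, you can optionally check the integrity of the downloaded archive against the SHA-256 checksum published on the download page. The commands below demonstrate the pattern with a stand-in file so they are self-contained; for the real archive you would compare against the published checksum value.

```shell
# Stand-in archive to demonstrate the verification pattern
tmp="$(mktemp -d)"
echo "example payload" > "${tmp}/setup.tar.gz"

# Record the checksum, then verify it; sha256sum prints "<file>: OK" on success
( cd "${tmp}" \
    && sha256sum setup.tar.gz > setup.tar.gz.sha256 \
    && sha256sum --check setup.tar.gz.sha256 )
```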
1.5. Configuring the inventory file
You can control the installation of Ansible Automation Platform with inventory files. Inventory files define the information needed to customize the installation. For example, host details, certificate details, and various component-specific settings.
Example inventory files are available in this document that you can copy and change to quickly get started.
Additionally, growth topology and enterprise topology inventory files are available in the following locations:
In the downloaded installation program package:
- The default inventory file, named inventory, is for the enterprise topology pattern.
- To deploy the growth topology (all-in-one) pattern, you need to copy over or use the inventory-growth file instead.
- In Container topologies in Tested deployment models.
To use the example inventory files, replace the < > placeholders with your specific variables, and update the host names.
Refer to the README.md file in the installation directory or Inventory file variables for more information about optional and required variables.
1.5.1. Inventory file for online installation for containerized growth topology (all-in-one)
Use the example inventory file to perform an online installation for the containerized growth topology (all-in-one):
# This is the Ansible Automation Platform installer inventory file intended for the container growth deployment topology.
# This inventory file expects to be run from the host where Ansible Automation Platform will be installed.
# Consult the Ansible Automation Platform product documentation about this topology's tested hardware configuration.
# https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/tested_deployment_models/container-topologies
#
# Consult the docs if you are unsure what to add
# For all optional variables consult the included README.md
# or the Ansible Automation Platform documentation:
# https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation

# This section is for your platform gateway hosts
# -----------------------------------------------------
[automationgateway]
aap.example.org

# This section is for your automation controller hosts
# -----------------------------------------------------
[automationcontroller]
aap.example.org

# This section is for your automation hub hosts
# -----------------------------------------------------
[automationhub]
aap.example.org

# This section is for your Event-Driven Ansible controller hosts
# -----------------------------------------------------
[automationeda]
aap.example.org

# This section is for the Ansible Automation Platform database
# -----------------------------------------------------
[database]
aap.example.org

[all:vars]
# Ansible
ansible_connection=local

# Common variables
# https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#ref-general-inventory-variables
# -----------------------------------------------------
postgresql_admin_username=postgres
postgresql_admin_password=<set your own>

registry_username=<your RHN username>
registry_password=<your RHN password>

redis_mode=standalone

# Platform gateway
# https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#ref-gateway-variables
# -----------------------------------------------------
gateway_admin_password=<set your own>
gateway_pg_host=aap.example.org
gateway_pg_password=<set your own>

# Automation controller
# https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#ref-controller-variables
# -----------------------------------------------------
controller_admin_password=<set your own>
controller_pg_host=aap.example.org
controller_pg_password=<set your own>
controller_percent_memory_capacity=0.5

# Automation hub
# https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#ref-hub-variables
# -----------------------------------------------------
hub_admin_password=<set your own>
hub_pg_host=aap.example.org
hub_pg_password=<set your own>
hub_seed_collections=false

# Event-Driven Ansible controller
# https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#event-driven-ansible-controller
# -----------------------------------------------------
eda_admin_password=<set your own>
eda_pg_host=aap.example.org
eda_pg_password=<set your own>
Additional resources
- For more information about the container growth topology (all-in-one), see Container growth topology in Tested deployment models.
1.5.2. Inventory file for online installation for containerized enterprise topology
Use the example inventory file to perform an online installation for the containerized enterprise topology:
# This is the Ansible Automation Platform enterprise installer inventory file
# Consult the docs if you are unsure what to add
# For all optional variables consult the included README.md
# or the Red Hat documentation:
# https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation

# This section is for your platform gateway hosts
# -----------------------------------------------------
[automationgateway]
gateway1.example.org
gateway2.example.org

# This section is for your automation controller hosts
# -----------------------------------------------------
[automationcontroller]
controller1.example.org
controller2.example.org

# This section is for your Ansible Automation Platform execution hosts
# -----------------------------------------------------
[execution_nodes]
hop1.example.org receptor_type='hop'
exec1.example.org
exec2.example.org

# This section is for your automation hub hosts
# -----------------------------------------------------
[automationhub]
hub1.example.org
hub2.example.org

# This section is for your Event-Driven Ansible controller hosts
# -----------------------------------------------------
[automationeda]
eda1.example.org
eda2.example.org

[redis]
gateway1.example.org
gateway2.example.org
hub1.example.org
hub2.example.org
eda1.example.org
eda2.example.org

[all:vars]
# Common variables
# https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#ref-general-inventory-variables
# -----------------------------------------------------
postgresql_admin_username=<set your own>
postgresql_admin_password=<set your own>

registry_username=<your RHN username>
registry_password=<your RHN password>

# Platform gateway
# https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#ref-gateway-variables
# -----------------------------------------------------
gateway_admin_password=<set your own>
gateway_pg_host=externaldb.example.org
gateway_pg_database=<set your own>
gateway_pg_username=<set your own>
gateway_pg_password=<set your own>

# Automation controller
# https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#ref-controller-variables
# -----------------------------------------------------
controller_admin_password=<set your own>
controller_pg_host=externaldb.example.org
controller_pg_database=<set your own>
controller_pg_username=<set your own>
controller_pg_password=<set your own>

# Automation hub
# https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#ref-hub-variables
# -----------------------------------------------------
hub_admin_password=<set your own>
hub_pg_host=externaldb.example.org
hub_pg_database=<set your own>
hub_pg_username=<set your own>
hub_pg_password=<set your own>

# Event-Driven Ansible controller
# https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/containerized_installation/appendix-inventory-files-vars#event-driven-ansible-controller
# -----------------------------------------------------
eda_admin_password=<set your own>
eda_pg_host=externaldb.example.org
eda_pg_database=<set your own>
eda_pg_username=<set your own>
eda_pg_password=<set your own>
Additional resources
- For more information about the container enterprise topology, see Container enterprise topology in Tested deployment models.
- For more information about Redis, see Caching and queueing system in Planning your installation.
1.5.3. Performing an offline or bundled installation
To perform an offline installation, add the following under the [all:vars] group:
bundle_install=true
# The bundle directory must include /bundle in the path
bundle_dir=<full_path_to_the_bundle_directory>
1.5.4. Setting registry_username and registry_password
When using the registry_username and registry_password variables for an online non-bundled installation, you must create a new registry service account.
Registry service accounts are named tokens that can be used in environments where credentials will be shared, such as deployment systems.
Procedure
- Go to https://access.redhat.com/terms-based-registry/accounts.
- On the Registry Service Accounts page, click New Service Account.
- Enter a name for the account using only the allowed characters.
- Optionally enter a description for the account.
- Click Create.
- Find the created account in the list by searching for your name in the search field.
- Click the name of the account that you created.
Alternatively, if you know the name of your token, you can go directly to the page by entering the URL:
https://access.redhat.com/terms-based-registry/token/<name-of-your-token>
A token page opens, displaying a generated username (different from the account name) and a token.
- If no token is displayed, click Regenerate Token. You can also click this to generate a new username and token.
- Copy the username (for example "1234567|testuser") and use it to set the variable registry_username.
- Copy the token and use it to set the variable registry_password.
1.6. Advanced configuration options
Advanced configuration options, such as external database set up and the use of custom TLS certs, are available for more complex deployments of containerized Ansible Automation Platform.
If you are not using these advanced configuration options, go to Installing containerized Ansible Automation Platform to continue with your installation.
1.6.1. Adding a safe plugin variable to Event-Driven Ansible controller
When using redhat.insights_eda or similar plugins to run rulebook activations in Event-Driven Ansible controller, you must add a safe plugin variable to a directory in Ansible Automation Platform. This ensures connection between Event-Driven Ansible controller and the source plugin, and displays port mappings correctly.
Procedure
- Create a directory for the safe plugin variable:
mkdir -p ./group_vars/automationeda
- Create a file within that directory for your new setting (for example, touch ./group_vars/automationeda/custom.yml).
- Add the variable eda_safe_plugins with a list of plugins to enable. For example:
eda_safe_plugins: ['ansible.eda.webhook', 'ansible.eda.alertmanager']
1.6.2. Adding execution nodes
The containerized installer can deploy remote execution nodes. The execution_nodes group in the inventory file handles this:
[execution_nodes]
<fqdn_of_your_execution_host>
An execution node is by default configured as an execution type running on port 27199 (TCP). This can be changed by the following variables:
receptor_port=27199
receptor_protocol=tcp
receptor_type=hop
The receptor_type value can be either execution or hop, while the receptor_protocol is either tcp or udp. By default, the nodes in the execution_nodes group are added as peers for the controller node. However, you can change the peers configuration by using the receptor_peers variable.
[execution_nodes]
fqdn_of_your_execution_host
fqdn_of_your_hop_host receptor_type=hop receptor_peers='["<fqdn_of_your_execution_host>"]'
1.6.3. Configuring Amazon S3 storage for automation hub
Amazon S3 storage is a type of object storage that is supported in containerized installations. When using an AWS S3 storage backend, set hub_storage_backend to s3. The AWS S3 bucket must exist before running the installation program.
The variables you can use to configure this storage backend type in your inventory file are:
- hub_s3_access_key
- hub_s3_secret_key
- hub_s3_bucket_name
- hub_s3_extra_settings

Extra parameters can be passed through the hub_s3_extra_settings dictionary.
For example, you can set the following parameters:
hub_s3_extra_settings:
  AWS_S3_MAX_MEMORY_SIZE: 4096
  AWS_S3_REGION_NAME: eu-central-1
  AWS_S3_USE_SSL: True
For more information about the list of parameters, see django-storages documentation - Amazon S3.
1.6.4. Configuring Azure Blob Storage for automation hub
Azure Blob storage is a type of object storage that is supported in containerized installations. When using an Azure blob storage backend, set hub_storage_backend to azure. The Azure container must exist before running the installation program.
The variables you can use to configure this storage backend type in your inventory file are:
- hub_azure_account_key
- hub_azure_account_name
- hub_azure_container
- hub_azure_extra_settings

Extra parameters can be passed through the hub_azure_extra_settings dictionary.
For example, you can set the following parameters:
hub_azure_extra_settings:
  AZURE_LOCATION: foo
  AZURE_SSL: True
  AZURE_URL_EXPIRATION_SECS: 60
For more information about the list of parameters, see django-storages documentation - Azure Storage.
1.6.5. Configuring Network File System (NFS) storage for automation hub
NFS is a type of shared storage that is supported in containerized installations. Shared storage is required when installing more than one instance of automation hub with a file storage backend. When installing a single instance of automation hub, shared storage is optional.
- To configure shared storage for automation hub, set the following variable in the inventory file, ensuring your NFS share has read, write, and execute permissions:
hub_shared_data_path=<path_to_nfs_share>
- To change the mount options for your NFS share, use the hub_shared_data_mount_opts variable. This variable is optional, and its default value is rw,sync,hard.
1.6.6. Configuring a HAProxy load balancer
To configure a HAProxy load balancer in front of platform gateway with a custom CA certificate, set the following inventory file variables under the [all:vars] group:
custom_ca_cert=<path_to_cert_crt>
gateway_main_url=<https://load_balancer_url>
HAProxy SSL passthrough mode is not supported with platform gateway.
1.6.7. Enabling automation content collection and container signing
Automation content signing is disabled by default. To enable it, the following installation variables are required in the inventory file:
# Collection signing
hub_collection_signing=true
hub_collection_signing_key=<full_path_to_collection_gpg_key>

# Container signing
hub_container_signing=true
hub_container_signing_key=<full_path_to_container_gpg_key>
The following variables are required if the keys are protected by a passphrase:
# Collection signing
hub_collection_signing_pass=<gpg_key_passphrase>

# Container signing
hub_container_signing_pass=<gpg_key_passphrase>
The hub_collection_signing_key and hub_container_signing_key variables require the keys to be set up before running the installation.
Automation content signing currently only supports GnuPG (GPG) based signature keys. For more information about GPG, see the GnuPG man page.
The choice of algorithm and cipher is the responsibility of the customer.
Procedure
On a RHEL 9 server, run the following command to create a new key pair for collection signing:
gpg --gen-key
Enter your information for "Real name" and "Email address":
Example output:
gpg --gen-key
gpg (GnuPG) 2.3.3; Copyright (C) 2021 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Note: Use "gpg --full-generate-key" for a full featured key generation dialog.

GnuPG needs to construct a user ID to identify your key.

Real name: Joe Bloggs
Email address: jbloggs@example.com
You selected this USER-ID:
    "Joe Bloggs <jbloggs@example.com>"

Change (N)ame, (E)mail, or (O)kay/(Q)uit? O
If this fails, your environment does not have the necessary prerequisite packages installed for GPG. Install the necessary packages to proceed.
- A dialog box opens and prompts you for a passphrase. This is optional but recommended.
The keys are then generated, and produce output similar to the following:
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
gpg: key 022E4FBFB650F1C4 marked as ultimately trusted
gpg: revocation certificate stored as '/home/aapuser/.gnupg/openpgp-revocs.d/F001B037976969DD3E17A829022E4FBFB650F1C4.rev'
public and secret key created and signed.

pub   rsa3072 2024-10-25 [SC] [expires: 2026-10-25]
      F001B037976969DD3E17A829022E4FBFB650F1C4
uid                      Joe Bloggs <jbloggs@example.com>
sub   rsa3072 2024-10-25 [E] [expires: 2026-10-25]
Note the expiry date, which you can set based on company standards and needs.
You can view all of your GPG keys by running the following command:
gpg --list-secret-keys --keyid-format=long
To export the public key run the following command:
gpg --export -a --output collection-signing-key.pub <email_address_used_to_generate_key>
To export the private key run the following command:
gpg -a --export-secret-keys <email_address_used_to_generate_key> > collection-signing-key.priv
- If a passphrase is detected, you will be prompted to enter the passphrase.
To view the private key file contents, run the following command:
cat collection-signing-key.priv
Example output:
-----BEGIN PGP PRIVATE KEY BLOCK----- lQWFBGcbN14BDADTg5BsZGbSGMHypUJMuzmIffzzz4LULrZA8L/I616lzpBHJvEs sSN6KuKY1TcIwIDCCa/U5Obm46kurpP2Y+vNA1YSEtMJoSeHeamWMDd99f49ItBp <snippet> j920hRy/3wJGRDBMFa4mlQg= =uYEF -----END PGP PRIVATE KEY BLOCK-----
- Repeat steps 1 to 9 to create a key pair for container signing.
Add the following variables to the inventory file and run the installation to create the signing services:
# Collection signing
hub_collection_signing=true
hub_collection_signing_key=/home/aapuser/aap/ansible-automation-platform-containerized-setup-2.5-2/collection-signing-key.priv
# This variable is required if the key is protected by a passphrase
hub_collection_signing_pass=<password>

# Container signing
hub_container_signing=true
hub_container_signing_key=/home/aapuser/aap/ansible-automation-platform-containerized-setup-2.5-2/container-signing-key.priv
# This variable is required if the key is protected by a passphrase
hub_container_signing_pass=<password>
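Where the interactive gpg --gen-key dialog is not practical, for example when preparing signing keys from a script, GnuPG batch mode can produce an equivalent key pair. This is a sketch: the identity, algorithm, expiry, and file names are illustrative, and the empty passphrase used here creates an unprotected key.

```shell
# Use a throwaway GNUPGHOME so this example does not touch a real keyring
export GNUPGHOME="$(mktemp -d)"
chmod 700 "${GNUPGHOME}"

# Generate an RSA 3072 signing key non-interactively, valid for two years
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key 'Joe Bloggs <jbloggs@example.com>' rsa3072 sign 2y

# Export the public and private keys, mirroring the interactive procedure
gpg --export -a --output collection-signing-key.pub jbloggs@example.com
gpg --batch --pinentry-mode loopback --passphrase '' -a \
    --export-secret-keys jbloggs@example.com > collection-signing-key.priv
```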
1.6.7.1. Additional resources
- For more information on working with signed containers following an installation, see Working with signed containers in the Managing automation content guide.
1.6.8. Setting up a customer provided (external) database
- When using an external database with Ansible Automation Platform, you must create and maintain that database. Ensure that you clear your external database when uninstalling Ansible Automation Platform.
- Red Hat Ansible Automation Platform 2.5 uses PostgreSQL 15 and requires the customer provided (external) database to have ICU support.
- During configuration of an external database, you must check the external database coverage. For more information, see Red Hat Ansible Automation Platform Database Scope of Coverage.
There are two possible scenarios for setting up an external database:
- An external database with PostgreSQL admin credentials
- An external database without PostgreSQL admin credentials
1.6.8.1. Setting up an external database with PostgreSQL admin credentials
If you have PostgreSQL admin credentials, you can supply them in the inventory file, and the installation program creates the PostgreSQL users and databases for each component for you. The PostgreSQL admin account must have SUPERUSER privileges.
To configure the PostgreSQL admin credentials, add the following variables to the inventory file under the [all:vars] group:
postgresql_admin_username=<set your own>
postgresql_admin_password=<set your own>
1.6.8.2. Setting up an external database without PostgreSQL admin credentials
If you do not have PostgreSQL admin credentials, you must create the PostgreSQL users and databases for each component (platform gateway, automation controller, automation hub, and Event-Driven Ansible) before running the installation program.
Procedure
- Connect to a PostgreSQL compliant database server with a user that has SUPERUSER privileges:
# psql -h <hostname> -U <username> -p <port_number>
For example:
# psql -h db.example.com -U superuser -p 5432
- Create the user with a password and ensure the CREATEDB role is assigned to the user. For more information, see Database Roles.
CREATE USER <username> WITH PASSWORD '<password>' CREATEDB;
For example:
CREATE USER hub_user WITH PASSWORD '<password>' CREATEDB;
Create the database and add the user you created as the owner.
CREATE DATABASE <database_name> OWNER <username>;
For example:
CREATE DATABASE hub_database OWNER hub_user;
- When you have created the PostgreSQL users and databases for each component, supply them in the inventory file under the [all:vars] group:

# Platform gateway
gateway_pg_host=aap.example.org
gateway_pg_database=<set your own>
gateway_pg_username=<set your own>
gateway_pg_password=<set your own>

# Automation controller
controller_pg_host=aap.example.org
controller_pg_database=<set your own>
controller_pg_username=<set your own>
controller_pg_password=<set your own>

# Automation hub
hub_pg_host=aap.example.org
hub_pg_database=<set your own>
hub_pg_username=<set your own>
hub_pg_password=<set your own>

# Event-Driven Ansible
eda_pg_host=aap.example.org
eda_pg_database=<set your own>
eda_pg_username=<set your own>
eda_pg_password=<set your own>
1.6.8.3. Enabling the hstore extension for the automation hub PostgreSQL database
Added in Ansible Automation Platform 2.5, the database migration script uses hstore fields to store information, therefore the hstore extension must be enabled in the automation hub PostgreSQL database.
This process is automatic when using the Ansible Automation Platform installer and a managed PostgreSQL server.
If the PostgreSQL database is external, you must enable the hstore extension in the automation hub PostgreSQL database manually before installation.
If the hstore extension is not enabled before installation, a failure is raised during database migration.
Procedure
- Check if the extension is available on the PostgreSQL server (automation hub database):
$ psql -d <automation hub database> -c "SELECT * FROM pg_available_extensions WHERE name='hstore'"
Where the default value for <automation hub database> is automationhub.
Example output with hstore available:
  name  | default_version | installed_version |                      comment
--------+-----------------+-------------------+---------------------------------------------------
 hstore | 1.7             |                   | data type for storing sets of (key, value) pairs
(1 row)
Example output with hstore not available:
 name | default_version | installed_version | comment
------+-----------------+-------------------+---------
(0 rows)
- On a RHEL based server, the hstore extension is included in the postgresql-contrib RPM package, which is not installed automatically when installing the PostgreSQL server RPM package. To install the RPM package, use the following command:
dnf install postgresql-contrib
Load the hstore PostgreSQL extension into the automation hub database with the following command:
$ psql -d <automation hub database> -c "CREATE EXTENSION hstore;"
In the following output, the installed_version field lists the hstore extension used, indicating that hstore is enabled.
  name  | default_version | installed_version |                     comment
--------+-----------------+-------------------+---------------------------------------------------
 hstore | 1.7             | 1.7               | data type for storing sets of (key, value) pairs
(1 row)
1.6.8.4. Optional: enabling mutual TLS (mTLS) authentication
mTLS authentication is disabled by default. To configure each component’s database with mTLS authentication, add the following variables to your inventory file under the [all:vars] group and ensure each component has a different TLS certificate and key:
# Platform gateway
gateway_pg_cert_auth=true
gateway_pg_tls_cert=/path/to/gateway.cert
gateway_pg_tls_key=/path/to/gateway.key
gateway_pg_sslmode=verify-full

# Automation controller
controller_pg_cert_auth=true
controller_pg_tls_cert=/path/to/awx.cert
controller_pg_tls_key=/path/to/awx.key
controller_pg_sslmode=verify-full

# Automation hub
hub_pg_cert_auth=true
hub_pg_tls_cert=/path/to/pulp.cert
hub_pg_tls_key=/path/to/pulp.key
hub_pg_sslmode=verify-full

# Event-Driven Ansible
eda_pg_cert_auth=true
eda_pg_tls_cert=/path/to/eda.cert
eda_pg_tls_key=/path/to/eda.key
eda_pg_sslmode=verify-full
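The component certificate and key files referenced by these variables must exist before you run the installer. As a minimal sketch of producing one such pair with openssl, assuming a throwaway example CA (all file names and subject names here are illustrative placeholders, not installer defaults):

```shell
set -e
# Create an example CA (in practice, use your organization's CA)
openssl req -x509 -newkey rsa:4096 -nodes -days 1 \
  -keyout ca.key -out ca.cert -subj "/CN=example-ca"
# Issue a client certificate for the gateway component
openssl req -newkey rsa:4096 -nodes \
  -keyout gateway.key -out gateway.csr -subj "/CN=gateway"
openssl x509 -req -days 1 -in gateway.csr \
  -CA ca.cert -CAkey ca.key -CAcreateserial -out gateway.cert
```

Note that with *_pg_sslmode=verify-full, the client also validates that the PostgreSQL server certificate matches the host name in *_pg_host, and PostgreSQL typically maps the client certificate CN to a database user (directly or through pg_ident.conf), so plan your certificate subjects accordingly.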
1.6.9. Using custom TLS certificates
By default, the installation program generates self-signed TLS certificates and keys for all Ansible Automation Platform services.
To use your own TLS certificates or a custom Certificate Authority (CA), you can set specific variables in your inventory file.
Option 1: Use a custom CA to generate all TLS certificates
To use a custom Certificate Authority (CA) to generate TLS certificates for all Ansible Automation Platform services, set the following variables in your inventory file:
ca_tls_cert=<path_to_ca_tls_certificate>
ca_tls_key=<path_to_ca_tls_key>
Use this method when you want Ansible Automation Platform to generate all of the certificates, but you want them signed by a custom CA rather than the default self-signed certificates.
Option 2: Provide custom TLS certificates for each service
To manually provide TLS certificates for each individual service (for example, automation controller, automation hub, and Event-Driven Ansible), set the following variables in your inventory file:
# Platform gateway
gateway_tls_cert=<path_to_tls_certificate>
gateway_tls_key=<path_to_tls_key>
gateway_pg_tls_cert=<path_to_tls_certificate>
gateway_pg_tls_key=<path_to_tls_key>
gateway_redis_tls_cert=<path_to_tls_certificate>
gateway_redis_tls_key=<path_to_tls_key>

# Automation controller
controller_tls_cert=<path_to_tls_certificate>
controller_tls_key=<path_to_tls_key>
controller_pg_tls_cert=<path_to_tls_certificate>
controller_pg_tls_key=<path_to_tls_key>

# Automation hub
hub_tls_cert=<path_to_tls_certificate>
hub_tls_key=<path_to_tls_key>
hub_pg_tls_cert=<path_to_tls_certificate>
hub_pg_tls_key=<path_to_tls_key>

# Event-Driven Ansible
eda_tls_cert=<path_to_tls_certificate>
eda_tls_key=<path_to_tls_key>
eda_pg_tls_cert=<path_to_tls_certificate>
eda_pg_tls_key=<path_to_tls_key>
eda_redis_tls_cert=<path_to_tls_certificate>
eda_redis_tls_key=<path_to_tls_key>

# PostgreSQL
postgresql_tls_cert=<path_to_tls_certificate>
postgresql_tls_key=<path_to_tls_key>

# Receptor
receptor_tls_cert=<path_to_tls_certificate>
receptor_tls_key=<path_to_tls_key>
Use this method if your organization manages TLS certificates outside of Ansible Automation Platform and requires manual provisioning.
Providing a custom CA certificate
If any of the TLS certificates you manually provided are signed by a custom CA, you must specify the CA certificate by using the following variable in your inventory file:
custom_ca_cert=<path_to_custom_ca_certificate>
If you have more than one CA certificate, combine them into a single file and reference the combined certificate with the custom_ca_cert variable.
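Before running the installer, you can confirm that a manually provided certificate actually chains to the CA file you pass as custom_ca_cert. The following sketch generates an example CA and leaf certificate purely to demonstrate the check; all file names and subjects are placeholders for your real files:

```shell
set -e
# Example CA and a leaf certificate signed by it (stand-ins for your real files)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout demo-ca.key -out demo-ca.cert -subj "/CN=demo-ca"
openssl req -newkey rsa:2048 -nodes \
  -keyout demo-leaf.key -out demo-leaf.csr -subj "/CN=aap.example.org"
openssl x509 -req -days 1 -in demo-leaf.csr \
  -CA demo-ca.cert -CAkey demo-ca.key -CAcreateserial -out demo-leaf.cert
# The actual check: does the certificate verify against the CA bundle?
openssl verify -CAfile demo-ca.cert demo-leaf.cert
# prints: demo-leaf.cert: OK
```

Run the same `openssl verify -CAfile` check against each certificate you plan to supply, using your combined CA file.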
1.6.10. Using custom Receptor signing keys
Receptor signing is enabled by default unless receptor_disable_signing=true is set, and an RSA key pair (public and private) is generated by the installation program. However, you can set custom RSA public and private keys by using the following variables:
receptor_signing_private_key=<full_path_to_private_key>
receptor_signing_public_key=<full_path_to_public_key>
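If you generate the key pair yourself, a standard RSA key pair in PEM format can be produced with openssl; the output file names below are examples, not values the installer expects:

```shell
# Generate a 4096-bit RSA private key, then derive the matching public key
openssl genrsa -out receptor-signing-private.pem 4096
openssl rsa -in receptor-signing-private.pem -pubout -out receptor-signing-public.pem
```

Point receptor_signing_private_key and receptor_signing_public_key at the resulting files using full paths.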
1.7. Installing containerized Ansible Automation Platform
After you prepare the Red Hat Enterprise Linux host, download Ansible Automation Platform, and configure the inventory file, run the install playbook to install containerized Ansible Automation Platform.
Prerequisites
You have done the following:
- Prepared the Red Hat Enterprise Linux host
- Downloaded Ansible Automation Platform
- Configured the inventory file
- Logged in to the Red Hat Enterprise Linux host as your non-root user
Procedure
- Go to the installation directory on your Red Hat Enterprise Linux host.
- Run the install playbook:
ansible-playbook -i <inventory_file_name> ansible.containerized_installer.install
For example:
ansible-playbook -i inventory ansible.containerized_installer.install
You can add additional parameters to the installation command as needed:
ansible-playbook -i <inventory_file_name> -e @<vault_file_name> --ask-vault-pass -K -v ansible.containerized_installer.install
For example:
ansible-playbook -i inventory -e @vault.yml --ask-vault-pass -K -v ansible.containerized_installer.install
- -i <inventory_file_name>: The inventory file to use for the installation.
- -e @<vault_file_name> --ask-vault-pass: (Optional) If you are using a vault to store sensitive variables, add this to the installation command.
- -K: (Optional) If your privilege escalation requires you to enter a password, add this to the installation command. You are then prompted for the BECOME password.
- -v: (Optional) You can use increasing verbosity, up to 4 v’s (-vvvv), to see the details of the installation process. However, this can significantly increase installation time, so use it only as needed or requested by Red Hat support.
The installation of containerized Ansible Automation Platform begins.
Verification
- After the installation completes, verify that you can access the platform UI, which is available by default at the following URL:
https://<gateway_node>:443
Log in as the admin user with the credentials you created for gateway_admin_username and gateway_admin_password.
The default ports and protocols used for Ansible Automation Platform are 80 (HTTP) and 443 (HTTPS). You can customize the ports with the following variables:
envoy_http_port=80
envoy_https_port=443
If you want to disable HTTPS, set envoy_disable_https to true:
envoy_disable_https: true
Additional resources
- For more information about privilege escalation, see Understanding privilege escalation: become.
- For more information about securing sensitive variables, see Sensitive variables in the installation inventory in Hardening and compliance.
- For more information about post-installation instructions, see Getting started with Ansible Automation Platform.
1.8. Updating containerized Ansible Automation Platform
Perform a patch update for a container-based installation of Ansible Automation Platform from 2.5 to 2.5.x.
Upgrades from 2.4 Containerized Ansible Automation Platform Tech Preview to 2.5 Containerized Ansible Automation Platform are not supported at this time.
Prerequisites
You have done the following:
- Reviewed the release notes for the associated patch release. For more information, see the Ansible Automation Platform Release notes.
- Created a backup of your Ansible Automation Platform deployment. For more information, see Backing up container-based Ansible Automation Platform.
Procedure
Download the latest version of the containerized installer from the Ansible Automation Platform download page:
- For online installations: Ansible Automation Platform 2.5 Containerized Setup
- For offline or bundled installations: Ansible Automation Platform 2.5 Containerized Setup Bundle
- Copy the installation program .tar file onto your Red Hat Enterprise Linux host.
- Decide where you want the installation program to reside on the filesystem. Installation related files will be created under this location and require at least 10 GB for the initial installation.
- Unpack the installation program .tar file into your installation directory, and go to the unpacked directory.
To unpack the online installer:
$ tar xfvz ansible-automation-platform-containerized-setup-<version>.tar.gz
To unpack the offline or bundled installer:
$ tar xfvz ansible-automation-platform-containerized-setup-bundle-<version>-<arch name>.tar.gz
- Edit the inventory file to match your required configuration. You can keep the same parameters from your existing Ansible Automation Platform deployment or you can change the parameters to match any modifications to your environment.
- Run the install playbook:
$ ansible-playbook -i inventory ansible.containerized_installer.install
- If your privilege escalation requires a password to be entered, append -K to the command. You will then be prompted for the BECOME password.
- You can use increasing verbosity, up to 4 v’s (-vvvv), to see the details of the installation process. However, this can significantly increase installation time, so use it only as needed or requested by Red Hat support.
The update begins.
1.9. Backing up containerized Ansible Automation Platform
Perform a backup of your container-based installation of Ansible Automation Platform.
Procedure
- Go to the Red Hat Ansible Automation Platform installation directory on your Red Hat Enterprise Linux host.
Run the backup playbook:
$ ansible-playbook -i <path_to_inventory> ansible.containerized_installer.backup
This backs up the important data deployed by the containerized installer such as:
- PostgreSQL databases
- Configuration files
- Data files
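Because the backup is an ordinary playbook run, you can schedule it with any scheduler. As a hypothetical example, a crontab entry that takes a weekly backup from the installation directory (the paths shown are assumptions, not installer defaults):

```
# Run the backup playbook every Sunday at 02:00 (example paths)
0 2 * * 0 cd /home/ansible/aap-setup && ansible-playbook -i inventory ansible.containerized_installer.backup
```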
By default, the backup directory is set to ~/backups. You can change this by using the backup_dir variable in your inventory file.
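For example, to write backups to a different location, you could set the variable under the [all:vars] group of your inventory file (the path shown is an assumption):

```
[all:vars]
backup_dir=/srv/aap-backups
```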
1.10. Restoring containerized Ansible Automation Platform
Restore your container-based installation of Ansible Automation Platform from a backup.
Procedure
- Go to the Red Hat Ansible Automation Platform installation directory on your Red Hat Enterprise Linux host.
Run the restore playbook:
$ ansible-playbook -i <path_to_inventory> ansible.containerized_installer.restore
This restores the important data deployed by the containerized installer such as:
- PostgreSQL databases
- Configuration files
- Data files
By default, the backup directory is set to ~/backups. You can change this by using the backup_dir variable in your inventory file.
1.11. Uninstalling containerized Ansible Automation Platform
When performing a reinstall following an uninstall that preserves the databases, you must use the previously generated Ansible Automation Platform secret key values to access the preserved databases.
Before performing an uninstall, collect the existing secret keys by running the following command:
$ podman secret inspect --showsecret <secret_key_variable> | jq -r .[].SecretData
For example:
$ podman secret inspect --showsecret controller_secret_key | jq -r .[].SecretData
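The jq filter in the command above extracts only the SecretData field from the JSON that podman secret inspect returns. A minimal sketch of the same filter run against sample JSON of that shape (the ID and secret values are made up):

```shell
# Simulate the inspect output shape and pull out the secret value with jq
json='[{"ID":"0f1e2d3c","SecretData":"examplesecretvalue"}]'
printf '%s\n' "$json" | jq -r '.[].SecretData'
# prints: examplesecretvalue
```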
For more information about the *_secret_key variables, see Inventory file variables.
To uninstall a containerized deployment, run the uninstall playbook:
$ ansible-playbook -i inventory ansible.containerized_installer.uninstall
This stops all systemd units and containers and then deletes all resources used by the containerized installer such as:
- configuration and data directories and files
- systemd unit files
- Podman containers and images
- RPM packages
To keep container images, set the container_keep_images parameter to true:
$ ansible-playbook -i inventory ansible.containerized_installer.uninstall -e container_keep_images=true
To keep PostgreSQL databases, set the postgresql_keep_databases parameter to true:
$ ansible-playbook -i inventory ansible.containerized_installer.uninstall -e postgresql_keep_databases=true
1.12. Reinstalling containerized Ansible Automation Platform
To reinstall a containerized deployment after uninstalling and preserving the database, run the install playbook and include the existing secret key value:
$ ansible-playbook -i inventory ansible.containerized_installer.install -e controller_secret_key=<secret_key_value>
For more information about the *_secret_key variables, see Inventory file variables.