Chapter 2. Ansible Automation Platform containerized installation
Ansible Automation Platform is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments.
This guide helps you understand the installation requirements and processes behind the containerized version of Ansible Automation Platform.
Upgrades from 2.4 Containerized Ansible Automation Platform Tech Preview to 2.5 Containerized Ansible Automation Platform are not supported.
2.1. Tested deployment models
Red Hat tests Ansible Automation Platform 2.5 with a defined set of topologies to give you opinionated deployment options. The supported topologies include infrastructure topology diagrams, tested system configurations, example inventory files, and network ports information.
For containerized Ansible Automation Platform, there are two infrastructure topology shapes:
- Growth - (All-in-one) Intended for organizations that are getting started with Ansible Automation Platform. This topology allows for smaller footprint deployments.
- Enterprise - Intended for organizations that require Ansible Automation Platform deployments to have redundancy or higher compute for large volumes of automation. This is a more future-proofed, scaled-out architecture.
For more information about the tested deployment topologies for containerized Ansible Automation Platform, see Container topologies in Tested deployment models.
2.2. System requirements
Use this information when planning your installation of containerized Ansible Automation Platform.
2.2.1. Prerequisites
- Ensure a dedicated non-root user is configured on the Red Hat Enterprise Linux host.
  - This user requires sudo or other Ansible supported privilege escalation (sudo is recommended) to perform administrative tasks during the installation.
  - This user is responsible for the installation of containerized Ansible Automation Platform.
  - This user is also the service account for the containers running Ansible Automation Platform.
- For managed nodes, ensure a dedicated user is configured on each node. Ansible Automation Platform connects as this user to run tasks on the node. For more information about configuring a dedicated user on each node, see Preparing the managed nodes for containerized installation.
- For remote host installations, ensure SSH public key authentication is configured for the non-root user. For guidelines on setting up SSH public key authentication for the non-root user, see How to configure SSH public key authentication for passwordless login.
- Ensure internet access is available from the Red Hat Enterprise Linux host if you are using the default online installation method.
- Ensure the appropriate network ports are open if a firewall is in place. For more information about the ports to open, see Container topologies in Tested deployment models.
Storing container images on an NFS share is not supported by Podman. To use an NFS share for the user home directory, set up the Podman storage backend path outside of the NFS share. For more information, see Rootless Podman and NFS.
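For illustration, one way to point the rootless Podman storage backend at local disk when the user home directory is on an NFS share is a per-user storage.conf; the path shown is an example, not a requirement:

# ~/.config/containers/storage.conf
[storage]
driver = "overlay"
# graphroot must live on local (non-NFS) disk
graphroot = "/local/storage/containers/<username>"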
2.2.2. Ansible Automation Platform system requirements
Your system must meet the following minimum system requirements to install and run Red Hat Ansible Automation Platform.
| Type | Description | Notes |
|---|---|---|
| Subscription | Valid Red Hat Ansible Automation Platform subscription | |
| Operating system | Red Hat Enterprise Linux 9 or Red Hat Enterprise Linux 10 | |
| CPU architecture | x86_64, AArch64, s390x (IBM Z), ppc64le (IBM Power) | |
| Browser | A currently supported version of Mozilla Firefox or Google Chrome. | |
| Database | PostgreSQL 15 | External (customer supported) databases require International Components for Unicode (ICU) support. |
Each virtual machine (VM) has the following system requirements:
| Requirement | Minimum requirement |
|---|---|
| RAM | 16 GB |
| CPUs | 4 |
| Local disk | See the detailed disk requirements in Tested deployment models. |
| Disk IOPS | 3000 |
If performing a bundled installation of the growth topology with hub_seed_collections=true, then 32 GB RAM is recommended. With this configuration, seeding the collections alone can take 45 minutes or more, which increases the overall installation time.
2.2.3. Database requirements
Ansible Automation Platform can work with two varieties of database:
- Database installed with Ansible Automation Platform - This database consists of a PostgreSQL installation done as part of an Ansible Automation Platform installation using PostgreSQL packages provided by Red Hat.
- Customer provided or configured database - This is an external database that is provided by the customer, whether on bare metal, virtual machine, container, or cloud hosted service.
Ansible Automation Platform requires customer provided (external) databases to have International Components for Unicode (ICU) support.
2.3. Preparing the Red Hat Enterprise Linux host for containerized installation
Containerized Ansible Automation Platform runs the component services as Podman based containers on top of a Red Hat Enterprise Linux host. Prepare the Red Hat Enterprise Linux host to ensure a successful installation.
Procedure
- Log in to the Red Hat Enterprise Linux host as your non-root user.
- Ensure the hostname associated with your host is set as a fully qualified domain name (FQDN). To check the hostname associated with your host, run the following command:

  hostname -f

  Example output:

  aap.example.org

  If the hostname is not an FQDN, you can set it with the following command:

  sudo hostnamectl set-hostname <your_hostname>
- Register your Red Hat Enterprise Linux host with subscription-manager:

  sudo subscription-manager register

- Verify that only the BaseOS and AppStream repositories are enabled on the host:

  sudo dnf repolist

  Example output for RHEL 9:

  Updating Subscription Management repositories.
  repo id                            repo name
  rhel-9-for-x86_64-appstream-rpms   Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs)
  rhel-9-for-x86_64-baseos-rpms      Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs)

  Example output for RHEL 10:

  Updating Subscription Management repositories.
  repo id                             repo name
  rhel-10-for-x86_64-appstream-rpms   Red Hat Enterprise Linux 10 for x86_64 - AppStream (RPMs)
  rhel-10-for-x86_64-baseos-rpms      Red Hat Enterprise Linux 10 for x86_64 - BaseOS (RPMs)

- For disconnected installations, follow the steps in Obtaining and configuring RPM source dependencies to access these repositories.
- Ensure the host can resolve host names and IP addresses using DNS. This is essential to ensure services can talk to one another.
- Install ansible-core:

  sudo dnf install -y ansible-core

- Optional: Install additional utilities that can be useful for troubleshooting purposes, for example wget, git-core, rsync, and vim:

  sudo dnf install -y wget git-core rsync vim

- Optional: To have the installation program automatically pick up and apply your Ansible Automation Platform subscription manifest license, follow the steps in Obtaining a manifest file.
2.4. Preparing the managed nodes for containerized installation
Managed nodes, also referred to as hosts, are the devices that Ansible Automation Platform is configured to manage.
To ensure a consistent and secure setup of containerized Ansible Automation Platform, create a dedicated user on each host. Ansible Automation Platform connects as this user to run tasks on the host.
Once configured, you can define the dedicated user for each host by adding ansible_user=<username> in your inventory file, for example: aap.example.org ansible_user=aap.
Complete the following steps for each host:
Procedure
- Log in to the host as the root user.
- Create a new user. Replace <username> with the username you want, for example aap:

  sudo adduser <username>

- Set a password for the new user. Replace <username> with the username you created:

  sudo passwd <username>

- Configure the user to run sudo commands. For a secure and maintainable installation, it is a best practice to configure sudo privileges for the installation user in a dedicated file within the /etc/sudoers.d/ directory.

  Create a dedicated sudoers file for the user:

  sudo visudo -f /etc/sudoers.d/<username>

  Add the following line to the file, replacing <username> with the username you created:

  <username> ALL=(ALL) NOPASSWD: ALL

  Save and exit the file.
2.5. Downloading Ansible Automation Platform
Choose the installation program you need based on your Red Hat Enterprise Linux environment's internet connectivity, and download the installation program to your Red Hat Enterprise Linux host.
Prerequisites
- You have logged in to the Red Hat Enterprise Linux host as your non-root user.
Procedure
Download the latest version of containerized Ansible Automation Platform from the Ansible Automation Platform download page.
- For online installations: Ansible Automation Platform 2.5 Containerized Setup
- For offline or bundled installations: Ansible Automation Platform 2.5 Containerized Setup Bundle
- Copy the installation program .tar.gz file and the optional manifest .zip file onto your Red Hat Enterprise Linux host. You can use the scp command to securely copy the files. The basic syntax for scp is:

  scp [options] <path_to_source_file> <path_to_destination>

  For example, use the following scp command to copy the installation program .tar.gz file to an AWS EC2 instance with a private key (replace the placeholder <> values with your actual information):

  scp -i <path_to_private_key> ansible-automation-platform-containerized-setup-<version_number>.tar.gz ec2-user@<remote_host_ip_or_hostname>:<path_to_destination>
Decide where you want the installation program to reside on the file system. This is referred to as your installation directory.
- Installation related files are created under this location and require at least 15 GB for the initial installation.
- Unpack the installation program .tar.gz file into your installation directory, and go to the unpacked directory.

  To unpack the online installer:

  tar xfvz ansible-automation-platform-containerized-setup-<version_number>.tar.gz

  To unpack the offline or bundled installer:

  tar xfvz ansible-automation-platform-containerized-setup-bundle-<version_number>-<arch_name>.tar.gz
2.6. Configuring the inventory file
You can control the installation of Ansible Automation Platform with inventory files. Inventory files define the information needed to customize the installation. For example, host details, certificate details, and various component-specific settings.
Example inventory files are available in this document that you can copy and change to quickly get started.
Additionally, growth topology and enterprise topology inventory files are available in the following locations:
- In the downloaded installation program package:
  - The default inventory file, named inventory, is for the enterprise topology pattern.
  - To deploy the growth topology (all-in-one) pattern, use the inventory-growth file instead.
- In Container topologies in Tested deployment models.
To use the example inventory files, replace the < > placeholders with your specific variables, and update the host names.
Refer to the README.md file in the installation directory or Inventory file variables for more information about optional and required variables.
2.6.1. Inventory file for online installation for containerized growth topology (all-in-one)
Use the example inventory file to perform an online installation for the containerized growth topology (all-in-one):
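The example file itself ships with the installer as inventory-growth. A minimal sketch of its shape, assembled from the settings discussed in this chapter (ansible_connection=local, the [database] group, the registry and admin credential variables); check the exact group and variable names against the inventory-growth file shipped with the installer:

[automationgateway]
aap.example.org

[automationcontroller]
aap.example.org

[automationhub]
aap.example.org

[automationeda]
aap.example.org

[database]
aap.example.org

[all:vars]
# Run the installer on the same node that hosts Ansible Automation Platform
ansible_connection=local

# Ansible Automation Platform managed database
postgresql_admin_username=postgres
postgresql_admin_password=<set your own>

# Registry service account credentials (see Setting registry_username and registry_password)
registry_username=<registry_service_account_username>
registry_password=<registry_service_account_token>

gateway_admin_password=<set your own>
gateway_pg_host=aap.example.org
gateway_pg_password=<set your own>

controller_admin_password=<set your own>
controller_pg_host=aap.example.org
controller_pg_password=<set your own>

hub_admin_password=<set your own>
hub_pg_host=aap.example.org
hub_pg_password=<set your own>

eda_admin_password=<set your own>
eda_pg_host=aap.example.org
eda_pg_password=<set your own>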
- ansible_connection=local - Used for all-in-one installations where the installation program is run on the same node that hosts Ansible Automation Platform. If the installation program is run from a separate node, do not include ansible_connection=local. In this case, use an SSH connection instead.
- [database] - This group in the inventory file defines the Ansible Automation Platform managed database.
2.6.2. Inventory file for online installation for containerized enterprise topology
Use the example inventory file to perform an online installation for the containerized enterprise topology:
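The default inventory file shipped with the installer covers this pattern. As a sketch only, following the enterprise topology's pattern of dedicated hosts per component, an external database, and the [execution_nodes] group described later in this chapter (group names as in the growth sketch; the [all:vars] credential settings from that sketch still apply, minus ansible_connection=local and the [database] group):

[automationgateway]
gateway1.example.org
gateway2.example.org

[automationcontroller]
controller1.example.org
controller2.example.org

[automationhub]
hub1.example.org
hub2.example.org

[automationeda]
eda1.example.org
eda2.example.org

[execution_nodes]
exec1.example.org
exec2.example.org

[all:vars]
# Customer provided (external) database
gateway_pg_host=externaldb.example.org
controller_pg_host=externaldb.example.org
hub_pg_host=externaldb.example.org
eda_pg_host=externaldb.example.org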
2.6.3. Setting registry_username and registry_password
When using the registry_username and registry_password variables for an online non-bundled installation, you need to create a new registry service account.
Registry service accounts are named tokens that can be used in environments where credentials will be shared, such as deployment systems.
Procedure
- Go to https://access.redhat.com/terms-based-registry/accounts.
- On the Registry Service Accounts page, click New Service Account.
- Enter a name for the account using only the allowed characters.
- Optionally enter a description for the account.
- Click Create.
- Find the created account in the list by searching for your name in the search field.
- Click the name of the account that you created.
Alternatively, if you know the name of your token, you can go directly to the page by entering the URL:
https://access.redhat.com/terms-based-registry/token/<name-of-your-token>

A token page opens, displaying a generated username (different from the account name) and a token.
- If no token is displayed, click Regenerate Token. You can also click this button to generate a new username and token.
- Copy the username (for example "1234567|testuser") and use it to set the variable registry_username.
- Copy the token and use it to set the variable registry_password.
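For example, in your inventory file (values are placeholders):

registry_username=1234567|testuser
registry_password=<token_value>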
2.7. Advanced configuration options
Advanced configuration options, such as external database set up and the use of custom TLS certs, are available for more complex deployments of containerized Ansible Automation Platform.
If you are not using these advanced configuration options, go to Installing containerized Ansible Automation Platform to continue with your installation.
2.7.1. Adding a safe plugin variable to Event-Driven Ansible controller
When using redhat.insights_eda or similar plugins to run rulebook activations in Event-Driven Ansible controller, you must add a safe plugin variable to a directory in Ansible Automation Platform. This ensures connection between Event-Driven Ansible controller and the source plugin, and displays port mappings correctly.
Procedure
- Create a directory for the safe plugin variable:

  mkdir -p ./group_vars/automationeda

- Create a file within that directory for your new setting (for example, touch ./group_vars/automationeda/custom.yml).
- Add the variable eda_safe_plugins with a list of plugins to enable. For example:

  eda_safe_plugins: ['ansible.eda.webhook', 'ansible.eda.alertmanager']
2.7.2. Adding execution nodes
Containerized Ansible Automation Platform can deploy remote execution nodes.
You can define remote execution nodes in the [execution_nodes] group of your inventory file:
[execution_nodes]
<fqdn_of_your_execution_host>
By default, an execution node is configured with the following settings which can be modified as needed:
receptor_port=27199
receptor_protocol=tcp
receptor_type=execution
- receptor_port - The port number that receptor listens on for incoming connections from other receptor nodes.
- receptor_type - The role of the node. Valid options include execution or hop.
- receptor_protocol - The protocol used for communication. Valid options include tcp or udp.
By default, all nodes in the [execution_nodes] group are added as peers for the controller node. To change the peer configuration, use the receptor_peers variable.
The value of receptor_peers must be a comma-separated list of host names. Do not use inventory group names.
Example configuration:
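The example block was not preserved here; a sketch of a hop-node layout, assuming receptor_peers takes a quoted list of host names as described above:

[execution_nodes]
execution1.example.org
hop1.example.org receptor_type=hop receptor_peers='["execution1.example.org"]'

With this layout, the controller peers to both nodes by default, and hop1.example.org additionally peers to execution1.example.org.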
2.7.3. Configuring storage for automation hub
Configure storage backends for automation hub including Amazon S3, Azure Blob Storage, and Network File System (NFS) storage.
2.7.3.1. Configuring Amazon S3 storage for automation hub
Amazon S3 storage is a type of object storage that is supported in containerized installations. When using an AWS S3 storage backend, set hub_storage_backend to s3. The AWS S3 bucket needs to exist before running the installation program.
Procedure
- Ensure your AWS S3 bucket exists before proceeding with the installation.
- Add the following variables to your inventory file under the [all:vars] group to configure S3 storage:
  - hub_s3_access_key
  - hub_s3_secret_key
  - hub_s3_bucket_name
  - hub_s3_extra_settings

  You can pass extra parameters through the hub_s3_extra_settings dictionary. For example:

  hub_s3_extra_settings:
    AWS_S3_MAX_MEMORY_SIZE: 4096
    AWS_S3_REGION_NAME: eu-central-1
    AWS_S3_USE_SSL: True
2.7.3.2. Configuring Azure Blob Storage for automation hub
Azure Blob storage is a type of object storage that is supported in containerized installations. When using an Azure blob storage backend, set hub_storage_backend to azure. The Azure container needs to exist before running the installation program.
Procedure
- Ensure your Azure container exists before proceeding with the installation.
- Add the following variables to your inventory file under the [all:vars] group to configure Azure Blob storage:
  - hub_azure_account_key
  - hub_azure_account_name
  - hub_azure_container

  You can pass extra parameters through the hub_azure_extra_settings dictionary. For example:

  hub_azure_extra_settings:
    AZURE_LOCATION: foo
    AZURE_SSL: True
    AZURE_URL_EXPIRATION_SECS: 60
2.7.3.3. Configuring Network File System (NFS) storage for automation hub
NFS is a type of shared storage that is supported in containerized installations. Shared storage is required when installing more than one instance of automation hub with a file storage backend. When installing a single instance of the automation hub, shared storage is optional.
Procedure
- To configure shared storage for automation hub, set the hub_shared_data_path variable in your inventory file:

  hub_shared_data_path=<path_to_nfs_share>

  The value must match the format host:dir, for example nfs-server.example.com:/exports/hub.
- Optional: To change the mount options for your NFS share, use the hub_shared_data_mount_opts variable. The default value is rw,sync,hard.
2.7.4. Configuring a HAProxy load balancer
To configure a HAProxy load balancer in front of platform gateway with a custom CA cert, set the following inventory file variables under the [all:vars] group:
custom_ca_cert=<path_to_cert_crt>
gateway_main_url=<https://load_balancer_url>
HAProxy SSL passthrough mode is not supported with platform gateway.
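A minimal haproxy.cfg sketch for TLS termination in front of two platform gateway nodes; the host names, file paths, and backend layout are illustrative assumptions, not from this guide:

frontend aap_gateway
    bind *:443 ssl crt /etc/haproxy/certs/load_balancer.pem
    mode http
    default_backend gateway_nodes

backend gateway_nodes
    mode http
    balance roundrobin
    # Re-encrypt to the gateway nodes; SSL passthrough mode is not supported
    server gateway1 gateway1.example.org:443 ssl verify required ca-file /etc/haproxy/certs/custom_ca.crt check
    server gateway2 gateway2.example.org:443 ssl verify required ca-file /etc/haproxy/certs/custom_ca.crt check

Because the load balancer terminates TLS, its FQDN must also appear in the gateway certificate's SAN field, as described in the TLS certificate considerations later in this chapter.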
2.7.5. Enabling automation content collection and container signing
Automation content signing is disabled by default. To enable it, the following installation variables are required in the inventory file:
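A sketch of these variables, assuming the hub_collection_signing and hub_container_signing toggles alongside the key variables named later in this section (paths are placeholders):

# Collection signing
hub_collection_signing=true
hub_collection_signing_key=<full_path_to_collection_signing_key>

# Container signing
hub_container_signing=true
hub_container_signing_key=<full_path_to_container_signing_key>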
The following variables are required if the keys are protected by a passphrase:
# Collection signing
hub_collection_signing_pass=<gpg_key_passphrase>

# Container signing
hub_container_signing_pass=<gpg_key_passphrase>
The hub_collection_signing_key and hub_container_signing_key variables require the set up of keys before running an installation.
Automation content signing currently only supports GnuPG (GPG) based signature keys. For more information about GPG, see the GnuPG man page.
The choice of algorithm and cipher is the responsibility of the customer.
Procedure
- On a RHEL 9 server, run the following command to create a new key pair for collection signing:

  gpg --gen-key

  Enter your information for "Real name" and "Email address".

  If this step fails, your environment does not have the necessary prerequisite packages installed for GPG. Install the necessary packages to proceed.
- A dialog box appears and asks you for a passphrase. This is optional but recommended.
The keys are then generated. Note the expiry date in the output, which you can set based on company standards and needs.
- You can view all of your GPG keys by running the following command:

  gpg --list-secret-keys --keyid-format=long

- To export the public key, run the following command:

  gpg --export -a --output collection-signing-key.pub <email_address_used_to_generate_key>

- To export the private key, run the following command:

  gpg -a --export-secret-keys <email_address_used_to_generate_key> > collection-signing-key.priv

  If a passphrase is detected, you are prompted to enter it.

- To view the private key file contents, run the following command:

  cat collection-signing-key.priv
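The exported file contains an ASCII-armored PGP private key block of this general form (key material elided):

-----BEGIN PGP PRIVATE KEY BLOCK-----
...
-----END PGP PRIVATE KEY BLOCK-----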
- Repeat steps 1 to 9 to create a key pair for container signing.
Add the following variables to the inventory file and run the installation to create the signing services:
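Combining the variables above, a sketch of the complete signing configuration (assuming the hub_collection_signing and hub_container_signing toggles; omit the _pass variables if your keys have no passphrase):

# Collection signing
hub_collection_signing=true
hub_collection_signing_key=<full_path_to_collection_signing_key>
hub_collection_signing_pass=<gpg_key_passphrase>

# Container signing
hub_container_signing=true
hub_container_signing_key=<full_path_to_container_signing_key>
hub_container_signing_pass=<gpg_key_passphrase>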
2.7.6. Setting up a customer provided (external) database
There are two possible scenarios for setting up an external database:
- An external database with PostgreSQL admin credentials
- An external database without PostgreSQL admin credentials
- When using an external database with Ansible Automation Platform, you must create and maintain that database. Ensure that you clear your external database when uninstalling Ansible Automation Platform.
- Red Hat Ansible Automation Platform requires customer provided (external) databases to have ICU support.
- During configuration of an external database, you must check the external database coverage. For more information, see Red Hat Ansible Automation Platform Database Scope of Coverage.
2.7.6.1. Setting up an external database with PostgreSQL admin credentials
If you have PostgreSQL admin credentials, you can supply them in the inventory file and the installation program creates the PostgreSQL users and databases for each component for you. The PostgreSQL admin account must have SUPERUSER privileges.
Procedure
- To configure the PostgreSQL admin credentials, add the following variables to the inventory file under the [all:vars] group:

  postgresql_admin_username=<set your own>
  postgresql_admin_password=<set your own>
2.7.6.2. Setting up an external database without PostgreSQL admin credentials
If you do not have PostgreSQL admin credentials, then PostgreSQL users and databases need to be created for each component (platform gateway, automation controller, automation hub, and Event-Driven Ansible) before running the installation program.
Procedure
- Connect to a PostgreSQL compliant database server with a user that has SUPERUSER privileges:

  psql -h <hostname> -U <username> -p <port_number>

  For example:

  psql -h db.example.com -U superuser -p 5432

- Create the user with a password and ensure the CREATEDB role is assigned to the user. For more information, see Database Roles.

  CREATE USER <username> WITH PASSWORD '<password>' CREATEDB;

- Create the database and add the user you created as the owner:

  CREATE DATABASE <database_name> OWNER <username>;

- When you have created the PostgreSQL users and databases for each component, supply them in the inventory file under the [all:vars] group.
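A sketch for one component, assuming the <component>_pg_* variable pattern used by the containerized installer; repeat the pattern with the gateway_, hub_, and eda_ prefixes for the other components:

controller_pg_host=externaldb.example.org
controller_pg_port=5432
controller_pg_database=<controller_database_name>
controller_pg_username=<controller_database_user>
controller_pg_password=<controller_database_password>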
2.7.6.3. Enabling the hstore extension for the automation hub PostgreSQL database
The database migration script uses hstore fields to store information, therefore the hstore extension must be enabled in the automation hub PostgreSQL database.
This process is automatic when using the Ansible Automation Platform installer and a managed PostgreSQL server.
If the PostgreSQL database is external, you must enable the hstore extension in the automation hub PostgreSQL database manually before installation.
If the hstore extension is not enabled before installation, the database migration fails.
Procedure
- Check if the extension is available on the PostgreSQL server (automation hub database):

  psql -d <automation hub database> -c "SELECT * FROM pg_available_extensions WHERE name='hstore'"

  Where the default value for <automation hub database> is automationhub.

  Example output with hstore available:

    name  | default_version | installed_version |                     comment
  --------+-----------------+-------------------+---------------------------------------------------
   hstore | 1.7             |                   | data type for storing sets of (key, value) pairs
  (1 row)

  Example output with hstore not available:

   name | default_version | installed_version | comment
  ------+-----------------+-------------------+---------
  (0 rows)

- On a RHEL based server, the hstore extension is included in the postgresql-contrib RPM package, which is not installed automatically when installing the PostgreSQL server RPM package. To install the RPM package, use the following command:

  dnf install postgresql-contrib

- Load the hstore PostgreSQL extension into the automation hub database with the following command:

  psql -d <automation hub database> -c "CREATE EXTENSION hstore;"

  In the following output, the installed_version field lists the hstore extension used, indicating that hstore is enabled:

    name  | default_version | installed_version |                     comment
  --------+-----------------+-------------------+---------------------------------------------------
   hstore | 1.7             | 1.7               | data type for storing sets of (key, value) pairs
  (1 row)
2.7.6.4. Optional: configuring mutual TLS (mTLS) authentication for an external database
mTLS authentication is disabled by default. To configure each component’s database with mTLS authentication, add the following variables to your inventory file under the [all:vars] group and ensure each component has a different TLS certificate and key:
Procedure
- Add the following variables to your inventory file under the [all:vars] group:
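The variable block below is a sketch for one component, assuming the <component>_pg_cert_auth pattern; repeat with the gateway_, hub_, and eda_ prefixes, using a different certificate and key for each component:

controller_pg_cert_auth=true
controller_pg_tls_cert=<path_to_controller_db_client_certificate>
controller_pg_tls_key=<path_to_controller_db_client_key>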
2.7.7. Using custom TLS certificates
Red Hat Ansible Automation Platform uses X.509 certificate and key pairs to secure traffic both internally between Ansible Automation Platform components and externally for public UI and API connections.
There are two primary ways to manage TLS certificates for your Ansible Automation Platform deployment:
- Ansible Automation Platform generated certificates (this is the default)
- User-provided certificates
2.7.7.1. Ansible Automation Platform generated certificates
By default, the installation program creates a self-signed Certificate Authority (CA) and uses it to generate self-signed TLS certificates for all Ansible Automation Platform services. The self-signed CA certificate and key are generated on one node under the ~/aap/tls/ directory and copied to the same location on all other nodes. This CA is valid for 10 years after the initial creation date.
Self-signed certificates are not part of any public chain of trust. The installation program creates a certificate truststore that includes the self-signed CA certificate under ~/aap/tls/extracted/ and bind-mounts that directory to each Ansible Automation Platform service container under /etc/pki/ca-trust/extracted/. This allows each Ansible Automation Platform component to validate the self-signed certificates of the other Ansible Automation Platform services. The CA certificate can also be added to the truststore of other systems or browsers as needed.
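For example, to trust the installer-generated CA on another RHEL system, copy its certificate into the system anchors and update the trust store (the source file name under ~/aap/tls/ may differ in your deployment):

sudo cp <path_to_ca_certificate> /etc/pki/ca-trust/source/anchors/aap-ca.crt
sudo update-ca-trust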
2.7.7.2. User-provided certificates
To use your own TLS certificates and keys to replace some or all of the self-signed certificates generated during installation, you can set specific variables in your inventory file. These certificates and keys must be generated by a public or organizational CA in advance so that they are available during the installation process.
2.7.7.2.1. Using a custom CA to generate all TLS certificates
Use this method when you want Ansible Automation Platform to generate all of the certificates, but you want them signed by a custom CA rather than the default self-signed certificates.
Procedure
- To use a custom Certificate Authority (CA) to generate TLS certificates for all Ansible Automation Platform services, set the following variables in your inventory file:

  ca_tls_cert=<path_to_ca_tls_certificate>
  ca_tls_key=<path_to_ca_tls_key>
2.7.7.2.2. Providing custom TLS certificates for each service
Use this method if your organization manages TLS certificates outside of Ansible Automation Platform and requires manual provisioning.
Procedure
To manually provide TLS certificates for each individual service (for example, automation controller, automation hub, and Event-Driven Ansible), set the following variables in your inventory file:
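A sketch of these variables, derived from the per-service _tls_cert and _tls_key naming referenced in the considerations below:

gateway_tls_cert=<path_to_gateway_tls_certificate>
gateway_tls_key=<path_to_gateway_tls_key>
controller_tls_cert=<path_to_controller_tls_certificate>
controller_tls_key=<path_to_controller_tls_key>
hub_tls_cert=<path_to_hub_tls_certificate>
hub_tls_key=<path_to_hub_tls_key>
eda_tls_cert=<path_to_eda_tls_certificate>
eda_tls_key=<path_to_eda_tls_key>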
2.7.7.2.3. Considerations for certificates provided per service
When providing custom TLS certificates for each individual service, consider the following:
- It is possible to provide unique certificates per host. This requires defining the specific _tls_cert and _tls_key variables in your inventory file, as shown in the earlier inventory file example.
- For services deployed across many nodes (for example, when following the enterprise topology), the provided certificate for that service must include the FQDN of all associated nodes in its Subject Alternative Name (SAN) field.
- If an external-facing service (such as automation controller or platform gateway) is deployed behind a load balancer that performs SSL/TLS offloading, the service’s certificate must include the load balancer’s FQDN in its SAN field, in addition to the FQDNs of the individual service nodes.
2.7.7.2.4. Providing a custom CA certificate
When you manually provide TLS certificates, those certificates might be signed by a custom CA. Provide a custom CA certificate to ensure proper authentication and secure communication within your environment. If you have multiple custom CA certificates, you must merge them into a single file.
Procedure
- If any of the TLS certificates you manually provided are signed by a custom CA, you must specify the CA certificate by using the following variable in your inventory file:

  custom_ca_cert=<path_to_custom_ca_certificate>

- If you have more than one CA certificate, combine them into a single file and reference the combined certificate with the custom_ca_cert variable.
2.7.7.3. Receptor certificate considerations
When using a custom certificate for Receptor nodes, the certificate requires the otherName field specified in the Subject Alternative Name (SAN) of the certificate with the value 1.3.6.1.4.1.2312.19.1. For more information, see Above the mesh TLS.
Receptor does not support the usage of wildcard certificates. Additionally, each Receptor certificate must have the host FQDN specified in its SAN for TLS hostname validation to be correctly performed.
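For illustration, an OpenSSL extension file that produces a Receptor-compatible SAN; the OID is taken from the requirement above, the host name is a placeholder:

# receptor-san.cnf - pass to openssl via -extfile when signing the node CSR
subjectAltName = @alt_names

[alt_names]
DNS.1 = exec1.example.org
otherName.1 = 1.3.6.1.4.1.2312.19.1;UTF8:exec1.example.org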
2.7.7.4. Redis certificate considerations
When using custom TLS certificates for Redis-related services, consider the following for mutual TLS (mTLS) communication if specifying Extended Key Usage (EKU):
- The Redis server certificate (redis_tls_cert) should include the serverAuth (web server authentication) and clientAuth (client authentication) EKUs.
- The Redis client certificates (gateway_redis_tls_cert, eda_redis_tls_cert) should include the clientAuth (client authentication) EKU.
2.7.8. Using custom Receptor signing keys
Receptor signing is enabled by default unless receptor_disable_signing=true is set, and an RSA key pair (public and private) is generated by the installation program. However, you can set custom RSA public and private keys by using the following variables:
receptor_signing_private_key=<full_path_to_private_key>
receptor_signing_public_key=<full_path_to_public_key>
2.8. Installing containerized Ansible Automation Platform
After you prepare the Red Hat Enterprise Linux host, download Ansible Automation Platform, and configure the inventory file, run the install playbook to install containerized Ansible Automation Platform.
Prerequisites
You have done the following:
- Prepared the Red Hat Enterprise Linux host
- Prepared the managed nodes
- Downloaded Ansible Automation Platform
- Configured the inventory file
- Logged in to the Red Hat Enterprise Linux host as your non-root user
Procedure
- Go to the installation directory on your Red Hat Enterprise Linux host.
- Run the install playbook:

  ansible-playbook -i <inventory_file_name> ansible.containerized_installer.install

  For example:

  ansible-playbook -i inventory ansible.containerized_installer.install

  You can add additional parameters to the installation command as needed:

  ansible-playbook -i <inventory_file_name> -e @<vault_file_name> --ask-vault-pass -K -v ansible.containerized_installer.install

  For example:

  ansible-playbook -i inventory -e @vault.yml --ask-vault-pass -K -v ansible.containerized_installer.install

  - -i <inventory_file_name> - The inventory file to use for the installation.
  - -e @<vault_file_name> --ask-vault-pass - (Optional) If you are using a vault to store sensitive variables, add this to the installation command.
  - -K - (Optional) If your privilege escalation requires you to enter a password, add this to the installation command. You are then prompted for the BECOME password.
  - -v - (Optional) You can use increasing verbosity, up to 4 v's (-vvvv), to see the details of the installation process. However, this can significantly increase installation time, so use it only as needed or requested by Red Hat support.
Verification
- After the installation completes, verify that you can access Ansible Automation Platform, which is available by default at the following URL:

  https://<gateway_node>:443

- Log in as the admin user with the credentials you created for gateway_admin_username and gateway_admin_password.
- The default ports and protocols used for Ansible Automation Platform are 80 (HTTP) and 443 (HTTPS). You can customize the ports with the following variables:

  envoy_http_port=80
  envoy_https_port=443

  If you want to disable HTTPS, set envoy_disable_https to true:

  envoy_disable_https: true
2.9. Updating containerized Ansible Automation Platform
Perform a patch update for a container-based installation of Ansible Automation Platform from 2.5 to 2.5.x.
Upgrades from 2.4 Containerized Ansible Automation Platform Tech Preview to 2.5 Containerized Ansible Automation Platform are not supported.
Prerequisites
- You have reviewed the release notes for the associated patch release.
- You have created a backup of your Ansible Automation Platform deployment.
Procedure
- Log in to the Red Hat Enterprise Linux host as your dedicated non-root user.
- Follow the steps in Downloading Ansible Automation Platform to download the latest version of containerized Ansible Automation Platform.
- Copy the downloaded installation program to your Red Hat Enterprise Linux Host.
- Edit the inventory file to match your required configuration. You can keep the same parameters from your existing Ansible Automation Platform deployment or you can change the parameters to match any modifications to your environment.
- Run the install playbook:

  ansible-playbook -i inventory ansible.containerized_installer.install

  - If your privilege escalation requires a password to be entered, append -K to the command. You are then prompted for the BECOME password.
  - You can use increasing verbosity, up to 4 v's (-vvvv), to see the details of the installation process. However, this can significantly increase installation time, so use it only as needed or requested by Red Hat support.
- The update begins.
2.10. Backing up containerized Ansible Automation Platform
Perform a backup of your container-based installation of Ansible Automation Platform.
- When backing up Ansible Automation Platform, use the installation program that matches your currently installed version of Ansible Automation Platform.
- Backup functionality only works with the PostgreSQL versions supported by your current Ansible Automation Platform version. For more information, see System requirements.
- Backup and restore for content stored in Azure Blob Storage or Amazon S3 must be handled through their respective vendor portals, as each vendor provides their own backup solutions.
Prerequisites
- You have logged in to the Red Hat Enterprise Linux host as your dedicated non-root user.
Procedure
- Go to the Red Hat Ansible Automation Platform installation directory on your Red Hat Enterprise Linux host.
- To control compression of the backup artifacts before they are sent to the host running the backup operation, you can use compression variables in your inventory file. Separate variables control compression for filesystem related backup files and for database related backup files. For the variable names and defaults, see the README.md file in the installation directory.
- Run the backup playbook:

  ansible-playbook -i <path_to_inventory> ansible.containerized_installer.backup

  The backup process creates archives of the following data:
- PostgreSQL databases
- Configuration files
- Data files
Next steps
To customize the backup process, you can use the following variables in your inventory file:
- Change the backup destination directory from the default ./backups by using the backup_dir variable.
- Exclude paths that contain duplicated data, such as snapshot subdirectories, by using the hub_data_path_exclude variable. For example, to exclude a .snapshots subdirectory, specify hub_data_path_exclude=['*/.snapshots/*'] in your inventory file.

  Alternatively, you can use the command-line interface with the -e flag to pass this variable at runtime:

  ansible-playbook -i inventory ansible.containerized_installer.backup -e hub_data_path_exclude="['*/.snapshots/*']"
2.11. Restoring containerized Ansible Automation Platform
Restore your container-based installation of Ansible Automation Platform from a backup, or to a different environment.
When restoring Ansible Automation Platform, use the latest installation program available at the time of the restore. For example, if you are restoring a backup taken from version 2.5-1, use the latest 2.5-x installation program available at the time of the restore.
Restore functionality only works with the PostgreSQL versions supported by your current Ansible Automation Platform version. For more information, see System requirements.
Prerequisites
- You have logged in to the Red Hat Enterprise Linux host as your dedicated non-root user.
- You have a backup of your Ansible Automation Platform deployment. For more information, see Backing up containerized Ansible Automation Platform.
- If restoring to a different environment with the same hostnames, you have performed a fresh installation on the target environment with the same topology as the original (source) environment.
- You have ensured that the administrator credentials on the target environment match the administrator credentials from the source environment.
Procedure
- Go to the installation directory on your Red Hat Enterprise Linux host.
Perform the relevant restoration steps:
- If you are restoring to the same environment with the same hostnames, run the restore playbook:

  ansible-playbook -i <path_to_inventory> ansible.containerized_installer.restore

  This restores the important data deployed by the containerized installer, such as:
  - PostgreSQL databases
  - Configuration files
  - Data files

  By default, the backup directory is set to ./backups. You can change this by using the backup_dir variable in your inventory file.
- If you are restoring to a different environment with different hostnames, perform the following additional steps before running the restore playbook:

  Important: Restoring to a different environment with different hostnames is not recommended and is intended only as a workaround.

  - For each component, identify the backup file from the source environment that contains the PostgreSQL dump file. For example:

    $ cd ansible-automation-platform-containerized-setup-<version_number>/backups
    $ tar tvf gateway_env1-gateway-node1.tar.gz | grep db
    -rw-r--r-- ansible/ansible 4850774 2025-06-30 11:05 aap/backups/awx.db

  - Copy the backup files from the source environment to the target environment.
  - Rename the backup files on the target environment to reflect the new node names. For example:

    $ cd ansible-automation-platform-containerized-setup-<version_number>/backups
    $ mv gateway_env1-gateway-node1.tar.gz gateway_env2-gateway-node1.tar.gz

  - For enterprise topologies, ensure that the component backup file containing the component.db file is listed first in its group within the inventory file. For example:

    $ cd ansible-automation-platform-containerized-setup-<version_number>
    $ ls backups/gateway*
    gateway_env2-gateway-node1.tar.gz  gateway_env2-gateway-node2.tar.gz
    $ tar tvf backups/gateway_env2-gateway-node1.tar.gz | grep db
    -rw-r--r-- ansible/ansible 416687 2025-06-30 11:05 aap/backups/gateway.db
    $ tar tvf backups/gateway_env2-gateway-node2.tar.gz | grep db
    $ vi inventory
    [automationgateway]
    env2-gateway-node1
    env2-gateway-node2
2.12. Uninstalling containerized Ansible Automation Platform
Uninstall your container-based installation of Ansible Automation Platform.
Prerequisites
- You have logged in to the Red Hat Enterprise Linux host as your dedicated non-root user.
Procedure
- If you intend to reinstall Ansible Automation Platform and want to use the preserved databases, you must collect the existing secret keys.

  First, list the available secrets:

  podman secret list

  Next, collect the secret keys by running the following command:

  podman secret inspect --showsecret <secret_key_variable> | jq -r .[].SecretData

  For example:

  podman secret inspect --showsecret controller_secret_key | jq -r .[].SecretData
- Run the uninstall playbook:

  ansible-playbook -i inventory ansible.containerized_installer.uninstall

  This stops all systemd units and containers and then deletes all resources used by the containerized installer, such as:
- configuration and data directories and files
- systemd unit files
- Podman containers and images
- RPM packages
- To keep container images, set the container_keep_images parameter to true:

  ansible-playbook -i inventory ansible.containerized_installer.uninstall -e container_keep_images=true

- To keep PostgreSQL databases, set the postgresql_keep_databases parameter to true:

  ansible-playbook -i inventory ansible.containerized_installer.uninstall -e postgresql_keep_databases=true
2.13. Reinstalling containerized Ansible Automation Platform
To reinstall a containerized deployment after uninstalling and preserving the database, follow the steps in Installing containerized Ansible Automation Platform and include the existing secret key value in the playbook command:
ansible-playbook -i inventory ansible.containerized_installer.install -e controller_secret_key=<secret_key_value>