RPM installation
Install the RPM version of Ansible Automation Platform
Abstract
Preface
Thank you for your interest in Red Hat Ansible Automation Platform. Ansible Automation Platform is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments.
This guide helps you to understand the installation requirements and processes behind installing Ansible Automation Platform. This document has been updated to include information for the latest release of Ansible Automation Platform.
The Ansible Automation Platform RPM installer was deprecated in 2.5 and will be removed in Ansible Automation Platform 2.7. The RPM installer will be supported for RHEL 9 during the lifecycle of Ansible Automation Platform 2.6 to support migrations to existing supported topologies. For more information on upgrade and migration paths, see the Support matrix for upgrade scenarios.
Providing feedback on Red Hat documentation
If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request.
Chapter 1. Red Hat Ansible Automation Platform installation overview
The Red Hat Ansible Automation Platform installation program offers you flexibility, allowing you to install Ansible Automation Platform by using several supported installation scenarios.
Regardless of the installation scenario you choose, installing Ansible Automation Platform involves the following steps:
- Editing the Red Hat Ansible Automation Platform installer inventory file
- The Ansible Automation Platform installer inventory file allows you to specify your installation scenario and describe host deployments to Ansible. The examples provided in this document show the parameter specifications needed to install that scenario for your deployment.
- Running the Red Hat Ansible Automation Platform installer setup script
- The setup script installs Ansible Automation Platform by using the required parameters defined in the inventory file.
- Verifying your Ansible Automation Platform installation
- After installing Ansible Automation Platform, you can verify that the installation has been successful by logging in to the platform UI and seeing the relevant functionality.
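The steps above can be made concrete with a small sketch. The following is an illustrative growth-topology inventory; the hostnames are placeholders, and the group and variable names shown are assumptions based on common RPM installer scenarios — check the Inventory file variables reference for your release before using them.

```ini
# Illustrative inventory sketch (hostnames and variables are placeholders)
[automationgateway]
gateway.example.com

[automationcontroller]
controller.example.com

[database]
db.example.com

[all:vars]
admin_password=<set_admin_password>
pg_host=db.example.com
pg_password=<set_pg_password>
```

Running ./setup.sh from the installer directory then deploys the scenario this file describes.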
1.1. Prerequisites
- You have chosen and obtained a platform installer from the Red Hat Ansible Automation Platform Product Software.
- You are installing on a machine that meets the base system requirements.
- You have updated all packages on your RHEL nodes to the latest version.
To prevent errors, fully upgrade your RHEL nodes before installing Ansible Automation Platform.
- You have created a Red Hat Registry Service Account by using the instructions in Creating Registry Service Accounts.
1.2. Managing Ansible Automation Platform subscriptions, updates, and support
Ansible is an open source software project and is licensed under the GNU General Public License version 3, as described in the Ansible Source Code.
You must have valid subscriptions attached before installing Ansible Automation Platform.
1.2.1. Trial and evaluation
You need a subscription to run Ansible Automation Platform. You can start by signing up for a free trial subscription.
- Trial subscriptions for Ansible Automation Platform are available at the Red Hat product trial center.
- Support is not included in a trial subscription or during an evaluation of the Ansible Automation Platform.
1.2.2. Node counting in subscriptions
The Ansible Automation Platform subscription defines the number of Managed Nodes that can be managed as part of your subscription.
For more information about managed node requirements for subscriptions, see How are "managed nodes" defined as part of the Red Hat Ansible Automation Platform offering.
Ansible does not recycle node counts or reset automated hosts.
1.2.3. Subscription Types
Red Hat Ansible Automation Platform is offered as an annual subscription at various levels of support and numbers of managed machines.
Standard:
- Manage any size environment
- Enterprise 8x5 support and SLA
- Maintenance and upgrades included
- Review the SLA at Product Support Terms of Service
- Review the Red Hat Support Severity Level Definitions
Premium:
- Manage any size environment, including mission-critical environments
- Premium 24x7 support and SLA
- Maintenance and upgrades included
- Review the SLA at Product Support Terms of Service
- Review the Red Hat Support Severity Level Definitions
All subscription levels include regular updates and releases of automation controller, Ansible, and any other components of the Ansible Automation Platform.
For more information, contact Ansible through the Red Hat Customer Portal or at the Ansible site.
1.2.4. Attaching your Red Hat Ansible Automation Platform subscription
You must have valid subscriptions on all nodes before installing Red Hat Ansible Automation Platform.
Simple Content Access (SCA) is now the default subscription method for all Red Hat accounts. With SCA, you must register your systems to Red Hat Subscription Management (RHSM) or Satellite to access content. Traditional pool-based subscription attachment commands (such as subscription-manager attach --pool or subscription-manager attach --auto) are no longer required. For more information, see Simple Content Access.
Procedure
Register your system with Red Hat Subscription Management:
$ sudo subscription-manager register --username <$INSERT_USERNAME_HERE> --password <$INSERT_PASSWORD_HERE>
With Simple Content Access (SCA), registration is the only step required to access Ansible Automation Platform content.
Note: For accounts still using legacy subscription pools, you might have to manually attach subscriptions by using the commands shown in the Troubleshooting section.
Verification
Refresh the subscription information on your system:
$ sudo subscription-manager refresh
Verify your registration:
$ sudo subscription-manager identity
This command displays your system identity, name, organization name, and organization ID, confirming successful registration.
Troubleshooting
For legacy accounts not using SCA, you might have to manually attach subscriptions:
$ sudo subscription-manager list --available --all | grep "Ansible Automation Platform" -B 3 -A 6
$ sudo subscription-manager attach --pool=<pool_id>
Note: Do not use MCT4022 as a pool_id, as it can cause subscription attachment to fail.
For legacy accounts not using SCA, if you are unable to locate certain packages that came bundled with the Ansible Automation Platform installer, or if you are seeing a "Repositories disabled by configuration" message, use the following steps to identify and enable the required repository.
List available repositories:
$ sudo subscription-manager repos --list | grep -i ansible-automation-platform
Identify the repository name that matches your RHEL version, Ansible Automation Platform version, and architecture (for example, ansible-automation-platform-2.6-for-rhel-9-x86_64-rpms).
Enable the repository:
$ sudo subscription-manager repos --enable <repository_name>
1.2.5. Obtaining a manifest file
You can obtain a subscription manifest in the Subscription Allocations section of Red Hat Subscription Management.
After you obtain a subscription allocation, you can download its manifest file and upload it to activate Ansible Automation Platform.
To begin, log in to the Red Hat Customer Portal by using your administrator user account and follow the procedures listed.
1.2.5.1. Create a subscription allocation
With a new subscription allocation you can set aside subscriptions and entitlements for a system that is currently offline or air-gapped. This is necessary before you can download its manifest and upload it to Ansible Automation Platform.
Procedure
- From the Subscription Allocations page, click .
- Enter a name for the allocation so that you can find it later.
- Select Type: Satellite 6.16 as the management application.
- Click .
1.2.5.2. Adding subscriptions to a subscription allocation
After you create an allocation, you can add the subscriptions you need for Ansible Automation Platform to run properly. This step is necessary before you can download the manifest and add it to Ansible Automation Platform.
Procedure
- From the Subscription Allocations page, click the name of the Subscription Allocation to which you want to add a subscription.
- Click the Subscriptions tab.
- Click .
- Enter the number of Ansible Automation Platform Entitlements you plan to add.
- Click .
1.2.5.3. Downloading a manifest file
After you create an allocation with the appropriate subscriptions on it, you can download the manifest file from Red Hat Subscription Management.
Procedure
- From the Subscription Allocations page, click the name of the Subscription Allocation to which you want to generate a manifest.
- Click the Subscriptions tab.
Click to download the manifest file.
This downloads a file named manifest_<allocation name>_<date>.zip to your default downloads folder.
1.2.6. Activating Red Hat Ansible Automation Platform
Red Hat Ansible Automation Platform uses available subscriptions or a subscription manifest to allow the use of Ansible Automation Platform.
To obtain a subscription, you can do either of the following:
- Use your Red Hat username and password, service account credentials, or Satellite credentials when you launch Ansible Automation Platform.
- Upload a subscription manifest file, either by using the Red Hat Ansible Automation Platform interface or manually in an Ansible Playbook.
1.2.6.1. Activate with credentials
Organization Administrators can activate their Ansible Automation Platform subscription on first launch by using a Red Hat service account’s client ID and client secret to automatically retrieve and import the license.
If you do not have administrative access, you can enter your Red Hat username and password in the Username and password tab to locate and add your subscription to your Ansible Automation Platform instance.
You are opted in for Automation Analytics by default when you activate the platform on first login. This helps Red Hat improve the product by delivering you a much better user experience. You can opt out after activating Ansible Automation Platform by taking the following steps:
- From the navigation panel, select → → .
- Clear the Gather data for Automation Analytics option.
- Click .
Procedure
- Log in to Red Hat Ansible Automation Platform.
- Select the Service Account tab in the subscription wizard.
- Enter your Client ID and Client secret.
Select your subscription from the Subscription list.
Note: You can also enter your Satellite username and password in the Satellite tab if your cluster nodes are registered to Satellite through Subscription Manager.
- Review the End User License Agreement and select I agree to the End User License Agreement.
- Click .
Verification
After your subscription has been accepted, subscription details are displayed. A status of Compliant indicates that your subscription is in compliance with the number of hosts you have automated within your subscription count. Otherwise, your status shows as Out of Compliance, indicating that you have exceeded the number of hosts in your subscription. Other important information displayed includes the following:
- Hosts automated
- Host count automated by the job, which consumes the license count
- Hosts imported
- Host count considering all inventory sources (does not impact hosts remaining)
- Hosts remaining
- Total host count minus hosts automated
1.2.6.2. Activate with a manifest file
If you have a subscriptions manifest, you can upload the manifest file by using the Red Hat Ansible Automation Platform interface.
You are opted in for Automation Analytics by default when you activate the platform on first login. This helps Red Hat improve the product by delivering you a much better user experience. You can opt out after activating Ansible Automation Platform by taking the following steps:
- From the navigation panel, select → → .
- Clear the Gather data for Automation Analytics option.
- Click .
Prerequisites
You must have a Red Hat subscription manifest file exported from the Red Hat Customer Portal. For more information, see Obtaining a manifest file.
Procedure
Log in to Red Hat Ansible Automation Platform.
- If you are not immediately taken to the subscription wizard, go to → .
- Select the Subscription manifest tab.
- Click and select your manifest file.
- Review the End User License Agreement and select I agree to the End User License Agreement.
Click .
Note: If the button is disabled on the subscription wizard page, clear the USERNAME and PASSWORD fields.
Verification
After your subscription has been accepted, subscription details are displayed. A status of Compliant indicates that your subscription is in compliance with the number of hosts you have automated within your subscription count. Otherwise, your status shows as Out of Compliance, indicating that you have exceeded the number of hosts in your subscription. Other important information displayed includes the following:
- Hosts automated
- Host count automated by the job, which uses the subscription count
- Hosts imported
- Host count considering all inventory sources (does not impact hosts remaining)
- Hosts remaining
- Total host count minus hosts automated
Chapter 2. System requirements
Use this information when planning your Red Hat Ansible Automation Platform installations and designing automation mesh topologies that fit your use case.
2.1. Prerequisites
- Obtain root access, either through the sudo command or through privilege escalation.
- De-escalate privileges from root to users such as AWX, PostgreSQL, Event-Driven Ansible, or Pulp.
- Configure an NTP client on all nodes.
2.2. Red Hat Ansible Automation Platform system requirements
Your system must meet the following minimum system requirements to install and run Red Hat Ansible Automation Platform. A resilient deployment requires 10 virtual machines with a minimum of 16 gigabytes (GB) of RAM and 4 virtual CPUs (vCPU). See Tested deployment models for more information on topology options.
| Type | Description | Notes |
|---|---|---|
| Subscription | Valid Red Hat Ansible Automation Platform subscription | |
| Operating system | | Red Hat Ansible Automation Platform is also supported on OpenShift; see Installing on OpenShift Container Platform for more information. |
| CPU architecture | x86_64, AArch64, s390x (IBM Z), ppc64le (IBM Power) | |
| Ansible-core | Ansible-core version 2.16 or later | Ansible Automation Platform uses the system-wide ansible-core package to install the platform, but uses ansible-core 2.16 for both its control plane and built-in execution environments. |
| Browser | A currently supported version of Mozilla Firefox or Google Chrome. | |
| Database | | |
| Component | RAM | vCPU | Disk IOPS | Storage |
|---|---|---|---|---|
| Platform gateway | 16GB | 4 | 3000 | 60GB minimum |
| Control nodes | 16GB | 4 | 3000 | 80GB minimum with at least 20GB available under |
| Execution nodes | 16GB | 4 | 3000 | 60GB minimum |
| Hop nodes | 16GB | 4 | 3000 | 60GB minimum |
| Automation hub | 16GB | 4 | 3000 | 60GB minimum with at least 40GB allocated to |
| Database | 16GB | 4 | 3000 | 100GB minimum allocated to |
| Event-Driven Ansible controller | 16GB | 4 | 3000 | 60GB minimum |
These are minimum requirements and can be increased for larger workloads in increments of 2x (for example 16GB becomes 32GB and 4 vCPU becomes 8vCPU). See the horizontal scaling guide for more information.
2.2.1. Repository requirements
Enable the following repositories only when installing Red Hat Ansible Automation Platform:
- RHEL BaseOS
- RHEL AppStream
If you enable repositories besides those mentioned above, the Red Hat Ansible Automation Platform installation could fail unexpectedly.
The following are necessary for you to work with project updates and collections:
- Ensure that the Network ports and protocols listed in Table 6.3. Automation Hub are available for successful connection and download of collections from automation hub or Ansible Galaxy server.
2.2.2. Additional notes for Red Hat Ansible Automation Platform requirements
- The Ansible Automation Platform database backups are staged on each node at /var/backups/automation-platform through the variable backup_dir. You might need to mount a new volume to /var/backups, or change the staging location with the variable backup_dir, to prevent issues with disk space before running the ./setup.sh -b script.
- If performing a bundled Ansible Automation Platform installation, the installation setup.sh script attempts to install ansible-core (and its dependencies) from the bundle for you.
- If you have installed ansible-core manually, the Ansible Automation Platform installation setup.sh script detects that Ansible has been installed and does not attempt to reinstall it.
You must use ansible-core installed by using DNF. Ansible-core version 2.16 is required for Ansible Automation Platform 2.6 and later.
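If the default staging location is too small, the backup staging directory can be pointed at a larger mount through the inventory, as described above. A hedged sketch (the mount path is a placeholder):

```ini
[all:vars]
# Stage database backups on a dedicated volume instead of the default
# /var/backups/automation-platform before running ./setup.sh -b
backup_dir=/mnt/aap_backups
```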
2.3. Platform gateway system requirements
Platform gateway is the service that handles authentication and authorization for Ansible Automation Platform. It provides a single entry into the platform and serves the platform’s user interface.
2.4. Automation controller system requirements
Automation controller is a distributed system, where different software components can be co-located or deployed across many compute nodes. The installation program provides four node types as abstractions to help you design the topology appropriate for your use case: control, hybrid, execution, and hop nodes.
Use the following recommendations for node sizing:
| Node Type | RAM (Minimum) | vCPU (Minimum) | Disk IOPS (Minimum) | Storage and Notes |
|---|---|---|---|---|
| Execution Node | 16 GB | 4 vCPU | 3000 | Runs automation. Increase RAM/CPU to increase capacity for concurrent job forks. Performance depends heavily on the number of jobs run simultaneously. |
| Control Node | 16 GB | 4 vCPU | 3000 | Processes events and runs cluster jobs (for example, project updates). Storage: 80GB minimum, with at least 20GB available under |
| Hybrid Node | 16 GB | 4 vCPU | 3000 | A combination of Control and Execution node functions. Storage requirements generally match the Control Node. |
| Hop Node | 16 GB | 4 vCPU | 3000 | Routes traffic within the automation mesh (e.g., bastion host). RAM can affect throughput, but CPU activity is typically low. Network latency is a more important factor than RAM or CPU. |
Actual RAM requirements vary based on how many hosts automation controller manages simultaneously, which is controlled by the forks parameter in the job template or the system ansible.cfg file. To avoid possible resource conflicts, Ansible recommends 1 GB of memory per 10 forks, plus a 2 GB reservation for automation controller. See Automation controller capacity determination and job impact. If forks is set to 400, 42 GB of memory is recommended.
A larger number of hosts can be addressed, but if the fork number is less than the total host count, more passes across the hosts are required. You can avoid these RAM limitations by using any of the following approaches:
- Use rolling updates.
- Use the provisioning callback system built into automation controller, where each system requesting configuration enters a queue and is processed as quickly as possible.
- Where it fits your use case, have automation controller produce or deploy images rather than configuring each host individually.
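The fork-to-RAM guideline above can be sketched as a quick calculation. This is a rough sizing aid only, built from the 1 GB per 10 forks plus 2 GB reservation stated above:

```shell
# Estimate recommended control-node RAM from the forks setting:
# 1 GB per 10 forks, plus 2 GB reserved for automation controller.
forks=400
recommended_ram_gb=$(( forks / 10 + 2 ))
echo "forks=${forks} -> recommended RAM: ${recommended_ram_gb} GB"
```

For forks=400 this yields the 42 GB figure quoted above.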
2.5. Automation hub system requirements
With Automation hub you can discover and use new certified automation content from Red Hat Ansible and Certified Partners.
On Ansible automation hub, you can discover and manage Ansible Collections, which are supported automation content developed by Red Hat and its partners for use cases such as cloud automation, network automation, and security automation.
Private automation hub
If you install private automation hub from an internal address with a certificate that only encompasses the external address, the resulting installation cannot be used as a container registry without certificate issues.
To avoid this, use the automationhub_main_url inventory variable with a value such as https://pah.example.com linking to the private automation hub node in the installation inventory file.
This adds the external address to /etc/pulp/settings.py. This implies that you only want to use the external address.
For information about inventory file variables, see Inventory file variables.
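For example, the external address can be set in the installer inventory like this (the hostname is a placeholder):

```ini
[all:vars]
# External address that clients and container engines use for private automation hub
automationhub_main_url=https://pah.example.com
```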
2.5.1. High availability automation hub requirements
Before deploying a high availability (HA) automation hub, ensure that you have a shared storage file system installed in your environment and that you have configured your network storage system, if applicable.
2.5.1.2. Installing firewalld for HA hub deployment
If you intend to install a HA automation hub that uses network storage on the automation hub nodes themselves, you must first install and use firewalld to open the ports required by your shared storage system before running the Ansible Automation Platform installer.
Install and configure firewalld by executing the following commands:
Install the firewalld daemon:
$ dnf install firewalld
Add your network storage under <service> by using the following command:
$ firewall-cmd --permanent --add-service=<service>
Note: For a list of supported services, use the firewall-cmd --get-services command.
Reload to apply the configuration:
$ firewall-cmd --reload
2.6. Event-Driven Ansible controller system requirements
The Event-Driven Ansible controller is a single-node system capable of handling a variable number of long-running processes (such as rulebook activations) on-demand, depending on the number of CPU cores.
Use the following minimum requirements for Event-Driven Ansible controller:
| Requirement | Required |
|---|---|
| RAM | 16 GB |
| CPUs | 4 |
| Local disk |
|
When you activate an Event-Driven Ansible rulebook under standard conditions, it uses about 250 MB of memory. However, the actual memory consumption can vary significantly based on the complexity of your rules and the volume and size of the events processed. In scenarios where a large number of events are anticipated or the rulebook complexity is high, conduct a preliminary assessment of resource usage in a staging environment. This ensures that your maximum number of activations is based on the capacity of your resources.
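The ~250 MB-per-activation figure above can be turned into a rough capacity estimate. This is an approximation only; real memory use varies with rule complexity and event volume, so validate in a staging environment as recommended:

```shell
# Approximate activation ceiling for a 16 GB Event-Driven Ansible node,
# assuming ~250 MB per rulebook activation under standard conditions.
ram_mb=16384
per_activation_mb=250
echo "approximate activation ceiling: $(( ram_mb / per_activation_mb ))"
```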
2.7. PostgreSQL requirements
Red Hat Ansible Automation Platform 2.6 requires external (customer supported) databases to have ICU support. PostgreSQL user passwords are hashed with the SCRAM-SHA-256 secure hashing algorithm before being stored in the database.
To determine whether your automation controller instance has access to the database, use the awx-manage check_db command.
- Automation controller data is stored in the database. Database storage increases with the number of hosts managed, number of jobs run, number of facts stored in the fact cache, and number of tasks in any individual job. For example, a playbook that runs every hour (24 times a day) across 250 hosts, with 20 tasks, stores over 800,000 events in the database every week.
- If not enough space is reserved in the database, old job runs and facts must be cleaned up on a regular basis. For more information, see Management Jobs in the Configuring automation execution guide.
2.7.1. PostgreSQL Configurations
Optionally, you can configure the PostgreSQL database as separate nodes that are not managed by the Red Hat Ansible Automation Platform installer. When the Ansible Automation Platform installer manages the database server, it configures the server with defaults that are generally recommended for most workloads. For more information about the settings you can use to improve database performance, see PostgreSQL database configuration and maintenance for automation controller in the Configuring automation execution guide.
2.7.2. Setting up an external (customer supported) database
- When using an external database with Ansible Automation Platform, you must create and maintain that database. Ensure that you clear your external database when uninstalling Ansible Automation Platform.
- Red Hat Ansible Automation Platform 2.6 requires the external (customer supported) databases to have ICU support.
- During configuration of an external database, you must check the external database coverage. For more information, see Red Hat Ansible Automation Platform Database Scope of Coverage.
Use the following procedure to configure an external PostgreSQL compliant database for use with an Ansible Automation Platform component, for example automation controller, Event-Driven Ansible, automation hub, or platform gateway.
Procedure
Connect to a PostgreSQL compliant database server with superuser privileges:
# psql -h <hostname> -U superuser -p 5432 -d postgres
The relevant psql options are:
-h hostname, --host=hostname
Specify the hostname of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the UNIX-domain socket.
-d dbname, --dbname=dbname
Specify the name of the database to connect to. This is equivalent to specifying dbname as the first non-option argument on the command line. The dbname can be a connection string; if so, connection string parameters override any conflicting command line options.
-U username, --username=username
Connect to the database as the user username instead of the default (you must have permission to do so).
- Create the user, database, and password, with the createDB or administrator role assigned to the user. For further information, see Database Roles.
- Run the installation program. If you are using a PostgreSQL database, the database is owned by the connecting user and must have a createDB or administrator role assigned to it.
- Check that you can connect to the created database with the credentials provided in the inventory file.
- Check the permissions of the user. The user should have the createDB or administrator role.
After you create the PostgreSQL users and databases for each component, add the database credentials and host details in the inventory file under the [all:vars] group.
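As an illustrative sketch of that final step, the database details might be added to the inventory as follows. The hostname and credential values are placeholders, and the exact variable names differ by component and release, so verify them against the Inventory file variables reference:

```ini
[all:vars]
# Automation controller database (all values are placeholders)
pg_host=db.example.com
pg_port=5432
pg_database=awx
pg_username=awx
pg_password=<set_pg_password>
pg_sslmode=prefer
```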
2.7.3. Enabling mutual TLS (mTLS) authentication
Enable mutual TLS authentication to secure PostgreSQL database connections with certificate-based verification. This protects against unauthorized access and man-in-the-middle attacks while meeting enterprise security and compliance requirements.
Procedure
To configure each component’s database with mTLS authentication, add the required mTLS variables to your inventory file under the [all:vars] group, and ensure that each component has a different TLS certificate and key.
2.7.4. Using custom TLS certificates
By default, the installation program generates self-signed TLS certificates and keys for all Ansible Automation Platform services. However, you can optionally use custom TLS certificates.
Procedure
To replace these with your own custom certificate and key, set the following inventory file variables:
aap_ca_cert_file=<path_to_ca_tls_certificate>
aap_ca_key_file=<path_to_ca_tls_key>
If any of your certificates are signed by a custom Certificate Authority (CA), then you must specify the Certificate Authority’s certificate by using the custom_ca_cert inventory file variable:
custom_ca_cert=<path_to_custom_ca_certificate>
Note: If you have more than one custom CA certificate, combine them into a single file, then reference the combined certificate with the custom_ca_cert inventory file variable.
2.7.5. Receptor certificate considerations
When using a custom certificate for Receptor nodes, the certificate requires the otherName field specified in the Subject Alternative Name (SAN) of the certificate with the value 1.3.6.1.4.1.2312.19.1. For more information, see Above the mesh TLS.
Receptor does not support the usage of wildcard certificates. Additionally, each Receptor certificate must have the host FQDN specified in its SAN for TLS hostname validation to be correctly performed.
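As a sketch, an OpenSSL x509v3 extension section for a Receptor node certificate request might look like the following. The hostname is a placeholder; the otherName entry carries the required OID with the node's ID as a UTF8 string:

```ini
[ v3_req ]
subjectAltName = @receptor_alt_names

[ receptor_alt_names ]
# Host FQDN required for TLS hostname validation
DNS.1 = exec-node-1.example.com
# Receptor node ID under the required OID
otherName.1 = 1.3.6.1.4.1.2312.19.1;UTF8:exec-node-1.example.com
```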
2.7.6. Enabling the hstore extension for the automation hub PostgreSQL database
The database migration script uses hstore fields to store information; therefore, the hstore extension must be enabled in the automation hub PostgreSQL database.
This process is automatic when using the Ansible Automation Platform installer and a managed PostgreSQL server.
If the PostgreSQL database is external, you must enable the hstore extension in the automation hub PostgreSQL database manually before installation.
If the hstore extension is not enabled before installation, a failure is raised during database migration.
Procedure
Check if the extension is available on the PostgreSQL server (automation hub database):

$ psql -d <automation hub database> -c "SELECT * FROM pg_available_extensions WHERE name='hstore'"

Where the default value for <automation hub database> is automationhub.

Example output with hstore available:

  name  | default_version | installed_version |                     comment
--------+-----------------+-------------------+---------------------------------------------------
 hstore | 1.7             |                   | data type for storing sets of (key, value) pairs
(1 row)

Example output with hstore not available:

 name | default_version | installed_version | comment
------+-----------------+-------------------+---------
(0 rows)

On a RHEL based server, the hstore extension is included in the postgresql-contrib RPM package, which is not installed automatically when installing the PostgreSQL server RPM package. To install the RPM package, use the following command:

# dnf install postgresql-contrib

Load the hstore PostgreSQL extension into the automation hub database with the following command:

$ psql -d <automation hub database> -c "CREATE EXTENSION hstore;"

In the following output, the installed_version field lists the hstore extension used, indicating that hstore is enabled:

  name  | default_version | installed_version |                     comment
--------+-----------------+-------------------+---------------------------------------------------
 hstore | 1.7             | 1.7               | data type for storing sets of (key, value) pairs
(1 row)
2.7.7. Benchmarking storage performance for the Ansible Automation Platform PostgreSQL database
Use the Flexible I/O Tester (FIO) tool to verify that your storage system meets minimum Ansible Automation Platform PostgreSQL database requirements. FIO benchmarks read and write IOPS performance to help you evaluate storage capabilities.
Prerequisites
- You have installed the Flexible I/O Tester (fio) storage performance benchmarking tool. To install fio, run the following command as the root user:

  # yum -y install fio

- You have adequate disk space to store the fio test data log files. The examples shown in the procedure require at least 60GB disk space in the /tmp directory:
  - numjobs sets the number of jobs run by the command.
  - size=10G sets the file size generated by each job.
- If you do not have adequate disk space, reduce the value of the size parameter to generate less test data.
Procedure
Run a random write test:

$ fio --name=write_iops --directory=/tmp --numjobs=3 --size=10G \
  --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 \
  --verify=0 --bs=4K --iodepth=64 --rw=randwrite \
  --group_reporting=1 > /tmp/fio_benchmark_write_iops.log \
  2>> /tmp/fio_write_iops_error.log

Run a random read test:

$ fio --name=read_iops --directory=/tmp \
  --numjobs=3 --size=10G --time_based --runtime=60s --ramp_time=2s \
  --ioengine=libaio --direct=1 --verify=0 --bs=4K --iodepth=64 --rw=randread \
  --group_reporting=1 > /tmp/fio_benchmark_read_iops.log \
  2>> /tmp/fio_read_iops_error.log

Review the results:

In the log files written by the benchmark commands, search for the line beginning with iops. This line shows the minimum, maximum, and average values for the test.

The following example shows the line in the log file for the random read test:

$ cat /tmp/fio_benchmark_read_iops.log
read_iops: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
[…]
iops        : min=50879, max=61603, avg=56221.33, stdev=679.97, samples=360
[…]

Note: These results provide a best-case baseline for your systems. Performance can vary depending on what else is happening on your systems, storage, or network at the time of testing. Review, monitor, and revisit the log files according to your own business requirements, application workloads, and new demands.
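To compare runs without reading the logs by hand, the iops line can be extracted programmatically. The following sketch uses awk; the sample line is copied from the example output above, and you would point the same function at /tmp/fio_benchmark_read_iops.log for real results.

```shell
# Pull the min/max/avg values out of the "iops" line of a fio log.
extract_iops() {
    awk '/^ *iops *:/ {
        for (i = 1; i <= NF; i++)
            if ($i ~ /^(min|max|avg)=/) { sub(/,$/, "", $i); printf "%s ", $i }
        print ""
    }' "$1"
}

# Demonstrate on the sample line from the example output above:
printf 'iops        : min=50879, max=61603, avg=56221.33, stdev=679.97, samples=360\n' > /tmp/iops_sample.log
extract_iops /tmp/iops_sample.log
```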
Chapter 3. Installing Red Hat Ansible Automation Platform
Ansible Automation Platform is a modular platform. The platform gateway deploys automation platform components, such as automation controller, automation hub, and Event-Driven Ansible controller.
For more information about the components provided with Ansible Automation Platform, see Red Hat Ansible Automation Platform components in Planning your installation.
There are several supported installation scenarios for Red Hat Ansible Automation Platform. To install Red Hat Ansible Automation Platform, you must edit the inventory file parameters to specify your installation scenario. You can use the enterprise installer as a basis for your own inventory file.
3.2. Editing the Red Hat Ansible Automation Platform installer inventory file
You can use the Red Hat Ansible Automation Platform installer inventory file to specify your installation scenario.
3.2.1. For an RPM installation
Procedure
Navigate to the installer directory:

RPM installed package:

$ cd /opt/ansible-automation-platform/installer/

Bundled installer:

$ cd ansible-automation-platform-setup-bundle-<latest-version>

Online installer:

$ cd ansible-automation-platform-setup-<latest-version>

Open the inventory file with a text editor.

Edit the inventory file parameters to specify your installation scenario. You can use one of the supported installation scenario examples as the basis for your inventory file. For a containerized installation, see Configuring the inventory file.
3.3. Inventory file examples based on installation scenarios
Red Hat supports several installation scenarios for Ansible Automation Platform. You can develop your own inventory files using the example files as a basis, or you can use the example closest to your preferred installation scenario.
3.3.1. Inventory file recommendations based on installation scenarios
Before selecting your installation method for Ansible Automation Platform, review the following recommendations. Familiarity with these recommendations will streamline the installation process.
- Provide a reachable IP address or fully qualified domain name (FQDN) for hosts to ensure that users can sync and install content from automation hub from a different node. The FQDN must not contain the - or the _ symbol, as it will not be processed correctly. Do not use localhost.
- admin is the default user ID for the initial log in to Ansible Automation Platform and cannot be changed in the inventory file.
- Use of special characters for pg_password is limited. The !, #, 0 and @ characters are supported. Use of other special characters can cause the setup to fail.
- Enter your Red Hat Registry Service Account credentials in registry_username and registry_password to link to the Red Hat container registry.
- The inventory file variables registry_username and registry_password are only required if a non-bundle installer is used.
3.3.1.1. Single platform gateway and automation controller with an external (installer managed) database
Use this example to see what is minimally needed within the inventory file to deploy single instances of platform gateway and automation controller with an external (installer managed) database.
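The inventory example itself was not captured here; the following sketch illustrates the general shape of such a minimal inventory. Hostnames and passwords are placeholders, and the variable names should be verified against the inventory file shipped with your installer version.

```ini
[automationgateway]
gateway.example.com

[automationcontroller]
controller.example.com

[database]
data.example.com

[all:vars]
admin_password='<password>'

pg_host='data.example.com'
pg_port=5432
pg_database='awx'
pg_username='awx'
pg_password='<password>'

automationgateway_admin_password='<password>'
automationgateway_pg_host='data.example.com'
automationgateway_pg_password='<password>'

registry_url='registry.redhat.io'
registry_username='<registry username>'
registry_password='<registry password>'
```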
3.3.1.2. Single platform gateway, automation controller, and automation hub with an external (installer managed) database
Use this example to populate the inventory file to deploy single instances of platform gateway, automation controller, and automation hub with an external (installer managed) database.
3.3.1.3. Single platform gateway, automation controller, automation hub, and Event-Driven Ansible controller with an external (installer managed) database

Use this example to populate the inventory file to deploy single instances of platform gateway, automation controller, automation hub, and Event-Driven Ansible controller with an external (installer managed) database.
- This scenario requires a minimum of automation controller 2.4 for successful deployment of Event-Driven Ansible controller.
- Event-Driven Ansible controller must be installed on a separate server and cannot be installed on the same host as automation hub and automation controller.
- When an Event-Driven Ansible rulebook is activated under standard conditions, it uses approximately 250 MB of memory. However, the actual memory consumption can vary significantly based on the complexity of the rules and the volume and size of the events processed. In scenarios where a large number of events are anticipated or the rulebook complexity is high, conduct a preliminary assessment of resource usage in a staging environment. This ensures that the maximum number of activations is based on the resource capacity.
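As a rough illustration of the 250 MB figure above, the following arithmetic estimates an upper bound on concurrent activations for a hypothetical 16 GB worker node. Real capacity is lower, because the operating system and other services also need memory.

```shell
# Back-of-the-envelope ceiling on concurrent activations, using the
# ~250 MB-per-activation baseline. The 16 GB node size is illustrative,
# and no headroom is reserved -- treat the result as an upper bound,
# not a sizing target.
NODE_MEM_MB=$((16 * 1024))     # total memory of a hypothetical worker node
PER_ACTIVATION_MB=250          # approximate memory per standard activation
MAX_ACTIVATIONS=$((NODE_MEM_MB / PER_ACTIVATION_MB))
echo "$MAX_ACTIVATIONS"       # roughly 65
```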
Additional resources
For more information about these inventory variables, see Ansible automation hub variables in the Red Hat Ansible Automation Platform Installation Guide.
3.3.1.4. High availability automation hub
Configure inventory files to deploy high availability automation hub with clustered nodes, database hosts, and load balancing for enterprise-scale automation.
Use the following examples to populate the inventory file to install a highly available automation hub. This inventory file includes a highly available automation hub with a clustered setup.
You can configure your HA deployment further to enable a high availability deployment of automation hub on SELinux.
Specify database host IP
- Specify the IP address for your database host by using the automationhub_pg_host and automationhub_pg_port inventory variables. For example:

  automationhub_pg_host='192.0.2.10'
  automationhub_pg_port=5432

- Also specify the IP address for your database host in the [database] section, using the value in the automationhub_pg_host inventory variable:

  [database]
  192.0.2.10
List all instances in a clustered setup
- If installing a clustered setup, replace localhost ansible_connection=local in the [automationhub] section with the hostname or IP of all instances. For example:

  [automationhub]
  automationhub1.testing.ansible.com ansible_user=cloud-user
  automationhub2.testing.ansible.com ansible_user=cloud-user
  automationhub3.testing.ansible.com ansible_user=cloud-user
Next steps
Check that the following directives are present in /etc/pulp/settings.py on each of the private automation hub servers:

USE_X_FORWARDED_PORT = True
USE_X_FORWARDED_HOST = True
If you are using a load balancer, configure automationgateway_main_url to point to your load balancer. If automationgateway_main_url is not specified, the first node in the [automationgateway] group is used as the default.
3.3.1.5. Enabling a high availability (HA) deployment of automation hub on SELinux
You can configure the inventory file to enable high availability deployment of automation hub on SELinux. You must create two mount points for /var/lib/pulp and /var/lib/pulp/pulpcore_static, and then assign the appropriate SELinux contexts to each mount point.
You must add the context for /var/lib/pulp/pulpcore_static and run the Ansible Automation Platform installer before adding the context for /var/lib/pulp.
Prerequisites
You have already configured an NFS export on your server.

Note: The NFS share is hosted on an external server and is not part of the high availability automation hub deployment.
Procedure
Create a mount point at /var/lib/pulp:

$ mkdir /var/lib/pulp/

Open /etc/fstab using a text editor, then add the following values:

srv_rhel8:/data /var/lib/pulp nfs defaults,_netdev,nosharecache,context="system_u:object_r:var_lib_t:s0" 0 0
srv_rhel8:/data/pulpcore_static /var/lib/pulp/pulpcore_static nfs defaults,_netdev,nosharecache,context="system_u:object_r:httpd_sys_content_rw_t:s0" 0 0

Run the reload systemd manager configuration command:

$ systemctl daemon-reload

Run the mount command for /var/lib/pulp:

$ mount /var/lib/pulp

Create a mount point at /var/lib/pulp/pulpcore_static:

$ mkdir /var/lib/pulp/pulpcore_static

Run the mount command:

$ mount -a

With the mount points set up, run the Ansible Automation Platform installer:

$ setup.sh -- -b --become-user root

After the installation is complete, unmount the /var/lib/pulp/ mount point.
3.3.1.5.1. Configuring pulpcore.service
After you have configured the inventory file and applied the SELinux context, configure the pulp service. This ensures that automation hub services start only after starting the network and mounting the remote mount points.
Procedure
With the two mount points set up, shut down the Pulp service to configure pulpcore.service:

$ systemctl stop pulpcore.service

Edit pulpcore.service by using systemctl:

$ systemctl edit pulpcore.service

Add the following entry to pulpcore.service to ensure that automation hub services start only after starting the network and mounting the remote mount points:

[Unit]
After=network.target var-lib-pulp.mount

Enable remote-fs.target:

$ systemctl enable remote-fs.target

Reboot the system:

$ systemctl reboot
Troubleshooting
A bug in the pulpcore SELinux policies can cause the token authentication public/private keys in /etc/pulp/certs/ to not have the proper SELinux labels, causing the Pulp process to fail. When this occurs, run the following command to temporarily attach the proper labels:

$ chcon system_u:object_r:pulpcore_etc_t:s0 /etc/pulp/certs/token_{private,public}_key.pem
Repeat this command to reattach the proper SELinux labels whenever you relabel your system.
3.3.1.5.2. Applying the SELinux context
Apply the correct SELinux context to the Pulp directories to ensure proper file access permissions and security policy compliance.
After you have configured the inventory file, you must now apply the context to enable the high availability (HA) deployment of automation hub on SELinux.
Procedure
Shut down the Pulp service:

$ systemctl stop pulpcore.service

Unmount /var/lib/pulp/pulpcore_static:

$ umount /var/lib/pulp/pulpcore_static

Unmount /var/lib/pulp/:

$ umount /var/lib/pulp/

Open /etc/fstab using a text editor, then replace the existing value for /var/lib/pulp with the following:

srv_rhel8:/data /var/lib/pulp nfs defaults,_netdev,nosharecache,context="system_u:object_r:pulpcore_var_lib_t:s0" 0 0

Run the mount command:

$ mount -a
3.3.1.6. Configuring content signing on private automation hub
To successfully sign and publish Ansible Certified Content Collections, you must configure private automation hub for signing.
Prerequisites
- Your GnuPG key pairs have been securely set up and managed by your organization.
- Your public-private key pair has proper access for configuring content signing on private automation hub.
Procedure
Create a signing script that accepts only a filename.

Note: This script acts as the signing service and must generate an ascii-armored detached gpg signature for that file, using the key specified through the PULP_SIGNING_KEY_FINGERPRINT environment variable.

The script prints out a JSON structure with the following format:

{"file": "filename", "signature": "filename.asc"}

All the file names are relative paths inside the current working directory. The file name must remain the same for the detached signature.
Example: The following script produces signatures for content:
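The script body itself was not captured here; the following sketch implements the contract described above (detached, ASCII-armored signature via the key named in PULP_SIGNING_KEY_FINGERPRINT, JSON on stdout). It is written as a shell function for illustration; save the equivalent logic as an executable script that takes the filename as its only argument, and handle any key passphrase according to how your GnuPG key was provisioned.

```shell
# Signing-service sketch: sign the given file with gpg and print the
# JSON structure expected by the signing service. Save this logic as an
# executable script (path of your choosing) and reference it from your
# signing-service configuration.
sign_collection() {
    FILE_PATH="$1"
    SIGNATURE_PATH="$1.asc"
    # Create an ascii-armored detached signature with the configured key.
    if gpg --quiet --batch --yes \
           --default-key "$PULP_SIGNING_KEY_FINGERPRINT" \
           --detach-sign --armor \
           --output "$SIGNATURE_PATH" "$FILE_PATH"; then
        printf '{"file": "%s", "signature": "%s"}\n' "$FILE_PATH" "$SIGNATURE_PATH"
    else
        return 1
    fi
}
```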
After you deploy a private automation hub with signing enabled to your Ansible Automation Platform cluster, new UI additions are displayed in collections.
Review the Ansible Automation Platform installer inventory file for options that begin with automationhub_*. The two new keys (automationhub_auto_sign_collections and automationhub_require_content_approval) indicate that the collections must be signed and approved after they are uploaded to private automation hub.
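A hypothetical sketch of how those options might look in the inventory file. Only the two keys named above appear in this document; the remaining variable names and paths are assumptions drawn from the automationhub_* option family and should be verified against your installer's inventory reference.

```ini
[all:vars]
automationhub_create_default_collection_signing_service = True
automationhub_auto_sign_collections = True
automationhub_require_content_approval = True
automationhub_collection_signing_service_key = /absolute/path/to/collection_signing_key.gpg
automationhub_collection_signing_service_script = /absolute/path/to/collection_sign.sh
```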
3.3.1.7. Adding a safe plugin variable to Event-Driven Ansible controller
When using redhat.insights_eda or similar plugins to run rulebook activations in Event-Driven Ansible controller, you must add a safe plugin variable to a directory in Ansible Automation Platform. This ensures connection between Event-Driven Ansible controller and the source plugin, and displays port mappings correctly.
Procedure
- Create a directory for the safe plugin variable:

  $ mkdir -p ./group_vars/automationedacontroller

- Create a file within that directory for your new setting (for example, touch ./group_vars/automationedacontroller/custom.yml).
- Add the variable automationedacontroller_additional_settings to extend the default settings.yaml template for Event-Driven Ansible controller, and add the SAFE_PLUGINS field with a list of plugins to enable. For example:

  automationedacontroller_additional_settings:
    SAFE_PLUGINS:
      - ansible.eda.webhook
      - ansible.eda.alertmanager

Note: You can also extend the automationedacontroller_additional_settings variable beyond SAFE_PLUGINS in the Django configuration file /etc/ansible-automation-platform/eda/settings.yaml.
3.3.2. Setting registry_username and registry_password
When using the registry_username and registry_password variables for an online non-bundled installation, you need to create a new registry service account.
Registry service accounts are named tokens that you can use in environments where you share credentials, such as deployment systems.
Procedure
- Go to https://access.redhat.com/terms-based-registry/accounts.
- On the Registry Service Accounts page, click the button to create a new service account.
- Enter a name for the account using only the allowed characters.
- Optionally, enter a description for the account.
- Submit the form to create the account.
- Find the created account in the list by searching for your name in the search field.
- Click the name of the account that you created.

Alternatively, if you know the name of your token, you can go directly to the token page by entering the URL:

https://access.redhat.com/terms-based-registry/token/<name-of-your-token>

A token page opens, displaying a generated username (different from the account name) and a token.

- If no token is displayed, regenerate the token. Regenerating also creates a new username and token.
- Copy the username (for example, "1234567|testuser") and use it to set the variable registry_username.
- Copy the token and use it to set the variable registry_password.
3.3.2.1. Configuring Redis
Ansible Automation Platform offers a centralized Redis instance in both standalone and clustered topologies.
In RPM deployments, the Redis mode is set to cluster by default. You can change this setting in the inventory file [all:vars] section as in the following example:
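The example itself was not captured here; a sketch of switching to a standalone Redis instance follows. The variable name is an assumption inferred from the description above; verify it against your installer's inventory reference.

```ini
[all:vars]
# Default is cluster; set to standalone for a single Redis instance.
redis_mode=standalone
```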
For more information about Redis, see Caching and queueing system in Planning your installation.
3.4. Running the Red Hat Ansible Automation Platform installer setup script
After you update the inventory file with required parameters, run the installation program setup script.
RPM installer
Procedure
Run the setup.sh script:

$ sudo ./setup.sh

Note: If you are running the setup as a non-root user with sudo privileges, you can use the following command:

$ ANSIBLE_BECOME_METHOD='sudo' ANSIBLE_BECOME=True ./setup.sh

Installation of Red Hat Ansible Automation Platform begins.
3.5. Verifying installation of Ansible Automation Platform
Upon a successful login, your installation of Red Hat Ansible Automation Platform is complete.
If the installation fails and you are a customer who has purchased a valid license for Red Hat Ansible Automation Platform, contact Ansible through the Red Hat Customer portal.
3.6. Backing up your Ansible Automation Platform instance
Back up an existing Ansible Automation Platform instance by running the ./setup.sh script with the backup_dest flag. This saves the content and configuration of your current environment.
Use the compression flags use_archive_compression and use_db_compression to compress the backup artifacts before they are sent to the host running the backup operation.
Procedure
- Navigate to your Ansible Automation Platform installation directory.
Run the ./setup.sh script following the example below:

$ ./setup.sh -e 'backup_dest=/ansible/mybackup' -e 'use_archive_compression=true' -e 'use_db_compression=true' -e @credentials.yml -b

Where:
- backup_dest: Specifies a directory to save your backup to.
- use_archive_compression=true and use_db_compression=true: Compress the backup artifacts before they are sent to the host running the backup operation.

You can use the following variables to customize the compression:

- For global control of compression for filesystem related backup files: use_archive_compression=true
- For component-level control of compression for filesystem related backup files: <componentName>_use_archive_compression

  For example:
  - automationgateway_use_archive_compression=true
  - automationcontroller_use_archive_compression=true
  - automationhub_use_archive_compression=true
  - automationedacontroller_use_archive_compression=true

- For global control of compression for database related backup files: use_db_compression=true
- For component-level control of compression for database related backup files: <componentName>_use_db_compression=true

  For example:
  - automationgateway_use_db_compression=true
  - automationcontroller_use_db_compression=true
  - automationhub_use_db_compression=true
  - automationedacontroller_use_db_compression=true
Result
After a successful backup, a backup file is created at /ansible/mybackup/automation-platform-backup-<date/time>.tar.gz.
3.7. Adding a subscription manifest to Ansible Automation Platform
Before you first log in, you must add your subscription information to the platform. To add a subscription to Ansible Automation Platform, see Obtaining a manifest file in the Access management and authentication guide.
Chapter 4. Horizontal scaling in Red Hat Ansible Automation Platform
You can set up multi-node deployments for components across Ansible Automation Platform. Whether you require horizontal scaling for Automation Execution, Automation Decisions, or automation mesh, you can scale your deployments based on your organization’s needs.
4.1. Horizontal scaling in Event-Driven Ansible controller
With Event-Driven Ansible controller, you can set up horizontal scaling for your events automation. This multi-node deployment enables you to define as many nodes as you prefer during the installation process. You can also increase or decrease the number of nodes at any time according to your organizational needs.
The following node types are used in this deployment:
- API node type
- Responds to the HTTP REST API of Event-Driven Ansible controller.
- Worker node type
- Runs an Event-Driven Ansible worker, which is the component of Event-Driven Ansible that not only manages projects and activations, but also executes the activations themselves.
- Hybrid node type
- Is a combination of the API node and the worker node.
The following example shows how you can set up an inventory file for horizontal scaling of Event-Driven Ansible controller on Red Hat Enterprise Linux VMs using the host group name [automationedacontroller] and the node type variable eda_node_type:
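The inventory example itself was not captured here; the following sketch illustrates the shape of such a group, with illustrative hostnames.

```ini
[automationedacontroller]
# API node: serves the Event-Driven Ansible controller REST API
eda-api1.example.com eda_node_type=api

# Worker nodes: run activations and background tasks
eda-worker1.example.com eda_node_type=worker
eda-worker2.example.com eda_node_type=worker
```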
4.1.1. Sizing and scaling guidelines
API nodes process user requests (interactions with the UI or API) while worker nodes process the activations and other background tasks required for Event-Driven Ansible to function properly. The number of API nodes you require correlates to the required number of users of the application and the number of worker nodes correlates to the required number of activations you want to run.
Since activations are variable and controlled by worker nodes, the supported approach for scaling is to use separate API and worker nodes instead of hybrid nodes due to the efficient allocation of hardware resources by worker nodes. By separating the nodes, you can scale each type independently based on specific needs, leading to better resource utilization and cost efficiency.
An example of an instance in which you might consider scaling up your node deployment is when you want to deploy Event-Driven Ansible for a small group of users who will run a large number of activations. In this case, one API node is adequate, but if you require more, you can scale up to three additional worker nodes.
4.1.2. Setting up horizontal scaling for Event-Driven Ansible controller
To scale up (add more nodes) or scale down (remove nodes), you must update the content of the inventory file to add or remove nodes and rerun the installation program.
Procedure
Update the inventory to add two more worker nodes:
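The example itself was not captured here; a sketch of the updated [automationedacontroller] group with two additional worker nodes follows (hostnames illustrative).

```ini
[automationedacontroller]
eda-api1.example.com eda_node_type=api
eda-worker1.example.com eda_node_type=worker
eda-worker2.example.com eda_node_type=worker
# Newly added worker nodes
eda-worker3.example.com eda_node_type=worker
eda-worker4.example.com eda_node_type=worker
```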
- Re-run the installer.
Chapter 5. Disconnected installation
If you are not connected to the internet or do not have access to online repositories, you can install Red Hat Ansible Automation Platform without an active internet connection.
5.1. Prerequisites
Before installing Ansible Automation Platform on a disconnected network, you must meet the following prerequisites:
- A subscription manifest that you can upload to the platform.
For more information, see Obtaining a manifest file.
- You have downloaded the Ansible Automation Platform setup bundle from the Customer Portal.
- The DNS records for the automation controller and private automation hub servers are created.
5.2. Ansible Automation Platform installation on disconnected RHEL
You can install Ansible Automation Platform without an internet connection by using the installer-managed database located on the automation controller. The setup bundle is recommended for disconnected installation because it includes additional components that make installing Ansible Automation Platform easier in a disconnected environment. These include the Ansible Automation Platform Red Hat package managers (RPMs) and the default execution environment (EE) images.
5.2.1. System requirements for disconnected installation
Ensure that your system meets all the hardware requirements before performing a disconnected installation of Ansible Automation Platform. You can find these requirements in System requirements.
5.2.2. RPM Source
RPM dependencies for Ansible Automation Platform that come from the BaseOS and AppStream repositories are not included in the setup bundle. To add these dependencies, you must first obtain access to BaseOS and AppStream repositories. Use Satellite to sync repositories and add dependencies. If you prefer an alternative tool, you can choose between the following options:
- Reposync
- The RHEL Binary DVD
The RHEL Binary DVD method requires the DVD for supported versions of RHEL. See Red Hat Enterprise Linux Life Cycle for information on which versions of RHEL are currently supported.
5.3. Synchronizing RPM repositories using reposync
To perform a reposync, you need a Red Hat Enterprise Linux host that has access to the internet. After the repositories are synced, you can move them to the disconnected network and host them from a web server.

When downloading RPMs, ensure that you use the repositories applicable to your RHEL distribution.
Procedure
Attach the required BaseOS and AppStream repositories:

# subscription-manager repos \
    --enable rhel-9-for-x86_64-baseos-rpms \
    --enable rhel-9-for-x86_64-appstream-rpms

Perform the reposync:

# dnf install yum-utils
# reposync -m --download-metadata --gpgcheck \
    -p /path/to/download

Use reposync with --download-metadata and without --newest-only. See RHEL 8 Reposync.
- If you are not using --newest-only, the downloaded repositories can take an extended amount of time to sync because of the large number of GB transferred.
- If you are using --newest-only, only the latest version of each package is downloaded, which reduces both the download size and the sync time.
After the reposync is completed, your repositories are ready to use with a web server.
- Move the repositories to your disconnected network.
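There is no single required way to move the synced repositories; one minimal sketch is to archive them and copy the archive across on approved media. The paths below are stand-ins for the placeholder paths used in the steps above:

```shell
# Stand-in for /path/to/download from the reposync step above;
# substitute your real download path.
SRC=/tmp/reposync-download

# Created here only so this sketch runs end to end.
mkdir -p "$SRC"

# Archive the synced repositories for transfer to the disconnected network.
tar czf /tmp/synced-repos.tar.gz -C "$SRC" .

# On the disconnected side, unpack into the web server's repo root, for example:
#   tar xzf /tmp/synced-repos.tar.gz -C /path/to/repos
```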
5.4. Creating a new web server to host repositories
If you do not have an existing web server to host your repositories, you can create one with your synced repositories. This web server will host the repositories for your disconnected environment.
Procedure
Install prerequisites:

$ sudo dnf install httpd

Configure httpd to serve the repo directory.

Ensure that the directory is readable by the apache user:

$ sudo chown -R apache /path/to/repos

Configure SELinux:

$ sudo semanage fcontext -a -t httpd_sys_content_t "/path/to/repos(/.*)?"
$ sudo restorecon -ir /path/to/repos

Enable httpd:

$ sudo systemctl enable --now httpd.service

Open the firewall:

$ sudo firewall-cmd --zone=public --add-service=http --add-service=https --permanent
$ sudo firewall-cmd --reload

On the automation services, add a repo file at /etc/yum.repos.d/local.repo, and add the optional repos if needed.
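The contents of the local.repo file were not preserved on this page. The following is a minimal sketch only, assuming a web server reachable at the hypothetical hostname webserver.example.com that serves the synced repositories from /path/to/repos:

```ini
[rhel-9-for-x86_64-baseos-rpms]
name=RHEL 9 BaseOS (local mirror)
baseurl=http://webserver.example.com/repos/rhel-9-for-x86_64-baseos-rpms
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[rhel-9-for-x86_64-appstream-rpms]
name=RHEL 9 AppStream (local mirror)
baseurl=http://webserver.example.com/repos/rhel-9-for-x86_64-appstream-rpms
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
```

Adjust the repo IDs, baseurl values, and gpgkey path to match your mirror layout.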
5.5. Accessing RPM repositories from a locally mounted DVD
In disconnected or air-gapped environments, you can install Ansible Automation Platform by using packages from a locally mounted RHEL DVD or ISO image. Learn how to mount the media and configure yum repositories to access BaseOS and AppStream packages for offline installation.
If you plan to access the repositories from the RHEL binary DVD, you must first set up a local repository.
Procedure
Mount the DVD or ISO:

DVD:

# mkdir /media/rheldvd && mount /dev/sr0 /media/rheldvd

ISO:

# mkdir /media/rheldvd && mount -o loop rhel-8.6-x86_64-dvd.iso /media/rheldvd

Create a yum repo file at /etc/yum.repos.d/dvd.repo.

Import the GPG key:

# rpm --import /media/rheldvd/RPM-GPG-KEY-redhat-release

Note: If the key is not imported, you see an error similar to the following:

Curl error (6): Couldn't resolve host name for https://www.redhat.com/security/data/fd431d51.txt [Could not resolve host: www.redhat.com]
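The contents of the dvd.repo file were lost in extraction. A minimal sketch, assuming the /media/rheldvd mount point used in this procedure (the repo IDs are illustrative):

```ini
[dvd-BaseOS]
name=DVD for RHEL - BaseOS
baseurl=file:///media/rheldvd/BaseOS
enabled=1
gpgcheck=1
gpgkey=file:///media/rheldvd/RPM-GPG-KEY-redhat-release

[dvd-AppStream]
name=DVD for RHEL - AppStream
baseurl=file:///media/rheldvd/AppStream
enabled=1
gpgcheck=1
gpgkey=file:///media/rheldvd/RPM-GPG-KEY-redhat-release
```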
5.6. Downloading and installing the Ansible Automation Platform setup bundle
Choose the setup bundle to download Ansible Automation Platform for disconnected installations. This bundle includes the RPM content for Ansible Automation Platform and the default execution environment images that will be uploaded to your private automation hub during the installation process.
Procedure
- Download the Ansible Automation Platform setup bundle package by navigating to the Red Hat Ansible Automation Platform download page and selecting the Ansible Automation Platform 2.6 Setup Bundle.
On the control node, untar the bundle:

$ tar xvf ansible-automation-platform-setup-bundle-2.6-1.tar.gz
$ cd ansible-automation-platform-setup-bundle-2.6-1

- Edit the inventory file to include variables based on your host names and desired password values.
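The exact inventory edits depend on your installation scenario. As a rough sketch only, with hypothetical hostnames for a single controller, a private automation hub, and an installer-managed database (the group and variable names follow the RPM installer's conventions, but check the inventory file shipped in the bundle for the authoritative set):

```ini
[automationcontroller]
controller.example.com

[automationhub]
hub.example.com

[database]
db.example.com

[all:vars]
admin_password='<controller admin password>'
pg_host='db.example.com'
pg_password='<database password>'
automationhub_admin_password='<hub admin password>'
automationhub_pg_host='db.example.com'
automationhub_pg_password='<hub database password>'
```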
5.7. Completing post installation tasks
After you have completed the installation of Ansible Automation Platform, ensure that automation hub and automation controller deploy properly.
Before your first login, you must add your subscription information to the platform. To obtain your subscription information in uploadable form, see Obtaining a manifest file.
Once you have obtained your subscription manifest, see Getting started with Ansible Automation Platform for instructions on how to upload your subscription information.
Now that you have successfully installed Ansible Automation Platform, to begin using its features, see the following guides for your next steps:
Chapter 6. Troubleshooting RPM installation of Ansible Automation Platform
Resolve common installation issues and errors that can occur when installing RPM-based Ansible Automation Platform. Learn how to generate diagnostic logs to identify problems.
6.1. Gathering Ansible Automation Platform logs
With the sos utility, you can collect configuration, diagnostic, and troubleshooting data, and give those files to Red Hat Technical Support.
An sos report is a common starting point for Red Hat technical support engineers when performing analysis of a service request for the Ansible Automation Platform.
As part of the troubleshooting with Red Hat Support, you can collect the sos report for each node in your RPM-based installation of Ansible Automation Platform using the installation inventory and the installation program.
Procedure
Access the installation program folder with the inventory file and run the installation program setup script with the following command:

$ ./setup.sh -s

With this command, you can connect to each node present in the inventory, install the sos tool, and generate new logs.

Note: If you are running the setup as a non-root user with sudo privileges, you can use the following command:

$ ANSIBLE_BECOME_METHOD='sudo' ANSIBLE_BECOME=True ./setup.sh -s

Optional: If required, change the location of the sos report files. The sos report files are copied to the /tmp folder on the current server. To change the location, specify the new location by using the following command:

$ ./setup.sh -e 'target_sos_directory=/path/to/files' -s

Where target_sos_directory=/path/to/files specifies the destination directory where the sos report is saved. In this case, the sos report is stored in the directory /path/to/files.

Gather the files described in the playbook output and share them with the support engineer, or directly upload the sos report to Red Hat.

To create an sos report with additional information or to directly upload the data to Red Hat, use the following command:

$ ./setup.sh -e 'case_number=0000000' -e 'clean=true' -e 'upload=true' -s

Table 6.1. Parameter Reference Table

| Parameter | Description | Default value |
|---|---|---|
| case_number | Specifies the support case number that you want. | - |
| clean | Obfuscates sensitive data that might be present in the sos report. | false |
| upload | Automatically uploads the sos report data to Red Hat. | false |

To learn more about the sos report tool, see: What is an SOS report and how to create one in Red Hat Enterprise Linux?
Appendix A. Inventory file variables
The following tables contain information about the variables used in Ansible Automation Platform’s installation inventory files. The tables include the variables that you can use for RPM-based installation and container-based installation.
A.1. Ansible variables
The following variables control how Ansible Automation Platform interacts with remote hosts.
| Variable | Description |
|---|---|
|
| The connection plugin used for the task on the target host. This can be the name of any Ansible connection plugin.
SSH protocol types are
Default = |
|
|
The IP address or name of the target host to use instead of |
|
| The password to authenticate to the host. Do not store this variable in plain text. Always use a vault. |
|
| The connection port number.
The default for SSH is |
|
|
This setting is always appended to the default |
|
|
This setting is always appended to the default |
|
|
This sets the shell that the Ansible controller uses on the target machine and overrides the executable in |
|
| The shell type of the target system.
Do not use this setting unless you have set the |
|
|
This setting is always appended to the default command line for |
|
|
This setting overrides the default behavior to use the system |
|
|
This setting is always appended to the default |
|
|
Determines if SSH
This can override the |
|
| Private key file used by SSH. Useful if using multiple keys and you do not want to use an SSH agent. |
|
| The user name to use when connecting to the host.
Do not change this variable unless |
|
| This variable takes the hostname of the machine from the inventory script or the Ansible configuration file. You cannot set the value of this variable. Because the value is taken from the configuration file, the actual runtime hostname value can vary from what is returned by this variable. |
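For illustration, several of the behavioral connection variables described above are commonly set per host in an INI inventory. The hostname, address, port, and key path below are hypothetical:

```ini
[automationcontroller]
controller.example.com ansible_host=203.0.113.10 ansible_user=ansible ansible_port=2222 ansible_ssh_private_key_file=~/.ssh/controller_key
```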
A.2. Automation hub variables
Inventory file variables for automation hub.
| RPM variable name | Container variable name | Description | Required or optional | Default |
|---|---|---|---|---|
|
|
|
Automation hub administrator password. Use of special characters for this variable is limited. The password can include any printable ASCII character except | Required | |
|
| Set the existing token for the installation program. For example, a regenerated token in the automation hub UI will invalidate an existing token. Use this variable to set that token in the installation program the next time you run the installation program. | Optional | ||
|
|
|
If a collection signing service is enabled, collections are not signed automatically by default. Set this variable to | Optional |
|
|
|
Ansible automation hub provides artifacts in | Optional |
| |
|
|
| Maximum allowed size for data sent to automation hub through NGINX. | Optional |
|
|
| Denote whether or not the collection download count should be displayed in the UI. | Optional |
| |
|
|
Controls the type of content to upload when | Optional | Both certified and validated are enabled by default. | |
|
|
| Path to the collection signing key file. | Required if a collection signing service is enabled. | |
|
|
Denote whether or not to run the command | Optional |
| |
|
|
| Path to the container signing key file. | Required if a container signing service is enabled. | |
|
|
|
Set this variable to | Optional |
|
|
|
|
Set this variable to | Optional |
|
|
| automation hub backup path to exclude. | Optional |
| |
|
|
|
Controls whether HTTP Strict Transport Security (HSTS) is enabled or disabled for automation hub. Set this variable to | Optional |
|
|
|
|
Controls whether HTTPS is enabled or disabled for automation hub. Set this variable to | Optional |
|
|
|
Controls whether logging is enabled or disabled at | Optional |
| |
|
|
Controls whether read-only access is enabled or disabled for unauthorized users viewing collections or namespaces for automation hub. Set this variable to | Optional |
| |
|
|
Controls whether or not unauthorized users can download read-only collections from automation hub. Set this variable to | Optional |
| |
|
|
| The firewall zone where automation hub related firewall rules are applied. This controls which networks can access automation hub based on the zone’s trust level. | Optional |
RPM = no default set. Container = |
|
|
Denote whether or not to require the change of the default administrator password for automation hub during installation. Set to | Optional |
| |
|
|
|
Dictionary of settings to pass to the | Optional | |
|
|
Denote whether the web certificate sources are local to the installation program ( | Optional |
The value defined in | |
|
|
|
Controls whether client certificate authentication is enabled or disabled on the automation hub PostgreSQL database. Set this variable to | Optional |
|
|
|
| Name of the PostgreSQL database used by automation hub. | Optional |
RPM = |
|
|
| Hostname of the PostgreSQL database used by automation hub. | Required |
RPM = |
|
|
|
Password for the automation hub PostgreSQL database user. Use of special characters for this variable is limited. The | Optional | |
|
|
| Port number for the PostgreSQL database used by automation hub. | Optional |
|
|
|
|
Controls the SSL/TLS mode to use when automation hub connects to the PostgreSQL database. Valid options include | Optional |
|
|
|
| Username for the automation hub PostgreSQL database user. | Optional |
RPM = |
|
|
| Path to the PostgreSQL SSL/TLS certificate file for automation hub. | Required if using client certificate authentication. | |
|
|
| Path to the PostgreSQL SSL/TLS key file for automation hub. | Required if using client certificate authentication. | |
|
|
Denote whether the PostgreSQL client certificate sources are local to the installation program ( | Optional |
The value defined in | |
|
|
Controls whether content signing is enabled or disabled for automation hub. By default when you upload collections to automation hub, an administrator must approve it before they are made available to users. To disable the content approval flow, set the variable to | Optional |
| |
|
|
Controls whether or not existing signing keys should be restored from a backup. Set to | Optional |
| |
|
|
|
Controls whether or not pre-loading of collections is enabled. When you run the bundle installer, validated content is uploaded to the | Optional |
|
|
|
| Path to the SSL/TLS certificate file for automation hub. | Optional | |
|
|
| Path to the SSL/TLS key file for automation hub. | Optional | |
|
|
|
Denote whether the automation hub provided certificate files are local to the installation program ( | Optional |
|
|
|
|
Controls whether archive compression is enabled or disabled for automation hub. You can control this functionality globally by using | Optional |
|
|
|
|
Controls whether database compression is enabled or disabled for automation hub. You can control this functionality globally by using | Optional |
|
|
|
| List of additional NGINX headers to add to automation hub’s NGINX configuration. | Optional |
|
|
|
Controls whether automation hub is the only registry for execution environment images. If set to | Optional |
| |
|
|
Controls whether or not a token is generated for automation hub during installation. By default, a token is automatically generated during a fresh installation. If set to | Optional |
| |
|
| Defines additional settings for use by automation hub during installation. For example: hub_extra_settings:
- setting: REDIRECT_IS_HTTPS
value: True
| Optional |
| |
|
|
| Maximum duration (in seconds) that HTTP Strict Transport Security (HSTS) is enforced for automation hub. | Optional |
|
|
|
| Secret key value used by automation hub to sign and encrypt data. | Optional | |
|
| Azure blob storage account key. | Required if using an Azure blob storage backend. | ||
|
| Account name associated with the Azure blob storage. | Required when using an Azure blob storage backend. | ||
|
| Name of the Azure blob storage container. | Optional |
| |
|
| Defines extra parameters for the Azure blob storage backend. For more information about the list of parameters, see django-storages documentation - Azure Storage. | Optional |
| |
|
| Password for the automation content collection signing service. | Required if the collection signing service is protected by a passphrase. | ||
|
| Service for signing collections. | Optional |
| |
|
| Password for the automation content container signing service. | Required if the container signing service is protected by a passphrase. | ||
|
| Service for signing containers. | Optional |
| |
|
| Port number that automation hub listens on for HTTP requests. | Optional |
| |
|
| Port number that automation hub listens on for HTTPS requests. | Optional |
| |
|
|
| Protocols that automation hub will support when handling HTTPS traffic. | Optional |
|
|
| UNIX socket used by automation hub to connect to the PostgreSQL database. | Optional | ||
|
| AWS S3 access key. | Required if using an AWS S3 storage backend. | ||
|
| Name of the AWS S3 storage bucket. | Optional |
| |
|
| Used to define extra parameters for the AWS S3 storage backend. For more information about the list of parameters, see django-storages documentation - Amazon S3. | Optional |
| |
|
| AWS S3 secret key. | Required if using an AWS S3 storage backend. | ||
|
| Mount options for the Network File System (NFS) share. | Optional |
| |
|
|
Path to the Network File System (NFS) share with read, write, and execute (RWX) access. The value must match the format |
Required if installing more than one instance of automation hub with a | ||
|
|
Automation hub storage backend type. Possible values include: | Optional |
| |
|
| Number of automation hub workers. | Optional |
|
A.3. Automation controller variables
Inventory file variables for automation controller.
| RPM variable name | Container variable name | Description | Required or optional | Default |
|---|---|---|---|---|
|
|
| Email address used by Django for the admin user for automation controller. | Optional |
|
|
|
|
Automation controller administrator password. Use of special characters for this variable is limited. The password can include any printable ASCII character except | Required | |
|
|
| Username used to identify and create the administrator user in automation controller. | Optional |
|
|
|
| Maximum allowed size for data sent to automation controller through NGINX. | Optional |
|
|
|
|
Controls whether archive compression is enabled or disabled for automation controller. You can control this functionality globally by using | Optional |
|
|
|
|
Controls whether database compression is enabled or disabled for automation controller. You can control this functionality globally by using | Optional |
|
|
|
|
Controls whether client certificate authentication is enabled or disabled on the automation controller PostgreSQL database. Set this variable to | Optional |
|
|
|
| The firewall zone where automation controller related firewall rules are applied. This controls which networks can access automation controller based on the zone’s trust level. | Optional |
|
|
|
Denote whether the web certificate sources are local to the installation program ( | Optional |
The value defined in | |
|
|
Denote whether the PostgreSQL client certificate sources are local to the installation program ( | Optional |
The value defined in | |
|
|
|
Denote whether the automation controller provided certificate files are local to the installation program ( | Optional |
|
|
|
|
Controls whether HTTP Strict Transport Security (HSTS) is enabled or disabled for automation controller. Set this variable to | Optional |
|
|
|
|
Controls whether HTTPS is enabled or disabled for automation controller. Set this variable to | Optional |
|
|
|
| Maximum duration (in seconds) that HTTP Strict Transport Security (HSTS) is enforced for automation controller. | Optional |
|
|
|
| Port number that automation controller listens on for HTTP requests. | Optional |
RPM = |
|
|
| Port number that automation controller listens on for HTTPS requests. | Optional |
RPM = |
|
|
| Protocols that automation controller supports when handling HTTPS traffic. | Optional |
|
|
|
| List of additional NGINX headers to add to automation controller’s NGINX configuration. | Optional |
|
|
| Controls whether or not to create preloaded content during installation. | Optional |
| |
|
|
The status of a node or group of nodes. Valid options include | Optional |
| |
|
|
See |
For the
For the
| Optional |
For |
|
|
See |
Used to indicate which nodes a specific host or group connects to. Wherever this variable is defined, an outbound connection to the specific host or group is established. This variable can be a comma-separated list of hosts and groups from the inventory. This is resolved into a set of hosts that is used to construct the | Optional | |
|
|
| Name of the PostgreSQL database used by automation controller. | Optional |
|
|
|
| Hostname of the PostgreSQL database used by automation controller. | Required | |
|
|
|
Password for the automation controller PostgreSQL database user. Use of special characters for this variable is limited. The | Required if not using client certificate authentication. | |
|
|
| Port number for the PostgreSQL database used by automation controller. | Optional |
|
|
|
|
Controls the SSL/TLS mode to use when automation controller connects to the PostgreSQL database. Valid options include | Optional |
|
|
|
| Username for the automation controller PostgreSQL database user. | Optional |
|
|
|
| Path to the PostgreSQL SSL/TLS certificate file for automation controller. | Required if using client certificate authentication. | |
|
|
| Path to the PostgreSQL SSL/TLS key file for automation controller. | Required if using client certificate authentication. | |
|
|
Number of hours worth of events table partitions to pre-create before starting a backup to avoid | Optional | 3 | |
|
|
|
Number of requests | Optional |
|
|
|
| Path to the SSL/TLS certificate file for automation controller. | Optional | |
|
|
| Path to the SSL/TLS key file for automation controller. | Optional | |
|
| Number of event workers that handle job-related events inside automation controller. | Optional |
| |
|
| Defines additional settings for use by automation controller during installation. For example: controller_extra_settings:
- setting: USE_X_FORWARDED_HOST
value: true
| Optional |
| |
|
| Path to the automation controller license file. | |||
|
| Memory allocation for automation controller. | Optional |
| |
|
| UNIX socket used by automation controller to connect to the PostgreSQL database. | Optional | ||
|
| Secret key value used by automation controller to sign and encrypt data. | Optional |
A.4. Database variables
Inventory file variables for the database used with Ansible Automation Platform.
| RPM variable name | Container variable name | Description | Required or optional | Default |
|---|---|---|---|---|
|
|
| Port number for the PostgreSQL database. | Optional |
|
|
|
| Defines additional settings for use by PostgreSQL. Example usage for RPM: postgresql_extra_settings: ssl_ciphers: 'HIGH:!aNULL:!MD5'
Example usage for containerized: postgresql_extra_settings:
- setting: ssl_ciphers
value: 'HIGH:!aNULL:!MD5'
| Optional | |
|
|
| The firewall zone where PostgreSQL related firewall rules are applied. This controls which networks can access PostgreSQL based on the zone’s trust level. | Optional |
RPM = no default set. Container = |
|
|
| Maximum number of concurrent connections to the database if you are using an installer-managed database. For more information see PostgreSQL database configuration and maintenance for automation controller. | Optional |
|
|
|
| Path to the PostgreSQL SSL/TLS certificate file. | Optional | |
|
|
| Path to the PostgreSQL SSL/TLS key file. | Optional | |
|
|
| Controls whether SSL/TLS is enabled or disabled for the PostgreSQL database. | Optional |
|
|
| Database name used for connections to the PostgreSQL database server. | Optional |
| |
|
| Password for the PostgreSQL admin user. When used, the installation program creates each component’s database and credentials. |
Required if using | ||
|
| Username for the PostgreSQL admin user. When used, the installation program creates each component’s database and credentials. | Optional |
| |
|
| Memory allocation available (in MB) for caching data. | Optional | ||
|
|
Controls whether or not to keep databases during uninstall. This variable applies to databases managed by the installation program only, and not external (customer-managed) databases. Set to | Optional |
| |
|
| Destination for server log output. | Optional |
| |
|
| The algorithm for encrypting passwords. | Optional |
| |
|
| Memory allocation (in MB) for shared memory buffers. | Optional | ||
|
|
Denote whether the PostgreSQL provided certificate files are local to the installation program ( | Optional |
| |
|
|
Controls whether archive compression is enabled or disabled for PostgreSQL. You can control this functionality globally by using | Optional |
|
A.5. Event-Driven Ansible controller variables
Inventory file variables for Event-Driven Ansible controller.
| RPM variable name | Container variable name | Description | Required or optional | Default |
|---|---|---|---|---|
|
|
| Number of workers used for ansible-rulebook activation pods in Event-Driven Ansible. | Optional |
RPM = (# of cores or threads) * 2 + 1. Container = |
|
|
| Email address used by Django for the admin user for Event-Driven Ansible. | Optional |
|
|
|
|
Event-Driven Ansible administrator password. Use of special characters for this variable is limited. The password can include any printable ASCII character except | Required | |
|
|
| Username used to identify and create the administrator user in Event-Driven Ansible. | Optional |
|
|
| Number of workers for handling the API served through Gunicorn on worker nodes. | Optional |
| |
|
|
Denote whether the cache cert sources are local to the installation program ( | Optional |
| |
|
|
Controls whether or not to regenerate Event-Driven Ansible client certificates for the platform cache. Set to | Optional |
| |
|
|
| Number of workers used in Event-Driven Ansible for application work. | Optional | Number of cores or threads |
|
|
|
Controls whether HTTP Strict Transport Security (HSTS) is enabled or disabled for Event-Driven Ansible. Set this variable to | Optional |
|
|
|
|
Controls whether HTTPS is enabled or disabled for Event-Driven Ansible. Set this variable to | Optional |
|
|
|
|
Controls whether event stream mutual TLS (mTLS) authentication is enabled or disabled for Event-Driven Ansible. Set this variable to | Optional |
|
|
|
| The prefix path for the event stream mTLS URLs. | Optional |
|
|
|
| API prefix path used for Event-Driven Ansible event-stream through platform gateway. | Optional |
|
|
|
| The firewall zone where Event-Driven Ansible related firewall rules are applied. This controls which networks can access Event-Driven Ansible based on the zone’s trust level. | Optional |
RPM = no default set. Container = |
|
| Number of workers for handling event streaming for Event-Driven Ansible. | Optional |
| |
|
|
| Number of workers for handling the API served through Gunicorn. | Optional | (Number of cores or threads) * 2 + 1 |
|
|
| Port number that Event-Driven Ansible listens on for HTTP requests. | Optional |
RPM = |
|
|
| Port number that Event-Driven Ansible listens on for HTTPS requests. | Optional |
RPM = |
|
|
Denote whether the web cert sources are local to the installation program ( | Optional |
| |
|
|
|
Controls whether client certificate authentication is enabled or disabled on the Event-Driven Ansible PostgreSQL database. Set this variable to | Optional |
|
|
|
| Name of the PostgreSQL database used by Event-Driven Ansible. | Optional |
RPM = |
|
|
| Hostname of the PostgreSQL database used by Event-Driven Ansible. | Required | |
|
|
|
Password for the Event-Driven Ansible PostgreSQL database user. Use of special characters for this variable is limited. The | Required if not using client certificate authentication. | |
|
|
| Port number for the PostgreSQL database used by Event-Driven Ansible. | Optional |
|
|
|
|
Determines the level of encryption and authentication for client server connections. Valid options include | Optional |
|
|
|
| Username for the Event-Driven Ansible PostgreSQL database user. | Optional |
RPM = |
|
|
| Path to the PostgreSQL SSL/TLS certificate file for Event-Driven Ansible. | Required if using client certificate authentication. | |
|
|
| Path to the PostgreSQL SSL/TLS key file for Event-Driven Ansible. | Required if using client certificate authentication. | |
|
|
Denote whether the PostgreSQL client cert sources are local to the installation program ( | Optional |
| |
|
|
|
URL for connecting to the event stream. The URL must start with the | Optional | |
|
|
| Hostname of the Redis host used by Event-Driven Ansible. | Optional |
First node in the |
|
|
| Password for Event-Driven Ansible Redis. | Optional | Randomly generated string |
|
|
| Port number for the Redis host for Event-Driven Ansible. | Optional |
RPM = The value defined in platform gateway’s implementation ( |
|
|
| Username for Event-Driven Ansible Redis. | Optional |
|
|
|
| Secret key value used by Event-Driven Ansible to sign and encrypt data. | Optional | |
|
|
| Path to the SSL/TLS certificate file for Event-Driven Ansible. | Optional | |
|
|
| Path to the SSL/TLS key file for Event-Driven Ansible. | Optional | |
|
|
|
Denote whether the Event-Driven Ansible provided certificate files are local to the installation program ( | Optional |
|
|
|
List of host addresses in the form: | Optional |
| |
|
|
|
Controls whether archive compression is enabled or disabled for Event-Driven Ansible. You can control this functionality globally by using | Optional |
|
|
|
|
Controls whether database compression is enabled or disabled for Event-Driven Ansible. You can control this functionality globally by using | Optional |
|
|
|
| List of additional NGINX headers to add to Event-Driven Ansible’s NGINX configuration. | Optional |
|
|
|
Controls whether or not to perform SSL verification for the Daphne WebSocket used by Podman to communicate from the pod to the host. Set to | Optional |
| |
|
|
|
Event-Driven Ansible node type. Valid options include | Optional |
|
|
|
Controls whether debug mode is enabled or disabled for Event-Driven Ansible. Set to | Optional |
| |
|
| Defines additional settings for use by Event-Driven Ansible during installation. For example:
eda_extra_settings:
  - setting: RULEBOOK_READINESS_TIMEOUT_SECONDS
    value: 120
| Optional |
| |
|
| Maximum allowed size for data sent to Event-Driven Ansible through NGINX. | Optional |
| |
|
| Maximum duration (in seconds) that HTTP Strict Transport Security (HSTS) is enforced for Event-Driven Ansible. | Optional |
| |
|
|
| Protocols that Event-Driven Ansible supports when handling HTTPS traffic. | Optional |
|
|
| UNIX socket used by Event-Driven Ansible to connect to the PostgreSQL database. | Optional | ||
|
|
| Controls whether TLS is enabled or disabled for Event-Driven Ansible Redis. Set this variable to true to disable TLS. | Optional |
|
|
| Path to the Event-Driven Ansible Redis certificate file. | Optional | ||
|
| Path to the Event-Driven Ansible Redis key file. | Optional | ||
|
| List of plugins that are allowed to run within Event-Driven Ansible. For more information, see Adding a safe plugin variable to Event-Driven Ansible controller. | Optional |
|
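The Event-Driven Ansible variables above are set in the installer inventory file. A minimal sketch follows, built around the `eda_extra_settings` example from the table, which is the only variable name that survives in this section; any other variables you add here depend on the names documented for your installer version:

```yaml
# Event-Driven Ansible inventory fragment (containerized installer).
# eda_extra_settings passes arbitrary EDA settings at install time;
# the setting shown is the example from the table above.
eda_extra_settings:
  - setting: RULEBOOK_READINESS_TIMEOUT_SECONDS
    value: 120
```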
A.6. General variables
General inventory file variables for Ansible Automation Platform.
| RPM variable name | Container variable name | Description | Required or optional | Default |
|---|---|---|---|---|
|
|
| Path to the user provided CA certificate file used to generate SSL/TLS certificates for all Ansible Automation Platform services. For more information, see Optional: Using custom TLS certificates. | Optional | |
|
|
|
Denote whether the CA certificate files are local to the installation program ( | Optional |
|
|
| Bit size of the internally managed CA certificate private key. | Optional |
| |
|
|
|
Path to the key file for the CA certificate provided in | Optional | |
|
| Cipher used for signing the internally managed CA certificate private key. | Optional |
| |
|
| Denotes whether or not to regenerate the internally managed CA certificate key pair. | Optional |
| |
|
| Bit size of the component key pair managed by the internal CA. | Optional |
| |
|
| Denotes whether or not to regenerate the component key pair managed by the internal CA. | Optional |
| |
|
|
A list of additional SAN records for signing a service. Assign these to components in the inventory file as host variables rather than group or all variables. All strings must also contain their corresponding SAN option prefix such as | Optional |
| |
|
|
Directory local to | Optional |
The value defined in | |
|
|
| Directory used to store backup files. | Optional |
RPM = |
|
| Prefix used for the file backup name for the final backup file. | Optional |
| |
|
|
|
Controls whether or not to perform an offline or bundled installation. Set this variable to | Optional |
|
|
|
| Path to the bundle directory used when performing a bundle install. |
Required if |
RPM = |
|
|
| Path to the custom CA certificate file. This is required if any of the TLS certificates you manually provided are signed by a custom CA. For more information, see Optional: Using custom TLS certificates. | Optional | |
|
|
By default, the installation registers the node with Red Hat Insights for Red Hat Ansible Automation Platform if the node is registered with Subscription Manager. Set to | Optional |
| |
|
|
|
Password credential for access to the registry source defined in |
RPM = Required if you need a password to access | |
|
|
| URL of the registry source from which to pull execution environment images. | Optional |
|
|
|
|
Username credential for access to the registry source defined in |
RPM = Required if you need a password to access | |
|
|
| Controls whether SSL/TLS certificate verification is enabled or disabled when making HTTPS requests. | Optional |
|
|
| Path to the tar file used for the platform restore. | Optional |
| |
|
| Path prefix for the staged restore components. | Optional |
| |
|
|
|
Used if the machine running the installation program can only route to the target host through a specific URL. For example, if you use short names in your inventory, but the node running the installation program can only resolve that host by using an FQDN. If | Optional | |
|
|
|
Controls at a global level whether the filesystem-related backup files are compressed before being sent to the host to run the backup operation. If set to
You can control this functionality at a component level by using the | Optional |
|
|
|
| Controls at a global level whether the database-related backup files are compressed before being sent to the host to run the backup operation.
You can control this functionality at a component level by using the | Optional |
|
|
|
Passphrase used to decrypt the key provided in | Optional | ||
|
|
Sets the HTTP timeout for end-user requests. The minimum value is | Optional |
| |
|
| Compression software to use for compressing container images. | Optional |
| |
|
|
Controls whether or not to keep container images when uninstalling Ansible Automation Platform. Set to | Optional |
| |
|
|
Controls whether or not to pull newer container images during installation. Set to | Optional |
| |
|
| The directory where the installation program temporarily stores container images during installation. | Optional | The system’s temporary directory. | |
|
| The firewall zone where Performance Co-Pilot related firewall rules are applied. This controls which networks can access Performance Co-Pilot based on the zone’s trust level. | Optional | public | |
|
|
Controls whether archive compression is enabled or disabled for Performance Co-Pilot. You can control this functionality globally by using | Optional |
| |
|
|
Controls whether to use registry authentication. When set to | Optional |
| |
|
| Ansible Automation Platform registry namespace. | Optional |
| |
|
| RHEL registry namespace. | Optional |
|
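Several of the general variables above combine in a typical offline (bundled) installation. A hedged sketch follows, assuming the containerized installer's variable names (`bundle_install`, `bundle_dir`, `registry_url`, `registry_username`, `registry_password`, `custom_ca_cert`); confirm these names against your installer version, and replace the placeholder paths and credentials:

```yaml
# General installer settings for a bundled install (assumed
# containerized-installer variable names; placeholder values).
bundle_install: true                     # perform an offline or bundled installation
bundle_dir: /path/to/bundle              # required when bundle_install is true
registry_url: registry.redhat.io         # registry source for execution environment images
registry_username: <username>            # required if the registry needs authentication
registry_password: <password>
custom_ca_cert: /path/to/custom-ca.crt   # required if your TLS certs are signed by a custom CA
```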
A.7. Image variables
Inventory file variables for images.
| RPM variable name | Container variable name | Description | Required or optional | Default |
|---|---|---|---|---|
|
| Additional container images to pull from the configured container registry during deployment. | Optional |
| |
|
| Container image for automation controller. | Optional |
| |
|
| Additional decision environment container images to pull from the configured container registry during deployment. | Optional |
| |
|
| Supported decision environment container image. | Optional |
| |
|
| Backend container image for Event-Driven Ansible. | Optional |
| |
|
| Front-end container image for Event-Driven Ansible. | Optional |
| |
|
| Additional execution environment container images to pull from the configured container registry during deployment. | Optional |
| |
|
| Minimal execution environment container image. | Optional |
| |
|
| Supported execution environment container image. | Optional |
| |
|
| Container image for platform gateway. | Optional |
| |
|
| Container image for platform gateway proxy. | Optional |
| |
|
| Backend container image for automation hub. | Optional |
| |
|
| Front-end container image for automation hub. | Optional |
| |
|
| Container image for Performance Co-Pilot. | Optional |
| |
|
| Container image for PostgreSQL. | Optional |
| |
|
| Container image for receptor. | Optional |
| |
|
| Container image for Redis. | Optional |
|
A.8. Platform gateway variables
Inventory file variables for platform gateway.
| RPM variable name | Container variable name | Description | Required or optional | Default |
|---|---|---|---|---|
|
|
| Email address used by Django for the admin user for platform gateway. | Optional |
|
|
|
|
Platform gateway administrator password. Use of special characters for this variable is limited. The password can include any printable ASCII character except | Required | |
|
|
| Username used to identify and create the administrator user in platform gateway. | Optional |
|
|
|
| Path to the platform gateway Redis certificate file. | Optional | |
|
|
| Path to the platform gateway Redis key file. | Optional | |
|
|
Denote whether the cache client certificate files are local to the installation program ( | Optional |
The value defined in | |
|
|
Controls whether or not to regenerate platform gateway client certificates for the platform cache. Set to | Optional |
| |
|
|
| Port number for the platform gateway control plane. | Optional |
|
|
|
|
Controls whether HTTP Strict Transport Security (HSTS) is enabled or disabled for platform gateway. Set this variable to | Optional |
|
|
|
|
Controls whether HTTPS is enabled or disabled for platform gateway. Set this variable to | Optional |
RPM = The value defined in |
|
|
| The firewall zone where platform gateway related firewall rules are applied. This controls which networks can access platform gateway based on the zone’s trust level. | Optional | RPM = no default set. Container = 'public'. |
|
|
| Timeout duration (in seconds) for requests made to the gRPC service on platform gateway. | Optional |
|
|
|
| Maximum number of threads that each gRPC server process can create to handle requests on platform gateway. | Optional |
|
|
|
| Number of processes for handling gRPC requests on platform gateway. | Optional |
|
|
|
| Port number that platform gateway listens on for HTTP requests. | Optional |
RPM = |
|
|
| Port number that platform gateway listens on for HTTPS requests. | Optional |
RPM = |
|
|
|
URL of the main instance of platform gateway that clients connect to. Use if you are performing a clustered deployment and you need to use the URL of the load balancer instead of the component’s server. The URL must start with | Optional | |
|
|
Denote whether the web cert sources are local to the installation program ( | Optional |
The value defined in | |
|
|
|
Controls whether client certificate authentication is enabled or disabled on the platform gateway PostgreSQL database. Set this variable to | Optional |
|
|
|
| Name of the PostgreSQL database used by platform gateway. | Optional |
RPM = |
|
|
| Hostname of the PostgreSQL database used by platform gateway. | Required | |
|
|
|
Password for the platform gateway PostgreSQL database user. Use of special characters for this variable is limited. The | Optional | |
|
|
| Port number for the PostgreSQL database used by platform gateway. | Optional |
|
|
|
|
Controls the SSL mode to use when platform gateway connects to the PostgreSQL database. Valid options include | Optional |
|
|
|
| Username for the platform gateway PostgreSQL database user. | Optional |
RPM = |
|
|
| Path to the PostgreSQL SSL/TLS certificate file for platform gateway. | Required if using client certificate authentication. | |
|
|
| Path to the PostgreSQL SSL/TLS key file for platform gateway. | Required if using client certificate authentication. | |
|
|
Denote whether the PostgreSQL client cert sources are local to the installation program ( | Optional |
The value defined in | |
|
|
| Hostname of the Redis host used by platform gateway. | Optional |
First node in the |
|
|
| Password for platform gateway Redis. | Optional | Randomly generated string. |
|
|
| Username for platform gateway Redis. | Optional |
|
|
|
| Secret key value used by platform gateway to sign and encrypt data. | Optional | |
|
|
| Path to the SSL/TLS certificate file for platform gateway. | Optional | |
|
|
| Path to the SSL/TLS key file for platform gateway. | Optional | |
|
|
|
Denote whether the platform gateway provided certificate files are local to the installation program ( | Optional |
|
|
|
|
The number of | Optional | The number of vCPUs multiplied by two, plus one. |
|
|
|
Controls whether archive compression is enabled or disabled for platform gateway. You can control this functionality globally by using | Optional |
|
|
|
|
Controls whether database compression is enabled or disabled for platform gateway. You can control this functionality globally by using | Optional |
|
|
|
| List of additional NGINX headers to add to platform gateway’s NGINX configuration. | Optional |
|
|
|
Denotes whether or not to verify platform gateway’s web certificates when making calls from platform gateway to itself during installation. Set to | Optional |
| |
|
|
|
Controls whether or not HTTPS is disabled when accessing the platform UI. Set to | Optional |
RPM = The value defined in |
|
|
| Port number on which the Envoy proxy listens for incoming HTTP connections. | Optional |
|
|
|
| Port number on which the Envoy proxy listens for incoming HTTPS connections. | Optional |
|
|
|
| Protocols that platform gateway supports when handling HTTPS traffic. | Optional |
|
|
|
|
Controls whether TLS is enabled or disabled for platform gateway Redis. Set this variable to | Optional |
|
|
|
| Port number for the Redis host for platform gateway. | Optional |
|
|
| Defines additional settings for use by platform gateway during installation. For example:
gateway_extra_settings:
  - setting: OAUTH2_PROVIDER['ACCESS_TOKEN_EXPIRE_SECONDS']
    value: 600
| Optional |
| |
|
| Maximum allowed size for data sent to platform gateway through NGINX. | Optional |
| |
|
| Maximum duration (in seconds) that HTTP Strict Transport Security (HSTS) is enforced for platform gateway. | Optional |
| |
|
|
Number of requests | Optional |
|
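The platform gateway variables above fit together as follows. This sketch assumes the containerized installer's `gateway_admin_password` variable and reuses the `gateway_extra_settings` example from the table; the password value is a placeholder:

```yaml
# Platform gateway inventory fragment (assumed containerized-installer names).
gateway_admin_password: <strong-password>   # required; avoid the restricted special characters noted above
gateway_extra_settings:
  - setting: OAUTH2_PROVIDER['ACCESS_TOKEN_EXPIRE_SECONDS']
    value: 600
```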
A.9. Receptor variables
Inventory file variables for Receptor.
| RPM variable name | Container variable name | Description | Required or optional | Default |
|---|---|---|---|---|
|
| The directory where receptor stores its runtime data and local artifacts. The target directory must be accessible to the awx user. If the target directory is on a temporary file system (tmpfs), ensure it is remounted correctly after a reboot; otherwise, receptor no longer has a working directory. | Optional |
| |
|
|
| Port number that receptor listens on for incoming connections from other receptor nodes. | Optional |
|
|
|
| Protocol that receptor supports when handling traffic. | Optional |
|
|
|
|
Controls the verbosity of logging for receptor. Valid options include: | Optional |
|
|
|
Controls whether TLS is enabled or disabled for receptor. Set this variable to | Optional |
| |
|
See |
|
For the
For the
| Optional |
For the |
|
See |
| Used to indicate which nodes a specific host connects to. Wherever this variable is defined, an outbound connection to the specific host is established. The value must be a comma-separated list of hostnames. Do not use inventory group names.
This is resolved into a set of hosts that is used to construct the | Optional |
|
|
|
Controls whether signing of communications between receptor nodes is enabled or disabled. Set this variable to | Optional |
| |
|
|
Controls whether TLS is enabled or disabled for receptor. Set this variable to | Optional |
| |
|
| The firewall zone where receptor related firewall rules are applied. This controls which networks can access receptor based on the zone’s trust level. | Optional |
| |
|
|
Controls whether or not receptor only accepts connections that use TLS 1.3 or higher. Set to | Optional |
| |
|
| Path to the private key used by receptor to sign communications with other receptor nodes in the network. | Optional | ||
|
| Path to the public key used by receptor to sign communications with other receptor nodes in the network. | Optional | ||
|
|
Denote whether the receptor signing files are local to the installation program ( | Optional |
| |
|
| Path to the TLS certificate file for receptor. | Optional | ||
|
| Path to the TLS key file for receptor. | Optional | ||
|
|
Denote whether the receptor provided certificate files are local to the installation program ( | Optional |
| |
|
|
Controls whether archive compression is enabled or disabled for receptor. You can control this functionality globally by using | Optional |
|
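The peering behavior described above can be sketched as inventory host variables. This example assumes the RPM installer's INI inventory and its `peers` and `node_type` host variables (confirm the exact names for your installer type); hostnames are placeholders, and the value of `peers` must be a comma-separated list of hostnames, not inventory group names:

```ini
# Receptor peering sketch (assumed RPM-installer host variables).
# exec1 opens an outbound connection to hop1, and hop1 to the controller.
[execution_nodes]
hop1.example.com node_type=hop peers=controller1.example.com
exec1.example.com node_type=execution peers=hop1.example.com
```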
A.10. Redis variables
Inventory file variables for Redis.
| RPM variable name | Container variable name | Description | Required or optional | Default |
|---|---|---|---|---|
|
|
|
The IPv4 address used by the Redis cluster to identify each host in the cluster. When defining hosts in the | Optional | RPM = Discovered IPv4 address from Ansible facts. If IPv4 address is not available, IPv6 address is used. Container = Discovered IPv4 address from Ansible facts. |
|
|
Controls whether mTLS is enabled or disabled for Redis. Set this variable to | Optional |
| |
|
|
| The firewall zone where Redis related firewall rules are applied. This controls which networks can access Redis based on the zone’s trust level. | Optional |
RPM = no default set. Container = |
|
|
Hostname used by the Redis cluster when identifying and routing the host. By default | Optional |
The value defined in | |
|
|
|
The Redis mode to use for your Ansible Automation Platform installation. Valid options include: | Optional |
|
|
| Denotes whether or not to regenerate the Ansible Automation Platform managed TLS key pair for Redis. | Optional |
| |
|
|
| Path to the Redis server TLS certificate. | Optional | |
|
|
|
Denote whether the Redis provided certificate files are local to the installation program ( | Optional |
|
|
|
| Path to the Redis server TLS certificate key. | Optional | |
|
|
Controls whether archive compression is enabled or disabled for Redis. You can control this functionality globally by using | Optional |
|
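The Redis mode variable above determines the cache topology. A hedged sketch follows, assuming `redis_mode` as the variable name and a dedicated `[redis]` inventory group; the six-node count reflects the usual minimum for a Redis cluster, so confirm the required group size for your version:

```ini
# Redis topology sketch (assumed variable and group names).
[all:vars]
redis_mode=cluster        # 'standalone' is the alternative single-node mode

[redis]
redis1.example.com
redis2.example.com
redis3.example.com
redis4.example.com
redis5.example.com
redis6.example.com
```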
A.11. Red Hat Ansible Lightspeed variables
Configure Red Hat Ansible Lightspeed by setting inventory file variables during installation. Use this reference to determine which variables to set for your deployment requirements.
A.11.1. Red Hat Ansible Lightspeed variables
Inventory file variables for Red Hat Ansible Lightspeed.
| RPM variable name | Container variable name | Description | Required or optional | Default |
|---|---|---|---|---|
| N/A |
|
Red Hat Ansible Lightspeed administrator password. Use of special characters for this variable is limited. The password can include any printable ASCII character except | Required | |
| N/A |
| Username used to identify and create the Red Hat Ansible Lightspeed admin user. | Optional |
|
| N/A |
| Rate throttle applied to chat requests. | Optional |
|
| N/A |
| Maximum allowed size for data sent to Red Hat Ansible Lightspeed through NGINX. | Optional |
|
| N/A |
|
Controls whether HTTP Strict Transport Security (HSTS) is enabled or disabled for Red Hat Ansible Lightspeed. Set this variable to | Optional |
|
| N/A |
|
Controls whether HTTPS is enabled or disabled for Red Hat Ansible Lightspeed. Set this variable to | Optional |
|
| N/A |
| Maximum duration (in seconds) that HTTP Strict Transport Security (HSTS) is enforced for Red Hat Ansible Lightspeed. | Optional |
|
| N/A |
| Port number that Red Hat Ansible Lightspeed listens on for HTTP requests. | Optional |
|
| N/A |
| Port number that Red Hat Ansible Lightspeed listens on for HTTPS requests. | Optional |
|
| N/A |
| Protocols that Red Hat Ansible Lightspeed supports when handling HTTPS traffic. | Optional |
|
| N/A |
| List of additional NGINX headers to add to Red Hat Ansible Lightspeed’s NGINX configuration. | Optional | []
| N/A |
|
Sets the HTTP timeout for end-user requests. The minimum value is | Optional |
|
| N/A |
|
Controls whether client certificate authentication is enabled or disabled on the Red Hat Ansible Lightspeed PostgreSQL database. Set this variable to | Optional |
|
| N/A |
| Name of the PostgreSQL database used by Red Hat Ansible Lightspeed. | Optional |
|
| N/A |
| Hostname of the PostgreSQL database used by Red Hat Ansible Lightspeed. | Required | |
| N/A |
|
Password for the Red Hat Ansible Lightspeed PostgreSQL database user. Use of special characters for this variable is limited. The | Optional | |
| N/A |
| Port number for the PostgreSQL database used by Red Hat Ansible Lightspeed. | Optional |
|
| N/A |
|
Controls the SSL mode to use when Red Hat Ansible Lightspeed connects to the PostgreSQL database. Valid options include | Optional |
|
| N/A |
| Path to the PostgreSQL SSL/TLS certificate file for Red Hat Ansible Lightspeed. | Optional | |
| N/A |
| Path to the PostgreSQL SSL/TLS key file for Red Hat Ansible Lightspeed. | Optional | |
| N/A |
| Username for the Red Hat Ansible Lightspeed PostgreSQL database user. | Optional |
|
| N/A |
| Secret key value used by Red Hat Ansible Lightspeed to sign and encrypt data. | Optional | |
| N/A |
| Path to the SSL/TLS certificate file for Red Hat Ansible Lightspeed. | Optional | |
| N/A |
| Path to the SSL/TLS key file for Red Hat Ansible Lightspeed. | Optional | |
| N/A |
|
Denote whether the Red Hat Ansible Lightspeed provided certificate files are local to the installation program ( | Optional |
|
| N/A |
|
Controls whether archive compression is enabled or disabled for Red Hat Ansible Lightspeed. You can control this functionality globally by using | Optional |
|
| N/A |
|
Controls whether database compression is enabled or disabled for Red Hat Ansible Lightspeed. You can control this functionality globally by using | Optional |
|
A.11.2. Ansible Lightspeed coding assistant variables
Inventory file variables for Ansible Lightspeed coding assistant.
| RPM variable name | Container variable name | Description | Required or optional | Default |
|---|---|---|---|---|
| N/A |
|
IBM watsonx Code Assistant model deployment mode, cloud ( | Optional |
|
| N/A |
|
URL of the IBM watsonx Code Assistant model. For cloud deployment, the URL could be | Optional | |
| N/A |
| API key of the IBM watsonx Code Assistant model that was generated during the model installation. | Required | |
| N/A |
| ID of the IBM watsonx Code Assistant model. | Optional | |
| N/A |
|
Denotes whether or not to verify IBM watsonx Code Assistant’s web certificates when making calls from Red Hat Ansible Lightspeed to itself during installation. Set to | Optional |
|
| N/A |
| Controls whether the anonymization of Personally Identifiable Information (PII) is enabled. PII includes passwords, IP addresses, email addresses, and other sensitive data. When PII anonymization is enabled, users' personal information is replaced with generic values to protect their data and reduce the risk of data leaks.
You can turn off the anonymization by specifying the value as
If you set the value to | Optional |
|
| N/A |
| For on-premise deployment only. The username you use to connect to an IBM Cloud Pak for Data deployment. | Optional | |
| N/A |
| Enables or disables IBM watsonx Code Assistant health check. | Optional |
|
| N/A |
| For cloud deployment only. The IBM watsonx Code Assistant Identity Provider (IdP) URL. | Optional | |
| N/A |
| For cloud deployment only. The IBM watsonx Code Assistant Identity Provider (IdP) username. | Optional | |
| N/A |
| For cloud deployment only. The IBM watsonx Code Assistant Identity Provider (IdP) password. | Optional |
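The IBM watsonx Code Assistant variables above work together as one configuration block. Because the source table omits the actual variable names, every name in this sketch is a hypothetical placeholder; substitute the names documented for your installer version:

```yaml
# All names below are hypothetical placeholders, not real installer variables.
<model_deployment_mode>: cloud                               # cloud or on-premise deployment
<model_url>: <cloud or Cloud Pak endpoint URL>
<model_api_key>: <key generated during model installation>   # required
<model_id>: <ID of the watsonx Code Assistant model>
```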
A.11.3. Ansible Lightspeed intelligent assistant variables
Inventory file variables for Ansible Lightspeed intelligent assistant.
| RPM variable name | Container variable name | Description | Required or optional | Default |
|---|---|---|---|---|
| N/A |
|
The inference API base URL on your LLM setup. For example, | Optional | |
| N/A |
| Controls whether SSL/TLS certificate verification is enabled or disabled when making HTTPS requests. | Optional |
|
| N/A |
| The provider type of your LLM setup by using one of the following values:
| Optional |
|
| N/A |
| Use this parameter to pass a JSON dictionary of extra parameters to pass directly to the model provider, for settings not covered by other standard fields.
If you want to use Microsoft Azure OpenAI as the LLM provider, specify the value as | Optional |
|
| N/A |
| Maximum number of tokens to generate a chat response. | Optional |
|
| N/A |
| Port number that Ansible Lightspeed intelligent assistant listens on for HTTP requests. | Optional |
|
| N/A |
| The ID of the LLM model that is configured on your LLM setup. | Optional | |
| N/A |
| The API token or the API key of your LLM setup. This token is sent along with the authorization header when an inference API is called. | Optional |
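The intelligent assistant variables above together describe a single LLM connection. As above, the source table omits the real variable names, so every name in this sketch is a hypothetical placeholder:

```yaml
# All names below are hypothetical placeholders, not real installer variables.
<llm_provider_type>: <provider type>                # one of the values listed in the table above
<llm_api_base_url>: <inference API base URL>
<llm_model_id>: <model ID configured on your LLM setup>
<llm_api_token>: <API token>                        # sent with the authorization header on inference calls
```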
A.11.4. Ansible Lightspeed intelligent assistant integration with MCP server variables
Inventory file variables for Ansible Lightspeed intelligent assistant integration with Model Context Protocol (MCP) server.
| RPM variable name | Container variable name | Description | Required or optional | Default |
|---|---|---|---|---|
| N/A |
| Controls whether the Ansible Lightspeed MCP controller is enabled or disabled. | Optional |
|
| N/A |
| Port number for the Ansible Lightspeed MCP controller. | Optional |
|
| N/A |
| Controls whether the Ansible Lightspeed MCP Lightspeed service is enabled or disabled. | Optional |
|
| N/A |
| Port number for the Ansible Lightspeed MCP Lightspeed service. | Optional |
|