Planning your installation
Plan for installation of Ansible Automation Platform
Abstract
Preface
Thank you for your interest in Red Hat Ansible Automation Platform. Ansible Automation Platform is a commercial offering that helps teams manage complex multitiered deployments by adding control, knowledge, and delegation to Ansible-powered environments.
Use the information in this guide to plan your Red Hat Ansible Automation Platform installation.
Providing feedback on Red Hat documentation
If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request.
Chapter 1. Planning your Red Hat Ansible Automation Platform installation
Red Hat Ansible Automation Platform is supported on both Red Hat Enterprise Linux and Red Hat OpenShift. Use this guide to plan your Red Hat Ansible Automation Platform installation on Red Hat Enterprise Linux.
To install Red Hat Ansible Automation Platform on your Red Hat OpenShift Container Platform environment, see Installing on OpenShift Container Platform.
Chapter 2. Red Hat Ansible Automation Platform components
Ansible Automation Platform is composed of services that are connected together to meet your automation needs. These services provide the ability to store, make decisions for, and execute automation. All of these functions are available through a user interface (UI) and RESTful application programming interface (API). Deploy each of the following components so that all features and capabilities are available for use without the need to take further action:
- Platform gateway
- Automation controller
- Automation hub
- Private automation hub
- High availability automation hub
- Event-Driven Ansible controller
- Automation mesh
- Automation execution environments
- Ansible Galaxy
- Automation content navigator
- PostgreSQL
2.1. Platform gateway
Platform gateway is the service that handles authentication and authorization for the Ansible Automation Platform. It provides a single entry into the Ansible Automation Platform and serves the platform user interface so you can authenticate and access all of the Ansible Automation Platform services from a single location. For more information about the services available in the Ansible Automation Platform, refer to Key functionality and concepts in Getting started with Ansible Automation Platform.
The platform gateway includes an activity stream that captures changes to gateway resources, such as the creation or modification of organizations, users, and service clusters, among others. For each change, the activity stream collects information about the time of the change, the user that initiated the change, the action performed, and the actual changes made to the object, when possible. The information gathered varies depending on the type of change.
You can access the details captured by the activity stream from the API:
/api/gateway/v1/activitystream/
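For illustration, the endpoint can be queried with a small shell helper like the following. This is only a sketch: the host name and token are placeholders for your own gateway URL and API token, not values defined in this guide.

```shell
# Fetch activity stream entries from the platform gateway API.
# "$1" is the gateway host and "$2" an API token -- both placeholder
# values that you substitute for your own environment.
aap_activity_stream() {
  local host="$1" token="$2"
  curl -s -H "Authorization: Bearer ${token}" \
    "https://${host}/api/gateway/v1/activitystream/"
}

# Example invocation (hypothetical host and token):
# aap_activity_stream aap.example.com "$MY_TOKEN"
```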
2.2. Automation controller
Automation controller is an enterprise framework enabling users to define, operate, scale, and delegate Ansible automation across their enterprise.
2.3. Ansible automation hub
Ansible automation hub is a repository for certified content of Ansible Content Collections. It is the centralized repository for Red Hat and its partners to publish content, and for customers to discover certified, supported Ansible Content Collections. Red Hat Ansible Certified Content provides users with content that has been tested and is supported by Red Hat.
2.4. Private automation hub
Private automation hub provides both disconnected and on-premise solutions for synchronizing content. You can synchronize collections and execution environment images from Red Hat cloud automation hub, storing and serving your own custom automation collections and execution images. You can also use other sources such as Ansible Galaxy or other container registries to provide content to your private automation hub. Private automation hub can integrate into your enterprise directory and your CI/CD pipelines.
2.5. High availability automation hub
A high availability (HA) configuration increases reliability and scalability for automation hub deployments.
HA deployments of automation hub have multiple nodes that concurrently run the same service with a load balancer distributing workload (an "active-active" configuration). This configuration eliminates single points of failure to minimize service downtime and allows you to easily add or remove nodes to meet workload demands.
2.6. Event-Driven Ansible controller
The Event-Driven Ansible controller is the interface for event-driven automation and introduces automated resolution of IT requests. Event-Driven Ansible controller helps you connect to sources of events and act on those events by using rulebooks. This technology improves IT speed and agility, and enables consistency and resilience. With Event-Driven Ansible, you can:
- Automate decision making
- Use many event sources
- Implement event-driven automation within and across many IT use cases
2.7. Automation mesh
Automation mesh is an overlay network intended to ease the distribution of work across a large and dispersed collection of workers through nodes that establish peer-to-peer connections with each other using existing networks.
Automation mesh provides:
- Dynamic cluster capacity that scales independently, allowing you to create, register, group, ungroup and deregister nodes with minimal downtime.
- Control and execution plane separation that enables you to scale playbook execution capacity independently from control plane capacity.
- Deployment choices that are resilient to latency, reconfigurable without outage, and that dynamically reroute to choose a different path when outages exist.
- Mesh routing changes.
- Connectivity that includes bi-directional, multi-hopped mesh communication possibilities which are Federal Information Processing Standards (FIPS) compliant.
2.8. Automation execution environments
Automation execution environments are container images on which all automation in Red Hat Ansible Automation Platform is run. They provide a solution that includes the Ansible execution engine and hundreds of modules that help users automate all aspects of IT environments and processes. Automation execution environments automate commonly used operating systems, infrastructure platforms, network devices, and clouds.
2.9. Ansible Galaxy
Ansible Galaxy is a hub for finding, reusing, and sharing Ansible content. Community-provided Galaxy content, in the form of prepackaged roles, can help start automation projects. Roles for provisioning infrastructure, deploying applications, and completing other tasks can be dropped into Ansible Playbooks and be applied immediately to customer environments.
2.11. PostgreSQL
PostgreSQL (Postgres) is an open-source relational database management system. For Ansible Automation Platform, Postgres serves as the backend database to store automation data such as job templates, inventory, credentials, and execution history.
Chapter 3. Caching and queueing system
In Ansible Automation Platform 2.6, Redis (REmote DIctionary Server) is used as the caching and queueing system. Redis is an open source, in-memory, NoSQL key/value store that is used primarily as an application cache, quick-response database and lightweight message broker.
Centralized Redis is provided for the platform gateway and Event-Driven Ansible and shared between those components. Automation controller and automation hub have their own instances of Redis.
This cache and queue system stores data in memory, rather than on a disk or solid-state drive (SSD), which helps deliver speed, reliability, and performance. In Ansible Automation Platform, the system caches the following types of data for the various services in Ansible Automation Platform:
| Automation controller | Event-Driven Ansible server | Automation hub | Platform gateway |
|---|---|---|---|
| N/A; automation controller does not use shared Redis in Ansible Automation Platform 2.6 | Event queues | N/A; automation hub does not use shared Redis in Ansible Automation Platform 2.6 | Settings, session information, JSON Web Tokens |
This data can contain sensitive personally identifiable information (PII). Your data is protected by secure communication with the cache and queue system, using both Transport Layer Security (TLS) encryption and authentication.
The data in Redis from both the platform gateway and Event-Driven Ansible are partitioned; therefore, neither service can access the other’s data.
3.1. Centralized Redis
Ansible Automation Platform offers a centralized Redis instance in both standalone and clustered topologies. This enables resiliency by providing consistent performance and reliability.
3.2. Clustered Redis
With clustered Redis, data is automatically partitioned over multiple nodes to provide performance stability and nodes are assigned as replicas to provide reliability. Clustered Redis, shared between the platform gateway and Event-Driven Ansible, is provided by default when installing Ansible Automation Platform in containerized and operator-based deployments.
Six VMs are required for a Redis high availability (HA) compatible deployment. In RPM deployments, Redis can be colocated on each Ansible Automation Platform component VM except for automation controller, execution nodes, or the PostgreSQL database. In containerized deployments, Redis can be colocated on any Ansible Automation Platform component VMs of your choice except for execution nodes or the PostgreSQL database. See Tested deployment models for the opinionated deployment options available.
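As a sketch, a containerized installation inventory might colocate the six Redis instances on existing component hosts like this. The host names are placeholders, and the exact group names and supported colocations should be checked against the containerized installation guide:

```ini
# Illustrative sketch: six-node Redis HA group colocated on gateway,
# hub, and Event-Driven Ansible hosts (all host names are placeholders).
[redis]
gateway1.example.org
gateway2.example.org
hub1.example.org
hub2.example.org
eda1.example.org
eda2.example.org
```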
A cluster contains three primary nodes and each primary node contains a replica node.
If a primary instance becomes unavailable due to failures, the other primary nodes will initiate a failover state to promote a replica node to a primary node.
The benefits of deploying clustered Redis over standalone Redis include the following:
- Data is automatically split across multiple nodes.
- Data can be dynamically adjusted.
- Automatic failover of the primary nodes is initiated during system failures.
Therefore, if you need data scalability and automatic failover, deploy Ansible Automation Platform with a clustered Redis. For more information about scalability with Redis, refer to Scale with Redis Cluster in the Redis product documentation.
For information on deploying Ansible Automation Platform with clustered Redis, refer to the RPM installation, Containerized installation, and Installing on OpenShift Container Platform guides.
Disclaimer: Links contained in this information to external website(s) are provided for convenience only. Red Hat has not reviewed the links and is not responsible for the content or its availability. The inclusion of any link to an external website does not imply endorsement by Red Hat of the website or their entities, products or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result due to your use of (or reliance on) the external site or content.
3.3. Standalone Redis
Standalone Redis consists of a simple architecture that is easy to deploy and configure.
If a resilient solution is not a requirement, deploy Ansible Automation Platform with a standalone Redis.
Chapter 4. Overview of tested deployment models
Red Hat tests Ansible Automation Platform 2.6 with a defined set of topologies to give you opinionated deployment options. Deploy all components of Ansible Automation Platform so that all features and capabilities are available for use without the need to take further action.
Red Hat tests the installation of Ansible Automation Platform 2.6 based on a defined set of infrastructure topologies or reference architectures. Enterprise organizations can use one of the enterprise topologies for production deployments to ensure the highest level of uptime, performance, and continued scalability. Organizations or deployments that are resource constrained can use a growth topology.
It is possible to install Ansible Automation Platform on different infrastructure topologies and with different environment configurations. Red Hat does not fully test topologies outside of published reference architectures. Red Hat recommends using a tested topology for all new deployments and provides commercially reasonable support for deployments that meet minimum requirements.
4.1. Installation and deployment models
The following table outlines the different ways to install or deploy Ansible Automation Platform:
| Mode | Infrastructure | Description | Tested topologies |
|---|---|---|---|
| Containers | Virtual machines and bare metal | The containerized installer deploys Ansible Automation Platform on Red Hat Enterprise Linux by using Podman which runs the platform in containers on host machines. Customers manage the product and infrastructure lifecycle. | |
| Operator | Red Hat OpenShift | The Operator uses Red Hat OpenShift Operators to deploy Ansible Automation Platform within Red Hat OpenShift. Customers manage the product and infrastructure lifecycle. | |
| RPM | Virtual machines and bare metal | The RPM installer deploys Ansible Automation Platform on Red Hat Enterprise Linux by using RPMs to install the platform on host machines. Customers manage the product and infrastructure lifecycle. | |
Chapter 5. System requirements
Use this information when planning your Red Hat Ansible Automation Platform installations and designing automation mesh topologies that fit your use case.
Prerequisites
- You can obtain root access either through the `sudo` command or through privilege escalation. For more on privilege escalation, see Understanding privilege escalation.
- You can de-escalate privileges from root to users such as AWX, PostgreSQL, Event-Driven Ansible, or Pulp.
- You have configured an NTP client on all nodes.
5.1. System requirements for RPM installation
For system requirements for the RPM installation method of Ansible Automation Platform, see the System requirements section of RPM installation.
5.2. System requirements for containerized installation
For system requirements for the containerized installation method of Ansible Automation Platform, see the System requirements section of Containerized installation.
5.3. System requirements for installing on OpenShift Container Platform
For system requirements for installing Ansible Automation Platform on OpenShift Container Platform, see the Tested system configurations section of Tested deployment models.
Chapter 6. Network ports and protocols
Red Hat Ansible Automation Platform uses several ports to communicate with its services. These ports must be open and available for incoming connections to the Red Hat Ansible Automation Platform server in order for it to work. Ensure that these ports are available and are not blocked by the server firewall.
The following architectural diagram is an example of a fully deployed Ansible Automation Platform with all possible components.
In some of the following use cases, hop nodes are used instead of a direct link from an execution node. Hop nodes are an option for connecting control and execution nodes. Hop nodes use minimal CPU and memory, so vertically scaling hop nodes does not impact system capacity.
The following diagram shows client-initiated connections between Ansible Automation Platform components. Direct connections shown in the diagram between the client and automation hub, Event-Driven Ansible, and automation controller apply only to systems upgraded from Red Hat Ansible Automation Platform 2.4 to Red Hat Ansible Automation Platform 2.6, to provide backward compatibility.
Figure 6.1. Ansible Automation Platform Client initiated network ports and protocols
The following diagram shows internally initiated connections between Ansible Automation Platform components for new installations of Red Hat Ansible Automation Platform 2.6.
Figure 6.2. Ansible Automation Platform Internally initiated network ports and protocols
The following table indicates the destination port and the direction of network traffic:
The default destination ports and installer inventory variables listed are configurable. If you configure them to suit your environment, you might experience a change in behavior.
| Node | Port | Source | Protocol | Service | Required for | Installer inventory variable |
|---|---|---|---|---|---|---|
| Automation hub | 22 | Installer node | TCP | SSH | Management of Ansible Automation Platform | |
| Automation hub | 80/443 | Installer node | TCP | HTTP/HTTPS | Enables the installer node to push the execution environment image to automation hub when using the bundle installer | |
| Automation hub | 80/443 | Automation controller | TCP | HTTP/HTTPS | Pull collections | |
| Automation hub | 80/443 | Event-Driven Ansible node | TCP | HTTP/HTTPS | Pull container decision environments | |
| Automation hub | 80/443 | Execution node | TCP | HTTP/HTTPS | Allows execution nodes to pull the execution environment image from automation hub | |
| Automation hub | 80/443 | Gateway load balancer/Ingress node | TCP | HTTP/HTTPS | Only relevant if accessing the component directly from platform gateway | |
| Automation hub | 443 | Platform gateway | TCP | HTTPS | Link between platform gateway and Ansible Automation Platform components | |
| Automation hub | 6379 | Event-Driven Ansible | TCP | Redis | | |
| Automation controller | 22 | Installer node | TCP | SSH | Management of Ansible Automation Platform | |
| Automation controller | 80/443 | Event-Driven Ansible | TCP | HTTP/HTTPS | Launch automation controller jobs | |
| Automation controller | 80/443 | Platform gateway | TCP | HTTP/HTTPS | Link between platform gateway and Ansible Automation Platform components | |
| Automation controller | 80/443 | Gateway load balancer/Ingress node | TCP | HTTP/HTTPS | Only relevant if accessing the component directly from platform gateway | |
| Automation controller | 27199 | Execution node | TCP | Receptor | Configurable. Mesh nodes directly peered to automation controllers (direct nodes involved). Execution nodes support bidirectional communication through port 27199. In RPM installations, this is established through the installation inventory; you can establish the connection in either direction, but once established, communication is always bidirectional. For more information on the use of peers in inventory files, see Defining automation mesh node types | |
| Event-Driven Ansible | 22 | Installer node | TCP | SSH | Management of Ansible Automation Platform | |
| Event-Driven Ansible | 80/443 | Platform gateway | TCP | HTTP/HTTPS | Link between platform gateway and Ansible Automation Platform components | |
| Event-Driven Ansible | 80/443 | Gateway load balancer/Ingress node | TCP | HTTP/HTTPS | Only relevant if accessing the component directly from platform gateway | `automationgateway_main_url` |
| Event-Driven Ansible | 8443 | Platform gateway | TCP | HTTPS | Receiving event stream traffic | |
| Execution node | 22 | Installer node | TCP | SSH | Management of Ansible Automation Platform | |
| Execution node | 443 | Gateway load balancer/Ingress node | TCP | HTTPS | | |
| Execution node | 27199 | Automation controller | TCP | Receptor | Configurable. Mesh nodes directly peered to automation controllers (direct nodes involved). Execution nodes support bidirectional communication through port 27199. In RPM installations, this is established through the installation inventory; you can establish the connection in either direction, but once established, communication is always bidirectional. For more information on the use of peers in inventory files, see Defining automation mesh node types | |
| Execution node | 27199 | OpenShift Container Platform | TCP | Receptor | | |
| Hop node | 22 | Installer node | TCP | SSH | Management of Ansible Automation Platform | |
| Hop node | 27199 | Automation controller | TCP | Receptor | Configurable. Enables connections from hop nodes to the Receptor port if relayed through hop nodes | |
| Hop node | 27199 | Execution node | TCP | Receptor | Configurable. Mesh nodes directly peered to automation controllers (direct nodes involved). Execution nodes support bidirectional communication through port 27199. In RPM installations, this is established through the installation inventory; you can establish the connection in either direction, but once established, communication is always bidirectional. For more information on the use of peers in inventory files, see Defining automation mesh node types | |
| Hybrid node | 22 | Installer node | TCP | SSH | Management of Ansible Automation Platform | |
| Hybrid node | 27199 | Automation controller | TCP | Receptor | Configurable. Enables connections from automation controller to the Receptor port if relayed through non-hop connected nodes | |
| PostgreSQL database | 22 | Installer node | TCP | SSH | Management of Ansible Automation Platform | |
| PostgreSQL database | 5432 | Automation controller | TCP | PostgreSQL | Open only if the internal database is used along with another component; otherwise, this port should not be open | |
| PostgreSQL database | 5432 | Event-Driven Ansible | TCP | PostgreSQL | Open only if the internal database is used along with another component; otherwise, this port should not be open | |
| PostgreSQL database | 5432 | Automation hub | TCP | PostgreSQL | Open only if the internal database is used along with another component; otherwise, this port should not be open | |
| OpenShift Container Platform | 6443 | Automation controller | TCP | HTTP/HTTPS | Only required when using container groups to run jobs | Host name of the OpenShift API server |
| Redis node | 6379 | Automation controller | TCP | Redis | Job launching | |
| Redis node | 6379 | Event-Driven Ansible | TCP | Redis | Job launching | |
| Redis node | 6379 | Automation hub | TCP | Redis | Job launching | |
| Redis node | 6379 | Platform gateway | TCP | Redis | Data storage and retrieval | |
| Redis node | 16379 | Redis node | TCP | Redis | Redis cluster bus port for a resilient Redis configuration | |
| Mesh ingress | 443 | Execution node | TCP | HTTPS | If using mesh ingress, ensure that outbound HTTPS (port 443) is allowed from the execution nodes to the OpenShift route URL | |
| Platform gateway | 8443 | Platform gateway | TCP | HTTPS | nginx | |
- Hybrid nodes act as a combination of control and execution nodes, and therefore share the connections of both.
- If `receptor_listener_port` is defined, the machine also requires an available open port on which to establish inbound TCP connections, for example, 27199. Some servers might not listen on the Receptor port (the default is 27199).

Suppose you have a control plane with nodes A, B, and C. The RPM installer creates a strongly connected peering between the control plane nodes with a least-privileged approach and opens the TCP listener only on those nodes where it is required. All Receptor connections are bidirectional, so once a connection is created, Receptor can communicate in both directions.
The following is an example peering setup for three controller nodes:

Controller node A → Controller node B
Controller node A → Controller node C
Controller node B → Controller node C

You can force the listener by setting `receptor_listener=True`. However, a connection Controller node B → Controller node A is likely to be rejected, because that connection already exists.
This means that nothing connects to Controller A as Controller A is creating the connections to the other nodes, and the following command does not return anything on Controller A:
[root@controller1 ~]# ss -ntlp | grep 27199
[root@controller1 ~]#
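In RPM installer inventory terms, the peering above could be sketched as follows. This is an illustrative fragment with hypothetical host names; the `peers=` host variable names the nodes that a given host initiates connections to, matching the A → B, A → C, B → C directions shown earlier:

```ini
# Sketch of explicit control plane peering (host names are placeholders).
# Each host dials the nodes listed in its peers= variable; node C only
# listens, which is why the ss command above returns nothing on node A.
[automationcontroller]
controller-a.example.com peers=controller-b.example.com,controller-c.example.com
controller-b.example.com peers=controller-c.example.com
controller-c.example.com
```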
| URL | Required for |
|---|---|
| | General account services, subscriptions |
| | Insights data upload |
| | Inventory upload and Cloud Connector connection |
| | Access to Insights dashboard |
| URL | Required for |
|---|---|
| | General account services, subscriptions |
| | Indexing execution environments |
| https://automation-hub-prd.s3.amazonaws.com | Firewall access |
| | Ansible Community curated Ansible content |
| https://ansible-galaxy-ng.s3.dualstack.us-east-1.amazonaws.com | Dual Stack IPv6 endpoint for Community curated Ansible content repository |
| | Access to container images provided by Red Hat and partners |
| | Red Hat and partner curated Ansible Collections |
| URL | Required for |
|---|---|
| | Access to container images provided by Red Hat and partners |
| | Access to container images provided by Red Hat and partners |
| | Access to container images provided by Red Hat and partners |
| | Access to container images provided by Red Hat and partners |
| | Access to container images provided by Red Hat and partners |
As of April 1st, 2025, quay.io is adding three additional endpoints. As a result, customers must adjust the allowlists or blocklists within their firewall systems to include the following endpoints:
- cdn04.quay.io
- cdn05.quay.io
- cdn06.quay.io
To avoid problems pulling container images, customers must allow outbound TCP connections (ports 80 and 443) to the following hostnames:
- cdn.quay.io
- cdn01.quay.io
- cdn02.quay.io
- cdn03.quay.io
- cdn04.quay.io
- cdn05.quay.io
- cdn06.quay.io
This change should be made to any firewall configuration that specifically enables outbound connections to registry.redhat.io or registry.access.redhat.com.
Use the hostnames instead of IP addresses when configuring firewall rules.
After making this change, you can continue to pull images from registry.redhat.io or registry.access.redhat.com. You do not need a quay.io login or any direct interaction with the quay.io registry to continue pulling Red Hat container images.
For more information, see Firewall changes for container image pulls 2024/2025.
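As a quick sanity check after updating firewall rules, a sketch like the following loops over the CDN hostnames listed above and reports whether each accepts an HTTPS connection. It assumes `curl` is available; a "blocked or unreachable" result usually means the firewall still blocks that endpoint.

```shell
# Report HTTPS reachability for each quay.io CDN endpoint listed above.
check_quay_cdns() {
  local host
  for host in cdn.quay.io cdn01.quay.io cdn02.quay.io cdn03.quay.io \
              cdn04.quay.io cdn05.quay.io cdn06.quay.io; do
    if curl -sI --connect-timeout 3 "https://${host}/" >/dev/null 2>&1; then
      echo "${host}: reachable"
    else
      echo "${host}: blocked or unreachable"
    fi
  done
}

check_quay_cdns
```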
Chapter 7. Choosing and obtaining a Red Hat Ansible Automation Platform installer
Choose the Red Hat Ansible Automation Platform installer you need based on your Red Hat Enterprise Linux environment's internet connectivity. Review the following scenarios to decide which Red Hat Ansible Automation Platform installer meets your needs.
7.1. Installing with internet access
Choose the Red Hat Ansible Automation Platform installer if your Red Hat Enterprise Linux environment is connected to the internet. Installing with internet access retrieves the latest required repositories, packages, and dependencies. Choose one of the following ways to set up your Ansible Automation Platform installer.
Tarball install
- Navigate to the Red Hat Ansible Automation Platform download page.
- Click Download Now for the Ansible Automation Platform <latest-version> Setup.
- Transfer the file to the target server by using `scp` or `curl`:

  Using `scp`, run the following command, replacing `private_key.pem`, `user`, and `server_ip` with your appropriate values:

  $ scp -i private_key.pem aap-bundled-installer.tar.gz user@server_ip:

  Using `curl`, if the setup file URL is available, you can download the file directly to the target server. Replace `<download_url>` with the file URL:

  $ curl -O <download_url>

- If the file needs to be extracted after downloading, run the following command:

  $ tar xvzf aap-bundled-installer.tar.gz
RPM install
Install the Ansible Automation Platform Installer Package v2.6 for RHEL 9 for x86_64:

$ sudo dnf install --enablerepo=ansible-automation-platform-2.6-for-rhel-9-x86_64-rpms ansible-automation-platform-installer
The `dnf install` command enables the repository, because the repository is disabled by default.
When you use the RPM installer, the files are placed under the /opt/ansible-automation-platform/installer directory.
7.2. Installing without internet access
Use the Red Hat Ansible Automation Platform Bundle installer if you are unable to access the internet, or would prefer not to install separate components and dependencies from online repositories. Access to Red Hat Enterprise Linux repositories is still needed. All other dependencies are included in the tar archive.
Procedure
- Go to the Red Hat Ansible Automation Platform download page.
- Click Download Now for the Ansible Automation Platform <latest-version> Setup Bundle.
- Extract the files:

  $ tar xvzf ansible-automation-platform-setup-bundle-<latest-version>.tar.gz
Chapter 8. About the installer inventory file
Red Hat Ansible Automation Platform works against a list of managed nodes or hosts in your infrastructure that are logically organized, using an inventory file. You can use the Red Hat Ansible Automation Platform installer inventory file to specify your installation scenario and describe host deployments to Ansible. By using an inventory file, Ansible can manage a large number of hosts with a single command. Inventories also help you use Ansible more efficiently by reducing the number of command line options you have to specify.
The inventory file can be in one of many formats, depending on the inventory plugins that you have. The most common formats are INI and YAML. Inventory files listed in this document are shown in INI format.
The location of the inventory file depends on the installer you used. The following table shows possible locations:
| Installer | Location |
|---|---|
| RPM | |
| RPM bundle tar | |
| RPM non-bundle tar | |
| Container bundle tar | |
| Container non-bundle tar | |
You can verify the hosts in your inventory using the command:
ansible all -i <path-to-inventory-file> --list-hosts
Example inventory file
The first part of the inventory file specifies the hosts or groups that Ansible can work with.
For more information on registry_username and registry_password, see Setting registry_username and registry_password.
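For illustration only, the following is a minimal INI inventory sketch in the containerized-installation style. All host names, passwords, and variable values are placeholders, and the authoritative group and variable names are those in the installation guide for your chosen installer:

```ini
# Illustrative sketch only -- not a tested topology.
[automationgateway]
gateway.example.org

[automationcontroller]
controller.example.org

[automationhub]
hub.example.org

[automationeda]
eda.example.org

[database]
db.example.org

[all:vars]
postgresql_admin_username=postgres
postgresql_admin_password=<set your own>
registry_username=<your Red Hat registry username>
registry_password=<your Red Hat registry password>
```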
8.1. Guidelines for hosts and groups
Databases
- When using an external database, ensure that the `[database]` sections of your inventory file are properly set up.
- To improve performance, do not colocate the database and the automation controller on the same server.
When using an external database with Ansible Automation Platform, you must create and maintain that database. Ensure that you clear your external database when uninstalling the Ansible Automation Platform.
Automation hub
- If there is an [automationhub] group, you must include the variables automationhub_pg_host and automationhub_pg_port.
- Add Ansible automation hub information in the [automationhub] group.
- Do not install Ansible automation hub and automation controller on the same node.
- Provide a reachable IP address or fully qualified domain name (FQDN) for the [automationhub] and [automationcontroller] hosts to ensure that users can synchronize and install content from Ansible automation hub and automation controller from a different node. The FQDN must not contain the _ symbol, as it is not processed correctly in Skopeo. You can use the - symbol, as long as it is not at the start or the end of the host name. Do not use localhost.
Private automation hub
- Do not install private automation hub and automation controller on the same node.
- You can use the same PostgreSQL (database) instance, but they must use a different (database) name.
- If you install private automation hub from an internal address and have a certificate that covers only the external address, the resulting installation cannot be used as a container registry without certificate issues.
You must separate the installation of automation controller and Ansible automation hub because the [database] group does not distinguish between the two if both are installed at the same time.
If you use one value in [database] and both automation controller and Ansible automation hub define it, they use the same database.
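For example, both services can share a PostgreSQL instance if each is given its own database name. A sketch with illustrative host and database values:

```
[all:vars]
pg_host='data.example.com'
pg_database='awx'
automationhub_pg_host='data.example.com'
automationhub_pg_database='automationhub'
```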
Automation controller
- Automation controller does not configure replication or failover for the database that it uses.
- Automation controller works with any replication that you have.
Event-Driven Ansible controller
- Event-Driven Ansible controller must be installed on a separate server and cannot be installed on the same host as automation hub and automation controller.
Platform gateway
- The platform gateway is the service that handles authentication and authorization for Ansible Automation Platform. It provides a single entry into the platform and serves the platform’s user interface.
Clustered installations
- When upgrading an existing cluster, you can also reconfigure your cluster to omit existing instances or instance groups. Omitting the instance or the instance group from the inventory file is not enough to remove them from the cluster. In addition to omitting instances or instance groups from the inventory file, you must also deprovision instances or instance groups before starting the upgrade. For more information, see Deprovisioning nodes or groups. Otherwise, omitted instances or instance groups continue to communicate with the cluster, which can cause issues with automation controller services during the upgrade.
If you are creating a clustered installation setup, you must replace [localhost] with the hostname or IP address of all instances. Installers for automation controller and automation hub do not accept [localhost]. All nodes and instances must be able to reach any others by using this hostname or address. You cannot use localhost ansible_connection=local on one of the nodes. Use the same format for the host names of all the nodes.

Therefore, this does not work:

[automationhub]
localhost ansible_connection=local
hostA
hostB.example.com
172.27.0.4

Instead, use these formats:

[automationhub]
hostA
hostB
hostC

or

[automationhub]
hostA.example.com
hostB.example.com
hostC.example.com
8.2. Deprovisioning nodes or groups Copy linkLink copied to clipboard!
You can deprovision nodes and instance groups by using the Ansible Automation Platform installer. Running the installer removes all configuration files and logs attached to the nodes in the group.
You can deprovision any hosts in your inventory except for the first host specified in the [automationcontroller] group.
To deprovision nodes, append node_state=deprovision to the node or group within the inventory file.
For example:
To remove a single node from a deployment:

[automationcontroller]
host1.example.com
host2.example.com
host4.example.com node_state=deprovision
or
To remove an entire instance group from a deployment:
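Building on the node-level syntax above, a whole group can instead be marked in a group :vars section. A sketch, assuming an instance group named instance_group_restrictedzone (the group name is illustrative):

```
[instance_group_restrictedzone]
host1.example.com
host2.example.com

[instance_group_restrictedzone:vars]
node_state=deprovision
```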
8.3. Inventory variables Copy linkLink copied to clipboard!
The second part of the example inventory file, following [all:vars], is a list of variables used by the installer. Using all means the variables apply to all hosts.
To apply variables to a particular host, use [hostname:vars]. For example, [automationhub:vars].
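For example, the automation hub database variables mentioned earlier in this chapter could be scoped to the [automationhub] hosts like this (host and port values are illustrative):

```
[automationhub:vars]
automationhub_pg_host='data.example.com'
automationhub_pg_port=5432
```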
8.4. Rules for declaring variables in inventory files Copy linkLink copied to clipboard!
The values of string variables are declared in quotes. For example:
pg_database='awx'
pg_username='awx'
pg_password='<password>'
When declared in a :vars section, INI values are interpreted as strings. For example, var=FALSE creates a string equal to FALSE. Unlike host lines, :vars sections accept only a single entry per line, so everything after the = must be the value for the entry. Host lines accept multiple key=value parameters per line. Therefore they need a way to indicate that a space is part of a value rather than a separator. Values that contain whitespace can be quoted (single or double). For more information, see Python shlex parsing rules.
If a variable value set in an INI inventory must be a certain type (for example, a string or a boolean value), always specify the type with a filter in your task. Do not rely on types set in INI inventories when consuming variables.
Consider using YAML format for inventory sources to avoid confusion on the actual type of a variable. The YAML inventory plugin processes variable values consistently and correctly.
If a parameter value in the Ansible inventory file contains special characters, such as #, { or }, you must double-escape the value (that is enclose the value in both single and double quotation marks).
For example, to use mypasswordwith#hashsigns as a value for the variable pg_password, declare it as pg_password='"mypasswordwith#hashsigns"' in the Ansible host inventory file.
Disclaimer: Links contained in this information to external website(s) are provided for convenience only. Red Hat has not reviewed the links and is not responsible for the content or its availability. The inclusion of any link to an external website does not imply endorsement by Red Hat of the website or their entities, products or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result due to your use of (or reliance on) the external site or content.
8.5. Securing secrets in the inventory file Copy linkLink copied to clipboard!
You can encrypt sensitive or secret variables with Ansible Vault. However, encrypting the variable names and the variable values makes it hard to find the source of the values. To circumvent this, you can encrypt the variables individually by using ansible-vault encrypt_string, or encrypt a file containing the variables.
Procedure

1. Create a file labeled credentials.yml to store the encrypted credentials:

   $ cat credentials.yml
   admin_password: my_long_admin_pw
   pg_password: my_long_pg_pw
   registry_password: my_long_registry_pw

2. Encrypt the credentials.yml file by using ansible-vault:

   $ ansible-vault encrypt credentials.yml
   New Vault password:
   Confirm New Vault password:
   Encryption successful

   Important: Store your encrypted vault password in a safe place.

3. Verify that the credentials.yml file is encrypted:

   $ cat credentials.yml
   $ANSIBLE_VAULT;1.1;AES256
   363836396535623865343163333339613833363064653364656138313534353135303764646165393765393063303065323466663330646232363065316666310a373062303133376339633831303033343135343839626136323037616366326239326530623438396136396536356433656162333133653636616639313864300a353239373433313339613465326339313035633565353464356538653631633464343835346432376638623533613666326136343332313163343639393964613265616433363430633534303935646264633034383966336232303365383763

4. Run setup.sh for installation of Ansible Automation Platform 2.6 and pass both credentials.yml and the --ask-vault-pass option:

   $ ANSIBLE_BECOME_METHOD='sudo' ANSIBLE_BECOME=True ANSIBLE_HOST_KEY_CHECKING=False ./setup.sh -e @credentials.yml -- --ask-vault-pass
8.6. Additional inventory file variables Copy linkLink copied to clipboard!
You can further configure your Red Hat Ansible Automation Platform installation by including additional variables in the inventory file. These configurations add optional features for managing your Red Hat Ansible Automation Platform. Add these variables by editing the inventory file using a text editor.
A table of predefined values for inventory file variables can be found in Inventory file variables in the Red Hat Ansible Automation Platform Installation Guide.
Chapter 9. Product Notification Feed Copy linkLink copied to clipboard!
Effective July 2025, the Ansible Automation Platform RSS notification feed will be available. This feed serves as a method for communicating various product updates and changes to customers.
Customers can subscribe to the notifications by visiting announcements.ansiblecloud.redhat.com/feed.atom through an RSS feed reader. This feed is updated with events such as Ansible Automation Platform upgrades and system maintenance.
All Ansible Automation Platform customers can subscribe to this content. Messages include categorization tags to specify deployment types: managed, self-managed (on-prem), or a combination. Red Hat is developing a future enhancement to integrate this feature directly into the UI.