Chapter 2. System requirements


Use this information when planning your Red Hat Ansible Automation Platform installations and designing automation mesh topologies that fit your use case.

Prerequisites

  • You can obtain root access either through the sudo command, or through privilege escalation. For more on privilege escalation, see Understanding privilege escalation.
  • You can de-escalate privileges from root to users such as AWX, PostgreSQL, Event-Driven Ansible, or Pulp.
  • You have configured an NTP client on all nodes.

Your system must meet the following minimum system requirements to install and run Red Hat Ansible Automation Platform. A resilient deployment requires 10 virtual machines with a minimum of 16 gigabytes (GB) of RAM and 4 virtual CPUs (vCPU). See Tested deployment models for more information on topology options.

Table 2.1. Base system

  • Subscription: Valid Red Hat Ansible Automation Platform subscription.
  • Operating system: Red Hat Enterprise Linux 8.8 or later minor versions of Red Hat Enterprise Linux 8, or Red Hat Enterprise Linux 9.2 or later minor versions of Red Hat Enterprise Linux 9. Red Hat Ansible Automation Platform is also supported on OpenShift; see Installing on OpenShift Container Platform for more information.
  • CPU architecture: x86_64, AArch64, s390x (IBM Z), ppc64le (IBM Power).
  • Ansible-core: Ansible-core version 2.16 or later. Ansible Automation Platform uses the system-wide ansible-core package to install the platform, but uses ansible-core 2.16 for both its control plane and built-in execution environments.
  • Browser: A currently supported version of Mozilla Firefox or Google Chrome.
  • Database: PostgreSQL 15. Red Hat Ansible Automation Platform 2.5 requires external (customer supported) databases to have ICU support.

Table 2.2. Virtual machine requirements

Component                        RAM    vCPUs  Disk IOPS  Storage
Platform gateway                 16GB   4      3000       60GB minimum
Control nodes                    16GB   4      3000       80GB minimum with at least 20GB available under /var/lib/awx
Execution nodes                  16GB   4      3000       60GB minimum
Hop nodes                        16GB   4      3000       60GB minimum
Automation hub                   16GB   4      3000       60GB minimum with at least 40GB allocated to /var/lib/pulp
Database                         16GB   4      3000       100GB minimum allocated to /var/lib/pgsql
Event-Driven Ansible controller  16GB   4      3000       60GB minimum

Note

These are minimum requirements and can be increased for larger workloads in increments of 2x (for example, 16GB becomes 32GB and 4 vCPU becomes 8 vCPU). See the horizontal scaling guide for more information.

2.1.1. Repository requirements

When installing Red Hat Ansible Automation Platform, enable only the following repositories:

  • RHEL BaseOS
  • RHEL AppStream
Note

If you enable repositories besides those mentioned above, the Red Hat Ansible Automation Platform installation could fail unexpectedly.

The following are necessary for you to work with project updates and collections:

  • Ensure that the network ports and protocols listed in Table 6.3 (Automation Hub) are available for successful connection and download of collections from automation hub or the Ansible Galaxy server.
  • The Ansible Automation Platform database backups are staged on each node at /var/backups/automation-platform through the variable backup_dir. You might need to mount a new volume to /var/backups or change the staging location with the variable backup_dir to prevent issues with disk space before running the ./setup.sh -b script.
  • If performing a bundled Ansible Automation Platform installation, the installation setup.sh script attempts to install ansible-core (and its dependencies) from the bundle for you.
  • If you have installed Ansible-core manually, the Ansible Automation Platform installation setup.sh script detects that Ansible has been installed and does not attempt to reinstall it.
Note

You must use Ansible-core installed through dnf. Ansible-core version 2.16 is required for Ansible Automation Platform 2.5 and later.
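
For example, you can install the system-wide Ansible-core package and confirm its version with:

    $ dnf install ansible-core
    $ ansible --version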

2.2. Platform gateway system requirements

The platform gateway is the service that handles authentication and authorization for Ansible Automation Platform. It provides a single entry into the platform and serves the platform’s user interface.

2.3. Automation controller system requirements

Automation controller is a distributed system, where different software components can be co-located or deployed across multiple compute nodes. In the installer, four node types are provided as abstractions to help you design the topology appropriate for your use case: control, hybrid, execution, and hop nodes.

Use the following recommendations for node sizing:

Execution nodes

Execution nodes run automation. Increase memory and CPU to increase capacity for running more forks.

Note
  • The RAM and CPU resources stated are minimum recommendations to handle the job load for a node to run an average number of jobs simultaneously.
  • Recommended RAM and CPU node sizes are not supplied. The required RAM or CPU depends directly on the number of jobs you are running in that environment.
  • For capacity based on forks in your configuration, see Automation controller capacity determination and job impact.

For further information about required RAM and CPU levels, see Performance tuning for automation controller.

Control nodes

Control nodes process events and run cluster jobs including project updates and cleanup jobs. Increasing CPU and memory can help with job event processing.

  • 40GB minimum with at least 20GB available under /var/lib/awx
  • Storage volume must be rated for a minimum baseline of 3000 IOPS
  • Projects are stored on control and hybrid nodes, and for the duration of jobs, are also stored on execution nodes. If the cluster has many large projects, consider doubling the GB in /var/lib/awx/projects, to avoid disk space errors.

Hop nodes

Hop nodes serve to route traffic from one part of the automation mesh to another (for example, a hop node could be a bastion host into another network). RAM can affect throughput; CPU activity is low. Network bandwidth and latency are generally more important factors than either RAM or CPU.

  • Actual RAM requirements vary based on how many hosts automation controller manages simultaneously (which is controlled by the forks parameter in the job template or the system ansible.cfg file). To avoid possible resource conflicts, Ansible recommends 1 GB of memory per 10 forks plus a 2 GB reservation for automation controller. See Automation controller capacity determination and job impact. If forks is set to 400, 42 GB of memory is recommended (400 forks / 10 = 40 GB, plus the 2 GB reservation).
  • A larger number of hosts can be addressed, but if the fork number is less than the total host count, more passes across the hosts are required. You can avoid these RAM limitations by using any of the following approaches:

    • Use rolling updates.
    • Use the provisioning callback system built into automation controller, where each system requesting configuration enters a queue and is processed as quickly as possible.
    • Use automation controller to produce or deploy images, such as AMIs.

2.4. Automation hub system requirements

Automation hub allows you to discover and use new certified automation content from Red Hat Ansible and Certified Partners. On Ansible automation hub, you can discover and manage Ansible Collections, which are supported automation content developed by Red Hat and its partners for use cases such as cloud automation, network automation, and security automation.

Note

Private automation hub

If you install private automation hub from an internal address, and have a certificate which only encompasses the external address, this can result in an installation that cannot be used as a container registry without certificate issues.

To avoid this, use the automationhub_main_url inventory variable with a value such as https://pah.example.com linking to the private automation hub node in the installation inventory file.
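
For example, set the variable in the installation inventory file:

    automationhub_main_url=https://pah.example.com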

This adds the external address to /etc/pulp/settings.py. This implies that you only want to use the external address.

For information about inventory file variables, see Inventory file variables.

2.4.1. High availability automation hub requirements

Before deploying a high availability (HA) automation hub, ensure that you have a shared storage file system installed in your environment and that you have configured your network storage system, if applicable.

2.4.1.1. Required shared storage

Shared storage is required when installing more than one automation hub with a file storage backend. The supported shared storage type for RPM-based installations is Network File System (NFS).

Before you run the Red Hat Ansible Automation Platform installer, verify that the /var/lib/pulp directory is present across your cluster as part of the shared storage file system installation. The Red Hat Ansible Automation Platform installer returns an error if /var/lib/pulp is not detected on one of your nodes, causing your high availability automation hub setup to fail.

If you receive an error stating /var/lib/pulp is not detected on one of your nodes, ensure /var/lib/pulp is properly mounted on all servers and re-run the installer.
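
For example, the following is a minimal sketch of an /etc/fstab entry that mounts a shared NFS export at /var/lib/pulp on each automation hub node; the server name and export path are assumptions for illustration:

    nfs.example.com:/exports/pulp  /var/lib/pulp  nfs  defaults,_netdev  0 0

After adding the entry, run mount -a and verify the mount with df -h /var/lib/pulp on every node.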

If you intend to install an HA automation hub using network storage on the automation hub nodes themselves, you must first install and use firewalld to open the ports required by your shared storage system before running the Ansible Automation Platform installer.

Install and configure firewalld by executing the following commands:

  1. Install the firewalld daemon:

    $ dnf install firewalld
  2. Add your network storage under <service> using the following command:

    $ firewall-cmd --permanent --add-service=<service>
    Note

    For a list of supported services, use the firewall-cmd --get-services command.

  3. Reload to apply the configuration:

    $ firewall-cmd --reload
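
For example, if your shared storage is served over NFS, the sequence might look like the following; the nfs service name is an assumption and depends on the storage system you use:

    $ dnf install firewalld
    $ firewall-cmd --permanent --add-service=nfs
    $ firewall-cmd --reload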

2.5. Event-Driven Ansible controller system requirements

The Event-Driven Ansible controller is a single-node system capable of handling a variable number of long-running processes (such as rulebook activations) on demand, depending on the number of CPU cores.

Note

If you want to use Event-Driven Ansible 2.5 with a 2.4 automation controller version, see Using Event-Driven Ansible 2.5 with Ansible Automation Platform 2.4.

Use the following minimum requirements to run, by default, a maximum of 12 simultaneous activations:

  • RAM: 16 GB
  • CPUs: 4
  • Local disk:
    • Hard drive must be 40 GB minimum with at least 20 GB available under /var.
    • Storage volume must be rated for a minimum baseline of 3000 IOPS.
    • If the cluster has many large projects or decision environment images, consider doubling the GB in /var to avoid disk space errors.
Important
  • If you are running Red Hat Enterprise Linux 8 and want to set your memory limits, you must have cgroup v2 enabled before you install Event-Driven Ansible. For specific instructions, see the Knowledge-Centered Support (KCS) article, Ansible Automation Platform Event-Driven Ansible controller for Red Hat Enterprise Linux 8 requires cgroupv2.
  • When you activate an Event-Driven Ansible rulebook under standard conditions, it uses about 250 MB of memory. However, the actual memory consumption can vary significantly based on the complexity of your rules and the volume and size of the events processed. In scenarios where a large number of events are anticipated or the rulebook complexity is high, conduct a preliminary assessment of resource usage in a staging environment. This ensures that your maximum number of activations is based on the capacity of your resources.

For an example of setting Event-Driven Ansible controller maximum running activations, see Single automation controller, single automation hub, and single Event-Driven Ansible controller node with external (installer managed) database.

2.6. PostgreSQL requirements

Red Hat Ansible Automation Platform 2.5 uses PostgreSQL 15 and requires external (customer supported) databases to have ICU support. PostgreSQL user passwords are hashed with the SCRAM-SHA-256 secure hashing algorithm before they are stored in the database.

To determine whether your automation controller instance has access to the database, run the awx-manage check_db command.
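
For example, on the automation controller node:

    $ awx-manage check_db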

Note
  • Automation controller data is stored in the database. Database storage increases with the number of hosts managed, number of jobs run, number of facts stored in the fact cache, and number of tasks in any individual job. For example, a playbook that runs every hour (24 times a day) across 250 hosts, with 20 tasks, stores over 800,000 events in the database every week (see the calculation after this note).
  • If not enough space is reserved in the database, old job runs and facts must be cleaned up on a regular basis. For more information, see Management Jobs in the Configuring automation execution guide.
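
As a rough sketch of that estimate, assuming approximately one event for each task on each host per run:

    24 runs/day × 250 hosts × 20 tasks = 120,000 events/day
    120,000 events/day × 7 days        = 840,000 events/week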

2.6.1. PostgreSQL Configurations

Optionally, you can configure the PostgreSQL database as separate nodes that are not managed by the Red Hat Ansible Automation Platform installer. When the Ansible Automation Platform installer manages the database server, it configures the server with defaults that are generally recommended for most workloads. For more information about the settings you can use to improve database performance, see PostgreSQL database configuration and maintenance for automation controller in the Configuring automation execution guide.

Important
  • When using an external database with Ansible Automation Platform, you must create and maintain that database. Ensure that you clear your external database when uninstalling Ansible Automation Platform.
  • Red Hat Ansible Automation Platform 2.5 uses PostgreSQL 15 and requires the external (customer supported) databases to have ICU support.
  • During configuration of an external database, you must check the external database coverage. For more information, see Red Hat Ansible Automation Platform Database Scope of Coverage.

2.6.2. Setting up an external (customer supported) database

Red Hat Ansible Automation Platform 2.5 uses PostgreSQL 15 and requires external (customer supported) databases to have ICU support. Use the following procedure to configure an external PostgreSQL compliant database for use with an Ansible Automation Platform component, for example automation controller, Event-Driven Ansible, automation hub, or platform gateway.

Procedure

  1. Connect to a PostgreSQL compliant database server with superuser privileges.

    # psql -h <db.example.com> -U superuser -p 5432 -d postgres
    Password for user superuser:
  2. In the connection command, the following psql options apply:

    -h hostname
    --host=hostname

    Specify the hostname of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the UNIX-domain socket.

    -d dbname
    --dbname=dbname

    Specify the name of the database to connect to. This is equal to specifying dbname as the first non-option argument on the command line. The dbname can be a connection string. If so, connection string parameters override any conflicting command line options.

    -U username
    --username=username

    Connect to the database as the user username instead of the default (you must have permission to do so).

  3. Create the user, database, and password with the createDB or administrator role assigned to the user, as shown in the sketch after this procedure. For further information, see Database Roles.
  4. Run the installation program. If you are using a PostgreSQL database, the database is owned by the connecting user and must have a createDB or administrator role assigned to it.
  5. Check that you can connect to the created database with the credentials provided in the inventory file.
  6. Check the permission of the user. The user should have the createDB or administrator role.
  7. After you create the PostgreSQL users and databases for each component, add the database credentials and host details in the inventory file under the [all:vars] group.

    # Automation controller
    pg_host=data.example.com
    pg_database=<database_name>
    pg_port=<port_number>
    pg_username=<set your own>
    pg_password=<set your own>
    
    # Platform gateway
    automationgateway_pg_host=aap.example.org
    automationgateway_pg_database=<set your own>
    automationgateway_pg_port=<port_number>
    automationgateway_pg_username=<set your own>
    automationgateway_pg_password=<set your own>
    
    # Automation hub
    automationhub_pg_host=data.example.com
    automationhub_pg_database=<database_name>
    automationhub_pg_port=<port_number>
    automationhub_pg_username=<username>
    automationhub_pg_password=<password>
    
    # Event-Driven Ansible
    automationedacontroller_pg_host=data.example.com
    automationedacontroller_pg_database=<database_name>
    automationedacontroller_pg_port=<port_number>
    automationedacontroller_pg_username=<username>
    automationedacontroller_pg_password=<password>
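
The following is a minimal sketch of creating a database and user for one component (automation controller in this example) before running the installer; the awx names and the password placeholder are illustrative assumptions, and you repeat the equivalent steps for each component:

    # psql -h <db.example.com> -U superuser -p 5432 -d postgres
    postgres=# CREATE USER awx WITH PASSWORD '<set your own>' CREATEDB;
    postgres=# CREATE DATABASE awx OWNER awx;
    postgres=# \q

You can then confirm that the new credentials work, for example with psql -h <db.example.com> -U awx -d awx, before adding them to the inventory file.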

2.6.3. Enabling mutual TLS (mTLS) authentication

mTLS authentication is disabled by default; however, you can optionally enable it.

Procedure

  • To configure each component’s database with mTLS authentication, add the following variables to your inventory file under the [all:vars] group and ensure each component has a different TLS certificate and key:

    # Automation controller
    pgclient_sslcert=/path/to/awx.cert
    pgclient_sslkey=/path/to/awx.key
    pg_sslmode=verify-full or verify-ca
    
    # Platform gateway
    automationgateway_pgclient_sslcert=/path/to/gateway.cert
    automationgateway_pgclient_sslkey=/path/to/gateway.key
    automationgateway_pg_sslmode=verify-full or verify-ca
    
    # Automation hub
    automationhub_pgclient_sslcert=/path/to/pulp.cert
    automationhub_pgclient_sslkey=/path/to/pulp.key
    automationhub_pg_sslmode=verify-full or verify-ca
    
    # Event-Driven Ansible
    automationedacontroller_pgclient_sslcert=/path/to/eda.cert
    automationedacontroller_pgclient_sslkey=/path/to/eda.key
    automationedacontroller_pg_sslmode=verify-full or verify-ca

2.6.4. Using custom TLS certificates

By default, the installation program generates self-signed TLS certificates and keys for all Ansible Automation Platform services. However, you can optionally use custom TLS certificates.

Procedure

  • To replace these with your own custom certificate and key, set the following inventory file variables:

    aap_ca_cert_file=<path_to_ca_tls_certificate>
    aap_ca_key_file=<path_to_ca_tls_key>
  • If any of your certificates are signed by a custom Certificate Authority (CA), then you must specify the Certificate Authority’s certificate by using the custom_ca_cert inventory file variable:

    custom_ca_cert=<path_to_custom_ca_certificate>
    Note

    If you have more than one custom CA certificate, combine them into a single file, then reference the combined certificate with the custom_ca_cert inventory file variable.
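
    For example, a sketch of combining two hypothetical CA certificate files and referencing the result:

    $ cat intermediate-ca.crt root-ca.crt > combined-ca.crt

    custom_ca_cert=/path/to/combined-ca.crt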

2.6.5. Receptor certificate considerations

When using a custom certificate for Receptor nodes, the certificate must include an otherName field in the Subject Alternative Name (SAN) with the object identifier value 1.3.6.1.4.1.2312.19.1. For more information, see Above the mesh TLS.

Receptor does not support the use of wildcard certificates. Additionally, each Receptor certificate must have the host FQDN specified in its SAN for TLS hostname validation to be performed correctly.
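
The following is a minimal OpenSSL sketch for generating a certificate signing request (CSR) for a Receptor node, assuming a hypothetical node FQDN of exec1.example.com; adapt the names to your environment and ensure your own CA preserves the SAN extension when signing:

    # receptor-exec1.cnf (example values)
    [ req ]
    distinguished_name = req_distinguished_name
    req_extensions     = v3_req
    prompt             = no

    [ req_distinguished_name ]
    CN = exec1.example.com

    [ v3_req ]
    subjectAltName = DNS:exec1.example.com, otherName:1.3.6.1.4.1.2312.19.1;UTF8:exec1.example.com

Generate the key and CSR with the configuration file, then sign the CSR with your CA:

    $ openssl req -new -newkey rsa:4096 -nodes -keyout exec1.key -out exec1.csr -config receptor-exec1.cnf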

2.6.6. Enabling the hstore extension for the automation hub PostgreSQL database

The database migration script uses hstore fields to store information; therefore, the hstore extension must be enabled in the automation hub PostgreSQL database.

This process is automatic when using the Ansible Automation Platform installer and a managed PostgreSQL server.

If the PostgreSQL database is external, you must enable the hstore extension in the automation hub PostgreSQL database manually before installation.

If the hstore extension is not enabled before installation, a failure is raised during database migration.

Procedure

  1. Check if the extension is available on the PostgreSQL server (automation hub database).

    $ psql -d <automation hub database> -c "SELECT * FROM pg_available_extensions WHERE name='hstore'"
  2. Where the default value for <automation hub database> is automationhub.

    Example output with hstore available:

    name   | default_version | installed_version | comment
    -------+-----------------+-------------------+---------------------------------------------------
    hstore | 1.7             |                   | data type for storing sets of (key, value) pairs
    (1 row)

    Example output with hstore not available:

     name | default_version | installed_version | comment
    ------+-----------------+-------------------+---------
    (0 rows)
  3. On a RHEL based server, the hstore extension is included in the postgresql-contrib RPM package, which is not installed automatically when installing the PostgreSQL server RPM package.

    To install the RPM package, use the following command:

    dnf install postgresql-contrib
  4. Load the hstore PostgreSQL extension into the automation hub database with the following command:

    $ psql -d <automation hub database> -c "CREATE EXTENSION hstore;"

    Re-run the query from step 1 to confirm the result. In the following output, the installed_version field now lists the hstore version, indicating that hstore is enabled.

    name   | default_version | installed_version | comment
    -------+-----------------+-------------------+---------------------------------------------------
    hstore | 1.7             | 1.7               | data type for storing sets of (key, value) pairs
    (1 row)

2.6.7. Benchmarking storage performance for the Ansible Automation Platform PostgreSQL database

Check whether the minimum Ansible Automation Platform PostgreSQL database requirements are met by using the Flexible I/O Tester (FIO) tool. FIO is a tool used to benchmark read and write IOPS performance of the storage system.

Prerequisites

  • You have installed the Flexible I/O Tester (fio) storage performance benchmarking tool.

    To install fio, run the following command as the root user:

    # yum -y install fio
  • You have adequate disk space to store the fio test data log files.

    The examples shown in the procedure require at least 60GB disk space in the /tmp directory:

    • numjobs sets the number of jobs run by the command.
    • size=10G sets the file size generated by each job.
  • If you need to reduce the amount of test data, adjust the value of the size parameter.

Procedure

  1. Run a random write test:

    $ fio --name=write_iops --directory=/tmp --numjobs=3 --size=10G \
    --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 \
    --verify=0 --bs=4K --iodepth=64 --rw=randwrite \
    --group_reporting=1 > /tmp/fio_benchmark_write_iops.log \
    2>> /tmp/fio_write_iops_error.log
  2. Run a random read test:

    $ fio --name=read_iops --directory=/tmp \
    --numjobs=3 --size=10G --time_based --runtime=60s --ramp_time=2s \
    --ioengine=libaio --direct=1 --verify=0 --bs=4K --iodepth=64 --rw=randread \
    --group_reporting=1 > /tmp/fio_benchmark_read_iops.log \
    2>> /tmp/fio_read_iops_error.log
  3. Review the results:

    In the log files written by the benchmark commands, search for the line beginning with iops. This line shows the minimum, maximum, and average values for the test.

    The following example shows the line in the log file for the random read test:

    $ cat /tmp/fio_benchmark_read_iops.log
    read_iops: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
    […]
       iops        : min=50879, max=61603, avg=56221.33, stdev=679.97, samples=360
    […]
    Note

    The above is a baseline to help evaluate the best-case performance on your systems. Systems can and do change, and performance might vary depending on what else is happening on your systems, storage, or network at the time of testing. You must review, monitor, and revisit the log files according to your own business requirements, application workloads, and new demands.
