Chapter 4. System requirements
Use this information when planning your Red Hat Ansible Automation Platform installations and designing automation mesh topologies that fit your use case.
Prerequisites
- You must be able to obtain root access, either through the sudo command or through privilege escalation. For more on privilege escalation, see Understanding Privilege Escalation.
- You must be able to de-escalate privileges from root to users such as: AWX, PostgreSQL, or Pulp.
- You must configure an NTP client on all nodes. For more information, see Configuring NTP server using Chrony. A quick way to verify synchronization is sketched after this list.
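For example, after chrony is configured you can confirm that a node is synchronized with commands such as the following (a quick manual check, not part of the installation program; output fields vary by chrony version):

$ chronyc tracking
$ chronyc sources -v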
4.1. Red Hat Ansible Automation Platform system requirements
Your system must meet the following minimum system requirements to install and run Red Hat Ansible Automation Platform.
Requirement | Required | Notes |
---|---|---|
Subscription | Valid Red Hat Ansible Automation Platform | |
OS | Red Hat Enterprise Linux 8.6 or later 64-bit (x86,ppc64le, s390x, aarch64), or Red Hat Enterprise Linux 9.0 or later 64-bit (x86,ppc64le, s390x, aarch64) | Red Hat Ansible Automation Platform is also supported on OpenShift, see Deploying the Red Hat Ansible Automation Platform operator on OpenShift Container Platform for more information. |
Ansible | version 2.14 (to install) | Ansible Automation Platform ships with execution environments that contain ansible-core 2.14. |
Python | 3.8 or later | |
Browser | A currently supported version of Mozilla FireFox or Google Chrome | |
Database | PostgreSQL version 13 |
The following are necessary for you to work with project updates and collections:
Ensure that the following domain names are part of either the firewall or the proxy’s allowlist for successful connection and download of collections from automation hub or Galaxy server:
- galaxy.ansible.com
- cloud.redhat.com
- console.redhat.com
- sso.redhat.com
- SSL inspection must be disabled either when using self-signed certificates or for the Red Hat domains.
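One way to verify that these domains are reachable from a node behind your firewall or proxy is a short loop such as the following (an illustrative check only; add your proxy options to curl if needed):

$ for host in galaxy.ansible.com cloud.redhat.com console.redhat.com sso.redhat.com; do
    curl -sSI "https://${host}" > /dev/null && echo "${host}: reachable" || echo "${host}: blocked"
  done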
Additional notes for Red Hat Ansible Automation Platform requirements
- The requirements for systems managed by Ansible Automation Platform are the same as for Ansible. See Getting Started in the Ansible User Guide.
- Although Red Hat Ansible Automation Platform depends on Ansible Playbooks and requires the installation of the latest stable version of Ansible before installing automation controller, manual installations of Ansible are no longer required.
- For new installations, automation controller installs the latest release package of Ansible 2.14.
- If performing a bundled Ansible Automation Platform installation, the installation program attempts to install Ansible (and its dependencies) from the bundle for you.
- If you choose to install Ansible on your own, the Ansible Automation Platform installation program detects that Ansible has been installed and does not attempt to reinstall it.
You must install Ansible using a package manager such as yum, and the latest stable version of Ansible must be installed for Red Hat Ansible Automation Platform to work properly. Ansible version 2.14 is required for Ansible Automation Platform versions 2.3 and later.
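For example, on Red Hat Enterprise Linux 8 or 9 the ansible-core package is typically installed with the system package manager (exact package and repository availability depend on your subscription and release):

# dnf install ansible-core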
4.2. Automation controller system requirements
Automation controller is a distributed system, where different software components can be co-located or deployed across multiple compute nodes. In the installer, node types of control, hybrid, execution, and hop are provided as abstractions to help you design the topology appropriate for your use case.
Use the following recommendations for node sizing:
On control and hybrid nodes, allocate a minimum of 20 GB to /var/lib/awx for execution environment storage.
Execution nodes
Runs automation. Increase memory and CPU to increase capacity for running more forks.
Requirement | Required |
---|---|
RAM | 16 GB |
CPUs | 4 |
Local disk | 40 GB minimum |
Control nodes
Processes events and runs cluster jobs including project updates and cleanup jobs. Increasing CPU and memory can help with job event processing.
Requirement | Required |
---|---|
RAM | 16 GB |
CPUs | 4 |
Local disk | |
Hybrid nodes
Runs both automation and cluster jobs. Comments on CPU and memory for execution and control nodes also apply to this node type.
Requirement | Required |
---|---|
RAM | 16 GB |
CPUs | 4 |
Local disk | |
Hop nodes
Serves to route traffic from one part of the automation mesh to another (for example, a hop node could be a bastion host into another network). RAM can affect throughput; CPU activity is low. Network bandwidth and latency are generally more important factors than either RAM or CPU.
Requirement | Required |
---|---|
RAM | 16 GB |
CPUs | 4 |
Local disk | 40 GB |
- Actual RAM requirements vary based on how many hosts automation controller manages simultaneously (which is controlled by the forks parameter in the job template or the system ansible.cfg file). To avoid possible resource conflicts, Ansible recommends 1 GB of memory per 10 forks plus 2 GB reservation for automation controller; see Automation controller Capacity Determination and Job Impact for further details. If forks is set to 400, 42 GB of memory is recommended.
- Automation controller hosts check whether umask is set to 0022. If not, the setup fails. Set umask=0022 to avoid this error; a quick way to check and set it is sketched after this list.
- A larger number of hosts can be addressed, but if the fork number is less than the total host count, more passes across the hosts are required. You can avoid these RAM limitations by using any of the following approaches:
- Use rolling updates.
- Use the provisioning callback system built into automation controller, where each system requesting configuration enters a queue and is processed as quickly as possible.
- In cases where automation controller is producing or deploying images such as AMIs.
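For example, you can check and set umask for the installing user before running the setup (a manual sketch, not an installer command; add the setting to the user's shell profile if it must persist across sessions):

$ umask          # display the current value
$ umask 0022     # set the required value for the current shell session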
Additional resources
- For more information about obtaining an automation controller subscription, see Import a subscription.
- For questions, contact Ansible support through the Red Hat Customer portal.
4.3. Automation hub system requirements
Automation hub enables you to discover and use new certified automation content from Red Hat Ansible and Certified Partners. On Ansible automation hub, you can discover and manage Ansible Collections, which are supported automation content developed by Red Hat and its partners for use cases such as cloud automation, network automation, and security automation.
Automation hub has the following system requirements:
Requirement | Required | Notes |
---|---|---|
RAM | 8 GB minimum | |
CPUs | 2 minimum | For capacity based on forks in your configuration, see additional resources. |
Local disk | 60 GB disk | A minimum of 40 GB should be dedicated to /var for collection storage. |
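To confirm before installation that enough space is available under /var, you can run a simple check such as the following (illustrative only):

$ df -h /var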
Private automation hub
If you install private automation hub from an internal address, and have a certificate which only encompasses the external address, this can result in an installation which cannot be used as a container registry without certificate issues.
To avoid this, use the automationhub_main_url inventory variable with a value such as https://pah.example.com linking to the private automation hub node in the installation inventory file.
This adds the external address to /etc/pulp/settings.py.
This implies that you only want to use the external address.
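For example, a minimal sketch of the relevant inventory entry, assuming the variable is placed in the [all:vars] section of the installer inventory file and that pah.example.com is a placeholder for your external address:

[all:vars]
automationhub_main_url = https://pah.example.com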
For information on inventory file variables, see Inventory File Variables in the Red Hat Ansible Automation Platform Installation Guide.
4.4. PostgreSQL requirements
Red Hat Ansible Automation Platform uses PostgreSQL 13.
- PostgreSQL user passwords are hashed with the SCRAM-SHA-256 secure hashing algorithm before being stored in the database.
- To determine whether your automation controller instance has access to the database, run the awx-manage check_db command.
Service | Required | Notes |
---|---|---|
Each automation controller | 40 GB dedicated hard disk space | |
Each automation hub | 60 GB dedicated hard disk space | Storage volume must be rated for a minimum baseline of 1500 IOPS. |
Database | 20 GB dedicated hard disk space | |
PostgreSQL Configurations
Optionally, you can configure the PostgreSQL database as separate nodes that are not managed by the Red Hat Ansible Automation Platform installer. When the Ansible Automation Platform installer manages the database server, it configures the server with defaults that are generally recommended for most workloads. However, you can adjust these PostgreSQL settings for a standalone database server node, where ansible_memtotal_mb is the total memory size of the database server:
max_connections == 1024
shared_buffers == ansible_memtotal_mb*0.3
work_mem == ansible_memtotal_mb*0.03
maintenance_work_mem == ansible_memtotal_mb*0.04
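As a worked illustration only (not installer output), for a standalone database server with 16 GB of RAM, ansible_memtotal_mb is 16384 and the formulas above evaluate to approximately the following values, expressed in MB because ansible_memtotal_mb is in MB:

shared_buffers == 4915             (16384 * 0.3)
work_mem == 491                    (16384 * 0.03)
maintenance_work_mem == 655        (16384 * 0.04)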
Additional resources
For more detail on tuning your PostgreSQL server, see the PostgreSQL documentation.
4.4.1. Setting up an external (customer supported) database
Red Hat does not support the use of external (customer supported) databases; however, they are used by customers. The following guidance on initial configuration, from a product installation perspective only, is provided to avoid related support requests.
To create a database, user and password on an external PostgreSQL compliant database for use with automation controller, use the following procedure.
Procedure
Install and then connect to a PostgreSQL compliant database server with superuser privileges.
# psql -h <db.example.com> -U superuser -p 5432 -d postgres <Password for user superuser>:
Where:
-h hostname --host=hostname
Specifies the host name of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the Unix-domain socket.
-d dbname --dbname=dbname
Specifies the name of the database to connect to. This is equivalent to specifying dbname as the first non-option argument on the command line. The dbname can be a connection string. If so, connection string parameters override any conflicting command line options.
-U username --username=username
Connect to the database as the user username instead of the default. (You must have permission to do so.)
- Create the user, database, and password with the createDB or administrator role assigned to the user. For further information, see Database Roles; a minimal SQL sketch is shown after the verification steps below.
- Add the database credentials and host details to the automation controller inventory file as an external database.
The default values are used in the following example:
[database]
pg_host='db.example.com'
pg_port=5432
pg_database='awx'
pg_username='awx'
pg_password='redhat'
Run the installer.
If you are using a PostgreSQL database with automation controller, the database is owned by the connecting user and must have a createDB or administrator role assigned to it.
- Check that you are able to connect to the created database with the user, password, and database name.
- Check the permissions of the user; the user should have the createDB or administrator role.
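For reference, a minimal sketch of the SQL for creating such a user and database, assuming the default awx names and password shown in the example inventory above (adjust the names and password for your environment):

postgres=# CREATE USER awx WITH PASSWORD 'redhat' CREATEDB;
postgres=# CREATE DATABASE awx OWNER awx;
postgres=# \du awx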
During this procedure, you must check the External Database coverage. For further information, see https://access.redhat.com/articles/4010491
4.4.2. Benchmarking storage performance for the Ansible Automation Platform PostgreSQL database
The following procedure describes how to benchmark the write/read IOPS performance of the storage system to check whether the minimum Ansible Automation Platform PostgreSQL database requirements are met.
Prerequisites
You have installed the Flexible I/O Tester (fio) storage performance benchmarking tool.
To install fio, run the following command as the root user:
# yum -y install fio
You have adequate disk space to store the fio test data log files.
The examples shown in the procedure require at least 60 GB disk space in the /tmp directory:
- numjobs sets the number of jobs run by the command.
- size=10G sets the file size generated by each job.
To reduce the amount of test data, adjust the value of the size parameter.
Procedure
Run a random write test:
$ fio --name=write_iops --directory=/tmp --numjobs=3 --size=10G \
  --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 \
  --verify=0 --bs=4K --iodepth=64 --rw=randwrite \
  --group_reporting=1 > /tmp/fio_benchmark_write_iops.log \
  2>> /tmp/fio_write_iops_error.log
Run a random read test:
$ fio --name=read_iops --directory=/tmp \
  --numjobs=3 --size=10G --time_based --runtime=60s --ramp_time=2s \
  --ioengine=libaio --direct=1 --verify=0 --bs=4K --iodepth=64 --rw=randread \
  --group_reporting=1 > /tmp/fio_benchmark_read_iops.log \
  2>> /tmp/fio_read_iops_error.log
Review the results:
In the log files written by the benchmark commands, search for the line beginning with iops. This line shows the minimum, maximum, and average values for the test.
The following example shows the line in the log file for the random read test:
$ cat /tmp/fio_benchmark_read_iops.log
read_iops: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
[…]
   iops        : min=50879, max=61603, avg=56221.33, stdev=679.97, samples=360
[…]
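To pull out just the iops summary lines from both log files, you can use a command such as the following:

$ grep -E "^ *iops" /tmp/fio_benchmark_*_iops.log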
You must review, monitor, and revisit the log files according to your own business requirements, application workloads, and new demands.