Chapter 4. Advanced containerized deployment


Configure external databases, custom TLS certificates, execution nodes, HAProxy load balancers, and hub storage for complex containerized Ansible Automation Platform deployments.

If you are not using these advanced configuration options, go to Installing containerized Ansible Automation Platform to continue with your installation.

4.1. Adding a safe plugin variable to Event-Driven Ansible controller

When using redhat.insights_eda or similar plugins to run rulebook activations in Event-Driven Ansible controller, you must add a safe plugin variable to a directory in Ansible Automation Platform. This ensures the connection between Event-Driven Ansible controller and the source plugin, and displays port mappings correctly.

Procedure

  1. Create a directory for the safe plugin variable:

    mkdir -p ./group_vars/automationeda
  2. Create a file within that directory for your new setting (for example, touch ./group_vars/automationeda/custom.yml).
  3. Add the variable eda_safe_plugins with a list of plugins to enable. For example:

    eda_safe_plugins: ['ansible.eda.webhook', 'ansible.eda.alertmanager']
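The steps above can be sketched as a single shell sequence (paths are relative to the installer directory, and the plugin list is the example from step 3):

```shell
# Create the group_vars directory and the safe plugin variable file
mkdir -p ./group_vars/automationeda
cat > ./group_vars/automationeda/custom.yml <<'EOF'
eda_safe_plugins: ['ansible.eda.webhook', 'ansible.eda.alertmanager']
EOF
```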

4.2. Adding execution nodes

Containerized Ansible Automation Platform can deploy remote execution nodes.

You can define remote execution nodes in the [execution_nodes] group of your inventory file:

[execution_nodes]
<fqdn_of_your_execution_host>

By default, an execution node uses the following settings that you can update as needed:

receptor_port=27199
receptor_protocol=tcp
receptor_type=execution
  • receptor_port - The port number that receptor listens on for incoming connections from other receptor nodes.
  • receptor_type - The role of the node. Valid options include execution or hop.
  • receptor_protocol - The protocol used for communication. Valid options include tcp or udp.

By default, the installation program adds all nodes in the [execution_nodes] group as peers for the controller node. To change the peer configuration, use the receptor_peers variable.

Note

The value of receptor_peers must be a comma-separated list of host names. Do not use inventory group names.

Example configuration:

[execution_nodes]
# Execution nodes
exec1.example.com
exec2.example.com
# Hop node that peers with the two execution nodes above
hop1.example.com receptor_type=hop receptor_peers='["exec1.example.com","exec2.example.com"]'

4.3. Configuring storage for automation hub

Configure storage backends for automation hub to store automation content by using Amazon S3, Azure Blob Storage, or Network File System (NFS).

4.3.1. Configuring Amazon S3 storage for automation hub

Amazon S3 storage is a type of object storage that is supported in containerized installations. When using an AWS S3 storage backend, set hub_storage_backend to s3. The AWS S3 bucket needs to exist before running the installation program.

Procedure

  1. Ensure your AWS S3 bucket exists before proceeding with the installation.
  2. Add the following variables to your inventory file under the [all:vars] group to configure S3 storage:

    • hub_s3_access_key
    • hub_s3_secret_key
    • hub_s3_bucket_name
    • hub_s3_extra_settings

      You can pass extra parameters through the hub_s3_extra_settings dictionary. For example:

      hub_s3_extra_settings:
        AWS_S3_MAX_MEMORY_SIZE: 4096
        AWS_S3_REGION_NAME: eu-central-1
        AWS_S3_USE_SSL: True
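Putting it together, a minimal [all:vars] fragment for an S3 backend might look like the following (the placeholder values are yours to supply):

```ini
[all:vars]
hub_storage_backend=s3
hub_s3_access_key=<access_key>
hub_s3_secret_key=<secret_key>
hub_s3_bucket_name=<bucket_name>
```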

4.3.2. Configuring Azure Blob Storage for automation hub

Azure Blob storage is a type of object storage that is supported in containerized installations. When using an Azure blob storage backend, set hub_storage_backend to azure. The Azure container needs to exist before running the installation program.

Procedure

  1. Ensure your Azure container exists before proceeding with the installation.
  2. Add the following variables to your inventory file under the [all:vars] group to configure Azure Blob storage:

    • hub_azure_account_key
    • hub_azure_account_name
    • hub_azure_container
    • hub_azure_extra_settings

      You can pass extra parameters through the hub_azure_extra_settings dictionary. For example:

      hub_azure_extra_settings:
        AZURE_LOCATION: foo
        AZURE_SSL: True
        AZURE_URL_EXPIRATION_SECS: 60
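Putting it together, a minimal [all:vars] fragment for an Azure Blob backend might look like the following (the placeholder values are yours to supply):

```ini
[all:vars]
hub_storage_backend=azure
hub_azure_account_key=<account_key>
hub_azure_account_name=<account_name>
hub_azure_container=<container_name>
```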

4.3.3. Configuring Network File System (NFS) storage for automation hub

NFS is a type of shared storage that is supported in containerized installations. Shared storage is required when installing more than one instance of automation hub with a file storage backend. When installing a single instance of the automation hub, shared storage is optional.

Procedure

  1. To configure shared storage for automation hub, set the hub_shared_data_path variable in your inventory file:

    hub_shared_data_path=<path_to_nfs_share>

    The value must match the format host:dir, for example nfs-server.example.com:/exports/hub.

  2. (Optional) To change the mount options for your NFS share, use the hub_shared_data_mount_opts variable. The default value is rw,sync,hard.
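For example, using the export from step 1 with the default mount options stated explicitly (the NFS server name is a placeholder):

```ini
hub_shared_data_path=nfs-server.example.com:/exports/hub
hub_shared_data_mount_opts=rw,sync,hard
```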

4.4. Configuring a HAProxy load balancer

To configure a HAProxy load balancer in front of platform gateway with a custom CA cert, set the following inventory file variables under the [all:vars] group:

custom_ca_cert=<path_to_cert_crt>
gateway_main_url=<https://load_balancer_url>
Note

HAProxy SSL passthrough mode is not supported with platform gateway.
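Because SSL passthrough is not supported, the load balancer must terminate TLS itself and re-encrypt traffic to the gateway nodes. A minimal illustrative HAProxy fragment is shown below; the hostnames, ports, and certificate paths are assumptions, not installer output, so adapt them to your environment:

```text
frontend aap_gateway
    bind *:443 ssl crt /etc/haproxy/certs/gateway-combined.pem
    mode http
    default_backend gateway_nodes

backend gateway_nodes
    mode http
    balance roundrobin
    # Re-encrypt to the gateway nodes and verify their certificates
    # against the platform CA referenced by custom_ca_cert
    server gw1 gw1.example.com:443 ssl verify required ca-file /etc/haproxy/ca.crt check
    server gw2 gw2.example.com:443 ssl verify required ca-file /etc/haproxy/ca.crt check
```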

4.5. Enabling automation content collection and container signing

Automation content signing is disabled by default. To enable it, the following installation variables are required in the inventory file:

# Collection signing
hub_collection_signing=true
hub_collection_signing_key=<full_path_to_collection_gpg_key>

# Container signing
hub_container_signing=true
hub_container_signing_key=<full_path_to_container_gpg_key>

The following variables are required if the keys are protected by a passphrase:

# Collection signing
hub_collection_signing_pass=<gpg_key_passphrase>

# Container signing
hub_container_signing_pass=<gpg_key_passphrase>

The hub_collection_signing_key and hub_container_signing_key variables require the keys to be set up before running the installation.

Automation content signing currently only supports GnuPG (GPG) based signature keys. For more information about GPG, see the GnuPG man page.

Note

The algorithm and cipher used is the responsibility of the customer.
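As an alternative to the interactive dialog used in the procedure below, GnuPG supports unattended key generation from a batch parameter file (see the "Unattended key generation" section of the GnuPG documentation). An illustrative parameter file, matching the RSA 3072-bit keys shown in the example output, might look like this (name, email, and expiry are placeholders):

```text
%echo Generating a collection signing key
Key-Type: RSA
Key-Length: 3072
Subkey-Type: RSA
Subkey-Length: 3072
Name-Real: Joe Bloggs
Name-Email: jbloggs@example.com
Expire-Date: 2y
%commit
```

Run it with gpg --batch --gen-key <parameter_file>; add a Passphrase: line if you want the key protected by a passphrase.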

Procedure

  1. On a RHEL 9 server, run the following command to create a new key pair for collection signing:

    gpg --gen-key
  2. Enter your information for "Real name" and "Email address":

    Example output:

    gpg --gen-key
    gpg (GnuPG) 2.3.3; Copyright (C) 2021 Free Software Foundation, Inc.
    This is free software: you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law.
    
    Note: Use "gpg --full-generate-key" for a full featured key generation dialog.
    
    GnuPG needs to construct a user ID to identify your key.
    
    Real name: Joe Bloggs
    Email address: jbloggs@example.com
    You selected this USER-ID:
        "Joe Bloggs <jbloggs@example.com>"
    
    Change (N)ame, (E)mail, or (O)kay/(Q)uit? O
    • If this fails, your environment does not have the necessary prerequisite packages installed for GPG. Install the necessary packages to proceed.
    • A dialog box will appear and ask you for a passphrase. This is optional but recommended.
    • The keys are then generated, and produce output similar to the following:

      We need to generate a lot of random bytes. It is a good idea to perform
      some other action (type on the keyboard, move the mouse, utilize the
      disks) during the prime generation; this gives the random number
      generator a better chance to gain enough entropy.
      gpg: key 022E4FBFB650F1C4 marked as ultimately trusted
      gpg: revocation certificate stored as '/home/aapuser/.gnupg/openpgp-revocs.d/F001B037976969DD3E17A829022E4FBFB650F1C4.rev'
      public and secret key created and signed.
      
      pub   rsa3072 2024-10-25 [SC] [expires: 2026-10-25]
            F001B037976969DD3E17A829022E4FBFB650F1C4
      uid                      Joe Bloggs <jbloggs@example.com>
      sub   rsa3072 2024-10-25 [E] [expires: 2026-10-25]
    • Note the expiry date, which you can set based on company standards and needs.
  3. You can view all of your GPG keys by running the following command:

    gpg --list-secret-keys --keyid-format=long
  4. To export the public key run the following command:

    gpg --export -a --output collection-signing-key.pub <email_address_used_to_generate_key>
  5. To export the private key run the following command:

    gpg -a --export-secret-keys <email_address_used_to_generate_key> > collection-signing-key.priv
    • Enter the passphrase if prompted.
  6. To view the private key file contents, run the following command:

    cat collection-signing-key.priv

    Example output:

    -----BEGIN PGP PRIVATE KEY BLOCK-----
    
    lQWFBGcbN14BDADTg5BsZGbSGMHypUJMuzmIffzzz4LULrZA8L/I616lzpBHJvEs
    sSN6KuKY1TcIwIDCCa/U5Obm46kurpP2Y+vNA1YSEtMJoSeHeamWMDd99f49ItBp
    
    <snippet>
    
    j920hRy/3wJGRDBMFa4mlQg=
    =uYEF
    -----END PGP PRIVATE KEY BLOCK-----
  7. Repeat steps 1 to 6 to create a key pair for container signing.
  8. Add the following variables to the inventory file and run the installation to create the signing services:

    # Collection signing
    hub_collection_signing=true
    hub_collection_signing_key=/home/aapuser/aap/ansible-automation-platform-containerized-setup-<version_number>/collection-signing-key.priv
    # This variable is required if the key is protected by a passphrase
    hub_collection_signing_pass=<password>
    
    # Container signing
    hub_container_signing=true
    hub_container_signing_key=/home/aapuser/aap/ansible-automation-platform-containerized-setup-<version_number>/container-signing-key.priv
    # This variable is required if the key is protected by a passphrase
    hub_container_signing_pass=<password>

4.6. Configuring an external (customer provided) PostgreSQL database

Set up an external (customer provided) PostgreSQL database for containerized Ansible Automation Platform to use your own database infrastructure.

There are two possible scenarios for setting up an external database:

  1. An external database with PostgreSQL admin credentials
  2. An external database without PostgreSQL admin credentials
Important
  • When using an external database with Ansible Automation Platform, you must create and support that database. Ensure that you clear your external database when uninstalling Ansible Automation Platform.
  • Red Hat Ansible Automation Platform requires the customer provided (external) database to have ICU support.
  • During configuration of an external database, you must check the external database coverage. For more information, see Red Hat Ansible Automation Platform Database Scope of Coverage.

4.6.1. Setting up an external database with PostgreSQL admin credentials

If you have PostgreSQL admin credentials, you can supply them in the inventory file and the installation program creates the PostgreSQL users and databases for each component for you. The PostgreSQL admin account must have SUPERUSER privileges.

Procedure

  • To configure the PostgreSQL admin credentials, add the following variables to the inventory file under the [all:vars] group:

    postgresql_admin_username=<set your own>
    postgresql_admin_password=<set your own>

4.6.2. Setting up an external database without PostgreSQL admin credentials

If you do not have PostgreSQL admin credentials, you must create the PostgreSQL users and databases for each component (platform gateway, automation controller, automation hub, and Event-Driven Ansible) before running the installation program.

Procedure

  1. Connect to a PostgreSQL compliant database server with a user that has SUPERUSER privileges.

    # psql -h <hostname> -U <username> -p <port_number>

    For example:

    # psql -h db.example.com -U superuser -p 5432
  2. Create the user with a password and ensure the user has the CREATEDB privilege. For more information, see Database Roles.

    CREATE USER <username> WITH PASSWORD '<password>' CREATEDB;
  3. Create the database and add the user you created as the owner.

    CREATE DATABASE <database_name> OWNER <username>;
  4. When you have created the PostgreSQL users and databases for each component, you can supply them in the inventory file under the [all:vars] group.

    # Platform gateway
    gateway_pg_host=aap.example.org
    gateway_pg_database=<set your own>
    gateway_pg_username=<set your own>
    gateway_pg_password=<set your own>
    
    # Automation controller
    controller_pg_host=aap.example.org
    controller_pg_database=<set your own>
    controller_pg_username=<set your own>
    controller_pg_password=<set your own>
    
    # Automation hub
    hub_pg_host=aap.example.org
    hub_pg_database=<set your own>
    hub_pg_username=<set your own>
    hub_pg_password=<set your own>
    
    # Event-Driven Ansible
    eda_pg_host=aap.example.org
    eda_pg_database=<set your own>
    eda_pg_username=<set your own>
    eda_pg_password=<set your own>
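As a concrete sketch of steps 2 and 3 for a single component, the platform gateway, the SQL might look like the following. The user name, database name, and password here are illustrative choices, not installer defaults:

```sql
CREATE USER aap_gateway WITH PASSWORD 'examplepassword' CREATEDB;
CREATE DATABASE automationgateway OWNER aap_gateway;
```

The matching inventory entries would then supply these values for gateway_pg_database, gateway_pg_username, and gateway_pg_password.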

4.6.3. Enabling the hstore extension for the automation hub PostgreSQL database

The database migration script uses hstore fields to store information, so the hstore extension must be enabled in the automation hub PostgreSQL database.

This process is automatic when using the Ansible Automation Platform installer and a managed PostgreSQL server.

If the PostgreSQL database is external, you must enable the hstore extension in the automation hub PostgreSQL database manually before installation.

If the hstore extension is not enabled before installation, the database migration fails.

Procedure

  1. Check if the extension is available on the PostgreSQL server (automation hub database).

    $ psql -d <automation hub database> -c "SELECT * FROM pg_available_extensions WHERE name='hstore'"
  2. Note that the default value for <automation hub database> is automationhub.

    Example output with hstore available:

      name  | default_version | installed_version |                     comment
    --------+-----------------+-------------------+---------------------------------------------------
     hstore | 1.7             |                   | data type for storing sets of (key, value) pairs
    (1 row)

    Example output with hstore not available:

     name | default_version | installed_version | comment
    ------+-----------------+-------------------+---------
    (0 rows)
  3. On a RHEL-based server, the hstore extension is included in the postgresql-contrib RPM package, which is not installed automatically when installing the PostgreSQL server RPM package.

    To install the RPM package, use the following command:

    dnf install postgresql-contrib
  4. Load the hstore PostgreSQL extension into the automation hub database with the following command:

    $ psql -d <automation hub database> -c "CREATE EXTENSION hstore;"

    In the following output, the installed_version field shows the version of the hstore extension in use, indicating that hstore is enabled.

      name  | default_version | installed_version |                     comment
    --------+-----------------+-------------------+---------------------------------------------------
     hstore | 1.7             | 1.7               | data type for storing sets of (key, value) pairs
    (1 row)

4.6.4. Optional: configuring mutual TLS (mTLS) authentication for an external database

mTLS authentication is disabled by default. To configure each component’s database with mTLS authentication, add the following variables to your inventory file and ensure each component has a different TLS certificate and key.

Procedure

  • Add the following variables to your inventory file under the [all:vars] group:

    # Platform gateway
    gateway_pg_cert_auth=true
    gateway_pg_tls_cert=/path/to/gateway.cert
    gateway_pg_tls_key=/path/to/gateway.key
    gateway_pg_sslmode=verify-full
    
    # Automation controller
    controller_pg_cert_auth=true
    controller_pg_tls_cert=/path/to/awx.cert
    controller_pg_tls_key=/path/to/awx.key
    controller_pg_sslmode=verify-full
    
    # Automation hub
    hub_pg_cert_auth=true
    hub_pg_tls_cert=/path/to/pulp.cert
    hub_pg_tls_key=/path/to/pulp.key
    hub_pg_sslmode=verify-full
    
    # Event-Driven Ansible
    eda_pg_cert_auth=true
    eda_pg_tls_cert=/path/to/eda.cert
    eda_pg_tls_key=/path/to/eda.key
    eda_pg_sslmode=verify-full

4.7. Configuring custom TLS certificates

Red Hat Ansible Automation Platform uses X.509 certificate and key pairs to secure traffic. These certificates secure internal traffic between Ansible Automation Platform components and external traffic for public UI and API connections.

There are two primary ways to manage TLS certificates for your Ansible Automation Platform deployment:

  1. Ansible Automation Platform generated certificates (this is the default)
  2. User-provided certificates

4.7.1. Ansible Automation Platform generated certificates

By default, the installation program creates a self-signed Certificate Authority (CA) and uses it to generate self-signed TLS certificates for all Ansible Automation Platform services. The self-signed CA certificate and key are generated on one node under the ~/aap/tls/ directory and copied to the same location on all other nodes. This CA is valid for 10 years after the initial creation date.

Self-signed certificates are not part of any public chain of trust. The installation program creates a certificate truststore that includes the self-signed CA certificate under ~/aap/tls/extracted/ and bind-mounts that directory to each Ansible Automation Platform service container under /etc/pki/ca-trust/extracted/. This allows each Ansible Automation Platform component to validate the self-signed certificates of the other Ansible Automation Platform services. The CA certificate can also be added to the truststore of other systems or browsers as needed.

4.7.2. User-provided certificates

To use your own TLS certificates and keys to replace some or all of the self-signed certificates generated during installation, you can set specific variables in your inventory file. A public or organizational CA must generate these certificates and keys in advance so that they are available during the installation process.

4.7.2.1. Using a custom CA to generate all TLS certificates

Use this method when you want Ansible Automation Platform to generate all of the certificates, but you want them signed by a custom CA rather than the default self-signed certificates.

Procedure

  • To use a custom Certificate Authority (CA) to generate TLS certificates for all Ansible Automation Platform services, set the following variables in your inventory file:

    ca_tls_cert=<path_to_ca_tls_certificate>
    ca_tls_key=<path_to_ca_tls_key>

4.7.2.2. Providing custom TLS certificates for each service

Use this method if your organization manages TLS certificates outside of Ansible Automation Platform and requires manual provisioning.

Procedure

  • To manually provide TLS certificates for each individual service (for example, automation controller, automation hub, and Event-Driven Ansible), set the following variables in your inventory file:

    # Platform gateway
    gateway_tls_cert=<path_to_tls_certificate>
    gateway_tls_key=<path_to_tls_key>
    gateway_pg_tls_cert=<path_to_tls_certificate>
    gateway_pg_tls_key=<path_to_tls_key>
    gateway_redis_tls_cert=<path_to_tls_certificate>
    gateway_redis_tls_key=<path_to_tls_key>
    
    # Automation controller
    controller_tls_cert=<path_to_tls_certificate>
    controller_tls_key=<path_to_tls_key>
    controller_pg_tls_cert=<path_to_tls_certificate>
    controller_pg_tls_key=<path_to_tls_key>
    
    # Automation hub
    hub_tls_cert=<path_to_tls_certificate>
    hub_tls_key=<path_to_tls_key>
    hub_pg_tls_cert=<path_to_tls_certificate>
    hub_pg_tls_key=<path_to_tls_key>
    
    # Event-Driven Ansible
    eda_tls_cert=<path_to_tls_certificate>
    eda_tls_key=<path_to_tls_key>
    eda_pg_tls_cert=<path_to_tls_certificate>
    eda_pg_tls_key=<path_to_tls_key>
    eda_redis_tls_cert=<path_to_tls_certificate>
    eda_redis_tls_key=<path_to_tls_key>
    
    # PostgreSQL
    postgresql_tls_cert=<path_to_tls_certificate>
    postgresql_tls_key=<path_to_tls_key>
    
    # Receptor
    receptor_tls_cert=<path_to_tls_certificate>
    receptor_tls_key=<path_to_tls_key>
    
    # Redis
    redis_tls_cert=<path_to_tls_certificate>
    redis_tls_key=<path_to_tls_key>

4.7.2.3. Considerations for certificates provided per service

When providing custom TLS certificates for each individual service, consider the following:

  • It is possible to provide unique certificates per host. This requires defining the specific _tls_cert and _tls_key variables in your inventory file as shown in the earlier inventory file example.
  • For services deployed across many nodes (for example, when following the enterprise topology), the provided certificate for that service must include the FQDN of all associated nodes in its Subject Alternative Name (SAN) field.
  • If an external-facing service (such as automation controller or platform gateway) is deployed behind a load balancer that performs SSL/TLS offloading, the service’s certificate must include the load balancer’s FQDN in its SAN field, in addition to the FQDNs of the individual service nodes.
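One way to confirm a certificate carries the required SAN entries is with openssl. The sketch below generates a throwaway self-signed certificate with two SAN entries (a node FQDN plus a load balancer FQDN, both placeholders) and then prints its SAN field; for a real deployment you would run only the final command against your own certificate file. The -addext and -ext options require OpenSSL 1.1.1 or later:

```shell
# Generate a throwaway certificate with node and load balancer FQDNs in the SAN
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout demo.key -out demo.cert -subj "/CN=gw1.example.com" \
    -addext "subjectAltName=DNS:gw1.example.com,DNS:lb.example.com"

# Print the Subject Alternative Name entries of the certificate
openssl x509 -in demo.cert -noout -ext subjectAltName
```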

4.7.2.4. Providing a custom CA certificate

When you manually provide TLS certificates, those certificates might be signed by a custom CA. Provide a custom CA certificate to ensure proper authentication and secure communication within your environment. If you have multiple custom CA certificates, you must merge them into a single file.

Procedure

  • If any of the TLS certificates you manually provided are signed by a custom CA, you must specify the CA certificate by using the following variable in your inventory file:

    custom_ca_cert=<path_to_custom_ca_certificate>

    If you have more than one CA certificate, combine them into a single file and reference the combined certificate with the custom_ca_cert variable.

4.7.3. Receptor certificate considerations

When using a custom certificate for Receptor nodes, the certificate requires the otherName field specified in the Subject Alternative Name (SAN) of the certificate with the value 1.3.6.1.4.1.2312.19.1. For more information, see Above the mesh TLS.

Receptor does not support the usage of wildcard certificates. Additionally, each Receptor certificate must have the host FQDN specified in its SAN for TLS hostname validation to be correctly performed.
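In an openssl extension configuration, the required otherName entry can be expressed in the SAN roughly as follows. The node name is a placeholder, and this is a sketch only; check the Receptor documentation for the exact signing procedure:

```ini
[ v3_req ]
subjectAltName = otherName:1.3.6.1.4.1.2312.19.1;UTF8:exec1.example.com, DNS:exec1.example.com
```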

4.7.4. Redis certificate considerations

When using custom TLS certificates for Redis-related services, consider the following for mutual TLS (mTLS) communication if specifying Extended Key Usage (EKU):

  • The Redis server certificate (redis_tls_cert) should include the serverAuth (web server authentication) and clientAuth (client authentication) EKU.
  • The Redis client certificates (gateway_redis_tls_cert, eda_redis_tls_cert) should include the clientAuth (client authentication) EKU.
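Expressed as openssl extension configuration lines, these EKU requirements look like the following illustrative fragment:

```ini
# Redis server certificate (redis_tls_cert)
extendedKeyUsage = serverAuth, clientAuth

# Redis client certificates (gateway_redis_tls_cert, eda_redis_tls_cert)
extendedKeyUsage = clientAuth
```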

4.7.5. Using custom Receptor signing keys

Receptor signing is enabled by default unless receptor_disable_signing=true is set, and an RSA key pair (public and private) is generated by the installation program. However, you can set custom RSA public and private keys by using the following variables:

receptor_signing_private_key=<full_path_to_private_key>
receptor_signing_public_key=<full_path_to_public_key>