Chapter 4. Disconnected installation


4.1. Ansible Automation Platform installation on disconnected RHEL

Install Ansible Automation Platform automation controller and a private automation hub, with an installer-managed database located on the automation controller, without an Internet connection.

4.1.1. Prerequisites

To install Ansible Automation Platform on a disconnected network, complete the following prerequisites:

  • Create a subscription manifest.
  • Download the Ansible Automation Platform setup bundle.
  • Create DNS records for automation controller and private automation hub servers.

The setup bundle includes additional components that make installing Ansible Automation Platform easier in a disconnected environment. These include the Ansible Automation Platform RPMs and the default execution environment (EE) images.

4.1.2. System Requirements

Hardware requirements are documented in the "Red Hat Ansible Automation Platform Installation Guide" in the Ansible Automation Platform product documentation for your version of Ansible Automation Platform.

4.1.3. RPM Source

RPM dependencies for Ansible Automation Platform that come from the BaseOS and AppStream repositories are not included in the setup bundle. To add these dependencies, you must obtain access to BaseOS and AppStream repositories.

  • Satellite - The method recommended by Red Hat for synchronizing repositories
  • reposync - Makes full copies of the required RPM repositories and hosts them on the disconnected network
  • RHEL Binary DVD - Uses the RPMs available on the RHEL 8 Binary DVD

The RHEL Binary DVD method requires the DVD for a supported version of RHEL (8.4 or later). See Red Hat Enterprise Linux Life Cycle for information about which versions of RHEL are currently supported.

4.2. Synchronizing RPM repositories by using reposync

To perform a reposync, you need a RHEL host that has access to the Internet. After the repositories are synced, you can move them to the disconnected network and host them from a web server.


  1. Attach the BaseOS and AppStream required repositories:

    # subscription-manager repos \
        --enable rhel-8-for-x86_64-baseos-rpms \
        --enable rhel-8-for-x86_64-appstream-rpms
  2. Perform the reposync:

    # dnf install yum-utils
    # reposync -m --download-metadata --gpgcheck \
        -p /path/to/download
    1. Make certain that you use reposync with --download-metadata and without --newest-only. See [RHEL 8] Reposync.
    2. Without --newest-only, the downloaded repositories are approximately 90 GB.
    3. With --newest-only, the downloaded repositories are approximately 14 GB.
  3. If you plan to use Red Hat Single Sign-On (RHSSO), you must also sync these repositories:

    1. jb-eap-7.3-for-rhel-8-x86_64-rpms
    2. rh-sso-7.4-for-rhel-8-x86_64-rpms
  4. After the reposync is completed your repositories are ready to use with a web server.
  5. Move the repositories to your disconnected network.
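The transfer in the last step can be staged as a checksummed archive so the copy can be verified on the disconnected side. A minimal sketch, using throwaway demo paths rather than a real reposync output directory:

```shell
# Stand-in for the reposync output directory (a real sync is tens of GB)
mkdir -p /tmp/repo-demo/repos
echo "sample" > /tmp/repo-demo/repos/sample.rpm

# Archive the synced repositories for transfer across the boundary
tar -czf /tmp/repo-demo/repos.tar.gz -C /tmp/repo-demo repos

# Record a checksum, then re-run 'sha256sum -c' on the disconnected side
# after the transfer to verify the archive arrived intact
( cd /tmp/repo-demo && sha256sum repos.tar.gz > repos.tar.gz.sha256 )
( cd /tmp/repo-demo && sha256sum -c repos.tar.gz.sha256 )
```

Any transfer mechanism works; the checksum file simply travels with the archive.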

4.3. Creating a new web server to host repositories

If you do not have an existing web server to host your repositories, use the following steps to create one with the synced repositories.

  1. Install prerequisites:

    $ sudo dnf install httpd
  2. Configure httpd to serve the repo directory:

    DocumentRoot '/path/to/repos'
    <LocationMatch "^/+$">
        Options -Indexes
        ErrorDocument 403 /.noindex.html
    </LocationMatch>
    <Directory '/path/to/repos'>
        Options All Indexes FollowSymLinks
        AllowOverride None
        Require all granted
    </Directory>
  3. Ensure that the directory is readable by the apache user:

    $ sudo chown -R apache /path/to/repos
  4. Configure SELinux:

    $ sudo semanage fcontext -a -t httpd_sys_content_t "/path/to/repos(/.*)?"
    $ sudo restorecon -ir /path/to/repos
  5. Enable httpd:

    $ sudo systemctl enable --now httpd.service
  6. Open firewall:

    $ sudo firewall-cmd --permanent --zone=public --add-service=http --add-service=https
    $ sudo firewall-cmd --reload
  7. On automation controller and automation hub, add a repo file at /etc/yum.repos.d/local.repo, and add the optional repos if needed:

    [Local-BaseOS]
    name=Local BaseOS
    baseurl=http://<webserver_fqdn>/rhel-8-for-x86_64-baseos-rpms
    enabled=1
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

    [Local-AppStream]
    name=Local AppStream
    baseurl=http://<webserver_fqdn>/rhel-8-for-x86_64-appstream-rpms
    enabled=1
    gpgcheck=1
    gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

4.4. Accessing RPM Repositories for Locally Mounted DVD

If you are going to access the repositories from the DVD, it is necessary to set up a local repository. This section shows how to do that.


  1. Mount DVD or ISO

    1. DVD

      # mkdir /media/rheldvd && mount /dev/sr0 /media/rheldvd
    2. ISO

      # mkdir /media/rheldvd && mount -o loop rhel-8.6-x86_64-dvd.iso /media/rheldvd
  2. Create a yum repo file at /etc/yum.repos.d/dvd.repo:

    [dvd-BaseOS]
    name=DVD for RHEL - BaseOS
    baseurl=file:///media/rheldvd/BaseOS
    enabled=1
    gpgcheck=1
    gpgkey=file:///media/rheldvd/RPM-GPG-KEY-redhat-release

    [dvd-AppStream]
    name=DVD for RHEL - AppStream
    baseurl=file:///media/rheldvd/AppStream
    enabled=1
    gpgcheck=1
    gpgkey=file:///media/rheldvd/RPM-GPG-KEY-redhat-release
  3. Import the GPG key:

    # rpm --import /media/rheldvd/RPM-GPG-KEY-redhat-release

If the key is not imported, you see an error similar to the following:

Curl error (6): Couldn't resolve host name [Could not resolve host]

To set up the repository, see the KB article "Need to set up yum repository for locally-mounted DVD on Red Hat Enterprise Linux 8".

4.5. Adding a Subscription Manifest to Ansible Automation Platform without an Internet connection

To add a subscription to Ansible Automation Platform without an Internet connection, create and import a subscription manifest.


  1. Log in to the Red Hat Customer Portal.
  2. Navigate to Subscriptions.
  3. Click Subscription Allocations.
  4. Click Create New subscription allocation.
  5. Name the new subscription allocation.
  6. Select Satellite 6.14 as the type.
  7. Click Create. The Details tab opens for your subscription allocation.
  8. Click the Subscriptions tab.
  9. Click Add Subscription.
  10. Find your Ansible Automation Platform subscription and, in the Entitlements box, add the number of entitlements that you want to assign to your environment. A single entitlement is needed for each node that is managed by Ansible Automation Platform: server, network device, and so on.
  11. Click Submit.
  12. Click Export Manifest.
  13. This downloads a file, manifest_<allocation name>_<date>.zip, that can be imported into automation controller after installation.

4.6. Installing the Ansible Automation Platform Setup Bundle

The “bundle” version is strongly recommended for disconnected installations as it comes with the RPM content for Ansible Automation Platform as well as the default execution environment images that are uploaded to your private automation hub during the installation process.

4.6.1. Downloading the Setup Bundle


  1. Download the Ansible Automation Platform setup bundle package by navigating to the Ansible Automation Platform download page on the Red Hat Customer Portal and clicking Download Now for the Ansible Automation Platform 2.3 Setup Bundle.

4.6.2. Installing the Setup Bundle

The setup bundle must be downloaded to automation controller. From automation controller, untar the bundle, edit the inventory file, and run the setup.

  1. Untar the bundle:

    $ tar xvf ansible-automation-platform-setup-bundle-2.3-1.2.tar.gz
    $ cd ansible-automation-platform-setup-bundle-2.3-1.2
  2. Edit the inventory file to include the required options

    1. automationcontroller group
    2. automationhub group
    3. admin_password
    4. pg_password
    5. automationhub_admin_password
    6. automationhub_pg_host, automationhub_pg_port
    7. automationhub_pg_password

      Example inventory (FQDNs and passwords are placeholders):

      [automationcontroller]
      controller.example.com ansible_connection=local

      [automationhub]
      hub.example.com

      [all:vars]
      admin_password='<password>'
      pg_host=''
      pg_port='5432'
      pg_database='awx'
      pg_username='awx'
      pg_password='<password>'
      automationhub_admin_password='<password>'
      automationhub_pg_host='controller.example.com'
      automationhub_pg_port='5432'
      automationhub_pg_password='<password>'

      Keep the inventory file intact after installation, because it is used for backup, restore, and upgrade functions. Because the inventory file contains passwords, consider keeping a backup copy in a secure location.

  3. Run the Ansible Automation Platform setup script as the root user:

    $ sudo -i
    # cd /path/to/ansible-automation-platform-setup-bundle-2.3-1.2
    # ./setup.sh
  4. Once installation is complete, navigate to the Fully Qualified Domain Name (FQDN) for the automation controller node that was specified in the installation inventory file.
  5. Log in with the administrator credentials specified in the installation inventory file.

4.7. Completing Post Installation Tasks

4.7.1. Adding an automation controller Subscription


  1. Navigate to the FQDN of the automation controller. Log in with the username admin and the password that you specified as admin_password in your inventory file.
  2. Click Browse and select the manifest file that you created earlier.
  3. Click Next.
  4. Uncheck User analytics and Automation analytics. These rely on an Internet connection and should be turned off.
  5. Click Next.
  6. Read the End User License Agreement and click Submit if you agree.

4.7.2. Updating the CA trust store

By default, automation hub and automation controller are installed using self-signed certificates. This creates an issue where automation controller does not trust automation hub’s certificate and does not download execution environments from automation hub. The solution is to import automation hub’s CA certificate as a trusted certificate on automation controller, either by using SCP or by copying and pasting from one file into another.

Copying the root certificate on the private automation hub to the automation controller using secure copy (SCP)

If SSH is available as the root user between automation controller and private automation hub, use SCP to copy the root certificate on private automation hub to automation controller and run update-ca-trust on automation controller to update the CA trust store.

On the Automation controller

$ sudo -i
# scp <hub_fqdn>:/etc/pulp/certs/root.crt /etc/pki/ca-trust/source/anchors/automationhub-root.crt
# update-ca-trust

Copying and Pasting

If SSH is unavailable as root between private automation hub and automation controller, copy the contents of the file /etc/pulp/certs/root.crt on private automation hub and paste it into a new file on automation controller called /etc/pki/ca-trust/source/anchors/automationhub-root.crt. After the new file is created, run the command update-ca-trust to update the CA trust store with the new certificate.

On the Private automation hub

$ sudo -i
# cat /etc/pulp/certs/root.crt
(copy the contents of the file, including the lines with 'BEGIN CERTIFICATE' and 'END CERTIFICATE')

On automation controller

$ sudo -i
# vi /etc/pki/ca-trust/source/anchors/automationhub-root.crt
(paste the contents of the root.crt file from the private automation hub into the new file and write it to disk)
# update-ca-trust
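To confirm that a certificate verifies after update-ca-trust, you can use openssl verify. The sketch below demonstrates on a throwaway self-signed certificate: a self-signed root verifies against itself, just as the hub's root.crt verifies against the updated trust bundle.

```shell
# Generate a throwaway self-signed certificate (a stand-in for the hub's root.crt)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=demo-hub" \
    -keyout /tmp/demo-hub.key -out /tmp/demo-hub.crt 2>/dev/null

# Verification succeeds when the CA file contains the signing certificate;
# on a real controller, point -CAfile at the extracted system trust bundle
# (/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem)
openssl verify -CAfile /tmp/demo-hub.crt /tmp/demo-hub.crt
```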

4.8. Importing Collections into Private Automation Hub

You can download collection tarball files from the following sources:

  • Red Hat Automation Hub
  • Ansible Galaxy

4.8.1. Downloading collection from Red Hat Automation Hub

This section describes how to download a collection from Red Hat Automation Hub. If the collection has dependencies, you must also download and install them.


  1. Navigate to Red Hat Automation Hub and log in with your Red Hat credentials.
  2. Click the collection that you wish to download.
  3. Click Download tarball.
  4. To verify if a collection has dependencies, click the Dependencies tab.
  5. Download any dependencies needed for this collection.

4.9. Creating Collection Namespace

The namespace of the collection must exist for the import to be successful. You can find the namespace name by looking at the first part of the collection tarball filename. For example, the namespace of the collection ansible-netcommon-3.0.0.tar.gz is ansible.
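In shell terms, the namespace is everything before the first hyphen of the tarball filename; a quick illustrative check:

```shell
# Derive the namespace from a collection tarball name:
# it is the first hyphen-separated field of the filename
tarball="ansible-netcommon-3.0.0.tar.gz"
namespace="${tarball%%-*}"
echo "${namespace}"    # prints "ansible"
```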


  1. Login to private automation hub web console.
  2. Navigate to Collections → Namespaces.
  3. Click Create.
  4. Provide the namespace name.
  5. Click Create.

4.9.1. Importing the collection tarball with GUI

  1. Login to private automation hub web console.
  2. Navigate to Collections → Namespaces.
  3. Click View collections for the namespace into which you are importing the collection.
  4. Click Upload collection.
  5. Click the folder icon and select the tarball of the collection.
  6. Click Upload.

This opens the My Imports page. You can see the status of the import and details of the files and modules that have been imported.

4.9.2. Importing the collection tarball with the ansible-galaxy CLI

You can import collections into the private automation hub by using the command-line interface rather than the GUI.

  1. Copy the collection tarballs to the private automation hub.
  2. Log in to the private automation hub server through SSH.
  3. Add the self-signed root CA cert to the trust store on the automation hub.

    # cp /etc/pulp/certs/root.crt /etc/pki/ca-trust/source/anchors/automationhub-root.crt
    # update-ca-trust
  4. Update the /etc/ansible/ansible.cfg file with your hub configuration. Use either a token or a username and password for authentication.

    [galaxy]
    server_list = private_hub

    [galaxy_server.private_hub]
    url = https://<hub_fqdn>/api/galaxy/
    token = <token_from_private_hub>
  5. Import the collection by using the ansible-galaxy command:

    $ ansible-galaxy collection publish <collection_tarball>

Create the namespace that the collection belongs to in advance, or publishing the collection fails.

4.10. Approving the Imported Collection

After you have imported collections with either the GUI or the CLI method, you must approve them by using the GUI. After they are approved, they are available for use.


  1. Log in to private automation hub web console.
  2. Go to Collections Approval.
  3. Click Approve for the collection that you wish to approve.

The collection is now available for use in your private automation hub. It is added to the "Published" repository regardless of its source.

Import any dependencies for the collection by using these same steps.

Which collections you need depends on your use case; Ansible and Red Hat provide supported collections.

4.10.1. Custom Execution Environments

Use the ansible-builder program to create custom execution environment images. For disconnected environments, custom EE images can be built in the following ways:

  • Build an EE image on an internet-facing system and import it to the disconnected environment.
  • Build an EE image entirely on the disconnected environment with some modifications to the normal process of using ansible-builder.
  • Create a minimal base container image that includes all of the necessary modifications for a disconnected environment, then build custom EE images from the base container image.

Transferring a Custom EE Image Across a Disconnected Boundary

A custom execution environment image can be built on an internet-facing machine by using the existing documentation. Once the execution environment has been created, it is available in the local Podman image cache and can be transferred across the disconnected boundary:

  1. Save the image:

$ podman image save localhost/custom-ee:latest | gzip > custom-ee-latest.tar.gz

Transfer the file across the disconnected boundary by using an existing mechanism such as sneakernet or a one-way diode. After the image is available on the disconnected side, import it into the local Podman cache, tag it, and push it to the disconnected hub:

$ podman image load -i custom-ee-latest.tar.gz
$ podman image tag localhost/custom-ee <hub_fqdn>/custom-ee:latest
$ podman login <hub_fqdn> --tls-verify=false
$ podman push <hub_fqdn>/custom-ee:latest

4.11. Building an Execution Environment in a Disconnected Environment

When building a custom execution environment, the ansible-builder tool defaults to downloading the following requirements from the internet:

  • Ansible Galaxy or Automation Hub for any collections added to the EE image.
  • PyPI for any Python packages required as collection dependencies.
  • The UBI repositories for updating any UBI-based EE images.

    • The RHEL repositories might also be needed to meet certain collection requirements.
  • registry.redhat.io for access to the ansible-builder-rhel8 container image.

Building an EE image in a disconnected environment requires some or all of these resources to be mirrored, or otherwise made available, on the disconnected network. See Importing Collections into Private Automation Hub for information about importing collections from Galaxy or Automation Hub into a private automation hub.

Mirrored PyPI content, once transferred into the high-side network, can be made available by using a web server or an artifact repository such as Nexus.
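Clients on the high side then point pip at the mirror through a pip.conf file. A sketch, where the hostname and the Nexus-style repository path are placeholders for whatever your mirror exposes:

```
[global]
index-url = https://<pypi_mirror_fqdn>/repository/pypi/simple
```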

The UBI repositories can be mirrored on the low side by using a tool like reposync, imported to the disconnected environment, and made available from Satellite or a simple web server, because the content is freely redistributable.

The ansible-builder-rhel8 container image can be imported into a private automation hub in the same way a custom EE can be imported. See Transferring a Custom EE Image Across a Disconnected Boundary for details, substituting the ansible-builder-rhel8 image for localhost/custom-ee. This makes the ansible-builder-rhel8 image available in the private automation hub registry along with the default EE images.

Once all of the prerequisites are available on the high-side network, ansible-builder and Podman can be used to create a custom execution environment image.

4.12. Installing the ansible-builder RPM


  1. On a RHEL system, install the ansible-builder RPM. If a Satellite exists on the disconnected network, installing from the Satellite is the preferred method:

    1. Subscribe the RHEL host to the Satellite.
    2. Attach the Ansible Automation Platform subscription and enable the Ansible Automation Platform repository.
    3. Install the ansible-builder RPM.

    This method is preferred because the execution environment images can use RHEL content from the Satellite if the underlying build host is registered.

  2. Otherwise, unarchive the Ansible Automation Platform setup bundle.
  3. Install the ansible-builder RPM and its dependencies from the included content:

    $ tar -xzvf ansible-automation-platform-setup-bundle-2.3-1.2.tar.gz
    $ cd ansible-automation-platform-setup-bundle-2.3-1.2/bundle/el8/repos/
    $ sudo yum install ansible-builder-1.2.0-1.el9ap.noarch.rpm
  4. Create a directory for your custom EE build artifacts.

    $ mkdir custom-ee
    $ cd custom-ee/
  5. Create an execution-environment.yml file that defines the requirements for your custom EE, following the ansible-builder documentation. Override the EE_BASE_IMAGE and EE_BUILDER_IMAGE variables to point to the EEs available in your private automation hub:

    $ cat execution-environment.yml
    version: 1
    build_arg_defaults:
      EE_BASE_IMAGE: '<hub_fqdn>/ee-supported-rhel8:latest'
      EE_BUILDER_IMAGE: '<hub_fqdn>/ansible-builder-rhel8:latest'
    dependencies:
      python: requirements.txt
      galaxy: requirements.yml
  6. Create an ansible.cfg file that points to your private automation hub and contains credentials that allow uploading, such as an admin user token.

    $ cat ansible.cfg
    [galaxy]
    server_list = private_hub

    [galaxy_server.private_hub]
    url = https://<hub_fqdn>/api/galaxy/
    token = <admin_token>
  7. Create a ubi.repo file that points to your disconnected UBI repo mirror (this could be your Satellite if the UBI content is hosted there).

    This is an example output where reposync was used to mirror the UBI repos.

    $ cat ubi.repo
    [ubi-8-baseos]
    name = Red Hat Universal Base Image 8 (RPMs) - BaseOS
    baseurl = http://<ubi_mirror_fqdn>/repos/ubi-8-baseos
    enabled = 1
    gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
    gpgcheck = 1

    [ubi-8-appstream]
    name = Red Hat Universal Base Image 8 (RPMs) - AppStream
    baseurl = http://<ubi_mirror_fqdn>/repos/ubi-8-appstream
    enabled = 1
    gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
    gpgcheck = 1
  8. Add the CA certificate used to sign the private automation hub web server certificate.

    1. For self-signed certificates (the installer default), make a copy of the file /etc/pulp/certs/root.crt from your private automation hub and name it hub-root.crt.
    2. If an internal certificate authority was used to request and sign the private automation hub web server certificate, make a copy of that CA certificate called hub-root.crt.
  9. Create your Python requirements.txt and Ansible collection requirements.yml with the content needed for your custom EE image. Any collections that you require should already be uploaded into your private automation hub. If you mirrored PyPI content, also create a pip.conf that points at your mirror; this file is copied into the build context in a later step.
  10. Use ansible-builder to create the context directory used to build the EE image.

    $ ansible-builder create
    Complete! The build context can be found at: /home/cloud-user/custom-ee/context
    $ ls -1F
  11. Copy the files used to override the internet-facing defaults into the context directory.

    $ cp ansible.cfg hub-root.crt pip.conf ubi.repo context/
  12. Edit the file context/Containerfile and add the following modifications.

    1. In the first EE_BASE_IMAGE build section, add the ansible.cfg and hub-root.crt files and run the update-ca-trust command.
    2. In the EE_BUILDER_IMAGE build section, add the ubi.repo and pip.conf files.
    3. In the final EE_BASE_IMAGE build section, add the ubi.repo and pip.conf files.

      $ cat context/Containerfile
      ARG EE_BASE_IMAGE=<hub_fqdn>/ee-supported-rhel8:latest
      ARG EE_BUILDER_IMAGE=<hub_fqdn>/ansible-builder-rhel8:latest
      FROM $EE_BASE_IMAGE as galaxy
      USER root
      ADD _build /build
      WORKDIR /build
      # this section added
      ADD ansible.cfg /etc/ansible/ansible.cfg
      ADD hub-root.crt /etc/pki/ca-trust/source/anchors/hub-root.crt
      RUN update-ca-trust
      # end additions
      RUN ansible-galaxy role install -r requirements.yml \
          --roles-path /usr/share/ansible/roles
      RUN ansible-galaxy collection install \
          $ANSIBLE_GALAXY_CLI_COLLECTION_OPTS -r requirements.yml \
          --collections-path /usr/share/ansible/collections
      FROM $EE_BUILDER_IMAGE as builder
      COPY --from=galaxy /usr/share/ansible /usr/share/ansible
      ADD _build/requirements.txt requirements.txt
      RUN ansible-builder introspect --sanitize \
          --user-pip=requirements.txt \
          --write-bindep=/tmp/src/bindep.txt \
          --write-pip=/tmp/src/requirements.txt
      # this section added
      ADD ubi.repo /etc/yum.repos.d/ubi.repo
      ADD pip.conf /etc/pip.conf
      # end additions
      RUN assemble
      USER root
      COPY --from=galaxy /usr/share/ansible /usr/share/ansible
      # this section added
      ADD ubi.repo /etc/yum.repos.d/ubi.repo
      ADD pip.conf /etc/pip.conf
      # end additions
      COPY --from=builder /output/ /output/
      RUN /output/install-from-bindep && rm -rf /output/wheels
  13. Create the EE image in the local podman cache using the podman command.

    $ podman build -f context/Containerfile \
        -t <hub_fqdn>/custom-ee:latest context/
  14. Once the custom EE image builds successfully, push it to the private automation hub.

    $ podman push <hub_fqdn>/custom-ee:latest
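For illustration, the requirements files created in step 9 might look like the following. The collection and package names are placeholders: any collection listed must already exist in your private automation hub, and any Python package must be resolvable from your PyPI mirror.

```
$ cat requirements.yml
---
collections:
  - ansible.netcommon

$ cat requirements.txt
pytz
```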

4.12.1. Workflow for upgrading between minor Ansible Automation Platform releases

To upgrade between minor releases of Ansible Automation Platform 2, use this general workflow.


  1. Download and unarchive the latest Ansible Automation Platform 2 setup bundle.
  2. Take a backup of the existing installation.
  3. Copy the existing installation inventory file into the new setup bundle directory.
  4. Run ./setup.sh to upgrade the installation.

For example, to upgrade from version 2.2.0-7 to 2.3-1.2, make sure that both setup bundles are on the initial controller node where the installation occurred:

    $ ls -1F
    ansible-automation-platform-setup-bundle-2.2.0-7/
    ansible-automation-platform-setup-bundle-2.3-1.2/

Back up the 2.2.0-7 installation:

$ cd ansible-automation-platform-setup-bundle-2.2.0-7
$ sudo ./setup.sh -b
$ cd ..

Copy the 2.2.0-7 inventory file into the 2.3-1.2 bundle directory:

$ cd ansible-automation-platform-setup-bundle-2.2.0-7
$ cp inventory ../ansible-automation-platform-setup-bundle-2.3-1.2/
$ cd ..

Upgrade from 2.2.0-7 to 2.3-1.2 with the setup script:

$ cd ansible-automation-platform-setup-bundle-2.3-1.2
$ sudo ./setup.sh

© 2024 Red Hat, Inc.