Red Hat Ansible Automation Platform Upgrade and Migration Guide
Upgrading to the latest version of Ansible Automation Platform and migrating legacy virtual environments to automation execution environments
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Chapter 1. Upgrading isolated nodes to execution nodes
Upgrading from version 1.x to the latest version of Red Hat Ansible Automation Platform requires platform administrators to migrate data from isolated legacy nodes to execution nodes. This migration is necessary to deploy the automation mesh.
This guide explains how to perform a side-by-side migration. This ensures that the data on your original automation environment remains untouched during the migration process.
The migration process involves the following steps:
- Verify upgrade configurations.
- Backup original instance.
- Deploy new instance for a side-by-side upgrade.
- Recreate instance groups in the new instance using automation controller.
- Restore original backup to new instance.
- Set up execution nodes and upgrade instance to Red Hat Ansible Automation Platform 2.1.
- Configure upgraded controller instance.
1.1. Prerequisites for upgrading Ansible Automation Platform
Before you begin to upgrade Ansible Automation Platform, ensure your environment meets the following node and configuration requirements.
1.1.1. Node requirements
The following specifications are required for the nodes involved in the Ansible Automation Platform upgrade process:
- 16 GB of RAM for controller nodes, database nodes, execution nodes, and hop nodes.
- 4 CPUs for controller nodes, database nodes, execution nodes, and hop nodes.
- 150 GB+ disk space for database node.
- 40 GB+ disk space for non-database nodes.
- DHCP reservations use infinite leases to deploy the cluster with static IP addresses.
- DNS records for all nodes.
- Red Hat Enterprise Linux 8 or later 64-bit (x86) installed for all nodes.
- Chrony configured for all nodes.
- Python 3.8 or later for all content dependencies.
1.1.2. Automation controller configuration requirements
The following automation controller configurations are required before you proceed with the Ansible Automation Platform upgrade process:
Configuring NTP server using Chrony
Each Ansible Automation Platform node in the cluster must have access to an NTP server. Use the chronyd daemon to synchronize the system clock with NTP servers. This ensures that cluster nodes using SSL certificates that require validation do not fail if the date and time between nodes are out of sync.
This is required for all nodes used in the upgraded Ansible Automation Platform cluster:
Install chrony:

# dnf install chrony --assumeyes
Open /etc/chrony.conf using a text editor. Locate the public server pool section and modify it to include the appropriate NTP server addresses. Only one server is required, but three are recommended. Add the iburst option to speed up the time it takes to properly sync with the servers:

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server <ntp-server-address> iburst
Save changes within the /etc/chrony.conf file. Start the host and enable the chronyd daemon:

# systemctl --now enable chronyd.service

Verify the chronyd daemon status:

# systemctl status chronyd.service
Attaching Red Hat subscription on all nodes
Red Hat Ansible Automation Platform requires you to have valid subscriptions attached to all nodes. You can verify that your current node has a Red Hat subscription by running the following command:
# subscription-manager list --consumed
If there is not a Red Hat subscription attached to the node, see attaching your Ansible Automation Platform subscription for more information.
Creating non-root user with sudo privileges
Before you upgrade Ansible Automation Platform, it is recommended to create a non-root user with sudo privileges for the deployment process. This user is used for:
- SSH connectivity.
- Passwordless authentication during installation.
- Privilege escalation (sudo) permissions.
The following example uses ansible as the name of this user. On all nodes used in the upgraded Ansible Automation Platform cluster, create a non-root user named ansible and generate an SSH key:
Create a non-root user:
# useradd ansible

Set a password for your user:

# passwd ansible
Changing password for ansible.
Old Password:
New Password:
Retype New Password:

Replace ansible with the non-root user from step 1 if you are using a different name.
Generate an SSH key as the user:

$ ssh-keygen -t rsa

Disable password requirements when using sudo:

# echo "ansible ALL=(ALL) NOPASSWD:ALL" | sudo tee -a /etc/sudoers.d/ansible
Copying SSH keys to all nodes
With the ansible user created, copy the ssh key to all the nodes used in the upgraded Ansible Automation Platform cluster. This ensures that when the Ansible Automation Platform installation runs, it can ssh to all the nodes without a password:
$ ssh-copy-id ansible@node-1.example.com
If you are running within a cloud provider, you might instead need to create an ~/.ssh/authorized_keys file containing the public key for the ansible user on all your nodes, and set the permissions on the authorized_keys file so that only the owner (ansible) has read and write access (permissions 600).
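A minimal sketch of that manual setup, assuming the public key has already been copied to the node as ansible_id_rsa.pub:

$ sudo mkdir -p /home/ansible/.ssh
$ cat ansible_id_rsa.pub | sudo tee -a /home/ansible/.ssh/authorized_keys
$ sudo chmod 600 /home/ansible/.ssh/authorized_keys
$ sudo chown -R ansible:ansible /home/ansible/.ssh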
Configuring firewall settings
Configure the firewall settings on all the nodes used in the upgraded Ansible Automation Platform cluster to permit access to the appropriate services and ports for a successful Ansible Automation Platform upgrade. For Red Hat Enterprise Linux 8 or later, enable the firewalld daemon to enable the access needed for all nodes:
Install the firewalld package:

# dnf install firewalld --assumeyes

Start the firewalld service:

# systemctl start firewalld

Enable the firewalld service:

# systemctl enable --now firewalld
1.1.3. Ansible Automation Platform configuration requirements
The following Ansible Automation Platform configurations are required before you proceed with the Ansible Automation Platform upgrade process:
Configuring firewall settings for execution and hop nodes
After upgrading your Red Hat Ansible Automation Platform instance, add the automation mesh port on the mesh nodes (execution and hop nodes) to enable automation mesh functionality. The default port used for the mesh networks on all nodes is 27199/tcp. You can configure the mesh network to use a different port by specifying receptor_listener_port as the variable for each node within your inventory file.
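For example, a hypothetical execution node entry overriding the default mesh port might look like this in the inventory file (the hostname and port are illustrative):

[execution_nodes]
execution-node-1.example.com receptor_listener_port=29182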
On your hop and execution nodes, set the firewalld port to be used for installation:
Ensure that firewalld is running:

$ sudo systemctl status firewalld

Add the firewalld port to your controller database node (for example, port 27199):

$ sudo firewall-cmd --permanent --zone=public --add-port=27199/tcp

Reload firewalld:

$ sudo firewall-cmd --reload

Confirm that the port is open:

$ sudo firewall-cmd --list-ports
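If the port was added successfully, the output of the last command includes the mesh port, for example:

27199/tcp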
1.2. Back up your Ansible Automation Platform instance
Back up an existing Ansible Automation Platform instance by running the ./setup.sh script with the backup_dir flag, which saves the content and configuration of your current environment:
- Navigate to your ansible-tower-setup-latest directory.
- Run the ./setup.sh script following the example below:

$ ./setup.sh -e 'backup_dir=/ansible/mybackup' -e 'use_compression=True' -e @credentials.yml -b
With a successful backup, a backup file is created at /ansible/mybackup/tower-backup-latest.tar.gz.
This backup will be necessary later to migrate content from your old instance to the new one.
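To confirm the backup archive exists before proceeding, you can list it directly (path as above):

$ ls -lh /ansible/mybackup/tower-backup-latest.tar.gz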
1.3. Deploy a new instance for a side-by-side upgrade
To proceed with the side-by-side upgrade process, deploy a second instance of Ansible Tower 3.8.x with the same instance group configurations. This new instance will receive the content and configuration from your original instance, and will later be upgraded to Red Hat Ansible Automation Platform 2.1.
1.3.1. Deploy a new instance of Ansible Tower
To deploy a new Ansible Tower instance, do the following:
- Download the Tower installer version that matches your original Tower instance by navigating to the Ansible Tower installer page.
Navigate to the installer, then open the inventory file using a text editor to configure the inventory file for a Tower installation.

In addition to any Tower configurations, remove any fields containing isolated_group or instance_group (see the example following the note below).

Note: For more information about installing Tower using the Ansible Automation Platform installer, see the Ansible Automation Platform Installation Guide for your specific installation scenario.
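For illustration, entries like the following would be removed; the group names and hostnames here are hypothetical, following Tower 3.8 inventory conventions:

[isolated_group_restricted]
isolated-node.example.com

[isolated_group_restricted:vars]
controller=tower

[instance_group_special]
special-node.example.com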
- Run the setup.sh script to begin the installation.
Once the new instance is installed, configure the Tower settings to match the instance groups from your original Tower instance.
1.3.2. Recreate instance groups in the new instance
To recreate your instance groups in the new instance, do the following:
Make note of all instance groups from your original Tower instance. You will need to recreate these groups in your new instance.
- Log in to your new instance of Tower.
- In the navigation pane, select Administration → Instance Groups.
- Click the button to create a new instance group.
- Enter a Name that matches an instance group from your original instance, then click Save.
- Repeat until all instance groups from your original instance have been recreated.
1.4. Restore backup to new instance
Running the ./setup.sh script with the restore_backup_file flag migrates content from the backup file of your original 1.x instance to the new instance. This effectively migrates all job histories, templates, and other Ansible Automation Platform related content.
Procedure
Run the following command:
$ ./setup.sh -e 'restore_backup_file=/ansible/mybackup/tower-backup-latest.tar.gz' -e 'use_compression=True' -e @credentials.yml -r -- --ask-vault-pass

Log in to your new RHEL 8 Tower 3.8 instance to verify whether the content from your original instance has been restored:
- Navigate to Administration → Instance Groups. The recreated instance groups should now contain the Total Jobs from your original instance.
- Using the side navigation panel, check that your content has been imported from your original instance, including Jobs, Templates, Inventories, Credentials, and Users.
You now have a new instance of Ansible Tower with all the Ansible content from your original instance.
You will upgrade this new instance to Ansible Automation Platform 2.1 so that you keep all your previous data without overwriting your original instance.
1.5. Upgrading to Ansible Automation Platform 2.1
To upgrade your instance of Ansible Tower to Ansible Automation Platform 2.1, copy the inventory file from your original Tower instance to your new Tower instance and run the installer. The Red Hat Ansible Automation Platform installer detects a pre-2.1 inventory file and offers an upgraded inventory file to continue with the upgrade process:
- Download the latest installer for Red Hat Ansible Automation Platform from the Red Hat Customer Portal.
Extract the files:
$ tar xvzf ansible-automation-platform-setup-<latest-version>.tar.gz

Navigate into your Ansible Automation Platform installation directory:

$ cd ansible-automation-platform-setup-<latest-version>/

Copy the inventory file from your original instance into the directory of the latest installer:

$ cp ansible-tower-setup-3.8.x.x/inventory ansible-automation-platform-setup-<latest-version>

Run the setup.sh script:

$ ./setup.sh

The setup script pauses, indicates that a "pre-2.x" inventory file was detected, and offers a new file called inventory.new.ini that allows you to continue upgrading your original instance.

Open inventory.new.ini with a text editor.

Note: By running the setup script, the installer modified a few fields from your original inventory file, such as renaming [tower] to [automationcontroller].
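For illustration, a group heading that read like this in your original inventory (hostname hypothetical):

[tower]
tower.example.com

appears in inventory.new.ini as:

[automationcontroller]
tower.example.com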
Modify the newly generated inventory.new.ini file to configure your automation mesh by assigning relevant variables, nodes, and node-to-node peer connections.

Note: The design of your automation mesh topology depends on the automation needs of your environment. The example below offers one possible scenario for automation mesh design. Review the full Ansible Automation Platform automation mesh guide for information on designing a topology for your needs.
Example inventory file with a standard control plane consisting of three nodes utilizing hop nodes:
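The original inventory listing is not reproduced here; the following minimal sketch is one layout consistent with the numbered callouts below (hostnames are illustrative):

[automationcontroller]
control-plane-1.example.com
control-plane-2.example.com
control-plane-3.example.com

[automationcontroller:vars]
node_type=control 1
peers=execution_nodes

[execution_nodes]
execution-node-1.example.com peers=execution-node-2.example.com 2
execution-node-2.example.com peers=hop-node.example.com
hop-node.example.com node_type=hop peers=automationcontroller 3

[execution_nodes:vars]
node_type=execution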
1. Specifies a control node that runs project and inventory updates and system jobs, but not regular jobs. Execution capabilities are disabled on these nodes.
2. Specifies peer relationships for node-to-node connections in the [execution_nodes] group.
3. Specifies hop nodes that route traffic to other execution nodes. Hop nodes cannot execute automation.
Once you have finished configuring your inventory.new.ini for automation mesh, run the setup script using inventory.new.ini:

$ ./setup.sh -i inventory.new.ini -e @credentials.yml -- --ask-vault-pass

- Once the installation completes, verify that your Ansible Automation Platform has been installed successfully by logging in to the Ansible Automation Platform dashboard UI across all automation controller nodes.
Additional resources
- For general information on using the Ansible Automation Platform installer, see the Red Hat Ansible Automation Platform installation guide.
- For more information about automation mesh, see the Ansible Automation Platform automation mesh guide.
1.6. Configuring your upgraded Ansible Automation Platform
1.6.1. Configuring automation controller instance groups
After upgrading your Red Hat Ansible Automation Platform instance, associate your original instance with its corresponding instance groups by configuring settings in the automation controller UI:
- Log in to the new controller instance.
- Content from the old instance, such as credentials, jobs, and inventories, should now be visible on your controller instance.
- Navigate to Administration → Instance Groups.
- Associate execution nodes by clicking an instance group, then click the Instances tab.
- Click the button to associate instances, select the node(s) to associate with this instance group, then click Save.
- You can also modify the default instance group to disassociate your new execution nodes.
Chapter 2. Migrating to automation execution environments
2.1. Why upgrade to automation execution environments?
Red Hat Ansible Automation Platform 2.1 introduces automation execution environments. Automation execution environments are container images that allow for easier administration of Ansible by including everything needed to run Ansible automation within a single container. Automation execution environments include:
- RHEL UBI 8
- Ansible 2.9 or Ansible Core 2.11
- Python 3.8 or later
- Any Ansible Content Collections
- Collection Python or binary dependencies
By including these elements, Ansible provides platform administrators a standardized way to define, build, and distribute the environments the automation runs in.
Due to the new automation execution environment, it is no longer necessary for administrators to create custom plugins and automation content. Administrators can now spin up smaller automation execution environments in less time to create their content.
All custom dependencies are now defined in the development phase instead of the administration and deployment phase. Decoupling from the control plane enables faster development cycles, scalability, reliability, and portability across environments. Automation execution environments enable the Ansible Automation Platform to move to a distributed architecture, allowing administrators to scale automation across their organization.
2.2. About migrating legacy venvs to automation execution environments
When upgrading from older versions of automation controller to version 4.0 or later, the controller can detect previous versions of virtual environments associated with Organizations, Inventories, and Job Templates, and inform you that you should migrate to the new automation execution environments model. A new installation of automation controller creates two virtualenvs during installation; one runs the controller and the other runs Ansible. Like legacy virtual environments, automation execution environments allow the controller to run in a stable environment, while allowing you to add or update modules to your automation execution environments as necessary to run your playbooks.
You can duplicate your setup in an automation execution environment from a previous custom virtual environment by migrating it to the new automation execution environment. Use the awx-manage commands in this section to:
- List all the current custom virtual environments and their paths (list_custom_venvs).
- View the resources that rely on a particular custom virtual environment (custom_venv_associations).
- Export a particular custom virtual environment to a format that can be used to migrate it to an automation execution environment (export_custom_venv).
The following workflow describes how to migrate from legacy venvs to automation execution environments using the awx-manage command.
2.3. Migrating virtual envs to automation execution environments
Use the following sections to assist with additional steps in the migration process once you have upgraded to Red Hat Ansible Automation Platform 2.0 and automation controller 4.0.
2.3.1. Listing custom virtual environments
You can list the virtual environments on your automation controller instance using the awx-manage command.
Procedure
SSH into your automation controller instance and run:
$ awx-manage list_custom_venvs
A list of discovered virtual environments will appear.
2.3.2. Viewing objects associated with a custom virtual environment
View the organizations, jobs, and inventory sources associated with a custom virtual environment using the awx-manage command.
Procedure
SSH into your automation controller instance and run:
$ awx-manage custom_venv_associations /path/to/venv
A list of associated objects will appear.
2.3.3. Selecting the custom virtual environment to export
Select the custom virtual environment you wish to export by using the awx-manage export_custom_venv command.
Procedure
SSH into your automation controller instance and run:
$ awx-manage export_custom_venv /path/to/venv
The output from this command shows a pip freeze of what is in the specified virtual environment. This information can be copied into a requirements.txt file for Ansible Builder to use when creating a new automation execution environment image, as sketched below.
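As a rough sketch of that workflow (the package pins and image tag are illustrative; the definition follows the ansible-builder version 1 schema):

Copy the pip freeze output into requirements.txt, for example:

requests==2.25.1
netaddr==0.8.0

Reference it from an execution-environment.yml definition:

version: 1
dependencies:
  python: requirements.txt

Then build the image:

$ ansible-builder build --tag my_custom_ee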
Note: Pass the -q flag when running awx-manage list_custom_venvs to reduce output.
Chapter 3. Ansible content migration
If you are migrating from an earlier ansible-core version to ansible-core 2.12 or later, consider reviewing the Ansible Core Porting Guides to familiarize yourself with the changes and updates between each version. When reviewing the porting guides, ensure that you select the latest version of ansible-core or devel, which is located in the top left column of the guide.
For a list of fully supported and certified Ansible Content Collections, see Ansible Automation hub on console.redhat.com.
3.1. Migrating your Ansible playbooks and roles to Core 2.12
When you are migrating from non collection-based content to collection-based content, you should use the Fully Qualified Collection Names (FQCN) in playbooks and roles to avoid unexpected behavior.
Example playbook with FQCN:
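The original listing is not reproduced here; the following is a minimal sketch of a playbook using an FQCN from a collection (hostnames and the task are illustrative, and it assumes the ansible.posix collection is installed):

---
- name: Open the automation mesh port using an FQCN
  hosts: all
  become: true
  tasks:
    - name: Permit traffic on 27199/tcp with the ansible.posix collection
      ansible.posix.firewalld:
        port: 27199/tcp
        permanent: true
        state: enabled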
If you are using ansible-core modules and are not calling a module from a different collection, you should use the FQCN ansible.builtin.copy.
Example module with FQCN:
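The original listing is also missing here; a minimal sketch of a task using the builtin FQCN named above (file paths are illustrative):

- name: Copy a configuration file using the FQCN
  ansible.builtin.copy:
    src: /tmp/motd
    dest: /etc/motd
    mode: '0644'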