Chapter 5. Migrate virtual environments to execution environments
Ansible Automation Platform 2 comes with a re-imagined architecture that fully decouples the automation control plane and execution plane. The new capabilities enable you to scale automation more easily across the globe and to run your automation as close to the source as possible, without being bound to a single data center. It is more dynamic, scalable, resilient, and flexible than Ansible Automation Platform 1.2.
Automation execution environments, introduced in Ansible Automation Platform 2, are container images that package everything needed to run automation, including key components such as Ansible Core, Ansible Content Collections, Python dependencies, the Red Hat Enterprise Linux UBI 8 base image, and any additional package dependencies.
This chapter focuses on migrating your custom Python virtual environments in your Ansible Automation Platform 1.2 cluster to user-built automation execution environments.
This one-time effort opens the door to the latest Ansible Automation Platform 2 capabilities and to consistent automation execution across multiple platforms with lower long-term maintenance.
User-built execution environments must be hosted within private automation hub or a container registry. For more information on how to install private automation hub, see our Deploying Ansible Automation Platform 2.1 reference architecture.
5.1. Automating the migration of virtual environments to execution environments
For simplicity, we include supplemental Ansible Playbooks that automate the process with a single Ansible command.
For completeness, the manual process consists of:
- An Ansible Automation Platform 1.2 environment with custom Python virtual environments
- Using the awx-manage command line utility to get a list of the custom Python virtual environments
- Running the awx-manage export_custom_venv command on each Python virtual environment to get the list of Python packages installed
- Checking the associations of a Python virtual environment using the awx-manage custom_venv_associations command
- Filtering the above information to create execution environments using the ansible-builder tool
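If you prefer to walk through the manual steps yourself, they map to a short sequence of awx-manage commands run on an Ansible Tower node (the virtual environment path below is illustrative):

```shell
# List the custom Python virtual environments known to the platform
$ awx-manage list_custom_venvs

# Export the pip-style list of packages installed in one virtual environment
$ awx-manage export_custom_venv /opt/my-envs/custom-venv1/

# Show which organizations, projects, and job templates are associated with it
$ awx-manage custom_venv_associations /opt/my-envs/custom-venv1/
```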
The automated process consists of:
- Pulling a list of packages from each custom Python virtual environment present on the Ansible Automation Platform 1.2 environment
- Comparing the package lists from the previous step with the package list of the Ansible-2.9[1] execution environment to find the packages that are not present in the base Ansible-2.9 execution environment
- Creating a new custom execution environment that uses the Ansible-2.9 execution environment as the base and includes the missing dependencies from the list in the previous step
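The comparison step is, at its core, a set difference between two package lists. As a minimal sketch (the package names here are made up for illustration, not taken from a real execution environment), the delta can be computed with standard shell tools:

```shell
# Hypothetical package list exported from a custom virtual environment
printf '%s\n' certifi charset-normalizer requests urllib3 | sort > venv_pkgs.txt

# Hypothetical package list from the base execution environment
printf '%s\n' requests urllib3 | sort > base_pkgs.txt

# comm -23 prints lines unique to the first (venv) list:
# these are the packages missing from the base image
comm -23 venv_pkgs.txt base_pkgs.txt
```

Here the delta would be certifi and charset-normalizer; the automated role performs an equivalent comparison for you.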
To see what this may look like, let's take the following example.
In our existing Ansible Automation Platform 1.2, there are two custom Python virtual environments labeled custom-venv1 and custom-venv2.
Using the virtualenv_migrate role packaged in the redhat_cop.ee_utilities collection, we run it against our Ansible Automation Platform 1.2 environment via ssh access to the Ansible Tower node to extract the packages and their versions that are not currently part of the base execution environment we are comparing against (the Ansible 2.9 execution environment).
The redhat_cop.ee_utilities collection is a community project and is not officially supported by Red Hat.
A sample playbook and inventory file respectively of the environment are found below:
playbook.yml
---
- name: Review custom virtualenvs and pull requirements
  hosts: enva_tower
  become: true
  tasks:
    - name: Include venv role
      include_role:
        name: redhat_cop.ee_utilities.virtualenv_migrate
Inventory
[tower]
ansibletower.example.com ansible_ssh_private_key_file=/path/to/example.pem

[all:vars]
###############################################################################
# Required configuration variables for migration from venv -> EE              #
###############################################################################

# The default URL location to the execution environment (Default Ansible 2.9)
# If you want to use the newest Ansible base, change to: ee-minimal-rhel8:latest
venv_migrate_default_ee_url="registry.redhat.io/ansible-automation-platform-21/ee-29-rhel8:latest"

# User credential for access to venv_migrate_default_ee_url
registry_username='myusername'
- Add ansible_user=<ANSIBLE_USER> based on the user needed to ssh into the Ansible Tower node.
- This reference environment takes advantage of encrypted credentials and does not include passwords in plain text. Details on how to use ansible-vault to encrypt your registry credentials can be found in Appendix C, Creating an encrypted credentials.yml file. An encrypted credentials.yml file is used to supply registry_password.
This role requires sudo privileges in order to run the podman commands.
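With the inventory and playbook in place, the run itself is a single command. Assuming the vault-encrypted variables file from Appendix C is named credentials.yml, it might look like:

```shell
$ ansible-playbook -i inventory playbook.yml -e @credentials.yml --ask-vault-pass
```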
The sample output of the Ansible playbook shows a list of the additional packages that are required by each custom Python virtual environment. In this case, you'll notice that the custom-venv1 Python virtual environment requires the following packages in addition to what is already part of the Ansible 2.9 execution environment:
- certifi
- charset-normalizer
- enum34
- future
- solidfire-sdk-python
Meanwhile, the custom-venv2 Python virtual environment only requires zabbix-api in addition to what is already part of the standard Ansible-2.9 execution environment.
The Ansible 2.9 execution environment is used for comparison against the custom Python virtual environments because most Ansible Automation Platform 1.2 environments run Ansible 2.9. This ensures an easier migration due to backward compatibility.
TASK [redhat_cop.tower_utilities.virtualenv_migrate : diff | Show the packages that are extra from default EEs in custom venvs.] ******************************************************************************
ok: [3.228.23.40 -> localhost] => {
    "msg": [
        {
            "/opt/my-envs/custom-venv1/": [
                "certifi",
                "charset-normalizer",
                "enum34",
                "future",
                "solidfire-sdk-python"
            ]
        },
        {
            "/opt/my-envs/custom-venv2/": [
                "zabbix-api"
            ]
        }
    ]
}
Once the packages are captured for each custom Python virtual environment, the Ansible playbook uses the ee_builder role, also part of the redhat_cop.ee_utilities collection, to automate the creation of execution environments in the local user environment.
Before running the supplied Ansible playbook, install ansible-builder on your localhost machine. The playbook run then creates execution environments on your local machine based upon the package delta between each custom Python virtual environment and the base execution environment supplied.
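Under the hood, ansible-builder consumes an execution environment definition file. A minimal sketch of such a definition (illustrative content, using the ansible-builder version 1 schema; the ee_builder role generates something equivalent) could look like:

```yaml
# execution-environment.yml (illustrative)
version: 1
build_arg_defaults:
  # Base image to layer the missing packages on top of
  EE_BASE_IMAGE: 'registry.redhat.io/ansible-automation-platform-21/ee-29-rhel8:latest'
dependencies:
  # requirements.txt holds the package delta found earlier, e.g.
  # certifi, charset-normalizer, enum34, future, solidfire-sdk-python
  python: requirements.txt
```

The image itself is then built with a command along the lines of `ansible-builder build --tag custom-venv1 --container-runtime podman`.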
$ podman images
REPOSITORY               TAG     IMAGE ID
localhost/custom-venv2   latest  c017418d1919
localhost/custom-venv1   latest  7cbe3b49974d
localhost/custom-venv    latest  9d5d809f38b0
5.1.1. Pushing to private automation hub
With the local execution environments in place, you can push them to your private automation hub with the following commands:
In this reference architecture, we keep the name of the automation execution environments as the name of the custom Python virtual environments for simplicity. If a change in name is required, use the podman tag command before pushing the execution environments to your private automation hub or a container registry of choice.
$ podman login [automation-hub-url]
# Enter the username and password to access Private Automation Hub.
$ podman tag [image-id] [automation-hub-url]/[container image name]
$ podman push [automation-hub-url]/[container image name]
For more information, visit: Managing containers in private automation hub
Once there, synchronize the execution environments with your automation controller by creating the registry credentials for your private automation hub inside the controller user interface.
To create your registry credentials within automation controller:
- Select Resources→Credentials
- Within Credentials, select the blue Add button
In the Create New Credentials window,
- provide a Name, e.g. My private automation hub credentials
- under Credential Type, select the drop-down and select Container Registry
Under Type Details,
- provide the Authentication URL, e.g. pah.example.com
- provide your private automation hub username within the Username field
- provide your private automation hub password or token within the Password or Token field
- select Verify SSL if your private automation hub environment supports SSL
- Click Save
To make the execution environments available within automation controller, create a new execution environment that will pull the images from your private automation hub.
Within automation controller,
- Select Administration→Execution Environments
- Within Execution Environments, select the blue Add button
In the Create new execution environment window,
- provide a Name, e.g. my execution environment
- provide the image location of the execution environment, e.g. repo/project/image-name:tag
- select the Registry credential magnifying glass
- click the radio button for your private automation hub credentials, e.g. my private automation hub credentials
With the execution environments now available within automation controller, they can be used against any existing Job Templates or newly created Job Templates.
When creating new user-built execution environments that are not constrained by backward compatibility requirements, it is recommended to use the ee-minimal execution environment as the base execution environment to build your new images against. Otherwise, use the ansible-2.9 execution environment to best mimic the execution plane environment of Ansible Automation Platform 1.2.