Chapter 3. Configuring Ansible Automation Platform to use egress proxy
You can deploy Ansible Automation Platform so that outbound (egress) traffic from the platform passes correctly through proxy servers.
An egress proxy allows clients to make indirect requests, through a proxy server, to network services.
The client first connects to the proxy server and requests some resource, for example, email, located on another server. The proxy server then connects to the specified server and retrieves the resource from it.
3.1. Overview
The egress proxy must be configured at both the system and component level of Ansible Automation Platform, for both the RPM and containerized installation methods. For containerized installations, configuring the system proxy for Podman on the nodes resolves most proxy access problems. For RPM installations, both system and component configuration are required.
3.1.1. Proxy backends
For HTTP and HTTPS proxies you can use a Squid server. Squid is a forward proxy for the web supporting HTTP, HTTPS, and FTP, reducing bandwidth and improving response times by caching and reusing frequently requested web pages. It is licensed under the GNU GPL.
Forward proxies are systems that intercept network traffic going to another network (typically the internet) and send it on behalf of the internal systems. The Squid proxy enables all required communication to pass through it.
Make sure all the required Ansible Automation Platform control plane ports are opened on the squid proxy backend. Ansible Automation Platform-specific ports:
acl Safe_ports port 81
acl Safe_ports port 82
acl Safe_ports port 389
acl Safe_ports port 444
acl Safe_ports port 445
acl SSL_ports port 22
The following ports are for containerized installations:
acl SSL_ports port 444
acl SSL_ports port 445
acl SSL_ports port 8443
acl SSL_ports port 8444
acl SSL_ports port 8445
acl SSL_ports port 8446
acl SSL_ports port 44321
acl SSL_ports port 44322
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
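The ACL lines above extend Squid's default safe-port and SSL-port lists. A minimal squid.conf sketch showing where they fit is below; the 3128 listening port is Squid's default, and the local network range is an example you must adjust to your environment:

```
# /etc/squid/squid.conf (sketch; listening port and local network are examples)
http_port 3128
acl localnet src 10.0.0.0/8

# Default safe ports, plus the Ansible Automation Platform-specific
# Safe_ports and SSL_ports entries listed above
acl Safe_ports port 80
acl Safe_ports port 443
acl SSL_ports port 443

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access deny all
```

You can validate the configuration with `squid -k parse` before reloading the service.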
3.2. System proxy configuration
An outbound proxy (egress proxy) is a server that acts as an intermediary for requests from clients seeking resources from other servers on the internet. It is used to regulate and secure client traffic, and to provide caching services to improve performance.
The outbound proxy is configured on the system level for all the nodes in the control plane.
You must set the following environment variables:
http_proxy="http://external-proxy_0:3128"
https_proxy="http://external-proxy_0:3128"
no_proxy="localhost,127.0.0.0/8,10.0.0.0/8"
You can also add these variables to the /etc/environment file to make them permanent.
The installation program ensures that all external communication during the installation goes through the proxy. For containerized installation, those variables ensure that Podman uses the egress proxy.
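Appending the variables to /etc/environment can be scripted with a heredoc. The sketch below writes to a temporary file standing in for /etc/environment so it can be run safely without root; the proxy host and port are examples:

```shell
# Stand-in for /etc/environment so the sketch can run without root;
# on a real node you would append to /etc/environment itself
ENV_FILE=$(mktemp)

# Proxy host and port are examples; use your proxy server's address
cat >> "$ENV_FILE" <<'EOF'
http_proxy="http://external-proxy_0:3128"
https_proxy="http://external-proxy_0:3128"
no_proxy="localhost,127.0.0.0/8,10.0.0.0/8"
EOF

# Confirm all three variables were written
grep -c '_proxy' "$ENV_FILE"
```

After updating the real file, log out and back in (or source the file) so the variables take effect for new sessions.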
3.3. Automation controller settings
After using the RPM installation program, you must configure automation controller to use the egress proxy.
This is not required for containerized installations because Podman uses the system-configured proxy and routes all container traffic through it.
For automation controller, set the AWX_TASK_ENV variable in /api/v2/settings/. To do this through the UI use the following procedure:
Procedure
- From the navigation panel, select Settings.
- Select Jobs settings and click Edit.
- Add the variables to the Extra Environment Variables field and set:

"AWX_TASK_ENV": {
  "http_proxy": "http://external-proxy_0:3128",
  "https_proxy": "http://external-proxy_0:3128",
  "no_proxy": "localhost,127.0.0.0/8"
}
3.3.1. Configuring SCM project sync using SSH to work with a proxy in automation controller
The following procedure for RPM-based Ansible Automation Platform describes how to configure automation controller project syncs that use the SSH protocol to work through a proxy server.
Procedure
Perform the following steps on the automation controller nodes. If ansible-builder has not been installed yet, install it first:

# subscription-manager repos --enable ansible-automation-platform-2.6-for-rhel-8-x86_64-rpms
# dnf install ansible-builder

Build a custom execution environment. First, create a work directory:

# su - awx
$ mkdir -p builder/newee
$ cd builder/newee

Create an execution-environment.yml file with the following content:

version: 1
build_arg_defaults:
  EE_BASE_IMAGE: 'registry.redhat.io/ansible-automation-platform-24/ee-supported-rhel8:latest'
additional_build_steps:
  prepend:
    - RUN microdnf install -y nc

Log in to registry.redhat.io:

$ podman login registry.redhat.io

Run ansible-builder to start the building process:

$ cd /var/lib/awx/builder/newee/
$ ansible-builder build -t my-env -v 3

Add the custom execution environment you created.
- On the navigation panel, select Execution Environments.
- Click Add.
- In the Image field, add localhost/my-env:latest.
- Click Save.
Re-run the Ansible Automation Platform installation program with the following steps to replace the default execution environment with the customized environment, which is used for project syncs.
Note: Back up Ansible Automation Platform before running the installation program.
# ./setup.sh -b

Create an automationcontroller file under the group_vars directory in the same location as the setup.sh file. The file contents are as follows:

control_plane_execution_environment: localhost/my-env

Run setup.sh:

# ./setup.sh

Create an ssh_config file. For example:

Host github.com
  Hostname ssh.github.com
  ProxyCommand nc --proxy-type http --proxy proxy.example.com:port %h %p
  User git
Add the ssh_config file's directory path to the paths exposed to isolated jobs so that the container execution environment can read the ssh_config file.

- In the navigation panel, select Settings.
- Select Jobs settings and click Edit.
- If the ssh_config file has been created as /var/lib/awx/.ssh/ssh_config, add the following to Paths to expose to isolated jobs:

[ "/var/lib/awx/.ssh:/etc/ssh:O" ]

Note: Ensure ssh_config is owned by the awx user (# chown awx:awx /var/lib/awx/.ssh/ssh_config).
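You can check that an ssh_config file of this shape resolves as expected without any network access by using ssh -G, which prints the effective client configuration for a host. A small sketch, with an example proxy address:

```shell
# Write a sample ssh_config like the one above to a temporary file
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
Host github.com
  Hostname ssh.github.com
  ProxyCommand nc --proxy-type http --proxy proxy.example.com:3128 %h %p
  User git
EOF

# ssh -G resolves the configuration without connecting
ssh -G -F "$CFG" github.com | grep -i -E 'hostname|proxycommand|^user '
```

The output should show the rewritten hostname, the proxy command, and the git user, confirming the proxy will be used for the sync.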
3.4. Enabling a configurable proxy environment for AWS inventory synchronization
To enable a configurable proxy environment for AWS inventory synchronization, you can manually edit the override configuration file or set the configuration in the platform UI:
Manually edit /usr/lib/systemd/system/receptor.service.d/override.conf and add the following proxy environment variables there:

http_proxy:<value>
https_proxy:<value>
proxy_username:<value>
proxy_password:<value>
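Note that systemd drop-in files declare variables with Environment= directives under a [Service] section, so a well-formed override.conf would look like the following sketch; the proxy address is an example placeholder:

```
# /usr/lib/systemd/system/receptor.service.d/override.conf (sketch)
[Service]
Environment="http_proxy=http://proxy.example.com:3128"
Environment="https_proxy=http://proxy.example.com:3128"
```

After editing the drop-in, run systemctl daemon-reload and restart the receptor service for the change to take effect.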
Alternatively, to do this through the UI, use the following procedure:
Procedure
- From the navigation panel, select Settings.
- Select Jobs settings and click Edit.
- Add the variables to the Extra Environment Variables field. For example:

"AWX_TASK_ENV": {
  "no_proxy": "localhost,127.0.0.0/8,10.0.0.0/8",
  "http_proxy": "http://proxy_host:3128/",
  "https_proxy": "http://proxy_host:3128/"
}
3.5. Configuring proxy settings on automation hub
If your private automation hub is behind a network proxy, you can configure proxy settings on the remote to sync content located outside of your local network.
Prerequisites
- You have Modify Ansible repo content permissions. For more information on permissions, see Access management and authentication.
- You have a requirements.yml file that identifies the collections to synchronize from Ansible Galaxy, as in the following example:

requirements.yml example
collections:
# Install a collection from Ansible Galaxy.
- name: community.aws
version: 5.2.0
source: https://galaxy.ansible.com
Procedure
- Log in to Ansible Automation Platform.
- From the navigation panel, select Remotes.
- In the Details tab in the Community remote, click Edit.
- In the YAML requirements field, paste the contents of your requirements.yml file.
- Click Save.
Result
You can now synchronize collections identified in your requirements.yml file from Ansible Galaxy to your private automation hub.
Next steps
See Synchronizing Ansible content collections in automation hub for syncing steps.
3.6. Configuring proxy settings on Event-Driven Ansible
For Event-Driven Ansible, there are no global settings to set a proxy. You must specify the proxy for every project.
Procedure
- From the navigation panel, select Projects.
- Click Create project, or open an existing project and click Edit project.
- Use the Proxy field to specify the proxy for the project.
3.7. Configuring proxy settings for automation mesh
You can route outbound communication from the receptor on an automation mesh node through a proxy server. If your proxy does not strip out TLS certificates, an installation of Ansible Automation Platform automatically supports the use of a proxy server.
Every node on the mesh must have a Certifying Authority that the installation program creates on your behalf.
The default install location for the Certifying Authority is:
/etc/receptor/tls/ca/mesh-CA.crt
The certificates and keys created on your behalf use the nodeID for their names:
For the certificate: /etc/receptor/tls/NODEID.crt
For the key: /etc/receptor/tls/NODEID.key
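The CA and per-node certificates can be inspected with openssl. The sketch below generates a throwaway self-signed CA in a temporary directory so it runs anywhere; on a real mesh node you would point the same inspection command at /etc/receptor/tls/ca/mesh-CA.crt:

```shell
# Create a throwaway CA to demonstrate the inspection commands;
# the subject name is an example
TLSDIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=Example Mesh CA" \
    -keyout "$TLSDIR/mesh-CA.key" -out "$TLSDIR/mesh-CA.crt" 2>/dev/null

# Print the subject and expiry, as you would for /etc/receptor/tls/ca/mesh-CA.crt
openssl x509 -in "$TLSDIR/mesh-CA.crt" -noout -subject -enddate
```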
3.8. Configure operator-based Ansible Automation Platform to use egress proxy
When installing Ansible Automation Platform using the operator-based installation method, you can configure the platform to use an egress proxy.
When creating an automationcontroller instance, select the YAML view and add an extra_settings entry for the AWX_TASK_ENV parameter in the spec definition, as follows:
apiVersion: automationcontroller.ansible.com/v1beta1
kind: AutomationController
metadata:
  name: example
  namespace: ansible-automation-platform
spec:
  create_preload_data: true
  route_tls_termination_mechanism: Edge
  garbage_collect_secrets: false
  ingress_type: Route
  loadbalancer_port: 80
  no_log: true
  image_pull_policy: IfNotPresent
  projects_storage_size: 8Gi
  auto_upgrade: true
  task_privileged: false
  projects_storage_access_mode: ReadWriteMany
  set_self_labels: true
  projects_persistence: false
  replicas: 1
  admin_user: admin
  loadbalancer_protocol: http
  nodeport_port: 30080
  extra_settings:
    - setting: AWX_TASK_ENV
      value:
        HTTPS_PROXY: 'https://192.168.0.XXX:3128'
        HTTP_PROXY: 'https://192.168.0.XXX:3128'
        NO_PROXY: 10.0.0.0/8
        http_proxy: 'https://192.168.0.XXX:3128'
        https_proxy: 'https://192.168.0.XXX:3128'
        no_proxy: 10.0.0.0/8
3.8.1. Modification for a deployed instance
The configuration is stored as a ConfigMap resource. You can view it in the OCP console under ConfigMaps > <instancename>-automationcontroller-configmap.
To modify the settings after deployment, use the Operator.
After editing extra_settings, redeploy the instance. In the OCP console, go to Deployments > your instance, decrease the Pod count to 0, and then increase it back. You can also redeploy it with the command-line utility as follows:
$ oc scale --replicas=0 deployment.apps/<instancename> -n ansible-automation-platform
deployment.apps/<instancename> scaled
$ oc scale --replicas=1 deployment.apps/<instancename> -n ansible-automation-platform
deployment.apps/<instancename> scaled
Verification
View the settings in the web UI at Settings > Jobs settings > Extra Environment Variables. If you need to set another value, you can define it in the same way. The extra_settings values are stored statically in the /etc/tower/settings.py file in the automationcontroller instance.