Deploying a high availability automation hub
Abstract
Overview of the requirements and procedures for a high availability deployment of automation hub.
Preface
This guide provides an overview of the requirements and procedures for a high availability deployment of your automation hub.
A high availability (HA) configuration increases reliability and scalability for automation hub deployments.
HA deployments of automation hub have multiple nodes that concurrently run the same service with a load balancer distributing workload (an "active-active" configuration). This configuration eliminates single points of failure to minimize service downtime and allows you to easily add or remove nodes to meet workload demands.
This guide covers deployment of the HA automation hub application stack only. Other HA components, such as database HA, file system HA, and DNS load balancing, are out of scope for this guide.
Making open source more inclusive
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.
Chapter 1. Requirements for a high availability automation hub
Before deploying a high availability (HA) automation hub, ensure that you have a shared filesystem installed in your environment and that you have configured your network storage system, if applicable.
1.2. Network Storage Installation Requirements
If you intend to install an HA automation hub using network storage on the automation hub nodes themselves, you must first install and use firewalld to open the ports required by your shared storage system before running the Ansible Automation Platform installer.
Install and configure firewalld by executing the following commands:
1. Install the firewalld daemon:

$ dnf install firewalld

2. Add your network storage service, replacing <service> with the service required by your shared storage system:

$ firewall-cmd --permanent --add-service=<service>

Note: For a list of supported services, use the firewall-cmd --get-services command.

3. Reload to apply the configuration:

$ firewall-cmd --reload
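For example, if your shared storage system requires the NFS-related services (an illustrative assumption; check your storage system's documentation, and note that NFSv4-only setups may need only the nfs service), the commands might look like this:

$ firewall-cmd --permanent --add-service=nfs --add-service=rpc-bind --add-service=mountd
$ firewall-cmd --reload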
Chapter 2. Installing a high availability automation hub
Configure the Ansible Automation Platform installer to install automation hub in a highly available (HA) configuration. Install HA automation hub on SELinux by creating mount points and adding the appropriate SELinux contexts to your Ansible Automation Platform environment.
2.1. Highly available automation hub installation
Install a highly available automation hub by making the following changes to the inventory file in the Ansible Automation Platform installer, then running the ./setup.sh script:
Specify database host IP
Specify the IP address for your database host, using the automationhub_pg_host and automationhub_pg_port fields. For example:
automationhub_pg_host='192.0.2.10'
automationhub_pg_port='5432'
Also specify the IP address for your database host in the [database] section, using the same value as in the automationhub_pg_host field:
[database]
192.0.2.10
List all instances in a clustered setup
If you are installing a clustered setup, replace localhost ansible_connection=local in the [automationhub] section with the hostname or IP address of each instance. For example:
[automationhub]
automationhub1.testing.ansible.com ansible_user=cloud-user ansible_host=192.0.2.18
automationhub2.testing.ansible.com ansible_user=cloud-user ansible_host=192.0.2.20
automationhub3.testing.ansible.com ansible_user=cloud-user ansible_host=192.0.2.22
Red Hat Single Sign-On requirements
If you are implementing Red Hat Single Sign-On on your automation hub environment, specify the main automation hub URL that clients will connect to, using the automationhub_main_url field. For example:
automationhub_main_url = 'https://automationhub.ansible.com'
If automationhub_main_url is not specified, the first node in the [automationhub] group is used as the default.
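Putting these settings together, a minimal HA inventory might look like the following sketch. The hostnames, addresses, and placement of variables under [all:vars] are illustrative assumptions; a real deployment also requires additional installer variables, such as passwords, that are not shown here:

[automationhub]
automationhub1.testing.ansible.com ansible_user=cloud-user ansible_host=192.0.2.18
automationhub2.testing.ansible.com ansible_user=cloud-user ansible_host=192.0.2.20
automationhub3.testing.ansible.com ansible_user=cloud-user ansible_host=192.0.2.22

[database]
192.0.2.10

[all:vars]
automationhub_pg_host='192.0.2.10'
automationhub_pg_port='5432'
automationhub_main_url = 'https://automationhub.ansible.com'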
2.2. Install a high availability (HA) deployment of automation hub on SELinux
To set up a high availability (HA) deployment of automation hub on SELinux, create two mount points for /var/lib/pulp and /var/lib/pulp/pulpcore_static, then assign the appropriate SELinux contexts to each. You must add the context for /var/lib/pulp/pulpcore_static and run the Ansible Automation Platform installer before adding the context for /var/lib/pulp.
Prerequisites
- You have already configured an NFS export on your server.
Pre-installation procedure
1. Create a mount point at /var/lib/pulp:

$ mkdir /var/lib/pulp/

2. Open /etc/fstab using a text editor, then add the following values:

srv_rhel8:/data /var/lib/pulp nfs defaults,_netdev,nosharecache 0 0
srv_rhel8:/data/pulpcore_static /var/lib/pulp/pulpcore_static nfs defaults,_netdev,nosharecache,context="system_u:object_r:httpd_sys_content_rw_t:s0" 0 0

3. Run the mount command for /var/lib/pulp:

$ mount /var/lib/pulp

4. Create a mount point at /var/lib/pulp/pulpcore_static:

$ mkdir /var/lib/pulp/pulpcore_static

5. Run the mount command:

$ mount -a

6. With the mount points set up, run the Ansible Automation Platform installer:

$ setup.sh -- -b --become-user root
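If the installer fails with missing-path or permission errors, a quick check (our suggestion, not part of the documented procedure) is to confirm that both NFS mounts are active:

$ findmnt /var/lib/pulp
$ findmnt /var/lib/pulp/pulpcore_static

Each command should print the srv_rhel8 source and the mount options set in /etc/fstab.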
Once the installation is complete, unmount the mount points, then apply the appropriate SELinux context to /var/lib/pulp:
Post-installation procedure
1. Shut down the Pulp service:

$ systemctl stop pulpcore.service

2. Unmount /var/lib/pulp/pulpcore_static:

$ umount /var/lib/pulp/pulpcore_static

3. Unmount /var/lib/pulp/:

$ umount /var/lib/pulp/

4. Open /etc/fstab using a text editor, then replace the existing value for /var/lib/pulp with the following:

srv_rhel8:/data /var/lib/pulp nfs defaults,_netdev,nosharecache,context="system_u:object_r:pulpcore_var_lib_t:s0" 0 0

5. Run the mount command:

$ mount -a
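To confirm that the new context took effect (an optional check of ours, not an upstream step), list the SELinux label on the remounted directory:

$ ls -dZ /var/lib/pulp

The output should show the pulpcore_var_lib_t type set in /etc/fstab.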
Configure pulpcore.service:
1. With the two mount points set up, shut down the Pulp service to configure pulpcore.service:

$ systemctl stop pulpcore.service

2. Edit pulpcore.service using systemctl:

$ systemctl edit pulpcore.service

3. Add the following entry to pulpcore.service to ensure that automation hub services start only after the network is up and the remote mount points are mounted:

[Unit]
After=network.target var-lib-pulp.mount

4. Enable remote-fs.target:

$ systemctl enable remote-fs.target

5. Reboot the system:

$ systemctl reboot
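After the reboot, you can verify the drop-in and the service state (an optional check, not part of the documented procedure):

$ systemctl cat pulpcore.service
$ systemctl status pulpcore.service

systemctl cat prints the unit file together with the drop-in created by systemctl edit, so the [Unit] entry above should appear in its output.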
Troubleshooting
A bug in the pulpcore SELinux policies can cause the token authentication public/private keys in /etc/pulp/certs/ to not have the proper SELinux labels, causing the Pulp process to fail. When this occurs, run the following command to temporarily attach the proper labels:
$ chcon system_u:object_r:pulpcore_etc_t:s0 /etc/pulp/certs/token_{private,public}_key.pem
You must repeat this command to reattach the proper SELinux labels whenever you relabel your system.
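Alternatively, you can record a persistent file-context rule so the labels survive future relabels. This is a sketch based on standard SELinux tooling (semanage is provided by the policycoreutils-python-utils package), not a step from this procedure:

$ semanage fcontext -a -t pulpcore_etc_t '/etc/pulp/certs/token_.*_key\.pem'
$ restorecon -v /etc/pulp/certs/token_private_key.pem /etc/pulp/certs/token_public_key.pem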
Additional Resources
- See the SELinux Requirements section of the Pulp Project documentation for a list of SELinux contexts.
- See the Filesystem Layout information for a full description of Pulp folders.