Chapter 3. Prerequisites

The side-by-side migration and upgrade to Ansible Automation Platform 2 requires two environments, Environment A and Environment B. Environment A should be your existing main cluster, as shown in Figure 1.1, “Environment A Architecture Overview”, consisting of:

  • 3 Ansible Tower 3.8.5 nodes running Red Hat Enterprise Linux 7, located in the Raleigh, NC data center
  • 1 Red Hat Enterprise Linux 7 database node
  • 2 bastion hosts (jump hosts used to access their corresponding isolated nodes)
  • 2 isolated nodes located in the Sacramento, CA data center
  • 2 isolated nodes located in the New Delhi, India data center

Initially, Environment B is a simplified Ansible Automation Platform 1.2 architecture as shown in Figure 1.2, “Initial Environment B Architecture Overview”. During the upgrade process to Ansible Automation Platform 2, a pool of Red Hat Enterprise Linux 8.4 servers expands the cluster to include:

  • 2 execution nodes accessible directly by the control plane nodes
  • 3 hop nodes (sacramento-hop, dublin-hop and new-delhi-hop)
  • 2 execution nodes accessible only via hop node sacramento-hop
  • 2 execution nodes accessible via hop nodes dublin-hop and new-delhi-hop

The final post-upgrade cluster architecture is shown in Figure 1.4, “Expanded Environment B Architecture Overview”.

Note

These nodes are not required to be physical servers.

3.1. Environment Specifications

Table 3.1. Environment specifications

Node Type    CPU    RAM      Disk
Control      4      16 GB    40 GB
Execution    4      16 GB    40 GB
Hop          4      16 GB    40 GB
Database     4      16 GB    150 GB+

Notes

  • Control nodes process events and run cluster jobs, including project updates and cleanup jobs. Increasing CPU and memory can help with job event processing. Dedicate a minimum of 20 GB to /var/ for file and working directory storage; the storage volume should be rated for a minimum baseline of 1500 IOPS.
  • Projects are stored on control and hybrid nodes and, for the duration of jobs, also on execution nodes. If the cluster has many large projects, consider doubling the space allocated to /var/lib/awx/projects to avoid disk space errors.
  • Execution nodes run automation. Increase memory and CPU to increase capacity for running more forks.
  • Hop nodes serve to route traffic from one part of the automation mesh to another (for example, a bastion host into another network). RAM could affect throughput, while CPU activity is low; network bandwidth and latency are generally more important factors than either RAM or CPU.
  • The database node's storage volume should be rated for a high baseline of IOPS (1500 or more). Extra disk space is required for the /var/lib/pgsql directory; if you cannot expand the base OS partition, consider allocating 150 GB+ to /var/lib/pgsql to avoid disk space errors.
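Before installation, you can quickly confirm that enough space is available behind the mount points called out above. A minimal check, run on each node (and against /var/lib/pgsql on the database node once that directory exists):

$ df -h /var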
Note

All automation controller data is stored in the PostgreSQL database. Database storage increases with the number of hosts managed, the number of jobs run, the number of facts stored in the fact cache, and the number of tasks in any individual job. For example, a playbook run every hour (24 times a day) across 250 hosts with 20 tasks generates 24 × 250 × 20 = 120,000 events per day, storing over 800,000 events in the database every week.

If not enough space is reserved in the database, old job runs and facts need to be cleaned up on a regular basis. Refer to Management Jobs in the Automation Controller Administration Guide for more information.

3.2. Network Requirements

Ansible Automation Platform 2 requires direct network connectivity between the automation controller instances and mesh worker nodes allocated to run control plane jobs.

Note

In the case of hop nodes that route traffic to execution nodes behind a DMZ or an isolated network, your Ansible Automation Platform cluster can span multiple networks. A network administrator should ensure that SSH connectivity is available to all nodes for installation purposes. In this reference environment, the hop nodes act as SSH proxies to reach their respective execution nodes.
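As an illustration of that SSH proxying, a minimal ~/.ssh/config sketch on the installation host might look like the following. The host names come from the Environment B table in this chapter, but the mapping of execution nodes to hop nodes and the use of ProxyJump are assumptions for this example, not installer configuration:

# ~/.ssh/config on the installation host (illustrative only)
# Reach the isolated execution nodes through their corresponding hop node.
Host envb_executionnode-3.example.com envb_executionnode-4.example.com
    User ansible
    ProxyJump ansible@envb_hopnode-sacramento.example.com

Host envb_executionnode-5.example.com envb_executionnode-6.example.com
    User ansible
    ProxyJump ansible@envb_hopnode-new-delhi.example.com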

To access the Ansible Automation Platform dashboard, a browser that can reach the network that the control plane nodes reside on is required. If you wish to access the Ansible Automation Platform dashboard externally, ensure that a public IP address is added to your control plane nodes.

It is recommended that network administrators provide a dedicated IP address and an appropriate DNS record for every node. Network administrators can assign a static IP address to the nodes and configure the DHCP server to either reserve the IP with an infinite lease or exclude the assigned addresses from its allocation range. This ensures that each node’s IP address remains constant in the absence of a DHCP server.

Note

For the purposes of this reference architecture, setup of a DHCP server, setting up DNS records and setting up a load balancer is out of scope.
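If you assign static addresses directly on the nodes instead, a minimal sketch with nmcli might look like the following; the connection name (eth0), address, gateway and DNS server are assumptions for this example:

$ sudo nmcli connection modify eth0 ipv4.method manual \
    ipv4.addresses 192.168.0.14/24 ipv4.gateway 192.168.0.1 ipv4.dns 192.168.0.1
$ sudo nmcli connection up eth0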

A network administrator should reserve at least the following IP addresses:

  1. One IP address for each control plane node.
  2. One IP address for each execution node.
  3. One IP address for each hop node.
  4. One IP address for the database node.

This reference environment reserves a total of 13 IP addresses for Environment B.

The following table lists the host names and IP addresses reserved for Environment B in this reference environment.

Usage                      Host Name                              IP
Automation Controller 1    envb_controller1.example.com           192.168.0.10
Automation Controller 2    envb_controller2.example.com           192.168.0.11
Automation Controller 3    envb_controller3.example.com           192.168.0.12
Control Plane Database     envb_database.example.com              192.168.0.13
Execution Node 1           envb_executionnode-1.example.com       192.168.0.14
Execution Node 2           envb_executionnode-2.example.com       192.168.0.15
Hop Node 1                 envb_hopnode-sacramento.example.com    192.168.0.16
Hop Node 2                 envb_hopnode-new-delhi.example.com     192.168.0.17
Hop Node 3                 envb_hopnode-dublin.example.com        192.168.0.18
Execution Node 3           envb_executionnode-3.example.com       10.14.1.11
Execution Node 4           envb_executionnode-4.example.com       10.14.1.12
Execution Node 5           envb_executionnode-5.example.com       10.15.1.11
Execution Node 6           envb_executionnode-6.example.com       10.15.1.12

3.3. Validation checklist for nodes

The following is a summary of all the requirements:

  • ❏ 16 GB of RAM for controller nodes, database node, execution nodes and hop nodes
  • ❏ 4 CPUs for controller nodes, database node, execution nodes and hop nodes
  • ❏ 150 GB+ disk space for database node
  • ❏ 40 GB+ disk space for non-database nodes
  • ❏ DHCP reservations use infinite leases to deploy the cluster with static IP addresses.
  • ❏ DNS records for all nodes
  • ❏ Red Hat Enterprise Linux 8.4 or later 64-bit (x86) installed on all nodes
  • ❏ Chrony configured on all nodes
Note

Check Section 3.1, “Environment Specifications” for more detailed notes on node requirements.

3.4. Automation controller Configuration Details

This reference architecture focuses on the migration and upgrade of Ansible Automation Platform 1.2 to Ansible Automation Platform 2. The configuration is intended to provide a comprehensive Ansible Automation Platform solution that covers side-by-side migration scenarios. The key solution components covered within this reference architecture consist of:

  • Ansible Automation Platform 1.2
  • Ansible Automation Platform 2
  • Ansible Automation Platform Installer
  • Ansible Playbooks for custom Python virtual environment migration

3.4.1. OS Configuration

3.4.1.1. Chrony Configuration

Each Ansible Automation Platform node in the cluster must have access to an NTP server. chronyd is a daemon that synchronizes the system clock with NTP servers. This ensures that cluster nodes using SSL certificates that require validation do not fail because the date and time between the nodes are out of sync.

On all the nodes in Environment B, including those to be used for the Ansible Automation Platform cluster expansion, perform the following steps:

  1. If chrony is not installed, install it as follows:

    # dnf install chrony --assumeyes
  2. Edit the /etc/chrony.conf file with a text editor such as vi.

    # vi /etc/chrony.conf
  3. Locate the following public server pool section, and modify it to include the appropriate servers. Only one server is required, but three are recommended. The iburst option is added to speed up the time it takes to properly synchronize with the servers.

    # Use public servers from the pool.ntp.org project.
    # Please consider joining the pool (http://www.pool.ntp.org/join.html).
    server <ntp-server-address> iburst
  4. Save all the changes within the /etc/chrony.conf file.
  5. Start the chronyd daemon and enable it to start when the host boots.

    # systemctl --now enable chronyd.service
  6. Verify the chronyd daemon status.

    # systemctl status chronyd.service
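Optionally, confirm that the node is actually synchronizing with an NTP source using the chronyc client:

$ chronyc sources
$ chronyc tracking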

3.4.1.2. Red Hat Subscription Manager

The subscription-manager command registers a system to the Red Hat Network (RHN) and manages the subscription entitlements for the system. The --help option queries the command for its available options. If --help is issued along with a command directive, the options available for that specific command directive are listed.

To use Red Hat Subscription Management to provide packages to a system, the system must first register with the service. To register a system, use the subscription-manager command with the register command directive. If the --username and --password options are specified, the command does not prompt for RHN authentication credentials.

An example of registering a system using subscription-manager is shown below.

Note

The following should be done on all the nodes within Environment B including those to be used for the Ansible Automation Platform cluster expansion.

# subscription-manager register --username [User] --password '[Password]'
The system has been registered with id: abcd1234-ab12-ab12-ab12-481ba8187f60

After a system is registered, it must be attached to an entitlement pool. For the purposes of this reference environment, the Red Hat Ansible Automation Platform pool is chosen. To identify and subscribe to the Red Hat Ansible Automation Platform entitlement pool, the following command directives are required.

# subscription-manager list --available | grep -A8 "Red Hat Ansible Automation Platform"
---
Subscription Name:   Red Hat Ansible Automation Platform, Premium (5000 Managed Nodes)
Provides:            Red Hat Ansible Engine
                     Red Hat Single Sign-On
                     Red Hat Ansible Automation Platform
SKU:                 MCT3695
Contract:            <contract>
Pool ID:             <pool_id>
Provides Management: No
Available:           9990
Suggested:           1
Service Type:        L1-L3
Roles:
# subscription-manager attach --pool <pool_id>
Successfully attached a subscription for: Red Hat Ansible Automation Platform, Premium (5000 Managed Nodes)
# subscription-manager repos --enable=ansible-automation-platform-2.1-for-rhel-8-x86_64-rpms
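To confirm that the repository is now enabled, a quick check such as the following can be used:

# dnf repolist --enabled | grep ansible-automation-platform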
Note

As part of the Ansible Automation Platform subscription, 10 Red Hat Enterprise Linux (RHEL) subscriptions are available to utilize RHEL for your automation controllers, private automation hubs and other Ansible Automation Platform components.

3.4.1.3. User Accounts

Prior to the installation of Ansible Automation Platform 2, it is recommended to create a non-root user with sudo privileges for the deployment process. This user is used for:

  • SSH connectivity
  • Passwordless authentication during installation
  • Privilege escalation (sudo) permissions

For the purposes of this reference environment, the user ansible was chosen; however, any user name would suffice.

On all the nodes in Environment B, including those to be used for the Ansible Automation Platform cluster expansion, create a user named ansible and generate an SSH key.

  1. Create a non-root user

    # useradd ansible
  2. Set a password for your ansible user.

    # passwd ansible
  3. Generate an ssh key as the ansible user.

    $ ssh-keygen -t rsa
  4. Disable password requirements when using sudo as the ansible user

    # echo "ansible ALL=(ALL) NOPASSWD:ALL" | sudo tee -a /etc/sudoers.d/ansible
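To verify that passwordless sudo works for the new account, a quick check such as the following can be used; it should print root without prompting for a password:

# su - ansible -c 'sudo -n whoami'
root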

3.4.1.4. Copying SSH keys to all nodes

With the ansible user created, copy the SSH key, as the ansible user, to all the nodes in Environment B, including those to be used for the Ansible Automation Platform cluster expansion. This ensures that when the Ansible Automation Platform installation runs, it can connect over SSH to all the nodes without a password.

This can be done using the ssh-copy-id command as follows:

$ ssh-copy-id ansible@hostname.example.com
Note

If running within a cloud provider, you may need to instead create a ~/.ssh/authorized_keys file containing the public key for the ansible user on all your nodes, and set the permissions on the authorized_keys file so that only the owner (ansible) has read and write access (permissions 600).
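A minimal sketch of that manual approach, run as the ansible user on each node and assuming the public key has been copied to a local file named ansible_id_rsa.pub (a hypothetical file name for this example):

$ mkdir -p ~/.ssh && chmod 700 ~/.ssh
$ cat ansible_id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys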

3.4.1.5. Configuring Firewall Settings

Firewall access and restrictions play a critical role in securing your Ansible Automation Platform 2 environment. Red Hat Enterprise Linux 8 defaults to firewalld, a dynamic firewall daemon. firewalld works by assigning network zones, which define a level of trust for a network and its associated connections and interfaces.

It is recommended that firewall settings be configured to permit access to the appropriate services and ports for a successful Ansible Automation Platform 2 installation.

On all the nodes within Environment B including those to be used for the Ansible Automation Platform cluster expansion, ensure that firewalld is installed, started and enabled.

  1. Install the firewalld package

    # dnf install firewalld --assumeyes
  2. Start the firewalld service

    # systemctl start firewalld
  3. Enable the firewalld service

    # systemctl enable firewalld
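To confirm that the daemon is active, check its state:

# firewall-cmd --state
running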

3.5. Ansible Automation Platform Configuration Details

3.5.1. Configuring firewall settings for execution and hop nodes

For a successful upgrade to Ansible Automation Platform 2, one of the prerequisites is to enable the automation mesh port on the mesh nodes (execution and hop nodes). The default port used for the mesh network on all the nodes is 27199/tcp; however, it can be configured to use a different port by specifying receptor_listener_port as a variable for each node within your inventory file, as illustrated in the sketch below. More details on modifying the inventory file can be found within Section 4.3, “Upgrade Environment B to Ansible Automation Platform 2”.
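As an illustration only, a per-node override of the mesh listener port in the installer inventory might look like the following sketch; the group name, host selection and port value are assumptions for this example rather than settings used in this reference environment:

[execution_nodes]
envb_executionnode-1.example.com
envb_executionnode-2.example.com receptor_listener_port=29182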

Note

For this reference environment, all the Ansible Automation Platform 2 controller nodes are designated as node type control. If control nodes are instead designated as hybrid nodes (the default node type), they also require the mesh port (default: 27199/tcp) to be enabled.

On your hop and execution nodes, as the ansible user, open the firewalld port to be used for the installation.

  1. Ensure that firewalld is running.

    $ sudo systemctl status firewalld
  2. Add the mesh port to firewalld (for example, port 27199)

    $ sudo firewall-cmd --permanent --zone=public --add-port=27199/tcp
  3. Reload firewalld

    $ sudo firewall-cmd --reload
  4. Confirm that the port is open

    $ sudo firewall-cmd --list-ports