Chapter 9. Managing Capacity With Instances
Scaling your automation mesh is available on OpenShift deployments of Red Hat Ansible Automation Platform: you can add or remove nodes from your cluster dynamically through the Instances resource of the automation controller UI, without running the installation script.
Instances serve as nodes in your mesh topology. Automation mesh enables you to extend the footprint of your automation. The location where you launch a job can be different from the location where the ansible-playbook runs.
To manage instances from the automation controller UI, you must have System Administrator or System Auditor permissions.
In general, the more processor cores (CPU) and memory (RAM) a node has, the more jobs can be scheduled to run on that node at once.
For more information, see Automation controller capacity determination and job impact.
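As a rough illustration of that sizing guidance, the following sketch estimates a node's fork capacity. The constants are assumptions taken from the default capacity algorithm (4 forks per CPU core, 100 MB of memory per fork, with about 2 GB reserved for the system); verify them against the linked document for your controller version:

# Rough fork-capacity estimate for a node; the constants are assumptions.
CPU=$(nproc)
MEM_MB=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)
echo "cpu_capacity: $((CPU * 4))"
echo "mem_capacity: $(( (MEM_MB - 2048) / 100 ))"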
9.1. Prerequisites
The automation mesh is dependent on hop and execution nodes running on Red Hat Enterprise Linux (RHEL). Your Red Hat Ansible Automation Platform subscription grants you ten Red Hat Enterprise Linux licenses that can be used for running components of Ansible Automation Platform.
For additional information about Red Hat Enterprise Linux subscriptions, see Registering the system and managing subscriptions in the Red Hat Enterprise Linux documentation.
The following steps prepare the RHEL instances for deployment of the automation mesh.
- Each node in the mesh requires the Red Hat Enterprise Linux operating system and either a static IP address or a resolvable DNS hostname that automation controller can access.
- Ensure that you have the minimum requirements for the RHEL virtual machine before proceeding. For more information, see the Red Hat Ansible Automation Platform system requirements.
- Deploy the RHEL instances within the remote networks where communication is required. For information about creating virtual machines, see Creating Virtual Machines in the Red Hat Enterprise Linux documentation. Remember to scale the capacity of your virtual machines sufficiently so that your proposed tasks can run on them.
- RHEL ISOs can be obtained from access.redhat.com.
- RHEL cloud images can be built using Image Builder from console.redhat.com.
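Before you continue, you can sanity-check that each prepared instance resolves and is reachable over SSH. The hostname and user below are placeholders for your environment:

getent hosts execution-node.example.com
ssh -o ConnectTimeout=5 <username>@execution-node.example.com true && echo reachable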
9.2. Pulling the secret
If you are using the default execution environment (provided with automation controller) to run on remote execution nodes, you must add a pull secret in the automation controller that contains the credential for pulling the execution environment image.
To do this, create a pull secret in the automation controller namespace and configure the ee_pull_credentials_secret parameter in the Operator as follows:
Procedure
Create a secret:

oc create secret generic ee-pull-secret \
  --from-literal=username=<username> \
  --from-literal=password=<password> \
  --from-literal=url=registry.redhat.io

Then edit the automation controller resource:

oc edit automationcontrollers <instance name>
Add ee_pull_credentials_secret: ee-pull-secret to the specification:

spec:
  ee_pull_credentials_secret: ee-pull-secret
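After the edit, the relevant portion of the resource looks similar to the following sketch. The instance name my-controller is a placeholder, and the apiVersion shown assumes the current Ansible Automation Platform operator; verify it on your cluster with oc explain automationcontroller:

apiVersion: automationcontroller.ansible.com/v1beta1
kind: AutomationController
metadata:
  name: my-controller
spec:
  ee_pull_credentials_secret: ee-pull-secret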
9.3. Setting up Virtual Machines for use in an automation mesh
Procedure
SSH into each of the RHEL instances and perform the following steps. Depending on your network access and controls, SSH proxies or other access models might be required.
Use the following command:
ssh [username]@[host_ip_address]
For example, for an Ansible Automation Platform instance running on Amazon Web Services:
ssh ec2-user@10.0.0.6
- Create or copy an SSH key that can be used to connect from the hop node to the execution node in later steps. This can be a temporary key used just for the automation mesh configuration, or a long-lived key. The SSH user and key are used in later steps.
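For example, a minimal sketch that generates a dedicated key and copies it to an execution node (the user, host, and file name are placeholders):

ssh-keygen -t ed25519 -f ~/.ssh/mesh_id -N ''
ssh-copy-id -i ~/.ssh/mesh_id.pub <username>@10.0.0.6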
Enable your RHEL subscription with the baseos and appstream repositories. Ansible Automation Platform RPM repositories are only available through subscription-manager, not the Red Hat Update Infrastructure (RHUI). If you attempt to use any other Linux footprint, including RHEL with RHUI, this causes errors.

sudo subscription-manager register --auto-attach
If Simple Content Access is enabled for your account, use:
sudo subscription-manager register
For more information about Simple Content Access, see Getting started with simple content access.
Enable Ansible Automation Platform subscriptions and the proper Red Hat Ansible Automation Platform channel:
For RHEL 8:

sudo subscription-manager repos --enable ansible-automation-platform-2.4-for-rhel-8-x86_64-rpms

For RHEL 9:

sudo subscription-manager repos --enable ansible-automation-platform-2.4-for-rhel-9-x86_64-rpms
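You can confirm that the repository is enabled before installing packages:

sudo subscription-manager repos --list-enabled | grep ansible-automation-platform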
Ensure the packages are up to date:
sudo dnf upgrade -y
Install the ansible-core packages:
sudo dnf install -y ansible-core
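You can verify the installation with:

ansible --version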
9.4. Managing instances
To expand job capacity, create a standalone execution node that can be added to run alongside a deployment of automation controller. These execution nodes are not part of the automation controller Kubernetes cluster. The control nodes running in the cluster connect and submit work to the execution nodes through Receptor. These execution nodes are registered in automation controller as instances of type execution, meaning they are only used to run jobs, not to dispatch work or handle web requests as control nodes do.

Hop nodes can be added to sit between the control plane of automation controller and standalone execution nodes. These hop nodes are not part of the Kubernetes cluster and are registered in automation controller as instances of type hop, meaning they only handle inbound and outbound traffic for otherwise unreachable nodes in different or more strict networks.
The following procedure demonstrates how to set the node type for the hosts.
Procedure
- From the navigation panel, select Administration → Instances.
- On the Instances list page, click Add. The Create new Instance window opens.
An instance requires the following attributes:
- Host Name: (required) Enter a fully qualified domain name (public DNS) or IP address for your instance. This field is equivalent to hostname for installer-based deployments.

  Note: If the instance uses private DNS that cannot be resolved from the control cluster, DNS lookup routing fails and the generated SSL certificate is invalid. Use the IP address instead.
- Optional: Description: Enter a description for the instance.
- Instance State: This field is auto-populated, indicating that it is being installed, and cannot be modified.
- Listener Port: The port that Receptor listens on for incoming connections. You can set the port to one that is appropriate for your configuration. This field is equivalent to listener_port in the API. The default value is 27199, though you can set your own port value.
- Instance Type: Only execution and hop nodes can be created. Operator-based deployments do not support control or hybrid nodes.

Options:
- Enable Instance: Check this box to make it available for jobs to run on an execution node.
- Check the Managed by Policy box to enable policy to dictate how the instance is assigned.
- Check the Peers from control nodes box to enable control nodes to peer to this instance automatically. For nodes connected to automation controller, checking this box creates a direct communication link between that node and automation controller. For all other nodes:
- If you are not adding a hop node, make sure Peers from Control is checked.
- If you are adding a hop node, make sure Peers from Control is not checked.
- For execution nodes that communicate with hop nodes, do not check this box.
To peer an execution node with a hop node, click the icon next to the Peers field.
The Select Peers window is displayed.
Peer the execution node to the hop node.
When the attributes are configured, click Save. The Details page of the created instance opens.

To view a graphical representation of your updated topology, see Topology viewer.
Note: Execute the following steps from any computer that has SSH access to the newly created instance.
Click the icon next to Install Bundle to download the tar file that includes this new instance and the files necessary to install the created node into the automation mesh.
The install bundle contains TLS certificates and keys, a certificate authority, and a proper Receptor configuration file.
receptor-ca.crt
work-public-key.pem
receptor.key
install_receptor.yml
inventory.yml
group_vars/all.yml
requirements.yml
Extract the downloaded tar.gz Install Bundle from the location where you downloaded it. To ensure that these files are in the correct location on the remote machine, the install bundle includes the install_receptor.yml playbook. The playbook requires the Receptor collection. Run the following command to download the collection:

ansible-galaxy collection install -r requirements.yml
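For reference, extracting the bundle typically looks like the following; the archive and directory names are placeholders derived from your instance name:

tar -xzf <instance_name>_install_bundle.tar.gz
cd <instance_name>_install_bundle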
Before running the ansible-playbook command, edit the following fields in the inventory.yml file:

all:
  hosts:
    remote-execution:
      ansible_host: 10.0.0.6
      ansible_user: <username> # user provided
      ansible_ssh_private_key_file: ~/.ssh/<id_rsa>
- Ensure ansible_host is set to the IP address or DNS of the node.
- Set ansible_user to the username running the installation.
- Set ansible_ssh_private_key_file to contain the filename of the private key used to connect to the instance.
- The content of the inventory.yml file serves as a template and contains variables for roles that are applied during the installation and configuration of a receptor node in a mesh topology. You can modify some of the other fields, or replace the file in its entirety for advanced scenarios. For more information, see Role Variables.
For a node that uses a private DNS, add the following line to inventory.yml:

ansible_ssh_common_args: <your ssh ProxyCommand setting>

This instructs the install_receptor.yml playbook to use the proxy command to connect through the local DNS node to the private node.
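For illustration only, a ProxyCommand setting might look like the following; the jump host, user, and key file are placeholders for your environment:

ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -i ~/.ssh/<id_rsa> <username>@<jump_host>"'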
- Save the file to continue.
The system that runs the install bundle to set up the remote node and run ansible-playbook requires the ansible.receptor collection to be installed:

ansible-galaxy collection install ansible.receptor

or

ansible-galaxy collection install -r requirements.yml
- Installing the receptor collection dependency from the requirements.yml file consistently retrieves the receptor version specified there. Additionally, it retrieves any other collection dependencies that might be needed in the future.
file consistently retrieves the receptor version specified there. Additionally, it retrieves any other collection dependencies that might be needed in the future. - Install the receptor collection on all nodes where your playbook will run, otherwise an error occurs.
If receptor_listener_port is defined, the machine also requires an available open port on which to establish inbound TCP connections, for example, 27199. Run the following command to open port 27199 for receptor communication:

sudo firewall-cmd --permanent --zone=public --add-port=27199/tcp
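Because --permanent writes the rule to the permanent configuration without changing the running firewall, reload firewalld for the rule to take effect:

sudo firewall-cmd --reload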
Run the following playbook on the machine where you want to update your automation mesh:
ansible-playbook -i inventory.yml install_receptor.yml
After this playbook runs, your automation mesh is configured.
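To confirm the configuration, you can check the Receptor service on the new instance (assuming the playbook installed Receptor as a systemd unit named receptor) and verify that the instance reports Ready on its Details page in the automation controller UI:

sudo systemctl status receptor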
To remove an instance from the mesh, see Removing instances.