Chapter 3. Installing Red Hat Ansible Automation Platform
Ansible Automation Platform is a modular platform. You can deploy automation controller with other automation platform components, such as automation hub and Event-Driven Ansible controller. For more information about the components provided with Ansible Automation Platform, see Red Hat Ansible Automation Platform components in the Red Hat Ansible Automation Platform Planning Guide.
There are several supported installation scenarios for Red Hat Ansible Automation Platform. To install Red Hat Ansible Automation Platform, you must edit the inventory file parameters to specify your installation scenario. You can use one of the following as a basis for your own inventory file:
- Single automation controller with external (installer managed) database
- Single automation controller and single automation hub with external (installer managed) database
- Single automation controller, single automation hub, and single Event-Driven Ansible controller node with external (installer managed) database
3.1. Editing the Red Hat Ansible Automation Platform installer inventory file
You can use the Red Hat Ansible Automation Platform installer inventory file to specify your installation scenario.
Procedure
- Navigate to the installer:
[RPM installed package]
$ cd /opt/ansible-automation-platform/installer/
[bundled installer]
$ cd ansible-automation-platform-setup-bundle-<latest-version>
[online installer]
$ cd ansible-automation-platform-setup-<latest-version>
- Open the inventory file with a text editor.
- Edit the inventory file parameters to specify your installation scenario. You can use one of the supported installation scenario examples as the basis for your inventory file.
3.2. Inventory file examples based on installation scenarios
Red Hat supports several installation scenarios for Ansible Automation Platform. You can develop your own inventory files using the example files as a basis, or you can use the example closest to your preferred installation scenario.
3.2.1. Inventory file recommendations based on installation scenarios
Before selecting your installation method for Ansible Automation Platform, review the following recommendations. Familiarity with these recommendations will streamline the installation process.
- For Red Hat Ansible Automation Platform or automation hub: add an automation hub host in the [automationhub] group.
- Do not install automation controller and automation hub on the same node in a production or customer environment. This can cause contention issues and heavy resource use.
- Provide a reachable IP address or fully qualified domain name (FQDN) for the [automationhub] and [automationcontroller] hosts to ensure that users can sync and install content from automation hub from a different node. The FQDN must not contain the - or _ symbols, as it will not be processed correctly. Do not use localhost.
- admin is the default user ID for the initial login to Ansible Automation Platform and cannot be changed in the inventory file.
- Use of special characters for pg_password is limited. The !, #, 0 and @ characters are supported. Use of other special characters can cause the setup to fail.
- Enter your Red Hat Registry Service Account credentials in registry_username and registry_password to link to the Red Hat container registry.
- The inventory file variables registry_username and registry_password are only required if you use a non-bundle installer.
3.2.1.1. Single automation controller with external (installer managed) database
Use this example to populate the inventory file to install Red Hat Ansible Automation Platform. This installation inventory file includes a single automation controller node with an external database on a separate node.
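The following is a minimal sketch of such an inventory; the hostnames and all password values are placeholders for illustration:

[automationcontroller]
controller.example.com

[database]
data.example.com

[all:vars]
admin_password='<password>'

# Installer managed database on the [database] node
pg_host='data.example.com'
pg_port=5432
pg_database='awx'
pg_username='awx'
pg_password='<password>'

# Required only for the non-bundle installer
registry_url='registry.redhat.io'
registry_username='<registry username>'
registry_password='<registry password>'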
3.2.1.2. Single automation controller and single automation hub with external (installer managed) database
Use this example to populate the inventory file to deploy single instances of automation controller and automation hub with an external (installer managed) database.
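A sketch under the same assumptions (placeholder hostnames and passwords), adding an [automationhub] host and the automationhub_pg_* database variables:

[automationcontroller]
controller.example.com

[automationhub]
automationhub.example.com

[database]
data.example.com

[all:vars]
admin_password='<password>'
pg_host='data.example.com'
pg_port=5432
pg_database='awx'
pg_username='awx'
pg_password='<password>'

automationhub_admin_password='<password>'
automationhub_pg_host='data.example.com'
automationhub_pg_port=5432
automationhub_pg_database='automationhub'
automationhub_pg_username='automationhub'
automationhub_pg_password='<password>'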
3.2.1.2.1. Connecting automation hub to a Red Hat Single Sign-On environment
You can configure the inventory file further to connect automation hub to a Red Hat Single Sign-On installation.
You must configure a different set of variables when connecting to a Red Hat Single Sign-On installation managed by Ansible Automation Platform than when connecting to an external Red Hat Single Sign-On installation.
For more information about these inventory variables, refer to Installing and configuring central authentication for Ansible Automation Platform.
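As a rough illustration only (the variable names here are assumptions drawn from the central authentication guide; verify them there before use), a platform-managed Single Sign-On deployment adds an [sso] host group and related variables, whereas an external installation points the installer at the existing server:

[sso]
sso.example.com

[all:vars]
# Ansible Automation Platform managed Red Hat Single Sign-On
sso_console_admin_password='<password>'

# External Red Hat Single Sign-On: omit the [sso] group and set, for example:
# sso_host='sso.example.com'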
3.2.1.3. High availability automation hub
Use the following examples to populate the inventory file to install a highly available automation hub. This inventory file includes a highly available automation hub with a clustered setup.
You can configure your HA deployment further to implement Red Hat Single Sign-On and enable a high availability deployment of automation hub on SELinux.
Specify database host IP
- Specify the IP address for your database host by using the automationhub_pg_host and automationhub_pg_port inventory variables. For example:
automationhub_pg_host='192.0.2.10'
automationhub_pg_port=5432
- Also specify the IP address for your database host in the [database] section, using the value of the automationhub_pg_host inventory variable:
[database]
192.0.2.10
List all instances in a clustered setup
- If you are installing a clustered setup, replace localhost ansible_connection=local in the [automationhub] section with the hostname or IP address of all instances. For example:
[automationhub]
automationhub1.testing.ansible.com ansible_user=cloud-user ansible_host=192.0.2.18
automationhub2.testing.ansible.com ansible_user=cloud-user ansible_host=192.0.2.20
automationhub3.testing.ansible.com ansible_user=cloud-user ansible_host=192.0.2.22
- If automation hub sits behind a proxy or load balancer, set the following so that the forwarded host and port headers are honored:
USE_X_FORWARDED_PORT = True
USE_X_FORWARDED_HOST = True
If automationhub_main_url is not specified, the first node in the [automationhub] group is used as the default.
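For example (the hostname is a placeholder; set this in [all:vars] alongside the other automation hub variables):

automationhub_main_url = 'https://automationhub.example.com'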
3.2.1.4. Enabling a high availability (HA) deployment of automation hub on SELinux
You can configure the inventory file to enable high availability deployment of automation hub on SELinux. You must create two mount points for /var/lib/pulp and /var/lib/pulp/pulpcore_static, and then assign the appropriate SELinux contexts to each.
You must add the context for /var/lib/pulp/pulpcore_static and run the Ansible Automation Platform installer before adding the context for /var/lib/pulp.
Prerequisites
- You have already configured an NFS export on your server.
Procedure
- Create a mount point at /var/lib/pulp:
$ mkdir /var/lib/pulp/
- Open /etc/fstab using a text editor, then add the following values:
srv_rhel8:/data /var/lib/pulp nfs defaults,_netdev,nosharecache,context="system_u:object_r:var_lib_t:s0" 0 0
srv_rhel8:/data/pulpcore_static /var/lib/pulp/pulpcore_static nfs defaults,_netdev,nosharecache,context="system_u:object_r:httpd_sys_content_rw_t:s0" 0 0
- Reload the systemd manager configuration:
$ systemctl daemon-reload
- Run the mount command for /var/lib/pulp:
$ mount /var/lib/pulp
- Create a mount point at /var/lib/pulp/pulpcore_static:
$ mkdir /var/lib/pulp/pulpcore_static
- Run the mount command:
$ mount -a
- With the mount points set up, run the Ansible Automation Platform installer:
$ setup.sh -- -b --become-user root
- After the installation is complete, unmount the /var/lib/pulp/ mount point.
3.2.1.4.1. Configuring pulpcore.service
After you have configured the inventory file and applied the SELinux context, you must configure the pulp service.
Procedure
- With the two mount points set up, shut down the Pulp service to configure pulpcore.service:
$ systemctl stop pulpcore.service
- Edit pulpcore.service by using systemctl:
$ systemctl edit pulpcore.service
- Add the following entry to pulpcore.service to ensure that automation hub services start only after the network is up and the remote mount points are mounted:
[Unit]
After=network.target var-lib-pulp.mount
- Enable remote-fs.target:
$ systemctl enable remote-fs.target
- Reboot the system:
$ systemctl reboot
Troubleshooting
A bug in the pulpcore SELinux policies can cause the token authentication public/private keys in /etc/pulp/certs/ to not have the proper SELinux labels, causing the pulp process to fail. When this occurs, run the following command to temporarily attach the proper labels:
$ chcon system_u:object_r:pulpcore_etc_t:s0 /etc/pulp/certs/token_{private,public}_key.pem
Repeat this command to reattach the proper SELinux labels whenever you relabel your system.
3.2.1.4.2. Applying the SELinux context
After you have configured the inventory file, you must now apply the context to enable the high availability (HA) deployment of automation hub on SELinux.
Procedure
- Shut down the Pulp service:
$ systemctl stop pulpcore.service
- Unmount /var/lib/pulp/pulpcore_static:
$ umount /var/lib/pulp/pulpcore_static
- Unmount /var/lib/pulp/:
$ umount /var/lib/pulp/
- Open /etc/fstab using a text editor, then replace the existing value for /var/lib/pulp with the following:
srv_rhel8:/data /var/lib/pulp nfs defaults,_netdev,nosharecache,context="system_u:object_r:pulpcore_var_lib_t:s0" 0 0
- Run the mount command:
$ mount -a
3.2.1.5. Configuring content signing on private automation hub
To successfully sign and publish Ansible Certified Content Collections, you must configure private automation hub for signing.
Prerequisites
- Your GnuPG key pairs have been securely set up and managed by your organization.
- You have access to the public-private key pair required to configure content signing on private automation hub.
Procedure
- Create a signing script that accepts only a filename.
Note: This script acts as the signing service, and must generate an ASCII-armored detached gpg signature for that file using the key specified through the PULP_SIGNING_KEY_FINGERPRINT environment variable. The script prints out a JSON structure with the following format:
{"file": "filename", "signature": "filename.asc"}
All file names are relative paths inside the current working directory. The file name must remain the same for the detached signature.
Example:
The following script produces signatures for content:
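A minimal sketch of such a script, assuming gpg is installed and the key referenced by PULP_SIGNING_KEY_FINGERPRINT is already imported into the keyring (adapt the passphrase and homedir handling to your key setup):

#!/usr/bin/env bash
# Sign the file passed as the only argument and print the JSON structure
# that the signing service expects on stdout.
FILE_PATH="$1"
SIGNATURE_PATH="$1.asc"

# Create an ASCII-armored detached signature with the configured key.
gpg --quiet --batch --yes \
    --homedir ~/.gnupg/ \
    --detach-sign --armor \
    --default-key "$PULP_SIGNING_KEY_FINGERPRINT" \
    --output "$SIGNATURE_PATH" "$FILE_PATH"

STATUS=$?
if [ $STATUS -eq 0 ]; then
    echo "{\"file\": \"$FILE_PATH\", \"signature\": \"$SIGNATURE_PATH\"}"
else
    exit $STATUS
fi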
After you deploy a private automation hub with signing enabled to your Ansible Automation Platform cluster, new UI additions are displayed in collections.
- Review the Ansible Automation Platform installer inventory file for options that begin with automationhub_*.
The two new keys (automationhub_auto_sign_collections and automationhub_require_content_approval) indicate that the collections must be signed and approved after they are uploaded to private automation hub.
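For illustration, a signing-enabled deployment sets variables along these lines in the inventory; the key and script paths are placeholders, and the exact variable set should be checked against your installer version:

[all:vars]
automationhub_create_default_collection_signing_service = True
automationhub_auto_sign_collections = True
automationhub_require_content_approval = True
automationhub_collection_signing_service_key = /abs/path/to/galaxy_signing_service.gpg
automationhub_collection_signing_service_script = /abs/path/to/collection_signing.sh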
3.2.1.6. LDAP configuration on private automation hub
You must set the following six variables in your Red Hat Ansible Automation Platform installer inventory file to configure your private automation hub for LDAP authentication:
- automationhub_authentication_backend
- automationhub_ldap_server_uri
- automationhub_ldap_bind_dn
- automationhub_ldap_bind_password
- automationhub_ldap_user_search_base_dn
- automationhub_ldap_group_search_base_dn
If any of these variables are missing, the Ansible Automation Platform installer cannot complete the installation.
3.2.1.6.1. Setting up your inventory file variables
When you configure your private automation hub with LDAP authentication, you must set the proper variables in your inventory files during the installation process.
Procedure
- Access your inventory file according to the procedure in Editing the Red Hat Ansible Automation Platform installer inventory file.
Use the following example as a guide to set up your Ansible Automation Platform inventory file:
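A sketch using example values; replace the server URI, bind credentials, and search base DNs with your own:

[all:vars]
automationhub_authentication_backend = "ldap"
automationhub_ldap_server_uri = "ldap://ldap.example.com:389"
automationhub_ldap_bind_dn = "cn=admin,dc=example,dc=com"
automationhub_ldap_bind_password = "<bind password>"
automationhub_ldap_user_search_base_dn = "ou=people,dc=example,dc=com"
automationhub_ldap_group_search_base_dn = "ou=groups,dc=example,dc=com"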
Note: The following variables are set with default values unless you set them to other options.
auth_ldap_user_search_scope = 'SUBTREE'
auth_ldap_user_search_filter = '(uid=%(user)s)'
auth_ldap_group_search_scope = 'SUBTREE'
auth_ldap_group_search_filter = '(objectClass=Group)'
auth_ldap_group_type_class = 'django_auth_ldap.config:GroupOfNamesType'
- Optional: Set up extra parameters in your private automation hub, such as user groups, superuser access, or mirroring. Go to Configuring extra LDAP parameters to complete this optional step.
3.2.1.6.2. Configuring extra LDAP parameters
If you plan to set up superuser access, user groups, mirroring, or other extra parameters, you can create a YAML file that collects them in your ldap_extra_settings dictionary.
Procedure
- Create a YAML file that contains ldap_extra_settings. For example:
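The file has the same shape as the snippets in the next step; the parameter name and value below are placeholders:

#ldapextras.yml
---
ldap_extra_settings:
  <LDAP_parameter>: <value>
...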
- Add any parameters that you require for your setup. The following examples describe the LDAP parameters that you can set in ldap_extra_settings:
Use this example to set up superuser access with a superuser flag based on membership in an LDAP group:
#ldapextras.yml
---
ldap_extra_settings:
  AUTH_LDAP_USER_FLAGS_BY_GROUP: {"is_superuser": "cn=pah-admins,ou=groups,dc=example,dc=com",}
...
Use this example to mirror all LDAP groups you belong to:
#ldapextras.yml
---
ldap_extra_settings:
  AUTH_LDAP_MIRROR_GROUPS: True
...
Use this example to map LDAP user attributes (such as first name, last name, and email address of the user):
#ldapextras.yml
---
ldap_extra_settings:
  AUTH_LDAP_USER_ATTR_MAP: {"first_name": "givenName", "last_name": "sn", "email": "mail",}
...
Use the following examples to grant or deny access based on LDAP group membership:
To grant private automation hub access (for example, to members of the cn=pah-nosoupforyou,ou=groups,dc=example,dc=com group):
#ldapextras.yml
---
ldap_extra_settings:
  AUTH_LDAP_REQUIRE_GROUP: 'cn=pah-nosoupforyou,ou=groups,dc=example,dc=com'
...
To deny private automation hub access (for example, to members of the cn=pah-nosoupforyou,ou=groups,dc=example,dc=com group):
#ldapextras.yml
---
ldap_extra_settings:
  AUTH_LDAP_DENY_GROUP: 'cn=pah-nosoupforyou,ou=groups,dc=example,dc=com'
...
Use this example to enable LDAP debug logging.
#ldapextras.yml
---
ldap_extra_settings:
  GALAXY_LDAP_LOGGING: True
...
Note: If it is not practical to re-run setup.sh, or if debug logging is enabled only for a short time, you can manually add a line containing GALAXY_LDAP_LOGGING: True to the /etc/pulp/settings.py file on private automation hub. Restart both pulpcore-api.service and nginx.service for the changes to take effect. To avoid failures due to human error, use this method only when necessary.
Use this example to configure LDAP caching by setting the variable AUTH_LDAP_CACHE_TIMEOUT:
#ldapextras.yml
---
ldap_extra_settings:
  AUTH_LDAP_CACHE_TIMEOUT: 3600
...
- Run setup.sh -e @ldapextras.yml during private automation hub installation.
Verification
To verify that you have set this up correctly, confirm that you can view all of your settings in the /etc/pulp/settings.py file on your private automation hub.
3.2.1.6.3. LDAP referrals
If your LDAP servers return referrals, you might have to disable referrals to successfully authenticate using LDAP on private automation hub.
If you do not, the following message is returned:
Operation unavailable without authentication
To disable the LDAP REFERRALS lookup, set:
GALAXY_LDAP_DISABLE_REFERRALS = true
This sets AUTH_LDAP_CONNECTION_OPTIONS to the correct option.
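Assuming you manage this setting like the other LDAP extras shown above, it can be supplied through the same ldap_extra_settings file:

#ldapextras.yml
---
ldap_extra_settings:
  GALAXY_LDAP_DISABLE_REFERRALS: true
...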
3.2.1.7. Single automation controller, single automation hub, and single Event-Driven Ansible controller node with external (installer managed) database
Use this example to populate the inventory file to deploy single instances of automation controller, automation hub, and Event-Driven Ansible controller with an external (installer managed) database.
- This scenario requires a minimum of automation controller 2.4 for successful deployment of Event-Driven Ansible controller.
- Event-Driven Ansible controller must be installed on a separate server and cannot be installed on the same host as automation hub and automation controller.
- Event-Driven Ansible controller cannot be installed in a high availability or clustered configuration. Ensure that there is only one host entry in the [automationedacontroller] section of the inventory.
- When an Event-Driven Ansible rulebook is activated under standard conditions, it uses approximately 250 MB of memory. However, the actual memory consumption can vary significantly based on the complexity of the rules and the volume and size of the events processed. In scenarios where a large number of events are anticipated or the rulebook complexity is high, conduct a preliminary assessment of resource usage in a staging environment. This ensures that the maximum number of activations is based on the resource capacity. In the following example, the default automationedacontroller_max_running_activations setting is 12, but it can be adjusted to fit your capacity.
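A sketch of such an inventory; hostnames and passwords are placeholders, and the variable list is illustrative rather than exhaustive:

[automationcontroller]
controller.example.com

[automationhub]
automationhub.example.com

[automationedacontroller]
automationedacontroller.example.com

[database]
data.example.com

[all:vars]
admin_password='<password>'
pg_host='data.example.com'
pg_port=5432
pg_database='awx'
pg_username='awx'
pg_password='<password>'

automationhub_admin_password='<password>'
automationhub_pg_host='data.example.com'
automationhub_pg_port=5432

automationedacontroller_admin_password='<password>'
automationedacontroller_pg_host='data.example.com'
automationedacontroller_pg_port=5432

# Default maximum number of concurrent rulebook activations
automationedacontroller_max_running_activations=12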
3.2.1.8. Adding a safe plugin variable to Event-Driven Ansible controller
When using redhat.insights_eda or similar plugins to run rulebook activations in Event-Driven Ansible controller, you must add a safe plugin variable to a directory in Ansible Automation Platform. This ensures the connection between Event-Driven Ansible controller and the source plugin, and displays port mappings correctly.
Procedure
- Create a directory for the safe plugin variable:
$ mkdir -p ./group_vars/automationedacontroller
- Create a file within that directory for your new setting (for example, touch ./group_vars/automationedacontroller/custom.yml).
- Add the variable automationedacontroller_safe_plugins to the file with a comma-separated list of plugins to enable for Event-Driven Ansible controller. For example:
automationedacontroller_safe_plugins: "ansible.eda.webhook, ansible.eda.alertmanager"
3.3. Running the Red Hat Ansible Automation Platform installer setup script
After you update the inventory file with required parameters for installing your private automation hub, run the installer setup script.
Procedure
- Run the setup.sh script:
$ sudo ./setup.sh
Installation of Red Hat Ansible Automation Platform will begin.
3.4. Verifying installation of automation controller
Verify that you installed automation controller successfully by logging in with the admin credentials you inserted in the inventory file.
Prerequisite
- Port 443 is available
Procedure
- Go to the IP address specified for the automation controller node in the inventory file.
- Enter your Red Hat Satellite credentials. If this is your first time logging in after installation, upload your manifest file.
- Log in with the user ID admin and the password credentials you set in the inventory file.
The automation controller server is accessible from port 80 (http://<CONTROLLER_SERVER_NAME>/) but redirects to port 443.
If the installation fails and you are a customer who has purchased a valid license for Red Hat Ansible Automation Platform, contact Ansible through the Red Hat Customer portal.
Upon a successful log in to automation controller, your installation of Red Hat Ansible Automation Platform 2.4 is complete.
3.4.1. Additional automation controller configuration and resources
See the following resources to explore additional automation controller configurations.
| Resource link | Description |
|---|---|
| Automation Controller Quick Setup Guide | Set up automation controller and run your first playbook. |
| Automation Controller Administration Guide | Configure automation controller administration through custom scripts, management jobs, and more. |
| Configuring proxy support for Red Hat Ansible Automation Platform | Set up automation controller with a proxy server. |
| Managing usability analytics and data collection from automation controller | Manage what automation controller information you share with Red Hat. |
| Automation Controller User Guide | Review automation controller functionality in more detail. |
3.5. Verifying installation of automation hub
Verify that you installed your automation hub successfully by logging in with the admin credentials you inserted into the inventory file.
Procedure
- Navigate to the IP address specified for the automation hub node in the inventory file.
- Enter your Red Hat Satellite credentials. If this is your first time logging in after installation, upload your manifest file.
- Log in with the user ID admin and the password credentials you set in the inventory file.
If the installation fails and you are a customer who has purchased a valid license for Red Hat Ansible Automation Platform, contact Ansible through the Red Hat Customer portal.
Upon a successful login to automation hub, your installation of Red Hat Ansible Automation Platform 2.4 is complete.
3.5.1. Additional automation hub configuration and resources
See the following resources to explore additional automation hub configurations.
| Resource link | Description |
|---|---|
| Managing user access in private automation hub | Configure user access for automation hub. |
| Managing Red Hat Certified, validated, and Ansible Galaxy content in automation hub | Add content to your automation hub. |
| Publishing proprietary content collections in automation hub | Publish internally developed collections on your automation hub. |
3.6. Verifying Event-Driven Ansible controller installation
Verify that you installed Event-Driven Ansible controller successfully by logging in with the admin credentials you inserted in the inventory file.
Procedure
- Navigate to the IP address specified for the Event-Driven Ansible controller node in the inventory file.
- Enter your Red Hat Satellite credentials. If this is your first time logging in after installation, upload your manifest file.
- Log in with the user ID admin and the password credentials you set in the inventory file.
If the installation fails and you are a customer who has purchased a valid license for Red Hat Ansible Automation Platform, contact Ansible through the Red Hat Customer portal.
Upon a successful login to Event-Driven Ansible controller, your installation of Red Hat Ansible Automation Platform 2.4 is complete.