
Automation controller user guide

Red Hat Ansible Automation Platform 2.4

User Guide for Automation Controller

Red Hat Customer Content Services

Abstract

This guide shows how to use automation controller to define, operate, scale, and delegate automation.

Preface

Thank you for your interest in Red Hat Ansible Automation Platform automation controller. Automation controller helps teams manage complex multitiered deployments by adding control, knowledge, and delegation to Ansible-powered environments.

The Automation controller User Guide describes all of the functionality available in automation controller. It assumes moderate familiarity with Ansible, including concepts such as playbooks, variables, and tags. For more information about these and other Ansible concepts, see the Ansible documentation.

Providing feedback on Red Hat documentation

If you have a suggestion to improve this documentation, or find an error, please contact technical support at https://access.redhat.com to create an issue on the Ansible Automation Platform Jira project using the docs-product component.

Chapter 1. Automation controller overview

With Ansible Automation Platform, users across an organization can share, vet, and manage automation content by means of a simple, powerful, and agentless technical implementation. IT managers can provide guidelines on how automation is applied to individual teams. Automation developers can write tasks that use existing knowledge, without the operational overhead of conforming to complex tools and frameworks. Ansible Automation Platform is a more secure and stable foundation for deploying end-to-end automation solutions, from hybrid cloud to the edge.

Ansible Automation Platform includes automation controller, which enables users to define, operate, scale, and delegate automation across their enterprise.

1.1. Real-time playbook output and exploration

Automation controller enables you to watch playbooks run in real time, seeing each host as it checks in. You can go back and explore the results for specific tasks and hosts in great detail, search for specific plays or hosts and see just those results, or locate errors that need to be corrected.

1.2. "Push Button" automation

Automation controller enables you to access your favorite projects and re-trigger execution from the web interface. Automation controller asks for input variables, prompts for your credentials, starts and monitors jobs, and displays results and host history.

1.3. Simplified role-based access control and auditing

Automation controller enables you to:

  • Grant permissions to perform a specific task to different teams or explicit users through role-based access control (RBAC). Example tasks include viewing, creating, or modifying a file.
  • Keep some projects private, while enabling some users to edit inventories, and others to run playbooks against certain systems, either in check (dry run) or live mode.
  • Enable certain users to use credentials without exposing the credentials to them.

Automation controller records the history of operations and who made them, including objects edited and jobs launched.

If you want to give any user or team permissions to use a job template, you can assign permissions directly on the job template. Credentials are full objects in the automation controller RBAC system, and can be assigned to multiple users or teams for use.

Automation controller includes an auditor type. A system-level auditor can see all aspects of the system's automation, but does not have permission to run or change automation. An auditor is useful for a service account that scrapes automation information from the REST API.


1.4. Cloud and autoscaling flexibility

Automation controller includes a powerful optional provisioning callback feature that enables nodes to request configuration on demand. This is an ideal solution for a cloud auto-scaling scenario and includes the following features:

  • It integrates with provisioning servers like Cobbler and deals with managed systems with unpredictable uptimes.
  • It requires no management software to be installed on remote nodes.
  • The callback solution can be triggered by a call to curl or wget, and can be embedded in init scripts, kickstarts, or preseeds, as shown in the example after this list.
  • You can control access so that only machines listed in the inventory can request configuration.
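
For example, a newly provisioned host can request its configuration with a single command, assuming provisioning callbacks are enabled on the job template. This is a minimal sketch: the host config key, job template ID, and server name are placeholders that you must replace with your own values.

# Request configuration from the controller's provisioning callback endpoint
curl -s --data "host_config_key=<host_config_key>" https://<CONTROLLER_SERVER_NAME>/api/v2/job_templates/<job_template_id>/callback/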

1.5. The ideal RESTful API

The automation controller REST API is the ideal RESTful API for a systems management application, with all resources fully discoverable, paginated, searchable, and well modeled. A styled API browser enables API exploration from the API root at http://<server name>/api/, showing off every resource and relation. Everything that can be done in the user interface can be done in the API.
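
For example, you can explore the same resources from the command line as well as the browser. The following sketch assumes basic authentication; the credentials and server name are placeholders:

# List the top-level resources available at the API root
curl -u admin:<password> https://<CONTROLLER_SERVER_NAME>/api/v2/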

1.6. Backup and restore

Ansible Automation Platform can back up and restore your system or systems, making it easy for you to back up and replicate your instance as required.

1.7. Ansible Galaxy integration

By including an Ansible Galaxy requirements.yml file in your project directory, automation controller automatically fetches the roles your playbook needs from Galaxy, GitHub, or your local source control. For more information, see Ansible Galaxy Support.
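
The following is an illustrative requirements.yml; the role names, repository URL, and versions shown are examples only:

# roles/requirements.yml
- src: geerlingguy.apache                              # example role fetched from Ansible Galaxy
  version: "3.1.0"
- src: https://github.com/<your_org>/<your_role>.git   # example role fetched from source control
  scm: git
  version: main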

1.8. Inventory support for OpenStack

Dynamic inventory support is available for OpenStack. This enables you to target any of the virtual machines or images running in your OpenStack cloud.

For more information, see Openstack.

1.9. Remote command execution

Use remote command execution to perform simple tasks, such as adding a single user, updating a single security vulnerability, or restarting a failing service. Any task that you can describe as a single Ansible play can be run on a host or group of hosts in your inventory, enabling you to manage your systems quickly and easily. Because of the RBAC engine and detailed audit logging, you know which user completed a specific task.

1.10. System tracking

You can collect facts using the fact caching feature. For more information, see Fact Caching.

1.11. Integrated notifications

Keep track of the status of your automation.

You can configure the following notifications:

  • stackable notifications for job templates, projects, or entire organizations
  • different notifications for job start, job success, job failure, and job approval (for workflow nodes)

The following notification sources are supported:

  • Email
  • Grafana
  • IRC
  • Mattermost
  • PagerDuty
  • Rocket.Chat
  • Slack
  • Twilio
  • Webhook

You can also customize notification messages for each of the preceding notification types.

1.12. Integrations

Automation controller supports the following integrations:

  • Dynamic inventory sources for Red Hat Satellite 6.

For more information, see Red Hat Satellite 6.

  • Red Hat Insights integration, enabling Insights playbooks to be used as an Ansible Automation Platform project.

For more information, see Setting up Insights Remediations.

  • Automation hub acts as a content provider for automation controller, requiring both an automation controller deployment and an automation hub deployment running alongside each other.

1.13. Custom Virtual Environments

Custom Ansible environment support enables you to have different Ansible environments and specify custom paths for different teams and jobs.

1.14. Authentication enhancements

Automation controller supports:

  • LDAP
  • SAML
  • token-based authentication

LDAP and SAML support enable you to integrate your enterprise account information in a more flexible manner.

Token-based authentication permits authentication of third-party tools and services with automation controller through integrated OAuth 2 token support.

1.15. Cluster management

Run-time management of cluster groups enables configurable scaling.

1.16. Workflow enhancements

To model your complex provisioning, deployment, and orchestration workflows, you can use automation controller expanded workflows in several ways:

  • Inventory overrides for Workflows: You can override an inventory across a workflow at workflow definition time, or at launch time. Automation controller enables you to define your application deployment workflows, and then re-use them in multiple environments.
  • Convergence nodes for Workflows: When modeling complex processes, you must sometimes wait for multiple steps to finish before proceeding. Automation controller workflows can replicate this; workflow steps can wait for any number of previous workflow steps to complete properly before proceeding.
  • Workflow Nesting: You can re-use individual workflows as components of a larger workflow. Examples include combining provisioning and application deployment workflows into a single workflow.
  • Workflow Pause and Approval: You can build workflows containing approval nodes that require user intervention. This makes it possible to pause workflows in between playbooks so that a user can give approval (or denial) for continuing on to the next step in the workflow.

For more information, see Workflows in automation controller.

1.17. Job distribution

Take a fact gathering or configuration job running across thousands of machines and divide it into slices that can be distributed across your automation controller cluster for increased reliability, faster job completion, and improved cluster use.

For example, you can change a parameter across 15,000 switches at scale, or gather information across your multi-thousand-node RHEL estate.

For more information, see Job Slicing.

1.18. Support for deployment in a FIPS-enabled environment

Automation controller deploys and runs in restricted modes such as FIPS.

1.19. Limit the number of hosts per organization

Many large organizations have instances shared among many organizations. To ensure that one organization cannot use all the licensed hosts, this feature enables superusers to set a specified upper limit on how many licensed hosts can be allocated to each organization. The automation controller algorithm factors changes in the limit for an organization and the number of total hosts across all organizations. Inventory updates fail if an inventory synchronization brings an organization out of compliance with the policy. Additionally, superusers are able to over-allocate their licenses, with a warning.

1.20. Inventory plugins

The following inventory plugins are used from upstream collections:

  • amazon.aws.aws_ec2
  • community.vmware.vmware_vm_inventory
  • azure.azcollection.azure_rm
  • google.cloud.gcp_compute
  • theforeman.foreman.foreman
  • openstack.cloud.openstack
  • ovirt.ovirt.ovirt
  • awx.awx.tower

1.21. Secret management system

With a secret management system, external credentials are stored and supplied for use in automation controller so you need not provide them directly.

Chapter 2. Automation controller licensing, updates and support

Automation controller is provided as part of your annual Red Hat Ansible Automation Platform subscription.

Ansible is an open source software project and is licensed under the GNU General Public License version 3, as described in the Ansible Source Code.

You must have valid subscriptions attached before installing Ansible Automation Platform.

For more information, see Attaching Subscriptions.

2.1. Trial and evaluation

You require a license to run automation controller. You can start by using a free trial license.

  • Trial licenses for Ansible Automation Platform are available at: http://ansible.com/license
  • Support is not included in a trial license or during an evaluation of the automation controller software.

2.2. Component licenses

To view the license information for the components included in automation controller, see /usr/share/doc/automation-controller-<version>/README, where <version> refers to the version of automation controller you have installed.

To view a specific license, see /usr/share/doc/automation-controller-<version>/*.txt, where * is the name of the license file.

2.3. Node counting in licenses

The automation controller license defines the number of Managed Nodes that can be managed as part of a Red Hat Ansible Automation Platform subscription.

A typical license says "License Count: 500", which sets the maximum number of Managed Nodes at 500.

For more information on managed node requirements for licensing, see https://access.redhat.com/articles/3331481.

Note

Ansible does not recycle node counts or reset automated hosts.

Chapter 3. Logging into automation controller after installation

After you install automation controller, you must log in.

Procedure

  1. With the login information provided after your installation completed, open a web browser and log in to the automation controller by navigating to its server URL at: https://<CONTROLLER_SERVER_NAME>/
  2. Use the credentials specified during the installation process to log in:

    • The default username is admin.
    • The password for admin is the value specified.
  3. Click the More Actions icon next to the desired user.
  4. Click Edit.
  5. Edit the required details and click Save.

Chapter 4. Managing your Ansible automation controller subscription

Before you can use automation controller, you must have a valid subscription, which authorizes its use.

4.1. Subscription Types

Red Hat Ansible Automation Platform is provided as an annual subscription at various levels of support and for various numbers of machines.

All subscription levels include regular updates and releases of automation controller, Ansible, and any other components of the Platform.

For more information, contact Ansible through the Red Hat Customer Portal or at http://www.ansible.com/contact-us/.

4.2. Obtaining an authorized Ansible automation controller subscription

If you already have a subscription to a Red Hat product, you can acquire an automation controller subscription through that subscription. If you do not have a subscription to Red Hat Ansible Automation Platform or Red Hat Satellite, you can request a trial subscription.

Procedure

  • If you have a Red Hat Ansible Automation Platform subscription, use your Red Hat customer credentials when you launch the automation controller to access your subscription information. See Importing a subscription.
  • If you have a non-Ansible Red Hat or Satellite subscription, access automation controller with one of these methods:

    • Enter your Satellite username and password when you launch automation controller.
    • Obtain a subscriptions manifest and upload it to automation controller. For more information, see Obtaining a subscriptions manifest.


4.3. Obtaining a subscriptions manifest

To upload a subscriptions manifest, first set up your subscription allocations:

Procedure

  1. Navigate to https://access.redhat.com/management/subscription_allocations. The Subscriptions Allocations page contains no subscriptions until you create one.
  2. Click Create New subscription allocation.

    Note

    If Create New subscription allocation does not display, or is disabled, you do not have the proper permissions to create subscription allocations. To create a subscription allocation, you must either be an Administrator on the Customer Portal, or have the Manage Your Subscriptions role. Contact an access.redhat.com administrator, or organization administrator who can grant you permission to manage subscriptions.

  3. Enter a Name for your subscription and select Satellite 6.15 from the Type drop-down menu.

    Create a Subscriptions Allocation
  4. Click Create.

    When your subscriptions manifest is successfully created, the number indicated next to Entitlements indicates the number of entitlements associated with your subscription.

    Details of subscription allocations

4.3.1. Setting up a subscriptions manifest

To obtain a subscriptions manifest, you must add an entitlement to your subscriptions through the Subscriptions tab.

Procedure

  1. Click the Subscriptions tab.
  2. If there are no subscriptions to display, click Add Subscriptions.
  3. The following screen enables you to select and add entitlements to put in the manifest file.

    Ansible Automation Platform subscriptions

    You can select multiple Ansible Automation Platform subscriptions in your subscription allocation. Valid Ansible Automation Platform subscriptions commonly go by the name "Red Hat Ansible Automation…".

  4. Specify the number of entitlements or managed nodes to put in the manifest file. This enables you to split up a subscription, for example: 400 nodes on a development cluster and 600 nodes for the production cluster, out of a 1000 node subscription.

    Note

    You can apply multiple subscriptions to a single installation by adding multiple subscriptions of the same type to a manifest file and uploading them. Similarly, a subset of a subscription can be applied by only allocating a portion of the subscription when creating the manifest.

  5. Click Submit.

    The allocations you specified, when successfully added, are displayed in the Subscriptions tab.

  6. Click the Details tab to access the subscription manifest file.
  7. Click Export Manifest to export the manifest file for this subscription. A file prefixed with manifest_ is downloaded to your local drive. Multiple subscriptions with the same SKU are aggregated.
  8. When you have a subscription manifest, go to the Subscription screen.
  9. Click Browse to upload the entire manifest file.
  10. Navigate to the location where the file is saved. Do not open it or upload individual parts of it.

4.4. Importing a subscription

After you have obtained an authorized Ansible Automation Platform subscription, you must import it into the automation controller system before you can use automation controller.

Procedure

  1. Launch automation controller for the first time. The Subscription Management screen displays.

    Subscription Management
  2. Retrieve and import your subscription by completing either of the following steps:

    1. If you have obtained a subscription manifest, upload it by navigating to the location where the file is saved. The subscription manifest is the complete .zip file, and not only its component parts.

      Note

      If the Browse option for the subscription manifest is disabled, clear the username and password fields to enable it.

      The subscription metadata is then retrieved from the RHSM/Satellite API, or from the manifest provided. If many subscription counts were applied in a single installation, automation controller combines the counts but uses the earliest expiration date as the expiry (at which point you must refresh your subscription).

    2. If you are using your Red Hat customer credentials, enter your username and password on the license page. Use your Satellite username and password if your automation controller cluster nodes are registered to Satellite with Subscription Manager. After you enter your credentials, click Get Subscriptions.

      Automation controller retrieves your configured subscription service. Then, it prompts you to select the subscription that you want to run and applies that metadata to automation controller. You can log in over time and retrieve new subscriptions if you have renewed.

  3. Click Next to proceed to the Tracking and Insights page.

    Tracking and insights collect data to help Red Hat improve the product and deliver a better user experience. For more information about data collection, see Usability Analytics and Data Collection of the Automation controller Administration Guide.

    This option is checked by default, but you can opt out of any of the following:

    • User analytics. Collects data from the controller UI.
    • Insights Analytics. Provides a high level analysis of your automation with automation controller. It helps you to identify trends and anomalous use of the controller. For opt-in of Automation Analytics to be effective, your instance of automation controller must be running on Red Hat Enterprise Linux. For more information, see the Automation Analytics section.

      Note

      You can change your analytics data collection preferences at any time.

  4. After you have specified your tracking and Insights preferences, click Next to proceed to the End User Agreement.
  5. Review and check the I agree to the End User License Agreement checkbox and click Submit.

    After your subscription is accepted, automation controller displays the subscription details and opens the Dashboard.

  6. Optional: To return to the Subscription settings screen from the Dashboard, select Settings > Subscription settings in the navigation panel.

    Subscription Details

Troubleshooting your subscription

When your subscription expires (you can check this in the Subscription details of the Subscription settings window), you must renew it in automation controller. You can do this by either importing a new subscription, or setting up a new subscription.

If you encounter the "Error fetching licenses" message, check that you have the proper permissions required for the Satellite user. The automation controller administrator requires this to apply a subscription.

The Satellite username and password are used to query the Satellite API for existing subscriptions. From the Satellite API, the automation controller receives metadata about those subscriptions, then filters through to find valid subscriptions that you can apply. These are then displayed as valid subscription options in the UI.

The following Satellite roles grant proper access:

  • Custom with view_subscriptions and view_organizations filter
  • Viewer
  • Administrator
  • Organization Administrator
  • Manager

Use the Custom role for your automation controller integration, as it is the most restrictive. For more information, see the Satellite documentation on managing users and roles.

Note

The System Administrator role is not equal to the Administrator user checkbox, and does not offer enough permissions to access the subscriptions API page.

4.5. Add a subscription manually

If you are unable to apply or update the subscription information by using the automation controller user interface, you can upload the subscriptions manifest manually in an Ansible playbook.

Use the license module in the ansible.controller collection:

- name: Set the license using a file
  ansible.controller.license:
    manifest: "/tmp/my_manifest.zip"

For more information, see the Automation controller license module.

4.6. Attaching Subscriptions

You must have valid Ansible Automation Platform subscriptions attached before installing Ansible Automation Platform.

Note

Attaching subscriptions is unnecessary if your Red Hat account has enabled Simple Content Access Mode. However, you must register to Red Hat Subscription Management (RHSM) or Red Hat Satellite before installing Ansible Automation Platform.

Procedure

  1. To find the pool_id of your subscription, enter the following command:

    # subscription-manager list --available --all | grep "Ansible Automation Platform" -B 3 -A 6

    The command returns the following:

    Subscription Name: Red Hat Ansible Automation Platform, Premium (5000 Managed Nodes)
    Provides: Red Hat Ansible Engine
    Red Hat Single Sign-On
    Red Hat Ansible Automation Platform
    SKU: MCT3695
    Contract: ********
    Pool ID: ********************
    Provides Management: No
    Available: 4999
    Suggested: 1
  2. To attach this subscription, enter the following command:

    # subscription-manager attach --pool=<pool_id>

    When the subscription is attached on all nodes, the required repositories are found.

  3. To check whether the subscription attached successfully, enter the following command:

    # subscription-manager list --consumed
  4. To remove this subscription, enter the following command:

    # subscription-manager remove --pool=<pool_id>

4.7. Troubleshooting: Keep your subscription in compliance

Your subscription has two possible statuses:

  • Compliant: Indicates that the number of hosts you have automated is within your subscription count.
  • Out of compliance: Indicates that you have exceeded the number of hosts in your subscription.

Compliance is computed as follows:

managed > manifest_limit    =>  out of compliance
managed <= manifest_limit   =>  compliant

Where managed is the number of unique managed hosts without deletions, and manifest_limit is the number of managed hosts in the subscription manifest.

Other important information displayed includes:

  • Hosts automated: Host count automated by the job, which consumes the license count.
  • Hosts imported: Host count considering unique host names across all inventory sources. This number does not impact hosts remaining.
  • Hosts remaining: Total host count minus hosts automated.
  • Hosts deleted: Hosts that were deleted, freeing the license capacity.
  • Active hosts previously deleted: Number of hosts now active that were previously deleted.

For example, if you have a subscription capacity of 10 hosts:

  • If you start with 9 hosts, add 2 hosts, and delete 3 hosts, you now have 8 hosts (compliant).
  • If 3 deleted hosts are then automated again, you have 11 hosts, which puts you over the subscription limit of 10 (out of compliance).
  • If you delete hosts, refresh the subscription details to see the change in count and status.

4.8. Viewing the host activity

Procedure

  1. In the navigation panel, select Host Metrics to view the activity associated with hosts, such as those that have been automated and deleted.

    Each unique hostname is listed and sorted by the user’s preference.

    Host metrics
    Note

    A scheduled task automatically updates these values on a weekly basis and deletes jobs with hosts that were last automated more than a year ago.

  2. Delete unnecessary hosts directly from the Host Metrics view by selecting the desired hosts and clicking Delete.

    These hosts are soft-deleted, meaning that their records are not removed, but they are no longer used or counted towards your subscription.

4.9. Host metric utilities

Automation controller provides a way to generate a CSV output of the host metric data and host metric summary through the Command Line Interface (CLI). You can also soft delete hosts in bulk through the API.

4.9.1. awx-manage utility

The awx-manage utility supports the following options:

awx-manage host_metric --csv

This command produces host metric data, a host metrics summary file, and a cluster info file. To package all the files into a single tarball for distribution and sharing, use:

awx-manage host_metric --tarball

To specify the number of rows (<n>) to output to each file:

awx-manage host_metric --tarball --rows_per_file <n>

The following is an example of a configuration file:

Configuration file

Automation Analytics receives and uses the JSON file.

4.9.2. API endpoint functions

The API lists only non-deleted records and is sortable by the last_automation and used_in_inventories columns.

You can use the host metric API endpoint, api/v2/host_metric, to soft delete hosts:

api/v2/host_metric <n> DELETE
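
As a sketch, assuming basic authentication and the endpoint path shown above, a single host metric record could be soft deleted with a call such as the following; the credentials, server name, and record ID <n> are placeholders:

curl -u admin:<password> -X DELETE https://<CONTROLLER_SERVER_NAME>/api/v2/host_metric/<n>/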

A monthly scheduled task automatically deletes jobs that use hosts from the Host Metric table that were last automated more than a year ago.

Chapter 5. The User Interface

The automation controller User Interface (UI) provides a graphical framework for your IT orchestration requirements. The navigation panel provides quick access to automation controller resources, such as Projects, Inventories, Job Templates, and Jobs.

Note

The automation controller UI is also available as a technical preview and is subject to change in future releases. To preview the new UI, click the Enable Preview of New User Interface toggle to On from the Miscellaneous System option of the Settings menu.

After saving, log out and log back in to access the new UI from the preview banner. To return to the current UI, click the link on the top banner where indicated.

Access your user profile, the About page, view related documentation, or log out using the icons in the page header.

You can view the activity stream for that user by clicking the Activity Stream icon.

5.1. Views

The automation controller UI provides several options for viewing information.

5.1.1. Dashboard View

Use the navigation menu to complete the following tasks:

  • Display different views
  • Navigate to your resources
  • Grant access to users
  • Administer automation controller features in the UI

Procedure

  • From the navigation panel, select Views to hide or display the Views options.
  • The dashboard displays a summary of your current Job status.

    • You can filter the job status within a period of time or by job type.
Dashboard home
  • You can also display summaries of Recent Jobs and Recent Templates.

The Recent Jobs tab displays which jobs were most recently run, their status, and the time at which they were run.

Recent jobs

The Recent Templates tab displays a summary of the most recently used templates. You can also access this summary by selecting Resources > Templates from the navigation panel.

Recent templates

Note

Click Views > Dashboard on the navigation panel, or the Ansible Automation Platform logo at any time, to return to the Dashboard.

5.1.2. Jobs view

  • From the navigation panel, select Views > Jobs. This view displays the jobs that have run, including projects, templates, management jobs, SCM updates, and playbook runs.

Jobs view

5.1.3. Schedules view

From the navigation panel, select Views > Schedules. This view shows all the scheduled jobs that are configured.


5.1.4. Activity Stream

  • From the navigation panel, select Views > Activity Stream to display Activity Streams. Most screens have an Activity Stream icon.

Activity Stream

An Activity Stream shows all changes for a particular object. For each change, the Activity Stream shows the time of the event, the user that initiated the event, and the action. The information displayed varies depending on the type of event. Click the View Event Details icon to display the event log for the change.

event log

You can filter the Activity Stream by the initiating user, by system (if it was system initiated), or by any related object, such as a credential, job template, or schedule.

The Activity Stream on the main Dashboard shows the Activity Stream for the entire instance. Most pages permit viewing an activity stream filtered for that specific object.

5.1.5. Workflow Approvals

  • From the navigation panel, select Views > Workflow Approvals to see your workflow approval queue. The list contains actions that require you to approve or deny before a job can proceed.

5.1.6. Host Metrics

  • From the navigation panel, select Host Metrics to see the activity associated with hosts, which includes counts on those that have been automated, used in inventories, and deleted.

Host Metrics

For further information, see Troubleshooting: Keep your subscription in compliance.

5.2. Resources Menu

The Resources menu provides access to the following components of automation controller:

  • Templates
  • Credentials
  • Projects
  • Inventories
  • Hosts

5.3. Access Menu

The Access menu enables you to configure who has permissions to automation controller resources:

  • Organizations
  • Users
  • Teams

5.4. Administration

The Administration menu provides access to the administrative options of automation controller. From here, you can create, view, and edit:

  • Credential Types
  • Notifications
  • Management Jobs
  • Instance Groups
  • Instances
  • Applications
  • Execution Environments
  • Topology View

5.5. The Settings menu

Configure global and system-level settings using the Settings menu. The Settings menu provides access to automation controller configuration settings.

The Settings page enables administrators to configure the following:

  • Authentication
  • Jobs
  • System-level attributes
  • Customize the UI
  • Product license information

Chapter 7. Organizations

An organization is a logical collection of users, teams, projects, and inventories. It is the highest level object in the controller object hierarchy.

Hierarchy

From the navigation menu, select Organizations to display the existing organizations for your installation.

Organizations

Organizations can be searched by Name or Description.

Modify organizations using the Edit icon. Click Delete to remove a selected organization.

7.1. Creating an organization

Note

Automation controller automatically creates a default organization. If you have a Self-support level license, you have only the default organization available and must not delete it.

You can use the default organization as it is initially set up and edit it later.

  1. Click Add to create a new organization.

    Organizations- new organization form

  2. You can configure several attributes of an organization:

    • Enter the Name for your organization (required).
    • Enter a Description for the organization.
    • Max Hosts is only editable by a superuser to set an upper limit on the number of license hosts that an organization can have. Setting this value to 0 signifies no limit. If you try to add a host to an organization that has reached or exceeded its cap on hosts, an error message displays:

      The inventory sync output view also shows the host limit error.

      Error

      Click Details for additional information about the error.

    • Enter the name of the Instance Groups on which to run this organization.
    • Enter the name of the execution environment on which to run this organization, or search for an existing one. For more information, see Upgrading to Execution Environments.
    • Optional: Enter the Galaxy Credentials or search from a list of existing ones.
  3. Click Save to finish creating the organization.

When the organization is created, automation controller displays the Organization details, and enables you to manage access and execution environments for the organization.

Organization details

From the Details tab, you can edit or delete the organization.

Note

If you attempt to delete items that are used by other work items, a message lists the items that are affected by the deletion and prompts you to confirm the deletion. Some screens display items that are invalid or were previously deleted; such items will fail to run.

The following is an example of such a message:

Warning

7.2. Access to organizations

  • Select Access when viewing your organization to display the users associated with this organization, and their roles.

Organization access

Use this page to complete the following tasks:

  • Manage the user membership for this organization. Click Users on the navigation panel to manage user membership on a per-user basis from the Users page.
  • Assign specific users certain levels of permissions within your organization.
  • Enable them to act as an administrator for a particular resource. For more information, see Role-Based Access Controls.

Click a user to display that user’s details. You can review, grant, edit, and remove associated permissions for that user. For more information, see Users.

7.2.1. Add a User or Team

To add a user or team to an organization, the user or team must already exist.

For more information, see Creating a User and Creating a Team.

To add existing users or teams to the organization:

Procedure

  1. In the Access tab of the Organization page, click Add.
  2. Select either Users or Teams to add.
  3. Click Next.
  4. Select one or more users or teams from the list by clicking the checkbox next to the name to add them as members.
  5. Click Next.

    Add roles

    In this example, two users have been selected.

  6. Select the role you want the selected user or team to have. Scroll down for a complete list of roles. Different resources have different options available.

    Add user roles

  7. Click Save to apply the roles to the selected user or team, and to add them as members. The Add Users or Add Teams window displays the updated roles assigned for each user and team.

    Note

    A user or team with associated roles retains them if they are reassigned to another organization.

  8. To remove roles for a particular user, click the Disassociate icon next to its resource. This launches a confirmation dialog, asking you to confirm the disassociation.

7.2.2. Work with Notifications

Selecting the Notifications tab on the Organization details page enables you to review any notification integrations you have set up.

Notifications

Use the toggles to enable or disable the notifications to use with your particular organization. For more information, see Enable and Disable Notifications.

If no notifications have been set up, select Administration > Notifications from the navigation panel.

For information on configuring notification types, see Notification Types.

Chapter 8. Managing Users in automation controller

Users associated with an organization are shown in the Access tab of the organization.

Other users can be added to an organization, including a Normal User, System Auditor, or System Administrator, but they must be created first.

You can sort or search the User list by Username, First Name, or Last Name. Click the headers to toggle your sorting preference.

You can view user permissions and user type beside the user name on the Users page.

8.1. Creating a user

You can create new users in automation controller and assign them a role.

Procedure

  1. On the Users page, click Add.

    The Create User dialog opens.

  2. Enter the appropriate details about your new user. Fields marked with an asterisk (*) are required.

    Note

    If you are modifying your own password, log out and log back in again for it to take effect.

    You can assign three types of users:

    • Normal User: Normal Users have read and write access limited to the resources (such as inventory, projects, and job templates) for which that user has been granted the appropriate roles and privileges.
    • System Auditor: Auditors inherit the read-only capability for all objects within the environment.
    • System Administrator: A System Administrator (also known as a Superuser) has full system administration privileges — with full read and write privileges over the entire installation. A System Administrator is typically responsible for managing all aspects of and delegating responsibilities for day-to-day work to various users.

      User Types

      Note

      A default administrator with the role of System Administrator is automatically created during the installation process and is available to all users of automation controller. One System Administrator must always exist. To delete the System Administrator account, you must first create another System Administrator account.

  3. Click Save.

    When the user is successfully created, the User dialog opens.

    Edit User Form

  4. Click Delete to delete the user, or you can delete users from a list of current users. For more information, see Deleting a user.

    The same window opens whether you click the user’s name, or the Edit icon beside the user. You can use this window to review and modify the user’s Organizations, Teams, Roles, and other user membership details.

Note

If the user is not newly-created, the details screen displays the last login activity of that user.

If you log in as yourself, and view the details of your user profile, you can manage tokens from your user profile.

For more information, see Adding a user token.

8.2. Deleting a user

Before you can delete a user, you must have user permissions. When you delete a user account, the name and email of the user are permanently removed from automation controller.

Procedure

  1. From the navigation panel, select Access > Users.
  2. Click Users to display a list of the current users.
  3. Select the check box for the user that you want to remove.
  4. Click Delete.
  5. Click Delete in the confirmation warning message to permanently delete the user.

8.3. Displaying user organizations

Select a specific user to display the Details page, then select the Organizations tab to display the list of organizations of which that user is a member.

Note

Organization membership cannot be modified from this display panel.

Users - Organizations list

8.4. Displaying a user’s teams

From the Users > Details page, select the Teams tab to display the list of teams of which that user is a member.

Note

You cannot modify team membership from this display panel. For more information, see Teams.

Until you create a team and assign a user to that team, the assigned teams list for that user is empty.

8.5. Displaying a user’s roles

From the Users > Details page, select the Roles tab to display the set of roles assigned to this user. These offer the ability to read, change, and administer projects, inventories, job templates, and other elements.

Users- permissions list

8.5.1. Adding and removing user permissions

To add permissions to a particular user:

Procedure

  1. From the Users list view, click on the name of a user.
  2. On the Details page, click Add. This opens the Add user permissions wizard.

    Add Permissions Form

  3. Select the type of resource to which you want to assign permissions, and for which the user will have access.
  4. Click Next.
  5. Select the resources on which the user will have access, and click Next.


  6. Select the roles to apply to each selected resource. Different resources have different options available.


  7. Click Save.
  8. The Roles page displays the updated profile for the user with the permissions assigned for each selected resource.
Note

You can also add teams, individual users, or multiple users and assign them permissions at the object level. This includes templates, credentials, inventories, projects, organizations, or instance groups. This feature reduces the time for an organization to onboard many users at one time.

To remove permissions:

  • Click the Disassociate icon next to the resource. This launches a confirmation dialog asking you to confirm the disassociation.

8.6. Creating tokens for a user

The Tokens tab is only present when you view your own user profile.

Before you add a token for your user, you might want to Create an application if you want to associate your token with it.

You can also create a Personal Access Token (PAT) without associating it with any application.

Procedure

  1. Select your user from the Users list view to configure your OAuth 2 tokens.
  2. Select the Tokens tab from your user’s profile.
  3. Click Add to open the Create Token window.
  4. Enter the following information:

    • Application: Enter the name of the application with which you want to associate your token.

      Alternatively, you can search for the application name by clicking the Search icon. This opens a separate window that enables you to choose from the available options. Use the Search bar to filter by name if the list is extensive.

      Leave this field blank if you want to create a PAT that is not linked to any application.

    • Optional: Description: Provide a short description for your token.
    • Scope: Specify the level of access you want this token to have.
  5. Click Save, or click Cancel to abandon your changes.
  6. After the token is saved, the newly created token for the user is displayed.

    User -token information

    Important

    This is the only time the token value and associated refresh token value are ever shown.
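
After you save the token, you can use it to authenticate API requests. The following is a minimal sketch with a placeholder token value and server name:

# Request details of the current user with an OAuth 2 bearer token
curl -H "Authorization: Bearer <token_value>" https://<CONTROLLER_SERVER_NAME>/api/v2/me/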

Chapter 9. Managing teams

A Team is a subdivision of an organization with associated users, projects, credentials, and permissions. Teams offer a means to implement role-based access control schemes and delegate responsibilities across organizations. For example, you can grant permissions to a whole team rather than to each user on the team.

From the navigation panel, select Access > Teams.

Teams list

You can sort and search the team list by Name or Organization.

Click the Edit icon next to the entry to edit information about the team. You can also review Users and Permissions associated with this team.

9.1. Creating a team

You can create as many teams of users as you need for your organization. You can assign permissions to each team, just as with users. Teams can also assign ownership for credentials, minimizing the steps needed to assign the same credentials to each user.

Procedure

  1. On the Teams page, click Add.
  2. Enter the appropriate details into the following fields:

    • Name
    • Optional: Description
    • Organization: You must select an existing organization
  3. Click Save. The Details dialog opens.
  4. Review and edit your team information.

    Teams- Details dialog

9.1.1. Adding a user to a team

To add a user to a team, the user must already have been created. For more information, see Creating a user. Adding a user to a team adds them as a member only. Use the Access tab to specify a role for the user on different resources.

Procedure

  1. In the Access tab of the Details page, click Add.
  2. Follow the prompts to add a user and assign them to roles.
  3. Click Save.

9.1.2. Removing roles for a user

Procedure

  • To remove roles for a particular user, click the Disassociate icon next to its resource.

This launches a confirmation dialog, asking you to confirm the disassociation.

9.1.3. Team access

The Access tab displays the list of users that are members of a specific team.

Teams - users list

You can search this list by Username, First Name, or Last Name. For more information, see Users.

9.1.4. Team roles and permissions

Select the Roles tab on the Team Details page to display a list of the permissions that are currently available for this team.

9.1.5. Adding and removing team permissions

By default, all teams that you create have read permissions. You can assign additional permissions, such as edit and administer projects, inventories, and other elements.

You can set permissions through an inventory, project, job template, or within the Organizations view.

Procedure

  1. From the Teams list view, click the required team.
  2. On the Details page, click Add. This opens the Add team permissions wizard.
  3. Select the object to which the team requires access.
  4. Click Next.
  5. Select the resources to which you want to assign team roles.
  6. Click Next.
  7. Click the checkbox beside the role to assign that role to your chosen type of resource. Different resources have different options available.

    Assign roles

  8. Click Save.
  9. The updated profile for the team, with the roles assigned for each selected resource, is displayed.

    Teams sample roles

9.1.5.1. Removing team permissions
  • To remove Permissions for a particular resource, click the Disassociate icon next to its resource. This launches a confirmation dialog, asking you to confirm the disassociation.
Note

You can also add teams, individual users, or many users and assign them permissions at the object level. This includes projects, inventories, job templates, and workflow templates. This feature reduces the time for an organization to onboard many users at one time.

Chapter 10. Managing user credentials

Credentials authenticate the automation controller user when launching jobs against machines, synchronizing with inventory sources, and importing project content from a version control system.

You can grant users and teams the ability to use these credentials, without exposing the credential to the user. If a user moves to a different team or leaves the organization, you do not have to re-key all of your systems just because that credential was available in automation controller.

Note

Automation controller encrypts passwords and key information in the database and never makes secret information visible through the API. For further information, see the Automation controller Administration Guide.

10.1. How credentials work

Automation controller uses SSH to connect to remote hosts. To pass the key from automation controller to SSH, the key must be decrypted before it can be written to a named pipe. Automation controller uses that pipe to send the key to SSH, so that the key is never written to disk. If passwords are used, automation controller handles them by responding directly to the password prompt and decrypting the password before writing it to the prompt.

10.2. Creating new credentials

Credentials added to a team are made available to all members of the team. You can also add credentials to individual users.

As part of the initial setup, two credentials are available for your use: Demo Credential and Ansible Galaxy. Use the Ansible Galaxy credential as a template. You can copy this credential, but not edit it. Add more credentials as needed.

Procedure

  1. From the navigation panel, select Resources > Credentials.
  2. Click Add.
  3. Enter the following information:

    • The name for your new credential.
    • Optional: a description for the new credential.
    • Optional: The name of the organization with which the credential is associated.

      Note

      A credential with a set of permissions associated with one organization persists if the credential is reassigned to another organization.

  4. In the Credential Type field, enter or select the credential type you want to create.
  5. Enter the appropriate details depending on the type of credential selected, as described in Credential types.
  6. Click Save.

10.3. Adding new users and job templates to existing credentials

Procedure

  1. From the navigation panel, select Resources > Credentials.
  2. Select the credential that you want to assign to additional users.
  3. Click the Access tab. You can see users and teams associated with this credential and their roles.
  4. Choose a user and click Add. If no users exist, add them from the Users menu. For more information, see Users.
  5. Select Job Templates to display the job templates associated with this credential, and which jobs have run recently by using this credential.
  6. Choose a job template and click Add to assign the credential to additional job templates. For more information about creating new job templates, see the Job templates section.

10.4. Credential types

Automation controller supports the following credential types:

The credential types associated with Centrify, CyberArk, HashiCorp Vault, Microsoft Azure Key Vault, and Thycotic are part of the credential plugins capability that enables an external system to look up your secrets information.

For more information, see Secrets Management System.

10.4.1. Amazon Web Services credential type

Select this credential to enable synchronization of cloud inventory with Amazon Web Services.

Automation controller uses the following environment variables for AWS credentials:

AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_SECURITY_TOKEN

These correspond to fields that are prompted for in the user interface.

Amazon Web Services credentials consist of the AWS Access Key and Secret Key.

Automation controller provides support for EC2 STS tokens, also known as Identity and Access Management (IAM) STS credentials. Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS IAM users.

Note

If the values of your tags in EC2 contain Booleans (yes/no/true/false), you must quote them.
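
For example, when defining EC2 tags in a playbook, quote values that YAML would otherwise parse as Booleans. The tag names here are illustrative:

# Quote Boolean-like tag values so they are treated as strings
tags:
  enabled: "true"
  backup: "no"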

Warning

To use implicit IAM role credentials, do not attach AWS cloud credentials in automation controller when relying on IAM roles to access the AWS API.

Attaching your AWS cloud credential to your job template forces the use of your AWS credentials, not your IAM role credentials.

Additional resources

For more information about the IAM/EC2 STS Token, see Temporary security credentials in IAM.

10.4.1.1. Access Amazon EC2 credentials in an Ansible Playbook

You can get AWS credential parameters from a job runtime environment:

vars:
  aws:
    access_key: '{{ lookup("env", "AWS_ACCESS_KEY_ID") }}'
    secret_key: '{{ lookup("env", "AWS_SECRET_ACCESS_KEY") }}'
    security_token: '{{ lookup("env", "AWS_SECURITY_TOKEN") }}'

10.4.2. Ansible Galaxy/Automation Hub API token credential type

Select this credential to access Ansible Galaxy or use a collection published on an instance of private automation hub.

Enter the Galaxy server URL on this screen.

Populate the Galaxy Server URL field with the contents of the Server URL field at Red Hat Hybrid Cloud Console. Populate the Auth Server URL field with the contents of the SSO URL field at Red Hat Hybrid Cloud Console.
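
For example, when the credential points to automation hub hosted on the Red Hat Hybrid Cloud Console, the values typically resemble the following; confirm the exact URLs shown on the console for your account:

Galaxy Server URL: https://console.redhat.com/api/automation-hub/
Auth Server URL: https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token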

Additional resources

For more information, see Using Collections with automation hub.

10.4.3. Centrify Vault Credential Provider Lookup credential type

This is considered part of the secret management capability. For more information, see Centrify Vault Credential Provider Lookup.

10.4.4. Container Registry credential type

Select this credential to enable automation controller to access a collection of container images. For more information, see What is a container registry?.

You must specify a name. The Authentication URL field is pre-populated with a default value. You can change the value by specifying the authentication endpoint for a different container registry.

10.4.5. CyberArk Central Credential Provider Lookup credential type

This is considered part of the secret management capability.

For more information, see CyberArk Central Credential Provider (CCP) Lookup.

10.4.6. CyberArk Conjur Secrets Manager Lookup credential type

This is considered part of the secret management capability.

For more information, see CyberArk Conjur Secrets Manager Lookup.

10.4.7. GitHub Personal Access Token credential type

Select this credential to enable you to access GitHub by using a Personal Access Token (PAT), which you can get through GitHub.

For more information, see Working with Webhooks.

GitHub PAT credentials require a value in the Token field, which is provided in your GitHub profile settings.

Use this credential to establish an API connection to GitHub for use in webhook listener jobs, to post status updates.

10.4.8. GitLab Personal Access Token credential type

Select this credential to enable you to access GitLab by using a Personal Access Token (PAT), which you can get through GitLab.

For more information, see Working with Webhooks.

GitLab PAT credentials require a value in the Token field, which is provided in your GitLab profile settings.

Use this credential to establish an API connection to GitLab for use in webhook listener jobs, to post status updates.

10.4.9. Google Compute Engine credential type

Select this credential to enable synchronization of a cloud inventory with Google Compute Engine (GCE).

Automation controller uses the following environment variables for GCE credentials:

GCE_EMAIL
GCE_PROJECT
GCE_CREDENTIALS_FILE_PATH

These correspond to fields that are prompted for in the user interface:

GCE credentials require the following information:

  • Service Account Email Address: The email address assigned to the Google Compute Engine service account.
  • Optional: Project: Provide the GCE assigned identification or the unique project ID that you provided at project creation time.
  • Optional: Service Account JSON File: Upload a GCE service account file. Click Browse to browse for the file that has the special account information that can be used by services and applications running on your GCE instance to interact with other Google Cloud Platform APIs. This grants permissions to the service account and virtual machine instances.
  • RSA Private Key: The PEM file associated with the service account email.
10.4.9.1. Access Google Compute Engine credentials in an Ansible Playbook

You can get GCE credential parameters from a job runtime environment:

vars:
  gce:
    email: '{{ lookup("env", "GCE_EMAIL") }}'
    project: '{{ lookup("env", "GCE_PROJECT") }}'
    pem_file_path: '{{ lookup("env", "GCE_PEM_FILE_PATH") }}'

10.4.10. GPG Public Key credential type

Select this credential type to enable automation controller to verify the integrity of the project when synchronizing from source control.

For more information about how to generate a valid keypair, use the CLI tool to sign content, and how to add the public key to the controller, see Project Signing and Verification.

10.4.11. HashiCorp Vault Secret Lookup credential type

This is considered part of the secret management capability.

For more information, see HashiCorp Vault Secret Lookup.

10.4.12. HashiCorp Vault Signed SSH credential type

This is considered part of the secret management capability.

For more information, see HashiCorp Vault Signed SSH.

10.4.13. Insights credential type

Select this credential type to enable synchronization of cloud inventory with Red Hat Insights.

Insights credentials are the Insights Username and Password, which are the user’s Red Hat Customer Portal Account username and password.

The extra_vars and env injectors for Insights are as follows:

ManagedCredentialType(
    namespace='insights',
....
....
....

injectors={
        'extra_vars': {
            "scm_username": "{{username}}",
            "scm_password": "{{password}}",
        },
        'env': {
            'INSIGHTS_USER': '{{username}}',
            'INSIGHTS_PASSWORD': '{{password}}',
        },
    },
)

10.4.14. Machine credential type

Machine credentials enable automation controller to call Ansible on hosts under your management. You can specify the SSH username, optionally give a password, an SSH key, a key password, or have automation controller prompt the user for their password at deployment time. They define SSH and user-level privilege escalation access for playbooks, and are used when submitting jobs to run playbooks on a remote host.

The following network connections use Machine as the credential type: httpapi, netconf, and network_cli.

Machine and SSH credentials do not use environment variables. They pass the username through the ansible -u flag, and interactively write the SSH password when the underlying SSH client prompts for it.

Machine credentials require the following inputs:

  • Username: The username to use for SSH authentication.
  • Password: The password to use for SSH authentication. This password is stored encrypted in the database, if entered. Alternatively, you can configure automation controller to ask the user for the password at launch time by selecting Prompt on launch. In these cases, a dialog opens when the job is launched, prompting the user to enter the password and password confirmation.
  • SSH Private Key: Copy or drag-and-drop the SSH private key for the machine credential.
  • Private Key Passphrase: If the SSH Private Key used is protected by a password, you can configure a Key Passphrase for the private key. This password is stored encrypted in the database, if entered. You can also configure automation controller to ask the user for the key passphrase at launch time by selecting Prompt on launch. In these cases, a dialog opens when the job is launched, prompting the user to enter the key passphrase and key passphrase confirmation.
  • Privilege Escalation Method: Specifies the type of escalation privilege to assign to specific users. This is the same as specifying the --become-method=BECOME_METHOD parameter, where BECOME_METHOD is any of the existing methods, or a custom method you have written. Begin entering the name of the method, and the appropriate name auto-populates.
  • empty selection: If a task or play has become set to yes and is used with an empty selection, it defaults to sudo.
  • sudo: Performs single commands with superuser (root user) privileges.
  • su: Switches to the superuser (root user) account (or to other user accounts).
  • pbrun: Requests that an application or command be run in a controlled account and provides for advanced root privilege delegation and keylogging.
  • pfexec: Executes commands with predefined process attributes, such as specific user or group IDs.
  • dzdo: An enhanced version of sudo that uses RBAC information in Centrify’s Active Directory service. For more information, see Centrify’s site on DZDO.
  • pmrun: Requests that an application is run in a controlled account. See Privilege Manager for Unix 6.0.
  • runas: Enables you to run as the current user.
  • enable: Switches to elevated permissions on a network device.
  • doas: Enables your remote/login user to run commands as another user through the doas ("Do as user") utility.
  • ksu: Enables your remote/login user to run commands as another user through Kerberos access.
  • machinectl: Enables you to manage containers through the systemd machine manager.
  • sesu: Enables your remote/login user to run commands as another user through the CA Privileged Access Manager.
Note

Custom become plugins are available from Ansible 2.8+. For more information, see Understanding Privilege Escalation and the list of Become plugins.

  • Privilege Escalation Username: You see this field only if you selected an option for privilege escalation. Enter the username to use with escalation privileges on the remote system.
  • Privilege Escalation Password: You see this field only if you selected an option for privilege escalation. Enter the password to use to authenticate the user through the selected privilege escalation type on the remote system. This password is stored encrypted in the database. You can also configure automation controller to ask the user for the password at launch time by selecting Prompt on launch. In these cases, a dialog opens when the job is launched, prompting the user to enter the password and password confirmation.
Note

You must use a sudo password in combination with SSH passwords or SSH Private Keys, because automation controller must first establish an authenticated SSH connection with the host before invoking sudo to change to the sudo user.

Warning

Credentials that are used in scheduled jobs must not be configured as Prompt on launch.
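
The privilege escalation fields in the preceding list correspond directly to a play's become settings. The following is a minimal sketch, assuming sudo is the selected Privilege Escalation Method; the machine credential supplies the SSH and escalation secrets at runtime:

- hosts: all
  become: true
  become_method: sudo  # corresponds to the Privilege Escalation Method on the credential
  tasks:
    - name: Confirm the escalated user
      command: whoami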

10.4.14.1. Access machine credentials in an Ansible Playbook

You can get username and password from Ansible facts:

vars:
  machine:
    username: '{{ ansible_user }}'
    password: '{{ ansible_password }}'

10.4.15. Microsoft Azure Key Vault credential type

This is considered part of the secret management capability.

For more information, see Microsoft Azure Key Vault.

10.4.16. Microsoft Azure Resource Manager credential type

Select this credential type to enable synchronization of cloud inventory with Microsoft Azure Resource Manager.

Microsoft Azure Resource Manager credentials require the following inputs:

  • Subscription ID: The Subscription UUID for the Microsoft Azure account.
  • Username: The username to use to connect to the Microsoft Azure account.
  • Password: The password to use to connect to the Microsoft Azure account.
  • Client ID: The Client ID for the Microsoft Azure account.
  • Client Secret: The Client Secret for the Microsoft Azure account.
  • Tenant ID: The Tenant ID for the Microsoft Azure account.
  • Azure Cloud Environment: The variable associated with Azure cloud or Azure stack environments.

These fields are equivalent to the variables in the API.

To pass service principal credentials, define the following variables:

AZURE_CLIENT_ID
AZURE_SECRET
AZURE_SUBSCRIPTION_ID
AZURE_TENANT
AZURE_CLOUD_ENVIRONMENT

To pass an Active Directory username and password pair, define the following variables:

AZURE_AD_USER
AZURE_PASSWORD
AZURE_SUBSCRIPTION_ID

You can also pass credentials as parameters to a task within a playbook. The order of precedence is parameters, then environment variables, and finally a file found in your home directory.

To pass credentials as parameters to a task, use the following parameters for service principal credentials:

client_id
secret
subscription_id
tenant
azure_cloud_environment

Alternatively, pass the following parameters for Active Directory username/password:

ad_user
password
subscription_id
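
For example, a task can pass the service principal parameters directly. The following sketch uses the azure.azcollection.azure_rm_resourcegroup module for illustration; the resource group name and location are hypothetical:

- name: Create a resource group by passing service principal parameters
  azure.azcollection.azure_rm_resourcegroup:
    name: demo-rg        # hypothetical resource group
    location: eastus
    client_id: '{{ lookup("env", "AZURE_CLIENT_ID") }}'
    secret: '{{ lookup("env", "AZURE_SECRET") }}'
    tenant: '{{ lookup("env", "AZURE_TENANT") }}'
    subscription_id: '{{ lookup("env", "AZURE_SUBSCRIPTION_ID") }}'
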
10.4.16.1. Access Microsoft Azure Resource Manager credentials in an Ansible Playbook

You can get Microsoft Azure credential parameters from a job runtime environment:

vars:
  azure:
    client_id: '{{ lookup("env", "AZURE_CLIENT_ID") }}'
    secret: '{{ lookup("env", "AZURE_SECRET") }}'
    tenant: '{{ lookup("env", "AZURE_TENANT") }}'
    subscription_id: '{{ lookup("env", "AZURE_SUBSCRIPTION_ID") }}'

10.4.17. Network credential type

Note

Select the Network credential type if you are using a local connection with provider to use Ansible networking modules to connect to and manage networking devices.

When connecting to network devices, the credential type must match the connection type:

  • For local connections using provider, credential type should be Network.
  • For all other network connections (httpapi, netconf, and network_cli), the credential type should be Machine.

For more information about connection types available for network devices, see Multiple Communication Protocols.

Automation controller uses the following environment variables for Network credentials:

ANSIBLE_NET_USERNAME
ANSIBLE_NET_PASSWORD

Provide the following information for network credentials:

  • Username: The username to use in conjunction with the network device.
  • Password: The password to use in conjunction with the network device.
  • SSH Private Key: Copy or drag-and-drop the actual SSH Private Key to be used to authenticate the user to the network through SSH.
  • Private Key Passphrase: The passphrase for the private key to authenticate the user to the network through SSH.
  • Authorize: Select this from the Options field to control whether or not to enter privileged mode. If Authorize is checked, enter a password in the Authorize Password field to access privileged mode.

For more information, see Porting Ansible Network Playbooks with New Connection Plugins.

10.4.18. Access network credentials in an Ansible Playbook

You can get the username and password parameters from a job runtime environment:

vars:
  network:
    username: '{{ lookup("env", "ANSIBLE_NET_USERNAME") }}'
    password: '{{ lookup("env", "ANSIBLE_NET_PASSWORD") }}'

10.4.19. OpenShift or Kubernetes API Bearer Token credential type

Select this credential type to create instance groups that point to a Kubernetes or OpenShift container.

For more information, see Container and Instance Groups in the Automation controller Administration Guide.

Provide the following information for container credentials:

  • OpenShift or Kubernetes API Endpoint (required): The endpoint used to connect to an OpenShift or Kubernetes container.
  • API Authentication Bearer Token (required): The token used to authenticate the connection.
  • Optional: Verify SSL: You can check this option to verify that the server’s SSL/TLS certificate is valid and trusted. Environments that use an internal or private Certificate Authority (CA) must leave this option unchecked to disable verification.
  • Certificate Authority Data: Include the BEGIN CERTIFICATE and END CERTIFICATE lines when pasting the certificate, if provided.

A container group is a type of instance group that has an associated credential that enables connection to an OpenShift cluster. To set up a container group, you must have the following items:

  • A namespace that you can launch pods into. Although every cluster has a default namespace, you can use a specific namespace.
  • A service account that has the roles that enable it to start and manage pods in this namespace.
  • If you use execution environments in a private registry, and have a container registry credential associated with them in automation controller, the service account also requires the roles to get, create, and delete secrets in the namespace.

    If you do not want to give these roles to the service account, you can pre-create the ImagePullSecrets and specify them on the pod spec for the container group. In this case, the execution environment must not have a Container Registry credential associated, or automation controller attempts to create the secret for you in the namespace.

  • A token associated with that service account (OpenShift or Kubernetes Bearer Token)
  • A CA certificate associated with the cluster
10.4.19.1. Creating a service account in an OpenShift cluster

Create a service account in an OpenShift or Kubernetes cluster to run jobs in a container group through automation controller. After you create the service account, its credentials are provided to automation controller in the form of an OpenShift or Kubernetes API Bearer Token credential.

After you create a service account, use the information in the new service account to configure automation controller.

Procedure

  1. To create a service account, download and use the sample service account definition, and change it as required to obtain the credentials described previously.
  2. Apply the configuration from the sample service account:

    oc apply -f containergroup-sa.yml
  3. Get the secret name associated with the service account:

    export SA_SECRET=$(oc get sa containergroup-service-account -o json | jq '.secrets[0].name' | tr -d '"')
  4. Get the token from the secret:

    oc get secret ${SA_SECRET} -o json | jq '.data.token' | xargs | base64 --decode > containergroup-sa.token
  5. Get the CA cert:

    oc get secret $SA_SECRET -o json | jq '.data["ca.crt"]' | xargs | base64 --decode > containergroup-ca.crt
  6. Use the contents of containergroup-sa.token and containergroup-ca.crt to provide the information for the OpenShift or Kubernetes API Bearer Token required for the container group.

10.4.20. OpenStack credential type

Select this credential type to enable synchronization of cloud inventory with OpenStack.

Provide the following information for OpenStack credentials:

  • Username: The username to use to connect to OpenStack.
  • Password (API Key): The password or API key to use to connect to OpenStack.
  • Host (Authentication URL): The host to be used for authentication.
  • Project (Tenant Name): The Tenant name or Tenant ID used for OpenStack. This value is usually the same as the username.
  • Optional: Project (Domain Name): Provide the project name associated with your domain.
  • Optional: Domain name: Provide the FQDN to be used to connect to OpenStack.

If you are interested in using OpenStack Cloud Credentials, see Use Cloud Credentials with a cloud inventory, which includes a sample playbook.

10.4.21. Red Hat Ansible Automation Platform credential type

Select this credential to access another automation controller instance.

Ansible Automation Platform credentials require the following inputs:

  • Red Hat Ansible Automation Platform: The base URL or IP address of the other instance to connect to.
  • Username: The username to use to connect to it.
  • Password: The password to use to connect to it.
  • OAuth Token: If a username and password are not used, provide an OAuth token to use to authenticate.

The env injectors for Ansible Automation Platform are as follows:

ManagedCredentialType(
    namespace='controller',

....
....
....

injectors={
        'env': {
            'TOWER_HOST': '{{host}}',
            'TOWER_USERNAME': '{{username}}',
            'TOWER_PASSWORD': '{{password}}',
            'TOWER_VERIFY_SSL': '{{verify_ssl}}',
            'TOWER_OAUTH_TOKEN': '{{oauth_token}}',
            'CONTROLLER_HOST': '{{host}}',
            'CONTROLLER_USERNAME': '{{username}}',
            'CONTROLLER_PASSWORD': '{{password}}',
            'CONTROLLER_VERIFY_SSL': '{{verify_ssl}}',
            'CONTROLLER_OAUTH_TOKEN': '{{oauth_token}}',
        }
    },
)
10.4.21.1. Access automation controller credentials in an Ansible Playbook

You can get the host, username, and password parameters from a job runtime environment:

vars:
  controller:
    host: '{{ lookup("env", "CONTROLLER_HOST") }}'
    username: '{{ lookup("env", "CONTROLLER_USERNAME") }}'
    password: '{{ lookup("env", "CONTROLLER_PASSWORD") }}'

10.4.22. Red Hat Satellite 6 credential type

Select this credential type to enable synchronization of cloud inventory with Red Hat Satellite 6.

Automation controller writes a Satellite configuration file based on fields prompted in the user interface. The absolute path to the file is set in the following environment variable:

FOREMAN_INI_PATH

Satellite credentials have the following required inputs:

  • Satellite 6 URL: The Satellite 6 URL or IP address to connect to.
  • Username: The username to use to connect to Satellite 6.
  • Password: The password to use to connect to Satellite 6.
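
As with the other credential types, a playbook can read the location of the generated configuration file from the job runtime environment. The following is a minimal sketch; the variable name is arbitrary:

vars:
  satellite_config_path: '{{ lookup("env", "FOREMAN_INI_PATH") }}'

tasks:
  - name: Display the generated Satellite configuration file
    command: cat {{ satellite_config_path }}
    delegate_to: localhost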

10.4.23. Red Hat Virtualization credential type

Select this credential to enable automation controller to access Ansible’s oVirt4.py dynamic inventory plugin, which is managed by Red Hat Virtualization.

Automation controller uses the following environment variables for Red Hat Virtualization credentials. These are fields in the user interface:

OVIRT_URL
OVIRT_USERNAME
OVIRT_PASSWORD

Provide the following information for Red Hat Virtualization credentials:

  • Host (Authentication URL): The host URL or IP address to connect to. To sync with the inventory, the credential URL needs to include the ovirt-engine/api path.
  • Username: The username to use to connect to oVirt4. This must include the domain profile to succeed, for example username@ovirt.host.com.
  • Password: The password to use to connect to it.
  • Optional: CA File: Provide an absolute path to the oVirt certificate file. The file might have a .pem, .cer, or .crt extension, but .pem is preferred for consistency.
10.4.23.1. Access virtualization credentials in an Ansible Playbook

You can get the Red Hat Virtualization credential parameters from a job runtime environment:

vars:
  ovirt:
    ovirt_url: '{{ lookup("env", "OVIRT_URL") }}'
    ovirt_username: '{{ lookup("env", "OVIRT_USERNAME") }}'
    ovirt_password: '{{ lookup("env", "OVIRT_PASSWORD") }}'

The file and env injectors for Red Hat Virtualization are as follows:

ManagedCredentialType(
    namespace='rhv',

....
....
....

injectors={
        # The duplication here is intentional; the ovirt4 inventory plugin
        # writes a .ini file for authentication, while the ansible modules for
        # ovirt4 use a separate authentication process that support
        # environment variables; by injecting both, we support both
        'file': {
            'template': '\n'.join(
                [
                    '[ovirt]',
                    'ovirt_url={{host}}',
                    'ovirt_username={{username}}',
                    'ovirt_password={{password}}',
                    '{% if ca_file %}ovirt_ca_file={{ca_file}}{% endif %}',
                ]
            )
        },
        'env': {'OVIRT_INI_PATH': '{{tower.filename}}', 'OVIRT_URL': '{{host}}', 'OVIRT_USERNAME': '{{username}}', 'OVIRT_PASSWORD': '{{password}}'},
    },
)

10.4.24. Source Control credential type

Source Control credentials are used with projects to clone and update local source code repositories from a remote revision control system such as Git or Subversion.

Source Control credentials require the following inputs:

  • Username: The username to use in conjunction with the source control system.
  • Password: The password to use in conjunction with the source control system.
  • SCM Private Key: Copy or drag-and-drop the actual SSH Private Key to be used to authenticate the user to the source control system through SSH.
  • Private Key Passphrase: If the SSH Private Key used is protected by a passphrase, you can configure a Key Passphrase for the private key.
Note

You cannot configure Source Control credentials as Prompt on launch.

If you are using a GitHub account for a Source Control credential and you have Two Factor Authentication (2FA) enabled on your account, you must use your Personal Access Token in the password field rather than your account password.

10.4.25. Thycotic DevOps Secrets Vault credential type

This is considered part of the secret management capability.

For more information, see Thycotic DevOps Secrets Vault.

10.4.26. Thycotic Secret Server credential type

This is considered part of the secret management capability.

For more information, see Thycotic Secret Server.

10.4.27. Ansible Vault credential type

Select this credential type to enable automation controller to decrypt content that is encrypted with Ansible Vault.

Vault credentials require the Vault Password and an optional Vault Identifier if applying multi-Vault credentialing.

For more information about multi-vault support, see the Multi-Vault Credentials section of the Automation controller Administration Guide.

You can configure automation controller to ask the user for the password at launch time by selecting Prompt on launch.

When you select Prompt on Launch, a dialog opens when the job is launched, prompting the user to enter the password.

Warning

Credentials that are used in scheduled jobs must not be configured as Prompt on launch.

For more information about Ansible Vault, see Protecting sensitive data with Ansible vault.
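
For example, content encrypted on the command line with a vault ID can later be decrypted at job runtime by a Vault credential whose Vault Identifier matches. In the following sketch, prod is a hypothetical identifier:

ansible-vault encrypt --vault-id prod@prompt group_vars/all/vault.yml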

10.4.28. VMware vCenter credential type

Select this credential type to enable synchronization of inventory with VMware vCenter.

Automation controller uses the following environment variables for VMware vCenter credentials:

VMWARE_HOST
VMWARE_USER
VMWARE_PASSWORD
VMWARE_VALIDATE_CERTS

These are fields prompted in the user interface.

VMware credentials require the following inputs:

  • vCenter Host: The vCenter hostname or IP address to connect to.
  • Username: The username to use to connect to vCenter.
  • Password: The password to use to connect to vCenter.
Note

If the VMware guest tools are not running on the instance, VMware inventory synchronization does not return an IP address for that instance.

10.4.28.1. Access VMware vCenter credentials in an Ansible Playbook

You can get VMware vCenter credential parameters from a job runtime environment:

vars:
  vmware:
    host: '{{ lookup("env", "VMWARE_HOST") }}'
    username: '{{ lookup("env", "VMWARE_USER") }}'
    password: '{{ lookup("env", "VMWARE_PASSWORD") }}'

10.5. Use automation controller credentials in a playbook

The following playbook is an example of how to use automation controller credentials in your playbook.

- hosts: all

  vars:
    machine:
      username: '{{ ansible_user }}'
      password: '{{ ansible_password }}'
    controller:
      host: '{{ lookup("env", "CONTROLLER_HOST") }}'
      username: '{{ lookup("env", "CONTROLLER_USERNAME") }}'
      password: '{{ lookup("env", "CONTROLLER_PASSWORD") }}'
    network:
      username: '{{ lookup("env", "ANSIBLE_NET_USERNAME") }}'
      password: '{{ lookup("env", "ANSIBLE_NET_PASSWORD") }}'
    aws:
      access_key: '{{ lookup("env", "AWS_ACCESS_KEY_ID") }}'
      secret_key: '{{ lookup("env", "AWS_SECRET_ACCESS_KEY") }}'
      security_token: '{{ lookup("env", "AWS_SECURITY_TOKEN") }}'
    vmware:
      host: '{{ lookup("env", "VMWARE_HOST") }}'
      username: '{{ lookup("env", "VMWARE_USER") }}'
      password: '{{ lookup("env", "VMWARE_PASSWORD") }}'
    gce:
      email: '{{ lookup("env", "GCE_EMAIL") }}'
      project: '{{ lookup("env", "GCE_PROJECT") }}'
      pem_file_path: '{{ lookup("env", "GCE_PEM_FILE_PATH") }}'
    azure:
      client_id: '{{ lookup("env", "AZURE_CLIENT_ID") }}'
      secret: '{{ lookup("env", "AZURE_SECRET") }}'
      tenant: '{{ lookup("env", "AZURE_TENANT") }}'
      subscription_id: '{{ lookup("env", "AZURE_SUBSCRIPTION_ID") }}'

  tasks:
    - debug:
        var: machine

    - debug:
        var: controller

    - debug:
        var: network

    - debug:
        var: aws

    - debug:
        var: vmware

    - debug:
        var: gce

    - shell: 'cat {{ gce.pem_file_path }}'
      delegate_to: localhost

    - debug:
        var: azure

You can also use delegate_to and any lookup variable:
- command: somecommand
  environment:
    USERNAME: '{{ lookup("env", "USERNAME") }}'
    PASSWORD: '{{ lookup("env", "PASSWORD") }}'
  delegate_to: somehost

Chapter 11. Custom credential types

As a system administrator, you can define a custom credential type in a standard format by using a YAML or JSON-like definition. You can define a custom credential type that works in ways similar to existing credential types. For example, a custom credential type can inject an API token for a third-party web service into an environment variable, for your playbook or custom inventory script to consume.

Custom credentials support the following ways of injecting their authentication information:

  • Environment variables
  • Ansible extra variables
  • File-based templating, which means generating .ini or .conf files that contain credential values
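
For example, a playbook can consume an environment variable injected by a custom credential type. The following sketch assumes a custom type that injects THIRD_PARTY_CLOUD_API_TOKEN (the same hypothetical variable used in the injector examples later in this chapter); the URL is illustrative:

- hosts: localhost
  tasks:
    - name: Call a third-party API with the injected token
      uri:
        url: https://api.example.com/v1/status  # hypothetical endpoint
        headers:
          Authorization: 'Bearer {{ lookup("env", "THIRD_PARTY_CLOUD_API_TOKEN") }}'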

You can attach one SSH credential and multiple cloud credentials to a job template. Each cloud credential must be of a different type; only one of each type is permitted. Vault credentials and machine credentials are separate entities.

Note
  • When creating a new credential type, you must avoid collisions in the extra_vars, env, and file namespaces.
  • Environment variable or extra variable names must not start with ANSIBLE_ because they are reserved.
  • You must have System administrator (superuser) permissions to be able to create and edit a credential type (CredentialType) and to be able to view the CredentialType.injection field.

11.1. Content sourcing from collections

A "managed" credential type of kind=galaxy represents a content source for fetching collections defined in requirements.yml when project updates are run. Examples of content sources are galaxy.ansible.com, console.redhat.com, or on-premise automation hub. This new credential type represents a URL and (optional) authentication details necessary to construct the environment variables when a project update runs ansible-galaxy collection install as described in the Ansible documentation, Configuring the ansible-galaxy client. It has fields that map directly to the configuration options exposed to the Ansible Galaxy CLI, for example, per-server.

An endpoint in the API reflects an ordered list of these credentials at the Organization level:

/api/v2/organizations/N/galaxy_credentials/
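
For example, you can list the ordered Galaxy credentials for an organization with a GET request. The hostname and login in this sketch are hypothetical:

curl -u admin:password "https://controller.example.org/api/v2/organizations/1/galaxy_credentials/"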

When an automation controller installation migrates existing Galaxy-oriented setting values during an upgrade, proper credentials are created and attached to every organization. After upgrading to the latest version, every organization that existed before the upgrade has a list of one or more "Galaxy" credentials associated with it.

Additionally, post-upgrade, these settings are not visible (or editable) from the /api/v2/settings/jobs/ endpoint.

Automation controller continues to fetch roles directly from public Galaxy even if galaxy.ansible.com is not the first credential in the list for the organization. The global Galaxy settings are no longer configured at the jobs level, but at the organization level in the user interface.

The organization’s Add and Edit windows have an optional Credential lookup field for credentials of kind=galaxy.

Create organization

It is important to specify the order of these credentials as order sets precedence for the sync and lookup of the content. For more information, see Creating an organization.

For more information about how to set up a project by using collections, see Using Collections with automation hub.

11.2. Backwards-Compatible API considerations

Support for version 2 of the API (api/v2/) means a one-to-many relationship for job templates to credentials (including multicloud support).

You can filter credentials in the v2 API:

curl "https://controller.example.org/api/v2/credentials/?credential_type__namespace=aws"

In the V2 Credential Type model, the relationships are defined as follows:

  • Machine: SSH
  • Vault: Vault
  • Network: Sets environment variables, for example ANSIBLE_NET_AUTHORIZE
  • SCM: Source Control
  • Cloud: EC2, AWS, and many others
  • Insights: Insights
  • Galaxy: galaxy.ansible.com, console.redhat.com, or on-premise automation hub

11.3. Content verification

Automation controller uses GNU Privacy Guard (GPG) to verify content.

For more information, see The GNU Privacy Handbook.

11.4. Getting started with credential types

From the navigation panel, select Administration → Credential Types. If no custom credential types have been created, the Credential Types page prompts you to add one.

If credential types have been created, this page displays a list of existing and available Credential Types.

To view more information about a credential type, click its name or the Edit icon.

Each credential type displays its own unique configurations in the Input Configuration field and the Injector Configuration field, if applicable. Both YAML and JSON formats are supported in the configuration fields.

11.5. Creating a new credential type

To create a new credential type:

Procedure

  1. In the Credential Types view, click Add.

    Create new credential type

  2. Enter the appropriate details in the Name and Description fields.

    Note

    When creating a new credential type, do not use reserved variable names that start with ANSIBLE_ for the INPUT and INJECTOR names and IDs, as they are invalid for custom credential types.

  3. In the Input Configuration field, specify an input schema that defines a set of ordered fields for that type. The format can be in YAML or JSON:

    YAML

    fields:
      - type: string
        id: username
        label: Username
      - type: string
        id: password
        label: Password
        secret: true
    required:
      - username
      - password

    View more YAML examples at the YAML page.

    JSON

    {
    "fields": [
      {
      "type": "string",
      "id": "username",
      "label": "Username"
      },
      {
      "secret": true,
      "type": "string",
      "id": "password",
      "label": "Password"
       }
      ],
     "required": ["username", "password"]
    }

    View more JSON examples at The JSON website.

    The following configuration in JSON format shows each field and how they are used:

    {
      "fields": [{
        "id": "api_token",    # required - a unique name used to reference the field value
    
        "label": "API Token", # required - a unique label for the field
    
        "help_text": "User-facing short text describing the field.",
    
        "type": ("string" | "boolean")   # defaults to 'string'
    
        "choices": ["A", "B", "C"]   # (only applicable to `type=string`)
    
        "format": "ssh_private_key"  # optional, can be used to enforce data format validity
                                     for SSH private key data (only applicable to `type=string`)
    
        "secret": true,       # if true, the field value will be encrypted
    
        "multiline": false    # if true, the field should be rendered as multi-line for input entry
                              # (only applicable to `type=string`)
    },{
        # field 2...
    },{
        # field 3...
    }],
    
    "required": ["api_token"]   # optional; one or more fields can be marked as required
    },

    When type=string, fields can optionally specify multiple choice options:

    {
      "fields": [{
          "id": "api_token",    # required - a unique name used to reference the field value
          "label": "API Token", # required - a unique label for the field
          "type": "string",
          "choices": ["A", "B", "C"]
      }]
    },
  4. In the Injector Configuration field, enter environment variables or extra variables that specify the values a credential type can inject. The format can be in YAML or JSON (see examples in the previous step).

    The following configuration in JSON format shows each field and how they are used:

    {
      "file": {
          "template": "[mycloud]\ntoken={{ api_token }}"
      },
      "env": {
          "THIRD_PARTY_CLOUD_API_TOKEN": "{{ api_token }}"
      },
      "extra_vars": {
          "some_extra_var": "{{ username }}:{{ password }}"
      }
    }

    Credential Types can also generate temporary files to support .ini files or certificate or key data:

    {
      "file": {
          "template": "[mycloud]\ntoken={{ api_token }}"
      },
      "env": {
          "MY_CLOUD_INI_FILE": "{{ tower.filename }}"
      }
    }

    In this example, automation controller writes a temporary file that has:

    [mycloud]\ntoken=SOME_TOKEN_VALUE

    The absolute file path to the generated file is stored in an environment variable named MY_CLOUD_INI_FILE.

    The following is an example of referencing many files in a custom credential template:

    Inputs

    {
      "fields": [{
        "id": "cert",
        "label": "Certificate",
        "type": "string"
      },{
        "id": "key",
        "label": "Key",
        "type": "string"
      }]
    }

    Injectors

    {
      "file": {
        "template.cert_file": "[mycert]\n{{ cert }}",
        "template.key_file": "[mykey]\n{{ key }}"
      },
      "env": {
        "MY_CERT_INI_FILE": "{{ tower.filename.cert_file }}",
        "MY_KEY_INI_FILE": "{{ tower.filename.key_file }}"
      }
    }
  5. Click Save.

    Your newly created credential type is displayed on the list of credential types:

    New credential type

  6. Click the Edit icon to modify the credential type options.

    Note

    In the Edit screen, you can modify the details or delete the credential type. If the Delete option is disabled, the credential type is in use by one or more credentials, and you must remove it from all credentials that use it before you can delete it.

Verification

  • Verify that the newly created credential type can be selected from the Credential Type selection window when creating a new credential:

Verify new credential type

Additional resources

For information about how to create a new credential, see Creating a credential.

Chapter 12. Secret management system

Users and system administrators upload machine and cloud credentials so that automation can access machines and external services on their behalf. By default, sensitive credential values such as SSH passwords, SSH private keys, and API tokens for cloud services are stored in the database after being encrypted.

With external credentials backed by credential plugins, you can map credential fields (such as a password or an SSH Private key) to values stored in a secret management system instead of providing them to automation controller directly.

Automation controller provides a secret management system that includes integrations for the following:

  • AWS Secrets Manager Lookup
  • Centrify Vault Credential Provider Lookup
  • CyberArk Central Credential Provider Lookup (CCP)
  • CyberArk Conjur Secrets Manager Lookup
  • HashiCorp Vault Key-Value Store (KV)
  • HashiCorp Vault SSH Secrets Engine
  • Microsoft Azure Key Management System (KMS)
  • Thycotic DevOps Secrets Vault
  • Thycotic Secret Server

These external secret values are fetched before running a playbook that needs them.

Additional resources

For more information about specifying secret management system credentials in the user interface, see Credentials.

12.1. Configuring and linking secret lookups

When pulling a secret from a third-party system, you are linking credential fields to external systems. To link a credential field to a value stored in an external system, select the external credential corresponding to that system and provide metadata to look up the required value. The metadata input fields are part of the external credential type definition of the source credential.

Automation controller provides a credential plugin interface that enables developers, integrators, system administrators, and power-users to add new external credential types, extending automation controller to support additional secret management systems.

Use the following procedure to configure and use each of the supported third-party secret management systems with automation controller.

Procedure

  1. Create an external credential for authenticating with the secret management system. At a minimum, give a name for the external credential and select one of the secret management system credential types described later in this chapter for the Credential Type field.

  2. For any of the fields following the Type Details area that you want to link to the external credential, click the key (Link) icon in the input field to link one or more input fields to the external credential, along with metadata for locating the secret in the external system.

    Type details

  3. Select the input source to use to retrieve your secret information.

    Credentials link

  4. Select the credential you want to link to, and click Next. This takes you to the Metadata tab of the input source. This example shows the Metadata prompt for HashiVault Secret Lookup. Metadata is specific to the input source you select.

    For more information, see the Metadata for credential input sources table.

    Metadata

  5. Click Test to verify connection to the secret management system. If the lookup is unsuccessful, an error message similar to the following displays:

    Exception

  6. Click OK. You return to the Details screen of your target credential.
  7. Repeat these steps, starting with Step 3 to complete the remaining input fields for the target credential. By linking the information in this manner, automation controller retrieves sensitive information, such as username, password, keys, certificates, and tokens from the third-party management systems and populates the remaining fields of the target credential form with that data.
  8. If necessary, supply any information manually for those fields that do not use linking as a way of retrieving sensitive information. For more information about each of the fields, see the appropriate Credential Types.
  9. Click Save.

Additional resources

For more information, see the development documents for Credential plugins.

12.1.1. Metadata for credential input sources

The following tables describe the information required on the Metadata tab for each input source.

AWS Secrets Manager Lookup

  • AWS Secrets Manager Region (required): The region where the secrets manager is located.
  • AWS Secret Name (required): Specify the AWS secret name that was generated by the AWS access key.

Centrify Vault Credential Provider Lookup

  • Account name (required): Name of the system account or domain associated with Centrify Vault.
  • System Name: Specify the name used by the Centrify portal.

CyberArk Central Credential Provider Lookup

  • Object Query (required): Lookup query for the object.
  • Object Query Format: Select Exact for a specific secret name, or Regexp for a secret that has a dynamically generated name.
  • Object Property: Specifies the name of the property to return, for example, UserName or Address, rather than the default of Content.
  • Reason: If required for the object’s policy, supply a reason for checking out the secret, as CyberArk logs those.

CyberArk Conjur Secrets Lookup

  • Secret Identifier: The identifier for the secret.
  • Secret Version: Specify a version of the secret if necessary; otherwise, leave it empty to use the latest version.

HashiVault Secret Lookup

  • Name of Secret Backend: Specify the name of the KV backend to use. Leave it blank to use the first path segment of the Path to Secret field instead.
  • Path to Secret (required): Specify the path to where the secret information is stored, for example, /path/username.
  • Key Name (required): Specify the name of the key to look up the secret information.
  • Secret Version (V2 Only): Specify a version if necessary; otherwise, leave it empty to use the latest version.

HashiCorp Signed SSH

  • Unsigned Public Key (required): Specify the public key of the certificate you want to have signed. It must be present in the authorized keys file of the target hosts.
  • Path to Secret (required): Specify the path to where the secret information is stored, for example, /path/username.
  • Role Name (required): A role is a collection of SSH settings and parameters that are stored in HashiCorp Vault. Typically, you can specify several roles with different privileges or timeouts. For example, you could have a role that is permitted to get a certificate signed for root, and other, less privileged roles.
  • Valid Principals: Specify a user (or users), other than the default, for whom you are requesting that vault authorize the certificate for the stored key. HashiCorp Vault has a default user for whom it signs, for example, ec2-user.

Microsoft Azure KMS

  • Secret Name (required): The name of the secret as it is referenced in Microsoft Azure’s Key vault app.
  • Secret Version: Specify a version of the secret if necessary; otherwise, leave it empty to use the latest version.

Thycotic DevOps Secrets Vault

  • Secret Path (required): Specify the path to where the secret information is stored, for example, /path/username.

Thycotic Secret Server

  • Secret ID (required): The identifier for the secret.
  • Secret Field: Specify the field to be used from the secret.

12.1.2. AWS Secrets Manager Lookup

This plugin enables Amazon Web Services to be used as a credential input source to pull secrets from the AWS Secrets Manager. AWS Secrets Manager provides a service similar to Microsoft Azure Key Vault, and the AWS collection provides a lookup plugin for it.

When AWS Secrets Manager lookup is selected for Credential Type, provide the following metadata to configure your lookup:

  • AWS Access Key (required): provide the access key used for communicating with the AWS key management system.
  • AWS Secret Key (required): provide the secret key as obtained from the AWS IAM console.

The following is an example of a configured AWS Secrets Manager credential.

Create AWS secret

12.1.3. Centrify Vault Credential Provider Lookup

You need the Centrify Vault web service running to store secrets for this integration to work. When you select Centrify Vault Credential Provider Lookup for Credential Type, give the following metadata to configure your lookup:

  • Centrify Tenant URL (required): give the URL used for communicating with Centrify’s secret management system
  • Centrify API User (required): give the username
  • Centrify API Password (required): give the password
  • OAuth2 Application ID: specify the identifier associated with the OAuth2 client
  • OAuth2 Scope: specify the scope of the OAuth2 client

12.1.4. CyberArk Central Credential Provider (CCP) Lookup

The CyberArk Central Credential Provider web service must be running to store secrets for this integration to work. When you select CyberArk Central Credential Provider Lookup for Credential Type, give the following metadata to configure your lookup:

  • CyberArk CCP URL (required): give the URL used for communicating with CyberArk CCP’s secret management system. It must include the URL scheme such as http or https.
  • Optional: Web Service ID: specify the identifier for the web service. Leaving this blank defaults to AIMWebService.
  • Application ID (required): specify the identifier given by CyberArk CCP services.
  • Client Key: paste the client key if provided by CyberArk.
  • Client Certificate: include the BEGIN CERTIFICATE and END CERTIFICATE lines when pasting the certificate, if provided by CyberArk.
  • Verify SSL Certificates: this option is only available when the URL uses HTTPS. Check this option to verify that the server’s SSL/TLS certificate is valid and trusted. For environments that use internal or private CAs, leave this option unchecked to disable verification.

12.1.5. CyberArk Conjur Secrets Manager Lookup

With a Conjur Cloud tenant available to target, configure the CyberArk Conjur Secrets Lookup external management system credential plugin.

When you select CyberArk Conjur Secrets Manager Lookup for Credential Type, give the following metadata to configure your lookup:

  • Conjur URL (required): provide the URL used for communicating with CyberArk Conjur’s secret management system. This must include the URL scheme, such as http or https.
  • API Key (required): provide the key given by your Conjur admin
  • Account (required): the organization’s account name
  • Username (required): the specific authenticated user for this service
  • Public Key Certificate: include the BEGIN CERTIFICATE and END CERTIFICATE lines when pasting the public key, if provided by CyberArk

The following is an example of a configured CyberArk Conjur credential.

CyberArk Conjur credential

12.1.6. HashiCorp Vault Secret Lookup

When you select HashiCorp Vault Secret Lookup for Credential Type, give the following metadata to configure your lookup:

  • Server URL (required): give the URL used for communicating with HashiCorp Vault’s secret management system.
  • Token: specify the access token used to authenticate HashiCorp’s server.
  • CA Certificate: specify the CA certificate used to verify HashiCorp’s server.
  • Approle Role_ID: specify the ID if using AppRole for authentication.
  • Approle Secret_ID: specify the corresponding secret ID for Approle authentication.
  • Client Certificate: specify a PEM-encoded client certificate when using the TLS authentication method, including any required intermediate certificates expected by Hashicorp Vault.
  • Client Certificate Key: specify a PEM-encoded certificate private key when using the TLS authentication method.
  • TLS Authentication Role: specify the role or certificate name in Hashicorp Vault that corresponds to your client certificate when using the TLS authentication method. If it is not provided, Hashicorp Vault attempts to match the certificate automatically.
  • Namespace name: specify the Namespace name (Hashicorp Vault enterprise only).
  • Kubernetes role: specify the role name when using Kubernetes authentication.
  • Username: enter the username of the user to be used to authenticate this service.
  • Password: enter the password associated with the user to be used to authenticate this service.
  • Path to Auth: specify a path if other than the default path of /approle.
  • API Version (required): select v1 for static lookups and v2 for versioned lookups.

LDAP authentication requires LDAP to be configured in HashiCorp’s Vault UI and a policy added to the user. Cubbyhole is the name of the default secret mount. If you have proper permissions, you can create other mounts and write key values to those.
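
For example, the following HashiCorp Vault CLI sketch creates an additional KV version 2 mount and writes a value that the lookup can then retrieve; the mount, path, and key names are hypothetical:

vault secrets enable -path=my-kv kv-v2
vault kv put my-kv/path/username key_name=some_value

In this sketch, the corresponding lookup metadata would be my-kv for Name of Secret Backend, /path/username for Path to Secret, and key_name for Key Name.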

To test the lookup, create another credential that uses Hashicorp Vault lookup.

Additional resources

For more detail about the LDAP authentication method and its fields, see the Vault documentation for LDAP auth method.

For more information about Approle authentication method and its fields, see the Vault documentation for AppRole auth method.

For more information about the userpass authentication method and its fields, see the Vault documentation for userpass auth method.

For more information about the Kubernetes auth method and its fields, see the Vault documentation for Kubernetes auth method.

For more information about the TLS certificate auth method and its fields, see the Vault documentation for TLS certificates auth method.

12.1.7. HashiCorp Vault Signed SSH

When you select HashiCorp Vault Signed SSH for Credential Type, give the following metadata to configure your lookup:

  • Server URL (required): give the URL used for communicating with HashiCorp Signed SSH’s secret management system.
  • Token: specify the access token used to authenticate HashiCorp’s server.
  • CA Certificate: specify the CA certificate used to verify HashiCorp’s server.
  • Approle Role_ID: specify the ID for Approle authentication.
  • Approle Secret_ID: specify the corresponding secret ID for Approle authentication.
  • Client Certificate: specify a PEM-encoded client certificate when using the TLS authentication method, including any required intermediate certificates expected by Hashicorp Vault.
  • Client Certificate Key: specify a PEM-encoded certificate private key when using the TLS authentication method.
  • TLS Authentication Role: specify the role or certificate name in Hashicorp Vault that corresponds to your client certificate when using the TLS authentication method. If it is not provided, Hashicorp Vault attempts to match the certificate automatically.
  • Namespace name: specify the Namespace name (Hashicorp Vault enterprise only).
  • Kubernetes role: specify the role name when using Kubernetes authentication.
  • Username: enter the username of the user to be used to authenticate this service.
  • Password: enter the password associated with the user to be used to authenticate this service.
  • Path to Auth: specify a path if other than the default path of /approle.

Additional resources

For more information about Approle authentication method and its fields, see the Vault documentation for Approle Auth Method.

For more information about the Kubernetes authentication method and its fields, see the Vault documentation for Kubernetes auth method.

For more information about the TLS certificate auth method and its fields, see the Vault documentation for TLS certificates auth method.

12.1.8. Microsoft Azure Key Vault

When you select Microsoft Azure Key Vault for Credential Type, give the following metadata to configure your lookup:

  • Vault URL (DNS Name) (required): give the URL used for communicating with Microsoft Azure’s key management system
  • Client ID (required): give the identifier as obtained by the Microsoft Azure Active Directory
  • Client Secret (required): give the secret as obtained by the Microsoft Azure Active Directory
  • Tenant ID (required): give the unique identifier that is associated with a Microsoft Azure Active Directory instance within an Azure subscription
  • Cloud Environment: select the applicable cloud environment to apply

12.1.9. Thycotic DevOps Secrets Vault

When you select Thycotic DevOps Secrets Vault for Credential Type, give the following metadata to configure your lookup:

  • Tenant (required): give the URL used for communicating with Thycotic’s secret management system
  • Top-level Domain (TLD): give the top-level domain designation, for example .com, .edu, or .org, associated with the secret vault you want to integrate
  • Client ID (required): give the identifier as obtained by the Thycotic secret management system
  • Client Secret (required): give the secret as obtained by the Thycotic secret management system

12.1.10. Thycotic Secret Server

When you select Thycotic Secret Server for Credential Type, give the following metadata to configure your lookup:

  • Secret Server URL (required): give the URL used for communicating with the Thycotic Secret Server management system
  • Username (required): specify the authenticated user for this service
  • Password (required): give the password associated with the user

Chapter 13. Applications

Create and configure token-based authentication for external applications such as ServiceNow and Jenkins. With token-based authentication, external applications can easily integrate with automation controller.

With OAuth 2 you can use tokens to share data with an application without disclosing login information. You can configure these tokens as read-only.

You can create an application that is representative of the external application you are integrating with, then use it to create tokens for the application to use on behalf of its users.

Associating these tokens with an application resource enables you to manage all tokens issued for a particular application. By separating token issuance by application, you can revoke all of an application's tokens without having to revoke every token in the system.

13.1. Getting Started with Applications

From the navigation panel, select Administration → Applications. The Applications page displays a searchable list of all available applications currently managed by automation controller, which can be sorted by Name.

Applications- with example apps

If no applications exist, you are requested to add applications.

Add applications

13.2. Creating a new application

When integrating an external web application with automation controller, the web application might need to create OAuth2 tokens on behalf of users of the web application. Creating an application with the Authorization Code grant type is the preferred way to do this for the following reasons:

  • External applications can obtain a token for users, using their credentials.
  • Tokens issued for a particular application are compartmentalized, so they can be managed easily, for example, by revoking all tokens associated with that application.

Procedure

  1. From the navigation panel, select Administration → Applications.
  2. Click Add. The Create New Application page opens.

    Create application

  3. Enter the following details:

    • Name (required): give a name for the application you want to create
    • Optional: Description: give a short description for your application
    • Organization (required): give an organization with which this application is associated
    • Authorization Grant Type (required): select one of the grant types to use for the user to get tokens for this application. For more information, see Application Functions in the Applications section of the Automation controller Administration Guide.
    • Redirect URIs: give a list of allowed URIs, separated by spaces. You need this if you specified the grant type as Authorization code.
    • Client Type (required): select the level of security of the client device.
  4. Click Save, or click Cancel to abandon your changes.

    The client ID displays in a window.

13.2.1. Adding tokens

You can view a list of users that have tokens to access an application by selecting the Tokens tab of the Application details page.

Configure authentication tokens for your users. You can select the application to which the token is associated and the level of access that the token has.

Note

You can only create OAuth 2 Tokens for your user through the API or UI, which means you can only access your own user profile to configure or view your tokens.
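
For example, a user can create a token for their own account through the API; the hostname, login, and application ID in this sketch are hypothetical:

curl -u alice:password -X POST \
  -H "Content-Type: application/json" \
  -d '{"description": "Jenkins integration", "application": 2, "scope": "read"}' \
  https://controller.example.org/api/v2/tokens/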

Procedure

  1. From the navigation panel, select Access → Users.
  2. Select the user for which you want to configure the OAuth 2 tokens.
  3. Select the Tokens tab on the user’s profile.

    When no tokens are present, the Tokens screen prompts you to add them.

  4. Click Add to open the Create Token window.
  5. Enter the following details:

    • Application: enter the name of the application with which you want to associate your token. Alternatively, you can search for it by clicking the Search icon. This opens a separate window that enables you to choose from the available options. Use the Search bar to filter by name if the list is extensive. Leave this field blank if you want to create a Personal Access Token (PAT) that is not linked to any application.
    • Optional: Description: provide a short description for your token.
    • Scope (required): specify the level of access you want this token to have.
  6. Click Save, or click Cancel to abandon your changes.

    After you save the token, the newly created token for the user is displayed with the token information and when it expires.

    Token information

  7. To view the application to which the token is associated and the token expiration date, go to the token list view.

    Token assignment

Verification

To verify that the application now shows the user with the appropriate token, open the Tokens tab of the Applications window.

Additional resources

If you are a system administrator and have to create or remove tokens for other users, see the revoke and create commands in the Token and session management section of the Automation controller Administration Guide.

Chapter 14. Execution environments

Unlike legacy virtual environments, execution environments are container images that make it possible to incorporate system-level dependencies and collection-based content. Each execution environment enables you to have a customized image to run jobs and has only what is necessary when running the job.

14.1. Building an execution environment

If your Ansible content depends on custom virtual environments instead of a default environment, you must take additional steps: you must install packages on each node, ensure that they interact well with other software installed on the host system, and keep them synchronized.

To simplify this process, you can build container images that serve as Ansible Control nodes. These container images are referred to as automation execution environments, which you can create with ansible-builder. Ansible-runner can then make use of those images.

14.1.1. Install ansible-builder

To build images, you must have Podman or Docker installed, along with the ansible-builder Python package.

The --container-runtime option must correspond to the Podman or Docker executable you intend to use.
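
For example, assuming Podman is installed and ansible-builder is available from PyPI, a minimal install-and-build sequence looks like this:

pip3 install ansible-builder
ansible-builder build --container-runtime podman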

For more information, see Quickstart for Ansible Builder, or Creating and consuming execution environments.

14.1.2. Content needed for an execution environment

Ansible-builder is used to create an execution environment.

An execution environment must contain:

  • Ansible
  • Ansible Runner
  • Ansible Collections
  • Python and system dependencies of:

    • modules or plugins in collections
    • content in ansible-base
    • custom user needs

Building a new execution environment involves a definition that specifies which content you want to include in your execution environment, such as collections, Python requirements, and system-level packages. The definition must be a .yml file.

The output generated when migrating to execution environments includes some of the required data, which can be piped to a file or pasted into this definition file.

Additional resources

For more information, see Migrate legacy venvs to execution environments. If you did not migrate from a virtual environment, you can create a definition file with the required data described in the Execution Environment Setup Reference.

Collection developers can declare requirements for their content by providing the appropriate metadata.

For more information, see Dependencies.

14.1.3. Example YAML file to build an image

The ansible-builder build command takes an execution environment definition as an input. It outputs the build context necessary for building an execution environment image, and then builds that image. The image can be re-built with the build context elsewhere, and produces the same result. By default, the builder searches for a file named execution-environment.yml in the current directory.

The following example execution-environment.yml file can be used as a starting point:

---
version: 3
dependencies:
  galaxy: requirements.yml

The content of requirements.yml:

---
collections:
  - name: awx.awx

To build an execution environment using the preceding files, run the following command:

ansible-builder build
...
STEP 7: COMMIT my-awx-ee
--> 09c930f5f6a
09c930f5f6ac329b7ddb321b144a029dbbfcc83bdfc77103968b7f6cdfc7bea2
Complete! The build context can be found at: context

In addition to producing a ready-to-use container image, the build preserves the build context. The context can be used to rebuild the image at a different time or location with the tools of your choice, such as docker build or podman build.
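
For example, a minimal sketch of rebuilding the image from the preserved context with Podman (the my-awx-ee tag follows the earlier output; the context path is the default):

podman build -f context/Containerfile -t my-awx-ee context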

Additional resources

For additional information about the ansible-builder build command, see Ansible’s CLI Usage documentation.

14.1.4. Execution environment mount options

Rebuilding an execution environment is one way to add certificates, but inheriting certificates from the host provides a more convenient solution. For VM-based installations, automation controller automatically mounts the system truststore in the execution environment when jobs run.

You can customize execution environment mount options and mount paths in the Paths to expose to isolated jobs field of the Job Settings page, where Podman-style volume mount syntax is supported.

Additional resources

For more information, see the Podman documentation.

14.1.4.1. Troubleshooting execution environment mount options

In some cases, if the /etc/ssh/* files were added to the execution environment image as part of customizing an execution environment, an SSH error can occur. For example, exposing the /etc/ssh/ssh_config.d:/etc/ssh/ssh_config.d:O path enables the container to be mounted, but the ownership permissions are not mapped correctly.

Use the following procedure if you encounter this error, or have upgraded from an older version of automation controller:

Procedure

  1. Change the container ownership on the mounted volume to root.
  2. From the navigation panel, select Settings.
  3. Select Jobs settings from the Jobs option.
  4. Expose the path in the Paths to expose to isolated jobs field, using the current example:

    Paths

    Note

    The :O option is only supported for directories. Be as detailed as possible, especially when specifying system paths. Mounting /etc or /usr directly makes troubleshooting difficult.

    This informs Podman to run a command similar to the following example, where the configuration is mounted and the ssh command works as expected:

    podman run -v /ssh_config:/etc/ssh/ssh_config.d/:O ...

To expose isolated paths in OpenShift or Kubernetes containers as HostPath, use the following configuration:

Expose isolated jobs

Set Expose host paths for Container Groups to On to enable it.

When the playbook runs, the resulting Pod specification is similar to the following example. Note the details of the volumeMounts and volumes sections.

Pod specification
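
As a rough sketch, those sections look similar to the following (the volume name and host path are illustrative assumptions, not taken from the product):

volumeMounts:
  - mountPath: /etc/pki/ca-trust
    name: volume-0
    readOnly: true
volumes:
  - hostPath:
      path: /etc/pki/ca-trust
      type: Directory
    name: volume-0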

14.1.4.2. Mounting the directory in the execution node to the execution environment container

With Ansible Automation Platform 2.1.2, only O and z options were available. Since Ansible Automation Platform 2.2, further options such as rw are available. This is useful when using NFS storage.

Procedure

  1. From the navigation panel, select Settings.
  2. Select Jobs settings from the Jobs option.
  3. Edit the Paths to expose to isolated jobs field:

    • Enter a list of paths to mount as volumes from the execution node or the hybrid node into the container.
    • Enter one path per line.
    • The supported format is HOST-DIR[:CONTAINER-DIR[:OPTIONS]]. The allowed options are z, O, ro, and rw.

      Example

      [
        "/var/lib/awx/.ssh:/root/.ssh:O"
      ]

    • For the rw option, configure the SELinux label correctly. For example, to mount the /foo directory, run the following commands:

      sudo su
      mkdir /foo
      chmod 777 /foo
      semanage fcontext -a -t container_file_t "/foo(/.*)?"
      restorecon -vvFR /foo

At a minimum, the awx user must be permitted to read and write in this directory. In this example, the permissions are set to 777.
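
With the SELinux context in place, a corresponding entry in the Paths to expose to isolated jobs field might look like the following (using the /foo directory from this example):

[
  "/foo:/foo:rw"
]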

Additional resources

For more information about mount volumes, see the --volume option of the podman-run(1) section of the Podman documentation.

14.2. Adding an execution environment to a job template

Prerequisites

  • An execution environment must have been created using ansible-builder, as described in Building an execution environment. When an execution environment has been created, you can use it to run jobs. Use the automation controller UI to specify the execution environment to use in your job templates.
  • Depending on whether an execution environment is made available for global use or tied to an organization, you must have the appropriate level of administrator privileges to use it in a job. To run jobs with an execution environment that is tied to an organization, you must be an organization administrator.
  • Before running a job or job template that uses an execution environment that has a credential assigned to it, ensure that the credential contains a username, host, and password.

Procedure

  1. From the navigation panel, select AdministrationExecution Environments.
  2. Click Add to add an execution environment.
  3. Enter the appropriate details into the following fields:

    • Name (required): Enter a name for the execution environment.
    • Image (required): Enter the image name. The image name requires its full location (repository), the registry, image name, and version tag, in the format repo/project/image-name:tag, for example, quay.io/ansible/awx-ee:latest.
    • Optional: Pull: Choose the type of pull when running jobs:

      • Always pull container before running: Pulls the latest image file for the container.
      • Only pull the image if not present before running: Pulls the image only if it is not already present locally.
      • Never pull container before running: Never pull the latest version of the container image.

        Note

        If you do not set a type for pull, the value defaults to Only pull the image if not present before running.

    • Optional: Description: Provide a description for the execution environment.
    • Optional: Organization: Assign the organization to specifically use this execution environment. To make the execution environment available for use across multiple organizations, leave this field blank.
    • Registry credential: If the image has a protected container registry, provide the credential to access it.

      New execution environment

  4. Click Save.

    Your newly added execution environment is ready to be used in a job template.

  5. To add an execution environment to a job template, specify it in the Execution Environment field of the job template, as shown in the following example:

Execution Environment added

When you have added an execution environment to a job template, those templates are listed in the Templates tab of the execution environment:

Execution environment templates

Chapter 15. Execution Environment Setup Reference

This section contains reference information associated with the definition of an execution environment. You define the content of your execution environment in a YAML file. By default, this file is called execution-environment.yml. This file tells Ansible Builder how to create the build instruction file (Containerfile for Podman, Dockerfile for Docker) and build context for your container image.

Note

The definition schema for Ansible Builder 3.x is documented here. If you are running an older version of Ansible Builder, you need an older schema version. For more information, see older versions of this documentation. We recommend using version 3, which offers substantially more configurable options and functionality than previous versions.

15.1. Execution environment definition example

You must create a definition file to build an image for an execution environment. The file is in YAML format.

You must specify the version of Ansible Builder in the definition file. The default version is 1.

The following definition file uses Ansible Builder version 3:

version: 3
build_arg_defaults:
  ANSIBLE_GALAXY_CLI_COLLECTION_OPTS: '--pre'
dependencies:
  galaxy: requirements.yml
  python:
    - six
    - psutil
  system: bindep.txt
images:
  base_image:
    name: registry.redhat.io/ansible-automation-platform-24/ee-minimal-rhel8:latest
additional_build_files:
    - src: files/ansible.cfg
      dest: configs
additional_build_steps:
  prepend_galaxy:
    - ADD _build/configs/ansible.cfg /home/runner/.ansible.cfg
  prepend_final: |
    RUN whoami
    RUN cat /etc/os-release
  append_final:
    - RUN echo This is a post-install command!
    - RUN ls -la /etc

15.2. Configuration options

Use the following configuration YAML keys in your definition file.

The Ansible Builder 3.x execution environment definition file accepts seven top-level sections:

15.2.1. additional_build_files

Specifies the files to add to the build context directory. These can then be referenced or copied by additional_build_steps during any build stage.

The format is a list of dictionaries. Each list item must be a dictionary containing the following required keys:

src

Specifies the source files to copy into the build context directory.

This can be an absolute path, for example, /home/user/.ansible.cfg, or a path that is relative to the definition file. Relative paths can be a glob expression matching one or more files, for example, files/*.cfg. Note that an absolute path must not include a regular expression. If src is a directory, the entire contents of that directory are copied to dest.

dest

Specifies a subdirectory path underneath the _build subdirectory of the build context directory that contains the source files, for example, files/configs.

This cannot be an absolute path or contain .. within the path. This directory is created for you if it does not exist.

Note

When using an ansible.cfg file to pass a token and other settings for a private account to an automation hub server, listing the configuration file path here as a string enables it to be included as a build argument in the initial phase of the build.

15.2.2. additional_build_steps

The build steps specify custom build commands for any build phase. These commands are inserted directly into the build instruction file for the container runtime, for example, Containerfile or Dockerfile. The commands must conform to any rules required by the containerization tool.

You can add build steps before or after any stage of the image creation process. For example, if you need git to be installed before you install your dependencies, you can add a build step at the end of the base build stage.
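
For example, a minimal sketch that installs git at the end of the base build stage (assuming the base image uses dnf):

additional_build_steps:
  append_base:
    - RUN dnf install -y git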

The following are the valid keys. Each supports either a multi-line string, or a list of strings.

append_base

Commands to insert after building of the base image.

append_builder

Commands to insert after building of the builder image.

append_final

Commands to insert after building of the final image.

append_galaxy

Commands to insert after building of the galaxy image.

prepend_base

Commands to insert before building of the base image.

prepend_builder

Commands to insert before building of the builder image.

prepend_final

Commands to insert before building of the final image.

prepend_galaxy

Commands to insert before building of the galaxy image.

15.2.3. build_arg_defaults

This specifies the default values for build arguments as a dictionary.

This is an alternative to using the --build-arg CLI flag.

Ansible Builder uses the following build arguments:

ANSIBLE_GALAXY_CLI_COLLECTION_OPTS

Enables the user to pass the --pre flag and other flags to enable the installation of pre-release collections.

ANSIBLE_GALAXY_CLI_ROLE_OPTS

This enables the user to pass any flags, such as --no-deps, to the role installation.

PKGMGR_PRESERVE_CACHE

This controls how often the package manager cache is cleared during the image build process.

If this value is not set, which is the default, the cache is cleared frequently. If the value is always, the cache is never cleared. Any other value forces the cache to be cleared only after the system dependencies are installed in the final build stage.

Ansible Builder hard-codes values given inside of build_arg_defaults into the build instruction file, so they persist if you run your container build manually.

If you specify the same variable in the definition file and at the command line with the --build-arg CLI flag, the CLI value overrides the value in the definition.
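
For example, a minimal sketch that sets defaults for both Galaxy CLI arguments (the flag values are illustrative):

build_arg_defaults:
  ANSIBLE_GALAXY_CLI_COLLECTION_OPTS: '--pre'
  ANSIBLE_GALAXY_CLI_ROLE_OPTS: '--no-deps'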

15.2.4. Dependencies

Specifies dependencies to install into the final image, including ansible-core, ansible-runner, Python packages, system packages, and collections. Ansible Builder automatically installs dependencies for any Ansible collections you install.

In general, you can use standard syntax to constrain package versions. Use the same syntax you would pass to dnf, pip, ansible-galaxy, or any other package management utility. You can also define your packages or collections in separate files and reference those files in the dependencies section of your definition file.

The following keys are valid:

ansible_core

The version of the ansible-core Python package to be installed.

This value is a dictionary with a single key, package_pip. The package_pip value is passed directly to pip for installation and can be in any format that pip supports. The following are some example values:

ansible_core:
    package_pip: ansible-core
ansible_core:
    package_pip: ansible-core==2.14.3
ansible_core:
    package_pip: https://github.com/example_user/ansible/archive/refs/heads/ansible.tar.gz

ansible_runner

The version of the Ansible Runner Python package to be installed.

This value is a dictionary with a single key, package_pip. The package_pip value is passed directly to pip for installation and can be in any format that pip supports. The following are some example values:

ansible_runner:
    package_pip: ansible-runner
ansible_runner:
    package_pip: ansible-runner==2.3.2
ansible_runner:
    package_pip: https://github.com/example_user/ansible-runner/archive/refs/heads/ansible-runner.tar.gz

galaxy

Collections to be installed from Ansible Galaxy.

This can be a filename, a dictionary, or a multi-line string representation of an Ansible Galaxy requirements.yml file. For more information about the requirements file format, see the Galaxy User Guide.
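
For example, a minimal sketch of the multi-line string form (the collection name is illustrative):

dependencies:
  galaxy: |
    collections:
      - name: community.general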

python

The Python installation requirements.

This can be a filename, or a list of requirements. Ansible Builder combines all the Python requirements files from all collections into a single file using the requirements-parser library.

This library supports complex syntax, including references to other files. If many collections require the same package name, Ansible Builder combines them into a single entry and combines the constraints.

Ansible Builder excludes some packages from the combined file of Python dependencies even if a collection lists them as dependencies. These include test packages and packages that provide Ansible itself. The full list is available under EXCLUDE_REQUIREMENTS in src/ansible_builder/_target_scripts/introspect.py.

If you need to include one of these excluded package names, use the --user-pip option of the introspect command to list it in the user requirements file.

Packages supplied this way are not processed against the list of excluded Python packages.

python_interpreter

A dictionary that defines the Python system package name to be installed by dnf (package_system) or a path to the Python interpreter to be used (python_path).

system

The system packages to be installed, in bindep format. This can be a filename or a list of requirements.

For more information about bindep, see the OpenDev documentation.

For system packages, use the bindep format to specify cross-platform requirements, so that they can be installed by whichever package management system the execution environment uses. Collections must specify necessary requirements for [platform:rpm]. Ansible Builder combines system package entries from multiple collections into a single file. Only requirements with no profiles (runtime requirements) are installed to the image. Duplicate entries from different collections are consolidated in the combined file.
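
For example, a minimal bindep.txt sketch that restricts entries to RPM-based platforms (the package names are illustrative):

git [platform:rpm]
libxml2-devel [platform:rpm]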

The following example uses filenames that contain the various dependencies:

dependencies:
  python: requirements.txt
  system: bindep.txt
  galaxy: requirements.yml
  ansible_core:
      package_pip: ansible-core==2.14.2
  ansible_runner:
      package_pip: ansible-runner==2.3.1
  python_interpreter:
      package_system: "python310"
      python_path: "/usr/bin/python3.10"

This example uses inline values:

dependencies:
  python:
    - pywinrm
  system:
    - iputils [platform:rpm]
  galaxy:
    collections:
      - name: community.windows
      - name: ansible.utils
        version: 2.10.1
  ansible_core:
      package_pip: ansible-core==2.14.2
  ansible_runner:
      package_pip: ansible-runner==2.3.1
  python_interpreter:
      package_system: "python310"
      python_path: "/usr/bin/python3.10"
Note

If any of these dependency files (requirements.txt, bindep.txt, and requirements.yml) are in the build_ignore of the collection, the build fails.

Collection maintainers can verify that ansible-builder recognizes the requirements they expect by using the introspect command:

ansible-builder introspect --sanitize ~/.ansible/collections/

The --sanitize option reviews all of the collection requirements and removes duplicates. It also removes any Python requirements that are normally excluded (see python dependencies).

Use the -v3 option to introspect to see logging messages about requirements that are being excluded.

15.2.5. images

Specifies the base image to be used. At a minimum you must specify a source, image, and tag for the base image. The base image provides the operating system and can also provide some packages. Use the standard host/namespace/container:tag syntax to specify images. You can use Podman or Docker shortcut syntax instead, but the full definition is more reliable and portable.

Valid keys for this section are:

base_image

A dictionary defining the parent image for the execution environment.

A name key must be supplied with the container image to use. Use the signature_original_name key if the image is mirrored within your repository, but signed with the original image’s signature key.
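
For example, a minimal sketch for an image mirrored to a private registry (the mirror registry name is an assumption):

images:
  base_image:
    name: registry.example.com/mirror/ee-minimal-rhel8:latest
    signature_original_name: registry.redhat.io/ansible-automation-platform-24/ee-minimal-rhel8:latest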

15.2.6. Image verification

You can verify signed container images if you are using the podman container runtime.

Set the container-policy CLI option to control how this data is used in relation to a Podman policy.json file for container image signature validation.

  • ignore_all policy: Generate a policy.json file in the build context directory <context> where no signature validation is performed.
  • system policy: Signature validation is performed using pre-existing policy.json files in standard system locations. ansible-builder assumes no responsibility for the content within these files, and the user has complete control over the content.
  • signature_required policy: ansible-builder uses the container image definitions to generate a policy.json file in the build context directory <context> that is used during the build to validate the images.
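
For example, a minimal sketch of a build that enforces signature validation (the keyring path is an assumption; this policy applies only to the podman runtime):

ansible-builder build --container-runtime podman \
  --container-policy signature_required \
  --container-keyring ~/.gnupg/pubring.kbx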

15.2.7. options

A dictionary of keywords or options that can affect the runtime functionality of Ansible Builder.

Valid keys for this section are:

  • container_init: A dictionary with keys that enable customization of the container ENTRYPOINT and CMD directives (and related behaviors). Customizing these behaviors is an advanced task, and can result in failures that are difficult to debug. Because the provided defaults control several intertwined behaviors, overriding any value skips all remaining defaults in this dictionary.

    Valid keys are:

    • cmd: Literal value for the CMD Containerfile directive. The default value is ["bash"].
    • entrypoint: Literal value for the ENTRYPOINT Containerfile directive. The default entrypoint behavior handles signal propagation to subprocesses, as well as attempting to ensure at runtime that the container user has a proper environment with a valid writeable home directory, represented in /etc/passwd, with the HOME environment variable set to match. The default entrypoint script can emit warnings to stderr in cases where it is unable to suitably adjust the user runtime environment. This behavior can be ignored or elevated to a fatal error; consult the source for the entrypoint target script for more details.

      The default value is ["/opt/builder/bin/entrypoint", "dumb-init"].

    • package_pip: Package to install with pip for entrypoint support. This package is installed in the final build image.

      The default value is dumb-init==1.2.5.

  • package_manager_path: A string with the path to the package manager (dnf or microdnf) to use. The default is /usr/bin/dnf. This value is used to install a Python interpreter, if specified in dependencies, and during the build phase by the assemble script.
  • skip_ansible_check: This boolean value controls whether or not the check for an installation of Ansible and Ansible Runner is performed on the final image.

    Set this value to True to not perform this check.

    The default is False.

  • relax_passwd_permissions: This boolean value controls whether the root group (GID 0) is explicitly granted write permission to /etc/passwd in the final container image. The default entrypoint script can attempt to update /etc/passwd under some container runtimes with dynamically created users to ensure a fully-functional POSIX user environment and home directory. Disabling this capability can cause failures of software features that require users to be listed in /etc/passwd with a valid and writeable home directory, for example, async in ansible-core, and the ~username shell expansion.

    The default is True.

  • workdir: Default current working directory for new processes started under the final container image. Some container runtimes also use this value as HOME for dynamically-created users in the root (GID 0) group. When this value is specified, if the directory does not already exist, it is created, set to root group ownership, and rwx group permissions are recursively applied to it.

    The default value is /runner.

  • user: This sets the username or UID to use as the default user for the final container image.

    The default value is 1000.

Example options:

options:
    container_init:
        package_pip: dumb-init>=1.2.5
        entrypoint: '["dumb-init"]'
        cmd: '["csh"]'
    package_manager_path: /usr/bin/microdnf
    relax_passwd_permissions: false
    skip_ansible_check: true
    workdir: /myworkdir
    user: bob

15.2.8. version

An integer value that sets the schema version of the execution environment definition file.

Defaults to 1.

The value must be 3 if you are using Ansible Builder 3.x.

15.3. Default execution environment for AWX

The example in test/data/pytz requires the awx.awx collection in the definition. The lookup plugin awx.awx.tower_schedule_rrule requires the pytz library from PyPI and another library to work. If the test/data/pytz/execution-environment.yml file is provided to the ansible-builder build command, it installs the collection inside the image, reads the requirements.txt file inside of the collection, and then installs pytz into the image.

The image produced can be used inside of an ansible-runner project by placing these variables inside the env/settings file, inside the private data directory.

---
container_image: image-name
process_isolation_executable: podman # or docker
process_isolation: true

The awx.awx collection is a subset of the content included in the default AWX execution environment.

For further information, see the awx-ee repository.

Chapter 16. Projects

A Project is a logical collection of Ansible playbooks, represented in automation controller. You can manage playbooks and playbook directories in different ways:

  • By placing them manually under the Project Base Path on your automation controller server.
  • By placing your playbooks into a source code management (SCM) system supported by automation controller, including Git, Subversion, Remote Archive, and Red Hat Insights.

For more information on creating a Red Hat Insights project, see Setting up insights remediations.

Note

The Project Base Path is /var/lib/awx/projects. However, this can be modified by the system administrator. It is configured in /etc/tower/conf.d/custom.py.

Use caution when editing this file, as incorrect settings can disable your installation.

The Projects page displays the list of the projects that are currently available.

Automation controller provides you with a Demo Project that you can work with initially.

Projects - home

The default view is collapsed (Compact), showing the project name and its status, but you can use the arrow next to each entry to expand for more information.

Projects - expanded

For each project listed, you can use the icons next to each project to get the latest SCM revision (Refresh), edit the project (Edit), or copy the project attributes (Copy).

Projects can be updated while a related job is running.

In cases where you have a large project (around 10 GB), disk space on /tmp may be an issue.

Status indicates the state of the project and may be one of the following (note that you can also filter your view by specific status types):

  • Pending - The source control update has been created, but not queued or started yet. Any job (not just source control updates) stays in pending until it is ready to be run by the system. Possible reasons for it not being ready are:

    • It has dependencies that are currently running, so it has to wait until they are done.
    • There is not enough capacity to run in the locations where it is configured to run.
  • Waiting - The source control update is in the queue waiting to be executed.
  • Running - The source control update is currently in progress.
  • Successful - The last source control update for this project succeeded.
  • Failed - The last source control update for this project failed.
  • Error - The last source control update job failed to run at all.
  • Canceled - The last source control update for the project was canceled.
  • Never updated - The project is configured for source control, but has never been updated.
  • OK - The project is not configured for source control, and is correctly in place.
  • Missing - Projects are absent from the project base path of /var/lib/awx/projects. This is applicable for manual or source control managed projects.
Note

Projects of credential type Manual cannot update or schedule source control-based actions without being reconfigured as an SCM type credential.

16.1. Adding a new project

You can create a logical collection of playbooks, called projects, in automation controller.

Procedure

  1. From the navigation panel, select ResourcesProjects.
  2. On the Projects page, click Add to launch the Create Project window.

    Projects- create new project

  3. Enter the appropriate details into the following fields:

    • Name (required)
    • Optional: Description
    • Organization (required): A project must have at least one organization. Select one organization now to create the project. When the project is created you can add additional organizations.
    • Optional: Execution Environment: Enter the name of the execution environment or search from a list of existing ones to run this project. For more information, see Migrating to Execution Environments in the Red Hat Ansible Automation Platform Upgrade and Migration Guide.
    • Source Control Type (required): Select an SCM type associated with this project from the menu. Options in the following sections become available depending on the type chosen. For more information, see Manage playbooks manually or Manage playbooks using source control.
    • Optional: Content Signature Validation Credential: Use this field to enable content verification. Specify the GPG key to use for validating content signature during project synchronization. If the content has been tampered with, the job will not run. For more information, see Project signing and verification.
  4. Click Save.

Additional resources

The following sections describe the ways projects are sourced:

16.1.1. Managing playbooks manually

Procedure

  • Create one or more directories to store playbooks under the Project Base Path, for example, /var/lib/awx/projects/.
  • Create or copy playbook files into the playbook directory.
  • Ensure that the playbook directory and files are owned by the same UNIX user and group that the service runs as.
  • Ensure that the permissions are appropriate for the playbook directories and files.

Troubleshooting

  • If you have not added any Ansible Playbook directories to the base project path, an error message is displayed. Choose one of the following options to troubleshoot this error:

    • Create the appropriate playbook directories and check out playbooks from your source control management (SCM) system.
    • Copy playbooks into the appropriate playbook directories.

16.1.2. Managing playbooks using source control

Choose one of the following options when managing playbooks using source control:

16.1.2.1. SCM Types - Configuring playbooks to use Git and Subversion

Procedure

  1. In the Project Details tab, select the appropriate option (Git or Subversion) from the SCM Type menu.

    Select scm

  2. Enter the appropriate details into the following fields:

    • SCM URL - See an example in the tooltip.
    • Optional: SCM Branch/Tag/Commit: Enter the SCM branch, tags, commit hashes, arbitrary refs, or revision number (if applicable) from the source control (Git or Subversion) to checkout. Some commit hashes and references might not be available unless you also provide a custom refspec in the next field. If left blank, the default is HEAD which is the last checked out Branch, Tag, or Commit for this project.
    • SCM Refspec - This field is specific to Git source control. Only advanced users who are familiar and comfortable with Git should specify which references to download from the remote repository. For more information, see Job branch overriding.
    • Source Control Credential - If authentication is required, select the appropriate source control credential.
  3. Optional: SCM Update Options - select the launch behavior, if applicable:

    • Clean - Removes any local modifications before performing an update.
    • Delete - Deletes the local repository in its entirety before performing an update. Depending on the size of the repository this can significantly increase the amount of time required to complete an update.
    • Track submodules - Tracks the latest commit. For more information, see the tooltip.
    • Update Revision on Launch - Updates the revision of the project to the current revision in the remote source control, and caches the roles directory from Galaxy or Collections support. Automation controller ensures that the local revision matches and that the roles and collections are up-to-date with the last update. In addition, to avoid job overflows if jobs are spawned faster than the project can synchronize, selecting this enables you to configure a Cache Timeout to cache previous project synchronizations for a given number of seconds.
    • Allow Branch Override - Enables a job template or an inventory source that uses this project to start with a specified SCM branch or revision other than that of the project. For more information, see Job branch overriding.

      Override options

  4. Click Save to save your project.
Tip

Using a GitHub link is an easy way to use a playbook. To help get you started, use the helloworld.yml file available here.

This link offers a playbook very similar to the one created manually in the instructions found in the Automation controller User Guide. Using it does not alter or harm your system in any way.

16.1.2.2. SCM Type - Configuring playbooks to use Red Hat Insights

Procedure

  1. In the Project Details page, select Red Hat Insights from the SCM Type menu.
  2. In the Credential field, select the appropriate credential for use with Insights, as Red Hat Insights requires a credential for authentication.
  3. Optional: In the SCM Update Options field, select the launch behavior, if applicable.

    • Clean - Removes any local modifications before performing an update.
    • Delete - Deletes the local repository in its entirety before performing an update. Depending on the size of the repository this can significantly increase the amount of time required to complete an update.
    • Update Revision on Launch - Updates the revision of the project to the current revision in the remote source control, and caches the roles directory from Ansible Galaxy support or Collections support. Automation controller ensures that the local revision matches, and that the roles and collections are up-to-date. If jobs are spawned faster than the project can synchronize, selecting this enables you to configure a Cache Timeout to cache previous project synchronizations for a certain number of seconds, to avoid job overflow.

      SCM insights

  4. Click Save.
16.1.2.3. SCM Type - Configuring playbooks to use a remote archive

Playbooks that use a remote archive enable projects to be based on a build process that produces a versioned artifact, or release, containing all the requirements for that project in a single archive.

Procedure

  1. In the Project Details page, select Remote Archive from the SCM Type menu.
  2. Enter the appropriate details into the following fields:

    • SCM URL - Requires a URL to a remote archive, such as a GitHub release or a build artifact stored in Artifactory, which is unpacked into the project path for use.
    • SCM Credential - If authentication is required, select the appropriate SCM credential.
  3. Optional: In the SCM Update Options field, select the launch behavior, if applicable:

    • Clean - Removes any local modifications before performing an update.
    • Delete - Deletes the local repository in its entirety before performing an update. Depending on the size of the repository this can significantly increase the amount of time required to complete an update.
    • Update Revision on Launch - Not recommended. This option updates the revision of the project to the current revision in the remote source control, and caches the roles directory from Ansible Galaxy support or Collections support.
    • Allow Branch Override - Not recommended. This option enables a job template that uses this project to launch with a specified SCM branch or revision other than that of the project.

      Remote archived project

      Note

      Since this SCM type is intended to support the concept of unchanging artifacts, it is advisable to disable Galaxy integration (for roles, at a minimum).

  4. Click Save.

16.2. Updating projects from source control

Procedure

  1. From the navigation panel, select ResourcesProjects.
  2. Click the Sync icon next to the project that you want to update.

    Note

    Immediately after you add a project that is set up to use source control, a sync starts that fetches the project details from the configured source control.

    • Click the project’s status under the Status column for further information about the update process. This brings you to the Output tab of the Jobs section.

      Project-update status

16.3. Work with permissions

The set of permissions assigned to a project through role-based access controls provides the ability to read, change, and administer projects, inventories, job templates, and other elements. These permissions are called privileges.

To access the project permissions, select the Access tab of the Projects page. This screen displays a list of users that currently have permissions to this project.

You can sort and search this list by Username, First Name, or Last Name.

16.3.1. Adding project permissions

Manage the permissions that users and teams have to access a project.

Procedure

  1. From the navigation panel, select ResourcesProjects.
  2. Select the project that you want to update and click the Access tab.
  3. Click Add.
  4. Select a user or team to add and click Next.
  5. Select one or more users or teams from the list by clicking the checkbox next to the name to add them as members.
  6. Click Next.
  7. Select the roles you want the selected users or teams to have. Be sure to scroll down for a complete list of roles. Different resources have different options available.

    Add user roles

  8. Click Save to apply the roles to the selected users or teams and to add them as members. The updated roles assigned for each user and team are displayed.

    Permissions assigned

16.3.2. Removing permissions from a project

Remove roles for a particular user.

Procedure

  1. From the navigation panel, select ResourcesProjects.
  2. Select the project that you want to update and click the Access tab.
  3. Click the Disassociate icon next to the user role in the Roles column.
  4. Click Delete in the confirmation window to confirm the disassociation.

16.4. Ansible Galaxy support

At the end of a project update, automation controller searches for the requirements.yml file in the roles directory, located at <project-top-level-directory>/roles/requirements.yml.

If this file is found, the following command automatically runs:

ansible-galaxy role install -r roles/requirements.yml -p <project-specific cache location>/requirements_roles -vvv

This file enables you to reference Ansible Galaxy roles or roles within other repositories which can be checked out in conjunction with your own project. The addition of Ansible Galaxy access eliminates the need to create git submodules to achieve this result. Given that SCM projects, along with roles or collections, are pulled into and executed from a private job environment, a <private job directory> specific to the project within /tmp is created by default. However, you can specify another Job Execution Path based on your environment in the Jobs Settings tab of the Settings window:

Configure execution path

The cache directory is a subdirectory inside the global projects folder. The content can be copied from the cache location to <job private directory>/requirements_roles.

By default, automation controller has a system-wide setting that enables you to dynamically download roles from the roles/requirements.yml file for SCM projects. You can turn off this setting in the Jobs settings screen of the Settings menu by switching the Enable Role Download toggle button to Off.

Whenever a project synchronization runs, automation controller determines whether the project source and any roles from Galaxy or Collections are out of date with the project. Project updates download the roles inside the update.

For instance, say you have two git repositories in source control. The first one is playbooks, and the project in automation controller points to this URL. The second one is provision-role, and it is referenced by the roles/requirements.yml file inside of the playbooks git repository.

If jobs need to pick up a change made to an upstream role, updating the project ensures that this happens. A change to the role means that a new commit was pushed to the provision-role source control. To make this change take effect in a job, you do not have to push a new commit to the playbooks repository; you must update the project, which downloads the roles to a local cache.

It is reasonable to expect jobs to download the most recent roles before every job run. However, roles and collections are locally cached for performance reasons, so you must select Update Revision on Launch in the project SCM Update Options to ensure that the upstream role is re-downloaded before each job run:

update-on-launch

The update happens much earlier in the process than the sync, so this identifies errors and details faster and in a more logical location.

For more information and examples on the syntax of the requirements.yml file, see the role requirements section in the Ansible documentation.

If there are any directories that must be specifically exposed, you can specify those in the Jobs section of the Settings screen in Paths to Expose to Isolated Jobs. You can also update the following entry in the settings file:

AWX_ISOLATION_SHOW_PATHS = ['/list/of/', '/paths']
Note

If your playbooks need to use keys or settings defined in /var/lib/awx/.ssh, you must add /var/lib/awx/.ssh to AWX_ISOLATION_SHOW_PATHS.

If you made changes in the settings file, be sure to restart services with the automation-controller-service restart command after your changes have been saved.

In the UI, you can configure these settings in the Jobs settings window.

Configure jobs

16.5. Collections support

Automation controller supports project-specific Ansible collections in job runs. If you specify a collections requirements file in the SCM at collections/requirements.yml, automation controller installs collections in that file in the implicit project synchronization before a job run.
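
For example, a minimal collections/requirements.yml sketch (the collection names and version are illustrative):

collections:
  - name: ansible.posix
  - name: community.general
    version: ">=6.0.0"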

Automation controller has a system-wide setting that enables collections to be dynamically downloaded from the collections/requirements.yml file for SCM projects. You can turn off this setting in the Jobs settings tab of the Settings menu by switching the Enable Collections Download toggle button to Off.

Download collections

Roles and collections are locally cached for performance reasons. Select Update Revision on Launch in the project SCM Update Options to ensure that they are updated:

update-on-launch

Note

If you also have collections installed in your execution environment, the collections specified in the project’s requirements.yml file will take precedence when running a job. This precedence applies regardless of the version of the collection. For example, if the collection specified in requirements.yml is older than the collection within the execution environment, the collection specified in requirements.yml is used.

16.5.1. Using collections with automation hub

Before automation controller can use automation hub as the default source for collections content, you must create an API token in the automation hub UI. You then specify this token in automation controller.

Use the following procedure to connect to private automation hub or automation hub; the only difference is which URL you specify.

Procedure

  1. Go to https://console.redhat.com/ansible/automation-hub/token.
  2. Click Load token.
  3. Click the copy Copy icon to copy the API token to the clipboard.
  4. Create a credential by choosing one of the following options:

    1. To use automation hub, create an automation hub credential using the copied token and pointing to the URLs shown in the Server URL and SSO URL fields of the token page:

    2. To use private automation hub, create an automation hub credential using a token retrieved from the Repo Management dashboard of your private automation hub and pointing to the published repository URL as shown:

      image

      You can create different repositories with different namespaces or collections in them. For each repository in automation hub you must create a different credential.

      Copy the Ansible CLI URL from the UI, in the format https://<hub_url>/api/galaxy/content/<repository you want to pull from>, into the Galaxy Server URL field of the Create Credential form:

      Create hub credential

      For UI specific instructions, see Red Hat Certified, validated, and Ansible Galaxy content in automation hub.

  5. Go to the organization that you want to synchronize content from, and add the new credential to the organization. This enables you to associate each organization with the credential, or repository, that you want to use content from.

    Credential association

    Example

    You have two repositories:

    • Prod: Namespace 1 and Namespace 2, each with collections A and B, so: namespace1.collectionA:v2.0.0 and namespace2.collectionB:v2.0.0
    • Stage: Namespace 1 with only collection A, so: namespace1.collectionA:v1.5.0. In this case, you have a repository URL for Prod and another for Stage.

      You can create a credential for each one.

      Then you can assign different levels of access to different organizations. For example, you can create a Developers organization that has access to both repositories, while an Operations organization has access only to the Prod repository.

      For UI specific instructions, see Configuring user access for container repositories in private automation hub.

  6. If private automation hub uses self-signed certificates, use the toggle to enable the setting Ignore Ansible Galaxy SSL Certificate Verification. For automation hub, which uses a signed certificate, use the toggle to disable it instead. This is a global setting:

    image

  7. Create a project, where the source repository specifies the necessary collections in a requirements file located in the collections/requirements.yml file. For information about the syntax to use, see Using Ansible collections in the Ansible documentation.

    Project source repository

  8. In the Projects list view, click the Sync icon to update this project. Automation controller fetches the Galaxy collections from the collections/requirements.yml file and reports it as changed. The collections are then installed for any job template using this project.
Note

If updates are required from Galaxy or Collections, a sync is performed that downloads the required roles, consuming that much more space in your /tmp file. In cases where you have a large project (around 10 GB), disk space on /tmp may be an issue.

Additional resources

For more information about collections, see Using Collections.

For more information about how Red Hat publishes one of these official collections, which can be used to automate your install directly, see the AWX Ansible Collection documentation.

Chapter 17. Project Signing and Verification

Project signing and verification lets you sign files in your project directory, then verify whether that content has changed in any way, or whether files have been added to or removed from the project unexpectedly. To do this, you require a private key for signing and a matching public key for verifying.

For project maintainers, the supported way to sign content is to use the ansible-sign utility, using the command-line interface (CLI) supplied with it.

The CLI aims to make it easy to use cryptographic technology such as GNU Privacy Guard (GPG) to validate that files within a project have not been tampered with in any way. Currently, GPG is the only supported means of signing and validation.

Automation controller is used to verify the signed content. After a matching public key has been associated with the signed project, automation controller verifies that the files included during signing have not changed, and that no files have been added or removed unexpectedly. If the signature is not valid or a file has changed, the project fails to update, and jobs making use of the project do not launch. Verification status of the project ensures that only secure, untampered content can be run in jobs.

If the repository has already been configured for signing and verification, the usual workflow for altering the project becomes the following:

  1. You have a project repository set up already and want to make a change to a file.
  2. You make the change, and run the following command:

    ansible-sign project gpg-sign /path/to/project

    This command updates a checksum manifest and signs it.

  3. You commit the change, the updated checksum manifest, and the signature to the repository.
  4. When you synchronize the project, automation controller pulls in the new changes, checks that the public key associated with the project in automation controller matches the private key that the checksum manifest was signed with (this prevents tampering with the checksum manifest itself), then re-calculates the checksums of each file in the manifest to ensure that the checksum matches (and thus that no file has changed). It also ensures that all files are accounted for:

Files must be included in, or excluded from, the MANIFEST.in file. For more information on this file, see Sign a project. If files have been added or removed unexpectedly, verification fails.

Content signing

17.1. Prerequisites

  • RHEL nodes must be properly subscribed:

    • RHEL subscription with baseos and appstream repositories must be enabled.
    • Your Red Hat Ansible Automation Platform subscription and the proper channel must be enabled:

      ansible-automation-platform-2.4-for-rhel-8-x86_64-rpms for RHEL 8
      ansible-automation-platform-2.4-for-rhel-9-x86_64-rpms for RHEL 9
  • A valid GPG public/private keypair is required for signing content. For more information, see How to create GPG keypairs.

    For more information about GPG keys, see the GnuPG documentation.

    Verify that you have a valid GPG keypair in your default GnuPG keyring, with the following command:

    gpg --list-secret-keys

    If this command produces no output, or one line of output that states, trustdb was created, then you do not have a secret key in your default keyring. In this case, refer to How to create GPG keypairs to learn how to create a new keypair before proceeding. If it produces any other output, you have a valid secret key and are ready to use ansible-sign.

17.2. Adding a GPG key to automation controller

To use the GPG key for content signing and validation in automation controller, first export your public key by running the following commands in the CLI:

$ gpg --list-keys
$ gpg --export --armour <key fingerprint> > my_public_key.asc

Then add the exported key in the automation controller UI:

  1. From the navigation panel, select ResourcesCredentials.
  2. Click Add.
  3. Provide a meaningful name for the new credential, for example, "Infrastructure team public GPG key".
  4. In the Credential Type field, select GPG Public Key.
  5. Click Browse to locate and select the public key file, for example, my_public_key.asc.
  6. Click Save.

    image

    This credential can now be selected in projects, and content verification automatically takes place on future project synchronizations.

Note

Use the project cache SCM timeout to control how often you want automation controller to re-validate the signed content. When a project is configured to update on launch (of any job template configured to use that project), you can enable the cache timeout setting, which sets it to update after N seconds have passed since the last update. If validation is running too frequently, you can slow down how often project updates occur by specifying the time in the Cache Timeout field of the Option Details pane of the project.

image

17.3. Installing the ansible-sign CLI utility

The ansible-sign utility provides options for signing a project and for verifying whether a project is signed.

Procedure

  1. Run the following command to install ansible-sign:

    $ dnf install ansible-sign
  2. Verify that ansible-sign was successfully installed using the following command:

    $ ansible-sign --version

    Output similar to the following indicates that you have successfully installed ansible-sign:

    ansible-sign 0.1

17.4. Sign a project

Signing a project involves an Ansible project directory. For more information on project directory structures, see Sample Ansible setup in the Ansible documentation.

The following sample project has a very simple structure: an inventory file, and two small playbooks under a playbooks directory:

$ cd sample-project/
$ tree -a .
.
├── inventory
└── playbooks
    ├── get_uptime.yml
    └── hello.yml

1 directory, 3 files
Note

The commands used assume that your working directory is the root of your project. ansible-sign project commands take the project root directory as their last argument.

Use . to indicate the current working directory.

ansible-sign protects content from tampering by taking checksums (SHA256) of all of the secured files in the project, compiling those into a checksum manifest file, and then signing that manifest file.

To sign content, create a MANIFEST.in file in the project root directory that tells ansible-sign which files to protect.

Internally, ansible-sign uses the distlib.manifest module of Python’s distlib library; therefore, MANIFEST.in must follow the syntax that this library specifies. For an explanation of the MANIFEST.in file directives, see the Python Packaging User Guide.

In the sample project, two directives are included, resulting in the following MANIFEST.in file:

include inventory
recursive-include playbooks *.yml

With this file in place, generate your checksum manifest file and sign it. Both of these steps are achieved in a single ansible-sign command:

$ ansible-sign project gpg-sign .

Successful execution displays output similar to the following:

[OK   ] GPG signing successful!
[NOTE ] Checksum manifest: ./.ansible-sign/sha256sum.txt
[NOTE ] GPG summary: signature created

The project has now been signed.

Note that the gpg-sign subcommand resides under the project subcommand.

For signing project content, every command starts with ansible-sign project.

Every ansible-sign project command takes the project root directory . as its final argument.

ansible-sign makes use of your default keyring and looks for the first available secret key that it can find, to sign your project. You can specify a specific secret key to use with the --fingerprint option, or even a completely independent GPG home directory with the --gnupg-home option.
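
For example, a sketch that selects a specific key and a separate GPG home directory (the fingerprint and directory are placeholders; check ansible-sign project gpg-sign --help for the exact option placement):

$ ansible-sign project gpg-sign --fingerprint <key-fingerprint> --gnupg-home ~/ci-gnupg .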

Note

If you are using a desktop environment, GnuPG automatically prompts you for your secret key’s passphrase.

If this functionality does not work, or you are working without a desktop environment, for example, through SSH, you can use the -p (--prompt-passphrase) flag after gpg-sign, which causes ansible-sign to prompt for the password instead.

Note that an .ansible-sign directory was created in the project directory. This directory contains the checksum manifest and a detached GPG signature for it.

$ tree -a .
.
├── .ansible-sign
│   ├── sha256sum.txt
│   └── sha256sum.txt.sig
├── inventory
├── MANIFEST.in
└── playbooks
    ├── get_uptime.yml
    └── hello.yml

17.5. Verify your project

To verify that a signed Ansible project has not been altered, you can use ansible-sign to check whether the signature is valid and that the checksums of the files match what the checksum manifest says they should be. The ansible-sign project gpg-verify command can be used to automatically verify both of these conditions.

$ ansible-sign project gpg-verify .
[OK   ] GPG signature verification succeeded.
[OK   ] Checksum validation succeeded.
Note

By default, ansible-sign makes use of your default GPG keyring to look for a matching public key. You can specify a keyring file with the --keyring option, or a different GPG home with the --gnupg-home option.

If verification fails for any reason, information is displayed to help you debug the cause. More verbosity can be enabled by passing the global --debug flag, immediately after ansible-sign in your commands.

Note

When a GPG credential is used in a project, content verification automatically takes place on future project synchronizations.

17.6. Automate signing

In highly trusted Continuous Integration (CI) environments, such as OpenShift or Jenkins, it is possible to automate the signing process.

For example, you can store your GPG private key in a CI platform of choice as a secret, and import that into GnuPG in the CI environment. You can then run through the signing workflow within the normal CI environment.

When signing a project using GPG, the environment variable ANSIBLE_SIGN_GPG_PASSPHRASE can be set to the passphrase of the signing key. This can be injected and masked or secured in a CI pipeline.

Depending on the scenario, ansible-sign returns a different exit code during both signing and verification. This can also be useful in the context of CI and automation, as a CI environment can act differently based on the failure. For example, it can send alerts for some errors, but fail silently for others.

These are the current exit codes used in ansible-sign, which can be considered stable:

  • Exit code 0 (Success):

    • Signing was successful
    • Verification was successful

  • Exit code 1 (General failure):

    • The checksum manifest file contained a syntax error during verification
    • The signature file did not exist during verification
    • MANIFEST.in did not exist during signing

  • Exit code 2 (Checksum verification failure):

    • The checksum hashes calculated during verification differed from what was in the signed checksum manifest, for example, a project file was changed but the signing process was not re-completed

  • Exit code 3 (Signature verification failure):

    • The signer’s public key was not in the user’s GPG keyring
    • The wrong GnuPG home directory or keyring file was specified
    • The signed checksum manifest file was modified in some way

  • Exit code 4 (Signing process failure):

    • The signer’s private key was not found in the GPG keyring
    • The wrong GnuPG home directory or keyring file was specified
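
In a CI pipeline, you can branch on these exit codes. The following is a minimal shell sketch, assuming the signer's public key is already imported into the runner's keyring; the messages and branching are illustrative:

#!/bin/sh
# Verify the project signature; branch on ansible-sign's documented exit codes.
ansible-sign project gpg-verify .
rc=$?
case "$rc" in
  0) echo "Project verified." ;;
  2) echo "Checksum mismatch: a project file changed after signing." >&2 ;;
  3) echo "Signature verification failed: check your keyring." >&2 ;;
  *) echo "Verification failed (general error)." >&2 ;;
esac
exit "$rc"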

Chapter 18. Inventories

Red Hat Ansible Automation Platform works against a list of managed nodes or hosts in your infrastructure that are logically organized, using an inventory file. You can use the Red Hat Ansible Automation Platform installer inventory file to specify your installation scenario and describe host deployments to Ansible. By using an inventory file, Ansible can manage a large number of hosts with a single command. Inventories also help you use Ansible more efficiently by reducing the number of command line options you have to specify. Inventories are divided into groups and these groups contain the hosts.

Groups may be sourced manually, by entering host names into automation controller, or from one of its supported cloud providers.

Note

If you have a custom dynamic inventory script, or a cloud provider that is not yet supported natively in automation controller, you can also import that into automation controller.

For more information, see Inventory file importing in the Automation controller Administration Guide.

From the navigation panel, select Resources → Inventories. The Inventories window displays a list of the inventories that are currently available. You can sort the inventory list by Name, Type, or Organization.

Inventories - home

The Inventory details page includes:

  • Name: The inventory name.
  • Status

The statuses are:

  • Success: The inventory source sync completed successfully
  • Disabled: No inventory source has been added to the inventory
  • Error: The inventory source sync completed with errors

  • Type: Identifies whether it is a standard inventory, a Smart inventory, or a constructed inventory.
  • Organization: The organization to which the inventory belongs.
  • Actions: The following actions are available for the selected inventory:
    • Edit: Edit the properties of the selected inventory
    • Copy: Make a copy of an existing inventory as a template for creating a new one

Click the Inventory name to display the Details page for the selected inventory, which shows the inventory’s groups and hosts.

18.1. Smart Inventories

Smart Inventories are collections of hosts defined by a stored search that can be viewed like a standard inventory and can be easily used with job runs. Organization administrators have admin permission for inventories in their organization and can create Smart Inventories.

A Smart Inventory is identified by kind=smart.

You can define a Smart Inventory by using the same method used with Search. An InventorySource is directly associated with an Inventory.

Note

Smart inventories are deprecated and will be removed in a future release. Consider moving to constructed inventories for enhancements and replacement.

The Inventory model has the following new fields, which are blank by default but are set accordingly for Smart Inventories:

  • kind is set to smart for Smart Inventories.
  • host_filter is set, and kind is set to smart, for Smart Inventories.

The host model has a related endpoint, smart_inventories, that identifies the set of all Smart Inventories a host is associated with. The membership table is updated every time a job runs against a Smart Inventory.

Note

To update the memberships more frequently, you can change the file-based setting AWX_REBUILD_SMART_MEMBERSHIP to True (the default is False). This updates memberships if the following events occur:

  • A new host is added
  • An existing host is modified (updated or deleted)
  • A new Smart Inventory is added
  • An existing Smart Inventory is modified (updated or deleted)

Some inventory content is viewable but not editable:

  • Names of hosts and groups that were created as a result of an inventory source synchronization cannot be edited.
  • Group records cannot be edited or moved.

You cannot create hosts from a Smart Inventory host endpoint (/inventories/N/hosts/) as with a normal inventory. The administrator of a Smart Inventory has permission to edit fields such as the name, description, variables, and the ability to delete, but does not have the permission to modify the host_filter, because that affects which hosts (that have a primary membership inside another inventory) are included in the smart inventory.

host_filter only applies to hosts in inventories within the Smart Inventory’s organization.

To modify host_filter, you must be the organization administrator of the inventory’s organization. Organization administrators have implicit "admin" access to all inventories inside the organization, therefore, this does not convey any permissions they did not already possess.

Administrators of the Smart Inventory can grant other users (who are not also admins of your organization) permissions such as "use" and "adhoc" to the smart inventory. These permit the actions indicated by the role, as with other standard inventories. However, this does not grant any special permissions to hosts (which live in a different inventory). It does not permit direct read permission to hosts, or permit them to see additional hosts under /#/hosts/, although they can still view the hosts under the smart inventory host list.

In some situations, you can modify the following:

  • A new host created manually on an inventory that has inventory sources.
  • Groups that were created as a result of inventory source synchronizations.

However, variables on such hosts and groups are not changeable, even as the local System Administrator.

Hosts associated with the Smart Inventory are manifested at view time. If the results of a Smart Inventory contain more than one host with identical hostnames, only one of the matching hosts is included as part of the Smart Inventory, ordered by Host ID.

18.1.1. Smart Host Filters

You can use a search filter to populate hosts for an inventory. This feature uses the fact searching feature.

Automation controller stores facts generated by an Ansible playbook during a job template run in the database whenever use_fact_cache=True is set per job template. New facts are merged with existing facts and are per-host. These stored facts can be used to filter hosts with the /api/v2/hosts endpoint, using the GET query parameter host_filter.

For example:

/api/v2/hosts?host_filter=ansible_facts__ansible_processor_vcpus=8

The host_filter parameter permits:

  • grouping with ()
  • use of the Boolean and operator
  • __ to reference related fields in relational fields
  • __ on ansible_facts to separate keys in a JSON key path
  • [] to denote a JSON array in the path specification
  • "" in the value when spaces are wanted in the value
  • embedding "classic" Django queries in the host_filter

Examples:

/api/v2/hosts/?host_filter=name=localhost
/api/v2/hosts/?host_filter=ansible_facts__ansible_date_time__weekday_number="3"
/api/v2/hosts/?host_filter=ansible_facts__ansible_processor[]="GenuineIntel"
/api/v2/hosts/?host_filter=ansible_facts__ansible_lo__ipv6[]__scope="host"
/api/v2/hosts/?host_filter=ansible_facts__ansible_processor_vcpus=8
/api/v2/hosts/?host_filter=ansible_facts__ansible_env__PYTHONUNBUFFERED="true"
/api/v2/hosts/?host_filter=(name=localhost or name=database) and (groups__name=east or groups__name="west coast") and ansible_facts__an

You can search host_filter by host name, group name, and Ansible facts.

Group search has the following format:

groups.name:groupA

Fact search has the following format:

ansible_facts.ansible_fips:false

You can also perform Smart Search searches, which consist of a host name and host description.

host_filter=name=my_host
Note

If a search term in host_filter is a number (for example, 2.66) or a JSON keyword (for example, null, true, or false) that must be treated as a string, add double quotation marks around the value to prevent the controller from parsing it as a non-string:

host_filter=ansible_facts__packages__dnsmasq[]__version="2.66"

18.1.2. Defining a host filter with ansible_facts

Use the following procedure to use ansible_facts to define the host filter when creating Smart Inventories.

Procedure

  1. From the navigation panel, select Resources → Inventories.
  2. Select Add Smart Inventory from Add list.
  3. In the Create new smart inventory page, click the Search icon in the Smart host filter field. This opens a window to filter hosts for this inventory.

    Define host filter

  4. In the search menu, change the search criteria from Name to Advanced and select ansible_facts from the Key field.

    Define host filter facts

    If you wanted to add the following ansible fact:

    /api/v2/hosts/?host_filter=ansible_facts__ansible_processor[]="GenuineIntel"

    In the search field, enter ansible_processor[]="GenuineIntel" (no extra spaces or __ before the value) and click Enter.


    The search criteria for the specified ansible fact is displayed.

  5. Click Select to add it to the Smart host filter field.
  6. Click Save.
  7. The Details tab of the new Smart Inventory opens and displays the specified ansible facts in the Smart host filter field.
  8. From the Details view, you can click Edit to modify the Smart host filter field, where you can delete existing filters, clear all existing filters, or add new ones.


18.2. Constructed Inventories

You can create a new inventory (called a constructed inventory) from a list of input inventories.

A constructed inventory contains copies of hosts and groups in its input inventories, permitting jobs to target groups of servers across multiple inventories. Groups and hostvars can be added to the inventory content, and hosts can be filtered to limit the size of the constructed inventory.

Constructed inventories use the ansible.builtin.constructed inventory plugin.

The key factors of a constructed inventory are:

  • The normal Ansible hostvars namespace is available
  • They provide groups

Constructed inventories take source_vars and limit as inputs and transform their input_inventories into a new inventory, complete with groups. Groups (existing or constructed) can then be referenced in the limit field to reduce the number of hosts produced.

You can construct groups based on these host properties:

  • RHEL major or minor versions
  • Windows hosts
  • Cloud-based instances tagged in a certain region
  • Other host properties
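
For example, the following source_vars sketch constructs a group of RHEL 9 hosts. It assumes that the input inventories have cached Ansible facts (see Ansible facts later in this chapter), so that the standard fact names ansible_distribution and ansible_distribution_major_version are available:

plugin: constructed
strict: true
groups:
  rhel9_hosts: ansible_distribution == "RedHat" and ansible_distribution_major_version == "9"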

The following is an example of a constructed inventory details view:

Constructed inventory details

The examples described in subsequent sections are organized by the structure of the input inventories.

18.2.1. Filtering on group name and variables

You can filter on a combination of groups and variables. For example, you can filter hosts that match a group variable value and also match a host variable value.

There are two approaches to executing this filter:

  • Define two groups: one group to match the group variable and the other group to match the host variable value. Use the limit pattern to return the hosts that are in both groups. This is the recommended approach.
  • Define one group. In the definition, include the condition that the group and host variables must match specific values. Use the limit pattern to return all the hosts in the new group.

Example:

The following inventory file defines four hosts and sets group and host variables. It defines a product_dev group and a sustaining group, and it sets two hosts to a shutdown state.

The goal is to create a filter that returns only production hosts that are shutdown.

[account_1234]
host1
host2 state=shutdown

[account_4321]
host3
host4 state=shutdown

[account_1234:vars]
account_alias=product_dev

[account_4321:vars]
account_alias=sustaining

The goal here is to return only shutdown hosts that are present in the group with the account_alias variable equal to product_dev. There are two approaches to this, both shown in YAML format. The first approach is recommended.

  1. Construct 2 groups, limit to intersection:

    source_vars:

    plugin: constructed
    strict: true
    groups:
      is_shutdown: state | default("running") == "shutdown"
      product_dev: account_alias == "product_dev"

    limit: is_shutdown:&product_dev

    This constructed inventory input creates a group for both categories and uses the limit (host pattern) to return only hosts that are in the intersection of those two groups, as documented in Patterns: targeting hosts and groups.

    When a variable might not be defined on every host, you can give it a default. For example, use | default("running") if you know what value it should have when it is not defined. This helps with debugging, as described in Debugging tips.

  2. Construct 1 group, limit to group:

    source_vars:

    plugin: constructed
    strict: true
    groups:
      shutdown_in_product_dev: state | default("running") == "shutdown" and account_alias == "product_dev"

    limit: shutdown_in_product_dev

    This input creates one group that only includes hosts that match both criteria. The limit is then just the group name by itself, returning host2, the same result as the earlier approach.

18.2.2. Debugging tips

It is important to set the strict parameter to true so that you can debug problems with your templates. If the template fails to render, an error occurs in the associated inventory update for that constructed inventory.

When encountering errors, increase verbosity to get more details.

Giving a default, such as | default("running"), is a generic use of Jinja2 templates in Ansible. Doing this avoids errors from the template when you set strict: true.

You can also set strict: false, so that a template error merely results in the host not being included in that group. However, doing this makes it difficult to debug issues in the future if your templates continue to grow in complexity.

You might still have to debug the intended function of the templates if they are not producing the expected inventory content. For example, if a group has a complex filter (like shutdown_in_product_dev) but does not contain any hosts in the resulting constructed inventory, use the compose parameter to help debug.

For example:

source_vars:

plugin: constructed
strict: true
groups:
  shutdown_in_product_dev: state | default("running") == "shutdown" and account_alias == "product_dev"
compose:
  resolved_state: state | default("running")
  is_in_product_dev: account_alias == "product_dev"

limit: ""

Running with a blank limit returns all hosts. You can use this to inspect specific variables on specific hosts, giving insight into where problems in the groups lie.

18.2.3. Nested groups

A nested group consists of two groups where one is a child of the other. In the following example, the child group has another host inside of it, and the parent group has a variable defined.

Because of the way Ansible core operates, the variable of the parent group is available in the namespace as a playbook is running, and can be used for filtering.

The following example inventory file, nested.yml is in YAML format:

all:
  children:
    groupA:
      vars:
        filter_var: filter_val
      children:
        groupB:
          hosts:
            host1: {}
    ungrouped:
      hosts:
        host2: {}

Because host1 is in groupB, it is also in groupA.

Filter on nested group names

Use the following YAML format to filter on nested group names:

source_vars:

plugin: constructed

limit: groupA

Filter on nested group property

Use the following YAML format to filter on a group variable, even if the host is indirectly a member of that group.

In the inventory content, note that host2 is not expected to have the variable filter_var defined, because it is not in any of the groups. Because strict: true is used, provide a default value so that hosts without that variable are still evaluated. With the default in place, host2 returns false from the expression instead of producing an error, and host1, which inherits the variable from its groups, is returned.

source_vars:

plugin: constructed
strict: true
groups:
  filter_var_is_filter_val: filter_var | default("") == "filter_val"

limit: filter_var_is_filter_val

18.2.4. Ansible facts

To create an inventory with Ansible facts, you must run a playbook against the inventory that has the setting gather_facts: true. The facts differ from system to system. The following examples are not intended to address all known scenarios.
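
For example, a minimal fact-gathering playbook sketch (the play name is illustrative); run it from a job template with fact caching (use_fact_cache) enabled, as described in Smart Host Filters, so the gathered facts are stored:

---
- name: Populate the fact cache
  hosts: all
  gather_facts: true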

18.2.4.1. Filter on environment variables

The following example involves filtering on environmental variables using the YAML format:

source_vars:

plugin: constructed
strict: true
groups:
  hosts_using_xterm: ansible_env.TERM == "xterm"

limit: hosts_using_xterm
18.2.4.2. Filter hosts by processor type

The following example involves filtering hosts by processor type (Intel) using the YAML format:

source_vars:

plugin: constructed
strict: true
groups:
  intel_hosts: "GenuineIntel" in ansible_processor

limit: intel_hosts
Note

Hosts in constructed inventories are not counted against your license allotment because they are referencing the original inventory host. Additionally, hosts that are disabled in the original inventories are not included in the constructed inventory.

An inventory update run using ansible-inventory creates the constructed inventory contents.

This is always configured to update-on-launch before a job, but you can still select a cache timeout value in case this takes too long.

When creating a constructed inventory, the API ensures that it always has one inventory source associated with it. All inventory updates have an associated inventory source, and the fields needed for constructed inventory (source_vars and limit) are fields already present on the inventory source model.
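
As a sketch of what this can look like through the API, the endpoint path and exact payload shape below are illustrative assumptions; confirm them against the Bulk endpoints and inventory documentation in the Automation Controller API Guide:

POST /api/v2/constructed_inventories/

{
  "name": "shutdown-production-hosts",
  "organization": 1,
  "source_vars": "plugin: constructed\nstrict: true\ngroups:\n  is_shutdown: state | default(\"running\") == \"shutdown\"",
  "limit": "is_shutdown"
}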

18.3. Inventory Plugins

Inventory updates use dynamically generated YAML files which are parsed by their respective inventory plugin. In automation controller v4.4, you can provide the inventory plugin configuration directly to automation controller using the inventory source source_vars for the inventory sources listed in Supported inventory plugin templates.

Newly created configurations for inventory sources contain the default plugin configuration values. If you want your newly created inventory sources to match the output of a legacy source, you must apply a specific set of configuration values for that source. To ensure backward compatibility, automation controller uses "templates" for each of these sources to force the output of inventory plugins into the legacy format.

For more information on sources and their respective templates, see Supported inventory plugin templates.

source_vars that contain plugin: foo.bar.baz as a top-level key are replaced with the fully-qualified inventory plugin name at runtime based on the InventorySource source. For example, if ec2 is selected for the InventorySource then, at run-time, plugin is set to amazon.aws.aws_ec2.
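
For example, a sketch of source_vars for an Amazon EC2 inventory source; the keyed_groups entry is an illustrative plugin option, not a requirement:

plugin: ec2
keyed_groups:
  - key: placement.region
    prefix: region

At runtime, the plugin: ec2 value is rewritten to plugin: amazon.aws.aws_ec2 before the generated YAML file is parsed.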

18.4. Add a new inventory

Adding a new inventory involves the following components: permissions, groups, hosts, sources, and completed jobs.

Use the following procedure to create an inventory:

Procedure

  1. From the navigation panel, select Resources → Inventories. The Inventories window displays a list of the inventories that are currently available.
  2. Click Add, and select the type of inventory to create.
  3. Enter the appropriate details into the following fields:

    • Name: Enter a name appropriate for this inventory.
    • Optional: Description: Enter an arbitrary description as appropriate.
    • Organization: Required. Choose among the available organizations.
    • Only applicable to Smart Inventories: Smart Host Filter: Click the Search icon to open a separate window to filter hosts for this inventory. These options are based on the organization you chose.

      Filters are similar to tags in that tags are used to filter certain hosts that contain those names. Therefore, to populate the Smart Host Filter field, specify a tag that contains the hosts you want, not the hosts themselves. Enter the tag in the Search field and click Enter. Filters are case-sensitive. For more information, see Smart host filters.

    • Instance Groups: Click the Search icon to open a separate window. Select the instance group or groups for this inventory to run on. If the list is extensive, use the search to narrow the options. You can select multiple instance groups and sort them in the order that you want them run.


    • Optional: Labels: Supply labels that describe this inventory, so they can be used to group and filter inventories and jobs.
    • Only applicable to constructed inventories: Input inventories: Specify the source inventories to include in this constructed inventory. Click the Search icon to select from available inventories. Empty groups from input inventories are copied into the constructed inventory.
    • Optional: (Only applicable to constructed inventories): Cached timeout (seconds): Set the length of time before the cache plugin data times out.
    • Only applicable to constructed inventories: Verbosity: Control the level of output that Ansible produces as the playbook executes related to inventory sources associated with constructed inventories. Select the verbosity from Normal to various Verbose or Debug settings. This only appears in the "details" report view.

      • Verbose logging includes the output of all commands.
      • Debug logging is exceedingly verbose and includes information on SSH operations that can be useful in certain support instances. Most users do not need to see debug mode output.
    • Only applicable to constructed inventories: Limit: Restricts the number of returned hosts for the inventory source associated with the constructed inventory. You can paste a group name into the limit field to only include hosts in that group. For more information, see the Source vars setting.
    • Only applicable to standard inventories: Options: Check the Prevent Instance Group Fallback option to enable only the instance groups listed in the Instance Groups field to execute the job. If unchecked, all available instances in the execution pool will be used based on the hierarchy described in Control where a job runs in the Automation controller Administration Guide. Click the Help icon for additional information.

      Note

      Set the prevent_instance_group_fallback option for Smart Inventories through the API.

    • Variables (Source vars for constructed inventories):

      • Variables: Variable definitions and values to apply to all hosts in this inventory. Enter variables using either JSON or YAML syntax. Use the radio button to toggle between the two.
      • Source vars for constructed inventories creates groups, specifically under the groups key of the data. It accepts Jinja2 template syntax, renders it for every host, makes a true or false evaluation, and includes the host in the group (from the key of the entry) if the result is true. This is particularly useful because you can paste that group name into the limit field to only include hosts in that group. See Example 1 in Smart host filters.
  4. Click Save.

After saving the new inventory, you can proceed with configuring permissions, groups, hosts, and sources, and view completed jobs, if applicable to the type of inventory.

18.4.1. Adding permissions to inventories

Use the following procedure to add permissions to inventories:

Procedure

  1. From the navigation panel, select Resources → Inventories.
  2. Select an inventory, and in the Access tab, click Add.
  3. Select whether to add users or teams, and click Next.
  4. Select the check box next to one or more names to add users or teams from the list as members.
  5. Click Next.


    In this example, two users have been selected to be added.

  6. Select the roles you want the selected users or teams to have. Scroll down for a complete list of roles. Different resources have different options available.

    Add user roles

  7. Click Save to apply the roles to the selected users or teams and to add them as members.

The Add Users or Teams window closes to display the updated roles assigned for each user and team.

Permissions tab with Role Assignments

Removing a permission

  • To remove roles for a particular user, click the Disassociate icon next to its resource.

This launches a confirmation window, asking you to confirm the disassociation.

18.4.2. Adding groups to inventories

Inventories are divided into groups, which can contain hosts and other groups. Groups are only applicable to standard inventories and are not configurable directly through a Smart Inventory. However, you can associate an existing group through hosts that are used with standard inventories.

The following actions are available for standard inventories:

  • Create a new Group
  • Create a new Host
  • Run a command on the selected Inventory
  • Edit Inventory properties
  • View activity streams for Groups and Hosts
  • Obtain help building your Inventory
Note

Inventory sources are not associated with groups. Spawned groups are top-level and can still have child groups. All of these spawned groups can have hosts.

Use the following procedure to create a new group for an inventory:

Procedure

  1. Select the Inventory name you want to add groups to.
  2. In the Inventory Details page, select the Groups tab.
  3. Click Add to open the Create Group window.
  4. Enter the appropriate details:

    • Name: Required
    • Optional: Description: Enter a description as appropriate.
    • Variables: Enter definitions and values to be applied to all hosts in this group. Enter variables using either JSON or YAML syntax. Use the radio button to toggle between the two.
  5. Click Save.
  6. When you have added the group, the Group details page is displayed.
18.4.2.1. Adding groups within groups

Use the following procedure to add groups within groups:

Procedure

  1. When you have added a group to an inventory, the Group details page is displayed.
  2. Select the Related Groups tab.
  3. Click Add.
  4. Select whether to add a group that already exists in your configuration or create a new group.
  5. If creating a new group, enter the appropriate details into the required and optional fields:

    • Name (required):
    • Optional: Description: Enter a description as appropriate.
    • Variables: Enter definitions and values to be applied to all hosts in this group. Enter variables using either JSON or YAML syntax. Use the radio button to toggle between the two.
  6. Click Save.
  7. The Create Group window closes and the newly created group is displayed as an entry in the list of groups associated with the group that it was created for.

If you choose to add an existing group, available groups appear in a separate selection window.

When a group is selected, it is displayed in the list of groups associated with the group.

  • To configure additional groups and hosts under the subgroup, click the name of the subgroup from the list of groups and repeat the steps listed in this section.
18.4.2.2. View or edit inventory groups

The groups list view displays all your inventory groups, or you can filter it to only display the root groups. An inventory group is considered a root group if it is not a subset of another group.

You can delete a subgroup without concern for dependencies, because automation controller looks for dependencies such as child groups or hosts. If any exist, a confirmation window displays for you to choose whether to delete the root group and all of its subgroups and hosts; or to promote the subgroups so they become the top-level inventory groups, along with their hosts.

Delete group

18.4.3. Adding hosts to an inventory

You can configure hosts for the inventory as well as for groups and groups within groups.

Use the following procedure to add hosts:

Procedure

  1. Select the Inventory name you want to add hosts to.
  2. In the Inventory Details page, select the Hosts tab.
  3. Click Add.
  4. Select whether to add a host that already exists in your configuration or create a new host.
  5. If creating a new host, set the toggle to On to include this host while running jobs.
  6. Enter the appropriate details:

    • Host Name (required):
    • Optional: Description: Enter a description as appropriate.
    • Variables: Enter definitions and values to be applied to all hosts in this group, as in the following example:

      ---
      ansible_user: <username to SSH into>
      ansible_ssh_pass: <password for the username>
      ansible_become_pass: <password for becoming root>

      Enter variables using either JSON or YAML syntax. Use the radio button to toggle between the two.

  7. Click Save.
  8. The Create Host window closes and the newly created host is displayed in the list of hosts associated with the group that it was created for.

    Inventories add group host

    If you choose to add an existing host, available hosts appear in a separate selection window.

    When a host is selected, it is displayed in the list of hosts associated with the group.

  9. You can disassociate a host from this screen by selecting the host and clicking the Disassociate icon.

    Note

    You can also run ad hoc commands from this screen. For more information, see Running ad hoc commands.

  10. To configure additional groups for the host, click the name of the host from the list of hosts.

    Inventories add group host emphasized

    This opens the Details tab of the selected host.

  11. Select the Groups tab to configure groups for the host.
  12. Click Add to associate the host with an existing group. Available groups appear in a separate selection window.


  13. Select the groups to associate with the host and click Save.

    When a group is associated, it is displayed in the list of groups associated with the host.

  14. If a host was used to run a job, you can view details about those jobs in the Completed Jobs tab of the host.
  15. Click Expanded to view details about each job.


Note

You can create hosts in bulk using the newly added endpoint in the API, /api/v2/bulk/host_create. This endpoint accepts JSON and you can specify the target inventory and a list of hosts to add to the inventory. These hosts must be unique within the inventory. Either all hosts are added, or an error is returned indicating why the operation was not able to complete. Use the OPTIONS request to return the relevant schema.

For more information, see Bulk endpoints in the Automation Controller API Guide.
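
For example, a hedged sketch of a request body for this endpoint; the inventory ID and host names are illustrative placeholders, and you should use an OPTIONS request for the authoritative schema:

POST /api/v2/bulk/host_create

{
  "inventory": 1,
  "hosts": [
    { "name": "host1.example.com" },
    { "name": "host2.example.com", "variables": "ansible_user: admin" }
  ]
}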

18.4.4. Adding a source

Adding a source to an inventory only applies to standard inventories. Smart inventories inherit their source from the standard inventories they are associated with.

Use the following procedure to configure the source for the inventory:

Procedure

  1. Select the Inventory name you want to add a source to.
  2. In the Inventory Details page, select the Sources tab.
  3. Click Add. This opens the Create Source window.

    Inventories create source

  4. Enter the appropriate details:

    • Name (required):
    • Optional: Description: Enter a description as appropriate.
    • Optional: Execution Environment: Click the Search icon or enter the name of the execution environment with which you want to run your inventory imports. For more information on building an execution environment, see Execution environments.
    • Source: Choose a source for your inventory. For more information on sources, and supplying the appropriate information, see Inventory sources.
  5. When the information for your chosen Inventory sources is complete, you can optionally specify other common parameters, such as verbosity, host filters, and variables.
  6. Use the Verbosity menu to select the level of output on any inventory source’s update jobs.
  7. Use the Host Filter field to specify only matching host names to be imported into automation controller.
  8. In the Enabled Variable field, specify that automation controller retrieves the enabled state from the dictionary of host variables. You can specify the enabled variable using dot notation as 'foo.bar', in which case the lookup searches nested dictionaries, equivalent to: from_dict.get('foo', {}).get('bar', default).
  9. If you specified a dictionary of host variables in the Enabled Variable field, you can provide a value to enable on import. For example, with enabled_var='status.power_state' and enabled_value='powered_on' in the following host variables, the host is marked enabled:

    {
      "status": {
        "power_state": "powered_on",
        "created": "2020-08-04T18:13:04+00:00",
        "healthy": true
      },
      "name": "foobar",
      "ip_address": "192.168.2.1"
    }

    If power_state is any value other than powered_on, then the host is disabled when imported into automation controller. If the key is not found, then the host is enabled.

  10. All cloud inventory sources have the following update options:

    • Overwrite: If checked, any hosts and groups that were previously present on the external source but are now removed, are removed from the automation controller inventory. Hosts and groups that were not managed by the inventory source are promoted to the next manually created group, or, if there is no manually created group to promote them into, they are left in the "all" default group for the inventory.

      When not checked, local child hosts and groups not found on the external source remain untouched by the inventory update process.

    • Overwrite Variables: If checked, all variables for child groups and hosts are removed and replaced by those found on the external source.

      When not checked, a merge is performed, combining local variables with those found on the external source.

    • Update on Launch: Each time a job runs using this inventory, refresh the inventory from the selected source before executing job tasks.

      To avoid job overflows if jobs are spawned faster than the inventory can synchronize, selecting this enables you to configure a Cache Timeout to cache previous inventory synchronizations for a certain number of seconds.

      The Update on Launch setting refers to a dependency system for projects and inventory, and does not specifically exclude two jobs from running at the same time.

      If a cache timeout is specified, then the dependencies for the second job are created, and it uses the project and inventory update that the first job spawned.

      Both jobs then wait for that project or inventory update to finish before proceeding. If they are different job templates, they can then both start and run at the same time, if the system has the capacity to do so. If you intend to use automation controller’s provisioning callback feature with a dynamic inventory source, Update on Launch must be set for the inventory group.

      If you synchronize an inventory source that uses a project that has Update On Launch set, then the project might automatically update (according to cache timeout rules) before the inventory update starts.

      You can create a job template that uses an inventory that sources from the same project that the template uses. In such a case, the project updates and then the inventory updates (if updates are not already in progress, or if the cache timeout has not already expired).

  11. Review your entries and selections, and click Save. This enables you to configure additional details, such as schedules and notifications.
  12. To configure schedules associated with this inventory source, click the Schedules tab:

    • If schedules are already set up, then review, edit, enable or disable your schedule preferences.
    • If schedules have not been set up, for more information about setting up schedules, see Schedules.

18.4.5. Configuring notifications for the source

Use the following procedure to configure notifications for the source:

  1. In the Inventory Details page, select the Notifications tab.

    Note

    The Notifications tab is only present when you have saved the newly-created source.

    Notification tab

  2. If notifications are already set up, use the toggles to enable or disable the notifications to use with your particular source. For more information, see Enable and Disable Notifications.
  3. If notifications have not been set up, see Notifications for more information.
  4. Review your entries and selections.
  5. Click Save.

When a source is defined, it is displayed in the list of sources associated with the inventory. From the Sources tab you can perform a sync on a single source, or sync all of them at once. You can also perform additional actions such as scheduling a sync process, and edit or delete the source.

Inventories view sources

18.4.5.1. Inventory sources

Choose a source that matches the inventory type against which hosts can be entered:

18.4.5.1.1. Sourcing from a Project

An inventory that is sourced from a project means that it uses the SCM type from the project it is tied to. For example, if the project’s source is from GitHub, then the inventory uses the same source.

Use the following procedure to configure a project-sourced inventory:

Procedure

  1. In the Create new source page, select Sourced from a Project from the Source list.
  2. The Create Source window expands with additional fields. Enter the following details:

    • Optional: Source Control Branch/Tag/Commit: Enter the SCM branch, tags, commit hashes, arbitrary refs, or revision number (if applicable) from the source control (Git or Subversion) to checkout.

      This field only displays if the sourced project has the Allow Branch Override option checked. For further information, see SCM Types - Git and Subversion.

      Allow branch override

      Some commit hashes and refs might not be available unless you also provide a custom refspec in the next field. If left blank, the default is HEAD, which is the last checked-out Branch/Tag/Commit for this project.

    • Optional: Credential: Specify the credential to use for this source.
    • Project (required): Pre-populates with a default project, otherwise, specify the project this inventory is using as its source. Click the Search icon to choose from a list of projects. If the list is extensive, use the search to narrow the options.
    • Inventory File (required): Select an inventory file associated with the sourced project. If not already populated, you can type it into the text field within the menu to filter extraneous file types. In addition to a flat file inventory, you can point to a directory or an inventory script.


  3. Optional: You can specify the verbosity, host filter, enabled variable/value, and update options as described in Adding a source.
  4. Optional: To pass to the custom inventory script, you can set environment variables in the Environment Variables field. You can also place inventory scripts in source control and then run it from a project. For more information, see Inventory File Importing in Automation controller Administration Guide.
Note

If you are executing a custom inventory script from SCM, ensure that you set the execution bit (chmod +x) for the script in your upstream source control.

If you do not, automation controller throws an [Errno 13] Permission denied error on execution.

18.4.5.1.2. Amazon Web Services EC2

Use the following procedure to configure an AWS EC2-sourced inventory:

Procedure

  1. In the Create new source page, select Amazon EC2 from the Source list.
  2. The Create Source window expands with additional fields. Enter the following details:

    • Optional: Credential: Choose from an existing AWS credential (for more information, see Credentials).

      If automation controller is running on an EC2 instance with an assigned IAM Role, the credential can be omitted, and the security credentials from the instance metadata are used instead. For more information on using IAM Roles, see the IAM Roles for Amazon EC2 documentation at Amazon.

  3. Optional: You can specify the verbosity, host filter, enabled variables or values, and update options as described in Adding a source.
  4. Use the Source Variables field to override variables used by the aws_ec2 inventory plugin. Enter variables using either JSON or YAML syntax. Use the radio button to toggle between the two. For more information on these variables, see the aws inventory plugin documentation.
Note

If you only use include_filters, the AWS plugin always returns all the hosts. To use this correctly, put the first condition in filters, and then build the rest of the OR conditions as a list of include_filters.
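
For example, a hedged sketch of Source Variables that returns running instances OR instances tagged env=dev; the tag name and value are placeholders, and the aws_ec2 plugin documentation is the authoritative reference for this option syntax:

filters:
  instance-state-name: running
include_filters:
  - tag:env:
      - dev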

18.4.5.1.3. Google Compute Engine

Use the following procedure to configure a Google-sourced inventory.

Procedure

  1. In the Create new source page, select Google Compute Engine from the Source list.
  2. The Create Source window expands with the required Credential field. Choose from an existing GCE Credential. For more information, see Credentials.
  3. Optional: You can specify the verbosity, host filter, enabled variables or values, and update options as described in Adding a source.
  4. Use the Source Variables field to override variables used by the gcp_compute inventory plugin. Enter variables using either JSON or YAML syntax. Use the radio button to toggle between the two. For more information on these variables, see the gcp_compute inventory plugin documentation.
18.4.5.1.4. Microsoft Azure resource manager

Use the following procedure to configure an Azure Resource Manager-sourced inventory:

Procedure

  1. In the Create new source page, select Microsoft Azure Resource Manager from the Source list.
  2. The Create Source window expands with the required Credential field. Choose from an existing Azure Credential. For more information, see Credentials.
  3. Optional: You can specify the verbosity, host filter, enabled variables or values, and update options as described in Adding a source.
  4. Use the Source Variables field to override variables used by the azure_rm inventory plugin. Enter variables using either JSON or YAML syntax. Use the radio button to toggle between the two. For more information on these variables, see the azure_rm inventory plugin documentation.
18.4.5.1.5. VMware vCenter

Use the following procedure to configure a VMware-sourced inventory.

Procedure

  1. In the Create new source page, select VMware vCenter from the Source list.
  2. The Create Source window expands with the required Credential field. Choose from an existing VMware Credential. For more information, see Credentials.
  3. Optional: You can specify the verbosity, host filter, enabled variables or values, and update options as described in Adding a source.
  4. Use the Source Variables field to override variables used by the vmware_inventory inventory plugin. Enter variables using either JSON or YAML syntax. Use the radio button to toggle between the two. For more information on these variables, see the vmware_inventory inventory plugin.
Note

VMware properties have changed from lowercase to camelCase. Automation controller provides aliases for the top-level keys, but lowercase keys in nested properties have been discontinued. For a list of valid and supported properties, see Using Virtual machine attributes in VMware dynamic inventory plugin.

18.4.5.1.6. Red Hat Satellite 6

Use the following procedure to configure a Red Hat Satellite-sourced inventory.

Procedure

  1. In the Create new source page, select Red Hat Satellite from the Source list.
  2. The Create Source window expands with the required Credential field. Choose from an existing Satellite Credential. For more information, see Credentials.
  3. Optional: You can specify the verbosity, host filter, enabled variables or values, and update options as described in Adding a source.
  4. Use the Source Variables field to specify parameters used by the foreman inventory source. Enter variables using either JSON or YAML syntax. Use the radio button to toggle between the two. For more information on these variables, see the Foreman inventory source in the Ansible documentation.

If you encounter an issue with the automation controller inventory not having the "related groups" from Satellite, you might need to define these variables in the inventory source. For more information, see Red Hat Satellite 6.

If you see the message "no foreman.id variable(s)" when syncing the inventory, see the solution on the Red Hat Customer Portal at https://access.redhat.com/solutions/5826451. Be sure to log in with your customer credentials to access the full article.

18.4.5.1.7. Red Hat Insights

Use the following procedure to configure a Red Hat Insights-sourced inventory.

Procedure

  1. In the Create new source page, select Red Hat Insights from the Source list.
  2. The Create Source window expands with the required Credential field. Choose from an existing Red Hat Insights Credential. For more information, refer to Credentials.
  3. Optional: You can specify the verbosity, host filter, enabled variables or values, and update options as described in Adding a source.
  4. Use the Source Variables field to override variables used by the insights inventory plugin. Enter variables using either JSON or YAML syntax. Use the radio button to toggle between the two. For more information on these variables, see insights inventory plugin.
18.4.5.1.8. OpenStack

Use the following procedure to configure an OpenStack-sourced inventory.

Procedure

  1. In the Create new source page, select OpenStack from the Source list.
  2. The Create Source window expands with the required Credential field. Choose from an existing OpenStack Credential. For more information, refer to Credentials.
  3. Optional: You can specify the verbosity, host filter, enabled variables or values, and update options as described in Adding a source.
  4. Use the Source Variables field to override variables used by the openstack inventory plugin. Enter variables using either JSON or YAML syntax. Use the radio button to toggle between the two. For more information on these variables, see openstack inventory plugin.
18.4.5.1.9. Red Hat Virtualization

Use the following procedure to configure a Red Hat Virtualization-sourced inventory.

Procedure

  1. In the Create new source page, select Red Hat Virtualization from the Source list.
  2. The Create Source window expands with the required Credential field. Choose from an existing Red Hat Virtualization Credential. For more information, see Credentials.
  3. Optional: You can specify the verbosity, host filter, enabled variables or values, and update options as described in Adding a source.
  4. Use the Source Variables field to override variables used by the ovirt inventory plugin. Enter variables using either JSON or YAML syntax. Use the radio button to toggle between the two. For more information on these variables, see ovirt inventory plugin.
Note

Red Hat Virtualization (ovirt) inventory source requests are secure by default. To change this default setting, set the key ovirt_insecure to true in source_variables, which is only available from the API details of the inventory source at the /api/v2/inventory_sources/N/ endpoint.

18.4.5.1.10. Red Hat Ansible Automation Platform

Use the following procedure to configure an automation controller-sourced inventory.

Procedure

  1. In the Create new source page, select Red Hat Ansible Automation Platform from the Source list.
  2. The Create Source window expands with the required Credential field. Choose from an existing Red Hat Ansible Automation Platform Credential. For more information, refer to Credentials.
  3. Optional: You can specify the verbosity, host filter, enabled variables or values, and update options as described in Adding a source.
  4. Use the Source Variables field to override variables used by the Controller inventory plugin. Enter variables using either JSON or YAML syntax. Use the radio button to toggle between the two. For more information on these variables, see Controller inventory plugin. This requires your Red Hat Customer Portal login.
18.4.5.2. Export old inventory scripts

Despite the removal of the custom inventory scripts API, the scripts are still saved in the database. The commands described in this section enable you to recover the scripts from the database in a format that is suitable for you to subsequently check into source control.

Use the following commands:

$ awx-manage export_custom_scripts --filename=my_scripts.tar

Dump of old custom inventory scripts at my_scripts.tar

Making use of the output:

$ mkdir my_scripts
$ tar -xf my_scripts.tar -C my_scripts

The name of each script has the form _<pk>__<name>. This is the naming scheme used for project folders.

$ ls my_scripts
_10__inventory_script_rawhook              _19__                                       _30__inventory_script_listenhospital
_11__inventory_script_upperorder           _1__inventory_script_commercialinternet45  _4__inventory_script_whitestring
_12__inventory_script_eastplant            _22__inventory_script_pinexchange          _5__inventory_script_literaturepossession
_13__inventory_script_governmentculture    _23__inventory_script_brainluck            _6__inventory_script_opportunitytelephone
_14__inventory_script_bottomguess          _25__inventory_script_buyerleague          _7__inventory_script_letjury
_15__inventory_script_wallisland           _26__inventory_script_lifesport            _8__random_inventory_script
_16__inventory_script_wallisland           _27__inventory_script_exchangesomewhere    _9__random_inventory_script
_17__inventory_script_bidstory             _28__inventory_script_boxchild
_18__p                                     _29__inventory_script_wearstress

Each file contains a script. Scripts can be written in Bash, Python, Ruby, or other languages, so no file extension is included. They are all directly executable. Executing a script dumps the inventory data.

$ ./my_scripts/_11__inventory_script_upperorder
{"group\ud801\udcb0\uc20e\u7b0e\ud81c\udfeb\ub12b\ub4d0\u9ac6\ud81e\udf07\u6ff9\uc17b": {"hosts":
["host_\ud821\udcad\u68b6\u7a51\u93b4\u69cf\uc3c2\ud81f\uddbe\ud820\udc92\u3143\u62c7",
"host_\u6057\u3985\u1f60\ufefb\u1b22\ubd2d\ua90c\ud81a\udc69\u1344\u9d15",
"host_\u78a0\ud820\udef3\u925e\u69da\ua549\ud80c\ude7e\ud81e\udc91\ud808\uddd1\u57d6\ud801\ude57",
"host_\ud83a\udc2d\ud7f7\ua18a\u779a\ud800\udf8b\u7903\ud820\udead\u4154\ud808\ude15\u9711",
"host_\u18a1\u9d6f\u08ac\u74c2\u54e2\u740e\u5f02\ud81d\uddee\ufbd6\u4506"], "vars": {"ansible_host": "127.0.0.1", "ansible_connection":
"local"}}}

You can verify functionality with ansible-inventory. This gives the same data, but reformatted.

$ ansible-inventory -i ./my_scripts/_11__inventory_script_upperorder --list --export

Following the preceding example, you can cd into my_scripts, issue a git init command, add the scripts you want, push them to source control, and then create an SCM inventory source in the user interface.

For more information on syncing or using custom inventory scripts, see Inventory file importing in the Automation controller Administration Guide.

18.5. View completed jobs

If an inventory was used to run a job, you can view details about those jobs in the Completed Jobs tab of the inventory and click Expanded to view details about each job.

Inventories view completed jobs

18.6. Running Ad Hoc commands

Ad hoc refers to using Ansible to perform a quick command, using /usr/bin/ansible, rather than the orchestration language, which is /usr/bin/ansible-playbook. An example of an ad hoc command might be rebooting 50 machines in your infrastructure. Anything you can do ad hoc can be accomplished by writing a Playbook. Playbooks can also glue lots of other operations together.

Use the following procedure to run an ad hoc command:

Procedure

  1. Select an inventory source from the list of hosts or groups. The inventory source can be a single group or host, a selection of multiple hosts, or a selection of multiple groups.

    ad hoc-commands-inventory-home

  2. Click Run Command. The Run command window opens.

    Run command window

  3. Enter the following information:

    • Module: Select one of the modules that automation controller supports running commands against:

      command, shell, yum, apt, apt_key, apt_repository, apt_rpm, service, group, user, mount, ping, selinux, setup, win_ping, win_service, win_updates, win_group, win_user

    • Arguments: Provide arguments to be used with the module you selected.
    • Limit: Enter the limit used to target hosts in the inventory. To target all hosts in the inventory enter all or *, or leave the field blank. This is automatically populated with whatever was selected in the previous view before clicking the launch button.
    • Machine Credential: Select the credential to use when accessing the remote hosts to run the command. Choose the credential containing the username and SSH key or password that Ansible needs to log into the remote hosts.
    • Verbosity: Select a verbosity level for the standard output.
    • Forks: If needed, select the number of parallel or simultaneous processes to use while executing the command.
    • Show Changes: Select to enable the display of Ansible changes in the standard output. The default is OFF.
    • Enable Privilege Escalation: If enabled, the playbook is run with administrator privileges. This is the equivalent of passing the --become option to the ansible command.
    • Extra Variables: Provide extra command line variables to be applied when running this inventory. Enter variables using either JSON or YAML syntax. Use the radio button to toggle between the two.

      ad hoc-commands-inventory-run-command

  4. Click Next to choose the execution environment you want the ad hoc command to be run against.

    Choose execution environment

  5. Click Next to choose the credential you want to use.
  6. Click Launch. The results display in the Output tab of the module’s job window.

    ad hoc-commands-inventory-results-example

Chapter 19. Supported Inventory plugin templates

After upgrading to 4.x, existing configurations are migrated to the new format that produces a backwards-compatible inventory output. Use the following templates to aid in migrating your inventories to the new-style inventory plugin output.

19.1. Amazon Web Services EC2

compose:
  ansible_host: public_ip_address
  ec2_account_id: owner_id
  ec2_ami_launch_index: ami_launch_index | string
  ec2_architecture: architecture
  ec2_block_devices: dict(block_device_mappings | map(attribute='device_name') | list | zip(block_device_mappings | map(attribute='ebs.volume_id') | list))
  ec2_client_token: client_token
  ec2_dns_name: public_dns_name
  ec2_ebs_optimized: ebs_optimized
  ec2_eventsSet: events | default("")
  ec2_group_name: placement.group_name
  ec2_hypervisor: hypervisor
  ec2_id: instance_id
  ec2_image_id: image_id
  ec2_instance_profile: iam_instance_profile | default("")
  ec2_instance_type: instance_type
  ec2_ip_address: public_ip_address
  ec2_kernel: kernel_id | default("")
  ec2_key_name: key_name
  ec2_launch_time: launch_time | regex_replace(" ", "T") | regex_replace("(\+)(\d\d):(\d)(\d)$", ".\g<2>\g<3>Z")
  ec2_monitored: monitoring.state in ['enabled', 'pending']
  ec2_monitoring_state: monitoring.state
  ec2_persistent: persistent | default(false)
  ec2_placement: placement.availability_zone
  ec2_platform: platform | default("")
  ec2_private_dns_name: private_dns_name
  ec2_private_ip_address: private_ip_address
  ec2_public_dns_name: public_dns_name
  ec2_ramdisk: ramdisk_id | default("")
  ec2_reason: state_transition_reason
  ec2_region: placement.region
  ec2_requester_id: requester_id | default("")
  ec2_root_device_name: root_device_name
  ec2_root_device_type: root_device_type
  ec2_security_group_ids: security_groups | map(attribute='group_id') | list |  join(',')
  ec2_security_group_names: security_groups | map(attribute='group_name') | list |  join(',')
  ec2_sourceDestCheck: source_dest_check | default(false) | lower | string
  ec2_spot_instance_request_id: spot_instance_request_id | default("")
  ec2_state: state.name
  ec2_state_code: state.code
  ec2_state_reason: state_reason.message if state_reason is defined else ""
  ec2_subnet_id: subnet_id | default("")
  ec2_tag_Name: tags.Name
  ec2_virtualization_type: virtualization_type
  ec2_vpc_id: vpc_id | default("")
filters:
  instance-state-name:
  - running
groups:
  ec2: true
hostnames:
  - network-interface.addresses.association.public-ip
  - dns-name
  - private-dns-name
keyed_groups:
  - key: image_id | regex_replace("[^A-Za-z0-9\_]", "_")
    parent_group: images
    prefix: ''
    separator: ''
  - key: placement.availability_zone
    parent_group: zones
    prefix: ''
    separator: ''
  - key: ec2_account_id | regex_replace("[^A-Za-z0-9\_]", "_")
    parent_group: accounts
    prefix: ''
    separator: ''
  - key: ec2_state | regex_replace("[^A-Za-z0-9\_]", "_")
    parent_group: instance_states
    prefix: instance_state
  - key: platform | default("undefined") | regex_replace("[^A-Za-z0-9\_]", "_")
    parent_group: platforms
    prefix: platform
  - key: instance_type | regex_replace("[^A-Za-z0-9\_]", "_")
    parent_group: types
    prefix: type
  - key: key_name | regex_replace("[^A-Za-z0-9\_]", "_")
    parent_group: keys
    prefix: key
  - key: placement.region
    parent_group: regions
    prefix: ''
    separator: ''
  - key: security_groups | map(attribute="group_name") | map("regex_replace", "[^A-Za-z0-9\_]", "_") | list
    parent_group: security_groups
    prefix: security_group
  - key: dict(tags.keys() | map("regex_replace", "[^A-Za-z0-9\_]", "_") | list | zip(tags.values()
      | map("regex_replace", "[^A-Za-z0-9\_]", "_") | list))
    parent_group: tags
    prefix: tag
  - key: tags.keys() | map("regex_replace", "[^A-Za-z0-9\_]", "_") | list
    parent_group: tags
    prefix: tag
  - key: vpc_id | regex_replace("[^A-Za-z0-9\_]", "_")
    parent_group: vpcs
    prefix: vpc_id
  - key: placement.availability_zone
    parent_group: '{{ placement.region }}'
    prefix: ''
    separator: ''
plugin: amazon.aws.aws_ec2
use_contrib_script_compatible_sanitization: true

19.2. Google Compute Engine

auth_kind: serviceaccount
compose:
  ansible_ssh_host: networkInterfaces[0].accessConfigs[0].natIP | default(networkInterfaces[0].networkIP)
  gce_description: description if description else None
  gce_id: id
  gce_image: image
  gce_machine_type: machineType
  gce_metadata: metadata.get("items", []) | items2dict(key_name="key", value_name="value")
  gce_name: name
  gce_network: networkInterfaces[0].network.name
  gce_private_ip: networkInterfaces[0].networkIP
  gce_public_ip: networkInterfaces[0].accessConfigs[0].natIP | default(None)
  gce_status: status
  gce_subnetwork: networkInterfaces[0].subnetwork.name
  gce_tags: tags.get("items", [])
  gce_zone: zone
hostnames:
- name
- public_ip
- private_ip
keyed_groups:
- key: gce_subnetwork
  prefix: network
- key: gce_private_ip
  prefix: ''
  separator: ''
- key: gce_public_ip
  prefix: ''
  separator: ''
- key: machineType
  prefix: ''
  separator: ''
- key: zone
  prefix: ''
  separator: ''
- key: gce_tags
  prefix: tag
- key: status | lower
  prefix: status
- key: image
  prefix: ''
  separator: ''
plugin: google.cloud.gcp_compute
retrieve_image_info: true
use_contrib_script_compatible_sanitization: true

19.3. Microsoft Azure Resource Manager

conditional_groups:
  azure: true
default_host_filters: []
fail_on_template_errors: false
hostvar_expressions:
  computer_name: name
  private_ip: private_ipv4_addresses[0] if private_ipv4_addresses else None
  provisioning_state: provisioning_state | title
  public_ip: public_ipv4_addresses[0] if public_ipv4_addresses else None
  public_ip_id: public_ip_id if public_ip_id is defined else None
  public_ip_name: public_ip_name if public_ip_name is defined else None
  tags: tags if tags else None
  type: resource_type
keyed_groups:
- key: location
  prefix: ''
  separator: ''
- key: tags.keys() | list if tags else []
  prefix: ''
  separator: ''
- key: security_group
  prefix: ''
  separator: ''
- key: resource_group
  prefix: ''
  separator: ''
- key: os_disk.operating_system_type
  prefix: ''
  separator: ''
- key: dict(tags.keys() | map("regex_replace", "^(.*)$", "\1_") | list | zip(tags.values() | list)) if tags else []
  prefix: ''
  separator: ''
plain_host_names: true
plugin: azure.azcollection.azure_rm
use_contrib_script_compatible_sanitization: true

19.4. VMware vCenter

compose:
  ansible_host: guest.ipAddress
  ansible_ssh_host: guest.ipAddress
  ansible_uuid: 99999999 | random | to_uuid
  availablefield: availableField
  configissue: configIssue
  configstatus: configStatus
  customvalue: customValue
  effectiverole: effectiveRole
  guestheartbeatstatus: guestHeartbeatStatus
  layoutex: layoutEx
  overallstatus: overallStatus
  parentvapp: parentVApp
  recenttask: recentTask
  resourcepool: resourcePool
  rootsnapshot: rootSnapshot
  triggeredalarmstate: triggeredAlarmState
filters:
- runtime.powerState == "poweredOn"
keyed_groups:
- key: config.guestId
  prefix: ''
  separator: ''
- key: '"templates" if config.template else "guests"'
  prefix: ''
  separator: ''
plugin: community.vmware.vmware_vm_inventory
properties:
- availableField
- configIssue
- configStatus
- customValue
- datastore
- effectiveRole
- guestHeartbeatStatus
- layout
- layoutEx
- name
- network
- overallStatus
- parentVApp
- permission
- recentTask
- resourcePool
- rootSnapshot
- snapshot
- triggeredAlarmState
- value
- capability
- config
- guest
- runtime
- storage
- summary
strict: false
with_nested_properties: true

19.5. Red Hat Satellite 6

group_prefix: foreman_
keyed_groups:
- key: foreman['environment_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_') | regex_replace('none', '')
  prefix: foreman_environment_
  separator: ''
- key: foreman['location_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_')
  prefix: foreman_location_
  separator: ''
- key: foreman['organization_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_')
  prefix: foreman_organization_
  separator: ''
- key: foreman['content_facet_attributes']['lifecycle_environment_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_')
  prefix: foreman_lifecycle_environment_
  separator: ''
- key: foreman['content_facet_attributes']['content_view_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_')
  prefix: foreman_content_view_
  separator: ''
legacy_hostvars: true
plugin: theforeman.foreman.foreman
validate_certs: false
want_facts: true
want_hostcollections: false
want_params: true

19.6. OpenStack

expand_hostvars: true
fail_on_errors: true
inventory_hostname: uuid
plugin: openstack.cloud.openstack

19.7. Red Hat Virtualization

compose:
  ansible_host: (devices.values() | list)[0][0] if devices else None
keyed_groups:
- key: cluster
  prefix: cluster
  separator: _
- key: status
  prefix: status
  separator: _
- key: tags
  prefix: tag
  separator: _
ovirt_hostname_preference:
- name
- fqdn
ovirt_insecure: false
plugin: ovirt.ovirt.ovirt

19.8. Red Hat Ansible Automation Platform

include_metadata: true
inventory_id: <inventory_id or url_quoted_named_url>
plugin: awx.awx.tower
validate_certs: <true or false>
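
A hypothetical filled-in example (the inventory ID and certificate validation value are illustrative only):

include_metadata: true
inventory_id: 42
plugin: awx.awx.tower
validate_certs: true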

Chapter 20. Job templates

A job template is a definition and set of parameters for running an Ansible job. Job templates are useful to run the same job many times. They also encourage the reuse of Ansible playbook content and collaboration between teams.

The Templates list view shows job templates that are currently available. The default view is collapsed (Compact), showing the template name, template type, and the timestamp of the last job that ran using that template. You can click the arrow icon next to each entry to expand it and view more information. This list is sorted alphabetically by name, but you can sort by other criteria, or search by various fields and attributes of a template.

Job templates home

From this screen you can launch, edit, and copy a workflow job template.

Note

Job templates can be used to build a workflow template. Templates that show the Workflow Visualizer icon next to them are workflow templates. Clicking the icon enables you to build a workflow graphically. Many parameters in a job template enable you to select Prompt on Launch; values that you change at the workflow level do not affect the values assigned at the job template level. For instructions, see the Workflow Visualizer section.

20.1. Creating a job template

Procedure

  1. On the Templates list view, select Add job template from the Add list.
  2. Enter the appropriate details in the following fields:

    Note

    If a field has the Prompt on launch checkbox selected, you are prompted for the value for that field when launching the job. Most prompted values override any values set in the job template. Exceptions are noted in the following table.

    Field | Options | Prompt on Launch

    Name

    Enter a name for the job.

    N/A

    Description

    Enter an arbitrary description as appropriate (optional).

    N/A

    Job Type

    Choose a job type:

    • Run: Start the playbook when launched, running Ansible tasks on the selected hosts.
    • Check: Perform a "dry run" of the playbook and report changes that would be made without actually making them. Tasks that do not support check mode are missed and do not report potential changes.

    For more information about job types see the Playbooks section of the Ansible documentation.

    Yes

    Inventory

    Choose the inventory to be used with this job template from the inventories available to the logged in user.

    A System Administrator must grant you or your team permissions to be able to use certain inventories in a job template.

    Yes.

    The inventory prompt shows up as its own step in a later prompt window.

    Project

    Select the project to use with this job template from the projects available to the user that is logged in.

    N/A

    SCM branch

    This field is only present if you chose a project that allows branch override. Specify the overriding branch to use in your job run. If left blank, the specified SCM branch (or commit hash or tag) from the project is used.

    For more information, see Job branch overriding.

    Yes

    Execution Environment

    Select the container image to be used to run this job. You must select a project before you can select an execution environment.

    Yes.

    The execution environment prompt shows up as its own step in a later prompt window.

    Playbook

    Choose the playbook to be launched with this job template from the available playbooks. This field automatically populates with the names of the playbooks found in the project base path for the selected project. Alternatively, you can enter the name of a playbook that is not listed, such as the name of a file (for example, foo.yml) that you want to run. If you enter an invalid filename, the template displays an error or the job fails.

    N/A

    Credentials

    Select the examine icon to open a separate window.

    Choose the credential from the available options to use with this job template.

    Use the drop-down menu list to filter by credential type if the list is extensive. Some credential types are not listed because they do not apply to certain job templates.

    • If selected, when launching a job template that has a default credential, supplying another credential of the same type replaces the default credential. The following is an example of this message:

    Job Template default credentials must be replaced with one of the same type. Please select a credential for the following types in order to proceed: Machine.

    • Alternatively, you can add more credentials as you see fit.
    • The credential prompt shows up as its own step in a later prompt window.

    Labels

    • Optionally supply labels that describe this job template, such as dev or test.
    • Use labels to group and filter job templates and completed jobs in the display.
    • Labels are created when they are added to the job template. Labels are associated with a single Organization by using the Project that is provided in the job template. Members of the Organization can create labels on a job template if they have edit permissions (such as the admin role).
    • Once you save the job template, the labels appear in the Job Templates overview in the Expanded view.
    • Select Disassociate beside a label to remove it. When a label is removed, it is no longer associated with that particular Job or Job Template, but it remains associated with any other jobs that reference it.
    • Jobs inherit labels from the Job Template at the time of launch. If you delete a label from a Job Template, it is also deleted from the Job.
    • If selected, even if a default value is supplied, you are prompted when launching to supply additional labels, if needed.
    • You cannot delete existing labels; selecting Disassociate only removes newly added labels, not existing default labels.

    Variables

    • Pass extra command line variables to the playbook. This is the -e or --extra-vars command-line parameter for ansible-playbook, documented in the Ansible documentation at Defining variables at runtime.
    • Provide key/value pairs by using either YAML or JSON. These variables have the maximum precedence and override other variables specified elsewhere. The following is an example value:

      git_branch: production
      release_version: 1.5

    Yes.

    If you want to be able to specify extra_vars on a schedule, you must select Prompt on launch for Variables on the job template, or enable a survey on the job template. Those answered survey questions become extra_vars.

    Forks

    The number of parallel or simultaneous processes to use while executing the playbook. A value of zero uses the Ansible default setting, which is five parallel processes unless overridden in /etc/ansible/ansible.cfg.

    Yes

    Limit

    A host pattern to further constrain the list of hosts managed or affected by the playbook. You can separate multiple patterns with colons (:). As with core Ansible:

    • a:b means "in group a or b"
    • a:b:&c means "in a or b but must be in c"
    • a:!b means "in a, and definitely not in b"

    For more information, see Patterns: targeting hosts and groups in the Ansible documentation.

    Yes

    If not selected, the job template executes against all nodes in the inventory or only the nodes predefined on the Limit field. When running as part of a workflow, the workflow job template limit is used instead.

    Verbosity

    Control the level of output Ansible produces as the playbook executes. Choose the verbosity from Normal to various Verbose or Debug settings. This only appears in the details report view. Verbose logging includes the output of all commands. Debug logging is exceedingly verbose and includes information about SSH operations that can be useful in certain support instances.

    Verbosity 5 causes automation controller to block heavily when jobs are running, which could delay reporting that the job has finished (even though it has) and can cause the browser tab to lock up.

    Yes

    Job Slicing

    Specify the number of slices you want this job template to run. Each slice runs the same tasks against a part of the inventory. For more information about job slices, see Job Slicing.

    Yes

    Timeout

    This enables you to specify the length of time (in seconds) that the job can run before it is canceled. Consider the following for setting the timeout value:

    • There is a global timeout defined in the settings which defaults to 0, indicating no timeout.
    • A negative timeout (<0) on a job template is a true "no timeout" on the job.
    • A timeout of 0 on a job template defaults the job to the global timeout (which is no timeout by default).
    • A positive timeout sets the timeout for that job template.

    Yes

    Show Changes

    Enables you to see the changes made by Ansible tasks.

    Yes

    Instance Groups

    Choose Instance and Container Groups to associate with this job template. If the list is extensive, use the examine icon to narrow the options. Job template instance groups contribute to the job scheduling criteria, see Job Runtime Behavior and Control where a job runs for rules. A System Administrator must grant you or your team permissions to be able to use an instance group in a job template. Use of a container group requires admin rights.

    • Yes.

    If selected, you are providing the job's preferred instance groups in order of preference. If the first group is out of capacity, later groups in the list are considered until one with capacity is available, at which point it is selected to run the job.

    • If you prompt for an instance group, what you enter replaces the normal instance group hierarchy and overrides all of the organizations' and inventories' instance groups.
    • The Instance Groups prompt shows up as its own step in a later prompt window.

    Job Tags

    Type and select the Create menu to specify which parts of the playbook should be executed. For more information and examples see Tags in the Ansible documentation.

    Yes

    Skip Tags

    Type and select the Create menu to specify certain tasks or parts of the playbook to skip. For more information and examples see Tags in the Ansible documentation.

    Yes

  3. Specify the following Options for launching this template, if necessary:

    • Privilege Escalation: If checked, you enable this playbook to run as an administrator. This is the equivalent of passing the --become option to the ansible-playbook command.
    • Provisioning Callbacks: If checked, you enable a host to call back to automation controller through the REST API and start a job from this job template. For more information, see Provisioning Callbacks.
    • Enable Webhook: If checked, you turn on the ability to interface with a predefined SCM system web service that is used to launch a job template. GitHub and GitLab are the supported SCM systems.

      • If you enable webhooks, other fields display, prompting for additional information:

        Job templates webhooks
      • Webhook Service: Select which service to listen for webhooks from.
      • Webhook URL: Automatically populated with the URL for the webhook service to POST requests to.
      • Webhook Key: Generated shared secret to be used by the webhook service to sign payloads sent to automation controller. You must configure this in the settings on the webhook service in order for automation controller to accept webhooks from this service.
      • Webhook Credential: Optionally, provide a GitHub or GitLab personal access token (PAT) as a credential to use to send status updates back to the webhook service. Before you can select it, the credential must exist. See Credential Types to create one.
      • For additional information about setting up webhooks, see Working with Webhooks.
    • Concurrent Jobs: If checked, you are allowing jobs in the queue to run simultaneously if not dependent on one another. Check this box if you want to run job slices simultaneously. For more information, see Automation controller capacity determination and job impact.
    • Enable Fact Storage: If checked, automation controller stores gathered facts for all hosts in an inventory related to the job running.
    • Prevent Instance Group Fallback: Check this option to allow only the instance groups listed in the Instance Groups field to run the job. If clear, all available instances in the execution pool are used based on the hierarchy described in Control where a job runs.
  4. Click Save, when you have completed configuring the details of the job template.

Saving the template does not exit the job template page but advances to the Job Template Details tab. After saving the template, you can click Launch to launch the job, or click Edit to add or change the attributes of the template, such as permissions and notifications, to view completed jobs, and to add a survey (if the job type is not a scan). You must save the template before launching; otherwise, Launch remains disabled.

Job template details

Verification

  1. From the navigation panel, select ResourcesTemplates.
  2. Verify that the newly created template appears on the Templates list view.

20.2. Adding permissions to templates

Use the following steps to add permissions for the team.

Procedure

  1. From the navigation panel, select ResourcesTemplates.
  2. Select a template, and in the Access tab, click Add.
  3. Select Users or Teams and click Next.
  4. Select one or more users or teams from the list by clicking the check boxes next to the names to add them as members and click Next.

    The following example shows two users have been selected to be added:

    Add users to example organization
  5. Choose the roles that you want users or teams to have. Ensure that you scroll down for a complete list of roles. Each resource has different options available.
  6. Click Save to apply the roles to the selected users or teams and to add them as members.

The window to add users and teams closes to display the updated roles assigned for each user and team:

Permissions tab roles assigned

To remove roles for a particular user, click the Disassociate icon next to its resource.

This launches a confirmation dialog, asking you to confirm the disassociation.

20.3. Deleting a job template

Before deleting a job template, ensure that it is not used in a workflow job template.

Procedure

  1. Delete a job template by using one of these methods:

    • Select the checkbox next to one or more job templates and click Delete.
    • Click the desired job template and click Delete, on the Details page.
Note

If deleting items that are used by other work items, a message opens listing the items that are affected by the deletion and prompts you to confirm the deletion. Some screens contain items that are invalid or previously deleted, and will fail to run. The following is an example of that message:

Deletion warning

20.4. Work with notifications

From the navigation panel, select AdministrationNotifications. This enables you to review any notification integrations you have set up and their statuses, if they have run.

Job template completed notifications

Use the toggles to enable or disable the notifications to use with your particular template. For more information, see Enable and Disable Notifications.

If no notifications have been set up, click Add to create a new notification. For more information on configuring various notification types and extended messaging, see Notification Types.

20.5. View completed jobs

The Jobs tab provides the list of jobs that have run from this template. Click the expand icon next to each job to view the following details:

  • Status
  • ID and name
  • Type of job
  • Time started and completed
  • Who started the job and which template, inventory, project, and credential were used.

You can filter the list of completed jobs using any of these criteria.

Completed jobs view

Sliced jobs that display on this list are labeled accordingly, with the number of sliced jobs that have run:

Sliced jobs shown

20.6. Scheduling job templates

Access the schedules for a particular job template from the Schedules tab.

Job templates schedules

Procedure

  • To schedule a job template, select the Schedules tab, and choose the appropriate method:

    • If schedules are already set up, review, edit, enable or disable your schedule preferences.
    • If schedules have not been set up, see Schedules for more information.

If you select Prompt on Launch for the Credentials field, and you create or edit scheduling information for your job template, a Prompt option displays on the Schedules form.

You cannot remove the default machine credential in the Prompt dialog without replacing it with another machine credential.

Note

To set extra_vars on schedules, you must select Prompt on Launch for Variables on the job template, or configure and enable a survey on the job template.

The answered survey questions then become extra_vars.

20.7. Surveys in job templates

Job types of Run or Check provide a way to set up surveys in the Job Template creation or editing screens. Surveys set extra variables for the playbook, similar to the way Prompt for Extra Variables does, but in a user-friendly question-and-answer way. Surveys also permit validation of user input. Select the Survey tab to create a survey.

Example

Surveys can be used for a number of situations. For example, operations want to give developers a "push to stage" button that they can run without advance knowledge of Ansible. When launched, this task could prompt for answers to questions such as "What tag should we release?".

Many types of questions can be asked, including multiple-choice questions.
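
If you prefer to define a survey through the API instead of the UI, the following is a minimal sketch of a survey definition as accepted at /api/v2/job_templates/<id>/survey_spec/; the question text and variable name are illustrative only:

{
  "name": "Release survey",
  "description": "Questions asked at launch",
  "spec": [
    {
      "question_name": "What tag should we release?",
      "question_description": "",
      "variable": "release_tag",
      "type": "text",
      "required": true,
      "default": ""
    }
  ]
}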

20.7.1. Creating a survey

Procedure

  1. From the Survey tab, click Add.
  2. A survey can consist of any number of questions. For each question, enter the following information:

    • Question: The question to ask the user.
    • Optional: Description: A description of what is being asked of the user.
    • Answer variable name: The Ansible variable name to store the user’s response in. This is the variable to be used by the playbook. Variable names cannot contain spaces.
    • Answer type: Choose from the following question types:

      • Text: A single line of text. You can set the minimum and maximum length (in characters) for this answer.
      • Textarea: A multi-line text field. You can set the minimum and maximum length (in characters) for this answer.
      • Password: Responses are treated as sensitive information, much like an actual password is treated. You can set the minimum and maximum length (in characters) for this answer.
      • Multiple Choice (single select): A list of options, of which only one can be selected at a time. Enter the options, one per line, in the Multiple Choice Options field.
      • Multiple Choice (multiple select): A list of options, any number of which can be selected at a time. Enter the options, one per line, in the Multiple Choice Options field.
      • Integer: An integer number. You can set the minimum and maximum value for this answer.
      • Float: A decimal number. You can set the minimum and maximum value for this answer.
    • Required: Whether or not an answer to this question is required from the user.
    • Minimum length and Maximum length: Specify if a certain length in the answer is required.
    • Default answer: The default answer to the question. This value is pre-filled in the interface and is used if the answer is not provided by the user.

      Job template survey
  3. Once you have entered the question information, click Save to add the question.

    The survey question displays in the Survey list. For any question, you can click the Edit (pencil) icon to edit it.

    Check the box next to each question and click Delete to delete the question, or use the toggle option in the menu bar to enable or disable the survey prompts.

    If you have more than one survey question, click Edit Order to rearrange the order of the questions by clicking and dragging on the grid icon.

    Rearrange survey
  4. To add more questions, click Add.

20.7.2. Optional survey questions

The Required setting on a survey question determines whether the answer is optional or not for the user interacting with it.

Optional survey variables can also be passed to the playbook in extra_vars.

  • If a non-text variable (input type) is marked as optional, and is not filled in, no survey extra_var is passed to the playbook.
  • If a text input or text area input is marked as optional, is not filled in, and has a minimum length > 0, no survey extra_var is passed to the playbook.
  • If a text input or text area input is marked as optional, is not filled in, and has a minimum length of 0, that survey extra_var is passed to the playbook, with the value set to an empty string ("").
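
For example, assuming an optional Text question with the variable name notes and a minimum length of 0 that is left blank at launch, the playbook receives:

# Optional text question left blank: the variable arrives as an empty string
notes: ""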

20.8. Launching a job template

A benefit of automation controller is the push-button deployment of Ansible playbooks. You can configure a template to store all the parameters that you would normally pass to the Ansible playbook on the command line. In addition to the playbooks, the template passes the inventory, credentials, extra variables, and all options and settings that you can specify on the command line.

Easier deployments drive consistency, by running your playbooks the same way each time, and allowing you to delegate responsibilities.

Procedure

  • Launch a job template by using one of these methods:

    • From the navigation panel, select ResourcesTemplates and click Launch next to the job template.

      Job template launch
    • In the job template Details view of the job template you want to launch, click Launch.

A job can require additional information to run. The following data can be requested at launch:

  • Credentials that were set up
  • The option Prompt on Launch is selected for any parameter
  • Passwords or passphrases that have been set to Ask
  • A survey, if one has been configured for the job template
  • Extra variables, if requested by the job template
Note

If a job has user-provided values, then those are respected upon relaunch. If the user did not specify a value, then the job uses the default value from the job template. Jobs are not relaunched as-is. They are relaunched with the user prompts re-applied to the job template.

If you provide values on one tab and then return to a previous tab, continuing to the next tab requires you to provide values on the rest of the tabs again. Ensure that you fill in the tabs in the order that the prompts appear.

When launching, automation controller automatically redirects the web browser to the Job Status page for this job under the Jobs tab.

You can re-launch the most recent job from the list view to re-run on all hosts or just failed hosts in the specified inventory. For more information, see the Jobs section.

When slice jobs are running, job lists display the workflow and job slices, as well as a link to view their details individually.

Note

You can launch jobs in bulk using the newly added endpoint in the API, /api/v2/bulk/job_launch. This endpoint accepts JSON and you can specify a list of unified job templates (such as job templates and project updates) to launch. The user must have the appropriate permission to launch all of the jobs. If not all of the jobs can be launched, an error is returned indicating why the operation could not complete. Use the OPTIONS request to return the relevant schema. For more information, see the Bulk endpoint of the Reference section of the Automation Controller API Guide.
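
As a minimal sketch, the POST body for a bulk launch can look like the following; the name and unified job template IDs are illustrative only:

{
  "name": "Bulk job launch example",
  "jobs": [
    {"unified_job_template": 7},
    {"unified_job_template": 10}
  ]
}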

20.9. Copying a job template

If you copy a job template, the copy does not include any associated schedules, notifications, or permissions. Schedules and notifications must be recreated by the user or administrator creating the copy of the job template. The user copying the job template is granted administrator permission, but no permissions are assigned (copied) to the job template.

Procedure

  1. From the navigation panel, select ResourcesTemplates.
  2. Click the Copy icon associated with the template that you want to copy.

    • The new template displays in the list of templates, with the name of the copied template and a timestamp.
  3. Click to open the new template and click Edit.
  4. Replace the contents of the Name field with a new name, and provide or modify the entries in the other fields to complete this page.
  5. Click Save.

20.10. Scan job templates

Scan jobs are no longer supported starting with automation controller 3.2. This system tracking feature was used as a way to capture and store facts as historical data. Facts are now stored in the controller through fact caching. For more information, see Fact Caching.

Job template scan jobs that existed in your system before automation controller 3.2 are converted to type run, like normal job templates. They retain their associated resources, such as inventories and credentials. By default, job template scan jobs that do not have a related project are assigned a special playbook. You can also specify a project with your own scan playbook. A project is created for each organization that points to awx-facts-playbooks, and the job template is set to the playbook: https://github.com/ansible/tower-fact-modules/blob/master/scan_facts.yml.

20.10.1. Fact scan playbooks

The scan job playbook, scan_facts.yml, contains invocations of three fact scan modules - packages, services, and files, along with Ansible’s standard fact gathering. The scan_facts.yml playbook file is similar to this:

- hosts: all
  vars:
    scan_use_checksum: false
    scan_use_recursive: false
  tasks:
    - scan_packages:
    - scan_services:
    - scan_files:
        paths: '{{ scan_file_paths }}'
        get_checksum: '{{ scan_use_checksum }}'
        recursive: '{{ scan_use_recursive }}'
      when: scan_file_paths is defined

The scan_files fact module is the only module that accepts parameters, passed through extra_vars on the scan job template:

scan_file_paths: /tmp/
scan_use_checksum: true
scan_use_recursive: true

  • The scan_file_paths parameter can have multiple settings (such as /tmp/ or /var/log).
  • The scan_use_checksum and scan_use_recursive parameters can also be set to false or omitted. An omission is the same as a false setting.

Scan job templates should enable become and use credentials for which become is a possibility. You can enable become by checking Privilege Escalation from the options list:

Job template become

20.10.2. Supported OSes for scan_facts.yml

If you use the scan_facts.yml playbook with fact caching enabled, ensure that you are using one of the following supported operating systems:

  • Red Hat Enterprise Linux 5, 6, 7, 8, and 9
  • Ubuntu 23.04 (Support for Ubuntu is deprecated and will be removed in a future release)
  • OEL 6 and 7
  • SLES 11 and 12
  • Debian 6, 7, 8, 9, 10, 11, and 12
  • Fedora 22, 23, and 24
  • Amazon Linux 2023.1.20230912

Some of these operating systems require initial configuration to run python or have access to the python packages, such as python-apt, which the scan modules depend on.

20.10.3. Pre-scan setup

The following are examples of playbooks that configure certain distributions so that scan jobs can be run against them:

Bootstrap Ubuntu (16.04)
---
- name: Get Ubuntu 16, and on ready
  hosts: all
  become: yes
  gather_facts: no
  tasks:
    - name: Update the apt cache
      raw: apt-get -y update
    - name: Install python-simplejson
      raw: apt-get -y install python-simplejson
    - name: Install python-apt
      raw: apt-get -y install python-apt

Bootstrap Fedora (23, 24)
---
- name: Get Fedora ready
  hosts: all
  become: yes
  gather_facts: no
  tasks:
    - name: Update packages
      raw: dnf -y update
    - name: Install python-simplejson
      raw: dnf -y install python-simplejson
    - name: Install rpm-python
      raw: dnf -y install rpm-python

20.10.4. Custom fact scans

A playbook for a custom fact scan is similar to the example in the Fact scan playbooks section. For example, a playbook that only uses a custom scan_foo Ansible fact module looks similar to this:

scan_foo.py:

#!/usr/bin/python
from ansible.module_utils.basic import AnsibleModule


def main():
    module = AnsibleModule(
        argument_spec=dict()
    )

    # Return a hard-coded list of facts under the "foo" key
    foo = [
        {
            "hello": "world"
        },
        {
            "foo": "bar"
        }
    ]
    results = dict(ansible_facts=dict(foo=foo))
    module.exit_json(**results)


if __name__ == '__main__':
    main()

To use a custom fact module, ensure that it lives in the /library/ subdirectory of the Ansible project used in the scan job template. This fact scan module returns a hard-coded set of facts:

[
   {
     "hello": "world"
   },
   {
     "foo": "bar"
   }
 ]

For more information, see the Developing modules section of the Ansible documentation.

20.10.5. Fact caching

Automation controller can store and retrieve facts on a per-host basis through an Ansible Fact Cache plugin. This behavior is configurable on a per-job-template basis. Fact caching is turned off by default but can be enabled to serve fact requests for all hosts in an inventory related to the job running. This enables you to use job templates with --limit while still having access to the entire inventory of host facts. A global timeout setting, which the plugin enforces per host, can be specified (in seconds) by going to Settings and selecting Job settings from the Jobs option:

Jobs fact cache

After launching a job that uses fact cache (use_fact_cache=True), each host’s ansible_facts are all stored by the controller in the job’s inventory.

The Ansible Fact Cache plugin that ships with automation controller is enabled on jobs with fact cache enabled (use_fact_cache=True).

When a job that has fact cache enabled (use_fact_cache=True) has run, automation controller restores all records for the hosts in the inventory. Any records with update times newer than the currently stored facts per-host are updated in the database.

New and changed facts are logged through automation controller’s logging facility, specifically to the system_tracking namespace or logger. The logging payload includes the following fields:

  • host_name
  • inventory_id
  • ansible_facts

ansible_facts is a dictionary of all Ansible facts for host_name in the automation controller inventory, inventory_id.
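
A minimal sketch of a single logging payload; all values shown are illustrative only:

{
  "host_name": "web1.example.com",
  "inventory_id": 5,
  "ansible_facts": {
    "ansible_distribution": "RedHat"
  }
}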

Note

If a hostname includes a forward slash (/), fact cache does not work for that host. If you have an inventory with 100 hosts and one host has a / in the name, the remaining 99 hosts still collect facts.

20.10.6. Benefits of fact caching

Fact caching saves you time over running fact gathering. If you have a playbook in a job that runs against a thousand hosts and forks, you can spend 10 minutes gathering facts across all of those hosts. However, if you run a job on a regular basis, the first run of it caches these facts and the next run pulls them from the database. This reduces the runtime of jobs against large inventories, including Smart Inventories.

Note

Do not modify the ansible.cfg file to apply fact caching. Custom fact caching could conflict with the controller’s fact caching feature. You must use the fact caching module that comes with automation controller.

You can choose to use cached facts in your job by enabling it in the Options field of the job templates window.

Cached facts

To clear facts, run the Ansible clear_facts meta task. The following is an example playbook that uses the Ansible clear_facts meta task.

- hosts: all
  gather_facts: false
  tasks:
    - name: Clear gathered facts from all currently targeted hosts
      meta: clear_facts

You can find the API endpoint for fact caching at:

http://<controller server name>/api/v2/hosts/x/ansible_facts

20.11. Use Cloud Credentials with a cloud inventory

Cloud Credentials can be used when syncing a cloud inventory. They can also be associated with a job template and included in the runtime environment for use by a playbook. The following Cloud Credentials are supported:

20.11.1. OpenStack

The following sample playbook invokes the nova_compute Ansible OpenStack cloud module and requires credentials:

  • auth_url
  • username
  • password
  • project name

These fields are made available to the playbook through the environmental variable OS_CLIENT_CONFIG_FILE, which points to a YAML file written by the controller based on the contents of the cloud credential. The following sample playbooks load the YAML file into the Ansible variable space:

  • OS_CLIENT_CONFIG_FILE example:
clouds:
  devstack:
    auth:
      auth_url: http://devstack.yoursite.com:5000/v2.0/
      username: admin
      password: your_password_here
      project_name: demo
  • Playbook example:
- hosts: all
  gather_facts: false
  vars:
    config_file: "{{ lookup('env', 'OS_CLIENT_CONFIG_FILE') }}"
    nova_tenant_name: demo
    nova_image_name: "cirros-0.3.2-x86_64-uec"
    nova_instance_name: autobot
    nova_instance_state: 'present'
    nova_flavor_name: m1.nano


    nova_group:
      group_name: antarctica
      instance_name: deceptacon
      instance_count: 3
  tasks:
    - debug: msg="{{ config_file }}"
    - stat: path="{{ config_file }}"
      register: st
    - include_vars: "{{ config_file }}"
      when: st.stat.exists and st.stat.isreg


    - name: "Print out clouds variable"
      debug: msg="{{ clouds|default('No clouds found') }}"


    - name: "Setting nova instance state to: {{ nova_instance_state }}"
      local_action:
        module: nova_compute
        login_username: "{{ clouds.devstack.auth.username }}"
        login_password: "{{ clouds.devstack.auth.password }}"

20.11.2. Amazon Web Services

Amazon Web Services (AWS) cloud credentials are exposed as the following environment variables during playbook execution (in the job template, choose the cloud credential needed for your setup):

  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY

Each AWS module implicitly uses these credentials when run through the controller without having to set the aws_access_key_id or aws_secret_access_key module options.
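
For example, the following is a minimal sketch of a task that relies on these implicit credentials; the module choice and region are illustrative only:

# With an AWS cloud credential attached to the job template, no explicit
# aws_access_key_id or aws_secret_access_key options are needed.
- name: Gather EC2 instance information
  amazon.aws.ec2_instance_info:
    region: us-east-1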

20.11.3. Google

Google cloud credentials are exposed as the following environment variables during playbook execution (in the job template, choose the cloud credential needed for your setup):

  • GCE_EMAIL
  • GCE_PROJECT
  • GCE_CREDENTIALS_FILE_PATH

Each Google module implicitly uses these credentials when run through the controller without having to set the service_account_email, project_id, or pem_file module options.

20.11.4. Azure

Azure cloud credentials are exposed as the following environment variables during playbook execution (in the job template, choose the cloud credential needed for your setup):

  • AZURE_SUBSCRIPTION_ID
  • AZURE_CERT_PATH

Each Azure module implicitly uses these credentials when run through the controller without having to set the subscription_id or management_cert_path module options.

20.11.5. VMware

VMware cloud credentials are exposed as the following environment variables during playbook execution (in the job template, choose the cloud credential needed for your setup):

  • VMWARE_USER
  • VMWARE_PASSWORD
  • VMWARE_HOST

The following sample playbook demonstrates the usage of these credentials:

- vsphere_guest:
    vcenter_hostname: "{{ lookup('env', 'VMWARE_HOST') }}"
    username: "{{ lookup('env', 'VMWARE_USER') }}"
    password: "{{ lookup('env', 'VMWARE_PASSWORD') }}"
    guest: newvm001
    from_template: yes
    template_src: linuxTemplate
    cluster: MainCluster
    resource_pool: "/Resources"
    vm_extra_config:
      folder: MyFolder

20.12. Provisioning Callbacks

Provisioning Callbacks are a feature of automation controller that enable a host to initiate a playbook run against itself, rather than waiting for a user to launch a job to manage the host from the automation controller console.

Provisioning Callbacks are only used to run playbooks on the calling host and are meant for cloud bursting. Cloud bursting is a cloud computing configuration that enables a private cloud to access public cloud resources by "bursting" into a public cloud when computing demand spikes.

Example

New instances that need client-to-server communication for configuration, such as transmitting an authorization key, rather than running a job against another host. This provides for automatically configuring the following:

  • A system after it has been provisioned by another system (such as AWS auto-scaling, or an OS provisioning system like kickstart or preseed).
  • Launching a job programmatically without invoking the automation controller API directly.

The job template launched only runs against the host requesting the provisioning.

This is often accessed with a firstboot type script or from cron.

20.12.1. Enabling Provisioning Callbacks

Procedure

  • To enable callbacks, check the Provisioning Callbacks checkbox in the job template. This displays Provisioning Callback URL for the job template.

    Note

    If you intend to use automation controller’s provisioning callback feature with a dynamic inventory, set Update on Launch for the inventory group used in the job template.

    Provisioning Callback details

Callbacks also require a Host Config Key, to ensure that foreign hosts with the URL cannot request configuration. Provide a custom value for Host Config Key. The host key can be reused across multiple hosts to apply this job template against multiple hosts. If you want to control which hosts can request configuration, you can change the key at any time.

To callback manually using REST:

Procedure

  1. Look at the callback URL in the UI, in the form: https://<CONTROLLER_SERVER_NAME>/api/v2/job_templates/7/callback/

    • The "7" in the sample URL is the job template ID in automation controller.
  2. Ensure that the request from the host is a POST. The following is an example using curl (all on a single line):

    curl -k -f -i -H 'Content-Type:application/json' -XPOST -d '{"host_config_key": "redhat"}' \
                      https://<CONTROLLER_SERVER_NAME>/api/v2/job_templates/7/callback/
  3. Ensure that the requesting host is defined in your inventory for the callback to succeed.

Troubleshooting

If automation controller fails to locate the host either by name or IP address in one of your defined inventories, the request is denied. When running a job template in this way, ensure that the host initiating the playbook run against itself is in the inventory. If the host is missing from the inventory, the job template fails with a No Hosts Matched type error message.

If your host is not in the inventory and Update on Launch is set for the inventory group, automation controller attempts to update cloud-based inventory sources before running the callback.

Verification

Successful requests result in an entry on the Jobs tab, where you can view the results and history. You can access the callback using REST, but the suggested method of using the callback is to use one of the example scripts that ships with automation controller:

  • /usr/share/awx/request_tower_configuration.sh (Linux/UNIX)
  • /usr/share/awx/request_tower_configuration.ps1 (Windows)

Their usage is described in the source code of each file; pass the -h flag to display usage information, as the following shows:

./request_tower_configuration.sh -h
Usage: ./request_tower_configuration.sh <options>


Request server configuration from Ansible Tower.


OPTIONS:
 -h      Show this message
 -s      Controller server (e.g. https://ac.example.com) (required)
 -k      Allow insecure SSL connections and transfers
 -c      Host config key (required)
 -t      Job template ID (required)
 -e      Extra variables

This script can retry commands and is therefore a more robust way to use callbacks than a simple curl request. The script retries once per minute for up to ten minutes.

Note

This is an example script. Edit this script if you need more dynamic behavior when detecting failure scenarios, as any non-200 error code may not be a transient error requiring retry.

You can use callbacks with dynamic inventory in automation controller. For example, when pulling cloud inventory from one of the supported cloud providers. In these cases, along with setting Update On Launch, ensure that you configure an inventory cache timeout for the inventory source, to avoid hammering of your cloud’s API endpoints. Since the request_tower_configuration.sh script polls once per minute for up to ten minutes, a suggested cache invalidation time for inventory (configured on the inventory source itself) would be one or two minutes.

Running the request_tower_configuration.sh script from a cron job is not recommended; however, if you do, a suggested cron interval is every 30 minutes. Repeated configuration can be handled by scheduling automation controller, so the primary use of callbacks by most users is to enable a base image that is bootstrapped into the latest configuration when coming online. Running at first boot is best practice. First boot scripts are init scripts that typically self-delete, so you set up an init script that calls a copy of the request_tower_configuration.sh script and make that into an auto scaling image.
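
As one hypothetical illustration, a cloud-init user-data file could invoke the callback script at first boot; the controller URL, host config key, and job template ID below are placeholders:

#cloud-config
# First-boot invocation of the provisioning callback script.
runcmd:
  - /usr/share/awx/request_tower_configuration.sh -s https://controller.example.com -c HOST_CONFIG_KEY -t 7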

20.12.2. Passing extra variables to Provisioning Callbacks

You can pass extra_vars in Provisioning Callbacks the same way you can in a regular job template. To pass extra_vars, the data sent must be part of the body of the POST, with application/json as the content type.

Procedure

  • Pass extra variables by using one of these methods:

    • Use the following JSON format as an example when adding your own extra_vars to be passed:

      '{"extra_vars": {"variable1":"value1","variable2":"value2",...}}'
    • Pass extra variables to the job template call using curl:

      root@localhost:~$ curl -f -H 'Content-Type: application/json' -XPOST \
      -d '{"host_config_key": "redhat", "extra_vars": "{\"foo\": \"bar\"}"}' \
      https://<CONTROLLER_SERVER_NAME>/api/v2/job_templates/7/callback

For more information, see Launching Jobs with Curl in the Automation controller Administration Guide.

20.13. Extra variables

When you pass survey variables, they are passed as extra variables (extra_vars) within automation controller. However, passing extra variables to a job template (as you would do with a survey) can override other variables being passed from the inventory and project.

By default, extra_vars are marked as !unsafe unless you specify them on the Job Template’s Extra Variables section. These are trusted, because they can only be added by users with enough privileges to add or edit a Job Template. For example, nested variables do not expand when entered as a prompt, as the Jinja brackets are treated as a string. For more information about unsafe variables, see Unsafe or raw strings.

Note

extra_vars passed to the job launch API are only honored if one of the following is true:

  • They correspond to variables in an enabled survey.
  • ask_variables_on_launch is set to True.
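
For instance, assuming ask_variables_on_launch is enabled on the template, a POST to the job launch endpoint (/api/v2/job_templates/<id>/launch/) can carry extra_vars in its body; the variable shown is illustrative only:

{"extra_vars": {"debug": true}}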

Example

You have a defined variable for an inventory for debug = true. It is possible that this variable, debug = true, can be overridden in a job template survey.

To ensure the variables that you pass are not overridden, ensure they are included by redefining them in the survey. Extra variables can be defined at the inventory, group, and host levels.

If you are specifying the ALLOW_JINJA_IN_EXTRA_VARS parameter, see the Controller Tips and Tricks section of the Automation controller Administration Guide to configure it in the Jobs Settings screen of the controller UI.

The job template extra variables dictionary is merged with the survey variables.

The following are some simplified examples of extra_vars in YAML and JSON formats:

  • The configuration in YAML format:
launch_to_orbit: true
satellites:
  - sputnik
  - explorer
  - satcom
  • The configuration in JSON format:
{
  "launch_to_orbit": true,
  "satellites": ["sputnik", "explorer", "satcom"]
}

The following table notes the behavior (hierarchy) of variable precedence in automation controller as it compares to variable precedence in Ansible.

Table 20.1. Automation controller Variable Precedence Hierarchy (last listed wins)
Ansible                      | automation controller
-----------------------------|------------------------------------------
role defaults                | role defaults
dynamic inventory variables  | dynamic inventory variables
inventory variables          | automation controller inventory variables
inventory group_vars         | automation controller group variables
inventory host_vars          | automation controller host variables
playbook group_vars          | playbook group_vars
playbook host_vars           | playbook host_vars
host facts                   | host facts
registered variables         | registered variables
set facts                    | set facts
play variables               | play variables
play vars_prompt             | (not supported)
play vars_files              | play vars_files
role and include variables   | role and include variables
block variables              | block variables
task variables               | task variables
extra variables              | Job Template extra variables
                             | Job Template Survey (defaults)
                             | Job Launch extra variables

20.13.1. Relaunch a job template

A relaunch, as opposed to a manual launch, is denoted by setting launch_type to relaunch. The relaunch behavior deviates from the launch behavior in that it does not inherit extra_vars.

Job relaunching does not go through the inherit logic. It uses the same extra_vars that were calculated for the job being relaunched.

Example

You launch a job template with no extra_vars, which results in the creation of a job called j1. Then you edit the job template and add extra_vars (such as { "hello": "world" }).

Relaunching j1 results in the creation of j2, but because there is no inherit logic and j1 has no extra_vars, j2 does not have any extra_vars.

If you launch the job template again after adding the extra_vars, the new job created (j3) includes the extra_vars. Relaunching j3 results in the creation of j4, which also includes the extra_vars.

Chapter 21. Job slicing

A sliced job refers to the concept of a distributed job. Distributed jobs are used for running a job across a large number of hosts, enabling you to run multiple ansible-playbooks, each on a subset of an inventory, that can be scheduled in parallel across a cluster.

By default, Ansible runs jobs from a single control instance. For jobs that do not require cross-host orchestration, job slicing takes advantage of automation controller’s ability to distribute work to multiple nodes in a cluster.

Job slicing works by adding a Job Template field job_slice_count, which specifies the number of jobs into which to slice the Ansible run. When this number is greater than 1, automation controller generates a workflow from a job template instead of a job. The inventory is distributed evenly amongst the slice jobs. The workflow job is then started, and proceeds as though it were a normal workflow.
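
For example, a minimal sketch of setting this field through the API (the template ID in the URL is illustrative) is a PATCH request to /api/v2/job_templates/<id>/ with a body such as:

{"job_slice_count": 3}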

When launching a job, the API returns either a job resource (if job_slice_count = 1) or a workflow job resource. The corresponding User Interface (UI) redirects to the appropriate screen to display the status of the run.

21.1. Job slice considerations

When setting up job slices, consider the following:

  • A sliced job creates a workflow job, which then creates jobs.
  • A job slice consists of a job template, an inventory, and a slice count.
  • When executed, a sliced job splits each inventory into a number of "slice size" chunks. It then queues jobs of ansible-playbook runs on each chunk of the appropriate inventory. The inventory fed into ansible-playbook is a shortened version of the original inventory that only contains the hosts in that particular slice. The completed sliced jobs that display on the Jobs list are labeled accordingly, with the number of sliced jobs that have run:

    Sliced jobs list view
  • These sliced jobs follow normal scheduling behavior (number of forks, queuing due to capacity, assignment to instance groups based on inventory mapping).

    Note

    Job slicing is intended to scale job executions horizontally. Enabling job slicing on a job template divides an inventory to be acted upon in the number of slices configured at launch time and then starts a job for each slice.

    Normally, the number of slices is equal to or less than the number of automation controller nodes. Setting an extremely high number of job slices, such as thousands, while permitted, can cause performance degradation as the job scheduler is not designed to simultaneously schedule thousands of workflow nodes, which are what the sliced jobs become.

    • Sliced job templates with prompts or extra variables behave the same as standard job templates, applying all variables and limits to the entire set of slice jobs in the resulting workflow job. However, when passing a limit to a sliced job, if the limit causes slices to have no hosts assigned, those slices will fail, causing the overall job to fail.
    • The job status of a sliced (distributed) job is calculated in the same manner as that of a workflow job. It fails if there are any unhandled failures in its sub-jobs.
  • Any job that intends to orchestrate across hosts (rather than just applying changes to individual hosts) must not be configured as a slice job.
  • Any such job that is sliced can fail, and automation controller does not attempt to discover or account for playbooks that fail when run as slice jobs.
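
To illustrate the chunking mentioned in the list above, the following standalone Python sketch distributes a host list evenly into a given number of slices. It mirrors the idea of slicing, not the controller's internal algorithm.

    # Split an inventory's hosts into N near-equal slices.
    def slice_hosts(hosts, slice_count):
        return [hosts[i::slice_count] for i in range(slice_count)]

    hosts = [f"host{n}" for n in range(1, 11)]   # 10 example hosts
    for i, chunk in enumerate(slice_hosts(hosts, 3), start=1):
        print(f"slice {i}: {chunk}")   # slices of 4, 3, and 3 hosts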

21.2. Job slice execution behavior

When jobs are sliced, they can run on any node. Insufficient capacity in the system can cause some to run at a different time. When slice jobs are running, job details display the workflow and job slices currently running, as well as a link to view their details individually.

Sliced jobs output view

By default, job templates are not configured to execute simultaneously (allow_simultaneous must be checked in the API, or Enable Concurrent Jobs selected in the UI). Slicing overrides this behavior and implies allow_simultaneous even if that setting is clear. See Job templates for information about how to specify this, as well as the number of job slices, in your job template configuration.
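
Outside the UI, you can set the slice count with a PATCH request. The following is a minimal sketch with a placeholder host, token, and template ID; job_slice_count is the field named earlier in this chapter.

    import requests

    API = "https://controller.example.com/api/v2"   # placeholder host
    HEADERS = {"Authorization": "Bearer <token>"}   # placeholder token

    # Set the slice count on job template 42. Later launches create a
    # workflow job whose nodes are the individual slice jobs; slicing
    # implies simultaneous execution even if allow_simultaneous is false.
    requests.patch(f"{API}/job_templates/42/",
                   headers=HEADERS,
                   json={"job_slice_count": 3})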

The Job templates section provides additional detail on performing the following operations in the UI:

  • Launch workflow jobs with a job template that has a slice number greater than one.
  • Cancel the whole workflow or individual jobs after launching a slice job template.
  • Relaunch the whole workflow or individual jobs after slice jobs finish running.
  • View the details about the workflow and slice jobs after launching a job template.
  • Search slice jobs specifically after you create them, as described in Searching job slices.

21.3. Searching job slices

To make it easier to find slice jobs, use the search functionality to apply a search filter to:

  • Job lists to show only slice jobs
  • Job lists to show only parent workflow jobs of job slices
  • Job template lists to only show job templates that produce slice jobs

Procedure

  • Search for slice jobs by using one of the following methods:

    • To show only slice jobs in job lists, filter on the jobs endpoint (shown here) or on unified_jobs:

      /api/v2/jobs/?job_slice_count__gt=1
    • To show only parent workflow jobs of job slices:

      /api/v2/workflow_jobs/?job_template__isnull=false
    • To show only job templates that produce slice jobs:

      /api/v2/job_templates/?job_slice_count__gt=1
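
The same filters can be applied programmatically. A minimal Python sketch, with a placeholder host and token:

    import requests

    API = "https://controller.example.com/api/v2"   # placeholder host
    HEADERS = {"Authorization": "Bearer <token>"}   # placeholder token

    filters = {
        "slice jobs":            f"{API}/jobs/?job_slice_count__gt=1",
        "parent workflow jobs":  f"{API}/workflow_jobs/?job_template__isnull=false",
        "slicing job templates": f"{API}/job_templates/?job_slice_count__gt=1",
    }

    # Each list endpoint reports the number of matching results in "count".
    for label, url in filters.items():
        print(label, requests.get(url, headers=HEADERS).json()["count"])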

Chapter 22. Workflows in automation controller

Workflows enable you to configure a sequence of disparate job templates (or workflow templates) that may or may not share inventory, playbooks, or permissions.

Workflows have admin and execute permissions, similar to job templates. A workflow accomplishes the task of tracking the full set of jobs that were part of the release process as a single unit.

Job or workflow templates are linked together in a graph-like structure made up of nodes. These nodes can be jobs, project syncs, or inventory syncs. A template can be part of different workflows or used multiple times in the same workflow. A copy of the graph structure is saved to a workflow job when you launch the workflow.

The following example shows a workflow that contains all three, as well as a workflow job template:

Node in workflow

As the workflow runs, jobs are spawned from each node’s linked template. Nodes that link to a job template with prompt-driven fields (job_type, job_tags, skip_tags, limit) can contain those fields and are not prompted on launch. Job templates that prompt for a credential or inventory, without defaults, are not available for inclusion in a workflow.
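
Workflow graphs can also be assembled through the API. The following is a minimal sketch that creates two nodes in a workflow job template and links them with a success edge. The host, token, workflow job template ID 7, and job template IDs 10 and 11 are placeholders, and the endpoints follow the controller's association pattern; verify them against your API version.

    import requests

    API = "https://controller.example.com/api/v2"   # placeholder host
    HEADERS = {"Authorization": "Bearer <token>"}   # placeholder token

    def add_node(wfjt_id, template_id):
        # Create a workflow node backed by the given job template.
        resp = requests.post(
            f"{API}/workflow_job_templates/{wfjt_id}/workflow_nodes/",
            headers=HEADERS,
            json={"unified_job_template": template_id})
        return resp.json()["id"]

    root = add_node(7, 10)
    child = add_node(7, 11)

    # Run the child only when the root node succeeds (a success edge).
    requests.post(f"{API}/workflow_job_template_nodes/{root}/success_nodes/",
                  headers=HEADERS,
                  json={"id": child})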

22.1. Workflow scenarios and considerations

When building workflows, consider the following:

  • A root node is set to ALWAYS by default and cannot be edited.
Node always
  • A node can have multiple parents, and children can be linked to any of the states of success, failure, or always. If the link is always, the child runs regardless of the parent's success or failure. States apply at the node level, not at the workflow job template level. A workflow job is marked as successful unless it is canceled or encounters an error.
Sibling nodes all edge types
  • If you remove a job or workflow template from the workflow, the nodes that were previously connected to the deleted one are automatically connected upstream and retain their edge type, as in the following example:
Node delete scenario
  • You can have a convergent workflow, where multiple jobs converge into one. In this scenario, any or all of the jobs must complete before the next one runs, as shown in the following example:

    Node convergence
    • In this example, automation controller runs the first two job templates in parallel. When they both finish and succeed as specified, the third, downstream job (the convergence node) triggers.
  • Prompts for inventory and surveys apply to workflow nodes in workflow job templates.
  • If you launch from the API, running a GET command displays a list of warnings and highlights missing components. The following image illustrates a basic workflow for a workflow job template:
Workflow diagram
  • It is possible to launch several workflows simultaneously, and set a schedule for when to launch them. You can set notifications on workflows, such as when a job completes, similar to that of job templates.
Note

Job slicing is intended to scale job executions horizontally.

If you enable job slicing on a job template, it divides the inventory to be acted on into the number of slices configured at launch time, and then starts a job for each slice.

For more information, see the Job slicing section.

  • You can build a recursive workflow, but if automation controller detects an error, it stops at the time the nested workflow attempts to run.
  • Artifacts gathered in jobs in the sub-workflow are passed to downstream nodes.
  • An inventory can be set at the workflow level, or prompt for inventory on launch.
  • When launched, all job templates in the workflow that have ask_inventory_on_launch=true use the workflow level inventory.
  • Job templates that do not prompt for inventory ignore the workflow inventory and run against their own inventory.
  • If a workflow prompts for inventory, schedules and other workflow nodes can provide the inventory.
  • In a workflow convergence scenario, set_stats data is merged in an undefined way, so you must set unique keys (see the sketch after this list).
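
To see why unique keys matter, the following standalone Python sketch imitates two converging branches whose set_stats artifacts share a key; which value survives depends on merge order, which the controller does not define.

    # Two branches publish artifacts; "result" collides across branches.
    branch_a = {"result": "a", "branch_a_result": "a"}
    branch_b = {"result": "b", "branch_b_result": "b"}

    # The colliding "result" key keeps whichever branch merges last;
    # the namespaced keys survive unambiguously.
    merged = {**branch_a, **branch_b}
    print(merged)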

22.2. Workflow extra variables

Workflows use surveys to specify variables to be used in the playbooks in the workflow, called extra_vars. Survey variables are combined with extra_vars defined on the workflow job template, and saved to the workflow job extra_vars. extra_vars in the workflow job are combined w